WorldWideScience

Sample records for surprisingly large number

  1. Some Surprising Introductory Physics Facts and Numbers

    Science.gov (United States)

    Mallmann, A. James

    2016-01-01

    In the entertainment world, people usually like, and find memorable, novels, short stories, and movies with surprise endings. This suggests that classroom teachers might want to present to their students examples of surprising facts associated with principles of physics. Possible benefits of finding surprising facts about principles of physics are…

  2. Large number discrimination by mosquitofish.

    Directory of Open Access Journals (Sweden)

    Christian Agrillo

    Full Text Available BACKGROUND: Recent studies have demonstrated that fish display rudimentary numerical abilities similar to those observed in mammals and birds. The mechanisms underlying the discrimination of small quantities (<4) were recently investigated while, to date, no study has examined the discrimination of large numerosities in fish. METHODOLOGY/PRINCIPAL FINDINGS: Subjects were trained to discriminate between two sets of small geometric figures using social reinforcement. In the first experiment mosquitofish were required to discriminate 4 from 8 objects with or without experimental control of the continuous variables that co-vary with number (area, space, density, total luminance). Results showed that fish can use numerical information alone to compare quantities, but that they preferentially use cumulative surface area as a proxy for number when this information is available. A second experiment investigated the influence of the total number of elements on the discrimination of large quantities. Fish proved able to discriminate up to 100 vs. 200 objects, without showing any significant decrease in accuracy compared with the 4 vs. 8 discrimination. The third experiment investigated the influence of the ratio between the numerosities. Performance was found to decrease as the numerical distance decreased. Fish were able to discriminate numbers when the ratios were 1:2 or 2:3, but not when the ratio was 3:4. The performance of a sample of undergraduate students, tested non-verbally using the same sets of stimuli, largely overlapped that of the fish. CONCLUSIONS/SIGNIFICANCE: Fish are able to use pure numerical information when discriminating between quantities larger than 4 units. As observed in human and non-human primates, the numerical system of fish appears to have virtually no upper limit, while the numerical ratio has a clear effect on performance. These similarities further reinforce the view of a common origin of non-verbal numerical systems in all…

  3. Forecasting distribution of numbers of large fires

    Science.gov (United States)

    Haiganoush K. Preisler; Jeff Eidenshink; Stephen Howard; Robert E. Burgan

    2015-01-01

    Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however, they indicate neither the chance that a large fire will occur nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the...

  4. Thermal convection for large Prandtl numbers

    NARCIS (Netherlands)

    Grossmann, Siegfried; Lohse, Detlef

    2001-01-01

    The Rayleigh-Bénard theory by Grossmann and Lohse [J. Fluid Mech. 407, 27 (2000)] is extended towards very large Prandtl numbers Pr. The Nusselt number Nu is found here to be independent of Pr. However, for fixed Rayleigh numbers Ra a maximum in the Nu(Pr) dependence is predicted. We moreover offer…

  5. Ontological Surprises

    DEFF Research Database (Denmark)

    Leahu, Lucian

    2016-01-01

    This paper investigates how we might rethink design as the technological crafting of human-machine relations in the context of a machine learning technique called neural networks. It analyzes Google’s Inceptionism project, which uses neural networks for image recognition. The surprising output… a hybrid approach where machine learning algorithms are used to identify objects as well as connections between them; finally, it argues for remaining open to ontological surprises in machine learning as they may enable the crafting of different relations with and through technologies.

  6. Surprise Trips

    DEFF Research Database (Denmark)

    Korn, Matthias; Kawash, Raghid; Andersen, Lisbet Møller

    2010-01-01

    We report on a platform that augments the natural experience of exploration in diverse indoor and outdoor environments. The system builds on the theme of surprises in terms of user expectations and finding points of interest. It utilizes physical icons as representations of users' interests and as notification tokens to alert users when they are within proximity of a surprise. To evaluate the concept, we developed mock-ups and a video prototype, and conducted a wizard-of-oz user test for a national park in Denmark.

  7. Large number discrimination in newborn fish.

    Directory of Open Access Journals (Sweden)

    Laura Piffer

    Full Text Available Quantitative abilities have been reported in a wide range of species, including fish. Recent studies have shown that adult guppies (Poecilia reticulata) can spontaneously select the larger number of conspecifics. In particular, the evidence collected in the literature suggests the existence of two distinct systems of number representation: a precise system up to 4 units, and an approximate system for larger numbers. Spontaneous numerical abilities, however, seem to be limited to 4 units at birth, and it is currently unclear whether or not the large number system is absent during the first days of life. In the present study, we investigated whether newborn guppies can be trained to discriminate between large quantities. Subjects were required to discriminate between groups of dots with a 0.50 ratio (e.g., 7 vs. 14) in order to obtain a food reward. To dissociate the roles of number and the continuous quantities that co-vary with numerical information (such as cumulative surface area, space and density), three different experiments were set up: in Exp. 1, number and continuous quantities were simultaneously available; in Exp. 2, we controlled for continuous quantities and only numerical information was available; in Exp. 3, numerical information was made irrelevant and only continuous quantities were available. Subjects successfully solved the tasks in Exp. 1 and 2, providing the first evidence of large number discrimination in newborn fish. No discrimination was found in Exp. 3, meaning that number acuity is better than spatial acuity. A comparison with the onset of numerical abilities observed in shoal-choice tests suggests that training procedures can promote the development of numerical abilities in guppies.

  8. Large numbers hypothesis. II - Electromagnetic radiation

    Science.gov (United States)

    Adams, P. J.

    1983-01-01

    This paper develops the theory of electromagnetic radiation in the units-covariant formalism incorporating Dirac's large numbers hypothesis (LNH). A direct field-to-particle technique is used to obtain the photon propagation equation, which explicitly involves the photon replication rate. This replication rate is fixed uniquely by requiring that the form of a free-photon distribution function be preserved, as required by the 2.7 K cosmic radiation. One finds that with this particular photon replication rate the units-covariant formalism developed in Paper I actually predicts that the ratio of photon number to proton number in the universe varies as t^(1/4), precisely in accord with LNH. The cosmological red-shift law is also derived, and it is shown to differ considerably from the standard form νR = const.

  9. Charming surprise

    CERN Multimedia

    Antonella Del Rosso

    2011-01-01

    The CP violation in charm quarks has always been thought to be extremely small. So, looking at particle decays involving matter and antimatter, the LHCb experiment has recently been surprised to observe that things might be different. Theorists are on the case. The study of the physics of the charm quark was not in the initial plans of the LHCb experiment, whose letter “b” stands for “beauty quark”. However, already one year ago, the Collaboration decided to look into a wider spectrum of processes that involve charm quarks among other things. The LHCb trigger allows a lot of these processes to be selected, and, among them, one has recently shown interesting features. Other experiments at b-factories have already performed the same measurement but this is the first time that it has been possible to achieve such high precision, thanks to the huge amount of data provided by the very high luminosity of the LHC. “We have observed the decay modes of the D0, a pa...

  11. Forecasting distribution of numbers of large fires

    Science.gov (United States)

    Eidenshink, Jeffery C.; Preisler, Haiganoush K.; Howard, Stephen; Burgan, Robert E.

    2014-01-01

    Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however, they indicate neither the chance that a large fire will occur nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the Monitoring Trends in Burn Severity project, and satellite and surface observations of fuel conditions in the form of the Fire Potential Index, to estimate two aspects of fire danger: 1) the probability that a 1 acre ignition will result in a 100+ acre fire, and 2) the probabilities of having at least 1, 2, 3, or 4 large fires within a Predictive Services Area in the forthcoming week. These statistical processes are the main thrust of the paper and are used to produce two daily national forecasts that are available from the U.S. Geological Survey, Earth Resources Observation and Science Center and via the Wildland Fire Assessment System. A validation study of our forecasts for the 2013 fire season demonstrated good agreement between observed and forecasted values.
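
    The second forecast quantity in this record, the probability of at least 1, 2, 3, or 4 large fires in a week, can be sketched with a toy binomial calculation. The numbers below are purely illustrative; the paper estimates its probabilities statistically from MTBS occurrence data and the Fire Potential Index, not from an independence assumption:

```python
from math import comb

def prob_at_least_k(n_ignitions, p_large, k):
    """P(at least k of n independent ignitions grow into 100+ acre fires),
    assuming each ignition becomes large with probability p_large
    (a simplification; the paper's probabilities are model-fitted)."""
    p_fewer = sum(
        comb(n_ignitions, i) * p_large**i * (1 - p_large)**(n_ignitions - i)
        for i in range(k)
    )
    return 1 - p_fewer

# Hypothetical week: 30 ignitions, each with a 5% chance of growing large.
for k in (1, 2, 3, 4):
    print(k, round(prob_at_least_k(30, 0.05, k), 3))
```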

  12. Modified large number theory with constant G

    International Nuclear Information System (INIS)

    Recami, E.

    1983-01-01

    The inspiring ''numerology'' uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the ''gravitational world'' (cosmos) with the ''strong world'' (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the ''Large Number Theory,'' cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R-bar/r-bar of the cosmos typical length R-bar to the hadron typical length r-bar is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the ''cyclical big-bang'' hypothesis, then R-bar and r-bar can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavsic.

  13. Hierarchies in Quantum Gravity: Large Numbers, Small Numbers, and Axions

    Science.gov (United States)

    Stout, John Eldon

    Our knowledge of the physical world is mediated by relatively simple, effective descriptions of complex processes. By their very nature, these effective theories obscure any phenomena outside their finite range of validity, discarding information crucial to understanding the full, quantum gravitational theory. However, we may gain enormous insight into the full theory by understanding how effective theories with extreme characteristics--for example, those which realize large-field inflation or have disparate hierarchies of scales--can be naturally realized in consistent theories of quantum gravity. The work in this dissertation focuses on understanding the quantum gravitational constraints on these "extreme" theories in well-controlled corners of string theory. Axion monodromy provides one mechanism for realizing large-field inflation in quantum gravity. These models spontaneously break an axion's discrete shift symmetry and, assuming that the corrections induced by this breaking remain small throughout the excursion, create a long, quasi-flat direction in field space. This weakly-broken shift symmetry has been used to construct a dynamical solution to the Higgs hierarchy problem, dubbed the "relaxion." We study this relaxion mechanism and show that--without major modifications--it can not be naturally embedded within string theory. In particular, we find corrections to the relaxion potential--due to the ten-dimensional backreaction of monodromy charge--that conflict with naive notions of technical naturalness and render the mechanism ineffective. The super-Planckian field displacements necessary for large-field inflation may also be realized via the collective motion of many aligned axions. However, it is not clear that string theory provides the structures necessary for this to occur. We search for these structures by explicitly constructing the leading order potential for C4 axions and computing the maximum possible field displacement in all compactifications of…

  14. Surprise, Recipes for Surprise, and Social Influence.

    Science.gov (United States)

    Loewenstein, Jeffrey

    2018-02-07

    Surprising people can provide an opening for influencing them. Surprises garner attention, are arousing, are memorable, and can prompt shifts in understanding. Less noted is that, as a result, surprises can serve to persuade others by leading them to shifts in attitudes. Furthermore, because stories, pictures, and music can generate surprises and those can be widely shared, surprise can have broad social influence. People also tend to share surprising items with others, as anyone on social media has discovered. This means that in addition to broadcasting surprising information, surprising items can also spread through networks. The joint result is that surprise not only has individual effects on beliefs and attitudes but also collective effects on the content of culture. Items that generate surprise need not be random or accidental. There are predictable methods or recipes for generating surprise. One such recipe is discussed, the repetition-break plot structure, to explore the psychological and social possibilities of examining surprise. Recipes for surprise offer a useful means for understanding how surprise works and offer prospects for harnessing surprise to a wide array of ends. Copyright © 2017 Cognitive Science Society, Inc.

  15. New feature for an old large number

    International Nuclear Information System (INIS)

    Novello, M.; Oliveira, L.R.A.

    1986-01-01

    A new context for the appearance of the Eddington number (10^39), which is due to the examination of the elastic scattering of scalar particles (πK → πK) non-minimally coupled to gravity, is presented. (author) [pt]

  16. Thermocapillary Bubble Migration: Thermal Boundary Layers for Large Marangoni Numbers

    Science.gov (United States)

    Balasubramaniam, R.; Subramanian, R. S.

    1996-01-01

    The migration of an isolated gas bubble in an immiscible liquid possessing a temperature gradient is analyzed in the absence of gravity. The driving force for the bubble motion is the shear stress at the interface which is a consequence of the temperature dependence of the surface tension. The analysis is performed under conditions for which the Marangoni number is large, i.e. energy is transferred predominantly by convection. Velocity fields in the limit of both small and large Reynolds numbers are used. The thermal problem is treated by standard boundary layer theory. The outer temperature field is obtained in the vicinity of the bubble. A similarity solution is obtained for the inner temperature field. For both small and large Reynolds numbers, the asymptotic values of the scaled migration velocity of the bubble in the limit of large Marangoni numbers are calculated. The results show that the migration velocity has the same scaling for both low and large Reynolds numbers, but with a different coefficient. Higher order thermal boundary layers are analyzed for the large Reynolds number flow field and the higher order corrections to the migration velocity are obtained. Results are also presented for the momentum boundary layer and the thermal wake behind the bubble, for large Reynolds number conditions.

  17. On a strong law of large numbers for monotone measures

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mohammadpour, A.; Mesiar, Radko; Ouyang, Y.

    2013-01-01

    Vol. 83, No. 4 (2013), pp. 1213-1218. ISSN 0167-7152. R&D Projects: GA ČR GAP402/11/0378. Institutional support: RVO:67985556. Keywords: capacity; Choquet integral; strong law of large numbers. Subject RIV: BA - General Mathematics. Impact factor: 0.531 (2013). http://library.utia.cas.cz/separaty/2013/E/mesiar-on a strong law of large numbers for monotone measures.pdf

  18. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena

  19. The large numbers hypothesis and a relativistic theory of gravitation

    International Nuclear Information System (INIS)

    Lau, Y.K.; Prokhovnik, S.J.

    1986-01-01

    A way to reconcile Dirac's large numbers hypothesis and Einstein's theory of gravitation was recently suggested by Lau (1985). It is characterized by the conjecture of a time-dependent cosmological term and gravitational term in Einstein's field equations. Motivated by this conjecture and the large numbers hypothesis, we formulate here a scalar-tensor theory in terms of an action principle. The cosmological term is required to be spatially dependent as well as time dependent in general. The theory developed is applied to a cosmological model compatible with the large numbers hypothesis. The time-dependent forms of the cosmological term and the scalar potential are then deduced. A possible explanation of the smallness of the cosmological term is also given, and the possible significance of the scalar field is speculated upon.

  20. Fatal crashes involving large numbers of vehicles and weather.

    Science.gov (United States)

    Wang, Ying; Liang, Liming; Evans, Leonard

    2017-12-01

    Adverse weather has been recognized as a significant threat to traffic safety. However, relationships between fatal crashes involving large numbers of vehicles and weather have rarely been studied, owing to the low occurrence of crashes involving large numbers of vehicles. By using all 1,513,792 fatal crashes in the Fatality Analysis Reporting System (FARS) data, 1975-2014, we successfully described these relationships. We found: (a) fatal crashes involving more than 35 vehicles are most likely to occur in snow or fog; (b) fatal crashes in rain are three times as likely to involve 10 or more vehicles as fatal crashes in good weather; (c) fatal crashes in snow [or fog] are 24 times [35 times] as likely to involve 10 or more vehicles as fatal crashes in good weather. Had the threshold been 20 vehicles, the risk ratios would be 6 for rain, 158 for snow, and 171 for fog. Driver deaths per fatal crash increase slowly with increasing numbers of involved vehicles when it is snowing or raining, but more steeply when clear or foggy. We conclude that, in order to reduce the risk of involvement in crashes involving large numbers of vehicles, drivers must reduce speed in fog, and in snow or rain reduce speed by even more than they already do. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
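
    The risk ratios quoted above compare the proportion of multi-vehicle crashes across weather categories. A minimal sketch with made-up counts (FARS provides the real ones) shows the arithmetic:

```python
def risk_ratio(k_weather, n_weather, k_good, n_good):
    """Ratio of the proportion of fatal crashes involving k+ vehicles
    in a given weather to the same proportion in good weather
    (hypothetical counts; FARS supplies the real ones)."""
    return (k_weather / n_weather) / (k_good / n_good)

# Hypothetical: 10+ vehicle crashes among all fatal crashes, by weather.
print(round(risk_ratio(120, 40_000, 300, 1_000_000), 1))  # → 10.0
```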

  1. On Independence for Capacities with Law of Large Numbers

    OpenAIRE

    Huang, Weihuan

    2017-01-01

    This paper introduces new notions of Fubini independence and Exponential independence of random variables under capacities to fit Ellsberg's model, and establishes the relationships between Fubini independence, Exponential independence, Maccheroni and Marinacci's independence, and Peng's independence. As an application, we give a weak law of large numbers for capacities under Exponential independence.
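
    The record concerns a weak law of large numbers under capacities; as a classical point of comparison, the ordinary weak law, where sample means converge in probability to the expectation, can be simulated in a few lines (illustrative only; it does not capture the capacity setting):

```python
import random

random.seed(0)

def sample_mean(n):
    """Mean of n fair-die rolls; the classical weak law of large numbers
    says this converges in probability to the expectation 3.5 as n grows."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

# The sample mean tightens around 3.5 as n increases.
for n in (10, 1_000, 100_000):
    print(n, round(sample_mean(n), 3))
```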

  2. Teaching Multiplication of Large Positive Whole Numbers Using ...

    African Journals Online (AJOL)

    This study investigated the teaching of multiplication of large positive whole numbers using the grating method and the effect of this method on students' performance in junior secondary schools. The study was conducted in Obio Akpor Local Government Area of Rivers state. It was quasi-experimental. Two research ...

  3. Lovelock inflation and the number of large dimensions

    CERN Document Server

    Ferrer, Francesc

    2007-01-01

    We discuss an inflationary scenario based on Lovelock terms. These higher order curvature terms can lead to inflation when there are more than three spatial dimensions. Inflation will end if the extra dimensions are stabilised, so that at most three dimensions are free to expand. This relates graceful exit to the number of large dimensions.

  4. A large number of stepping motor network construction by PLC

    Science.gov (United States)

    Mei, Lin; Zhang, Kai; Hongqiang, Guo

    2017-11-01

    In flexible automated production lines, the equipment is complex and the control modes are varied, so realizing orderly, large-scale information interaction among many stepper and servo motors is a difficult control problem. Based on an existing flexible production line, this paper makes a comparative study of its network strategy. An Ethernet + PROFIBUS communication configuration based on PROFINET IO and PROFIBUS is proposed, which can effectively improve the data-interaction efficiency of the equipment and provide stable data exchange.

  5. Fluid Mechanics of Aquatic Locomotion at Large Reynolds Numbers

    OpenAIRE

    Govardhan, RN; Arakeri, JH

    2011-01-01

    There exists a huge range of fish species, besides other aquatic organisms like squids and salps, that locomote in water at large Reynolds numbers, a regime of flow where inertial forces dominate viscous forces. In the present review, we discuss the fluid mechanics governing the locomotion of such organisms. Most fishes propel themselves by periodic undulatory motions of the body and tail, and the typical classification of their swimming modes is based on the fraction of their body...

  6. Rotating thermal convection at very large Rayleigh numbers

    Science.gov (United States)

    Weiss, Stephan; van Gils, Dennis; Ahlers, Guenter; Bodenschatz, Eberhard

    2016-11-01

    The large-scale thermal convection systems in geo- and astrophysics are usually influenced by Coriolis forces caused by the rotation of their celestial bodies. To better understand the influence of rotation on the convective flow field and the heat transport under these conditions, we study Rayleigh-Bénard convection using pressurized sulfur hexafluoride (SF6) at up to 19 bar in a cylinder of diameter D = 1.12 m and height L = 2.24 m. The gas is heated from below and cooled from above, and the convection cell sits on a rotating table inside a large pressure vessel (the "Uboot of Göttingen"). With this setup Rayleigh numbers of up to Ra = 10^15 can be reached, while Ekman numbers as low as Ek = 10^-8 are possible. The Prandtl number in these experiments is kept constant at Pr = 0.8. We report on heat flux measurements (expressed by the Nusselt number Nu) as well as measurements from more than 150 temperature probes inside the flow. We thank the Deutsche Forschungsgemeinschaft (DFG) for financial support through SFB963: "Astrophysical Flow Instabilities and Turbulence". The work of GA was supported in part by the US National Science Foundation through Grant DMR11-58514.

  7. Lepton number violation in theories with a large number of standard model copies

    International Nuclear Information System (INIS)

    Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich

    2011-01-01

    We examine lepton number violation (LNV) in theories with a saturated black-hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model, since with a low quantum-gravity scale black holes may induce TeV-scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_{B-L}. Then, due to the existence of a specific compensation mechanism between contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed, with rates far beyond experimental reach.

  8. Improving CASINO performance for models with large number of electrons

    International Nuclear Information System (INIS)

    Anton, L.; Alfe, D.; Hood, R.Q.; Tanqueray, D.

    2009-01-01

    Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers, which are straightforward to use on parallel computers. Nevertheless, some computations have reached the limit of available memory for models with more than 1000 electrons because of the need to store a large amount of electronic-orbital data. Besides that, for systems with a large number of electrons, it is interesting to study whether the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) distributed orbital data, implemented with MPI or Unix inter-process communication tools; (2) second-level parallelism for the configuration computation.

  9. [Dual process in large number estimation under uncertainty].

    Science.gov (United States)

    Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento

    2016-08-01

    According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies of number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such a deliberative System 2 process on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.

  10. The large number hypothesis and Einstein's theory of gravitation

    International Nuclear Information System (INIS)

    Yun-Kau Lau

    1985-01-01

    In an attempt to reconcile the large number hypothesis (LNH) with Einstein's theory of gravitation, a tentative generalization of Einstein's field equations with time-dependent cosmological and gravitational constants is proposed. A cosmological model consistent with the LNH is deduced. The coupling formula of the cosmological constant with matter is found, and as a consequence, the time-dependent formulae of the cosmological constant and the mean matter density of the Universe at the present epoch are then found. Einstein's theory of gravitation, whether with a zero or nonzero cosmological constant, becomes a limiting case of the new generalized field equations after the early epoch

  11. Combining large number of weak biomarkers based on AUC.

    Science.gov (United States)

    Yan, Li; Tian, Lili; Liu, Song

    2015-12-20

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.
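
    The quantity being maximized, AUC, has a simple rank-based (Mann-Whitney) estimate: the probability that a randomly chosen case outscores a randomly chosen control. A small sketch with two synthetic weak markers and an equal-weight linear combination (an illustrative choice of weights, not the paper's pairwise method) shows the mechanics:

```python
import random

def auc(scores_pos, scores_neg):
    """Rank-based AUC: probability that a random positive outscores a
    random negative, counting ties as 1/2 (the Mann-Whitney statistic)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

random.seed(1)
# Two weak synthetic markers: cases shifted up slightly vs. controls.
cases    = [(random.gauss(0.3, 1), random.gauss(0.3, 1)) for _ in range(200)]
controls = [(random.gauss(0.0, 1), random.gauss(0.0, 1)) for _ in range(200)]

single = auc([x for x, _ in cases], [x for x, _ in controls])
# Equal-weight linear combination of the two markers.
combo = auc([x + y for x, y in cases], [x + y for x, y in controls])
print(round(single, 3), round(combo, 3))
```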

  12. Quasi-isodynamic configuration with large number of periods

    International Nuclear Information System (INIS)

    Shafranov, V.D.; Isaev, M.Yu.; Mikhailov, M.I.; Subbotin, A.A.; Cooper, W.A.; Kalyuzhnyj, V.N.; Kasilov, S.V.; Nemov, V.V.; Kernbichler, W.; Nuehrenberg, C.; Nuehrenberg, J.; Zille, R.

    2005-01-01

    It has been previously reported that quasi-isodynamic (qi) stellarators with poloidal direction of the contours of B on a magnetic surface can exhibit very good fast-particle collisionless confinement. In addition, approaching the quasi-isodynamicity condition leads to diminished neoclassical transport and small bootstrap current. Calculations of local-mode stability show a tendency toward an increasing beta limit with increasing number of periods. The consideration of quasi-helically symmetric systems has demonstrated that with increasing aspect ratio (and number of periods) the optimized configuration approaches its straight symmetric counterpart, for which the optimal parameters and highest beta values were found by optimization of the boundary magnetic surface cross-section. The qi system considered here, with zero net toroidal current, does not have a symmetric analogue in the limit of large aspect ratio and finite rotational transform. Thus, it is not clear whether some invariant structure of the configuration period exists in the limit of negligible toroidal effect, and what the best possible parameters for it are. In the present paper the results of an optimization of a configuration with N = 12 periods are presented. Properties such as fast-particle confinement, effective ripple, the structural factor of the bootstrap current, and MHD stability are considered. It is shown that the MHD stability limit here is larger than in configurations with smaller numbers of periods considered earlier. Nevertheless, the toroidal effect in this configuration is still significant, so that a simple increase of the number of periods and proportional growth of the aspect ratio do not conserve the favourable neoclassical transport and ideal local-mode stability properties. (author)

  13. Automatic trajectory measurement of large numbers of crowded objects

    Science.gov (United States)

    Li, Hui; Liu, Ye; Chen, Yan Qiu

    2013-06-01

Complex motion patterns of natural systems, such as fish schools, bird flocks, and cell groups, have attracted great attention from scientists for years. Trajectory measurement of individuals is vital for quantitative and high-throughput study of their collective behaviors. However, such data are rare mainly due to the challenges of detection and tracking of large numbers of objects with similar visual features and frequent occlusions. We present an automatic and effective framework to measure trajectories of large numbers of crowded oval-shaped objects, such as fish and cells. We first use a novel dual ellipse locator to detect the coarse position of each individual and then propose a variance minimization active contour method to obtain the optimal segmentation results. For tracking, the cost matrix of assignment between consecutive frames is trained via a random forest classifier on many spatial, texture, and shape features. The optimal trajectories are found for the whole image sequence by solving two linear assignment problems. We evaluate the proposed method on many challenging data sets.
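The final tracking step described above reduces to a linear assignment problem. The sketch below solves a toy two-object instance by brute force with a plain Euclidean-distance cost; the paper instead learns its cost matrix with a random forest, and production code would use the Hungarian algorithm (e.g. `scipy.optimize.linear_sum_assignment`) rather than enumerating permutations.

```python
import itertools
import math

def match_detections(prev_pts, curr_pts):
    """Brute-force minimum-cost matching between two equal-size point sets.

    Sketch only: real trackers solve this with the Hungarian algorithm,
    and the paper additionally learns the cost matrix instead of using
    raw distances.
    """
    n = len(prev_pts)
    # cost[i][j] = distance from object i in frame t to object j in frame t+1
    cost = [[math.dist(p, c) for c in curr_pts] for p in prev_pts]
    best = min(itertools.permutations(range(n)),
               key=lambda perm: sum(cost[i][perm[i]] for i in range(n)))
    return list(enumerate(best))

matches = match_detections([(0, 0), (10, 10)], [(11, 9), (1, 1)])
print(matches)  # [(0, 1), (1, 0)]
```

Each object in frame t is paired with its cheapest globally consistent partner in frame t+1, which is exactly the structure of the assignment step quoted above.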

  14. The large numbers hypothesis and the Einstein theory of gravitation

    International Nuclear Information System (INIS)

    Dirac, P.A.M.

    1979-01-01

A study of the relations between large dimensionless numbers leads to the belief that G, expressed in atomic units, varies with the epoch while the Einstein theory requires G to be constant. These two requirements can be reconciled by supposing that the Einstein theory applies with a metric that differs from the atomic metric. The theory can be developed with conservation of mass by supposing that the continual increase in the mass of the observable universe arises from a continual slowing down of the velocity of recession of the galaxies. This leads to a model of the Universe that was first proposed by Einstein and de Sitter (the E.S. model). The observations of the microwave radiation fit in with this model. The static Schwarzschild metric has to be modified to fit in with the E.S. model for large r. The modification is worked out, and also the motion of planets with the new metric. It is found that there is a difference between ephemeris time and atomic time, and also that there should be an inward spiralling of the planets, referred to atomic units, superposed on the motion given by ordinary gravitational theory. These are effects that can be checked by observation, but there is no conclusive evidence up to the present. (author)
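The kind of large dimensionless number behind the hypothesis can be reproduced in a few lines: the ratio of the electric to the gravitational attraction between a proton and an electron is of order 10^39, the coincidence with the age of the Universe in atomic units that Dirac sought to explain. The constants below are CODATA values; the script is a numerical illustration only, not part of the paper.

```python
import math

e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
m_p  = 1.67262192e-27    # proton mass, kg
m_e  = 9.1093837e-31     # electron mass, kg

# Dimensionless ratio of Coulomb to Newtonian attraction for a p-e pair
force_ratio = e**2 / (4 * math.pi * eps0 * G * m_p * m_e)
print(f"{force_ratio:.3e}")  # ≈ 2.27e39
```

Dirac's Large Numbers Hypothesis takes the near-coincidence of such ratios with the cosmological epoch as fundamental, which is what forces G (in atomic units) to vary with time.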

  15. A Characterization of Hypergraphs with Large Domination Number

    Directory of Open Access Journals (Sweden)

    Henning Michael A.

    2016-05-01

Let H = (V, E) be a hypergraph with vertex set V and edge set E. A dominating set in H is a subset of vertices D ⊆ V such that for every vertex v ∈ V \ D there exists an edge e ∈ E for which v ∈ e and e ∩ D ≠ ∅. The domination number γ(H) is the minimum cardinality of a dominating set in H. It is known [Cs. Bujtás, M.A. Henning and Zs. Tuza, Transversals and domination in uniform hypergraphs, European J. Combin. 33 (2012) 62-71] that for k ≥ 5, if H is a hypergraph of order n and size m with all edges of size at least k and with no isolated vertex, then γ(H) ≤ (n + ⌊(k − 3)/2⌋m)/(⌊3(k − 1)/2⌋). In this paper, we apply a recent result of the authors on hypergraphs with large transversal number [M.A. Henning and C. Löwenstein, A characterization of hypergraphs that achieve equality in the Chvátal-McDiarmid Theorem, Discrete Math. 323 (2014) 69-75] to characterize the hypergraphs achieving equality in this bound.
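The definitions above can be checked mechanically for small instances. The sketch below is an exhaustive-search illustration on a hypothetical toy hypergraph (not data from the paper); the edges here have size 3, so the k ≥ 5 bound quoted in the abstract does not apply to this example.

```python
from itertools import combinations

def dominates(D, V, E):
    """True if every vertex outside D lies in some edge that meets D."""
    D = set(D)
    return all(any(v in e and D & set(e) for e in E) for v in set(V) - D)

def domination_number(V, E):
    """gamma(H): minimum size of a dominating set, by brute force (small inputs)."""
    for k in range(len(V) + 1):
        if any(dominates(D, V, E) for D in combinations(V, k)):
            return k

V = [1, 2, 3, 4, 5, 6]
E = [{1, 2, 3}, {3, 4, 5}, {5, 6, 1}]
print(domination_number(V, E))  # 2 (e.g. D = {3, 6})
```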

  16. Particle creation and Dirac's large number hypothesis; and Reply

    International Nuclear Information System (INIS)

    Canuto, V.; Adams, P.J.; Hsieh, S.H.; Tsiang, E.; Steigman, G.

    1976-01-01

The claim made by Steigman (Nature; 261:479 (1976)), that the creation of matter as postulated by Dirac (Proc. R. Soc.; A338:439 (1974)) is unnecessary, is here shown to be incorrect. It is stated that Steigman's claim that Dirac's Large Numbers Hypothesis (LNH) does not require particle creation is wrong because he has assumed that which he was seeking to prove, that is, that rho does not contain matter creation. Steigman's claim that Dirac's LNH leads to nonsensical results in the very early Universe is superficially correct, but this only supports Dirac's contention that the LNH may not be valid in the very early Universe. In a reply Steigman points out that in Dirac's original cosmology R ≈ t^(1/3), and using this model the results and conclusions of the present author's paper do apply, but using a variation chosen by Canuto et al (T ≈ t) Dirac's LNH cannot apply. Additionally it is observed that a cosmological theory which only predicts the present epoch is of questionable value. (U.K.)

  17. A modified large number theory with constant G

    Science.gov (United States)

    Recami, Erasmo

    1983-03-01

The inspiring “numerology” uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the “gravitational world” (cosmos) with the “strong world” (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the “Large Number Theory,” cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R̄/r̄ of the cosmos typical length R̄ to the hadron typical length r̄ is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the “cyclical big-bang” hypothesis, then R̄ and r̄ can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavšič.

  18. The large lungs of elite swimmers: an increased alveolar number?

    Science.gov (United States)

    Armour, J; Donnelly, P M; Bye, P T

    1993-02-01

    In order to obtain further insight into the mechanisms relating to the large lung volumes of swimmers, tests of mechanical lung function, including lung distensibility (K) and elastic recoil, pulmonary diffusion capacity, and respiratory mouth pressures, together with anthropometric data (height, weight, body surface area, chest width, depth and surface area), were compared in eight elite male swimmers, eight elite male long distance athletes and eight control subjects. The differences in training profiles of each group were also examined. There was no significant difference in height between the subjects, but the swimmers were younger than both the runners and controls, and both the swimmers and controls were heavier than the runners. Of all the training variables, only the mean total distance in kilometers covered per week was significantly greater in the runners. Whether based on: (a) adolescent predicted values; or (b) adult male predicted values, swimmers had significantly increased total lung capacity ((a) 145 +/- 22%, (mean +/- SD) (b) 128 +/- 15%); vital capacity ((a) 146 +/- 24%, (b) 124 +/- 15%); and inspiratory capacity ((a) 155 +/- 33%, (b) 138 +/- 29%), but this was not found in the other two groups. Swimmers also had the largest chest surface area and chest width. Forced expiratory volume in one second (FEV1) was largest in the swimmers ((b) 122 +/- 17%) and FEV1 as a percentage of forced vital capacity (FEV1/FVC)% was similar for the three groups. Pulmonary diffusing capacity (DLCO) was also highest in the swimmers (117 +/- 18%). All of the other indices of lung function, including pulmonary distensibility (K), elastic recoil and diffusion coefficient (KCO), were similar. These findings suggest that swimmers may have achieved greater lung volumes than either runners or control subjects, not because of greater inspiratory muscle strength, or differences in height, fat free mass, alveolar distensibility, age at start of training or sternal length or

  19. A NICE approach to managing large numbers of desktop PC's

    International Nuclear Information System (INIS)

    Foster, David

    1996-01-01

The problems of managing desktop systems are far from resolved as we deploy increasing numbers of PCs, Macintoshes and UN*X workstations. This paper concentrates on the solution adopted at CERN for the management of the rapidly increasing number of desktop PCs in use in all parts of the laboratory. (author)

  20. The Ramsey numbers of large cycles versus small wheels

    NARCIS (Netherlands)

    Surahmat,; Baskoro, E.T.; Broersma, H.J.

    2004-01-01

    For two given graphs G and H, the Ramsey number R(G;H) is the smallest positive integer N such that for every graph F of order N the following holds: either F contains G as a subgraph or the complement of F contains H as a subgraph. In this paper, we determine the Ramsey number R(Cn;Wm) for m = 4

  1. Turbulent flows at very large Reynolds numbers: new lessons learned

    International Nuclear Information System (INIS)

    Barenblatt, G I; Prostokishin, V M; Chorin, A J

    2014-01-01

The universal (Reynolds-number-independent) von Kármán–Prandtl logarithmic law for the velocity distribution in the basic intermediate region of a turbulent shear flow is generally considered to be one of the fundamental laws of engineering science and is taught universally in fluid mechanics and hydraulics courses. We show here that this law is based on an assumption that cannot be considered to be correct and which does not correspond to experiment. Nor is Landau's derivation of this law quite correct. In this paper, an alternative scaling law explicitly incorporating the influence of the Reynolds number is discussed, as is the corresponding drag law. The study uses the concept of intermediate asymptotics and that of incomplete similarity in the similarity parameter. Yakov Borisovich Zeldovich played an outstanding role in the development of these ideas. This work is a tribute to his glowing memory. (100th anniversary of the birth of Ya.B. Zeldovich)
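The law under discussion is u+ = (1/κ) ln(y+) + B in wall units. The sketch below evaluates it with the conventional textbook constants κ ≈ 0.41, B ≈ 5.0; the paper's argument is precisely that such Reynolds-number-independent constants are not supported by experiment, so this is shown only to make the classical baseline concrete.

```python
import math

def u_plus_log_law(y_plus, kappa=0.41, B=5.0):
    """Mean streamwise velocity in wall units at wall-normal distance y+
    (classical log-law form; kappa and B are conventional values)."""
    return math.log(y_plus) / kappa + B

# At y+ = 100, well inside the nominal log region:
print(round(u_plus_log_law(100.0), 2))  # 16.23
```

Barenblatt and co-authors replace this with a power law whose exponent and prefactor depend explicitly on the Reynolds number; the log law is the limiting form that their analysis calls into question.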

  2. Chaotic scattering: the supersymmetry method for large number of channels

    International Nuclear Information System (INIS)

    Lehmann, N.; Saher, D.; Sokolov, V.V.; Sommers, H.J.

    1995-01-01

We investigate a model of chaotic resonance scattering based on the random matrix approach. The Hermitian part of the effective Hamiltonian of resonance states is taken from the GOE, whereas the amplitudes of coupling to decay channels are considered either random or fixed. A new version of the supersymmetry method is worked out to determine analytically the distribution of poles of the S-matrix in the complex energy plane as well as the mean value and two-point correlation function of its elements when the number of channels scales with the number of resonance states. Analytical formulae are compared with numerical simulations. All results obtained coincide in both models provided that the ratio m of the numbers of channels and resonances is small enough, and remain qualitatively similar for larger values of m. The relation between the pole distribution and the fluctuations in scattering is discussed. It is shown in particular that the clouds of poles of the S-matrix in the complex energy plane are separated from the real axis by a finite gap Γ_g which determines the correlation length in the scattering fluctuations and leads to the exponential asymptotics of the decay law of a complicated intermediate state. ((orig.))

  3. Chaotic scattering: the supersymmetry method for large number of channels

    Energy Technology Data Exchange (ETDEWEB)

    Lehmann, N. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Saher, D. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Sokolov, V.V. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Sommers, H.J. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik)

    1995-01-23

We investigate a model of chaotic resonance scattering based on the random matrix approach. The Hermitian part of the effective Hamiltonian of resonance states is taken from the GOE, whereas the amplitudes of coupling to decay channels are considered either random or fixed. A new version of the supersymmetry method is worked out to determine analytically the distribution of poles of the S-matrix in the complex energy plane as well as the mean value and two-point correlation function of its elements when the number of channels scales with the number of resonance states. Analytical formulae are compared with numerical simulations. All results obtained coincide in both models provided that the ratio m of the numbers of channels and resonances is small enough, and remain qualitatively similar for larger values of m. The relation between the pole distribution and the fluctuations in scattering is discussed. It is shown in particular that the clouds of poles of the S-matrix in the complex energy plane are separated from the real axis by a finite gap Γ_g which determines the correlation length in the scattering fluctuations and leads to the exponential asymptotics of the decay law of a complicated intermediate state. ((orig.))

  4. Gentile statistics with a large maximum occupation number

    International Nuclear Information System (INIS)

    Dai Wusheng; Xie Mi

    2004-01-01

In Gentile statistics the maximum occupation number can take on unrestricted integers: 1 < n < ∞. It is shown that the Bose-Einstein case is not recovered from Gentile statistics as n goes to N. Attention is also concentrated on the contribution of the ground state, which was ignored in the related literature. The thermodynamic behavior of a ν-dimensional Gentile ideal gas of particles with dispersion E = p^s/2m, where ν and s are arbitrary, is analyzed in detail. Moreover, we provide an alternative derivation of the partition function for Gentile statistics.
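The interpolation the record describes can be made concrete with the standard closed-form mean occupation for Gentile statistics (an assumption of this sketch, quoted from the general literature rather than from the abstract): n = 1 reproduces Fermi-Dirac, and for fixed x > 0 the limit n → ∞ tends to Bose-Einstein. The abstract's caveat concerns the subtler limit in which n approaches the total particle number N.

```python
import math

def gentile_occupation(x, n):
    """Mean occupation per state in Gentile statistics with maximum occupation
    number n, where x = (eps - mu) / (k_B * T). Standard formula, assumed here."""
    return 1.0 / (math.exp(x) - 1.0) - (n + 1) / (math.exp((n + 1) * x) - 1.0)

x = 1.0
fermi = 1.0 / (math.exp(x) + 1.0)   # n = 1 reduces to Fermi-Dirac
bose = 1.0 / (math.exp(x) - 1.0)    # n -> infinity tends to Bose-Einstein (x > 0)
print(abs(gentile_occupation(x, 1) - fermi) < 1e-12)   # True
print(abs(gentile_occupation(x, 500) - bose) < 1e-6)   # True
```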

  5. The numbers game in wildlife conservation: changeability and framing of large mammal numbers in Zimbabwe

    NARCIS (Netherlands)

    Gandiwa, E.

    2013-01-01

    Wildlife conservation in terrestrial ecosystems requires an understanding of processes influencing population sizes. Top-down and bottom-up processes are important in large herbivore population dynamics, with strength of these processes varying spatially and temporally. However, up until

  6. Surprise... Surprise..., An Empirical Investigation on How Surprise is Connected to Customer Satisfaction

    NARCIS (Netherlands)

    J. Vanhamme (Joëlle)

    2003-01-01

This research investigates the specific influence of the emotion of surprise on customer transaction-specific satisfaction. Four empirical studies were conducted: two field studies (a diary study and a cross-sectional survey) and two experiments. The results show that surprise positively

  7. Exploration, Novelty, Surprise and Free Energy Minimisation

    Directory of Open Access Journals (Sweden)

    Philipp eSchwartenbeck

    2013-10-01

This paper reviews recent developments under the free energy principle that introduce a normative perspective on classical economic (utilitarian) decision-making based on (active) Bayesian inference. It has been suggested that the free energy principle precludes novelty and complexity, because it assumes that biological systems, like ourselves, try to minimise the long-term average of surprise to maintain their homeostasis. However, recent formulations show that minimising surprise leads naturally to concepts such as exploration and novelty bonuses. In this approach, agents infer a policy that minimises surprise by minimising the difference (or relative entropy) between likely and desired outcomes, which involves both pursuing the goal-state that has the highest expected utility (often termed ‘exploitation’) and visiting a number of different goal-states (‘exploration’). Crucially, the opportunity to visit new states increases the value of the current state. Casting decision-making problems within a variational framework, therefore, predicts that our behaviour is governed by both the entropy and expected utility of future states. This dissolves any dialectic between minimising surprise and exploration or novelty seeking.
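The "difference (or relative entropy) between likely and desired outcomes" mentioned above is the Kullback-Leibler divergence. A minimal sketch with hypothetical outcome distributions (the numbers are illustrative, not taken from the paper):

```python
import math

def kl_divergence(q, p):
    """Relative entropy D_KL(Q||P) in nats between two discrete distributions."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

likely = [0.7, 0.2, 0.1]    # hypothetical distribution over predicted outcomes
desired = [0.4, 0.4, 0.2]   # hypothetical distribution over preferred outcomes
surprise_gap = kl_divergence(likely, desired)
print(round(surprise_gap, 4))  # 0.1838
```

In the active-inference reading, a policy is preferred when it drives this divergence toward zero, i.e. when predicted outcomes come to resemble desired ones.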

  8. Surprise as a design strategy

    NARCIS (Netherlands)

    Ludden, G.D.S.; Schifferstein, H.N.J.; Hekkert, P.P.M.

    2008-01-01

    Imagine yourself queuing for the cashier’s desk in a supermarket. Naturally, you have picked the wrong line, the one that does not seem to move at all. Soon, you get tired of waiting. Now, how would you feel if the cashier suddenly started to sing? Many of us would be surprised and, regardless of

  9. Characterization of General TCP Traffic under a Large Number of Flows Regime

    National Research Council Canada - National Science Library

    Tinnakornsrisuphap, Peerapol; La, Richard J; Makowski, Armand M

    2002-01-01

    .... Accurate traffic modeling of a large number of short-lived TCP flows is extremely difficult due to the interaction between session, transport, and network layers, and the explosion of the size...

  10. Radioimmunoassay of h-TSH - methodological suggestions for dealing with medium to large numbers of samples

    International Nuclear Information System (INIS)

    Mahlstedt, J.

    1977-01-01

    The article deals with practical aspects of establishing a TSH-RIA for patients, with particular regard to predetermined quality criteria. Methodological suggestions are made for medium to large numbers of samples with the target of reducing monotonous precision working steps by means of simple aids. The quality criteria required are well met, while the test procedure is well adapted to the rhythm of work and may be carried out without loss of precision even with large numbers of samples. (orig.) [de

  11. Similarities between 2D and 3D convection for large Prandtl number

    Indian Academy of Sciences (India)

    2016-06-18

For Rayleigh-Bénard convection (RBC), we perform a comparative study of the spectra and fluxes of energy and entropy, and the scaling of large-scale quantities, for large and infinite Prandtl numbers in two (2D) and three (3D) dimensions. We observe close ...

  12. Very Large Data Volumes Analysis of Collaborative Systems with Finite Number of States

    Science.gov (United States)

    Ivan, Ion; Ciurea, Cristian; Pavel, Sorin

    2010-01-01

    The collaborative system with finite number of states is defined. A very large database is structured. Operations on large databases are identified. Repetitive procedures for collaborative systems operations are derived. The efficiency of such procedures is analyzed. (Contains 6 tables, 5 footnotes and 3 figures.)

  13. Evidence for Knowledge of the Syntax of Large Numbers in Preschoolers

    Science.gov (United States)

    Barrouillet, Pierre; Thevenot, Catherine; Fayol, Michel

    2010-01-01

    The aim of this study was to provide evidence for knowledge of the syntax governing the verbal form of large numbers in preschoolers long before they are able to count up to these numbers. We reasoned that if such knowledge exists, it should facilitate the maintenance in short-term memory of lists of lexical primitives that constitute a number…

  14. A surprising palmar nevus: A case report

    Directory of Open Access Journals (Sweden)

    Rana Rafiei

    2018-02-01

Raised palmar or plantar nevi, especially in white people, are an unusual feature. We present an uncommon palmar compound nevus in a 26-year-old woman with a large diameter (6 mm) which had a collarette-shaped margin. In histopathologic evaluation, intralymphatic protrusions of nevic nests were noted. This case was surprising to us for these reasons: the size, shape, location and histopathology of the lesion. Palmar nevi are usually junctional (flat) and below 3 mm in diameter, and intralymphatic protrusion or invasion in nevi is an extremely rare phenomenon.

  15. Numerical and analytical approaches to an advection-diffusion problem at small Reynolds number and large Péclet number

    Science.gov (United States)

    Fuller, Nathaniel J.; Licata, Nicholas A.

    2018-05-01

    Obtaining a detailed understanding of the physical interactions between a cell and its environment often requires information about the flow of fluid surrounding the cell. Cells must be able to effectively absorb and discard material in order to survive. Strategies for nutrient acquisition and toxin disposal, which have been evolutionarily selected for their efficacy, should reflect knowledge of the physics underlying this mass transport problem. Motivated by these considerations, in this paper we discuss the results from an undergraduate research project on the advection-diffusion equation at small Reynolds number and large Péclet number. In particular, we consider the problem of mass transport for a Stokesian spherical swimmer. We approach the problem numerically and analytically through a rescaling of the concentration boundary layer. A biophysically motivated first-passage problem for the absorption of material by the swimming cell demonstrates quantitative agreement between the numerical and analytical approaches. We conclude by discussing the connections between our results and the design of smart toxin disposal systems.
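The flow regime in question can be made concrete with the two dimensionless groups involved: Re = UL/ν compares inertia to viscosity, and Pe = UL/D compares advection to diffusion. The numbers below are illustrative values for a ciliate-sized swimmer in water, assumed for this sketch rather than taken from the paper.

```python
U = 1e-3     # swimming speed, m/s (assumed)
L = 1e-4     # cell size, m (assumed)
nu = 1e-6    # kinematic viscosity of water, m^2/s
D = 1e-9     # diffusivity of a small nutrient molecule in water, m^2/s

Re = U * L / nu   # ≈ 0.1: viscous-dominated (Stokes) flow
Pe = U * L / D    # ≈ 100: advection-dominated mass transport
print(Re, Pe)
```

The same swimmer thus sits simultaneously at small Reynolds number and large Péclet number, which is exactly the regime the boundary-layer rescaling in the paper addresses.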

  16. Secret Sharing Schemes with a large number of players from Toric Varieties

    DEFF Research Database (Denmark)

    Hansen, Johan P.

A general theory for constructing linear secret sharing schemes over a finite field $\Fq$ from toric varieties is introduced. The number of players can be as large as $(q-1)^r-1$ for $r\geq 1$. We present general methods for obtaining the reconstruction and privacy thresholds as well as conditions for multiplication on the associated secret sharing schemes. In particular we apply the method on certain toric surfaces. The main results are ideal linear secret sharing schemes where the number of players can be as large as $(q-1)^2-1$. We determine bounds for the reconstruction and privacy thresholds...
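For readers unfamiliar with linear secret sharing, the sketch below shows the much simpler Shamir scheme over a prime field, which the toric-variety construction generalizes. This is a substitute illustration, not the construction of the paper; the prime q = 257 and threshold t = 3 are arbitrary choices for the sketch.

```python
import random

q = 257   # a prime; shares are evaluations at nonzero field points
t = 3     # reconstruction threshold: any t shares recover the secret

def make_shares(secret, n_players):
    """Shares are values of a random degree-(t-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(q) for _ in range(t - 1)]
    return {x: sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
            for x in range(1, n_players + 1)}

def reconstruct(shares):
    """Lagrange interpolation at 0 from the first t shares."""
    pts = list(shares.items())[:t]
    total = 0
    for xi, yi in pts:
        num = den = 1
        for xj, _ in pts:
            if xj != xi:
                num = num * (-xj) % q
                den = den * (xi - xj) % q
        total = (total + yi * num * pow(den, q - 2, q)) % q
    return total

shares = make_shares(123, 5)
print(reconstruct(shares))  # 123
```

The toric schemes in the paper are linear in the same sense, but support far more players, up to $(q-1)^r-1$, and come with multiplication conditions that Shamir-style sketches do not capture.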

  17. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number.

    Science.gov (United States)

    Klewicki, J C; Chini, G P; Gibson, J F

    2017-03-13

Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier-Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  18. Surprises and counterexamples in real function theory

    CERN Document Server

    Rajwade, A R

    2007-01-01

This book presents a variety of intriguing, surprising and appealing topics and nonroutine theorems in real function theory. It is a reference book to which one can turn for finding answers to questions that arise while studying or teaching analysis. Chapter 1 is an introduction to algebraic, irrational and transcendental numbers and contains the Cantor ternary set. Chapter 2 contains functions with extraordinary properties, such as functions that are continuous at each point but differentiable at no point. Chapters 4 and 5 cover the intermediate value property, periodic functions, Rolle's theorem, Taylor's theorem, and points of tangency. Chapter 6 discusses sequences and series, including the restricted harmonic series, rearrangements of the alternating harmonic series and some number-theoretic aspects. In Chapter 7, peculiar ranges of convergence are studied. Appendix I deals with some specialized topics. Exercises at the end of chapters and their solutions are provided in Appendix II. This book will be useful for students and teachers alike.

  19. Small genomes and large seeds: chromosome numbers, genome size and seed mass in diploid Aesculus species (Sapindaceae).

    Science.gov (United States)

    Krahulcová, Anna; Trávnícek, Pavel; Krahulec, František; Rejmánek, Marcel

    2017-04-01

Aesculus L. (horse chestnut, buckeye) is a genus of 12-19 extant woody species native to the temperate Northern Hemisphere. This genus is known for unusually large seeds among angiosperms. While chromosome counts are available for many Aesculus species, only one has had its genome size measured. The aim of this study is to provide more genome size data and analyse the relationship between genome size and seed mass in this genus. Chromosome numbers in root tip cuttings were confirmed for four species and reported for the first time for three additional species. Flow cytometric measurements of 2C nuclear DNA values were conducted on eight species, and mean seed mass values were estimated for the same taxa. The same chromosome number, 2n = 40, was determined in all investigated taxa. Original measurements of 2C values for seven Aesculus species (eight taxa), added to just one reliable datum for A. hippocastanum, confirmed the notion that the genome size in this genus with relatively large seeds is surprisingly low, ranging from 0.955 pg 2C⁻¹ in A. parviflora to 1.275 pg 2C⁻¹ in A. glabra var. glabra. The chromosome number of 2n = 40 seems to be conclusively the universal 2n number for non-hybrid species in this genus. Aesculus genome sizes are relatively small, not only within its own family, Sapindaceae, but also within woody angiosperms. The genome sizes seem to be distinct and non-overlapping among the four major Aesculus clades. These results provide extra support for the most recent reconstruction of Aesculus phylogeny. The correlation between the 2C values and seed masses in the examined Aesculus species is slightly negative and not significant. However, when the four major clades are treated separately, there is a consistent positive association between larger genome size and larger seed mass within individual lineages. © The Author 2017. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved.

  20. Obstructions to the realization of distance graphs with large chromatic numbers on spheres of small radii

    Energy Technology Data Exchange (ETDEWEB)

    Kupavskii, A B; Raigorodskii, A M [M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Moscow (Russian Federation)

    2013-10-31

    We investigate in detail some properties of distance graphs constructed on the integer lattice. Such graphs find wide applications in problems of combinatorial geometry, in particular, such graphs were employed to answer Borsuk's question in the negative and to obtain exponential estimates for the chromatic number of the space. This work is devoted to the study of the number of cliques and the chromatic number of such graphs under certain conditions. Constructions of sequences of distance graphs are given, in which the graphs have unit length edges and contain a large number of triangles that lie on a sphere of radius 1/√3 (which is the minimum possible). At the same time, the chromatic numbers of the graphs depend exponentially on their dimension. The results of this work strengthen and generalize some of the results obtained in a series of papers devoted to related issues. Bibliography: 29 titles.

  1. Surprise and Memory as Indices of Concrete Operational Development

    Science.gov (United States)

    Achenbach, Thomas M.

    1973-01-01

    Normal and retarded children's use of color, number, length and continuous quantity as attributes of identification was assessed by presenting them with contrived changes in three properties. Surprise and correct memory responses for color preceded those to number, which preceded logical verbal responses to a conventional number-conservation task.…

  2. ON AN EXPONENTIAL INEQUALITY AND A STRONG LAW OF LARGE NUMBERS FOR MONOTONE MEASURES

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mesiar, Radko

    2014-01-01

    Roč. 50, č. 5 (2014), s. 804-813 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Choquet expectation * a strong law of large numbers * exponential inequality * monotone probability Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/E/mesiar-0438052.pdf

  3. Strong Laws of Large Numbers for Arrays of Rowwise NA and LNQD Random Variables

    Directory of Open Access Journals (Sweden)

    Jiangfeng Wang

    2011-01-01

Some strong laws of large numbers and strong convergence properties for arrays of rowwise negatively associated and linearly negative quadrant dependent random variables are obtained. The results obtained not only generalize the result of Hu and Taylor to negatively associated and linearly negative quadrant dependent random variables, but also improve it.
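As a baseline for the generalizations above, the classical i.i.d. strong law can be illustrated numerically: the sample mean of uniform draws settles on the expectation 1/2 as the sample grows. The results in the record extend this kind of almost-sure convergence to rowwise negatively associated and LNQD arrays, which this i.i.d. sketch does not cover.

```python
import random

random.seed(42)           # fixed seed so the run is reproducible
expected = 0.5            # mean of Uniform(0, 1)
n = 100_000
sample_mean = sum(random.random() for _ in range(n)) / n
print(abs(sample_mean - expected) < 0.01)  # True
```

With n = 100 000 the standard deviation of the sample mean is about 0.0009, so a deviation above 0.01 would be a many-sigma event.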

  4. The lore of large numbers: some historical background to the anthropic principle

    International Nuclear Information System (INIS)

    Barrow, J.D.

    1981-01-01

    A description is given of how the study of numerological coincidences in physics and cosmology led first to the Large Numbers Hypothesis of Dirac and then to the suggestion of the Anthropic Principle in a variety of forms. The early history of 'coincidences' is discussed together with the work of Weyl, Eddington and Dirac. (author)

  5. The three-large-primes variant of the number field sieve

    NARCIS (Netherlands)

    S.H. Cavallar

    2002-01-01

The Number Field Sieve (NFS) is the asymptotically fastest known factoring algorithm for large integers. This method was proposed by John Pollard in 1988. Since then several variants have been implemented with the objective of improving the siever, which is the most time-consuming part of

  6. SECRET SHARING SCHEMES WITH STRONG MULTIPLICATION AND A LARGE NUMBER OF PLAYERS FROM TORIC VARIETIES

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    2017-01-01

    This article consider Massey's construction for constructing linear secret sharing schemes from toric varieties over a finite field $\\Fq$ with $q$ elements. The number of players can be as large as $(q-1)^r-1$ for $r\\geq 1$. The schemes have strong multiplication, such schemes can be utilized in ...

  7. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number

    Science.gov (United States)

    Klewicki, J. C.; Chini, G. P.; Gibson, J. F.

    2017-01-01

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier–Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167585

  8. Optimal number of coarse-grained sites in different components of large biomolecular complexes.

    Science.gov (United States)

    Sinitskiy, Anton V; Saunders, Marissa G; Voth, Gregory A

    2012-07-26

    The computational study of large biomolecular complexes (molecular machines, cytoskeletal filaments, etc.) is a formidable challenge facing computational biophysics and biology. To achieve biologically relevant length and time scales, coarse-grained (CG) models of such complexes usually must be built and employed. One of the important early stages in this approach is to determine an optimal number of CG sites in different constituents of a complex. This work presents a systematic approach to this problem. First, a universal scaling law is derived and numerically corroborated for the intensity of the intrasite (intradomain) thermal fluctuations as a function of the number of CG sites. Second, this result is used for derivation of the criterion for the optimal number of CG sites in different parts of a large multibiomolecule complex. In the zeroth-order approximation, this approach validates the empirical rule of taking one CG site per fixed number of atoms or residues in each biomolecule, previously widely used for smaller systems (e.g., individual biomolecules). The first-order corrections to this rule are derived and numerically checked by the case studies of the Escherichia coli ribosome and Arp2/3 actin filament junction. In different ribosomal proteins, the optimal number of amino acids per CG site is shown to differ by a factor of 3.5, and an even wider spread may exist in other large biomolecular complexes. Therefore, the method proposed in this paper is valuable for the optimal construction of CG models of such complexes.

  9. Calculation of large Reynolds number two-dimensional flow using discrete vortices with random walk

    International Nuclear Information System (INIS)

    Milinazzo, F.; Saffman, P.G.

    1977-01-01

The numerical calculation of two-dimensional rotational flow at large Reynolds number is considered. The method of replacing a continuous distribution of vorticity by a finite number, N, of discrete vortices is examined, where the vortices move under their mutually induced velocities plus a random component to simulate the effects of viscosity. The accuracy of the method is studied by comparison with the exact solution for the decay of a circular vortex. It is found, and analytical arguments are produced in support, that the quantitative error is significant unless N is large compared with a characteristic Reynolds number. The mutually induced velocities are calculated both by direct summation and by the "cloud-in-cell" technique. The latter method is found to produce comparable error and to be much faster.
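The random-walk component of the method is easy to isolate: each discrete vortex receives a Gaussian displacement of variance 2νΔt per time step, so the ensemble variance grows like 2νt, mimicking viscous diffusion. A one-dimensional stdlib-Python sketch (particle count, ν, and Δt are arbitrary illustrative values, not those of the paper):

```python
import random

def random_walk_variance(n_particles=5000, nu=0.01, dt=0.01, steps=100, seed=3):
    """Each particle (discrete vortex) receives a Gaussian random displacement
    of variance 2*nu*dt per step; the ensemble variance of positions should
    then grow like 2*nu*t, reproducing viscous diffusion."""
    rng = random.Random(seed)
    sigma = (2 * nu * dt) ** 0.5
    xs = [0.0] * n_particles
    for _ in range(steps):
        xs = [x + rng.gauss(0, sigma) for x in xs]
    mean = sum(xs) / n_particles
    return sum((x - mean) ** 2 for x in xs) / n_particles

measured = random_walk_variance()
theory = 2 * 0.01 * (0.01 * 100)   # 2 * nu * t, with t = steps * dt
print(measured, theory)
```

The statistical error of the measured variance scales like 1/sqrt(N), which is one face of the paper's observation that N must be large for quantitative accuracy.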

  10. Break down of the law of large numbers in Josephson junction series arrays

    International Nuclear Information System (INIS)

    Dominguez, D.; Cerdeira, H.A.

    1995-01-01

    We study underdamped Josephson junction series arrays that are globally coupled through a resistive shunting load and driven by an rf bias current. We find that they can be an experimental realization of many phenomena currently studied in globally coupled logistic maps. We find coherent, ordered, partially ordered and turbulent phases in the IV characteristics of the array. The ordered phase corresponds to giant Shapiro steps. In the turbulent phase there is a saturation of the broad band noise for a large number of junctions. This corresponds to a break down of the law of large numbers as seen in globally coupled maps. Coexisting with this, we find an emergence of novel pseudo-steps in the IV characteristics. This effect can be experimentally distinguished from the true Shapiro steps, which do not have broad band noise emission. (author). 21 refs, 5 figs

  11. Breakdown of the law of large numbers in Josephson junction series arrays

    International Nuclear Information System (INIS)

    Dominguez, D.; Cerdeira, H.A.

    1994-01-01

    We study underdamped Josephson junction series arrays that are globally coupled through a resistive shunting load and driven by an rf bias current. We find that they can be an experimental realization of many phenomena currently studied in globally coupled logistic maps. We find coherent, ordered, partially ordered and turbulent phases in the IV characteristics of the array. The ordered phase corresponds to giant Shapiro steps. In the turbulent phase there is a saturation of the broad band noise for a large number of junctions. This corresponds to a break down of the law of large numbers as seen in the globally coupled maps. Coexisting with this, we find an emergence of novel pseudo-steps in the IV characteristics. This effect can be experimentally distinguished from the Shapiro steps, which do not have broad band noise emission. (author). 21 refs, 5 figs

  12. The holographic dual of a Riemann problem in a large number of dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, Christopher P.; Spillane, Michael [C.N. Yang Institute for Theoretical Physics, Department of Physics and Astronomy,Stony Brook University, Stony Brook, NY 11794 (United States); Yarom, Amos [Department of Physics, Technion,Haifa 32000 (Israel)

    2016-08-22

We study properties of a non-equilibrium steady state generated when two heat baths are initially in contact with one another. The dynamics of the system we study are governed by holographic duality in a large number of dimensions. We discuss the “phase diagram” associated with the steady state, the dual, dynamical, black hole description of this problem, and its relation to the fluid/gravity correspondence.

  13. Phases of a stack of membranes in a large number of dimensions of configuration space

    Science.gov (United States)

    Borelli, M. E.; Kleinert, H.

    2001-05-01

    The phase diagram of a stack of tensionless membranes with nonlinear curvature energy and vertical harmonic interaction is calculated exactly in a large number of dimensions of configuration space. At low temperatures, the system forms a lamellar phase with spontaneously broken translational symmetry in the vertical direction. At a critical temperature, the stack disorders vertically in a meltinglike transition. The critical temperature is determined as a function of the interlayer separation l.

  14. Early stage animal hoarders: are these owners of large numbers of adequately cared for cats?

    OpenAIRE

    Ramos, D.; da Cruz, N. O.; Ellis, Sarah; Hernandez, J. A. E.; Reche-Junior, A.

    2013-01-01

    Animal hoarding is a spectrum-based condition in which hoarders are often reported to have had normal and appropriate pet-keeping habits in childhood and early adulthood. Historically, research has focused largely on well established clinical animal hoarders with little work targeted towards the onset and development of animal hoarding. This study investigated whether a Brazilian population of owners of what might typically be considered an excessive number (20 or more) of cats were more like...

  15. Loss of locality in gravitational correlators with a large number of insertions

    Science.gov (United States)

    Ghosh, Sudip; Raju, Suvrat

    2017-09-01

We review lessons from the AdS/CFT correspondence that indicate that the emergence of locality in quantum gravity is contingent upon considering observables with a small number of insertions. Correlation functions, where the number of insertions scales with a power of the central charge of the CFT, are sensitive to nonlocal effects in the bulk theory, which arise from a combination of the effects of the bulk Gauss law and a breakdown of perturbation theory. To examine whether a similar effect occurs in flat space, we consider the scattering of massless particles in the bosonic string and the superstring in the limit where the number of external particles, n, becomes very large. We use estimates of the volume of the Weil-Petersson moduli space of punctured Riemann surfaces to argue that string amplitudes grow factorially in this limit. We verify this factorial behavior through an extensive numerical analysis of string amplitudes at large n. Our numerical calculations rely on the observation that, in the large n limit, the string scattering amplitude localizes on the Gross-Mende saddle points, even though individual particle energies are small. This factorial growth implies the breakdown of string perturbation theory for n ∼ (M_pl/E)^(d−2) in d dimensions, where E is the typical individual particle energy. We explore the implications of this breakdown for the black hole information paradox. We show that the loss of locality suggested by this breakdown is precisely sufficient to resolve the cloning and strong subadditivity paradoxes.

  16. A methodology for the synthesis of heat exchanger networks having large numbers of uncertain parameters

    International Nuclear Information System (INIS)

    Novak Pintarič, Zorka; Kravanja, Zdravko

    2015-01-01

This paper presents a robust computational methodology for the synthesis and design of flexible HENs (Heat Exchanger Networks) having large numbers of uncertain parameters. This methodology combines several heuristic methods which progressively lead to a flexible HEN design at a specific level of confidence. During the first step, a HEN topology is generated under nominal conditions, followed by determining those points critical for flexibility. A significantly reduced multi-scenario model for flexible HEN design is then formulated at the nominal point with flexibility constraints at the critical points. The optimal design obtained is tested by stochastic Monte Carlo optimization and the flexibility index, through solving one-scenario problems within a loop. The methodology is novel in the enormous reduction it achieves in the number of scenarios in HEN design problems, and hence in computational effort. Despite several simplifications, the capability of designing flexible HENs with large numbers of uncertain parameters, which are typical throughout industry, is not compromised. An illustrative case study is presented for flexible HEN synthesis comprising 42 uncertain parameters. - Highlights: • Methodology for HEN (Heat Exchanger Network) design under uncertainty is presented. • The main benefit is solving HENs having large numbers of uncertain parameters. • A drastically reduced multi-scenario HEN design problem is formulated through several steps. • Flexibility of the HEN is guaranteed at a specific level of confidence.

  17. A full picture of large lepton number asymmetries of the Universe

    Energy Technology Data Exchange (ETDEWEB)

    Barenboim, Gabriela [Departament de Física Teòrica and IFIC, Universitat de València-CSIC, C/ Dr. Moliner, 50, Burjassot, E-46100 Spain (Spain); Park, Wan-Il, E-mail: Gabriela.Barenboim@uv.es, E-mail: wipark@jbnu.ac.kr [Department of Science Education (Physics), Chonbuk National University, 567 Baekje-daero, Jeonju, 561-756 (Korea, Republic of)

    2017-04-01

A large lepton number asymmetry of O(0.1−1) in the present Universe might not only be allowed but also necessary for consistency among cosmological data. We show that, if a sizeable lepton number asymmetry were produced before the electroweak phase transition, the requirement of not producing too much baryon number asymmetry through sphaleron processes forces the high-scale lepton number asymmetry to be larger than about 0.3. Therefore a mild entropy release causing an O(10−100) suppression of the pre-existing particle density should take place when the background temperature of the Universe is around T = O(10^−2−10^2) GeV, for a large but experimentally consistent asymmetry to be present today. We also show that such a mild entropy production can be obtained by the late-time decays of the saxion, constraining the parameters of the Peccei-Quinn sector, such as the mass and the vacuum expectation value of the saxion field, to be m_φ ≳ O(10) TeV and φ_0 ≳ O(10^14) GeV, respectively.

  18. Monitoring a large number of pesticides and transformation products in water samples from Spain and Italy.

    Science.gov (United States)

    Rousis, Nikolaos I; Bade, Richard; Bijlsma, Lubertus; Zuccato, Ettore; Sancho, Juan V; Hernandez, Felix; Castiglioni, Sara

    2017-07-01

    Assessing the presence of pesticides in environmental waters is particularly challenging because of the huge number of substances used which may end up in the environment. Furthermore, the occurrence of pesticide transformation products (TPs) and/or metabolites makes this task even harder. Most studies dealing with the determination of pesticides in water include only a small number of analytes and in many cases no TPs. The present study applied a screening method for the determination of a large number of pesticides and TPs in wastewater (WW) and surface water (SW) from Spain and Italy. Liquid chromatography coupled to high-resolution mass spectrometry (HRMS) was used to screen a database of 450 pesticides and TPs. Detection and identification were based on specific criteria, i.e. mass accuracy, fragmentation, and comparison of retention times when reference standards were available, or a retention time prediction model when standards were not available. Seventeen pesticides and TPs from different classes (fungicides, herbicides and insecticides) were found in WW in Italy and Spain, and twelve in SW. Generally, in both countries more compounds were detected in effluent WW than in influent WW, and in SW than WW. This might be due to the analytical sensitivity in the different matrices, but also to the presence of multiple sources of pollution. HRMS proved a good screening tool to determine a large number of substances in water and identify some priority compounds for further quantitative analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Experimental results surprise quantum theory

    International Nuclear Information System (INIS)

    White, C.

    1986-01-01

    Interest in results from Darmstadt that positron-electron pairs are created in nuclei with high atomic numbers (in the Z range from 180-188) lies in the occurrence of a quantized positron kinetic energy peak at 300. The results lend substance to the contention of Erich Bagge that the traditionally accepted symmetries in positron-electron emission do not exist and, therefore, there is no need to posit the existence of the neutrino. The search is on for the decay of a previously unknown boson to account for the findings, which also points to the need for a major revision in quantum theory. 1 figure

  20. Impact factors for Reggeon-gluon transition in N=4 SYM with large number of colours

    Energy Technology Data Exchange (ETDEWEB)

    Fadin, V.S., E-mail: fadin@inp.nsk.su [Budker Institute of Nuclear Physics of SD RAS, 630090 Novosibirsk (Russian Federation); Novosibirsk State University, 630090 Novosibirsk (Russian Federation); Fiore, R., E-mail: roberto.fiore@cs.infn.it [Dipartimento di Fisica, Università della Calabria, and Istituto Nazionale di Fisica Nucleare, Gruppo collegato di Cosenza, Arcavacata di Rende, I-87036 Cosenza (Italy)

    2014-06-27

We calculate impact factors for Reggeon-gluon transition in supersymmetric Yang–Mills theory with four supercharges at large number of colours N_c. In the next-to-leading order, impact factors are not uniquely defined and must accord with BFKL kernels and energy scales. We obtain the impact factor corresponding to the kernel and the energy evolution parameter, which is invariant under Möbius transformation in momentum space, and show that it is also Möbius invariant up to terms taken into account in the BDS ansatz.

  1. Do neutron stars disprove multiplicative creation in Dirac's large number hypothesis

    International Nuclear Information System (INIS)

    Qadir, A.; Mufti, A.A.

    1980-07-01

    Dirac's cosmology, based on his large number hypothesis, took the gravitational coupling to be decreasing with time and matter to be created as the square of time. Since the effects predicted by Dirac's theory are very small, it is difficult to find a ''clean'' test for it. Here we show that the observed radiation from pulsars is inconsistent with Dirac's multiplicative creation model, in which the matter created is proportional to the density of matter already present. Of course, this discussion makes no comment on the ''additive creation'' model, or on the revised version of Dirac's theory. (author)

  2. Law of large numbers and central limit theorem for randomly forced PDE's

    CERN Document Server

    Shirikyan, A

    2004-01-01

    We consider a class of dissipative PDE's perturbed by an external random force. Under the condition that the distribution of perturbation is sufficiently non-degenerate, a strong law of large numbers (SLLN) and a central limit theorem (CLT) for solutions are established and the corresponding rates of convergence are estimated. It is also shown that the estimates obtained are close to being optimal. The proofs are based on the property of exponential mixing for the problem in question and some abstract SLLN and CLT for mixing-type Markov processes.

  3. On the Convergence and Law of Large Numbers for the Non-Euclidean Lp -Means

    Directory of Open Access Journals (Sweden)

    George Livadiotis

    2017-05-01

Full Text Available This paper describes and proves two important theorems that compose the Law of Large Numbers for the non-Euclidean L_p-means, known to be true for the Euclidean L_2-means: Let the L_p-mean estimator be the specific functional that estimates the L_p-mean of N independent and identically distributed random variables; then, (i) the expectation value of the L_p-mean estimator equals the mean of the distributions of the random variables; and (ii) the limit N → ∞ of the L_p-mean estimator also equals the mean of the distributions.
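Taking the L_p-mean estimator to be the minimizer of Σ|x_i − μ|^p (a convex objective for p ≥ 1), theorem (ii) can be checked numerically; a sketch under that assumption, with an illustrative symmetric distribution of mean 0.5 and p = 4:

```python
import random

def lp_mean(xs, p):
    """L_p-mean of a sample: the mu minimizing sum(|x - mu|**p), which is
    convex for p >= 1, found here by ternary search on [min(xs), max(xs)]."""
    lo, hi = min(xs), max(xs)
    f = lambda mu: sum(abs(x - mu) ** p for x in xs)
    for _ in range(100):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

random.seed(1)
# Uniform(0, 1) is symmetric about its mean 0.5, so the L_p-mean estimator
# should approach 0.5 as N grows, illustrating theorem (ii) for p = 4.
for n in (100, 5000):
    xs = [random.random() for _ in range(n)]
    print(n, round(lp_mean(xs, 4), 3))
```

For asymmetric distributions the finite-p minimizer differs from the median and the midrange; the theorems above concern its expectation and large-N limit.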

  4. The role of surprise in satisfaction judgements

    NARCIS (Netherlands)

    Vanhamme, J.; Snelders, H.M.J.J.

    2001-01-01

    Empirical findings suggest that surprise plays an important role in consumer satisfaction, but there is a lack of theory to explain why this is so. The present paper provides explanations for the process through which positive (negative) surprise might enhance (reduce) consumer satisfaction. First,

  5. Superposition of elliptic functions as solutions for a large number of nonlinear equations

    International Nuclear Information System (INIS)

    Khare, Avinash; Saxena, Avadh

    2014-01-01

For a large number of nonlinear equations, both discrete and continuum, we demonstrate a kind of linear superposition. We show that whenever a nonlinear equation admits solutions in terms of both Jacobi elliptic functions cn(x, m) and dn(x, m) with modulus m, then it also admits solutions in terms of their sum as well as difference. We have checked this in the case of several nonlinear equations such as the nonlinear Schrödinger equation, MKdV, a mixed KdV-MKdV system, a mixed quadratic-cubic nonlinear Schrödinger equation, the Ablowitz-Ladik equation, the saturable nonlinear Schrödinger equation, λφ⁴, the discrete MKdV as well as for several coupled field equations. Further, for a large number of nonlinear equations, we show that whenever a nonlinear equation admits a periodic solution in terms of dn²(x, m), it also admits solutions in terms of dn²(x, m) ± √m cn(x, m) dn(x, m), even though cn(x, m)dn(x, m) is not a solution of these nonlinear equations. Finally, we also obtain superposed solutions of various forms for several coupled nonlinear equations

  6. Law of Large Numbers: the Theory, Applications and Technology-based Education.

    Science.gov (United States)

    Dinov, Ivo D; Christou, Nicolas; Gould, Robert

    2009-03-01

    Modern approaches for technology-based blended education utilize a variety of recently developed novel pedagogical, computational and network resources. Such attempts employ technology to deliver integrated, dynamically-linked, interactive-content and heterogeneous learning environments, which may improve student comprehension and information retention. In this paper, we describe one such innovative effort of using technological tools to expose students in probability and statistics courses to the theory, practice and usability of the Law of Large Numbers (LLN). We base our approach on integrating pedagogical instruments with the computational libraries developed by the Statistics Online Computational Resource (www.SOCR.ucla.edu). To achieve this merger we designed a new interactive Java applet and a corresponding demonstration activity that illustrate the concept and the applications of the LLN. The LLN applet and activity have common goals - to provide graphical representation of the LLN principle, build lasting student intuition and present the common misconceptions about the law of large numbers. Both the SOCR LLN applet and activity are freely available online to the community to test, validate and extend (Applet: http://socr.ucla.edu/htmls/exp/Coin_Toss_LLN_Experiment.html, and Activity: http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_LLN).
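The coin-toss experiment in the linked applet (sketched here in stdlib Python rather than the applet's Java) shows the running proportion of heads settling near 0.5, which is the LLN intuition the activity aims to build:

```python
import random

random.seed(42)

def running_proportion(n_tosses):
    """Proportion of heads after each of n_tosses fair-coin flips."""
    heads = 0
    props = []
    for i in range(1, n_tosses + 1):
        heads += random.random() < 0.5   # one fair flip
        props.append(heads / i)
    return props

props = running_proportion(100000)
# The LLN says the running proportion converges to 0.5; early values wander,
# late values settle.
print(props[9], props[99], props[-1])
```

A common misconception the activity targets is that the *count* of heads minus tails also settles down; it does not, only the proportion does.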

  7. Wall modeled large eddy simulations of complex high Reynolds number flows with synthetic inlet turbulence

    International Nuclear Information System (INIS)

    Patil, Sunil; Tafti, Danesh

    2012-01-01

    Highlights: ► Large eddy simulation. ► Wall layer modeling. ► Synthetic inlet turbulence. ► Swirl flows. - Abstract: Large eddy simulations of complex high Reynolds number flows are carried out with the near wall region being modeled with a zonal two layer model. A novel formulation for solving the turbulent boundary layer equation for the effective tangential velocity in a generalized co-ordinate system is presented and applied in the near wall zonal treatment. This formulation reduces the computational time in the inner layer significantly compared to the conventional two layer formulations present in the literature and is most suitable for complex geometries involving body fitted structured and unstructured meshes. The cost effectiveness and accuracy of the proposed wall model, used with the synthetic eddy method (SEM) to generate inlet turbulence, is investigated in turbulent channel flow, flow over a backward facing step, and confined swirling flows at moderately high Reynolds numbers. Predictions are compared with available DNS, experimental LDV data, as well as wall resolved LES. In all cases, there is at least an order of magnitude reduction in computational cost with no significant loss in prediction accuracy.

  8. Conformal window in QCD for large numbers of colors and flavors

    International Nuclear Information System (INIS)

    Zhitnitsky, Ariel R.

    2014-01-01

We conjecture that the phase transitions in QCD at large number of colors N ≫ 1 are triggered by a drastic change in the instanton density. As a result, all physical observables also experience a sharp modification in their θ behavior. This conjecture is motivated by the holographic model of QCD, where the confinement–deconfinement phase transition indeed happens precisely at the temperature T = T_c where the θ-dependence of the vacuum energy experiences a sudden change in behavior: from N² cos(θ/N) at T < T_c to cos θ exp(−N) at T > T_c. This conjecture is also supported by recent lattice studies. We employ this conjecture to study a possible phase transition as a function of κ ≡ N_f/N from the confinement to the conformal phase in the Veneziano limit N_f ∼ N, when the numbers of flavors and colors are large but the ratio κ is finite. Technically, we consider an operator which gets its expectation value solely from non-perturbative instanton effects. When κ exceeds some critical value, κ > κ_c, the integral over instanton size is dominated by small-size instantons, making the instanton computations reliable, with expected exp(−N) behavior. However, when κ < κ_c, the integral over instanton size is dominated by large-size instantons, and the instanton expansion breaks down. This regime with κ < κ_c corresponds to the confinement phase. We also compute the variation of the critical κ_c(T, μ) when the temperature and chemical potential T, μ ≪ Λ_QCD slightly vary. We also discuss the scaling (x_i − x_j)^(−γ_det) in the conformal phase

  9. Vicious random walkers in the limit of a large number of walkers

    International Nuclear Information System (INIS)

    Forrester, P.J.

    1989-01-01

The vicious random walker problem on a line is studied in the limit of a large number of walkers. The multidimensional integral representing the probability that the p walkers will survive a time t (denoted P_t(p)) is shown to be analogous to the partition function of a particular one-component Coulomb gas. By assuming the existence of the thermodynamic limit for the Coulomb gas, one can deduce asymptotic formulas for P_t(p) in the large-p, large-t limit. A straightforward analysis gives rigorous asymptotic formulas for the probability that after a time t the walkers are in their initial configuration (this event is termed a reunion). Consequently, asymptotic formulas for the conditional probability of a reunion, given that all walkers survive, are derived. Also, an asymptotic formula for the conditional probability density that any walker will arrive at a particular point in time t, given that all p walkers survive, is calculated in the limit t ≫ p
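For small p and t, the survival probability can be estimated directly by Monte Carlo (an illustrative sketch; the start positions, ±1 step rule, and trial count are arbitrary choices, and none of the asymptotic formulas are implemented):

```python
import random

def survival_probability(p, t, trials=10000, seed=7):
    """Monte Carlo estimate of the probability that p 'vicious' walkers,
    started at 0, 2, 4, ... and taking independent +/-1 steps, never meet
    during t steps (walkers annihilate on contact)."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        pos = list(range(0, 2 * p, 2))
        ok = True
        for _ in range(t):
            pos = [x + rng.choice((-1, 1)) for x in pos]
            # with unit steps and even initial gaps, a meeting shows up as
            # two adjacent walkers landing on the same site
            if any(pos[i] >= pos[i + 1] for i in range(p - 1)):
                ok = False
                break
        survived += ok
    return survived / trials

prob = survival_probability(p=3, t=20)
print(prob)
```

The asymptotic results above describe exactly how such estimates decay as p and t grow, where direct simulation becomes hopeless.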

  10. Automated flow cytometric analysis across large numbers of samples and cell types.

    Science.gov (United States)

    Chen, Xiaoyi; Hasan, Milena; Libri, Valentina; Urrutia, Alejandra; Beitz, Benoît; Rouilly, Vincent; Duffy, Darragh; Patin, Étienne; Chalmond, Bernard; Rogge, Lars; Quintana-Murci, Lluis; Albert, Matthew L; Schwikowski, Benno

    2015-04-01

    Multi-parametric flow cytometry is a key technology for characterization of immune cell phenotypes. However, robust high-dimensional post-analytic strategies for automated data analysis in large numbers of donors are still lacking. Here, we report a computational pipeline, called FlowGM, which minimizes operator input, is insensitive to compensation settings, and can be adapted to different analytic panels. A Gaussian Mixture Model (GMM)-based approach was utilized for initial clustering, with the number of clusters determined using Bayesian Information Criterion. Meta-clustering in a reference donor permitted automated identification of 24 cell types across four panels. Cluster labels were integrated into FCS files, thus permitting comparisons to manual gating. Cell numbers and coefficient of variation (CV) were similar between FlowGM and conventional gating for lymphocyte populations, but notably FlowGM provided improved discrimination of "hard-to-gate" monocyte and dendritic cell (DC) subsets. FlowGM thus provides rapid high-dimensional analysis of cell phenotypes and is amenable to cohort studies. Copyright © 2015. Published by Elsevier Inc.
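FlowGM itself is a full multi-panel pipeline; the GMM/EM core that such pipelines build on can be sketched in one dimension with two components (all names and parameter choices here are illustrative, and the BIC-based selection of the number of clusters is omitted):

```python
import math
import random

def em_gmm_1d(data, n_iter=200):
    """Fit a two-component 1-D Gaussian mixture by EM: alternate computing
    responsibilities (E-step) and re-estimating weights, means, and
    variances (M-step)."""
    mu = [min(data), max(data)]          # crude initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            p = []
            for k in range(2):
                norm = math.sqrt(2 * math.pi * var[k])
                p.append(w[k] / norm * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])))
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: responsibility-weighted updates
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)   # guard against variance collapse
    return w, mu, var

random.seed(0)
data = [random.gauss(0, 1) for _ in range(300)] + \
       [random.gauss(6, 1) for _ in range(300)]
w, mu, var = em_gmm_1d(data)
print([round(m, 1) for m in sorted(mu)])
```

Meta-clustering across donors, as in FlowGM, then amounts to matching fitted components between samples rather than gating each sample by hand.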

  11. Climate Change as a Predictable Surprise

    International Nuclear Information System (INIS)

    Bazerman, M.H.

    2006-01-01

    In this article, I analyze climate change as a 'predictable surprise', an event that leads an organization or nation to react with surprise, despite the fact that the information necessary to anticipate the event and its consequences was available (Bazerman and Watkins, 2004). I then assess the cognitive, organizational, and political reasons why society fails to implement wise strategies to prevent predictable surprises generally and climate change specifically. Finally, I conclude with an outline of a set of response strategies to overcome barriers to change

  12. Properties of sound attenuation around a two-dimensional underwater vehicle with a large cavitation number

    International Nuclear Information System (INIS)

    Ye Peng-Cheng; Pan Guang

    2015-01-01

    Due to the high speed of underwater vehicles, cavitation is generated inevitably along with the sound attenuation when the sound signal traverses through the cavity region around the underwater vehicle. The linear wave propagation is studied to obtain the influence of bubbly liquid on the acoustic wave propagation in the cavity region. The sound attenuation coefficient and the sound speed formula of the bubbly liquid are presented. Based on the sound attenuation coefficients with various vapor volume fractions, the attenuation of sound intensity is calculated under large cavitation number conditions. The result shows that the sound intensity attenuation is fairly small in a certain condition. Consequently, the intensity attenuation can be neglected in engineering. (paper)

  13. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    Science.gov (United States)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

Through parallelization, field programmable gate array (FPGA) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGA presents both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
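The additive lagged Fibonacci recurrence behind the Parallel ALFG, s_n = (s_{n−24} + s_{n−55}) mod 2^32, is easy to state in software form; a sequential sketch with those common lags and an arbitrary LCG-based seeding routine (the paper's FPGA parallelization scheme is not reproduced here):

```python
from collections import deque

class ALFG:
    """Additive lagged Fibonacci generator s[n] = (s[n-24] + s[n-55]) mod 2^32.
    The lags (24, 55) and the seeding routine are illustrative choices."""
    def __init__(self, seed=1):
        # Fill the 55-word lag table with a simple LCG; the seeding method is
        # arbitrary as long as at least one table entry is odd.
        state = seed
        table = []
        for _ in range(55):
            state = (1664525 * state + 1013904223) % 2**32
            table.append(state)
        table[0] |= 1                  # guarantee an odd entry
        self.s = deque(table, maxlen=55)

    def next(self):
        x = (self.s[-24] + self.s[-55]) % 2**32
        self.s.append(x)               # maxlen discards the oldest word
        return x

rng = ALFG(seed=2018)
sample = [rng.next() for _ in range(5)]
print(sample)
```

For LPMC use, each FPGA pipeline would hold its own lag table; ensuring the resulting streams are independent is exactly the problem the parallel variant addresses.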

  14. Large-Eddy Simulation of a High Reynolds Number Flow Around a Cylinder Including Aeroacoustic Predictions

    Science.gov (United States)

    Spyropoulos, Evangelos T.; Holmes, Bayard S.

    1997-01-01

The dynamic subgrid-scale model is employed in large-eddy simulations of flow over a cylinder at a Reynolds number, based on the diameter of the cylinder, of 90,000. The Centric SPECTRUM(trademark) finite element solver is used for the analysis. The far-field sound pressure is calculated from Lighthill-Curle's equation using the computed fluctuating pressure at the surface of the cylinder. The sound pressure level at a location 35 diameters away from the cylinder and at an angle of 90 deg with respect to the wake's downstream axis was found to have a peak value of approximately 110 dB. Slightly smaller peak values were predicted at the 60 deg and 120 deg locations. A grid refinement study suggests that the dynamic model demands mesh refinement beyond that used here.

  15. System for high-voltage control detectors with large number photomultipliers

    International Nuclear Information System (INIS)

    Donskov, S.V.; Kachanov, V.A.; Mikhajlov, Yu.V.

    1985-01-01

    A simple and inexpensive on-line system for high-voltage control, designed for detectors with a large number of photomultipliers, is developed and manufactured. It was developed for the GAMC-type hodoscopic electromagnetic calorimeters, comprising up to 4 thousand photomultipliers. High-voltage variation is performed by a high-speed potentiometer rotated by a microengine. Block diagrams of the computer control electronics are presented. The high-voltage control system has been used for five years in the IHEP and CERN accelerator experiments. Operating experience has shown that it is quite simple and convenient in operation. With about 6 thousand controlled channels in both experiments, no potentiometer or microengine failures were observed

  16. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    International Nuclear Information System (INIS)

    Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel; Cuevas, Sergio; Ramos, Eduardo

    2014-01-01

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octupoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, “The diffusive strip method for scalar mixing in two-dimensions,” J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows prediction of the scalar PDFs in agreement with numerical and experimental results. This model also indicates that the scalar PDFs are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors

  17. Decision process in MCDM with large number of criteria and heterogeneous risk preferences

    Directory of Open Access Journals (Sweden)

    Jian Liu

    Full Text Available A new decision process is proposed to address the challenges posed by a large number of criteria in multi-criteria decision making (MCDM) problems and by decision makers with heterogeneous risk preferences. First, from the perspective of objective data, the effective criteria are extracted based on the similarity relations between criterion values, and the criteria are weighted, respectively. Second, the corresponding theoretical models of risk-preference expectations are built, based on the possibility and similarity between criterion values, to resolve the problem of different interval numbers having the same expectation. Then, the risk preferences (risk-seeking, risk-neutral and risk-averse) are embedded in the decision process. Next, the optimal decision object is selected according to the risk preferences of the decision makers based on the corresponding theoretical model. Finally, a new information-aggregation algorithm is proposed, based on fairness maximization of the decision results for the group decision, considering the coexistence of decision makers with heterogeneous risk preferences. The scientific rationality of this new method is verified through the analysis of a real case. Keywords: Heterogeneous, Risk preferences, Fairness, Decision process, Group decision

  18. New approaches to phylogenetic tree search and their application to large numbers of protein alignments.

    Science.gov (United States)

    Whelan, Simon

    2007-10-01

    Phylogenetic tree estimation plays a critical role in a wide variety of molecular studies, including molecular systematics, phylogenetics, and comparative genomics. Finding the optimal tree relating a set of sequences using score-based (optimality criterion) methods, such as maximum likelihood and maximum parsimony, may require all possible trees to be considered, which is not feasible even for modest numbers of sequences. In practice, trees are estimated using heuristics that represent a trade-off between topological accuracy and speed. I present a series of novel algorithms suitable for score-based phylogenetic tree reconstruction that demonstrably improve the accuracy of tree estimates while maintaining high computational speeds. The heuristics function by allowing the efficient exploration of large numbers of trees through novel hill-climbing and resampling strategies. These heuristics, and other computational approximations, are implemented for maximum likelihood estimation of trees in the program Leaphy, and its performance is compared to other popular phylogenetic programs. Trees are estimated from 4059 different protein alignments using a selection of phylogenetic programs and the likelihoods of the tree estimates are compared. Trees estimated using Leaphy are found to have equal to or better likelihoods than trees estimated using other phylogenetic programs in 4004 (98.6%) families and provide a unique best tree that no other program found in 1102 (27.1%) families. The improvement is particularly marked for larger families (80 to 100 sequences), where Leaphy finds a unique best tree in 81.7% of families.

  19. A toolkit for detecting technical surprise.

    Energy Technology Data Exchange (ETDEWEB)

    Trahan, Michael Wayne; Foehse, Mark C.

    2010-10-01

    The detection of a scientific or technological surprise within a secretive country or institute is very difficult. The ability to detect such surprises would allow analysts to identify the capabilities that could be a military or economic threat to national security. Sandia's current approach utilizing ThreatView has been successful in revealing potential technological surprises. However, as data sets become larger, it becomes critical to use algorithms as filters along with the visualization environments. Our two-year LDRD had two primary goals. First, we developed a tool, a Self-Organizing Map (SOM), to extend ThreatView and improve our understanding of the issues involved in working with textual data sets. Second, we developed a toolkit for detecting indicators of technical surprise in textual data sets. Our toolkit has been successfully used to perform technology assessments for the Science & Technology Intelligence (S&TI) program.

  20. An efficient community detection algorithm using greedy surprise maximization

    International Nuclear Information System (INIS)

    Jiang, Yawen; Jia, Caiyan; Yu, Jian

    2014-01-01

    Community detection is an important and crucial problem in complex network analysis. Although classical modularity function optimization approaches are widely used for identifying communities, the modularity function (Q) suffers from its resolution limit. Recently, the surprise function (S) was experimentally proved to be better than the Q function. However, up until now, there has been no algorithm available to perform searches to directly determine the maximal surprise values. In this paper, considering the superiority of the S function over the Q function, we propose an efficient community detection algorithm called AGSO (algorithm based on greedy surprise optimization) and its improved version FAGSO (fast-AGSO), which are based on greedy surprise optimization and do not suffer from the resolution limit. In addition, (F)AGSO does not need the number of communities K to be specified in advance. Tests on experimental networks show that (F)AGSO is able to detect optimal partitions in both simple and even more complex networks. Moreover, algorithms based on surprise maximization perform better than those algorithms based on modularity maximization, including Blondel–Guillaume–Lambiotte–Lefebvre (BGLL), Clauset–Newman–Moore (CNM) and the other state-of-the-art algorithms such as Infomap, order statistics local optimization method (OSLOM) and label propagation algorithm (LPA). (paper)
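The surprise function S is based on the cumulative hypergeometric probability of drawing at least the observed number of intra-community links; a common asymptotic form expresses it as m times the Kullback-Leibler divergence between the observed and expected intra-community link fractions. The sketch below computes that asymptotic surprise for a given partition — an illustrative scoring routine, not the (F)AGSO search algorithm itself:

```python
import math

def asymptotic_surprise(n_nodes, edges, partition):
    """Asymptotic surprise S = m * KL(q || <q>) of a partition, assuming an
    undirected simple graph, where
      q   = fraction of edges that fall inside communities,
      <q> = fraction of node pairs that fall inside communities."""
    m = len(edges)
    comm = {}
    for c, nodes in enumerate(partition):
        for v in nodes:
            comm[v] = c
    m_int = sum(1 for u, v in edges if comm[u] == comm[v])
    pairs = n_nodes * (n_nodes - 1) // 2
    pairs_int = sum(len(c) * (len(c) - 1) // 2 for c in partition)
    q, q_exp = m_int / m, pairs_int / pairs

    def kl(a, b):
        # binary Kullback-Leibler divergence; 0*log(0) taken as 0
        s = 0.0
        for p, p0 in ((a, b), (1 - a, 1 - b)):
            if p > 0:
                s += p * math.log(p / p0)
        return s

    return m * kl(q, q_exp)
```

A greedy optimizer in the spirit of AGSO would repeatedly merge the pair of communities whose merge most increases this score, stopping when no merge improves it.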

  1. X rays and radioactivity: a complete surprise

    International Nuclear Information System (INIS)

    Radvanyi, P.; Bordry, M.

    1995-01-01

    The discoveries of X rays and of radioactivity came as complete experimental surprises; the physicists, at that time, had no previous hint of a possible structure of atoms. It is difficult now, knowing what we know, to replace ourselves in the spirit, astonishment and questioning of these years, between 1895 and 1903. The nature of X rays was soon hypothesized, but the nature of the rays emitted by uranium, polonium and radium was much more difficult to disentangle, as they were a mixture of different types of radiations. The origin of the energy continuously released in radioactivity remained a complete mystery for a few years. The multiplicity of the radioactive substances soon became a difficult matter: what was real and what was induced? Isotopy was still far ahead. It appeared that some radioactive substances had ''half-lives'': were they genuine radioactive elements or was it just a transitory phenomenon? Henri Becquerel (in 1900) and Pierre and Marie Curie (in 1902) hesitated on the correct answer. Only after Ernest Rutherford and Frederick Soddy established that radioactivity was the transmutation of one element into another could one understand that a solid element transformed into a gaseous element, which in turn transformed itself into a succession of solid radioactive elements. It was only in 1913 - after the discovery of the atomic nucleus -, through precise measurements of X ray spectra, that Henry Moseley showed that the number of electrons of a given atom - and the charge of its nucleus - was equal to its atomic number in the periodic table. (authors)

  2. X rays and radioactivity: a complete surprise

    Energy Technology Data Exchange (ETDEWEB)

    Radvanyi, P. [Laboratoire National Saturne, Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France); Bordry, M. [Institut du Radium, 75 - Paris (France)

    1995-12-31

    The discoveries of X rays and of radioactivity came as complete experimental surprises; the physicists, at that time, had no previous hint of a possible structure of atoms. It is difficult now, knowing what we know, to replace ourselves in the spirit, astonishment and questioning of these years, between 1895 and 1903. The nature of X rays was soon hypothesized, but the nature of the rays emitted by uranium, polonium and radium was much more difficult to disentangle, as they were a mixture of different types of radiations. The origin of the energy continuously released in radioactivity remained a complete mystery for a few years. The multiplicity of the radioactive substances soon became a difficult matter: what was real and what was induced? Isotopy was still far ahead. It appeared that some radioactive substances had ''half-lives'': were they genuine radioactive elements or was it just a transitory phenomenon? Henri Becquerel (in 1900) and Pierre and Marie Curie (in 1902) hesitated on the correct answer. Only after Ernest Rutherford and Frederick Soddy established that radioactivity was the transmutation of one element into another could one understand that a solid element transformed into a gaseous element, which in turn transformed itself into a succession of solid radioactive elements. It was only in 1913 - after the discovery of the atomic nucleus -, through precise measurements of X ray spectra, that Henry Moseley showed that the number of electrons of a given atom - and the charge of its nucleus - was equal to its atomic number in the periodic table. (authors).

  3. CRISPR transcript processing: a mechanism for generating a large number of small interfering RNAs

    Directory of Open Access Journals (Sweden)

    Djordjevic Marko

    2012-07-01

    Full Text Available Abstract Background CRISPR/Cas (Clustered Regularly Interspaced Short Palindromic Repeats/CRISPR associated sequences is a recently discovered prokaryotic defense system against foreign DNA, including viruses and plasmids. The CRISPR cassette is transcribed as a continuous transcript (pre-crRNA, which is processed by Cas proteins into small RNA molecules (crRNAs that are responsible for defense against invading viruses. Experiments in E. coli report that overexpression of cas genes generates a large number of crRNAs from only a few pre-crRNAs. Results Here we develop a minimal model of CRISPR processing, which we parameterize based on available experimental data. From the model, we show that the system can generate a large amount of crRNAs based on only a small decrease in the amount of pre-crRNAs. The relationship between the decrease of pre-crRNAs and the increase of crRNAs corresponds to strong linear amplification. Interestingly, this strong amplification crucially depends on fast non-specific degradation of pre-crRNA by an unidentified nuclease. We show that overexpression of cas genes above a certain level does not result in further increase of crRNA, but that this saturation can be relieved if the rate of CRISPR transcription is increased. We furthermore show that a small increase of the CRISPR transcription rate can substantially decrease the extent of cas gene activation necessary to achieve a desired amount of crRNA. Conclusions The simple mathematical model developed here is able to explain existing experimental observations on CRISPR transcript processing in Escherichia coli. The model shows that a competition between specific pre-crRNA processing and non-specific degradation determines the steady-state levels of crRNA and is responsible for strong linear amplification of crRNAs when cas genes are overexpressed. 
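A minimal version of the kind of model described, with one processing channel and one non-specific degradation channel, can be written as two linear ODEs whose steady states are available in closed form. The parameter values below are illustrative, not the paper's fitted values:

```python
def steady_states(T, k_proc, k_deg, n_crRNA, delta):
    """Steady states of a minimal CRISPR-processing model (illustrative):
        dP/dt = T - (k_proc + k_deg) * P          # pre-crRNA
        dC/dt = n_crRNA * k_proc * P - delta * C  # crRNA
    T: transcription rate; k_proc: specific processing rate;
    k_deg: non-specific degradation rate; n_crRNA: crRNAs per precursor;
    delta: crRNA decay rate."""
    P = T / (k_proc + k_deg)
    C = n_crRNA * k_proc * P / delta
    return P, C

# cas overexpression modeled as a tenfold increase in the processing rate:
P_lo, C_lo = steady_states(T=1.0, k_proc=0.1, k_deg=10.0, n_crRNA=10, delta=0.1)
P_hi, C_hi = steady_states(T=1.0, k_proc=1.0, k_deg=10.0, n_crRNA=10, delta=0.1)
```

With fast non-specific degradation (k_deg much larger than k_proc), the tenfold increase in processing lowers the pre-crRNA pool by only a few percent while raising the crRNA pool roughly ninefold — the strong linear amplification described above.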
The model further shows how disappearance of only a few pre-crRNA molecules normally present in the cell can lead to a large (two

  4. Space Situational Awareness of Large Numbers of Payloads From a Single Deployment

    Science.gov (United States)

    Segerman, A.; Byers, J.; Emmert, J.; Nicholas, A.

    2014-09-01

    The nearly simultaneous deployment of a large number of payloads from a single vehicle presents a new challenge for space object catalog maintenance and space situational awareness (SSA). Following two cubesat deployments last November, it took five weeks to catalog the resulting 64 orbits. The upcoming Kicksat mission will present an even greater SSA challenge, with its deployment of 128 chip-sized picosats. Although all of these deployments are in short-lived orbits, future deployments will inevitably occur at higher altitudes, with a longer term threat of collision with active spacecraft. With such deployments, individual scientific payload operators require rapid precise knowledge of their satellites' locations. Following the first November launch, the cataloguing did not initially associate a payload with each orbit, leaving this to the satellite operators. For short duration missions, the time required to identify an experiment's specific orbit may easily be a large fraction of the spacecraft's lifetime. For a Kicksat-type deployment, present tracking cannot collect enough observations to catalog each small object. The current approach is to treat the chip cloud as a single catalog object. However, the cloud dissipates into multiple subclouds and, ultimately, tiny groups of untrackable chips. One response to this challenge may be to mandate installation of a transponder on each spacecraft. Directional transponder transmission detections could be used as angle observations for orbit cataloguing. Of course, such an approach would only be employable with cooperative spacecraft. In other cases, a probabilistic association approach may be useful, with the goal being to establish the probability of an element being at a given point in space. This would permit more reliable assessment of the probability of collision of active spacecraft with any cloud element. This paper surveys the cataloguing challenges presented by large scale deployments of small spacecraft

  5. Droplet Breakup in Asymmetric T-Junctions at Intermediate to Large Capillary Numbers

    Science.gov (United States)

    Sadr, Reza; Cheng, Way Lee

    2017-11-01

    Splitting of a parent droplet into multiple daughter droplets of desired sizes is often desired to enhance production and investigational efficiency in microfluidic devices. This can be done in an active or a passive mode, depending on whether an external power source is used. In this study, three-dimensional simulations were performed using the Volume-of-Fluid (VOF) method to analyze droplet splitting in asymmetric T-junctions with different outlet lengths. The parent droplet is divided into two uneven portions; the volumetric ratio of the daughter droplets, in theory, depends on the length ratio of the outlet branches. The study identified various breakup modes, such as primary, transition, bubble and non-breakup, under various flow conditions and configurations of the T-junctions. In addition, an analysis of the primary breakup regimes was conducted to study the breakup mechanisms. The results show that the way a droplet splits in an asymmetric T-junction differs from the process in a symmetric T-junction. A model for the asymmetric breakup criteria at intermediate to large capillary numbers is presented. The proposed model is an expanded version of a theoretically derived model for symmetric droplet breakup under similar flow conditions.

  6. Growth of equilibrium structures built from a large number of distinct component types.

    Science.gov (United States)

    Hedges, Lester O; Mannige, Ranjan V; Whitelam, Stephen

    2014-09-14

    We use simple analytic arguments and lattice-based computer simulations to study the growth of structures made from a large number of distinct component types. Components possess 'designed' interactions, chosen to stabilize an equilibrium target structure in which each component type has a defined spatial position, as well as 'undesigned' interactions that allow components to bind in a compositionally-disordered way. We find that high-fidelity growth of the equilibrium target structure can happen in the presence of substantial attractive undesigned interactions, as long as the energy scale of the set of designed interactions is chosen appropriately. This observation may help explain why equilibrium DNA 'brick' structures self-assemble even if undesigned interactions are not suppressed [Ke et al. Science, 338, 1177, (2012)]. We also find that high-fidelity growth of the target structure is most probable when designed interactions are drawn from a distribution that is as narrow as possible. We use this result to suggest how to choose complementary DNA sequences in order to maximize the fidelity of multicomponent self-assembly mediated by DNA. We also comment on the prospect of growing macroscopic structures in this manner.

  7. Source of vacuum electromagnetic zero-point energy and Dirac's large numbers hypothesis

    International Nuclear Information System (INIS)

    Simaciu, I.; Dumitrescu, G.

    1993-01-01

    The stochastic electrodynamics states that zero-point fluctuation of the vacuum (ZPF) is an electromagnetic zero-point radiation with spectral density ρ(ω) = ℏω³/2π²c³. Protons, free electrons and atoms are sources for this radiation. Each of them absorbs and emits energy by interacting with the ZPF. At equilibrium, ZPF radiation is scattered by dipoles. The scattered-radiation spectral density is ρ(ω,r) = ρ(ω)·c·σ(ω)/4πr². The dipole-radiation spectral density of the Universe is ρ = ∫₀ᴿ n·ρ(ω,r)·4πr² dr. But if σ_atom ≈ σ_e ≈ σ_T, then ρ ≈ ρ(ω)·σ_T·R·n. Moreover, if ρ = ρ(ω), then σ_T·R·n = 1. With R = GM/c² and σ_T ≅ (e²/m_ec²)² ∝ r_e², the relation σ_T·R·n ≈ 1 is equivalent to R/r_e = e²/Gm_pm_e, i.e. the cosmological coincidence discussed in the context of Dirac's large-numbers hypothesis. (Author)
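The coincidence R/r_e = e²/Gm_pm_e involves the famous large number of order 10³⁹: the ratio of the electrostatic to the gravitational force between a proton and an electron. A quick numerical check with CODATA-style constants:

```python
import math

# Ratio of electrostatic to gravitational attraction for a proton-electron
# pair, the "large number" in Dirac's hypothesis.
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
m_p = 1.67262192369e-27  # proton mass, kg
m_e = 9.1093837015e-31   # electron mass, kg

ratio = e ** 2 / (4 * math.pi * eps0 * G * m_p * m_e)
```

The ratio comes out near 2.3 × 10³⁹, the magnitude at the heart of the large-numbers coincidences.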

  8. Surprisal analysis and probability matrices for rotational energy transfer

    International Nuclear Information System (INIS)

    Levine, R.D.; Bernstein, R.B.; Kahana, P.; Procaccia, I.; Upchurch, E.T.

    1976-01-01

    The information-theoretic approach is applied to the analysis of state-to-state rotational energy transfer cross sections. The rotational surprisal is evaluated in the usual way, in terms of the deviance of the cross sections from their reference (''prior'') values. The surprisal is found to be an essentially linear function of the energy transferred. This behavior accounts for the experimentally observed exponential gap law for the hydrogen halide systems. The data base here analyzed (taken from the literature) is largely computational in origin: quantal calculations for the hydrogenic systems H₂ + H, He, Li⁺; HD + He; D₂ + H and for the N₂ + Ar system; and classical trajectory results for H₂ + Li⁺; D₂ + Li⁺ and N₂ + Ar. The surprisal analysis not only serves to compact a large body of data but also aids in the interpretation of the results. A single surprisal parameter θ_R suffices to account for the (relative) magnitude of all state-to-state inelastic cross sections at a given energy
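Linear surprisal in the energy transferred is exactly the exponential gap law: if the surprisal I = −ln(σ/σ⁰) grows as θ|ΔE|, then σ = σ⁰·exp(−θ|ΔE|). A small sketch with synthetic numbers (the cross sections and θ below are illustrative, not the paper's data):

```python
import math

def surprisal(sigma, sigma_prior):
    """Rotational surprisal I = -ln(sigma / sigma0): the deviance of a
    state-to-state cross section from its prior (statistical) value."""
    return -math.log(sigma / sigma_prior)

# Synthetic cross sections obeying the exponential gap law with theta = 1.5
theta_true = 1.5
dE = [0.1, 0.2, 0.4, 0.8]        # energy transferred (arbitrary units)
sigma0 = [2.0, 1.5, 1.0, 0.5]    # prior cross sections
sigma = [s0 * math.exp(-theta_true * e) for s0, e in zip(sigma0, dE)]

# A plot of I against |dE| is a straight line with slope theta,
# so theta can be recovered from any two points:
I = [surprisal(s, s0) for s, s0 in zip(sigma, sigma0)]
theta_est = (I[-1] - I[0]) / (dE[-1] - dE[0])
```

This is the sense in which a single parameter θ_R compacts a whole table of state-to-state cross sections at a given energy.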

  9. On the chromatic number of triangle-free graphs of large minimum degree

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2002-01-01

    We prove that, for each fixed real number c > 1/3, the triangle-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdős and Simonovits in 1973, who pointed out that there is no such result for c < 1/3.

  10. Surprise: a belief or an emotion?

    Science.gov (United States)

    Mellers, Barbara; Fincher, Katrina; Drummond, Caitlin; Bigony, Michelle

    2013-01-01

    Surprise is a fundamental link between cognition and emotion. It is shaped by cognitive assessments of likelihood, intuition, and superstition, and it in turn shapes hedonic experiences. We examine this connection between cognition and emotion and offer an explanation called decision affect theory. Our theory predicts the affective consequences of mistaken beliefs, such as overconfidence and hindsight. It provides insight about why the pleasure of a gain can loom larger than the pain of a comparable loss. Finally, it explains cross-cultural differences in emotional reactions to surprising events. By changing the nature of the unexpected (from chance to good luck), one can alter the emotional reaction to surprising events. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. The Application Law of Large Numbers That Predicts The Amount of Actual Loss in Insurance of Life

    Science.gov (United States)

    Tinungki, Georgina Maria

    2018-03-01

    The law of large numbers is a statistical concept that uses the average number of events or risks in a sample or population to predict something: the larger the population considered, the more accurate the prediction. In the field of insurance, the law of large numbers is used to predict the risk of loss or claims among participants so that the premium can be calculated appropriately. For example, if on average one of every 100 insurance participants files an accident claim, then the premiums of 100 participants should be able to provide the sum assured for at least one accident claim. The more insurance participants are considered, the more precise the prediction of claims and the calculation of the premium. Life insurance, as a tool for spreading risk, can only work if a life insurance company is able to bear the same risk in large numbers; here the law of large numbers applies. The law of large numbers states that as the amount of exposure to losses increases, the predicted loss comes closer to the actual loss. The use of the law of large numbers thus allows the number of losses to be predicted better.
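The convergence described can be illustrated with a Bernoulli portfolio: each policy files a claim with probability p = 1%, and the empirical claim rate concentrates around p as the portfolio grows (illustrative numbers, not actuarial data):

```python
import random

def empirical_claim_rate(n_policies: int, p_claim: float,
                         rng: random.Random) -> float:
    """Fraction of n independent policies that file a claim, each with
    probability p_claim (a simple Bernoulli portfolio)."""
    claims = sum(1 for _ in range(n_policies) if rng.random() < p_claim)
    return claims / n_policies

# Law of large numbers: the empirical rate approaches p = 1% as the
# portfolio grows, so premiums priced on p become increasingly reliable.
rng = random.Random(42)
small = empirical_claim_rate(100, 0.01, rng)      # noisy for 100 policies
large = empirical_claim_rate(200_000, 0.01, rng)  # close to 0.01
```

With 100 policies the observed rate can easily be 0% or 3%; with 200,000 policies its standard deviation is about 0.02%, which is why premiums computed from the expected claim rate are reliable only for large portfolios.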

  12. On the chromatic number of pentagon-free graphs of large minimum degree

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2007-01-01

    We prove that, for each fixed real number c > 0, the pentagon-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdős and Simonovits in 1973. A similar result holds for any other fixed odd cycle, except the triangle.

  13. Viral marketing: the use of surprise

    NARCIS (Netherlands)

    Lindgreen, A.; Vanhamme, J.; Clarke, I.; Flaherty, T.B.

    2005-01-01

    Viral marketing involves consumers passing along a company's marketing message to their friends, family, and colleagues. This chapter reviews viral marketing campaigns and argues that the emotion of surprise often is at work and that this mechanism resembles that of word-of-mouth marketing.

  14. Glial heterotopia of maxilla: A clinical surprise

    Directory of Open Access Journals (Sweden)

    Santosh Kumar Mahalik

    2011-01-01

    Full Text Available Glial heterotopia is a rare congenital mass lesion which often presents as a clinical surprise. We report a case of extranasal glial heterotopia in a neonate with unusual features. The presentation, management strategy, etiopathogenesis and histopathology of the mass lesion has been reviewed.

  15. On the Behavior of ECN/RED Gateways Under a Large Number of TCP Flows: Limit Theorems

    National Research Council Canada - National Science Library

    Tinnakornsrisuphap, Peerapol; Makowski, Armand M

    2005-01-01

    ... As the number of competing flows becomes large, the asymptotic queue behavior at the gateway can be described by a simple recursion, and the throughput behavior of individual TCP flows becomes asymptotically independent...

  16. A Shocking Surprise in Stephan's Quintet

    Science.gov (United States)

    2006-01-01

    This false-color composite image of the Stephan's Quintet galaxy cluster clearly shows one of the largest shock waves ever seen (green arc). The wave was produced by one galaxy falling toward another at speeds of more than one million miles per hour. The image is made up of data from NASA's Spitzer Space Telescope and a ground-based telescope in Spain. Four of the five galaxies in this picture are involved in a violent collision, which has already stripped most of the hydrogen gas from the interiors of the galaxies. The centers of the galaxies appear as bright yellow-pink knots inside a blue haze of stars, and the galaxy producing all the turmoil, NGC7318b, is the left of two small bright regions in the middle right of the image. One galaxy, the large spiral at the bottom left of the image, is a foreground object and is not associated with the cluster. The titanic shock wave, larger than our own Milky Way galaxy, was detected by the ground-based telescope using visible-light wavelengths. It consists of hot hydrogen gas. As NGC7318b collides with gas spread throughout the cluster, atoms of hydrogen are heated in the shock wave, producing the green glow. Spitzer pointed its infrared spectrograph at the peak of this shock wave (middle of green glow) to learn more about its inner workings. This instrument breaks light apart into its basic components. Data from the instrument are referred to as spectra and are displayed as curving lines that indicate the amount of light coming at each specific wavelength. The Spitzer spectrum showed a strong infrared signature for incredibly turbulent gas made up of hydrogen molecules. This gas is caused when atoms of hydrogen rapidly pair-up to form molecules in the wake of the shock wave. Molecular hydrogen, unlike atomic hydrogen, gives off most of its energy through vibrations that emit in the infrared. This highly disturbed gas is the most turbulent molecular hydrogen ever seen. 
Astronomers were surprised not only by the turbulence

  17. Numerical analysis of jet impingement heat transfer at high jet Reynolds number and large temperature difference

    DEFF Research Database (Denmark)

    Jensen, Michael Vincent; Walther, Jens Honore

    2013-01-01

    was investigated at a jet Reynolds number of 1.66 × 10⁵ and a temperature difference between jet inlet and wall of 1600 K. The focus was on the convective heat transfer contribution, as thermal radiation was not included in the investigation. A considerable influence of the turbulence intensity at the jet inlet ... to about 100% were observed. Furthermore, the variation in stagnation point heat transfer was examined for jet Reynolds numbers in the range from 1.10 × 10⁵ to 6.64 × 10⁵. Based on the investigations, a correlation is suggested between the stagnation point Nusselt number, the jet Reynolds number, and the turbulence intensity at the jet inlet for impinging jet flows at high jet Reynolds numbers. Copyright © 2013 Taylor and Francis Group, LLC.

  18. Arbitrarily large numbers of kink internal modes in inhomogeneous sine-Gordon equations

    Energy Technology Data Exchange (ETDEWEB)

    González, J.A., E-mail: jalbertgonz@yahoo.es [Department of Physics, Florida International University, Miami, FL 33199 (United States); Department of Natural Sciences, Miami Dade College, 627 SW 27th Ave., Miami, FL 33135 (United States); Bellorín, A., E-mail: alberto.bellorin@ucv.ve [Escuela de Física, Facultad de Ciencias, Universidad Central de Venezuela, Apartado Postal 47586, Caracas 1041-A (Venezuela, Bolivarian Republic of); García-Ñustes, M.A., E-mail: monica.garcia@pucv.cl [Instituto de Física, Pontificia Universidad Católica de Valparaíso, Casilla 4059 (Chile); Guerrero, L.E., E-mail: lguerre@usb.ve [Departamento de Física, Universidad Simón Bolívar, Apartado Postal 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of); Jiménez, S., E-mail: s.jimenez@upm.es [Departamento de Matemática Aplicada a las TT.II., E.T.S.I. Telecomunicación, Universidad Politécnica de Madrid, 28040-Madrid (Spain); Vázquez, L., E-mail: lvazquez@fdi.ucm.es [Departamento de Matemática Aplicada, Facultad de Informática, Universidad Complutense de Madrid, 28040-Madrid (Spain)

    2017-06-28

    We prove analytically the existence of an infinite number of internal (shape) modes of sine-Gordon solitons in the presence of some inhomogeneous long-range forces, provided some conditions are satisfied. - Highlights: • We have found exact kink solutions to the perturbed sine-Gordon equation. • We have been able to study analytically the kink stability problem. • A kink equilibrated by an exponentially-localized perturbation has a finite number of oscillation modes. • A sufficiently broad equilibrating perturbation supports an infinite number of soliton internal modes.

  19. Large-eddy simulation of flow over a grooved cylinder up to transcritical Reynolds numbers

    KAUST Repository

    Cheng, W.

    2017-11-27

    We report wall-resolved large-eddy simulation (LES) of flow over a grooved cylinder up to the transcritical regime. The stretched-vortex subgrid-scale model is embedded in a general fourth-order finite-difference code discretization on a curvilinear mesh. In the present study grooves are equally distributed around the circumference of the cylinder, each of sinusoidal shape with height , invariant in the spanwise direction. Based on the two parameters, and the Reynolds number where is the free-stream velocity, the diameter of the cylinder and the kinematic viscosity, two main sets of simulations are described. The first set varies from to while fixing . We study the flow deviation from the smooth-cylinder case, with emphasis on several important statistics such as the length of the mean-flow recirculation bubble , the pressure coefficient , the skin-friction coefficient and the non-dimensional pressure gradient parameter . It is found that, with increasing at fixed , some properties of the mean flow behave somewhat similarly to changes in the smooth-cylinder flow when is increased. This includes shrinking and nearly constant minimum pressure coefficient. In contrast, while the non-dimensional pressure gradient parameter remains nearly constant for the front part of the smooth cylinder flow, shows an oscillatory variation for the grooved-cylinder case. The second main set of LES varies from to with fixed . It is found that this range spans the subcritical and supercritical regimes and reaches the beginning of the transcritical flow regime. Mean-flow properties are diagnosed and compared with available experimental data including and the drag coefficient . The timewise variation of the lift and drag coefficients are also studied to elucidate the transition among three regimes. Instantaneous images of the surface, skin-friction vector field and also of the three-dimensional Q-criterion field are utilized to further understand the dynamics of the near-surface flow

  20. Radar Design to Protect Against Surprise

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

Technological and doctrinal surprise is about rendering preparations for conflict irrelevant or ineffective. For a sensor, this means essentially rendering the sensor irrelevant or ineffective in its ability to help determine truth. Recovery from this sort of surprise is facilitated by flexibility in our own technology and doctrine. For a sensor, this means flexibility in its architecture, design, tactics, and the designing organizations' processes. Acknowledgements: This report is the result of an unfunded research and development activity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  1. Large Eddy Simulation of an SD7003 Airfoil: Effects of Reynolds number and Subgrid-scale modeling

    DEFF Research Database (Denmark)

    Sarlak Chivaee, Hamid

    2017-01-01

This paper presents results of a series of numerical simulations studying the aerodynamic characteristics of the low-Reynolds-number Selig-Donovan airfoil, SD7003. The large-eddy simulation (LES) technique is used for all computations at chord-based Reynolds numbers of 10,000, 24,000 and 60...... the Reynolds number, and the effect is visible even at a relatively low chord Reynolds number of 60,000. Among the tested models, the dynamic Smagorinsky gives the poorest predictions of the flow, with overprediction of lift and a larger separation on the airfoil's suction side. Among various models, the implicit...

  2. A large-scale survey of genetic copy number variations among Han Chinese residing in Taiwan

    Directory of Open Access Journals (Sweden)

    Wu Jer-Yuarn

    2008-12-01

Background: Copy number variations (CNVs) have recently been recognized as important structural variations in the human genome. CNVs can affect gene expression and thus may contribute to phenotypic differences. The copy number inferring tool (CNIT) is an effective hidden Markov model-based algorithm for estimating allele-specific copy number and predicting chromosomal alterations from single nucleotide polymorphism microarrays. The CNIT algorithm, which was constructed using data from 270 HapMap multi-ethnic individuals, was applied to identify CNVs from 300 unrelated Han Chinese individuals in Taiwan. Results: Using stringent selection criteria, 230 regions with variable copy numbers were identified in the Han Chinese population; 133 (57.83%) had been reported previously, and 64 displayed greater than 1% CNV allele frequency. The average size of the CNV regions was 322 kb (ranging from 1.48 kb to 5.68 Mb), and together they covered 2.47% of the human genome. A total of 196 of the CNV regions were simple deletions and 27 were simple amplifications. There were 449 genes and 5 microRNAs within these CNV regions; some of these genes are known to be associated with diseases. Conclusion: The identified CNVs are characteristic of the Han Chinese population and should be considered when genetic studies are conducted. The CNV distribution in the human genome is still poorly characterized, and there is much diversity among different ethnic populations.

  3. Surprise: Dwarf Galaxy Harbors Supermassive Black Hole

    Science.gov (United States)

    2011-01-01

The surprising discovery of a supermassive black hole in a small nearby galaxy has given astronomers a tantalizing look at how black holes and galaxies may have grown in the early history of the Universe. Finding a black hole a million times more massive than the Sun in a star-forming dwarf galaxy is a strong indication that supermassive black holes formed before the buildup of galaxies, the astronomers said. The galaxy, called Henize 2-10, 30 million light-years from Earth, has been studied for years, and is forming stars very rapidly. Irregularly shaped and about 3,000 light-years across (compared to 100,000 for our own Milky Way), it resembles what scientists think were some of the first galaxies to form in the early Universe. "This galaxy gives us important clues about a very early phase of galaxy evolution that has not been observed before," said Amy Reines, a Ph.D. candidate at the University of Virginia. Supermassive black holes lie at the cores of all "full-sized" galaxies. In the nearby Universe, there is a direct relationship -- a constant ratio -- between the masses of the black holes and that of the central "bulges" of the galaxies, leading astronomers to conclude that the black holes and bulges affected each other's growth. Two years ago, an international team of astronomers found that black holes in young galaxies in the early Universe were more massive than this ratio would indicate. This, they said, was strong evidence that black holes developed before their surrounding galaxies. "Now, we have found a dwarf galaxy with no bulge at all, yet it has a supermassive black hole. This greatly strengthens the case for the black holes developing first, before the galaxy's bulge is formed," Reines said.
Reines, along with Gregory Sivakoff and Kelsey Johnson of the University of Virginia and the National Radio Astronomy Observatory (NRAO), and Crystal Brogan of the NRAO, observed Henize 2-10 with the National Science Foundation's Very Large Array radio telescope and

  4. Q-factorial Gorenstein toric Fano varieties with large Picard number

    DEFF Research Database (Denmark)

    Nill, Benjamin; Øbro, Mikkel

    2010-01-01

In dimension $d$, ${\boldsymbol Q}$-factorial Gorenstein toric Fano varieties with Picard number $\rho_X$ correspond to simplicial reflexive polytopes with $\rho_X + d$ vertices. Casagrande showed that any $d$-dimensional simplicial reflexive polytope has at most $3d$ and $3d-1$ vertices if $d$...... is even and odd, respectively. Moreover, for $d$ even there is up to unimodular equivalence only one such polytope with $3d$ vertices, corresponding to the product of $d/2$ copies of a del Pezzo surface of degree six. In this paper we completely classify all $d$-dimensional simplicial reflexive polytopes...... having $3d-1$ vertices, corresponding to $d$-dimensional ${\boldsymbol Q}$-factorial Gorenstein toric Fano varieties with Picard number $2d-1$. For $d$ even, there exist three such varieties, with two being singular, while for $d > 1$ odd there exist precisely two, both being nonsingular toric fiber...

  5. A comment on "bats killed in large numbers at United States wind energy facilities"

    Science.gov (United States)

    Huso, Manuela M.P.; Dalthorp, Dan

    2014-01-01

    Widespread reports of bat fatalities caused by wind turbines have raised concerns about the impacts of wind power development. Reliable estimates of the total number killed and the potential effects on populations are needed, but it is crucial that they be based on sound data. In a recent BioScience article, Hayes (2013) estimated that over 600,000 bats were killed at wind turbines in the United States in 2012. The scientific errors in the analysis are numerous, with the two most serious being that the included sites constituted a convenience sample, not a representative sample, and that the individual site estimates are derived from such different methodologies that they are inherently not comparable. This estimate is almost certainly inaccurate, but whether the actual number is much smaller, much larger, or about the same is uncertain. An accurate estimate of total bat fatality is not currently possible, given the shortcomings of the available data.

  6. Large scale Direct Numerical Simulation of premixed turbulent jet flames at high Reynolds number

    Science.gov (United States)

    Attili, Antonio; Luca, Stefano; Lo Schiavo, Ermanno; Bisetti, Fabrizio; Creta, Francesco

    2016-11-01

A set of direct numerical simulations of turbulent premixed jet flames at different Reynolds and Karlovitz numbers is presented. The simulations feature finite-rate chemistry with 16 species and 73 reactions and up to 22 billion grid points. The jet consists of a methane/air mixture with equivalence ratio ϕ = 0.7 and temperature varying between 500 and 800 K. The temperature and species concentrations in the coflow correspond to the equilibrium state of the burnt mixture. All the simulations are performed at 4 atm. The flame length, normalized by the jet width, decreases significantly as the Reynolds number increases. This is consistent with an increase of the turbulent flame speed due to the increased integral scale of turbulence. This behavior is typical of flames in the thin-reaction-zone regime, which are affected by turbulent transport in the preheat layer. Fractal dimension and topology of the flame surface, statistics of temperature gradients, and flame structure are investigated and the dependence of these quantities on the Reynolds number is assessed.

  7. Efficient high speed communications over electrical powerlines for a large number of users

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J.; Tripathi, K.; Latchman, H.A. [Florida Univ., Gainesville, FL (United States). Dept. of Electrical and Computer Engineering

    2007-07-01

Affordable broadband Internet communication is currently available for residential use via cable modem and other forms of digital subscriber lines (DSL). Powerline communication (PLC) systems were long not considered seriously for communications due to their low speed and high development cost. However, due to technological advances, PLC is now spreading to local area networks and broadband-over-powerline systems. This paper presented a newly proposed modification to the standard HomePlug 1.0 MAC protocol that makes it a constant-contention-window-based scheme. HomePlug 1.0 was developed based on orthogonal frequency division multiplexing (OFDM) and carrier sense multiple access with collision avoidance (CSMA/CA). It is currently the most commonly used power line communication technology, supporting a transmission rate of up to 14 Mbps on the power line. However, the throughput performance of the original scheme deteriorates markedly as the number of users increases. For that reason, a constant-contention-window-based medium access control (MAC) protocol for HomePlug 1.0 was proposed under the assumption that the number of active stations is known. An analytical framework based on Markov chains was developed in order to model this modified protocol under saturation conditions. Modeling results accurately matched the actual performance of the system. This paper revealed that the performance can be improved significantly if the protocol variables are parameterized in terms of the number of active stations. 15 refs., 1 tab., 6 figs.
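
    The intuition behind sizing the contention window by the number of active stations can be illustrated with a short Monte Carlo sketch (a simplified slotted-contention model, not the actual HomePlug 1.0 priority-resolution mechanism; the function name and parameters are hypothetical):

    ```python
    import random

    def slot_success_prob(n_stations, window, trials=20000, seed=7):
        """Monte Carlo estimate of the chance that a contention round has a
        single winner: each station draws a backoff slot uniformly from
        [0, window-1]; the round succeeds iff the minimum slot is unique."""
        rng = random.Random(seed)
        wins = 0
        for _ in range(trials):
            slots = [rng.randrange(window) for _ in range(n_stations)]
            m = min(slots)
            if slots.count(m) == 1:
                wins += 1
        return wins / trials

    # With 10 active stations, a wide window collides far less often than a
    # narrow one -- the effect a load-aware constant window exploits.
    print(slot_success_prob(10, 64), slot_success_prob(10, 2))
    ```

    Keeping the window matched to the station count keeps the single-winner probability high as load grows, which is the saturation-throughput gain the modified MAC targets.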

  8. Detailed Measurements of Rayleigh-Taylor Mixing at Large and Small Atwood Numbers

    International Nuclear Information System (INIS)

Andrews, Malcolm J., Ph.D.

    2004-01-01

This project has two major tasks: Task 1. The construction of a new air/helium facility to collect detailed measurements of Rayleigh-Taylor (RT) mixing at high Atwood number, and the distribution of these data to LLNL, LANL, and Alliance members for code validation and design purposes. Task 2. The collection of initial condition data from the new air/helium facility, for use with validation of RT simulation codes at LLNL and LANL, and also studies of multi-layer mixing with the existing water channel facility. Over the last twelve (12) months there has been excellent progress, detailed in this report, with both tasks. As of December 10, 2004, the air/helium facility is complete and extensive testing and validation of diagnostics has been performed. Currently, experiments with air/helium up to Atwood numbers of 0.25 (the maximum is 0.75, but the highest Reynolds numbers are at 0.25) are being performed. The progress matches the project plan, as does the budget, and we expect this to continue for 2005. With interest expressed from LLNL, we have continued with initial condition studies using the water channel. This work has also progressed well, with one of the graduate research assistants (Mr. Nick Mueschke) visiting LLNL the past two summers to work with Dr. O. Schilling. Several journal papers are in preparation that describe the work. Two M.Sc. degrees have been completed (Mr. Nick Mueschke and Mr. Wayne Kraft, 12/1/03); both are now pursuing Ph.D.s funded by this DOE Alliances project. Presently three (3) Ph.D. graduate research assistants and two (2) undergraduate research assistants are supported on the project. During the year, two (2) journal papers and two (2) conference papers have been published, ten (10) presentations were made at conferences, and three (3) invited presentations were given.

  9. Mapping Ad Hoc Communications Network of a Large Number Fixed-Wing UAV Swarm

    Science.gov (United States)

    2017-03-01

shows like "Agents of S.H.I.E.L.D.". Inspiration can come from the imaginative minds of people or from the world around us. Swarms have demonstrated a...high degree of success. Bees, ants, termites, and naked mole rats maintain large groups that distribute tasks among individuals in order to achieve...the application layer and not the transport layer. Real-world vehicle-to-vehicle packet delivery rates for the 50-UAV swarm event were described in

  10. Analyzing the Large Number of Variables in Biomedical and Satellite Imagery

    CERN Document Server

    Good, Phillip I

    2011-01-01

This book grew out of an online interactive course offered through statcourse.com, and it soon became apparent to the author that the course was too limited in terms of time and length in light of the broad backgrounds of the enrolled students. The statisticians who took the course needed to be brought up to speed both on the biological context and on the specialized statistical methods needed to handle large arrays. Biologists and physicians, even though fully knowledgeable concerning the procedures used to generate microarrays, EEGs, or MRIs, needed a full introduction to the resampling met

  11. Linear optics and projective measurements alone suffice to create large-photon-number path entanglement

    International Nuclear Information System (INIS)

    Lee, Hwang; Kok, Pieter; Dowling, Jonathan P.; Cerf, Nicolas J.

    2002-01-01

We propose a method for preparing maximal path entanglement with a definite photon number N, larger than two, using projective measurements. In contrast with previously known schemes, our method uses only linear optics. Specifically, we exhibit a way of generating four-photon, path-entangled states of the form |4,0> + |0,4>, using only four beam splitters and two detectors. These states are of major interest as a resource for quantum interferometric sensors as well as for optical quantum lithography and quantum holography
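
    The metrological appeal of such states comes from the standard N00N-state interference fringe, which oscillates N times faster in the interferometer phase than a single-photon fringe (a textbook result sketched here, not a simulation of the four-beam-splitter scheme itself):

    ```python
    import numpy as np

    def noon_fringe(phi, n):
        """Detection probability ~ (1 + cos(N*phi))/2 for a path-entangled
        N00N state (|N,0> + |0,N>)/sqrt(2) after a phase shift phi in one
        arm: the fringe oscillates N times faster than for a single photon,
        the basis of super-resolution in lithography and interferometry."""
        return 0.5 * (1.0 + np.cos(n * phi))

    phases = np.linspace(0.0, 2.0 * np.pi, 9)
    # For N = 4 the signal completes four full oscillations over one
    # single-photon period.
    print(noon_fringe(phases, 4))
    ```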

  12. Laboratory Study of Magnetorotational Instability and Hydrodynamic Stability at Large Reynolds Numbers

    Science.gov (United States)

    Ji, H.; Burin, M.; Schartman, E.; Goodman, J.; Liu, W.

    2006-01-01

    Two plausible mechanisms have been proposed to explain rapid angular momentum transport during accretion processes in astrophysical disks: nonlinear hydrodynamic instabilities and magnetorotational instability (MRI). A laboratory experiment in a short Taylor-Couette flow geometry has been constructed in Princeton to study both mechanisms, with novel features for better controls of the boundary-driven secondary flows (Ekman circulation). Initial results on hydrodynamic stability have shown negligible angular momentum transport in Keplerian-like flows with Reynolds numbers approaching one million, casting strong doubt on the viability of nonlinear hydrodynamic instability as a source for accretion disk turbulence.

  13. Pupil size tracks perceptual content and surprise.

    Science.gov (United States)

    Kloosterman, Niels A; Meindertsma, Thomas; van Loon, Anouk M; Lamme, Victor A F; Bonneh, Yoram S; Donner, Tobias H

    2015-04-01

    Changes in pupil size at constant light levels reflect the activity of neuromodulatory brainstem centers that control global brain state. These endogenously driven pupil dynamics can be synchronized with cognitive acts. For example, the pupil dilates during the spontaneous switches of perception of a constant sensory input in bistable perceptual illusions. It is unknown whether this pupil dilation only indicates the occurrence of perceptual switches, or also their content. Here, we measured pupil diameter in human subjects reporting the subjective disappearance and re-appearance of a physically constant visual target surrounded by a moving pattern ('motion-induced blindness' illusion). We show that the pupil dilates during the perceptual switches in the illusion and a stimulus-evoked 'replay' of that illusion. Critically, the switch-related pupil dilation encodes perceptual content, with larger amplitude for disappearance than re-appearance. This difference in pupil response amplitude enables prediction of the type of report (disappearance vs. re-appearance) on individual switches (receiver-operating characteristic: 61%). The amplitude difference is independent of the relative durations of target-visible and target-invisible intervals and subjects' overt behavioral report of the perceptual switches. Further, we show that pupil dilation during the replay also scales with the level of surprise about the timing of switches, but there is no evidence for an interaction between the effects of surprise and perceptual content on the pupil response. Taken together, our results suggest that pupil-linked brain systems track both the content of, and surprise about, perceptual events. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
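
    The reported 61% receiver-operating characteristic is, in essence, the probability that a randomly chosen disappearance trial shows a larger pupil response than a randomly chosen re-appearance trial. That quantity can be computed from two amplitude samples with the Mann-Whitney formulation of the ROC area (a generic sketch with made-up data, not the study's analysis pipeline):

    ```python
    import numpy as np

    def roc_auc(pos, neg):
        """Area under the ROC curve via the Mann-Whitney statistic: the
        probability that a randomly drawn 'disappearance' amplitude exceeds
        a randomly drawn 're-appearance' amplitude (ties count half)."""
        pos = np.asarray(pos, dtype=float)[:, None]
        neg = np.asarray(neg, dtype=float)[None, :]
        return float((pos > neg).mean() + 0.5 * (pos == neg).mean())

    # Overlapping amplitude distributions give an AUC between 0.5 (chance)
    # and 1.0 (perfect single-trial prediction); ~0.61 is weak but reliable.
    ```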

  14. Dam risk reduction study for a number of large tailings dams in Ontario

    Energy Technology Data Exchange (ETDEWEB)

    Verma, N. [AMEC Earth and Environmental Ltd., Mississauga, ON (Canada); Small, A. [AMEC Earth and Environmental Ltd., Fredericton, NB (Canada); Martin, T. [AMEC Earth and Environmental, Burnaby, BC (Canada); Cacciotti, D. [AMEC Earth and Environmental Ltd., Sudbury, ON (Canada); Ross, T. [Vale Inco Ltd., Sudbury, ON (Canada)

    2009-07-01

    This paper discussed a risk reduction study conducted for 10 large tailings dams located at a central tailings facility in Ontario. Located near large industrial and urban developments, the tailings dams were built using an upstream method of construction that did not involve beach compaction or the provision of under-drainage. The study provided a historical background for the dam and presented results from investigations and instrumentation data. The methods used to develop the dam configurations were discussed, and remedial measures and risk assessment measures used on the dams were reviewed. The aim of the study was to address key sources of risk, which include the presence of high pore pressures and hydraulic gradients; the potential for liquefaction; slope instability; and the potential for overtopping. A borehole investigation was conducted and piezocone probes were used to obtain continuous data and determine soil and groundwater conditions. The study identified that the lower portion of the dam slopes were of concern. Erosion gullies could lead to larger scale failures, and elevated pore pressures could lead to the risk of seepage breakouts. It was concluded that remedial measures are now being conducted to ensure slope stability. 6 refs., 1 tab., 6 figs.

  15. EUPAN enables pan-genome studies of a large number of eukaryotic genomes.

    Science.gov (United States)

    Hu, Zhiqiang; Sun, Chen; Lu, Kuang-Chen; Chu, Xixia; Zhao, Yue; Lu, Jinyuan; Shi, Jianxin; Wei, Chaochun

    2017-08-01

    Pan-genome analyses are routinely carried out for bacteria to interpret the within-species gene presence/absence variations (PAVs). However, pan-genome analyses are rare for eukaryotes due to the large sizes and higher complexities of their genomes. Here we proposed EUPAN, a eukaryotic pan-genome analysis toolkit, enabling automatic large-scale eukaryotic pan-genome analyses and detection of gene PAVs at a relatively low sequencing depth. In the previous studies, we demonstrated the effectiveness and high accuracy of EUPAN in the pan-genome analysis of 453 rice genomes, in which we also revealed widespread gene PAVs among individual rice genomes. Moreover, EUPAN can be directly applied to the current re-sequencing projects primarily focusing on single nucleotide polymorphisms. EUPAN is implemented in Perl, R and C ++. It is supported under Linux and preferred for a computer cluster with LSF and SLURM job scheduling system. EUPAN together with its standard operating procedure (SOP) is freely available for non-commercial use (CC BY-NC 4.0) at http://cgm.sjtu.edu.cn/eupan/index.html . ccwei@sjtu.edu.cn or jianxin.shi@sjtu.edu.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  16. Number of deaths due to lung diseases: How large is the problem?

    International Nuclear Information System (INIS)

    Wagener, D.K.

    1990-01-01

The importance of lung disease as an indicator of environmentally induced adverse health effects has been recognized by inclusion among the Health Objectives for the Nation. The 1990 Health Objectives for the Nation (US Department of Health and Human Services, 1986) includes an objective that there should be virtually no new cases among newly exposed workers for four preventable occupational lung diseases: asbestosis, byssinosis, silicosis, and coal workers' pneumoconiosis. This brief communication describes two types of cause-of-death statistics, underlying and multiple cause, and demonstrates the differences between the two statistics using lung disease deaths among adult men. The choice of statistic has a large impact on estimated lung disease mortality rates. The choice of statistic may also have a large effect on the estimated mortality rates due to other chronic diseases thought to be environmentally mediated. Issues of comorbidity and the way causes of death are reported become important in the interpretation of these statistics. The choice of which statistic to use when comparing data from a study population with national statistics may greatly affect the interpretation of the study findings
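
    The gap between the two statistics can be made concrete with a toy tabulation: an underlying-cause count registers a disease only when it is selected as THE cause of death, while a multiple-cause count registers it whenever it appears anywhere on the certificate (records and cause names below are illustrative only):

    ```python
    # Toy death records: one underlying cause per death, plus every cause
    # mentioned anywhere on the certificate.
    deaths = [
        {"underlying": "lung cancer",   "mentioned": {"lung cancer", "COPD"}},
        {"underlying": "heart disease", "mentioned": {"heart disease", "asbestosis"}},
        {"underlying": "asbestosis",    "mentioned": {"asbestosis"}},
    ]

    def underlying_cause_count(records, cause):
        """Deaths where `cause` was selected as the underlying cause."""
        return sum(r["underlying"] == cause for r in records)

    def multiple_cause_count(records, cause):
        """Deaths where `cause` appears anywhere on the certificate."""
        return sum(cause in r["mentioned"] for r in records)

    # Here asbestosis is the underlying cause once but is mentioned twice,
    # so the multiple-cause rate is double the underlying-cause rate.
    print(underlying_cause_count(deaths, "asbestosis"),
          multiple_cause_count(deaths, "asbestosis"))
    ```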

  17. Formation of free round jets with long laminar regions at large Reynolds numbers

    Science.gov (United States)

    Zayko, Julia; Teplovodskii, Sergey; Chicherina, Anastasia; Vedeneev, Vasily; Reshmin, Alexander

    2018-04-01

The paper describes a new, simple method for the formation of free round jets with long laminar regions by a jet-forming device about 1.5 jet diameters in size. Submerged jets of 0.12 m diameter at Reynolds numbers of 2000 to 12,560 are experimentally studied. It is shown that for the optimal regime the laminar region length reaches 5.5 diameters at a Reynolds number of ~10,000, which is not achievable with other methods of laminar jet formation. To explain the existence of the optimal regime, a steady flow calculation in the forming unit and a stability analysis of the outcoming jet velocity profiles are conducted. The shortening of the laminar regions compared with the optimal regime is explained by the higher incoming turbulence level at lower velocities and by the increase of perturbation growth rates at larger velocities. The initial laminar regions of free jets can be used for organising air curtains for the protection of objects in medicine and technology, creating an air field with desired properties that does not mix with ambient air. Free jets with long laminar regions can also be used for detailed studies of perturbation growth and transition to turbulence in round jets.

  18. Application of Evolution Strategies to the Design of Tracking Filters with a Large Number of Specifications

    Directory of Open Access Journals (Sweden)

    Jesús García Herrero

    2003-07-01

This paper describes the application of evolution strategies to the design of interacting multiple model (IMM) tracking filters in order to fulfill a large table of performance specifications. These specifications define the desired filter performance in a thorough set of selected test scenarios, for different figures of merit and input conditions, imposing hundreds of performance goals. The design problem is stated as a numerical search in the filter parameter space to attain all specifications or, at least, to minimize in a compromise the excess over some specifications as much as possible, applying global optimization techniques from the field of evolutionary computation. In addition, a new methodology is proposed to integrate the specifications into a fitness function able to effectively guide the search to suitable solutions. The method has been applied to the design of an IMM tracker for a real-world civil air traffic control application: the accomplishment of specifications defined for the future European ARTAS system.
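
    As a rough sketch of such an optimization loop, a minimal (1+1) evolution strategy can minimize the aggregate excess over a set of specifications (the hinge-style fitness and all names below are assumptions for illustration, not the paper's exact formulation):

    ```python
    import random

    def spec_excess(params, specs):
        """Aggregate many goals into one fitness value: penalize only the
        excess of each measured figure of merit over its target (hinge)."""
        return sum(max(0.0, measure(params) - target) for measure, target in specs)

    def one_plus_one_es(fitness, dim, iters=300, sigma=0.3, seed=2):
        """Minimal (1+1) evolution strategy with a 1/5th-style step-size rule."""
        rng = random.Random(seed)
        x = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        fx = fitness(x)
        for _ in range(iters):
            y = [xi + rng.gauss(0.0, sigma) for xi in x]
            fy = fitness(y)
            if fy <= fx:            # accept only non-worsening offspring
                x, fx = y, fy
                sigma *= 1.22       # widen the search after a success
            else:
                sigma *= 0.84       # narrow it after a failure
        return x, fx

    # Two hypothetical "specifications" on a 2-parameter filter design.
    specs = [(lambda p: p[0] ** 2 + p[1] ** 2, 0.0),
             (lambda p: abs(p[0] - p[1]), 0.5)]
    best, excess = one_plus_one_es(lambda p: spec_excess(p, specs), dim=2)
    ```

    Real designs replace the toy `specs` with hundreds of (scenario, figure-of-merit, target) entries evaluated by tracking simulations, but the accept/adapt loop is the same.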

  19. Jet Impingement Heat Transfer at High Reynolds Numbers and Large Density Variations

    DEFF Research Database (Denmark)

    Jensen, Michael Vincent; Walther, Jens Honore

    2010-01-01

Jet impingement heat transfer from a round gas jet to a flat wall has been investigated numerically in a configuration with H/D = 2, where H is the distance from the jet inlet to the wall and D is the jet diameter. The jet Reynolds number was 361,000 and the density ratio across the wall boundary...... layer was 3.3 due to a substantial temperature difference of 1600 K between jet and wall. Results are presented which indicate very high heat flux levels, and it is demonstrated that the jet inlet turbulence intensity significantly influences the heat transfer results, especially in the stagnation region....... The results also show a noticeable difference in the heat transfer predictions when applying different turbulence models. Furthermore, calculations were performed to study the effect of applying temperature-dependent thermophysical properties versus constant properties and the effect of calculating the gas

  20. On the strong law of large numbers for $\\varphi$-subgaussian random variables

    OpenAIRE

    Zajkowski, Krzysztof

    2016-01-01

For $p\ge 1$ let $\varphi_p(x)=x^2/2$ if $|x|\le 1$ and $\varphi_p(x)=\frac{1}{p}|x|^p-\frac{1}{p}+\frac{1}{2}$ if $|x|>1$. For a random variable $\xi$ let $\tau_{\varphi_p}(\xi)$ denote $\inf\{a\ge 0:\ \forall_{\lambda\in\mathbb{R}}\ \ln\mathbb{E}\exp(\lambda\xi)\le\varphi_p(a\lambda)\}$; $\tau_{\varphi_p}$ is a norm in the space $Sub_{\varphi_p}=\{\xi:\ \tau_{\varphi_p}(\xi)<\infty\}$ of $\varphi_p$-subgaussian random variables... If for a sequence in $Sub_{\varphi_p}$ ($p>1$) there exist positive constants $c$ and $\alpha$ such that for every natural number $n$ the following inequality $\tau_{\varphi_p}(\sum_{i=1...

  1. Large boson number IBM calculations and their relationship to the Bohr model

    International Nuclear Information System (INIS)

    Thiamova, G.; Rowe, D.J.

    2009-01-01

Recently, the SO(5) Clebsch-Gordan (CG) coefficients up to the seniority v_max = 40 were computed in floating-point arithmetic (T.A. Welsh, unpublished (2008)); and, in exact arithmetic, as square roots of rational numbers (M.A. Caprio et al., to be published in Comput. Phys. Commun.). It is shown in this paper that extending the QQQ model calculations set up in the work by D.J. Rowe and G. Thiamova (Nucl. Phys. A 760, 59 (2005)) to N = v_max = 40 is sufficient to obtain the IBM results converged to the Bohr contraction limit. This is done by comparing some important matrix elements in both models, by looking at the seniority decomposition of low-lying states, and at the behavior of the energy and B(E2) transition strength ratios with increasing seniority. (orig.)

  2. A comparison of three approaches to compute the effective Reynolds number of the implicit large-eddy simulations

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Ye [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Thornber, Ben [The Univ. of Sydney, Sydney, NSW (Australia)

    2016-04-12

Here, implicit large-eddy simulation (ILES) has been utilized as an effective approach for calculating many complex flows at high Reynolds numbers. Richtmyer-Meshkov instability (RMI) induced flow can be viewed as a homogeneous decaying turbulence (HDT) after the passage of the shock. In this article, a critical evaluation of three methods for estimating the effective Reynolds number and the effective kinematic viscosity is undertaken utilizing high-resolution ILES data. Effective Reynolds numbers based on the vorticity and dissipation rate, or on the integral and inner-viscous length scales, are found to be the most self-consistent when compared to the expected phenomenology and wind tunnel experiments.
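
    One such estimate forms an effective viscosity from the resolved dissipation rate and enstrophy; the sketch below assumes the standard symbols and normalization, not necessarily the exact definitions used in the article:

    ```python
    def effective_reynolds(u_rms, integral_length, dissipation, enstrophy):
        """Effective Reynolds number of an ILES field from an effective
        viscosity inferred from resolved statistics:
            nu_eff = dissipation / enstrophy,  Re_eff = u' * L / nu_eff.
        For true homogeneous turbulence epsilon = nu * <omega_i omega_i>,
        so nu_eff recovers the molecular viscosity as a sanity check."""
        nu_eff = dissipation / enstrophy
        return u_rms * integral_length / nu_eff
    ```

    Because ILES carries no explicit viscosity, such a posteriori estimates are the only way to attach a Reynolds number to the simulated field, and their mutual consistency is what the comparison above evaluates.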

  3. Old Star's "Rebirth" Gives Astronomers Surprises

    Science.gov (United States)

    2005-04-01

Astronomers using the National Science Foundation's Very Large Array (VLA) radio telescope are taking advantage of a once-in-a-lifetime opportunity to watch an old star suddenly stir back into new activity after coming to the end of its normal life. Their surprising results have forced them to change their ideas of how such an old, white dwarf star can re-ignite its nuclear furnace for one final blast of energy. [Image: radio/optical views of Sakurai's Object; the color image shows the nebula ejected thousands of years ago, with contours indicating radio emission; an inset Hubble Space Telescope image shows the central part of the region. Credit: Hajduk et al., NRAO/AUI/NSF, ESO, STScI, NASA] Computer simulations had predicted a series of events that would follow such a re-ignition of fusion reactions, but the star didn't follow the script -- events moved 100 times more quickly than the simulations predicted. "We've now produced a new theoretical model of how this process works, and the VLA observations have provided the first evidence supporting our new model," said Albert Zijlstra, of the University of Manchester in the United Kingdom. Zijlstra and his colleagues presented their findings in the April 8 issue of the journal Science. The astronomers studied a star known as V4334 Sgr, in the constellation Sagittarius. It is better known as "Sakurai's Object," after Japanese amateur astronomer Yukio Sakurai, who discovered it on February 20, 1996, when it suddenly burst into new brightness. At first, astronomers thought the outburst was a common nova explosion, but further study showed that Sakurai's Object was anything but common. The star is an old white dwarf that had run out of hydrogen fuel for nuclear fusion reactions in its core. Astronomers believe that some such stars can undergo a final burst of fusion in a shell of helium that surrounds a core of heavier nuclei such as carbon and oxygen. However, the

  4. Slepian simulation of distributions of plastic displacements of earthquake excited shear frames with a large number of stories

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Ditlevsen, Ove

    2005-01-01

The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elasti...

  5. Introduction to the spectral distribution method. Application example to the subspaces with a large number of quasi particles

    International Nuclear Information System (INIS)

    Arvieu, R.

    The assumptions and principles of the spectral distribution method are reviewed. The object of the method is to deduce information on nuclear spectra by constructing a frequency function that has the same first few moments as the exact frequency function, these moments then being calculated exactly. The method is applied to subspaces containing a large number of quasi particles [fr

  6. Strong Law of Large Numbers for Hidden Markov Chains Indexed by an Infinite Tree with Uniformly Bounded Degrees

    Directory of Open Access Journals (Sweden)

    Huilin Huang

    2014-01-01

    Full Text Available We study strong limit theorems for hidden Markov chain fields indexed by an infinite tree with uniformly bounded degrees. We mainly establish the strong law of large numbers for hidden Markov chain fields indexed by such a tree and give the strong limit law of the conditional sample entropy rate.

  7. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases

    NARCIS (Netherlands)

    Heidema, A.G.; Boer, J.M.A.; Nagelkerke, N.; Mariman, E.C.M.; A, van der D.L.; Feskens, E.J.M.

    2006-01-01

    Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods

  8. Hydrodynamic interaction on large-Reynolds-number aligned bubbles: Drag effects

    International Nuclear Information System (INIS)

    Ramirez-Munoz, J.; Salinas-Rodriguez, E.; Soria, A.; Gama-Goicochea, A.

    2011-01-01

    Graphical abstract: Display Omitted Highlights: → The hydrodynamic interaction of a pair of aligned, equal-sized bubbles is analyzed. → The leading bubble wake decreases the drag on the trailing bubble. → A new semi-analytical model for the trailing bubble's drag is presented. → The equilibrium distance between bubbles is predicted. - Abstract: The hydrodynamic interaction of two equal-sized spherical gas bubbles rising along a vertical line with a Reynolds number (Re) between 50 and 200 is analyzed. An approach to estimate the trailing bubble drag based on the search for a proper reference fluid velocity is proposed. Our main result is a new, simple semi-analytical model for the trailing bubble drag. Additionally, the equilibrium separation distance between bubbles is predicted. The proposed models agree quantitatively with reported data for 50 ≤ Re ≤ 200, down to small separations between the bubbles. The relative average error for the trailing bubble drag, Er, is found to be in the range 1.1 ≤ Er ≤ 1.7, i.e., it is of the same order as the analytical predictions in the literature.
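
The wake-shielding effect in the highlights can be sketched numerically. Moore's (1963) high-Reynolds-number drag law for a clean spherical bubble is standard; the wake-deficit treatment of the trailing bubble below is an illustrative assumption, not the paper's semi-analytical model:

```python
import math

def cd_moore(re):
    """Moore (1963) drag coefficient for a clean spherical bubble at high Re."""
    return 48.0 / re * (1.0 - 2.211 / math.sqrt(re))

def cd_trailing(re, wake_deficit):
    """Trailing bubble sees oncoming flow reduced by the leading bubble's wake,
    so its effective relative velocity (and drag) is lower. `wake_deficit`
    (0..1) is a hypothetical fraction of the rise speed, not a fitted value."""
    re_eff = re * (1.0 - wake_deficit)
    # Drag force ~ Cd(Re_eff) * U_eff^2; express it as an equivalent Cd at U.
    return cd_moore(re_eff) * (1.0 - wake_deficit) ** 2

re = 100.0
lead = cd_moore(re)
trail = cd_trailing(re, wake_deficit=0.2)
print(lead, trail, trail / lead)
```

With any positive wake deficit the trailing bubble's equivalent drag coefficient comes out below the leading bubble's, which is the qualitative effect the abstract describes.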

  9. Hydrodynamic interaction on large-Reynolds-number aligned bubbles: Drag effects

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez-Munoz, J., E-mail: jrm@correo.azc.uam.mx [Departamento de Energia, Universidad Autonoma Metropolitana-Azcapotzalco, Av. San Pablo 180, Col. Reynosa Tamaulipas, 02200 Mexico D.F. (Mexico); Centro de Investigacion en Polimeros, Marcos Achar Lobaton No. 2, Tepexpan, 55885 Acolman, Edo. de Mexico (Mexico); Salinas-Rodriguez, E.; Soria, A. [Departamento de IPH, Universidad Autonoma Metropolitana-Iztapalapa, San Rafael Atlixco 186, Col. Vicentina, Iztapalapa, 09340 Mexico D.F. (Mexico); Gama-Goicochea, A. [Centro de Investigacion en Polimeros, Marcos Achar Lobaton No. 2, Tepexpan, 55885 Acolman, Edo. de Mexico (Mexico)

    2011-07-15

    Graphical abstract: Display Omitted Highlights: → The hydrodynamic interaction of a pair of aligned, equal-sized bubbles is analyzed. → The leading bubble wake decreases the drag on the trailing bubble. → A new semi-analytical model for the trailing bubble's drag is presented. → The equilibrium distance between bubbles is predicted. - Abstract: The hydrodynamic interaction of two equal-sized spherical gas bubbles rising along a vertical line with a Reynolds number (Re) between 50 and 200 is analyzed. An approach to estimate the trailing bubble drag based on the search for a proper reference fluid velocity is proposed. Our main result is a new, simple semi-analytical model for the trailing bubble drag. Additionally, the equilibrium separation distance between bubbles is predicted. The proposed models agree quantitatively with reported data for 50 ≤ Re ≤ 200, down to small separations between the bubbles. The relative average error for the trailing bubble drag, Er, is found to be in the range 1.1 ≤ Er ≤ 1.7, i.e., it is of the same order as the analytical predictions in the literature.

  10. KITSCH AND SUSTAINABLE DEVELOPMENT OF THE REGIONS THAT HAVE A LARGE NUMBER OF RELIGIOUS SETTLEMENTS

    Directory of Open Access Journals (Sweden)

    ENEA CONSTANTA

    2016-06-01

    Full Text Available We live in a world of contemporary kitsch, a world that merges the authentic and the false, where good taste often meets bad taste. The phenomenon is found everywhere: in art, in cheap literature, in media productions, in shows, in street conversation, in homes, in politics; in other words, in everyday life. Kitsch has also entered tourism directly, and it can be identified in all forms of tourism worldwide, especially in religious tourism and pilgrimage, which have enjoyed unexpected success in recent years. This paper analyzes the progressive evolution of religious tourist traffic and the ability of a religious tourism destination to remain competitive despite these problems: to attract visitors and retain their loyalty, to remain culturally unique, and to stay in permanent balance with an environment in which the religious phenomenon has been invaded by kitsch, disgracefully and dangerously mixed with authentic spirituality. How commerce, and in particular its kitsch components, affects this environment is examined from the perspective of the religious tourism offer, based on a survey of the major monastic ensembles of northern Oltenia. The research objectives were, on the one hand, the contributions and effects of the high number of visitors on the regions that hold religious sites, and, on the other hand, the weight and effects of the commercial activity, whether authentic or kitsch, carried out in or near the monastic establishments of those regions. The study focused on the northern region of Oltenia, where tourism demand is predominantly oriented toward religious tourism.

  11. Secondary organic aerosol formation from a large number of reactive man-made organic compounds

    Energy Technology Data Exchange (ETDEWEB)

    Derwent, Richard G., E-mail: r.derwent@btopenworld.com [rdscientific, Newbury, Berkshire (United Kingdom); Jenkin, Michael E. [Atmospheric Chemistry Services, Okehampton, Devon (United Kingdom); Utembe, Steven R.; Shallcross, Dudley E. [School of Chemistry, University of Bristol, Bristol (United Kingdom); Murrells, Tim P.; Passant, Neil R. [AEA Environment and Energy, Harwell International Business Centre, Oxon (United Kingdom)

    2010-07-15

    A photochemical trajectory model has been used to examine the relative propensities of a wide variety of volatile organic compounds (VOCs) emitted by human activities to form secondary organic aerosol (SOA) under one set of highly idealised conditions representing northwest Europe. This study applied a detailed speciated VOC emission inventory and the Master Chemical Mechanism version 3.1 (MCM v3.1) gas phase chemistry, coupled with an optimised representation of gas-aerosol absorptive partitioning of 365 oxygenated chemical reaction product species. In all, SOA formation was estimated from the atmospheric oxidation of 113 emitted VOCs. A number of aromatic compounds, together with some alkanes and terpenes, showed significant propensities to form SOA. When these propensities were folded into a detailed speciated emission inventory, 15 organic compounds together accounted for 97% of the SOA formation potential of UK man-made VOC emissions and 30 emission source categories accounted for 87% of this potential. After road transport and the chemical industry, SOA formation was dominated by the solvents sector, which accounted for 28% of the SOA formation potential.

  12. Normal zone detectors for a large number of inductively coupled coils

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this paper uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent
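
The detection arithmetic described above (bridges cancel the self-induced term, leaving mutual-coupling terms plus the resistive normal-zone voltage) can be sketched for two coupled coils. All parameter values below are hypothetical, chosen only to illustrate the few operations a working detector performs:

```python
# Hedged toy model (illustrative values, not from the paper): each coil
# voltage is V_i = L_i dI_i/dt + sum_j M_ij dI_j/dt + v_nz_i. A bridge
# circuit balances out the self-induced term L_i dI_i/dt, so its output is
# b_i = sum_{j != i} M_ij dI_j/dt + v_nz_i. Knowing the mutual inductances
# and the measured current ramp rates, the small resistive normal-zone
# voltage v_nz_i can be extracted by subtraction and thresholding.

M = [[0.0, 0.12],        # mutual inductances M_ij in henries (hypothetical)
     [0.12, 0.0]]
dIdt = [50.0, -30.0]     # measured current ramp rates (A/s)
v_nz_true = [0.05, 0.0]  # coil 1 carries a 50 mV normal zone

# Simulated bridge outputs (what the detector actually measures).
bridge = [sum(M[i][j] * dIdt[j] for j in range(2) if j != i) + v_nz_true[i]
          for i in range(2)]

# Detector arithmetic: subtract the known mutual-coupling contribution.
v_nz_est = [bridge[i] - sum(M[i][j] * dIdt[j] for j in range(2) if j != i)
            for i in range(2)]

threshold = 0.01  # trip threshold in volts (hypothetical)
alarms = [abs(v) > threshold for v in v_nz_est]
print(v_nz_est, alarms)
```

Only multiplications by constants, additions, and a comparison are needed per coil, matching the abstract's point that each detector is nearly independent.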

  13. The Love of Large Numbers: A Popularity Bias in Consumer Choice.

    Science.gov (United States)

    Powell, Derek; Yu, Jingqi; DeWolf, Melissa; Holyoak, Keith J

    2017-10-01

    Social learning-the ability to learn from observing the decisions of other people and the outcomes of those decisions-is fundamental to human evolutionary and cultural success. The Internet now provides social evidence on an unprecedented scale. However, properly utilizing this evidence requires a capacity for statistical inference. We examined how people's interpretation of online review scores is influenced by the numbers of reviews-a potential indicator both of an item's popularity and of the precision of the average review score. Our task was designed to pit statistical information against social information. We modeled the behavior of an "intuitive statistician" using empirical prior information from millions of reviews posted on Amazon.com and then compared the model's predictions with the behavior of experimental participants. Under certain conditions, people preferred a product with more reviews to one with fewer reviews even though the statistical model indicated that the latter was likely to be of higher quality than the former. Overall, participants' judgments suggested that they failed to make meaningful statistical inferences.
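
The paper's "intuitive statistician" uses an empirical prior fitted to millions of Amazon reviews; a much simpler stand-in is normal-normal shrinkage, where the posterior quality estimate pulls a product's average score toward a prior mean, more strongly when reviews are few. The prior and noise values below are assumptions, not the paper's fitted model:

```python
# Minimal shrinkage sketch (assumed prior values): treat true product
# quality q as Normal(mu0, tau2) and each review as Normal(q, sigma2).
# The posterior mean weights the prior and the data by their precisions.

def posterior_mean(avg_score, n, mu0=4.2, tau2=0.25, sigma2=1.0):
    precision_prior = 1.0 / tau2
    precision_data = n / sigma2
    return (precision_prior * mu0 + precision_data * avg_score) / (
        precision_prior + precision_data)

# Item A: 4.9 stars from 5 reviews; item B: 4.6 stars from 200 reviews.
qa = posterior_mean(4.9, 5)
qb = posterior_mean(4.6, 200)
print(qa, qb)
```

With these assumed numbers the model ranks the 4.6-star item with 200 reviews above the 4.9-star item with 5 reviews, which is the kind of statistical inference the participants often failed to make.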

  14. Normal zone detectors for a large number of inductively coupled coils. Revision 1

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this paper uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent. The effect on accuracy of changes in the system parameters is discussed

  15. Normal zone detectors for a large number of inductively coupled coils

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this report uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent. An example of the detector design is given for four coils with realistic parameters. The effect on accuracy of changes in the system parameters is discussed

  16. Beating the numbers through strategic intervention materials (SIMs): Innovative science teaching for large classes

    Science.gov (United States)

    Alboruto, Venus M.

    2017-05-01

    The study aimed to find out the effectiveness of using Strategic Intervention Materials (SIMs) as an innovative teaching practice in managing large Grade Eight Science classes to raise the performance of the students in terms of science process skills development and mastery of science concepts. Using an experimental research design with two purposefully chosen groups of participants, a significant difference was found between the performance of the experimental and control groups, based on actual class observation and written tests on science process skills, with a p-value of 0.0360 in favor of the experimental class. Further, results of the written pre-test and post-test on science concepts showed that the experimental group, with a mean of 24.325 (SD = 3.82), performed better than the control group, with a mean of 20.58 (SD = 4.94), with a registered p-value of 0.00039. Therefore, the use of SIMs significantly contributed to the mastery of science concepts and the development of science process skills. Based on the findings, the following recommendations are offered: 1. Grade Eight science teachers should use or adapt the SIMs used in this study to improve their students' performance; 2. training workshops on developing SIMs should be conducted to help teachers develop SIMs for use in their classes; 3. school administrators should allocate funds for the development and reproduction of SIMs to be used by the students in their schools; and 4. every division should have a repository of SIMs for easy access by teachers in the entire division.

  17. Detection of large numbers of novel sequences in the metatranscriptomes of complex marine microbial communities.

    Science.gov (United States)

    Gilbert, Jack A; Field, Dawn; Huang, Ying; Edwards, Rob; Li, Weizhong; Gilna, Paul; Joint, Ian

    2008-08-22

    Sequencing the expressed genetic information of an ecosystem (metatranscriptome) can provide information about the response of organisms to varying environmental conditions. Until recently, metatranscriptomics has been limited to microarray technology and random cloning methodologies. The application of high-throughput sequencing technology is now enabling access to both known and previously unknown transcripts in natural communities. We present a study of a complex marine metatranscriptome obtained from random whole-community mRNA using the GS-FLX Pyrosequencing technology. Eight samples, four DNA and four mRNA, were processed from two time points in a controlled coastal ocean mesocosm study (Bergen, Norway) involving an induced phytoplankton bloom producing a total of 323,161,989 base pairs. Our study confirms the finding of the first published metatranscriptomic studies of marine and soil environments that metatranscriptomics targets highly expressed sequences which are frequently novel. Our alternative methodology increases the range of experimental options available for conducting such studies and is characterized by an exceptional enrichment of mRNA (99.92%) versus ribosomal RNA. Analysis of corresponding metagenomes confirms much higher levels of assembly in the metatranscriptomic samples and a far higher yield of large gene families with >100 members, approximately 91% of which were novel. This study provides further evidence that metatranscriptomic studies of natural microbial communities are not only feasible, but when paired with metagenomic data sets, offer an unprecedented opportunity to explore both structure and function of microbial communities--if we can overcome the challenges of elucidating the functions of so many never-seen-before gene families.

  18. Tracking of large-scale structures in turbulent channel with direct numerical simulation of low Prandtl number passive scalar

    Science.gov (United States)

    Tiselj, Iztok

    2014-12-01

    Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ~2300 wall units long and ~750 wall units wide; the size was taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flows, was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows the velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) that are within 1%-2%. Similar agreement is observed for Pr = 1 temperature fields and can be observed also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations of standard and large computational domains at Pr = 0.01, show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of the large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play a visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of large scales. These large thermal structures represent a kind of echo of the large-scale velocity structures: the highest temperature-velocity correlations are not observed between the instantaneous temperatures and

  19. What caused a large number of fatalities in the Tohoku earthquake?

    Science.gov (United States)

    Ando, M.; Ishida, M.; Nishikawa, Y.; Mizuki, C.; Hayashi, Y.

    2012-04-01

    The Mw9.0 earthquake caused 20,000 deaths and missing persons in northeastern Japan. In the 115 years prior to this event, three historical tsunamis struck the region, one of which, a "tsunami earthquake", resulted in a death toll of 22,000. Since then, numerous breakwaters have been constructed along the entire northeastern coast, tsunami evacuation drills have been carried out, and hazard maps have been distributed to local residents in numerous communities. However, despite these constructions and preparedness efforts, the March 11 Tohoku earthquake caused numerous fatalities. The strong shaking lasted three minutes or longer, so all residents recognized that this was the strongest and longest earthquake they had ever experienced. The tsunami inundated an enormous area of about 560 km2 across 35 cities along the coast of northeast Japan. To find out the reasons behind the high number of fatalities due to the March 11 tsunami, we interviewed 150 tsunami survivors at public evacuation shelters in 7 cities, mainly in Iwate prefecture, in mid-April and early June 2011. Interviews lasted about 30 min or longer and focused on the survivors' evacuation behaviors and those they had observed. On the basis of the interviews, we found that residents' decisions not to evacuate immediately were partly due to or influenced by earthquake science results. Below are some of the factors that affected residents' decisions. 1. Earthquake hazard assessments turned out to be incorrect. Expected earthquake magnitudes and resultant hazards in northeastern Japan, as assessed and publicized by the government, were significantly smaller than those of the actual Tohoku earthquake. 2. Many residents did not receive accurate tsunami warnings. The initial tsunami warnings were far smaller than the actual tsunami heights. 3. Previous frequent warnings with overestimated tsunami heights influenced the behavior of the residents. 4. Many local residents above 55 years old experienced

  20. Primary Care Practice: Uncertainty and Surprise

    Science.gov (United States)

    Crabtree, Benjamin F.

    I will focus my comments on uncertainty and surprise in primary care practices. I am a medical anthropologist by training, and have been a full-time researcher in family medicine for close to twenty years. In this talk I want to look at primary care practices as complex systems, particularly taking the perspective of translating evidence into practice. I am going to discuss briefly the challenges we have in primary care, and in medicine in general, of translating new evidence into the everyday care of patients. To do this, I will look at two studies that we have conducted on family practices, then think about how practices can be best characterized as complex adaptive systems. Finally, I will focus on the implications of this portrayal for disseminating new knowledge into practice.

  1. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases.

    Science.gov (United States)

    Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M

    2006-04-21

    Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation of large numbers of genetic and environmental predictors to disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis; neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN); and several non-parametric methods, which include the set association approach, the combinatorial partitioning method (CPM), the restricted partitioning method (RPM), the multifactor dimensionality reduction (MDR) method, and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. Therefore, they are less useful than the non-parametric methods for approaching association studies with large numbers of predictor variables. GPNN, on the other hand, may be a useful approach to select and model important predictors, but its performance in selecting the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and the random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset with an important contribution to disease. The combinatorial methods give more insight into combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses we conclude that to approach genetic association
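
The appeal of ensemble methods such as random forests for large numbers of predictors can be illustrated with a toy ensemble of decision stumps on synthetic genotype data. This is a crude sketch of the idea, not any of the cited implementations, and all data and parameters are made up:

```python
import random

random.seed(1)

# Synthetic example: among 40 candidate SNPs (coded 0/1/2 minor-allele
# counts), only SNPs 0 and 1 jointly determine disease status.
n_snps, n_subjects = 40, 400
genotypes = [[random.randint(0, 2) for _ in range(n_snps)]
             for _ in range(n_subjects)]
disease = [1 if g[0] + g[1] >= 3 else 0 for g in genotypes]

def stump_score(col, rows):
    """Accuracy of the best single-threshold split on one SNP."""
    best = 0.0
    for thr in (0.5, 1.5):
        acc = sum((genotypes[r][col] > thr) == (disease[r] == 1)
                  for r in rows) / len(rows)
        best = max(best, acc, 1 - acc)
    return best

# Bootstrap rounds: score a random subset of SNPs on a resampled cohort
# and credit the winner -- a crude stand-in for random-forest importance.
importance = [0] * n_snps
for _ in range(300):
    rows = [random.randrange(n_subjects) for _ in range(n_subjects)]
    candidates = random.sample(range(n_snps), 6)
    winner = max(candidates, key=lambda c: stump_score(c, rows))
    importance[winner] += 1

top = sorted(range(n_snps), key=lambda c: -importance[c])[:2]
print(sorted(top))
```

Even with 40 candidate predictors and a joint (non-additive) disease rule, the importance tally concentrates on the two causal SNPs, which is the predictor-screening behavior the commentary attributes to the random forests approach.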

  2. Modelling high Reynolds number wall-turbulence interactions in laboratory experiments using large-scale free-stream turbulence.

    Science.gov (United States)

    Dogan, Eda; Hearst, R Jason; Ganapathisubramani, Bharathram

    2017-03-13

    A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to 'simulate' high Reynolds number wall-turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows as it demonstrates that these can be achieved at typical laboratory scales. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  3. The conceptualization model problem—surprise

    Science.gov (United States)

    Bredehoeft, John

    2005-03-01

    The foundation of model analysis is the conceptual model. Surprise is defined as new data that renders the prevailing conceptual model invalid; as defined here it represents a paradigm shift. Limited empirical data indicate that surprises occur in 20-30% of model analyses. These data suggest that groundwater analysts have difficulty selecting the appropriate conceptual model. There is no ready remedy to the conceptual model problem other than (1) to collect as much data as is feasible, using all applicable methods—a complementary data collection methodology can lead to new information that changes the prevailing conceptual model, and (2) for the analyst to remain open to the fact that the conceptual model can change dramatically as more information is collected. In the final analysis, the hydrogeologist makes a subjective decision on the appropriate conceptual model. The conceptualization problem does not render models unusable. The problem introduces an uncertainty that often is not widely recognized. Conceptual model uncertainty is exacerbated in making long-term predictions of system performance.

  4. Recreating Raven's: software for systematically generating large numbers of Raven-like matrix problems with normed properties.

    Science.gov (United States)

    Matzen, Laura E; Benz, Zachary O; Dixon, Kevin R; Posey, Jamie; Kroger, James K; Speed, Ann E

    2010-05-01

    Raven's Progressive Matrices is a widely used test for assessing intelligence and reasoning ability (Raven, Court, & Raven, 1998). Since the test is nonverbal, it can be applied to many different populations and has been used all over the world (Court & Raven, 1995). However, relatively few matrices are in the sets developed by Raven, which limits their use in experiments requiring large numbers of stimuli. For the present study, we analyzed the types of relations that appear in Raven's original Standard Progressive Matrices (SPMs) and created a software tool that can combine the same types of relations according to parameters chosen by the experimenter, to produce very large numbers of matrix problems with specific properties. We then conducted a norming study in which the matrices we generated were compared with the actual SPMs. This study showed that the generated matrices both covered and expanded on the range of problem difficulties provided by the SPMs.
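
The generation idea, combining a small set of relation types over stimulus attributes according to experimenter-chosen parameters, can be sketched as follows. The relation names and attribute coding here are hypothetical illustrations, not the actual software's API:

```python
import random

random.seed(2)

# Illustrative relation types inspired by analyses of the Standard
# Progressive Matrices; values are abstract integer codes per cell.
RELATIONS = {
    "constant":    lambda r, c: 1,                # same value across a row
    "progression": lambda r, c: c + 1,            # increases left to right
    "distribute3": lambda r, c: (r + c) % 3 + 1,  # each value once per row
}
ATTRIBUTES = ["shape_count", "size", "shading"]

def generate_matrix(n_relations):
    """Build one 3x3 problem; each chosen attribute follows one relation.
    Difficulty can be scaled via the number of relations combined."""
    chosen = random.sample(ATTRIBUTES, n_relations)
    rules = {a: random.choice(list(RELATIONS)) for a in chosen}
    matrix = [[{a: RELATIONS[rules[a]](r, c) for a in chosen}
               for c in range(3)] for r in range(3)]
    answer = matrix[2][2]  # the bottom-right cell is withheld from subjects
    return rules, matrix, answer

rules, matrix, answer = generate_matrix(2)
print(rules, answer)
```

Because relations compose independently across attributes, even a handful of relation types yields a combinatorially large pool of distinct problems with controllable properties.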

  5. Strong Law of Large Numbers for Countable Markov Chains Indexed by an Infinite Tree with Uniformly Bounded Degree

    Directory of Open Access Journals (Sweden)

    Bao Wang

    2014-01-01

    Full Text Available We study the strong law of large numbers for the frequencies of occurrence of states and ordered couples of states for countable Markov chains indexed by an infinite tree with uniformly bounded degree, which extends the corresponding results of countable Markov chains indexed by a Cayley tree and generalizes the relative results of finite Markov chains indexed by a uniformly bounded tree.
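
The tree-indexed result above generalizes the classical ergodic strong law for a single Markov chain, which is easy to check numerically: the empirical frequencies of state occurrence converge to the stationary distribution. A minimal sketch with a hypothetical two-state chain (the transition matrix is illustrative):

```python
import random

random.seed(0)

# Two-state Markov chain with transition matrix P. Its stationary
# distribution solves pi P = pi: here pi = (0.6, 0.4).
P = [[0.8, 0.2],
     [0.3, 0.7]]

def step(state):
    """Sample the next state from row `state` of P."""
    return 0 if random.random() < P[state][0] else 1

counts = [0, 0]
state = 0
n_steps = 200_000
for _ in range(n_steps):
    state = step(state)
    counts[state] += 1

# By the strong law of large numbers for Markov chains, the empirical
# frequency of state 0 converges almost surely to pi_0 = 0.6.
freq0 = counts[0] / n_steps
print(freq0)
```
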

  6. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2015-01-01

    In this paper, we investigate the performance of point-to-point multiple-input multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.
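
The kind of question the paper answers can be illustrated with a much-simplified Monte Carlo stand-in: single-input receive diversity with maximal-ratio combining under i.i.d. Rayleigh fading, rather than the paper's full MIMO/HARQ analysis. The SNR, rate, and outage-target values are hypothetical:

```python
import math
import random

random.seed(3)

def outage_prob(n_rx, snr_db=0.0, rate=2.0, trials=20_000):
    """Monte Carlo estimate of P(log2(1 + SNR * sum|h_i|^2) < rate)
    for n_rx receive antennas under unit-power Rayleigh fading."""
    snr = 10 ** (snr_db / 10)
    fails = 0
    for _ in range(trials):
        # |h_i|^2 for unit-power Rayleigh fading is exponential(1).
        gain = sum((random.gauss(0, 1) ** 2 + random.gauss(0, 1) ** 2) / 2
                   for _ in range(n_rx))
        if math.log2(1 + snr * gain) < rate:
            fails += 1
    return fails / trials

# Smallest receive-antenna count meeting a 1% outage target.
n_min = next(n for n in range(1, 33) if outage_prob(n) <= 0.01)
print(n_min)
```

Even in this toy setting a strict outage constraint is met with under a dozen antennas, echoing the paper's conclusion that relatively few transmit/receive antennas can satisfy different outage requirements.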

  7. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz

    2015-11-12

    In this paper, we investigate the performance of point-to-point multiple-input multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum number of transmit/receive antennas required to satisfy different outage probability constraints. We study the effect of spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.

  8. Direct and large eddy simulation of turbulent heat transfer at very low Prandtl number: Application to lead–bismuth flows

    International Nuclear Information System (INIS)

    Bricteux, L.; Duponcheel, M.; Winckelmans, G.; Tiselj, I.; Bartosiewicz, Y.

    2012-01-01

    Highlights: ► We perform direct and hybrid large-eddy simulations of high-Reynolds, low-Prandtl turbulent wall-bounded flows with heat transfer. ► We use state-of-the-art numerical methods with low energy dissipation and low dispersion. ► We use recent multiscale subgrid-scale models. ► Important results concerning the establishment of a near-wall modeling strategy in RANS are provided. ► The turbulent Prandtl number predicted by our simulations differs from that proposed by some correlations in the literature. - Abstract: This paper deals with the issue of modeling convective turbulent heat transfer of a liquid metal with a Prandtl number down to 0.01, which is the order of magnitude of lead–bismuth eutectic in a liquid metal reactor. This work presents a DNS (direct numerical simulation) and a LES (large eddy simulation) of a channel flow at two different Reynolds numbers, and the results are analyzed in the frame of best-practice guidelines for RANS (Reynolds-averaged Navier–Stokes) computations used in industrial applications. They primarily show that the turbulent Prandtl number concept should be used with care and that even recently proposed correlations may not be sufficient.
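    As a concrete illustration of why low-Pr fluids strain constant-Pr_t assumptions, one widely cited algebraic correlation (a Kays-type formula; our choice for illustration, not one used or endorsed by the paper) makes the turbulent Prandtl number blow up as the turbulent Peclet number Pe_t = Pr * (nu_t / nu) becomes small:

```python
def turbulent_prandtl_kays(pr, eddy_viscosity_ratio):
    """Kays-type correlation: Pr_t = 0.85 + 0.7 / Pe_t, with the turbulent
    Peclet number Pe_t = Pr * (nu_t / nu)."""
    pe_t = pr * eddy_viscosity_ratio
    return 0.85 + 0.7 / pe_t

# Air-like fluid (Pr ~ 0.7): Pr_t stays near the classical ~0.9
print(turbulent_prandtl_kays(0.7, 10.0))    # ≈ 0.95
# Lead-bismuth-like fluid (Pr ~ 0.01): Pr_t is an order of magnitude larger
print(turbulent_prandtl_kays(0.01, 10.0))   # ≈ 7.85
```

    The same eddy-viscosity ratio yields wildly different Pr_t for air and for a liquid metal, which is the abstract's caution about applying off-the-shelf correlations to Pr ≈ 0.01 flows.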

  9. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
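    The bias/variance/MSE comparison can be made concrete with a small helper (a hypothetical toy implementation on flat voxel lists, not the VIP project's code): per voxel, over repeated reconstructions of the same object, MSE decomposes as squared bias plus variance.

```python
def image_metrics(reconstructions, truth):
    """Voxel-averaged squared bias, variance and MSE over repeated
    reconstructions of the same object (images given as flat voxel lists)."""
    n = len(reconstructions)
    v = len(truth)
    bias_sq_sum = var_sum = 0.0
    for j in range(v):
        vals = [img[j] for img in reconstructions]
        mean = sum(vals) / n
        bias = mean - truth[j]
        bias_sq_sum += bias * bias
        var_sum += sum((x - mean) ** 2 for x in vals) / n
    # MSE = bias^2 + variance, averaged over voxels
    return bias_sq_sum / v, var_sum / v, (bias_sq_sum + var_sum) / v

# Two noisy 2-voxel "reconstructions" of the truth [1.0, 2.0]:
b2, var, mse = image_metrics([[1.1, 2.0], [0.9, 2.2]], [1.0, 2.0])
print(b2, var, mse)  # ≈ 0.005, 0.01, 0.015
```

    In a study like the one above, `reconstructions` would come from repeated noisy simulations of the same phantom, one list per algorithm under comparison.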

  10. A Multilayer Secure Biomedical Data Management System for Remotely Managing a Very Large Number of Diverse Personal Healthcare Devices

    Directory of Open Access Journals (Sweden)

    KeeHyun Park

    2015-01-01

    Full Text Available In this paper, a multilayer secure biomedical data management system for managing a very large number of diverse personal healthcare devices (PHDs) is proposed. The system has the following characteristics: it supports international standard communication protocols to achieve interoperability; it is integrated, in the sense that both a PHD communication system and a remote PHD management system work together as a single system; and it provides user/message authentication processes to securely transmit biomedical data measured by PHDs, based on the concept of a biomedical signature. Some experiments, including a stress test, have been conducted to show that the system proposed and constructed in this study performs very well even when a very large number of PHDs are used. For the stress test, up to 1,200 threads were created to represent the same number of PHD agents. The loss ratio of ISO/IEEE 11073 messages in the normal system is as high as 14% when 1,200 PHD agents are connected. In contrast, no message loss occurs in the multilayered system proposed in this study, which demonstrates the superiority of the multilayered system over the normal system under heavy traffic.

  11. Slepian simulation of distributions of plastic displacements of earthquake excited shear frames with a large number of stories

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Ditlevsen, Ove

    2005-01-01

    The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elastically within the yield limits and ideally plastically outside them, without accumulating eigenstresses. Within the elastic domain the frame is modeled as a linearly damped oscillator. The white noise excitation acts on the mass of the first floor, making the movement of the elastic bottom floors simulate a ground...

  12. Atomic Number Dependence of Hadron Production at Large Transverse Momentum in 300 GeV Proton-Nucleus Collisions

    Science.gov (United States)

    Cronin, J. W.; Frisch, H. J.; Shochet, M. J.; Boymond, J. P.; Mermod, R.; Piroue, P. A.; Sumner, R. L.

    1974-07-15

    In an experiment at the Fermi National Accelerator Laboratory we have compared the production of large transverse momentum hadrons from targets of W, Ti, and Be bombarded by 300 GeV protons. The hadron yields were measured at 90 degrees in the proton-nucleon c.m. system with a magnetic spectrometer equipped with two Cerenkov counters and a hadron calorimeter. The production cross-sections have a dependence on the atomic number A that grows with transverse momentum, eventually leveling off proportional to A^1.1.
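    The A^alpha dependence behind a statement like "proportional to A^1.1" is just a straight-line fit in log-log space. A sketch with synthetic yields (the mass numbers are approximate and the cross-section values fabricated, not the experiment's data):

```python
import math

def fit_power_law_exponent(A, sigma):
    """Least-squares slope of log(sigma) vs log(A),
    i.e. alpha in sigma = C * A**alpha."""
    xs = [math.log(a) for a in A]
    ys = [math.log(s) for s in sigma]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Synthetic cross-sections scaling exactly as A^1.1 for Be, Ti, W targets:
A = [9, 48, 184]
sigma = [3.0 * a ** 1.1 for a in A]
print(fit_power_law_exponent(A, sigma))  # ≈ 1.1
```

    With real data one point per target and per transverse-momentum bin would be fitted, giving an exponent alpha that itself varies with momentum, as the abstract describes.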

  13. Summary of experience from a large number of construction inspections; Wind power plant projects; Erfarenhetsaaterfoering fraan entreprenadbesiktningar

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Bertil; Holmberg, Rikard

    2010-08-15

    This report presents a summary of experience from a large number of construction inspections of wind power projects. The working method is based on the collection of construction experience in the form of questionnaires. The questionnaires were supplemented by a number of in-depth interviews to understand in more detail what is perceived to be a problem and whether there were suggestions for improvements. The results in this report are based on inspection protocols from 174 wind turbines, which corresponds to about one-third of the power plants built in the time period. In total the questionnaires included 4,683 inspection remarks as well as about one hundred free-text comments. 52 of the 174 inspected power stations were rejected, corresponding to 30%. It has not been possible to identify any overrepresented type of remark as a main cause of rejection; rejection is usually based on a total number of remarks that is too large. The average number of remarks for a power plant is 27. Most power stations have between 20 and 35 remarks. The most common remarks concern shortcomings in marking and documentation. These are easily adjusted, and may be regarded as less serious. There are, however, a number of remarks which are recurrent and quite serious, mainly regarding gearboxes, education and lightning protection. Usually these are also easily adjusted, but the consequences if they are not corrected can be very large: either shortened life of expensive components, e.g. oil problems in gearboxes, or increased probability of serious accidents, e.g. maladjusted lightning protection. In the report, comparisons between power stations with various construction periods, sizes, suppliers, geographies and topographies are also presented. The general conclusion is that the differences are small. The results of the evaluation of questionnaires correspond well with the results of the in-depth interviews with clients.
The problem that clients agreed upon as the greatest is the lack

  14. Equilibrium deuterium isotope effect of surprising magnitude

    International Nuclear Information System (INIS)

    Goldstein, M.J.; Pressman, E.J.

    1981-01-01

    Seemingly large deuterium isotope effects are reported for the preference of deuterium for the α-chloro site over the bridgehead or the vinyl site in samples of anti-7-chlorobicyclo[4.3.2]undecatetraene-d₁. Studies of molecular models did not provide a basis for these large equilibrium deuterium isotope effects. The possibility is proposed that these isotope effects only appear to be large for want of comparison with isotope effects measured for molecules that might provide even greater contrasts in local force fields.

  15. On the Use of Educational Numbers: Comparative Constructions of Hierarchies by Means of Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Daniel Pettersson

    2016-01-01

    later the growing importance of transnational agencies and international, regional and national assessments. How to reference this article: Pettersson, D., Popkewitz, T. S., & Lindblad, S. (2016). On the Use of Educational Numbers: Comparative Constructions of Hierarchies by Means of Large-Scale Assessments. Espacio, Tiempo y Educación, 3(1), 177-202. doi: http://dx.doi.org/10.14516/ete.2016.003.001.10

  16. A Genome-Wide Association Study in Large White and Landrace Pig Populations for Number Piglets Born Alive

    Science.gov (United States)

    Bergfelder-Drüing, Sarah; Grosse-Brinkhaus, Christine; Lind, Bianca; Erbe, Malena; Schellander, Karl; Simianer, Henner; Tholen, Ernst

    2015-01-01

    The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between the Large White and Landrace breeds were detected. PMID:25781935

  17. A genome-wide association study in large white and landrace pig populations for number piglets born alive.

    Directory of Open Access Journals (Sweden)

    Sarah Bergfelder-Drüing

    Full Text Available The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between the Large White and Landrace breeds were detected.

  18. Surprises in the suddenly-expanded infinite well

    International Nuclear Information System (INIS)

    Aslangul, Claude

    2008-01-01

    I study the time evolution of a particle prepared in the ground state of an infinite well after the latter is suddenly expanded. It turns out that the probability density |Ψ(x, t)|² exhibits quite surprising behaviour: at definite times, plateaux appear on which |Ψ(x, t)|² is constant over finite intervals of x. Elements of theoretical explanation are given by analysing the singular component of the second derivative ∂²Ψ(x, t)/∂x². Analytical closed expressions are obtained for some specific times, which easily allow us to show that, at these times, the density organizes itself into regular patterns provided the size of the box is large enough; moreover, above some critical size depending on the specific time, the density patterns are independent of the expansion parameter. It is seen how the density at these times simply results from a construction game with definite rules acting on the pieces of the initial density

  19. The Value of Change: Surprises and Insights in Stellar Evolution

    Science.gov (United States)

    Bildsten, Lars

    2018-01-01

    Astronomers with large-format cameras regularly scan the sky many times per night to detect what's changing, and telescopes in space such as Kepler and, soon, TESS obtain very accurate brightness measurements of nearly a million stars over time periods of years. These capabilities, in conjunction with theoretical and computational efforts, have yielded surprises and remarkable new insights into the internal properties of stars and how they end their lives. I will show how asteroseismology reveals the properties of the deep interiors of red giants, and highlight how astrophysical transients may be revealing unusual thermonuclear outcomes from exploding white dwarfs and the births of highly magnetic neutron stars. All the while, stellar science has been accelerated by the availability of open source tools, such as Modules for Experiments in Stellar Astrophysics (MESA), and the nearly immediate availability of observational results.

  20. Factors associated with self-reported number of teeth in a large national cohort of Thai adults

    Directory of Open Access Journals (Sweden)

    Yiengprugsawan Vasoontara

    2011-11-01

    Full Text Available Abstract Background: Oral health in later life results from an individual's lifelong accumulation of experiences at the personal, community and societal levels. There is little information relating oral health outcomes to risk factors in Asian middle-income settings such as Thailand today. Methods: Data derive from a cohort of 87,134 adults enrolled at Sukhothai Thammathirat Open University who completed self-administered questionnaires in 2005. Cohort members were aged between 15 and 87 years and resided throughout Thailand. This is a large study of self-reported number of teeth among Thai adults. Bivariate and multivariate logistic regressions were used to analyse factors associated with self-reported number of teeth. Results: After adjusting for covariates, being female (OR = 1.28), older age (OR = 10.6), having low income (OR = 1.45), having lower education (OR = 1.33), and being a lifetime urban resident (OR = 1.37) were statistically significantly associated with fewer teeth. Conclusions: This study addresses the gap in knowledge on factors associated with self-reported number of teeth. The promotion of healthy childhoods and adult lifestyles are important public health interventions to increase tooth retention in middle and older age.
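    The adjusted odds ratios quoted above come from multivariate logistic regression, but the unadjusted quantity behind such numbers reduces to a 2x2 cross-product. A sketch with made-up counts (purely illustrative, not the cohort's data):

```python
def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio from a 2x2 table:
         exposed:   a cases, b non-cases
         unexposed: c cases, d non-cases
       OR = (a/b) / (c/d) = (a*d) / (b*c)."""
    return (a * d) / (b * c)

# Hypothetical: 400 of 1000 exposed vs 280 of 1000 unexposed report fewer teeth
print(odds_ratio(400, 600, 280, 720))  # ≈ 1.71
```

    An OR above 1 means the exposure is associated with higher odds of the outcome; the study's reported values additionally adjust for the other covariates via regression.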

  1. Analysis of a large number of clinical studies for breast cancer radiotherapy: estimation of radiobiological parameters for treatment planning

    International Nuclear Information System (INIS)

    Guerrero, M; Li, X Allen

    2003-01-01

    Numerous studies of early-stage breast cancer treated with breast conserving surgery (BCS) and radiotherapy (RT) have been published in recent years. Both external beam radiotherapy (EBRT) and/or brachytherapy (BT) with different fractionation schemes are currently used. The present RT practice is largely based on empirical experience and it lacks a reliable modelling tool to compare different RT modalities or to design new treatment strategies. The purpose of this work is to derive a plausible set of radiobiological parameters that can be used for RT treatment planning. The derivation is based on existing clinical data and is consistent with the analysis of a large number of published clinical studies on early-stage breast cancer. A large number of published clinical studies on the treatment of early breast cancer with BCS plus RT (including whole breast EBRT with or without a boost to the tumour bed, whole breast EBRT alone, brachytherapy alone) and RT alone are compiled and analysed. The linear quadratic (LQ) model is used in the analysis. Three of these clinical studies are selected to derive a plausible set of LQ parameters. The potential doubling time T_pot is set a priori in the derivation according to in vitro measurements from the literature. The impact of considering lower or higher T_pot is investigated. The effects of inhomogeneous dose distributions are considered using clinically representative dose volume histograms. The derived LQ parameters are used to compare a large number of clinical studies using different regimes (e.g., RT modality and/or different fractionation schemes with different prescribed dose) in order to validate their applicability. The values of the equivalent uniform dose (EUD) and biologically effective dose (BED) are used as a common metric to compare the biological effectiveness of each treatment regime. We have obtained a plausible set of radiobiological parameters for breast cancer. 
This set of parameters is consistent with in vitro

  2. Fundamental surprise in the application of airpower

    Science.gov (United States)

    2017-05-25

    …explain transformations in scientific research proposed by Thomas Kuhn in his book "The Structure of Scientific Revolutions." Kuhn proposed the idea that the accepted traditions of scientific research within a particular community, known as a paradigm, provide the tools to perform "normal science" … the large-scale attacks on Lebanese infrastructure would have on the regime, this concept was a non-starter. The order issued to the IDF on 12 July …

  3. Evaluation of two sweeping methods for estimating the number of immature Aedes aegypti (Diptera: Culicidae) in large containers

    Directory of Open Access Journals (Sweden)

    Margareth Regina Dibo

    2013-07-01

    Full Text Available Introduction: Here, we evaluated sweeping methods used to estimate the number of immature Aedes aegypti in large containers. Methods: Instars III/IV and pupae at a 9:1 ratio were placed in three types of containers, each with three different water levels. Two sweeping methods were tested: water-surface sweeping and five-sweep netting. The data were analyzed using linear regression. Results: The five-sweep netting technique was more suitable for drums and water-tanks, while the water-surface sweeping method provided the best results for swimming pools. Conclusions: Both sweeping methods are useful tools in epidemiological surveillance programs for the control of Aedes aegypti.
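    The linear-regression step that relates sweep counts to true larval numbers can be sketched with a closed-form simple regression (all numbers below are invented for illustration; the study's actual data are in the paper):

```python
def ols_fit(x, y):
    """Closed-form simple linear regression: (intercept, slope) for y ≈ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical calibration: five-sweep-netting counts vs. exhaustive counts
swept = [12, 25, 40, 55, 70]
total = [2 * s + 5 for s in swept]   # fabricated "true = 2*swept + 5" relation
a, b = ols_fit(swept, total)
print(a, b)  # ≈ 5.0, 2.0
```

    Once calibrated this way, a field sweep count can be converted to an estimated total, which is how a sweeping method becomes a surveillance tool.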

  4. Production of large number of water-cooled excitation coils with improved techniques for multipole magnets of INDUS-2

    International Nuclear Information System (INIS)

    Karmarkar, M.G.; Sreeramulu, K.; Kulshreshta, P.K.

    2003-01-01

    Accelerator multipole magnets are characterized by high field gradients and are powered with relatively high-current excitation coils. Due to space limitations in the magnet core/poles, compact coil geometry is also necessary. The coils are made of several insulated turns using hollow copper conductor. The high current densities in these require cooling with low-conductivity water. Additionally, during operation they are subjected to thermal fatigue stresses. A large number of coils (650 in total) having different geometries were required for all multipole magnets, such as quadrupoles (QP) and sextupoles (SP). Improved techniques for winding, insulation and epoxy consolidation were developed in-house at M D Lab and all coils have been successfully made. The improved technology and production techniques adopted for the magnet coils, and their inspection, are briefly discussed in this paper. (author)

  5. Product-selective blot: a technique for measuring enzyme activities in large numbers of samples and in native electrophoresis gels

    International Nuclear Information System (INIS)

    Thompson, G.A.; Davies, H.M.; McDonald, N.

    1985-01-01

    A method termed product-selective blotting has been developed for screening large numbers of samples for enzyme activity. The technique is particularly well suited to detection of enzymes in native electrophoresis gels. The principle of the method was demonstrated by blotting samples from glutaminase or glutamate synthase reactions into an agarose gel embedded with ion-exchange resin under conditions favoring binding of product (glutamate) over substrates and other substances in the reaction mixture. After washes to remove these unbound substances, the product was measured using either fluorometric staining or radiometric techniques. Glutaminase activity in native electrophoresis gels was visualized by a related procedure in which substrates and products from reactions run in the electrophoresis gel were blotted directly into a resin-containing image gel. Considering the selective-binding materials available for use in the image gel, along with the possible detection systems, this method has potentially broad application

  6. CrossRef Large numbers of cold positronium atoms created in laser-selected Rydberg states using resonant charge exchange

    CERN Document Server

    McConnell, R; Kolthammer, WS; Richerme, P; Müllers, A; Walz, J; Grzonka, D; Zielinski, M; Fitzakerley, D; George, MC; Hessels, EA; Storry, CH; Weel, M

    2016-01-01

    Lasers are used to control the production of highly excited positronium atoms (Ps*). The laser light excites Cs atoms to Rydberg states that have a large cross section for resonant charge-exchange collisions with cold trapped positrons. For each trial with 30 million trapped positrons, more than 700 000 of the created Ps* have trajectories near the axis of the apparatus, and are detected using Stark ionization. This number of Ps* is 500 times higher than realized in an earlier proof-of-principle demonstration (2004 Phys. Lett. B 597 257). A second charge exchange of these near-axis Ps* with trapped antiprotons could be used to produce cold antihydrogen, and this antihydrogen production is expected to be increased by a similar factor.

  7. A Theory of Evolving Natural Constants Based on the Unification of General Theory of Relativity and Dirac's Large Number Hypothesis

    International Nuclear Information System (INIS)

    Peng Huanwu

    2005-01-01

    Taking Dirac's large number hypothesis as true, we have shown [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703] the inconsistency of applying Einstein's theory of general relativity with fixed gravitation constant G to cosmology, and a modified theory for varying G is found, which reduces to Einstein's theory outside the gravitating body for phenomena of short duration in small distances, thereby agreeing with all the crucial tests formerly supporting Einstein's theory. The modified theory, when applied to the usual homogeneous cosmological model, gives rise to a variable cosmological tensor term determined by the derivatives of G, in place of the cosmological constant term usually introduced ad hoc. Without any free parameter the theoretical Hubble's relation obtained from the modified theory seems not in contradiction to observations, as Dr. Wang's preliminary analysis of the recent data indicates [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703]. As a complement to Commun. Theor. Phys. (Beijing, China) 42 (2004) 703 we shall study in this paper the modification of electromagnetism due to Dirac's large number hypothesis in more detail to show that the approximation of geometric optics still leads to null geodesics for the path of light, and that the general relation between the luminosity distance and the proper geometric distance is still valid in our theory as in Einstein's theory, and give the equations for the homogeneous cosmological model involving matter plus electromagnetic radiation. Finally we consider the impact of the modification on quantum mechanics and statistical mechanics, and arrive at a systematic theory of evolving natural constants, including Planck's ħ as well as Boltzmann's k_B, by finding out their cosmologically combined counterparts with factors of appropriate powers of G that may remain truly constant over cosmologically long times.

  8. A very large number of GABAergic neurons are activated in the tuberal hypothalamus during paradoxical (REM) sleep hypersomnia.

    Directory of Open Access Journals (Sweden)

    Emilie Sapin

    Full Text Available We recently discovered, using Fos immunostaining, that the tuberal and mammillary hypothalamus contain a massive population of neurons specifically activated during paradoxical sleep (PS) hypersomnia. We further showed that some of the activated neurons of the tuberal hypothalamus express the melanin concentrating hormone (MCH) neuropeptide and that icv injection of MCH induces a strong increase in PS quantity. However, the chemical nature of the majority of the neurons activated during PS had not been characterized. To determine whether these neurons are GABAergic, we combined in situ hybridization of GAD(67) mRNA with immunohistochemical detection of Fos in control, PS-deprived and PS-hypersomniac rats. We found that 74% of the very large population of Fos-labeled neurons located in the tuberal hypothalamus after PS hypersomnia were GAD-positive. We further demonstrated, combining MCH immunohistochemistry and GAD(67) in situ hybridization, that 85% of the MCH neurons were also GAD-positive. Finally, based on the number of Fos-ir/GAD(+), Fos-ir/MCH(+), and GAD(+)/MCH(+) double-labeled neurons counted from three sets of double-staining, we uncovered that around 80% of the large number of Fos-ir/GAD(+) neurons located in the tuberal hypothalamus after PS hypersomnia do not contain MCH. Based on these and previous results, we propose that the non-MCH Fos/GABAergic neuronal population could be involved in PS induction and maintenance while the Fos/MCH/GABAergic neurons could be involved in the homeostatic regulation of PS. Further investigations will be needed to corroborate this original hypothesis.

  9. The Influence of Negative Surprise on Hedonic Adaptation

    Directory of Open Access Journals (Sweden)

    Ana Paula Kieling

    2016-01-01

    Full Text Available After some time using a product or service, the consumer tends to feel less pleasure with consumption. This reduction of pleasure is known as hedonic adaptation. One of the emotions that interfere in this process is surprise. Based on two experiments, we suggest that negative surprise, unlike positive surprise, influences the level of pleasure foreseen and experienced by the consumer. Study 1 analyzes the influence of negative (vs. positive) surprise on the consumer's post-purchase hedonic adaptation expectation. Results showed that negative surprise influences the intensity of adaptation, augmenting its strength. Study 2 verifies the influence of negative (vs. positive) surprise on hedonic adaptation. The findings suggested that negative surprise makes adaptation happen more intensely and faster as time goes by, which has consequences for companies and consumers in the post-purchase process, such as satisfaction and loyalty.

  10. Eosinophils may play regionally disparate roles in influencing IgA(+) plasma cell numbers during large and small intestinal inflammation.

    Science.gov (United States)

    Forman, Ruth; Bramhall, Michael; Logunova, Larisa; Svensson-Frej, Marcus; Cruickshank, Sheena M; Else, Kathryn J

    2016-05-31

    Eosinophils are innate immune cells present in the intestine during steady state conditions. An intestinal eosinophilia is a hallmark of many infections and an accumulation of eosinophils is also observed in the intestine during inflammatory disorders. Classically the function of eosinophils has been associated with tissue destruction, due to the release of cytotoxic granule contents. However, recent evidence has demonstrated that the eosinophil plays a more diverse role in the immune system than previously acknowledged, including shaping adaptive immune responses and providing plasma cell survival factors during the steady state. Importantly, it is known that there are regional differences in the underlying immunology of the small and large intestine, but whether there are differences in context of the intestinal eosinophil in the steady state or inflammation is not known. Our data demonstrates that there are fewer IgA(+) plasma cells in the small intestine of eosinophil-deficient ΔdblGATA-1 mice compared to eosinophil-sufficient wild-type mice, with the difference becoming significant post-infection with Toxoplasma gondii. Remarkably, and in complete contrast, the absence of eosinophils in the inflamed large intestine does not impact on IgA(+) cell numbers during steady state, and is associated with a significant increase in IgA(+) cells post-infection with Trichuris muris compared to wild-type mice. Thus, the intestinal eosinophil appears to be less important in sustaining the IgA(+) cell pool in the large intestine compared to the small intestine, and in fact, our data suggests eosinophils play an inhibitory role. The dichotomy in the influence of the eosinophil over small and large intestinal IgA(+) cells did not depend on differences in plasma cell growth factors, recruitment potential or proliferation within the different regions of the gastrointestinal tract (GIT). 
We demonstrate for the first time that there are regional differences in the requirement of

  11. Investigating the Variability in Cumulus Cloud Number as a Function of Subdomain Size and Organization using large-domain LES

    Science.gov (United States)

    Neggers, R.

    2017-12-01

    Recent advances in supercomputing have introduced a "grey zone" in the representation of cumulus convection in general circulation models, in which this process is partially resolved. Cumulus parameterizations need to be made scale-aware and scale-adaptive to be able to conceptually and practically deal with this situation. A potential way forward is schemes formulated in terms of discretized Cloud Size Densities, or CSDs. Advantages include i) the introduction of scale-awareness at the foundation of the scheme, and ii) the possibility to apply size-filtering of parameterized convective transport and clouds. The CSD is a new variable that requires closure; this concerns its shape and its range, but also the variability in cloud number that can appear due to i) subsampling effects and ii) organization in a cloud field. The goal of this study is to gain insight by means of sub-domain analyses of various large-domain LES realizations of cumulus cloud populations. For a series of three-dimensional snapshots, each with a different degree of organization, the cloud size distribution is calculated in all subdomains, for a range of subdomain sizes. The standard deviation of the number of clouds of a certain size is found to decrease with the subdomain size, following a power-law scaling corresponding to an inverse-linear dependence. Cloud number variability also increases with cloud size; this reflects that subsampling affects the largest clouds first, due to their typically larger neighbor spacing. Rewriting this dependence in terms of two dimensionless groups, by dividing by cloud number and cloud size respectively, yields a data collapse. Organization in the cloud field is found to act on top of this primary dependence, by enhancing the cloud number variability at the smaller sizes.
This behavior reflects that small clouds start to "live" on top of larger structures such as cold pools, favoring or inhibiting their formation (as illustrated by the attached figure of cloud mask
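The subsampling effect described above can be reproduced with a toy experiment. The sketch below is a hypothetical, unorganized Poisson point process standing in for an LES cloud field (all parameters are illustrative assumptions, not the paper's data); it counts synthetic clouds in subdomains of decreasing size and prints the relative cloud-number fluctuation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cloud field: clouds placed uniformly at random in a unit
# square (a Poisson point process; real LES cloud fields are organized).
n_clouds = 20000
xy = rng.random((n_clouds, 2))

def count_stats(xy, n_split):
    """Tile the unit domain into n_split x n_split subdomains and return
    the mean and standard deviation of the per-subdomain cloud count."""
    ij = np.floor(xy * n_split).astype(int)
    flat = ij[:, 0] * n_split + ij[:, 1]
    counts = np.bincount(flat, minlength=n_split * n_split)
    return counts.mean(), counts.std()

for n_split in (2, 4, 8, 16):
    size = 1.0 / n_split               # subdomain edge length
    mean, std = count_stats(xy, n_split)
    # Relative fluctuation sigma/<N> grows roughly like 1/size
    print(f"size={size:5.3f}  <N>={mean:7.1f}  sigma/<N>={std / mean:.4f}")
```

For such a random field the relative fluctuation roughly doubles each time the subdomain edge length is halved, i.e. it scales inversely with subdomain size; organization in a real cloud field would enhance the fluctuations beyond this baseline.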

  12. A Dichotomic Analysis of the Surprise Examination Paradox

    OpenAIRE

    Franceschi, Paul

    2002-01-01

This paper presents a dichotomic analysis of the surprise examination paradox. In section 1, I analyse the notion of surprise in detail. In section 2, I introduce the distinction between a monist and a dichotomic analysis of the paradox. I also present there a dichotomy leading to a distinction between two fundamentally and structurally different versions of the paradox, based respectively on a conjoint and a disjoint definition of the surprise. In section 3, I describe the solution to SEP corresponding to...

  13. The necessity of and policy suggestions for implementing a limited number of large scale, fully integrated CCS demonstrations in China

    International Nuclear Information System (INIS)

    Li Zheng; Zhang Dongjie; Ma Linwei; West, Logan; Ni Weidou

    2011-01-01

CCS is seen as an important and strategic technology option for China to reduce its CO2 emissions, and it has received tremendous attention both around the world and in China. Scholars are divided on the role CCS should play, making the future of CCS in China highly uncertain. This paper presents the overall circumstances for CCS development in China, including the threats and opportunities for large-scale deployment of CCS, the initial barriers and advantages that China currently possesses, and the current progress of CCS demonstration in China. The paper proposes the implementation of a limited number of large-scale, fully integrated CCS demonstration projects and explains the potential benefits that could be garnered. The problems with China's current CCS demonstration work are analyzed, and some targeted policies are proposed based on those observations. These policy suggestions can effectively solve these problems, help China realize the benefits of CCS demonstration sooner, and contribute greatly to China's large CO2 reduction mission. - Highlights: → We analyze the overall circumstances for CCS development in China in detail. → China can garner multiple benefits by conducting several large, integrated CCS demos. → We present the current progress in CCS demonstration in China in detail. → Some problems exist with China's current CCS demonstration work. → Some focused policies are suggested to improve CCS demonstration in China.

  14. Fluctuations of nuclear cross sections in the region of strong overlapping resonances and at large number of open channels

    International Nuclear Information System (INIS)

    Kun, S.Yu.

    1985-01-01

On the basis of the symmetrized Simonius representation of the S matrix, the statistical properties of its fluctuating component in the presence of direct reactions are investigated. The case is considered where the resonance levels are strongly overlapping and there are many open channels, assuming that the compound-nucleus cross sections coupling different channels are equal. It is shown that, using the averaged unitarity condition on the real energy axis, one can eliminate both resonance-resonance and channel-channel correlations from the partial transition amplitudes. As a result, we derive the basic points of the Ericson fluctuation theory of nuclear cross sections, independently of the relation between the resonance overlap and the number of open channels, and the validity of the Hauser-Feshbach model is established. If the number of open channels is large, the time of uniform population of compound-nucleus configurations, for an open excited nuclear system, is much smaller than the Poincaré time. The lifetime of the compound nucleus is discussed

  15. Law of large numbers for the SIR model with random vertex weights on Erdős-Rényi graph

    Science.gov (United States)

    Xue, Xiaofeng

    2017-11-01

In this paper we are concerned with the SIR model with random vertex weights on the Erdős-Rényi graph G(n, p). The Erdős-Rényi graph G(n, p) is generated from the complete graph Cn with n vertices by independently deleting each edge with probability (1 - p). We assign i.i.d. copies of a positive random variable ρ to each vertex as the vertex weights. In the SIR model, each vertex is in one of three states: 'susceptible', 'infective' and 'removed'. An infective vertex infects a given susceptible neighbor at a rate proportional to the product of the weights of these two vertices. An infective vertex becomes removed at a constant rate. A removed vertex will never be infected again. We assume that at t = 0 there are no removed vertices and the number of infective vertices follows a Bernoulli distribution B(n, θ). Our main result is a law of large numbers for the model. We give two deterministic functions HS(ψt), HV(ψt) for t ≥ 0 and show that for any t ≥ 0, HS(ψt) is the limit proportion of susceptible vertices and HV(ψt) is the limit of the mean capability of an infective vertex to infect a given susceptible neighbor at moment t as n grows to infinity.
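A minimal discrete-time simulation can make the model's dynamics concrete. Everything in the sketch below (the rates, the exponential weight distribution, the time step) is an illustrative assumption, not a value taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumptions, not the paper's values)
n, p = 200, 0.05            # Erdos-Renyi graph G(n, p)
theta = 0.1                 # each vertex initially infective with prob. theta
beta, gamma, dt = 0.05, 1.0, 0.01

# Symmetric adjacency matrix of G(n, p)
upper = np.triu(rng.random((n, n)) < p, k=1)
adj = upper | upper.T

rho = rng.exponential(1.0, size=n)   # i.i.d. positive vertex weights

# States: 0 = susceptible, 1 = infective, 2 = removed
state = (rng.random(n) < theta).astype(int)

for _ in range(int(2.0 / dt)):       # integrate to t = 2
    infective = state == 1
    susceptible = state == 0
    # Infection rate on susceptible vertex j is beta * rho_j * sum of rho_i
    # over infective neighbours i (proportional to the weight product).
    pressure = adj[:, infective] @ rho[infective]
    infect = susceptible & (rng.random(n) < beta * rho * pressure * dt)
    recover = infective & (rng.random(n) < gamma * dt)
    state[infect] = 1
    state[recover] = 2

print("susceptible fraction at t = 2:", np.mean(state == 0))
```

Running this for increasing n, the susceptible fraction stabilizes, which is the behavior the paper's law of large numbers formalizes via HS(ψt).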

  16. Explaining the large numbers by a hierarchy of ''universes'': a unified theory of strong and gravitational interactions

    International Nuclear Information System (INIS)

    Caldirola, P.; Recami, E.

    1978-01-01

    By assuming covariance of physical laws under (discrete) dilatations, strong and gravitational interactions have been described in a unified way. In terms of the (additional, discrete) ''dilatational'' degree of freedom, our cosmos as well as hadrons can be considered as different states of the same system, or rather as similar systems. Moreover, a discrete hierarchy can be defined of ''universes'' which are governed by force fields with strengths inversely proportional to the ''universe'' radii. Inside each ''universe'' an equivalence principle holds, so that its characteristic field can be geometrized there. It is thus easy to derive a whole ''numerology'', i.e. relations among numbers analogous to the so-called Weyl-Eddington-Dirac ''large numbers''. For instance, the ''Planck mass'' happens to be nothing but the (average) magnitude of the strong charge of the hadron quarks. However, our ''numerology'' connects the (gravitational) macrocosmos with the (strong) microcosmos, rather than with the electromagnetic ones (as, e.g., in Dirac's version). Einstein-type scaled equations (with ''cosmological'' term) are suggested for the hadron interior, which - incidentally - yield a (classical) quark confinement in a very natural way and are compatible with the ''asymptotic freedom''. At last, within a ''bi-scale'' theory, further equations are proposed that provide a priori a classical field theory of strong interactions (between different hadrons). The relevant sections are 5.2, 7 and 8. (author)

  17. Modification of the large-scale features of high Reynolds number wall turbulence by passive surface obtrusions

    Energy Technology Data Exchange (ETDEWEB)

    Monty, J.P.; Lien, K.; Chong, M.S. [University of Melbourne, Department of Mechanical Engineering, Parkville, VIC (Australia); Allen, J.J. [New Mexico State University, Department of Mechanical Engineering, Las Cruces, NM (United States)

    2011-12-15

A high Reynolds number boundary-layer wind-tunnel facility at New Mexico State University was fitted with a regularly distributed braille surface. The surface was such that braille dots were closely packed in the streamwise direction and sparsely spaced in the spanwise direction. This novel surface had an unexpected influence on the flow: the energy of the very large-scale features of wall turbulence (approximately six times the boundary-layer thickness in length) became significantly attenuated, even into the logarithmic region. To the authors' knowledge, this is the first experimental study to report a modification of 'superstructures' in a rough-wall turbulent boundary layer. The result raises the possibility that flow control through very small, passive surface roughness may be achievable at high Reynolds numbers, without the prohibitive drag penalty anticipated heretofore. Evidence was also found for the uninhibited existence of the near-wall cycle, well known to smooth-wall-turbulence researchers, in the spanwise space between roughness elements. (orig.)

  18. Stars Form Surprisingly Close to Milky Way's Black Hole

    Science.gov (United States)

    2005-10-01

million low mass, sun-like stars in and around the ring, whereas in the disk model, the number of low mass stars could be much less. Nayakshin and his coauthor, Rashid Sunyaev of the Max Planck Institute in Garching, Germany, used Chandra observations to compare the X-ray glow from the region around Sgr A* to the X-ray emission from thousands of young stars in the Orion Nebula star cluster. They found that the Sgr A* star cluster contains only about 10,000 low mass stars, thereby ruling out the migration model. "We can now say that the stars around Sgr A* were not deposited there by some passing star cluster, rather they were born there," said Sunyaev. "There have been theories that this was possible, but this is the first real evidence. Many scientists are going to be very surprised by these results." Because the Galactic Center is shrouded in dust and gas, it has not been possible to look for the low-mass stars in optical observations. In contrast, X-ray data have allowed astronomers to penetrate the veil of gas and dust and look for these low mass stars. [Figure: Scenario Dismissed by Chandra Results] "In one of the most inhospitable places in our Galaxy, stars have prevailed," said Nayakshin. "It appears that star formation is much more tenacious than we previously believed." The results suggest that the "rules" of star formation change when stars form in the disk of a giant black hole. Because this environment is very different from typical star formation regions, there is a change in the proportion of stars that form. For example, there is a much higher percentage of massive stars in the disks around black holes. And, when these massive stars explode as supernovae, they will "fertilize" the region with heavy elements such as oxygen. This may explain the large amounts of such elements observed in the disks of young supermassive black holes.
NASA's Marshall Space Flight Center, Huntsville, Ala., manages the Chandra program for

  19. Development and application of an optogenetic platform for controlling and imaging a large number of individual neurons

    Science.gov (United States)

    Mohammed, Ali Ibrahim Ali

The understanding and treatment of brain disorders, as well as the development of intelligent machines, is hampered by the lack of knowledge of how the brain fundamentally functions. Over the past century, we have learned much about how individual neurons and neural networks behave; however, new tools are critically needed to interrogate how neural networks give rise to complex brain processes and disease conditions. Recent innovations in molecular techniques, such as optogenetics, have given neuroscientists unprecedented precision to excite, inhibit and record defined neurons. The impressive sensitivity of currently available optogenetic sensors and actuators now makes it possible to analyze a large number of individual neurons in the brains of behaving animals. To promote the use of these optogenetic tools, this thesis integrates cutting-edge optogenetic molecular sensors, which are ultrasensitive for imaging neuronal activity, with a custom wide-field optical microscope to analyze a large number of individual neurons in living brains. Wide-field microscopy provides a large field of view and a spatial resolution approaching the Abbe diffraction limit of the fluorescence microscope. To demonstrate the advantages of this optical platform, we imaged a deep brain structure, the hippocampus, and tracked hundreds of neurons over time while a mouse was performing a memory task, to investigate how those individual neurons related to behavior. In addition, we tested our optical platform by investigating transient neural network changes upon mechanical perturbation related to blast injuries. In this experiment, all blast-exposed mice showed a consistent change in their neural networks: a small portion of neurons showed a sustained calcium increase for an extended period of time, whereas the majority lost their activity. 
Finally, using an optogenetic silencer to control select motor cortex neurons, we examined their contributions to the network pathology of basal ganglia related to

  20. Individual differences influence two-digit number processing, but not their analog magnitude processing: a large-scale online study.

    Science.gov (United States)

    Huber, Stefan; Nuerk, Hans-Christoph; Reips, Ulf-Dietrich; Soltanlou, Mojtaba

    2017-12-23

    Symbolic magnitude comparison is one of the most well-studied cognitive processes in research on numerical cognition. However, while the cognitive mechanisms of symbolic magnitude processing have been intensively studied, previous studies have paid less attention to individual differences influencing symbolic magnitude comparison. Employing a two-digit number comparison task in an online setting, we replicated previous effects, including the distance effect, the unit-decade compatibility effect, and the effect of cognitive control on the adaptation to filler items, in a large-scale study in 452 adults. Additionally, we observed that the most influential individual differences were participants' first language, time spent playing computer games and gender, followed by reported alcohol consumption, age and mathematical ability. Participants who used a first language with a left-to-right reading/writing direction were faster than those who read and wrote in the right-to-left direction. Reported playing time for computer games was correlated with faster reaction times. Female participants showed slower reaction times and a larger unit-decade compatibility effect than male participants. Participants who reported never consuming alcohol showed overall slower response times than others. Older participants were slower, but more accurate. Finally, higher grades in mathematics were associated with faster reaction times. We conclude that typical experiments on numerical cognition that employ a keyboard as an input device can also be run in an online setting. Moreover, while individual differences have no influence on domain-specific magnitude processing-apart from age, which increases the decade distance effect-they generally influence performance on a two-digit number comparison task.

  1. Older Galaxy Pair Has Surprisingly Youthful Glow

    Science.gov (United States)

    2007-01-01

[figure removed for brevity, see original site] A pair of interacting galaxies might be experiencing the galactic equivalent of a mid-life crisis. For some reason, the pair, called Arp 82, didn't make their stars early on as is typical of most galaxies. Instead, they got a second wind later in life -- about 2 billion years ago -- and started pumping out waves of new stars as if they were young again. Arp 82 is an interacting pair of galaxies with a strong bridge and a long tail. NGC 2535 is the big galaxy and NGC 2536 is its smaller companion. The disk of the main galaxy looks like an eye, with a bright 'pupil' in the center and oval-shaped 'eyelids.' Dramatic 'beads on a string' features are visible as chains of evenly spaced star-formation complexes along the eyelids. These are presumably the result of large-scale gaseous shocks from a grazing encounter. The colors of this galaxy indicate that the observed stars are young to intermediate in age, around 2 million to 2 billion years old, much less than the age of the universe (13.7 billion years). The puzzle is: why didn't Arp 82 form many stars earlier, like most galaxies of that mass range? Scientifically, it is an oddball and provides a relatively nearby lab for studying the age of intermediate-mass galaxies. This picture is a composite captured by Spitzer's infrared array camera with light at wavelength 8 microns shown in red, NASA's Galaxy Evolution Explorer combined 1530 and 2310 Angstroms shown in blue, and the Southeastern Association for Research in Astronomy Observatory light at 6940 Angstroms shown in green.

  2. 'Surprise': Outbreak of Campylobacter infection associated with chicken liver pâté at a surprise birthday party, Adelaide, Australia, 2012.

    Science.gov (United States)

    Parry, Amy; Fearnley, Emily; Denehy, Emma

    2012-10-01

In July 2012, an outbreak of Campylobacter infection was investigated by the South Australian Communicable Disease Control Branch and Food Policy and Programs Branch. The initial notification identified illness at a surprise birthday party held at a restaurant on 14 July 2012. The objective of the investigation was to identify the potential source of infection and institute appropriate intervention strategies to prevent further illness. A guest list was obtained and a retrospective cohort study undertaken. A combination of paper-based and telephone questionnaires was used to collect exposure and outcome information. An environmental investigation was conducted by Food Policy and Programs Branch at the implicated premises. All 57 guests completed the questionnaire (100% response rate), and 15 met the case definition. Analysis showed a significant association between illness and consumption of chicken liver pâté (relative risk: 16.7, 95% confidence interval: 2.4-118.6). No other food or beverage served at the party was associated with illness. Three guests submitted stool samples; all were positive for Campylobacter. The environmental investigation identified that the cooking process used in the preparation of chicken liver pâté may have been inconsistent, resulting in some portions not cooked adequately to inactivate potential Campylobacter contamination. Chicken liver products are a known source of Campylobacter infection; therefore, education of food handlers remains a high priority. To better identify outbreaks among the large number of Campylobacter notifications, routine typing of Campylobacter isolates is recommended.

  3. The Value of Surprising Findings for Research on Marketing

    OpenAIRE

    JS Armstrong

    2004-01-01

    In the work of Armstrong (Journal of Business Research, 2002), I examined empirical research on the scientific process and related these to marketing science. The findings of some studies were surprising. In this reply, I address surprising findings and other issues raised by commentators.

  4. Computational domain length and Reynolds number effects on large-scale coherent motions in turbulent pipe flow

    Science.gov (United States)

    Feldmann, Daniel; Bauer, Christian; Wagner, Claus

    2018-03-01

We present results from direct numerical simulations (DNS) of turbulent pipe flow at shear Reynolds numbers up to Reτ = 1500 using different computational domains with lengths up to ?. The objectives are to analyse the effect of the finite size of the periodic pipe domain on large flow structures as a function of Reτ and to assess a minimum ? required for relevant turbulent scales to be captured and a minimum Reτ for very large-scale motions (VLSM) to be analysed. Analysing one-point statistics revealed that the mean velocity profile is invariant for ?. The wall-normal location at which deviations occur in shorter domains changes strongly with increasing Reτ from the near-wall region to the outer layer, where VLSM are believed to live. The root mean square velocity profiles exhibit domain length dependencies for pipes shorter than 14R and 7R, depending on Reτ. For all Reτ, the higher-order statistical moments show only weak dependencies, and only for the shortest domain considered here. However, the analysis of one- and two-dimensional pre-multiplied energy spectra revealed that even for larger ?, not all physically relevant scales are fully captured, even though the aforementioned statistics are in good agreement with the literature. We found ? to be sufficiently large to capture VLSM-relevant turbulent scales in the considered range of Reτ, based on our definition of an integral energy threshold of 10%. The requirement to capture at least 1/10 of the global maximum energy level is justified by a 14% increase of the streamwise turbulence intensity in the outer region between Reτ = 720 and 1500, which can be related to VLSM-relevant length scales. Based on this scaling anomaly, we found Reτ⪆1500 to be a necessary minimum requirement to investigate VLSM-related effects in pipe flow, even though the streamwise energy spectra do not yet indicate sufficient scale separation between the most energetic and the very long motions.

  5. Catering for large numbers of tourists: the McDonaldization of casual dining in Kruger National Park

    Directory of Open Access Journals (Sweden)

    Ferreira Sanette L.A.

    2016-09-01

Since 2002, Kruger National Park (KNP) has been subject to a commercialisation strategy. Regarding income generation, SANParks sees KNP as the goose that lays the golden eggs. As part of SANParks' commercialisation strategy, and in response to providing services that are efficient, predictable and calculable for a large number of tourists, SANParks has allowed well-known branded restaurants to be established in certain rest camps in KNP. This innovation has raised a range of different concerns and opinions among the public. This paper investigates the what and the where of casual dining experiences in KNP; describes how the catering services have evolved over the last 70 years; and evaluates current visitor perceptions of the introduction of franchised restaurants in the park. The main research instrument was a questionnaire survey. Survey findings confirmed that restaurant managers, park managers and visitors recognise franchised restaurants as positive contributors to the unique KNP experience. Park managers appraised the franchised restaurants as mechanisms for funding conservation.

  6. Meta-ethnography 25 years on: challenges and insights for synthesising a large number of qualitative studies

    Science.gov (United States)

    2014-01-01

    Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients’ experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team’s reflexive statements to illustrate the development of our methods. PMID:24951054

  7. Meta-ethnography 25 years on: challenges and insights for synthesising a large number of qualitative studies.

    Science.gov (United States)

    Toye, Francine; Seers, Kate; Allcock, Nick; Briggs, Michelle; Carr, Eloise; Barker, Karen

    2014-06-21

    Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients' experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team's reflexive statements to illustrate the development of our methods.

8. Attenuation of contaminant plumes in homogeneous aquifers: Sensitivity to source function at moderate to large Peclet numbers

    International Nuclear Information System (INIS)

    Selander, W.N.; Lane, F.E.; Rowat, J.H.

    1995-05-01

    A groundwater mass transfer calculation is an essential part of the performance assessment for radioactive waste disposal facilities. AECL's IRUS (Intrusion Resistant Underground Structure) facility, which is designed for the near-surface disposal of low-level radioactive waste (LLRW), is to be situated in the sandy overburden at AECL's Chalk River Laboratories. Flow in the sandy aquifers at the proposed IRUS site is relatively homogeneous and advection-dominated (large Peclet numbers). Mass transfer along the mean direction of flow from the IRUS site may be described using the one-dimensional advection-dispersion equation, for which a Green's function representation of downstream radionuclide flux is convenient. This report shows that in advection-dominated aquifers, dispersive attenuation of initial contaminant releases depends principally on two time scales: the source duration and the pulse breakthrough time. Numerical investigation shows further that the maximum downstream flux or concentration depends on these time scales in a simple characteristic way that is minimally sensitive to the shape of the initial source pulse. (author). 11 refs., 2 tabs., 3 figs
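The Green's-function calculation and the reported insensitivity of the peak to source shape can be sketched numerically. The snippet below uses illustrative parameter values (not AECL's; the Peclet number v·x/D is large) and convolves two different unit-mass source pulses with the 1-D advection-dispersion Green's function:

```python
import numpy as np

# Illustrative parameters (assumptions): velocity v, dispersion
# coefficient D, downstream observation point x. Peclet = v*x/D = 500.
v, D, x = 1.0, 0.1, 50.0

def green(x, t):
    """Unit-mass instantaneous-release solution of the 1-D
    advection-dispersion equation, evaluated at position x, time t."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    ok = t > 0
    out[ok] = (np.exp(-(x - v * t[ok]) ** 2 / (4.0 * D * t[ok]))
               / np.sqrt(4.0 * np.pi * D * t[ok]))
    return out

def peak_concentration(source, t_src, t_obs):
    """Convolve a source release-rate history with the Green's function
    and return the maximum downstream concentration."""
    dt = t_src[1] - t_src[0]
    c = np.array([np.sum(source * green(x, t - t_src)) * dt for t in t_obs])
    return c.max()

t_src = np.linspace(0.0, 5.0, 501)     # source active for 5 time units
t_obs = np.linspace(30.0, 70.0, 401)   # window around breakthrough (x/v = 50)
dt = t_src[1] - t_src[0]

square = np.full_like(t_src, 1.0 / 5.0)              # unit-mass square pulse
triangle = np.maximum(0.0, 1.0 - np.abs(t_src - 2.5) / 2.5)
triangle /= triangle.sum() * dt                      # normalize to unit mass

print("square pulse peak:  ", peak_concentration(square, t_src, t_obs))
print("triangle pulse peak:", peak_concentration(triangle, t_src, t_obs))
```

With the same duration and total mass, the square and triangular pulses give peak concentrations within a few percent of each other, illustrating the abstract's point that the maximum downstream concentration depends on the source duration and breakthrough time rather than on the shape of the pulse.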

  9. Large Eddy Simulation study of the development of finite-channel lock-release currents at high Grashof numbers

    Science.gov (United States)

    Ooi, Seng-Keat

    2005-11-01

Lock-exchange gravity current flows produced by the instantaneous release of a heavy fluid are investigated using well-resolved, three-dimensional Large Eddy Simulations at Grashof numbers up to 8*10^9. It is found that the 3-D simulations correctly predict a constant front velocity over the initial slumping phase and a front speed decrease proportional to t^(-1/3) (the time t is measured from the release) over the inviscid phase, in agreement with theory. The evolution of the current in the simulations is found to be similar to that observed experimentally by Hacker et al. (1996). The effect of the dynamic LES model on the solutions is discussed. The energy budget of the current is discussed, and the contribution of the turbulent dissipation to the total dissipation is analyzed. The limitations of less expensive 2-D simulations are discussed; in particular, their failure to correctly predict the spatio-temporal distribution of the bed shear stresses, which is important in determining the amount of sediment the gravity current can entrain when it advances over a loose bed.

  10. Exploration of large, rare copy number variants associated with psychiatric and neurodevelopmental disorders in individuals with anorexia nervosa.

    Science.gov (United States)

    Yilmaz, Zeynep; Szatkiewicz, Jin P; Crowley, James J; Ancalade, NaEshia; Brandys, Marek K; van Elburg, Annemarie; de Kovel, Carolien G F; Adan, Roger A H; Hinney, Anke; Hebebrand, Johannes; Gratacos, Monica; Fernandez-Aranda, Fernando; Escaramis, Georgia; Gonzalez, Juan R; Estivill, Xavier; Zeggini, Eleftheria; Sullivan, Patrick F; Bulik, Cynthia M

    2017-08-01

    Anorexia nervosa (AN) is a serious and heritable psychiatric disorder. To date, studies of copy number variants (CNVs) have been limited and inconclusive because of small sample sizes. We conducted a case-only genome-wide CNV survey in 1983 female AN cases included in the Genetic Consortium for Anorexia Nervosa. Following stringent quality control procedures, we investigated whether pathogenic CNVs in regions previously implicated in psychiatric and neurodevelopmental disorders were present in AN cases. We observed two instances of the well-established pathogenic CNVs in AN cases. In addition, one case had a deletion in the 13q12 region, overlapping with a deletion reported previously in two AN cases. As a secondary aim, we also examined our sample for CNVs over 1 Mbp in size. Out of the 40 instances of such large CNVs that were not implicated previously for AN or neuropsychiatric phenotypes, two of them contained genes with previous neuropsychiatric associations, and only five of them had no associated reports in public CNV databases. Although ours is the largest study of its kind in AN, larger datasets are needed to comprehensively assess the role of CNVs in the etiology of AN.

  11. Email-Based Informed Consent: Innovative Method for Reaching Large Numbers of Subjects for Data Mining Research

    Science.gov (United States)

    Lee, Lesley R.; Mason, Sara S.; Babiak-Vazquez, Adriana; Ray, Stacie L.; Van Baalen, Mary

    2015-01-01

    Since the 2010 NASA authorization to make the Life Sciences Data Archive (LSDA) and Lifetime Surveillance of Astronaut Health (LSAH) data archives more accessible by the research and operational communities, demand for data has greatly increased. Correspondingly, both the number and scope of requests have increased, from 142 requests fulfilled in 2011 to 224 in 2014, and with some datasets comprising up to 1 million data points. To meet the demand, the LSAH and LSDA Repositories project was launched, which allows active and retired astronauts to authorize full, partial, or no access to their data for research without individual, study-specific informed consent. A one-on-one personal informed consent briefing is required to fully communicate the implications of the several tiers of consent. Due to the need for personal contact to conduct Repositories consent meetings, the rate of consenting has not kept up with demand for individualized, possibly attributable data. As a result, other methods had to be implemented to allow the release of large datasets, such as release of only de-identified data. However the compilation of large, de-identified data sets places a significant resource burden on LSAH and LSDA and may result in diminished scientific usefulness of the dataset. As a result, LSAH and LSDA worked with the JSC Institutional Review Board Chair, Astronaut Office physicians, and NASA Office of General Counsel personnel to develop a "Remote Consenting" process for retrospective data mining studies. This is particularly useful since the majority of the astronaut cohort is retired from the agency and living outside the Houston area. Originally planned as a method to send informed consent briefing slides and consent forms only by mail, Remote Consenting has evolved into a means to accept crewmember decisions on individual studies via their method of choice: email or paper copy by mail. To date, 100 emails have been sent to request participation in eight HRP

  12. Corrugator Activity Confirms Immediate Negative Affect in Surprise

    Directory of Open Access Journals (Sweden)

    Sascha eTopolinski

    2015-02-01

    Full Text Available The emotion of surprise entails a complex of immediate responses, such as cognitive interruption, attention allocation to, and more systematic processing of the surprising stimulus. All these processes serve the ultimate function to increase processing depth and thus cognitively master the surprising stimulus. The present account introduces phasic negative affect as the underlying mechanism responsible for these consequences. Surprising stimuli are schema-discrepant and thus entail cognitive disfluency, which elicits immediate negative affect. This affect in turn works like a phasic cognitive tuning, switching the current processing mode from more automatic and heuristic to more systematic and reflective processing. Directly testing the initial elicitation of negative affect by surprising events, the present experiment presented high- and low-surprise neutral trivia statements to N = 28 participants while assessing their spontaneous facial expressions via facial electromyography. High-surprise trivia, compared to low-surprise trivia, elicited higher corrugator activity, indicative of negative affect and mental effort, while leaving zygomaticus (positive affect) and frontalis (cultural surprise expression) activity unaffected. Future research shall investigate the mediating role of negative affect in eliciting surprise-related outcomes.

  13. Small on the Left, Large on the Right: Numbers Orient Visual Attention onto Space in Preverbal Infants

    Science.gov (United States)

    Bulf, Hermann; de Hevia, Maria Dolores; Macchi Cassia, Viola

    2016-01-01

    Numbers are represented as ordered magnitudes along a spatially oriented number line. While culture and formal education modulate the direction of this number-space mapping, it is a matter of debate whether its emergence is entirely driven by cultural experience. By registering 8-9-month-old infants' eye movements, this study shows that numerical…

  14. “Surprise Gift” Purchases of Small Electric Appliances: A Pilot Study

    NARCIS (Netherlands)

    J. Vanhamme (Joëlle); C.J.P.M. de Bont (Cees)

    2005-01-01

    textabstractUnderstanding decision-making processes for gifts is of strategic importance for companies selling small electrical appliances as gifts account for a large part of their sales. Among all gifts, the ones that are surprising are the most valued by recipients. However, research about

  15. Managing Uncertainty: Soviet Views on Deception, Surprise, and Control

    National Research Council Canada - National Science Library

    Hull, Andrew

    1989-01-01

    .... In the first two cases (deception and surprise), the emphasis is on how the Soviets seek to sow uncertainty in the minds of the enemy and how the Soviets then plan to use that uncertainty to gain military advantage...

  16. Dividend announcements reconsidered: Dividend changes versus dividend surprises

    OpenAIRE

    Andres, Christian; Betzer, André; van den Bongard, Inga; Haesner, Christian; Theissen, Erik

    2012-01-01

    This paper reconsiders the issue of share price reactions to dividend announcements. Previous papers rely almost exclusively on a naive dividend model in which the dividend change is used as a proxy for the dividend surprise. We use the difference between the actual dividend and the analyst consensus forecast as obtained from I/B/E/S as a proxy for the dividend surprise. Using data from Germany, we find significant share price reactions after dividend announcements. Once we control for analys...

  17. The Surprise Examination Paradox and the Second Incompleteness Theorem

    OpenAIRE

    Kritchman, Shira; Raz, Ran

    2010-01-01

    We give a new proof for Godel's second incompleteness theorem, based on Kolmogorov complexity, Chaitin's incompleteness theorem, and an argument that resembles the surprise examination paradox. We then go the other way around and suggest that the second incompleteness theorem gives a possible resolution of the surprise examination paradox. Roughly speaking, we argue that the flaw in the derivation of the paradox is that it contains a hidden assumption that one can prove the consistency of the...

  18. Retrieval of very large numbers of items in the Web of Science: an exercise to develop accurate search strategies

    NARCIS (Netherlands)

    Arencibia-Jorge, R.; Leydesdorff, L.; Chinchilla-Rodríguez, Z.; Rousseau, R.; Paris, S.W.

    2009-01-01

    The Web of Science interface counts at most 100,000 retrieved items from a single query. If the query results in a dataset containing more than 100,000 items the number of retrieved items is indicated as >100,000. The problem studied here is how to find the exact number of items in a query that
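    The general strategy for this counting problem — partition the query into disjoint slices (e.g., by publication year), verify each slice is below the interface cap, and sum the exact per-slice counts — can be sketched as follows. This is an illustrative sketch only: `run_query` and `FAKE_INDEX` are hypothetical stand-ins for the real search interface, not part of the Web of Science API.

```python
CAP = 100_000  # interface only reports ">100,000" beyond this

# Hypothetical per-year hit counts for a query whose total exceeds the cap.
FAKE_INDEX = {2005: 60_000, 2006: 70_000, 2007: 80_000}

def run_query(query, year):
    """Stand-in for the search interface: exact hit count for one year slice."""
    return FAKE_INDEX.get(year, 0)

def exact_count(query, years):
    """Sum disjoint per-slice counts; each slice must itself be below the cap."""
    total = 0
    for year in years:
        n = run_query(query, year)
        if n >= CAP:
            raise ValueError(f"slice {year} still capped; split it further")
        total += n
    return total

print(exact_count("nano*", [2005, 2006, 2007]))  # 210000: exact, though > CAP
```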

  19. Chandra Finds Surprising Black Hole Activity In Galaxy Cluster

    Science.gov (United States)

    2002-09-01

    Scientists at the Carnegie Observatories in Pasadena, California, have uncovered six times the expected number of active, supermassive black holes in a single viewing of a cluster of galaxies, a finding that has profound implications for theories as to how old galaxies fuel the growth of their central black holes. The finding suggests that voracious, central black holes might be as common in old, red galaxies as they are in younger, blue galaxies, a surprise to many astronomers. The team made this discovery with NASA's Chandra X-ray Observatory. They also used Carnegie's 6.5-meter Walter Baade Telescope at the Las Campanas Observatory in Chile for follow-up optical observations. "This changes our view of galaxy clusters as the retirement homes for old and quiet black holes," said Dr. Paul Martini, lead author on a paper describing the results that appears in the September 10 issue of The Astrophysical Journal Letters. "The question now is, how do these black holes produce bright X-ray sources, similar to what we see from much younger galaxies?" Typical of the black hole phenomenon, the cores of these active galaxies are luminous in X-ray radiation. Yet, they are obscured, and thus essentially undetectable in the radio, infrared and optical wavebands. "X rays can penetrate obscuring gas and dust as easily as they penetrate the soft tissue of the human body to look for broken bones," said co-author Dr. Dan Kelson. "So, with Chandra, we can peer through the dust and we have found that even ancient galaxies with 10-billion-year-old stars can have central black holes still actively pulling in copious amounts of interstellar gas. This activity has simply been hidden from us all this time. This means these galaxies aren't over the hill after all and our theories need to be revised." Scientists say that supermassive black holes -- having the mass of millions to billions of suns squeezed into a region about the size of our Solar System -- are the engines in the cores of

  20. Experimental observation of pulsating instability under acoustic field in downward-propagating flames at large Lewis number

    KAUST Repository

    Yoon, Sung Hwan

    2017-10-12

    According to previous theory, pulsating propagation in a premixed flame only appears when the reduced Lewis number, β(Le-1), is larger than a critical value (Sivashinsky criterion: 4(1 + √3) ≈ 11), where β represents the Zel'dovich number (for general premixed flames, β ≈ 10), which requires a Lewis number Le > 2.1. However, few experimental observations have been reported because the critical reduced Lewis number for the onset of pulsating instability is beyond what can be reached in experiments. Furthermore, the coupling with the unavoidable hydrodynamic instability limits the observation of pure pulsating instabilities in flames. Here, we describe a novel method to observe the pulsating instability. We utilize a thermoacoustic field caused by interaction between heat release and acoustic pressure fluctuations of the downward-propagating premixed flames in a tube to enhance conductive heat loss at the tube wall and radiative heat loss at the open end of the tube, due to the extended flame residence time of the diminished flame surface area, i.e., a flat flame. The thermoacoustic field allowed pure observation of the pulsating motion since the primary acoustic force suppressed the intrinsic hydrodynamic instability resulting from thermal expansion. By employing this method, we have provided new experimental observations of the pulsating instability for premixed flames. The Lewis number (i.e., Le ≈ 1.86) was less than the critical value suggested previously.
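    The numbers quoted in this abstract can be checked directly from the stated criterion. The snippet below is an illustrative calculation only (it assumes the typical value β = 10 mentioned in the abstract):

```python
import math

BETA = 10.0                            # Zel'dovich number, typical premixed flames
CRITICAL = 4 * (1 + math.sqrt(3))      # Sivashinsky criterion, ≈ 10.93

def pulsating_unstable(Le, beta=BETA):
    """Pulsating instability requires the reduced Lewis number
    beta * (Le - 1) to exceed the critical value."""
    return beta * (Le - 1) > CRITICAL

# Classical theory thus needs Le > 1 + CRITICAL / beta:
Le_crit = 1 + CRITICAL / BETA          # ≈ 2.09, matching the quoted Le > 2.1
print(pulsating_unstable(1.86))        # False: below the classical threshold,
                                       # as for the flames observed here
```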

  1. The Ultraviolet Surprise. Efficient Soft X-Ray High Harmonic Generation in Multiply-Ionized Plasmas

    International Nuclear Information System (INIS)

    Popmintchev, Dimitar; Hernandez-Garcia, Carlos; Dollar, Franklin; Mancuso, Christopher; Perez-Hernandez, Jose A.; Chen, Ming-Chang; Hankla, Amelia; Gao, Xiaohui; Shim, Bonggu; Gaeta, Alexander L.; Tarazkar, Maryam; Romanov, Dmitri A.; Levis, Robert J.; Gaffney, Jim A.; Foord, Mark; Libby, Stephen B.; Jaron-Becker, Agnieszka; Becker, Andreas; Plaja, Luis; Murnane, Margaret M.; Kapteyn, Henry C.; Popmintchev, Tenio

    2015-01-01

    High-harmonic generation is a universal response of matter to strong femtosecond laser fields, coherently upconverting light to much shorter wavelengths. Optimizing the conversion of laser light into soft x-rays typically demands a trade-off between two competing factors. Reduced quantum diffusion of the radiating electron wave function results in emission from each species which is highest when a short-wavelength ultraviolet driving laser is used. But, phase matching - the constructive addition of x-ray waves from a large number of atoms - favors longer-wavelength mid-infrared lasers. We identified a regime of high-harmonic generation driven by 40-cycle ultraviolet lasers in waveguides that can generate bright beams in the soft x-ray region of the spectrum, up to photon energies of 280 electron volts. Surprisingly, the high ultraviolet refractive indices of both neutral atoms and ions enabled effective phase matching, even in a multiply ionized plasma. We observed harmonics with very narrow linewidths, while calculations show that the x-rays emerge as nearly time-bandwidth-limited pulse trains of ~100 attoseconds

  2. Does Decision Quality (Always) Increase with the Size of Information Samples? Some Vicissitudes in Applying the Law of Large Numbers

    Science.gov (United States)

    Fiedler, Klaus; Kareev, Yaakov

    2006-01-01

    Adaptive decision making requires that contingencies between decision options and their relative assets be assessed accurately and quickly. The present research addresses the challenging notion that contingencies may be more visible from small than from large samples of observations. An algorithmic account for such a seemingly paradoxical effect…

  3. The Limits and Possibilities of International Large-Scale Assessments. Education Policy Brief. Volume 9, Number 2, Spring 2011

    Science.gov (United States)

    Rutkowski, David J.; Prusinski, Ellen L.

    2011-01-01

    The staff of the Center for Evaluation & Education Policy (CEEP) at Indiana University is often asked about how international large-scale assessments influence U.S. educational policy. This policy brief is designed to provide answers to some of the most frequently asked questions encountered by CEEP researchers concerning the three most popular…

  4. Exploration of large, rare copy number variants associated with psychiatric and neurodevelopmental disorders in individuals with anorexia nervosa

    NARCIS (Netherlands)

    Yilmaz, Zeynep; Szatkiewicz, Jin P; Crowley, James J; Ancalade, NaEshia; Brandys, Marek K; van Elburg, Annemarie; de Kovel, Carolien G F; Adan, Roger A H; Hinney, Anke; Hebebrand, Johannes; Gratacos, Monica; Fernandez-Aranda, Fernando; Escaramis, Georgia; Gonzalez, Juan R; Estivill, Xavier; Zeggini, Eleftheria; Sullivan, Patrick F; Bulik, Cynthia M; Genetic Consortium for Anorexia Nervosa, Wellcome Trust Case Control Consortium 3

    Anorexia nervosa (AN) is a serious and heritable psychiatric disorder. To date, studies of copy number variants (CNVs) have been limited and inconclusive because of small sample sizes. We conducted a case-only genome-wide CNV survey in 1,983 female AN cases included in the Genetic Consortium for

  5. Investigation into impacts of large numbers of visitors on the collection environment at Our Lord in the Attic

    NARCIS (Netherlands)

    Maekawa, S.; Ankersmit, Bart; Neuhaus, E.; Schellen, H.L.; Beltran, V.; Boersma, F.; Padfield, T.; Borchersen, K.

    2007-01-01

    Our Lord in the Attic is a historic house museum located in the historic center of Amsterdam, The Netherlands. It is a typical 17th century Dutch canal house, with a hidden Church in the attic. The Church was used regularly until 1887 when the house became a museum. The annual total number of

  6. A Few Large Roads or Many Small Ones? How to Accommodate Growth in Vehicle Numbers to Minimise Impacts on Wildlife

    Science.gov (United States)

    Rhodes, Jonathan R.; Lunney, Daniel; Callaghan, John; McAlpine, Clive A.

    2014-01-01

    Roads and vehicular traffic are among the most pervasive of threats to biodiversity because they fragment habitat, increase mortality and open up new areas for the exploitation of natural resources. However, the number of vehicles on roads is increasing rapidly and this is likely to continue into the future, putting increased pressure on wildlife populations. Consequently, a major challenge is the planning of road networks to accommodate increased numbers of vehicles, while minimising impacts on wildlife. Nonetheless, we currently have few principles for guiding decisions on road network planning to reduce impacts on wildlife in real landscapes. We addressed this issue by developing an approach for quantifying the impact on wildlife mortality of two alternative mechanisms for accommodating growth in vehicle numbers: (1) increasing the number of roads, and (2) increasing traffic volumes on existing roads. We applied this approach to a koala (Phascolarctos cinereus) population in eastern Australia and quantified the relative impact of each strategy on mortality. We show that, in most cases, accommodating growth in traffic through increases in volumes on existing roads has a lower impact than building new roads. An exception is where the existing road network has very low road density, but very high traffic volumes on each road. These findings have important implications for how we design road networks to reduce their impacts on biodiversity. PMID:24646891

  7. A few large roads or many small ones? How to accommodate growth in vehicle numbers to minimise impacts on wildlife.

    Directory of Open Access Journals (Sweden)

    Jonathan R Rhodes

    Full Text Available Roads and vehicular traffic are among the most pervasive of threats to biodiversity because they fragment habitat, increase mortality and open up new areas for the exploitation of natural resources. However, the number of vehicles on roads is increasing rapidly and this is likely to continue into the future, putting increased pressure on wildlife populations. Consequently, a major challenge is the planning of road networks to accommodate increased numbers of vehicles, while minimising impacts on wildlife. Nonetheless, we currently have few principles for guiding decisions on road network planning to reduce impacts on wildlife in real landscapes. We addressed this issue by developing an approach for quantifying the impact on wildlife mortality of two alternative mechanisms for accommodating growth in vehicle numbers: (1) increasing the number of roads, and (2) increasing traffic volumes on existing roads. We applied this approach to a koala (Phascolarctos cinereus) population in eastern Australia and quantified the relative impact of each strategy on mortality. We show that, in most cases, accommodating growth in traffic through increases in volumes on existing roads has a lower impact than building new roads. An exception is where the existing road network has very low road density, but very high traffic volumes on each road. These findings have important implications for how we design road networks to reduce their impacts on biodiversity.

  8. A few large roads or many small ones? How to accommodate growth in vehicle numbers to minimise impacts on wildlife.

    Science.gov (United States)

    Rhodes, Jonathan R; Lunney, Daniel; Callaghan, John; McAlpine, Clive A

    2014-01-01

    Roads and vehicular traffic are among the most pervasive of threats to biodiversity because they fragment habitat, increase mortality and open up new areas for the exploitation of natural resources. However, the number of vehicles on roads is increasing rapidly and this is likely to continue into the future, putting increased pressure on wildlife populations. Consequently, a major challenge is the planning of road networks to accommodate increased numbers of vehicles, while minimising impacts on wildlife. Nonetheless, we currently have few principles for guiding decisions on road network planning to reduce impacts on wildlife in real landscapes. We addressed this issue by developing an approach for quantifying the impact on wildlife mortality of two alternative mechanisms for accommodating growth in vehicle numbers: (1) increasing the number of roads, and (2) increasing traffic volumes on existing roads. We applied this approach to a koala (Phascolarctos cinereus) population in eastern Australia and quantified the relative impact of each strategy on mortality. We show that, in most cases, accommodating growth in traffic through increases in volumes on existing roads has a lower impact than building new roads. An exception is where the existing road network has very low road density, but very high traffic volumes on each road. These findings have important implications for how we design road networks to reduce their impacts on biodiversity.

  9. Small genomes and large seeds: chromosome numbers, genome size and seed mass in diploid Aesculus species (Sapindaceae)

    Czech Academy of Sciences Publication Activity Database

    Krahulcová, Anna; Trávníček, Pavel; Krahulec, František; Rejmánek, M.

    2017-01-01

    Roč. 119, č. 6 (2017), s. 957-964 ISSN 0305-7364 Institutional support: RVO:67985939 Keywords : Aesculus * chromosome number * genome size * phylogeny * seed mass Subject RIV: EF - Botanics OBOR OECD: Plant sciences, botany Impact factor: 4.041, year: 2016

  10. Precise large deviations of aggregate claims in a size-dependent renewal risk model with stopping time claim-number process

    Directory of Open Access Journals (Sweden)

    Shuo Zhang

    2017-04-01

    Full Text Available In this paper, we consider a size-dependent renewal risk model with a stopping-time claim-number process. In this model, we do not make any assumption on the dependence structure of claim sizes and inter-arrival times. We study large deviations of the aggregate amount of claims. For the subexponential heavy-tailed case, we obtain a precise large-deviation formula; our method substantially relies on a martingale for the structure of our models.
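    For context, a "precise large-deviation formula" of this type typically takes the following generic textbook form (the notation below is a standard one assumed for illustration, not taken from this paper): for claim sizes with common subexponential tail \(\overline{F}\) and mean \(\mu\), and a claim-number process \(N(t)\) with mean function \(\lambda(t)\), the aggregate claims \(S(t)=\sum_{i=1}^{N(t)} X_i\) satisfy, uniformly for \(x \ge \gamma\lambda(t)\) for any fixed \(\gamma > 0\),

```latex
% Classical precise large-deviation asymptotic for aggregate claims
\Pr\bigl( S(t) - \mu\,\lambda(t) > x \bigr) \;\sim\; \lambda(t)\,\overline{F}(x),
\qquad t \to \infty .
```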

  11. Hungarian Marfan family with large FBN1 deletion calls attention to copy number variation detection in the current NGS era

    Science.gov (United States)

    Ágg, Bence; Meienberg, Janine; Kopps, Anna M.; Fattorini, Nathalie; Stengl, Roland; Daradics, Noémi; Pólos, Miklós; Bors, András; Radovits, Tamás; Merkely, Béla; De Backer, Julie; Szabolcs, Zoltán; Mátyás, Gábor

    2018-01-01

    Copy number variations (CNVs) comprise about 10% of reported disease-causing mutations in Mendelian disorders. Nevertheless, pathogenic CNVs may have been under-detected due to the lack or insufficient use of appropriate detection methods. In this report, on the example of the diagnostic odyssey of a patient with Marfan syndrome (MFS) harboring a hitherto unreported 32-kb FBN1 deletion, we highlight the need for and the feasibility of testing for CNVs (>1 kb) in Mendelian disorders in the current next-generation sequencing (NGS) era. PMID:29850152

  12. Numerical simulation of nonstationary dissipative structures in 3D double-diffusive convection at large Rayleigh numbers

    Science.gov (United States)

    Kozitskiy, Sergey

    2018-05-01

    Numerical simulation of nonstationary dissipative structures in 3D double-diffusive convection has been performed by using the previously derived system of complex Ginzburg-Landau type amplitude equations, valid in a neighborhood of Hopf bifurcation points. Simulation has shown that the state of spatiotemporal chaos develops in the system. It has the form of nonstationary structures that depend on the parameters of the system. The shape of structures does not depend on the initial conditions, and a limited number of spectral components participate in their formation.

  13. Simulation of droplet impact onto a deep pool for large Froude numbers in different open-source codes

    Science.gov (United States)

    Korchagova, V. N.; Kraposhin, M. V.; Marchevsky, I. K.; Smirnova, E. V.

    2017-11-01

    A droplet impact on a deep pool can induce macro-scale or micro-scale effects like a crown splash, a high-speed jet, formation of secondary droplets or thin liquid films, etc. The outcome depends on the diameter and velocity of the droplet, liquid properties, effects of external forces and other factors that a set of dimensionless criteria can account for. In the present research, we considered a droplet and pool consisting of the same viscous incompressible liquid. We took surface tension into account but neglected gravity forces. We used two open-source codes (OpenFOAM and Gerris) for our computations. We review the possibility of using these codes for simulation of processes in free-surface flows that may take place after a droplet impact on the pool. Both codes simulated several modes of droplet impact. We estimated the effect of liquid properties with respect to the Reynolds number and Weber number. Numerical simulation enabled us to find boundaries between different modes of droplet impact on a deep pool and to plot corresponding mode maps. The ratio of liquid density to that of the surrounding gas induces several changes in mode maps. Increasing this density ratio suppresses the crown splash.
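    The dimensionless groups named in this abstract are quick to evaluate for a concrete case. The snippet below uses standard definitions (Re = ρvd/μ, We = ρv²d/σ, Fr = v²/(gd)) with illustrative textbook values for a millimetre-scale water droplet; none of these numbers are taken from the paper:

```python
# Illustrative dimensionless numbers for a falling water droplet
# (property values are generic textbook figures, not from the paper).
rho   = 1000.0    # liquid density, kg/m^3
mu    = 1.0e-3    # dynamic viscosity, Pa*s
sigma = 0.072     # surface tension, N/m
g     = 9.81      # gravitational acceleration, m/s^2
d     = 2.0e-3    # droplet diameter, m
v     = 3.0       # impact velocity, m/s

Re = rho * v * d / mu          # inertia vs. viscosity  -> 6000
We = rho * v**2 * d / sigma    # inertia vs. surface tension -> 250
Fr = v**2 / (g * d)            # inertia vs. gravity -> ~459; "large Froude
                               # number" justifies neglecting gravity
print(Re, We, Fr)
```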

  14. Effects of surprisal and locality on Danish sentence processing

    DEFF Research Database (Denmark)

    Balling, Laura Winther; Kizach, Johannes

    2017-01-01

    An eye-tracking experiment in Danish investigates two dominant accounts of sentence processing: locality-based theories that predict a processing advantage for sentences where the distance between the major syntactic heads is minimized, and the surprisal theory which predicts that processing time...

  15. Things may not be as expected: Surprising findings when updating ...

    African Journals Online (AJOL)

    2015-05-14

    May 14, 2015 ... Things may not be as expected: Surprising findings when updating ... (done at the end of three months after the first review month) ... Allen G. Getting beyond form filling: The role of institutional governance in human research ...

  16. Automation surprise : results of a field survey of Dutch pilots

    NARCIS (Netherlands)

    de Boer, R.J.; Hurts, Karel

    2017-01-01

    Automation surprise (AS) has often been associated with aviation safety incidents. Although numerous laboratory studies have been conducted, few data are available from routine flight operations. A survey among a representative sample of 200 Dutch airline pilots was used to determine the prevalence

  17. TO BE OR NOT TO BE: AN INFORMATIVE NON-SYMBOLIC NUMERICAL MAGNITUDE PROCESSING STUDY ABOUT SMALL VERSUS LARGE NUMBERS IN INFANTS

    Directory of Open Access Journals (Sweden)

    Annelies CEULEMANS

    2014-03-01

    Full Text Available Many studies tested the association between numerical magnitude processing and mathematical achievement, with conflicting findings reported for individuals with mathematical learning disorders. Some of the inconsistencies might be explained by the number of non-symbolic stimuli or dot collections used in studies. It has been hypothesized that there is an object-file system for ‘small’ and an analogue magnitude system for ‘large’ numbers. This two-system account has been supported by the set size limit of the object-file system (three items). A boundary was defined, accordingly, categorizing numbers below four as ‘small’ and from four and above as ‘large’. However, data on ‘small’ number processing and on the ‘boundary’ between small and large numbers are missing. In this contribution we provide data from infants discriminating between the number sets 4 vs. 8 and 1 vs. 4, both containing the number four combined with a small and a large number respectively. Participants were 25 and 26 full-term 9-month-olds for 4 vs. 8 and 1 vs. 4 respectively. The stimuli (dots) were controlled for continuous variables. Eye-tracking was combined with the habituation paradigm. The results showed that the infants were successful in discriminating 1 from 4, but failed to discriminate 4 from 8 dots. This finding supports the assumption of the number four as a ‘small’ number and enlarges the object-file system’s limit. This study might help to explain inconsistencies in studies. Moreover, the information may be useful in answering parents’ questions about challenges that vulnerable children with number processing problems, such as children with mathematical learning disorders, might encounter. In addition, the study might give some information on the stimuli that can be used to effectively foster children’s magnitude processing skills.

  18. Method for the radioimmunoassay of large numbers of samples using quantitative autoradiography of multiple-well plates

    International Nuclear Information System (INIS)

    Luner, S.J.

    1978-01-01

    A double antibody assay for thyroxine using 125I as label was carried out on 10-μl samples in Microtiter V-plates. After an additional centrifugation to compact the precipitates, the plates were placed in contact with x-ray film overnight and the spots were scanned. In the 20 to 160 ng/ml range the average coefficient of variation for thyroxine concentration determined on the basis of film spot optical density was 11 percent, compared to 4.8 percent obtained using a standard gamma counter. Eliminating the need for each sample to spend on the order of 1 min in a crystal well detector makes the method convenient for large-scale applications involving more than 3000 samples per day.

  19. How to implement a quantum algorithm on a large number of qubits by controlling one central qubit

    Science.gov (United States)

    Zagoskin, Alexander; Ashhab, Sahel; Johansson, J. R.; Nori, Franco

    2010-03-01

    It is desirable to minimize the number of control parameters needed to perform a quantum algorithm. We show that, under certain conditions, an entire quantum algorithm can be efficiently implemented by controlling a single central qubit in a quantum computer. We also show that the different system parameters do not need to be designed accurately during fabrication. They can be determined through the response of the central qubit to external driving. Our proposal is well suited for hybrid architectures that combine microscopic and macroscopic qubits. More details can be found in: A.M. Zagoskin, S. Ashhab, J.R. Johansson, F. Nori, Quantum two-level systems in Josephson junctions as naturally formed qubits, Phys. Rev. Lett. 97, 077001 (2006); and S. Ashhab, J.R. Johansson, F. Nori, Rabi oscillations in a qubit coupled to a quantum two-level system, New J. Phys. 8, 103 (2006).

  20. DISCOVERY OF A LARGE NUMBER OF CANDIDATE PROTOCLUSTERS TRACED BY ∼15 Mpc-SCALE GALAXY OVERDENSITIES IN COSMOS

    International Nuclear Information System (INIS)

    Chiang, Yi-Kuan; Gebhardt, Karl; Overzier, Roderik

    2014-01-01

    To demonstrate the feasibility of studying the epoch of massive galaxy cluster formation in a more systematic manner using current and future galaxy surveys, we report the discovery of a large sample of protocluster candidates in the 1.62 deg² COSMOS/UltraVISTA field traced by optical/infrared selected galaxies using photometric redshifts. By comparing properly smoothed three-dimensional galaxy density maps of the observations and a set of matched simulations incorporating the dominant observational effects (galaxy selection and photometric redshift uncertainties), we first confirm that the observed ∼15 comoving Mpc-scale galaxy clustering is consistent with ΛCDM models. Using further the relation between high-z overdensity and the present-day cluster mass calibrated in these matched simulations, we found 36 candidate structures at 1.6 < z < 3.1, showing overdensities consistent with the progenitors of M_z=0 ∼ 10¹⁵ M☉ clusters. Taking into account the significant upward scattering of lower mass structures, the probabilities for the candidates to have at least M_z=0 ∼ 10¹⁴ M☉ are ∼70%. For each structure, about 15%-40% of photometric galaxy candidates are expected to be true protocluster members that will merge into a cluster-scale halo by z = 0. With solely photometric redshifts, we successfully rediscover two spectroscopically confirmed structures in this field, suggesting that our algorithm is robust. This work generates a large sample of uniformly selected protocluster candidates, providing rich targets for spectroscopic follow-up and subsequent studies of cluster formation. Meanwhile, it demonstrates the potential for probing early cluster formation with upcoming redshift surveys such as the Hobby-Eberly Telescope Dark Energy Experiment and the Subaru Prime Focus Spectrograph survey

  1. Instability and associated roll structure of Marangoni convection in high Prandtl number liquid bridge with large aspect ratio

    Science.gov (United States)

    Yano, T.; Nishino, K.; Kawamura, H.; Ueno, I.; Matsumoto, S.

    2015-02-01

    This paper reports the experimental results on the instability and associated roll structures (RSs) of Marangoni convection in liquid bridges formed under the microgravity environment on the International Space Station. The geometry of interest is high aspect ratio (AR = height/diameter ≥ 1.0) liquid bridges of high Prandtl number fluids (Pr = 67 and 207) suspended between coaxial disks heated differentially. The unsteady flow field and associated RSs were revealed with the three-dimensional particle tracking velocimetry. It is found that the flow field after the onset of instability exhibits oscillations with azimuthal mode number m = 1 and associated RSs traveling in the axial direction. The RSs travel in the same direction as the surface flow (co-flow direction) for 1.00 ≤ AR ≤ 1.25 while they travel in the opposite direction (counter-flow direction) for AR ≥ 1.50, thus showing the change of traveling directions with AR. This traveling direction for AR ≥ 1.50 is reversed to the co-flow direction when the temperature difference between the disks is increased to the condition far beyond the critical one. This change of traveling directions is accompanied by the increase of the oscillation frequency. The characteristics of the RSs for AR ≥ 1.50, such as the azimuthal mode of oscillation, the dimensionless oscillation frequency, and the traveling direction, are in reasonable agreement with those of the previous sounding rocket experiment for AR = 2.50 and those of the linear stability analysis of an infinite liquid bridge.

  2. A LARGE NUMBER OF z > 6 GALAXIES AROUND A QSO AT z = 6.43: EVIDENCE FOR A PROTOCLUSTER?

    International Nuclear Information System (INIS)

    Utsumi, Yousuke; Kashikawa, Nobunari; Miyazaki, Satoshi; Komiyama, Yutaka; Goto, Tomotsugu; Furusawa, Hisanori; Overzier, Roderik

    2010-01-01

    QSOs have been thought to be important for tracing highly biased regions in the early universe, from which the present-day massive galaxies and galaxy clusters formed. While overdensities of star-forming galaxies have been found around QSOs at 2 < z < 5, the case at z > 6 is less clear. Previous studies with the Hubble Space Telescope (HST) have reported the detection of small excesses of faint dropout galaxies in some QSO fields, but these surveys probed a relatively small region surrounding the QSOs. To overcome this problem, we have observed the most distant QSO at z = 6.4 using the large field of view of the Suprime-Cam (34' x 27'). Newly installed red-sensitive fully depleted CCDs allowed us to select Lyman break galaxies (LBGs) at z ∼ 6.4 more efficiently. We found seven LBGs in the QSO field, whereas only one exists in a comparison field. The significance of this apparent excess is difficult to quantify without spectroscopic confirmation and additional control fields. The Poisson probability to find seven objects when one expects four is ∼10%, while the probability to find seven objects in one field and only one in the other is less than 0.4%, suggesting that the QSO field is significantly overdense relative to the control field. These conclusions are supported by a comparison with a cosmological smoothed particle hydrodynamics simulation which includes the higher order clustering of galaxies. We find some evidence that the LBGs are distributed in a ring-like shape centered on the QSO with a radius of ∼3 Mpc. There are no candidate LBGs within 2 Mpc of the QSO, i.e., galaxies are clustered around the QSO but appear to avoid the very center. These results suggest that the QSO is embedded in an overdense region when defined on a sufficiently large scale (i.e., larger than an HST/ACS pointing). This suggests that the QSO was indeed born in a massive halo. The central deficit of galaxies may indicate that (1) the strong UV radiation from the QSO suppressed galaxy formation in
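The quoted ∼10% figure is a one-sided Poisson tail probability; a quick sketch (assuming, as stated, a mean of four objects per field) reproduces it:

```python
from math import exp, factorial

def poisson_sf(k_min, lam):
    """P(X >= k_min) for X ~ Poisson(lam): one minus the CDF up to k_min - 1."""
    cdf = sum(exp(-lam) * lam**k / factorial(k) for k in range(k_min))
    return 1.0 - cdf

# Chance of finding 7 or more LBGs in a field where 4 are expected on average.
print(f"{poisson_sf(7, 4.0):.3f}")  # 0.111, i.e. the ~10% quoted in the abstract
```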

  3. A dynamic response model for pressure sensors in continuum and high Knudsen number flows with large temperature gradients

    Science.gov (United States)

    Whitmore, Stephen A.; Petersen, Brian J.; Scott, David D.

    1996-01-01

    This paper develops a dynamic model for pressure sensors in continuum and rarefied flows with longitudinal temperature gradients. The model was developed from the unsteady Navier-Stokes momentum, energy, and continuity equations and was linearized using small perturbations. The energy equation was decoupled from momentum and continuity assuming a polytropic flow process. Rarefied flow conditions were accounted for using a slip flow boundary condition at the tubing wall. The equations were radially averaged and solved assuming gas properties remain constant along a small tubing element. This fundamental solution was used as a building block for arbitrary geometries where fluid properties may also vary longitudinally in the tube. The problem was solved recursively starting at the transducer and working upstream in the tube. Dynamic frequency response tests were performed for continuum flow conditions in the presence of temperature gradients. These tests validated the recursive formulation of the model. Model steady-state behavior was analyzed using the final value theorem. Tests were performed for rarefied flow conditions and compared to the model steady-state response to evaluate the regime of applicability. Model comparisons were excellent for Knudsen numbers up to 0.6. Beyond this point, molecular effects caused model analyses to become inaccurate.

  4. Effect of the Hartmann number on phase separation controlled by magnetic field for binary mixture system with large component ratio

    Science.gov (United States)

    Heping, Wang; Xiaoguang, Li; Duyang, Zang; Rui, Hu; Xingguo, Geng

    2017-11-01

    This paper presents an exploration of phase separation in a magnetic field using a lattice Boltzmann method (LBM) coupled with magnetohydrodynamics (MHD). The left vertical wall was kept at a constant magnetic field. Simulations were conducted under a strong magnetic field to enhance phase separation and increase the size of the separated phases. The focus was on the effect of magnetic intensity, characterized by the Hartmann number (Ha), on the phase separation properties. The numerical investigation was carried out for different governing parameters, namely Ha and the component ratio of the mixed liquid. The effective morphological evolutions of phase separation in different magnetic fields were demonstrated. The patterns showed that slanted elliptical phases were created by increasing Ha, due to the formation and growth of magnetic torque and force. The growth kinetics of magnetic phase separation were characterized by the spherically averaged structure factor and by the ratio of the separated phases to the total system. The results indicate that an increase in Ha can increase the average size of the separated phases and accelerate the spinodal decomposition and domain growth stages. Especially for larger component ratios of the mixed phases, the separation degree was also significantly improved by increasing the magnetic intensity. These numerical results provide guidance for setting the optimum conditions for phase separation induced by a magnetic field.
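For reference, the governing parameter here is the standard Hartmann number of MHD, which compares Lorentz to viscous forces; a minimal sketch of its textbook definition (all property values below are hypothetical placeholders, not taken from the paper):

```python
from math import sqrt

def hartmann_number(B, L, sigma, rho, nu):
    """Ha = B * L * sqrt(sigma / (rho * nu)): ratio of electromagnetic (Lorentz)
    forces to viscous forces for field strength B, length scale L, electrical
    conductivity sigma, density rho, and kinematic viscosity nu (SI units)."""
    return B * L * sqrt(sigma / (rho * nu))

# Hypothetical liquid-metal-like values, for illustration only.
print(hartmann_number(B=1.0, L=0.01, sigma=1.0e6, rho=6.0e3, nu=1.0e-7))
```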

  5. Technology interactions among low-carbon energy technologies: What can we learn from a large number of scenarios?

    International Nuclear Information System (INIS)

    McJeon, Haewon C.; Clarke, Leon; Kyle, Page; Wise, Marshall; Hackbarth, Andrew; Bryant, Benjamin P.; Lempert, Robert J.

    2011-01-01

    Advanced low-carbon energy technologies can substantially reduce the cost of stabilizing atmospheric carbon dioxide concentrations. Understanding the interactions between these technologies and their impact on the costs of stabilization can help inform energy policy decisions. Many previous studies have addressed this challenge by exploring a small number of representative scenarios that represent particular combinations of future technology developments. This paper uses a combinatorial approach in which scenarios are created for all combinations of the technology development assumptions that underlie a smaller, representative set of scenarios. We estimate stabilization costs for 768 runs of the Global Change Assessment Model (GCAM), based on 384 different combinations of assumptions about the future performance of technologies and two stabilization goals. Graphical depiction of the distribution of stabilization costs provides first-order insights about the full data set and individual technologies. We apply a formal scenario discovery method to obtain more nuanced insights about the combinations of technology assumptions most strongly associated with high-cost outcomes. Many of the fundamental insights from traditional representative scenario analysis still hold under this comprehensive combinatorial analysis. For example, the importance of carbon capture and storage (CCS) and the substitution effect among supply technologies are consistently demonstrated. The results also provide more clarity regarding insights not easily demonstrated through representative scenario analysis. For example, they show more clearly how certain supply technologies can provide a hedge against high stabilization costs, and that aggregate end-use efficiency improvements deliver relatively consistent stabilization cost reductions. 
Furthermore, the results indicate that a lack of CCS options combined with lower technological advances in the buildings sector or the transportation sector is

  6. Sleeping beauties in theoretical physics: 26 surprising insights

    CERN Document Server

    Padmanabhan, Thanu

    2015-01-01

    This book addresses a fascinating set of questions in theoretical physics which will both entertain and enlighten all students, teachers and researchers and other physics aficionados. These range from Newtonian mechanics to quantum field theory and cover several puzzling issues that do not appear in standard textbooks. Some topics cover conceptual conundrums, the solutions to which lead to surprising insights; some correct popular misconceptions in the textbook discussion of certain topics; others illustrate deep connections between apparently unconnected domains of theoretical physics; and a few provide remarkably simple derivations of results which are not often appreciated. The connoisseur of theoretical physics will enjoy a feast of pleasant surprises skilfully prepared by an internationally acclaimed theoretical physicist. Each topic is introduced with proper background discussion and special effort is taken to make the discussion self-contained, clear and comprehensible to anyone with an undergraduate e...

  7. The June surprises: balls, strikes, and the fog of war.

    Science.gov (United States)

    Fried, Charles

    2013-04-01

    At first, few constitutional experts took seriously the argument that the Patient Protection and Affordable Care Act exceeded Congress's power under the commerce clause. The highly political opinions of two federal district judges - carefully chosen by challenging plaintiffs - of no particular distinction did not shake the confidence that the act was constitutional. This disdain for the challengers' arguments was only confirmed when the act was upheld by two highly respected conservative court of appeals judges in two separate circuits. But after the hostile, even mocking questioning of the government's advocate in the Supreme Court by the five Republican-appointed justices, the expectation was that the act would indeed be struck down on that ground. So it came as no surprise when the five opined that the act did indeed exceed Congress's commerce clause power. But it came as a great surprise when Chief Justice John Roberts, joined by the four Democrat-appointed justices, ruled that the act could be sustained as an exercise of Congress's taxing power - a ground urged by the government almost as an afterthought. It was further surprising, even shocking, that Justices Antonin Scalia, Anthony Kennedy, Clarence Thomas, and Samuel Alito not only wrote a joint opinion on the commerce clause virtually identical to that of their chief, but that in writing it they did not refer to or even acknowledge his opinion. Finally surprising was the fact that Justices Ruth Bader Ginsburg and Stephen Breyer joined the chief in holding that aspects of the act's Medicaid expansion were unconstitutional. This essay ponders and tries to unravel some of these puzzles.

  8. WORMS IN SURPRISING PLACES: CLINICAL AND MORPHOLOGICAL FEATURES

    Directory of Open Access Journals (Sweden)

    Myroshnychenko MS

    2013-06-01

    Full Text Available Helminth infections are among the most common human diseases and are characterized by involvement of all organs and systems in the pathological process. In this article, the authors discuss a few cases of typical and atypical localizations of parasitic worms, such as filariae and pinworms, recovered from surprising places in the bodies of patients in the Kharkiv region. This article will allow doctors in practical health care to pay special attention to the timely prevention and diagnosis of this pathology.

  9. SVA retrotransposon insertion-associated deletion represents a novel mutational mechanism underlying large genomic copy number changes with non-recurrent breakpoints

    Science.gov (United States)

    2014-01-01

    Background Genomic disorders are caused by copy number changes that may exhibit recurrent breakpoints processed by nonallelic homologous recombination. However, region-specific disease-associated copy number changes have also been observed which exhibit non-recurrent breakpoints. The mechanisms underlying these non-recurrent copy number changes have not yet been fully elucidated. Results We analyze large NF1 deletions with non-recurrent breakpoints as a model to investigate the full spectrum of causative mechanisms, and observe that they are mediated by various DNA double strand break repair mechanisms, as well as aberrant replication. Further, two of the 17 NF1 deletions with non-recurrent breakpoints, identified in unrelated patients, occur in association with the concomitant insertion of SINE/variable number of tandem repeats/Alu (SVA) retrotransposons at the deletion breakpoints. The respective breakpoints are refractory to analysis by standard breakpoint-spanning PCRs and are only identified by means of optimized PCR protocols designed to amplify across GC-rich sequences. The SVA elements are integrated within SUZ12P intron 8 in both patients, and were mediated by target-primed reverse transcription of SVA mRNA intermediates derived from retrotranspositionally active source elements. Both SVA insertions occurred during early postzygotic development and are uniquely associated with large deletions of 1 Mb and 867 kb, respectively, at the insertion sites. Conclusions Since active SVA elements are abundant in the human genome and the retrotranspositional activity of many SVA source elements is high, SVA insertion-associated large genomic deletions encompassing many hundreds of kilobases could constitute a novel and as yet under-appreciated mechanism underlying large-scale copy number changes in the human genome. PMID:24958239

  10. Spatiotemporal neural characterization of prediction error valence and surprise during reward learning in humans.

    Science.gov (United States)

    Fouragnan, Elsa; Queirazza, Filippo; Retzler, Chris; Mullinger, Karen J; Philiastides, Marios G

    2017-07-06

    Reward learning depends on accurate reward associations with potential choices. These associations can be attained with reinforcement learning mechanisms using a reward prediction error (RPE) signal (the difference between actual and expected rewards) for updating future reward expectations. Despite an extensive body of literature on the influence of RPE on learning, little has been done to investigate the potentially separate contributions of RPE valence (positive or negative) and surprise (absolute degree of deviation from expectations). Here, we coupled single-trial electroencephalography with simultaneously acquired fMRI, during a probabilistic reversal-learning task, to offer evidence of temporally overlapping but largely distinct spatial representations of RPE valence and surprise. Electrophysiological variability in RPE valence correlated with activity in regions of the human reward network promoting approach or avoidance learning. Electrophysiological variability in RPE surprise correlated primarily with activity in regions of the human attentional network controlling the speed of learning. Crucially, despite the largely separate spatial extent of these representations, our EEG-informed fMRI approach uniquely revealed a linear superposition of the two RPE components in a smaller network encompassing visuo-mnemonic and reward areas. Activity in this network was further predictive of stimulus value updating, indicating a comparable contribution of both signals to reward learning.
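The valence/surprise split described above is simple to state: a minimal sketch of the decomposition inside a Rescorla-Wagner style value update (the learning rate and reward values are hypothetical, not taken from the study):

```python
def rpe_components(reward, expected):
    """Split the reward prediction error into its valence and surprise parts."""
    rpe = reward - expected
    valence = 1 if rpe >= 0 else -1     # positive vs. negative prediction error
    surprise = abs(rpe)                 # absolute deviation from expectation
    return rpe, valence, surprise

ALPHA = 0.25  # learning rate (hypothetical)

expected = 0.5
rpe, valence, surprise = rpe_components(reward=1.0, expected=expected)
expected += ALPHA * rpe  # value update driven by the full signed RPE
print(valence, surprise, expected)  # 1 0.5 0.625
```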

  11. Escape from washing out of baryon number in a two-zero-texture general Zee model compatible with the large mixing angle MSW solution

    International Nuclear Information System (INIS)

    Hasegawa, K.; Lim, C.S.; Ogure, K.

    2003-01-01

    We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario

  12. Escape from washing out of baryon number in a two-zero-texture general Zee model compatible with the large mixing angle MSW solution

    Science.gov (United States)

    Hasegawa, K.; Lim, C. S.; Ogure, K.

    2003-09-01

    We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario.

  13. Escape from washing out of baryon number in a two-zero-texture general Zee model compatible with the large mixing angle MSW solution

    OpenAIRE

    Hasegawa, K.; Lim, C. S.; Ogure, K.

    2003-01-01

    We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario.

  14. Numbers: their history and meaning

    CERN Document Server

    Flegg, Graham

    2003-01-01

    Readable, jargon-free book examines the earliest endeavors to count and record numbers, initial attempts to solve problems by using equations, and origins of infinite cardinal arithmetic. "Surprisingly exciting." - Choice.

  15. On the surprising rigidity of the Pauli exclusion principle

    International Nuclear Information System (INIS)

    Greenberg, O.W.

    1989-01-01

    I review recent attempts to construct a local quantum field theory of small violations of the Pauli exclusion principle and suggest a qualitative reason for the surprising rigidity of the Pauli principle. I suggest that small violations can occur in our four-dimensional world as a consequence of the compactification of a higher-dimensional theory in which the exclusion principle is exactly valid. I briefly mention a recent experiment which places a severe limit on possible violations of the exclusion principle. (orig.)

  16. Teacher Supply and Demand: Surprises from Primary Research

    Directory of Open Access Journals (Sweden)

    Andrew J. Wayne

    2000-09-01

    Full Text Available An investigation of primary research studies on public school teacher supply and demand revealed four surprises. Projections show that enrollments are leveling off. Relatedly, annual hiring increases should be only about two or three percent over the next few years. Results from studies of teacher attrition also yield unexpected results. Excluding retirements, only about one in 20 teachers leaves each year, and the novice teachers who quit mainly cite personal and family reasons, not job dissatisfaction. Each of these findings broadens policy makers' options for teacher supply.

  17. Estimations of expectedness and potential surprise in possibility theory

    Science.gov (United States)

    Prade, Henri; Yager, Ronald R.

    1992-01-01

    This note investigates how various ideas of 'expectedness' can be captured in the framework of possibility theory. Particularly, we are interested in trying to introduce estimates of the kind of lack of surprise expressed by people when saying 'I would not be surprised that...' before an event takes place, or by saying 'I knew it' after its realization. In possibility theory, a possibility distribution is supposed to model the relative levels of mutually exclusive alternatives in a set, or equivalently, the alternatives are assumed to be rank-ordered according to their level of possibility to take place. Four basic set-functions associated with a possibility distribution, including standard possibility and necessity measures, are discussed from the point of view of what they estimate when applied to potential events. Extensions of these estimates based on the notions of Q-projection or OWA operators are proposed when only significant parts of the possibility distribution are retained in the evaluation. The case of partially-known possibility distributions is also considered. Some potential applications are outlined.
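The two standard set functions mentioned, possibility and necessity measures, can be sketched in a few lines (the outcome labels and possibility values below are illustrative only):

```python
def possibility(event, pi):
    """Pi(A) = max over x in A of pi(x): how unsurprising it would be if A occurred."""
    return max(pi[x] for x in event)

def necessity(event, pi):
    """N(A) = 1 - Pi(complement of A): how certain the event A is."""
    complement = set(pi) - set(event)
    return 1.0 - possibility(complement, pi) if complement else 1.0

# Illustrative possibility distribution over mutually exclusive alternatives
# (normalized: at least one alternative is fully possible).
pi = {"sunny": 1.0, "cloudy": 0.7, "rain": 0.3, "snow": 0.1}

print(possibility({"rain", "snow"}, pi))   # 0.3
print(necessity({"sunny", "cloudy"}, pi))  # 0.7
```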

  18. How much can the number of jabiru stork (Ciconiidae) nests vary due to change of flood extension in a large Neotropical floodplain?

    Directory of Open Access Journals (Sweden)

    Guilherme Mourão

    2010-10-01

    Full Text Available The jabiru stork, Jabiru mycteria (Lichtenstein, 1819), a large, long-legged wading bird occurring in lowland wetlands from southern Mexico to northern Argentina, is considered endangered in a large portion of its distribution range. We conducted aerial surveys to estimate the number of jabiru active nests in the Brazilian Pantanal (140,000 km²) in September of 1991-1993, 1998, 2000-2002, and 2004. Corrected densities of active nests were regressed against the annual hydrologic index (AHI), an index of flood extension in the Pantanal based on the water level of the Paraguay River. Annual nest density was a non-linear function of the AHI, modeled by the equation 6.5 × 10⁻⁸ · AHI^1.99 (corrected r² = 0.72, n = 7). We applied this model to the AHI between 1900 and 2004. The results indicate that the number of jabiru nests may have varied from about 220 in 1971 to more than 23,000 in the nesting season of 1921, and the estimates for our study period (1991 to 2004) averaged about 12,400 nests. Our model indicates that the inter-annual variations in flooding extent can determine dramatic changes in the number of active jabiru nests. Since the jabiru stork responds negatively to drier conditions in the Pantanal, direct human-induced changes in the hydrological patterns, as well as the effects of global climate change, may strongly jeopardize the population in the region.
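Reading the fitted model as density = 6.5×10⁻⁸ · AHI^1.99, it is easy to apply; a sketch (the AHI value below and the density-times-area step are illustrative assumptions, not from the paper):

```python
def nest_density(ahi):
    """Corrected density of active jabiru nests as a power law of the
    annual hydrologic index (AHI), per the fitted model in the abstract."""
    return 6.5e-8 * ahi ** 1.99

PANTANAL_AREA_KM2 = 140_000  # study area quoted in the abstract

# Hypothetical AHI value, chosen only to show the shape of the relation;
# treating the fitted density as nests per km^2 is an assumption here.
ahi = 1000.0
nests = nest_density(ahi) * PANTANAL_AREA_KM2
print(round(nests))
```

Because the exponent is close to 2, halving the AHI cuts the predicted nest count roughly fourfold, which is why the abstract's reconstructed counts swing from hundreds to tens of thousands between dry and wet years.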

  19. Colour by Numbers

    Science.gov (United States)

    Wetherell, Chris

    2017-01-01

    This is an edited extract from the keynote address given by Dr. Chris Wetherell at the 26th Biennial Conference of the Australian Association of Mathematics Teachers Inc. The author investigates the surprisingly rich structure that exists within a simple arrangement of numbers: the times tables.

  20. Aerodynamic Effects of High Turbulence Intensity on a Variable-Speed Power-Turbine Blade With Large Incidence and Reynolds Number Variations

    Science.gov (United States)

    Flegel, Ashlie B.; Giel, Paul W.; Welch, Gerard E.

    2014-01-01

    The effects of high inlet turbulence intensity on the aerodynamic performance of a variable speed power turbine blade are examined over large incidence and Reynolds number ranges. These results are compared to previous measurements made in a low turbulence environment. Both high and low turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The current study extends this to LPT-relevant turbulence levels while perhaps sacrificing transitional flow effects. Assessing the effects of turbulence at these large incidence and Reynolds number variations complements the existing database. Downstream total pressure and exit angle data were acquired for 10 incidence angles ranging from +15.8° to -51.0°. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12×10^5 to 2.12×10^6 and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hotwire located 0.415 axial-chord upstream of the blade row. The inlet turbulence levels ranged from 8 to 15 percent for the current study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated as needed for computational inlet boundary conditions. Downstream flow field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7 percent axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction surface separation that was evident at many of the low Tu conditions has been eliminated. At

  1. Multiple-relaxation-time lattice Boltzmann model for incompressible miscible flow with large viscosity ratio and high Péclet number

    Science.gov (United States)

    Meng, Xuhui; Guo, Zhaoli

    2015-10-01

    A lattice Boltzmann model with a multiple-relaxation-time (MRT) collision operator is proposed for incompressible miscible flow with a large viscosity ratio as well as a high Péclet number in this paper. The equilibria in the present model are motivated by the lattice kinetic scheme previously developed by Inamuro et al. [Philos. Trans. R. Soc. London, Ser. A 360, 477 (2002), 10.1098/rsta.2001.0942]. The fluid viscosity and diffusion coefficient depend on both the corresponding relaxation times and additional adjustable parameters in this model. As a result, the corresponding relaxation times can be adjusted in proper ranges to enhance the performance of the model. Numerical validations of the Poiseuille flow and a diffusion-reaction problem demonstrate that the proposed model has second-order accuracy in space. Thereafter, the model is used to simulate flow through a porous medium, and the results show that the proposed model has the advantage to obtain a viscosity-independent permeability, which makes it a robust method for simulating flow in porous media. Finally, a set of simulations are conducted on the viscous miscible displacement between two parallel plates. The results reveal that the present model can be used to simulate, to a high level of accuracy, flows with large viscosity ratios and/or high Péclet numbers. Moreover, the present model is shown to provide superior stability in the limit of high kinematic viscosity. In summary, the numerical results indicate that the present lattice Boltzmann model is an ideal numerical tool for simulating flow with a large viscosity ratio and/or a high Péclet number.
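For context on why the relaxation times control viscosity and diffusivity, the standard lattice Boltzmann relations can be sketched as follows (these are the plain BGK formulas in lattice units, not the additional adjustable parameters specific to this MRT model):

```python
CS2 = 1.0 / 3.0  # lattice sound speed squared for D2Q9 (lattice units)

def viscosity(tau_nu, dt=1.0):
    """Kinematic viscosity implied by the momentum relaxation time tau_nu."""
    return CS2 * (tau_nu - 0.5) * dt

def diffusivity(tau_d, dt=1.0):
    """Diffusion coefficient implied by the scalar relaxation time tau_d."""
    return CS2 * (tau_d - 0.5) * dt

# A large viscosity ratio between two fluids maps to a ratio of (tau - 1/2)
# factors, which is what pushes plain BGK schemes toward instability.
ratio = viscosity(2.0) / viscosity(0.6)
print(ratio)  # ~15
```

The tight coupling between τ and the transport coefficients is exactly what the extra adjustable parameters in the paper's model are meant to relax, so that τ can stay in a numerically well-behaved range.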

  2. Influence of Extrinsic Information Scaling Coefficient on Double-Iterative Decoding Algorithm for Space-Time Turbo Codes with Large Number of Antennas

    Directory of Open Access Journals (Sweden)

    TRIFINA, L.

    2011-02-01

    Full Text Available This paper analyzes the influence of the extrinsic information scaling coefficient on a double-iterative decoding algorithm for space-time turbo codes with a large number of antennas. The max-log-APP algorithm is used, scaling both the extrinsic information in the turbo decoder and that used at the input of the interference-canceling block. Scaling coefficients of 0.7 or 0.75 lead to a 0.5 dB coding gain compared to the no-scaling case, for one or more iterations to cancel the spatial interferences.
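The scaling operation itself is a one-line damping of the extrinsic log-likelihood ratios before they are fed back as a priori information; a sketch (the LLR values are arbitrary, and `SCALE` reflects the 0.75 coefficient discussed above):

```python
import numpy as np

SCALE = 0.75  # extrinsic-information scaling coefficient from the abstract

def scale_extrinsic(llr_extrinsic):
    """Damp extrinsic LLRs before feeding them back as a priori information,
    compensating for the optimism of the max-log-APP approximation."""
    return SCALE * np.asarray(llr_extrinsic, dtype=float)

print(scale_extrinsic([4.0, -2.0, 0.8]))
```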

  3. Dogs Have the Most Neurons, Though Not the Largest Brain: Trade-Off between Body Mass and Number of Neurons in the Cerebral Cortex of Large Carnivoran Species

    Directory of Open Access Journals (Sweden)

    Débora Jardim-Messeder

    2017-12-01

    Full Text Available Carnivorans are a diverse group of mammals that includes carnivorous, omnivorous and herbivorous, domesticated and wild species, with a large range of brain sizes. Carnivory is one of several factors expected to be cognitively demanding for carnivorans due to a requirement to outsmart larger prey. On the other hand, large carnivoran species have high hunting costs and unreliable feeding patterns, which, given the high metabolic cost of brain neurons, might put them at risk of metabolic constraints regarding how many brain neurons they can afford, especially in the cerebral cortex. For a given cortical size, do carnivoran species have more cortical neurons than the herbivorous species they prey upon? We find they do not; carnivorans (cat, mongoose, dog, hyena, lion) share with non-primates, including artiodactyls (the typical prey of large carnivorans), roughly the same relationship between cortical mass and number of neurons, which suggests that carnivorans are subject to the same evolutionary scaling rules as other non-primate clades. However, there are a few important exceptions. Carnivorans stand out in that the usual relationship between larger body, larger cortical mass and larger number of cortical neurons only applies to small and medium-sized species, and not beyond dogs: we find that the golden retriever dog has more cortical neurons than the striped hyena, African lion and even brown bear, even though the latter species have up to three times larger cortices than dogs. Remarkably, the brown bear cerebral cortex, the largest examined, only has as many neurons as the ten times smaller cat cerebral cortex, although it does have the expected ten times as many non-neuronal cells in the cerebral cortex compared to the cat. We also find that raccoons have dog-like numbers of neurons in their cat-sized brain, which makes them comparable to primates in neuronal density. Comparison of domestic and wild species suggests that the neuronal

  4. Enhancement of phase space density by increasing trap anisotropy in a magneto-optical trap with a large number of atoms

    International Nuclear Information System (INIS)

    Vengalattore, M.; Conroy, R.S.; Prentiss, M.G.

    2004-01-01

    The phase space density of dense, cylindrical clouds of atoms in a 2D magneto-optic trap is investigated. For a large number of trapped atoms (>10^8), the density of a spherical cloud is limited by photon reabsorption. However, as the atom cloud is deformed to reduce the radial optical density, the temperature of the atoms decreases due to the suppression of multiple scattering, leading to an increase in the phase space density. A phase space density of 2×10^-4 has been achieved in a magneto-optic trap containing 2×10^8 atoms.

  5. Timoides agassizii Bigelow, 1904, a little-known hydromedusa (Cnidaria), appears briefly in large numbers off Oman, March 2011, with additional notes about species of the genus Timoides.

    Science.gov (United States)

    Purushothaman, Jasmine; Kharusi, Lubna Al; Mills, Claudia E; Ghielani, Hamed; Marzouki, Mohammad Al

    2013-12-11

    A bloom of the hydromedusan jellyfish, Timoides agassizii, occurred in February 2011 off the coast of Sohar, Al Batinah, Sultanate of Oman, in the Gulf of Oman. This species was first observed in 1902 in great numbers off Haddummati Atoll in the Maldive Islands in the Indian Ocean and has rarely been seen since. The species appeared briefly in large numbers off Oman in 2011 and subsequent observation of our 2009 samples of zooplankton from Sohar revealed that it was also present in low numbers (two collected) in one sample in 2009; these are the first records in the Indian Ocean north of the Maldives. Medusae collected off Oman were almost identical to those recorded previously from the Maldive Islands, Papua New Guinea, the Marshall Islands, Guam, the South China Sea, and Okinawa. T. agassizii is a species that likely lives for several months. It was present in our plankton samples together with large numbers of the oceanic siphonophore Physalia physalis only during a single month's samples, suggesting that the temporary bloom off Oman was likely due to the arrival of mature, open ocean medusae into nearshore waters. We see no evidence that T. agassizii has established a new population along Oman, since if so, it would likely have been present in more than one sample period. We are unable to deduce further details of the life cycle of this species from blooms of many mature individuals nearshore, about a century apart. Examination of a single damaged T. agassizii medusa from Guam, calls into question the existence of its congener, T. latistyla, known only from a single specimen.

  6. Cloud Surprises in Moving NASA EOSDIS Applications into Amazon Web Services

    Science.gov (United States)

    Mclaughlin, Brett

    2017-01-01

    NASA ESDIS has been moving a variety of data ingest, distribution, and science data processing applications into a cloud environment over the last 2 years. As expected, there have been a number of challenges in migrating primarily on-premises applications into a cloud-based environment, related to architecture and taking advantage of cloud-based services. What was not expected was a number of issues that went beyond purely technical application re-architectures. We ran into surprising network policy limitations, billing challenges in a government-based cost model, and difficulty in obtaining certificates in a NASA security-compliant manner. On the other hand, this approach has allowed us to move a number of applications from local hosting to the cloud in a matter of hours (yes, hours!!), and our CMR application now services 95% of granule searches and an astonishing 99% of all collection searches in under a second. And most surprising of all, well, you'll just have to wait and see the realization that caught our entire team off guard!

  7. The Surprising Impact of Seat Location on Student Performance

    Science.gov (United States)

    Perkins, Katherine K.; Wieman, Carl E.

    2005-01-01

    Every physics instructor knows that the most engaged and successful students tend to sit at the front of the class and the weakest students tend to sit at the back. However, it is normally assumed that this is merely an indication of the respective seat location preferences of weaker and stronger students. Here we present evidence suggesting that in fact this may be mixing up the cause and effect. It may be that the seat selection itself contributes to whether the student does well or poorly, rather than the other way around. While a number of studies have looked at the effect of seat location on students, the results are often inconclusive, and few, if any, have studied the effects in college classrooms with randomly assigned seats. In this paper, we report on our observations of a large introductory physics course in which we randomly assigned students to particular seat locations at the beginning of the semester. Seat location during the first half of the semester had a noticeable impact on student success in the course, particularly in the top and bottom parts of the grade distribution. Students sitting in the back of the room for the first half of the term were nearly six times as likely to receive an F as students who started in the front of the room. A corresponding but less dramatic reversal was evident in the fractions of students receiving As. These effects were in spite of many unusual efforts to engage students at the back of the class and a front-to-back reversal of seat location halfway through the term. These results suggest there may be inherent detrimental effects of large physics lecture halls that need to be further explored.

  8. Physics Nobel prize 2004: Surprising theory wins physics Nobel

    CERN Multimedia

    2004-01-01

    From left to right: David Politzer, David Gross and Frank Wilczek. For their understanding of counter-intuitive aspects of the strong force, which governs quarks inside protons and neutrons, on 5 October three American physicists were awarded the 2004 Nobel Prize in Physics. David J. Gross (Kavli Institute of Theoretical Physics, University of California, Santa Barbara), H. David Politzer (California Institute of Technology), and Frank Wilczek (Massachusetts Institute of Technology) made a key theoretical discovery with a surprising result: the closer quarks are together, the weaker the force - opposite to what is seen with electromagnetism and gravity. Instead, the strong force is analogous to a rubber band stretching, where the force increases as the quarks get farther apart. These physicists discovered this property of quarks, known as asymptotic freedom, in 1973. It later became a key part of the theory of quantum chromodynamics (QCD) and the Standard Model, the current best theory to describe the interac...

  9. Hepatobiliary fascioliasis in non-endemic zones: a surprise diagnosis.

    Science.gov (United States)

    Jha, Ashish Kumar; Goenka, Mahesh Kumar; Goenka, Usha; Chakrabarti, Amrita

    2013-03-01

    Fascioliasis is a zoonotic infection caused by Fasciola hepatica. Because of population migration and international food trade, human fascioliasis is becoming an increasingly recognised entity in non-endemic zones. In most parts of Asia, hepatobiliary fascioliasis is sporadic. Human hepatobiliary infection by this trematode has two distinct phases: an acute hepatic phase and a chronic biliary phase. Hepatobiliary infection is mostly associated with intense peripheral eosinophilia. In addition to the classically defined hepatic-phase and biliary-phase fascioliasis, some cases may show an overlap of these two phases. Chronic liver abscess formation is a rare presentation. We describe a surprising case of hepatobiliary fascioliasis in a patient who presented to us with a liver abscess without intense peripheral eosinophilia, a rare presentation of human fascioliasis, especially in non-endemic zones. Copyright © 2013 Arab Journal of Gastroenterology. Published by Elsevier Ltd. All rights reserved.

  10. Exploring the concept of climate surprises. A review of the literature on the concept of surprise and how it is related to climate change

    International Nuclear Information System (INIS)

    Glantz, M.H.; Moore, C.M.; Streets, D.G.; Bhatti, N.; Rosa, C.H.

    1998-01-01

    This report examines the concept of climate surprise and its implications for environmental policymaking. Although most integrated assessment models of climate change deal with average values of change, it is usually the extreme events or surprises that cause the most damage to human health and property. Current models do not help the policymaker decide how to deal with climate surprises. This report examines the literature of surprise in many aspects of human society: psychology, military, health care, humor, agriculture, etc. It draws together various ways to consider the concept of surprise and examines different taxonomies of surprise that have been proposed. In many ways, surprise is revealed to be a subjective concept, triggered by such factors as prior experience, belief system, and level of education. How policymakers have reacted to specific instances of climate change or climate surprise in the past is considered, particularly with regard to the choices they made between proactive and reactive measures. Finally, the report discusses techniques used in the current generation of assessment models and makes suggestions as to how climate surprises might be included in future models. The report concludes that some kinds of surprises are simply unpredictable, but there are several types that could in some way be anticipated and assessed, and their negative effects forestalled.

  11. Exploring the concept of climate surprises. A review of the literature on the concept of surprise and how it is related to climate change

    Energy Technology Data Exchange (ETDEWEB)

    Glantz, M.H.; Moore, C.M. [National Center for Atmospheric Research, Boulder, CO (United States); Streets, D.G.; Bhatti, N.; Rosa, C.H. [Argonne National Lab., IL (United States). Decision and Information Sciences Div.; Stewart, T.R. [State Univ. of New York, Albany, NY (United States)

    1998-01-01

    This report examines the concept of climate surprise and its implications for environmental policymaking. Although most integrated assessment models of climate change deal with average values of change, it is usually the extreme events or surprises that cause the most damage to human health and property. Current models do not help the policymaker decide how to deal with climate surprises. This report examines the literature of surprise in many aspects of human society: psychology, military, health care, humor, agriculture, etc. It draws together various ways to consider the concept of surprise and examines different taxonomies of surprise that have been proposed. In many ways, surprise is revealed to be a subjective concept, triggered by such factors as prior experience, belief system, and level of education. How policymakers have reacted to specific instances of climate change or climate surprise in the past is considered, particularly with regard to the choices they made between proactive and reactive measures. Finally, the report discusses techniques used in the current generation of assessment models and makes suggestions as to how climate surprises might be included in future models. The report concludes that some kinds of surprises are simply unpredictable, but there are several types that could in some way be anticipated and assessed, and their negative effects forestalled.

  12. SU-E-T-230: Creating a Large Number of Focused Beams with Variable Patient Head Tilt to Improve Dose Fall-Off for Brain Radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Chiu, J; Ma, L [Department of Radiation Oncology, University of California San Francisco School of Medicine, San Francisco, CA (United States)

    2015-06-15

    Purpose: To develop a treatment delivery and planning strategy by increasing the number of beams to minimize dose to brain tissue surrounding a target, while maximizing dose coverage to the target. Methods: We analyzed 14 different treatment plans via Leksell PFX and 4C. For standardization, single tumor cases were chosen. Original treatment plans were compared with two optimized plans. The number of beams was increased in treatment plans by varying tilt angles of the patient head, while maintaining the original isocenter, the beam positions in the x-, y- and z-axes, collimator size, and beam blocking. PFX optimized plans increased beam numbers with three pre-set tilt angles (70°, 90°, and 110°), and 4C optimized plans increased beam numbers with tilt angles chosen arbitrarily in the range of 30° to 150°. Optimized treatment plans were compared dosimetrically with original treatment plans. Results: Comparing total normal tissue isodose volumes between original and optimized plans, the low-level percentage isodose volumes decreased in all plans. Despite increasing the number of beams by up to a factor of 25, beam-on times for 1 tilt angle versus 3 or more tilt angles were comparable (<1 min). In 64% (9/14) of the studied cases, the volume percentage decreased by >5%, with the highest value reaching 19%. The addition of more tilt angles correlates with a greater decrease in normal brain irradiated volume. Selectivity and coverage for original and optimized plans remained comparable. Conclusion: Adding a large number of additional focused beams with variable patient head tilt improves dose fall-off for brain radiosurgery. The study demonstrates the technical feasibility of adding beams to reduce dose to normal tissue.

  13. SU-E-T-230: Creating a Large Number of Focused Beams with Variable Patient Head Tilt to Improve Dose Fall-Off for Brain Radiosurgery

    International Nuclear Information System (INIS)

    Chiu, J; Ma, L

    2015-01-01

    Purpose: To develop a treatment delivery and planning strategy by increasing the number of beams to minimize dose to brain tissue surrounding a target, while maximizing dose coverage to the target. Methods: We analyzed 14 different treatment plans via Leksell PFX and 4C. For standardization, single tumor cases were chosen. Original treatment plans were compared with two optimized plans. The number of beams was increased in treatment plans by varying tilt angles of the patient head, while maintaining the original isocenter, the beam positions in the x-, y- and z-axes, collimator size, and beam blocking. PFX optimized plans increased beam numbers with three pre-set tilt angles (70°, 90°, and 110°), and 4C optimized plans increased beam numbers with tilt angles chosen arbitrarily in the range of 30° to 150°. Optimized treatment plans were compared dosimetrically with original treatment plans. Results: Comparing total normal tissue isodose volumes between original and optimized plans, the low-level percentage isodose volumes decreased in all plans. Despite increasing the number of beams by up to a factor of 25, beam-on times for 1 tilt angle versus 3 or more tilt angles were comparable (<1 min). In 64% (9/14) of the studied cases, the volume percentage decreased by >5%, with the highest value reaching 19%. The addition of more tilt angles correlates with a greater decrease in normal brain irradiated volume. Selectivity and coverage for original and optimized plans remained comparable. Conclusion: Adding a large number of additional focused beams with variable patient head tilt improves dose fall-off for brain radiosurgery. The study demonstrates the technical feasibility of adding beams to reduce dose to normal tissue.

  14. Global repeat discovery and estimation of genomic copy number in a large, complex genome using a high-throughput 454 sequence survey

    Directory of Open Access Journals (Sweden)

    Varala Kranthi

    2007-05-01

    Background: Extensive computational and database tools are available to mine genomic and genetic databases for model organisms, but little genomic data is available for many species of ecological or agricultural significance, especially those with large genomes. Genome surveys using conventional sequencing techniques are powerful, particularly for detecting sequences present in many copies per genome. However, these methods are time-consuming and have potential drawbacks. High-throughput 454 sequencing provides an alternative method by which much information can be gained quickly and cheaply from high-coverage surveys of genomic DNA. Results: We sequenced 78 million base pairs of randomly sheared soybean DNA which passed our quality criteria. Computational analysis of the survey sequences provided global information on the abundant repetitive sequences in soybean. The sequence was used to determine the copy number across regions of large genomic clones or contigs and to discover higher-order structures within satellite repeats. We have created an annotated, online database of sequences present in multiple copies in the soybean genome. The low bias of pyrosequencing against repeat sequences is demonstrated by the overall composition of the survey data, which matches well with past estimates of repetitive DNA content obtained by DNA re-association kinetics (Cot analysis). Conclusion: This approach provides a potential aid to conventional or shotgun genome assembly, by allowing rapid assessment of copy number in any clone or clone-end sequence. In addition, we show that partial sequencing can provide access to partial protein-coding sequences.
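
    The copy-number estimate described here rests on a depth-ratio argument: in an unbiased random survey, a sequence present in k copies per genome should attract roughly k times the read hits expected for a single-copy locus. A minimal sketch of that calculation (the read counts, read length, and genome size below are hypothetical, not the study's figures):

```python
def expected_single_copy_hits(n_reads: int, genome_bp: float,
                              locus_bp: float, read_bp: float) -> float:
    """Expected number of random survey reads overlapping a single-copy
    locus: each read of length r can overlap a locus of length L from
    (L + r) start positions out of the whole genome."""
    return n_reads * (locus_bp + read_bp) / genome_bp

def copy_number(observed_hits: int, n_reads: int, genome_bp: float,
                locus_bp: float, read_bp: float) -> float:
    """Depth-ratio copy-number estimate: observed hits / expected hits."""
    return observed_hits / expected_single_copy_hits(
        n_reads, genome_bp, locus_bp, read_bp)

# Hypothetical numbers: 780k reads of ~100 bp over a ~1.1 Gbp genome,
# with 6600 reads matching a 1 kb repeat element
cn = copy_number(observed_hits=6600, n_reads=780_000,
                 genome_bp=1.1e9, locus_bp=1000, read_bp=100)
print(f"estimated copies per genome ~ {cn:.0f}")
```

    The same ratio can be applied per clone or clone-end sequence, which is the rapid copy-number assessment the conclusion refers to.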

  15. Use of Two-Body Correlated Basis Functions with van der Waals Interaction to Study the Shape-Independent Approximation for a Large Number of Trapped Interacting Bosons

    Science.gov (United States)

    Lekala, M. L.; Chakrabarti, B.; Das, T. K.; Rampho, G. J.; Sofianos, S. A.; Adam, R. M.; Haldar, S. K.

    2017-05-01

    We study the ground-state and the low-lying excitations of a trapped Bose gas in an isotropic harmonic potential for very small (˜ 3) to very large (˜ 10^7) particle numbers. We use the two-body correlated basis functions and the shape-dependent van der Waals interaction in our many-body calculations. We present an exhaustive study of the effect of inter-atomic correlations and the accuracy of the mean-field equations considering a wide range of particle numbers. We calculate the ground-state energy and the one-body density for different values of the van der Waals parameter C6. We compare our results with those of the modified Gross-Pitaevskii results, the correlated Hartree hypernetted-chain equations (which also utilize the two-body correlated basis functions), as well as of the diffusion Monte Carlo for hard sphere interactions. We observe the effect of the attractive tail of the van der Waals potential in the calculations of the one-body density over the truly repulsive zero-range potential as used in the Gross-Pitaevskii equation and discuss the finite-size effects. We also present the low-lying collective excitations which are well described by a hydrodynamic model in the large particle limit.
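
    For reference, the mean-field baseline against which the correlated-basis results are compared is the Gross-Pitaevskii description; a standard form of the stationary Gross-Pitaevskii equation for N trapped bosons (quoted from the general literature, not from this record) is

```latex
\left[ -\frac{\hbar^{2}}{2m}\nabla^{2} + \frac{1}{2} m \omega^{2} r^{2}
       + g\,(N-1)\,\lvert \psi(\mathbf{r}) \rvert^{2} \right] \psi(\mathbf{r})
  = \mu\, \psi(\mathbf{r}),
\qquad g = \frac{4\pi \hbar^{2} a_{s}}{m},
```

    where a_s is the s-wave scattering length and μ the chemical potential. The zero-range contact interaction behind g is exactly the "truly repulsive zero-range potential" mentioned above, whose attractive −C6/r^6 tail is restored in the van der Waals calculation.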

  16. Evaluation of list-mode ordered subset expectation maximization image reconstruction for pixelated solid-state compton gamma camera with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For a Compton camera, especially one with a large number of readout channels, image reconstruction presents a major challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources, and with a distance of 100 mm between the field of view and the first detector plane, which corresponds to a realistic nuclear medicine environment.

  17. Comparative efficacy of tulathromycin versus a combination of florfenicol-oxytetracycline in the treatment of undifferentiated respiratory disease in large numbers of sheep

    Directory of Open Access Journals (Sweden)

    Mohsen Champour

    2015-09-01

    The objective of this study was to compare the efficacy of tulathromycin (TUL) with a combination of florfenicol (FFC) and long-acting oxytetracycline (LAOTC) in the treatment of naturally occurring undifferentiated respiratory diseases in large numbers of sheep. In this study, seven natural outbreaks of sheep pneumonia in Garmsar, Iran were considered. From these outbreaks, 400 sheep exhibiting the signs of respiratory diseases were selected, and the sheep were randomly divided into two equal groups. The first group was treated with a single injection of TUL (dosed at 2.5 mg/kg body weight), and the second group was treated with concurrent injections of FFC (dosed at 40 mg/kg bwt) and LAOTC (dosed at 20 mg/kg bwt). In the first group, 186 (93%) sheep were found to be cured 5 days after the injection, and 14 (7%) sheep needed further treatment, of which 6 (3%) were cured, and 8 (4%) died. In the second group, 172 (86%) sheep were cured after the injections, but 28 (14%) sheep needed further treatment, of which 10 (5%) were cured, and 18 (9%) died. This study revealed that TUL was more efficacious than the combined treatment using FFC and LAOTC. This field trial is the first report describing the successful treatment of undifferentiated respiratory diseases in large numbers of sheep. Thus, TUL can be used for the treatment of undifferentiated respiratory diseases in sheep. [J Adv Vet Anim Res 2015; 2(3): 279-284]

  18. Aeolian comminution experiments revealing surprising sandball mineral aggregates

    Science.gov (United States)

    Nørnberg, P.; Bak, E.; Finster, K.; Gunnlaugsson, H. P.; Iversen, J. J.; Jensen, S. Knak; Merrison, J. P.

    2014-06-01

    We have undertaken a set of wind erosion experiments on a simple and well-defined mineral, quartz. In these experiments wind action is simulated by end-over-end tumbling of quartz grains in a sealed quartz flask. The tumbling induces collisions among the quartz grains and with the walls of the flask. This process simulates wind action at an impact speed of ∼1.2 m/s. After several months of tumbling we observed the formation of a large number of spherical sand aggregates, which resemble small snowballs under optical microscopy. Under mechanical load the aggregates are seen to be more elastic than quartz, and their mechanical strength is comparable to, though slightly lower than, that of sintered silica aerogels. Aggregates of this kind have not been reported from field sites or from closed circulation systems. However, this may simply reflect their sparse occurrence; alternatively, in nature the concentration of the aggregate-building particles may be so low that they never meet and merely appear as the most fine-grained tail of the sediment particle-size distribution.

  19. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System: Outage-Limited Scenario

    KAUST Repository

    Makki, Behrooz

    2016-03-22

    This paper investigates the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. Our results are obtained for different fading conditions, and the effect of the power amplifier efficiency and the feedback error probability on the performance of the MIMO-HARQ systems is analyzed. Then, we use some recent results on the achievable rates of finite-block-length codes to analyze the effect of the codeword length on the system performance. Moreover, we derive closed-form expressions for the asymptotic performance of the MIMO-HARQ systems when the number of antennas increases. Our analytical and numerical results show that different outage requirements can be satisfied with relatively few transmit/receive antennas.
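
    The kind of question answered analytically in this paper can also be posed numerically: for a given rate and SNR, sweep the antenna count and keep the first one whose Monte Carlo outage estimate meets the target. A sketch under standard open-loop Rayleigh-fading assumptions (the rate, SNR, and trial count are arbitrary choices, and this is not the paper's method):

```python
import numpy as np

def outage_prob(nt: int, nr: int, snr_db: float, rate: float,
                trials: int = 2000, seed: int = 1) -> float:
    """Monte Carlo outage probability of an open-loop MIMO Rayleigh channel,
    C = log2 det(I + (snr/nt) H H^H); outage occurs when C < rate (bps/Hz)."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    outages = 0
    for _ in range(trials):
        H = (rng.standard_normal((nr, nt))
             + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        cap = np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T).real)
        outages += cap < rate
    return outages / trials

def min_antennas(target_outage: float, snr_db: float, rate: float,
                 max_n: int = 16):
    """Smallest symmetric antenna count nt = nr meeting the outage target."""
    for n in range(1, max_n + 1):
        if outage_prob(n, n, snr_db, rate) <= target_outage:
            return n
    return None

print(min_antennas(target_outage=0.01, snr_db=10, rate=4.0))
```

    The sweep illustrates the paper's qualitative message: outage falls very quickly with the antenna count, so modest arrays already satisfy tight outage constraints.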

  20. Long-term changes in nutrients and mussel stocks are related to numbers of breeding eiders Somateria mollissima at a large Baltic colony.

    Directory of Open Access Journals (Sweden)

    Karsten Laursen

    BACKGROUND: The Baltic/Wadden Sea eider Somateria mollissima flyway population is decreasing, and this trend is also reflected in the large eider colony at Christiansø, situated in the Baltic Sea. This colony showed a 15-fold increase from 1925 until the mid-1990s, followed by a rapid decline in recent years, although the causes of this trend remain unknown. Most birds from the colony winter in the Wadden Sea, for which environmental data and information on the stock size of their main diet, the mussel Mytilus edulis, exist. We hypothesised that changes in nutrients and water temperature in the Wadden Sea had an effect on the ecosystem affecting the size of mussel stocks, the principal food item for eiders, thereby influencing the number of breeding eiders in the Christiansø colony. METHODOLOGY/PRINCIPAL FINDINGS: A positive relationship between the amount of fertilizer used by farmers and the concentration of phosphorus in the Wadden Sea (with a time lag of one year) allowed analysis of the predictions concerning effects of nutrients for the period 1925-2010. (1) Increasing amounts of fertilizer used in agriculture increased the amount of nutrients in the marine environment, thereby increasing the mussel stocks in the Wadden Sea. (2) The number of eiders at Christiansø increased when the amount of fertilizer increased. Finally, (3) the number of eiders in the colony at Christiansø increased with the size of the mussel stocks in the Wadden Sea. CONCLUSIONS/SIGNIFICANCE: The trend in the number of eiders at Christiansø is representative of the entire flyway population, and since nutrient reduction in the marine environment occurs in most parts of Northwest Europe, we hypothesize that this environmental candidate parameter is involved in the overall regulation of the Baltic/Wadden Sea eider population during recent decades.

  1. Aerodynamic Effects of Turbulence Intensity on a Variable-Speed Power-Turbine Blade with Large Incidence and Reynolds Number Variations

    Science.gov (United States)

    Flegel, Ashlie Brynn; Giel, Paul W.; Welch, Gerard E.

    2014-01-01

    The effects of inlet turbulence intensity on the aerodynamic performance of a variable-speed power-turbine blade are examined over large incidence and Reynolds number ranges. Both high- and low-turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The high turbulence study extends this to LPT-relevant turbulence levels, while perhaps sacrificing transitional flow effects. Downstream total pressure and exit angle data were acquired for ten incidence angles ranging from +15.8° to −51.0°. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12×10^5 to 2.12×10^6 and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hot-wire probe located 0.415 axial chord upstream of the blade row. The inlet turbulence levels ranged from 0.25-0.4% for the low-Tu tests and 8-15% for the high-Tu study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated as needed for computational inlet boundary conditions. Downstream flow field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7% axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction surface separation that was evident at many of the low-Tu conditions has been eliminated. At the extreme positive and negative incidence angles, the data show substantial differences in the exit flow field.
These differences are attributable to both the higher inlet Tu directly and to the thinner inlet endwall

  2. Surprising structures hiding in Penrose’s future null infinity

    Science.gov (United States)

    Newman, Ezra T.

    2017-07-01

    Since the late 1950s, almost all discussions of asymptotically flat (Einstein-Maxwell) space-times have taken place in the context of Penrose's null infinity, I+. In addition, almost all calculations have used the Bondi coordinate and tetrad systems. Beginning with a known asymptotically flat solution to the Einstein-Maxwell equations, we show first that there are other natural coordinate systems near I+ (analogous to light-cones in flat space) that are based on (asymptotically) shear-free null geodesic congruences (analogous to the flat-space case). Using these new coordinates and their associated tetrad, we define the complex dipole moment (the mass dipole plus i times the angular momentum) from the l = 1 harmonic coefficient of a component of the asymptotic Weyl tensor. Second, from this definition, from the Bianchi identities, and from the Bondi-Sachs mass and linear momentum, we show that there exists a large number of results (identifications and dynamics) identical to those of classical mechanics and electrodynamics. They include, among many others, P = Mv + ..., L = r × P, spin, Newton's second law with the rocket force term (Ṁv) and radiation reaction, angular momentum conservation, and others. All these relations take place in the rather mysterious H-space rather than in space-time. This leads to the enigma: ‘why do these well-known relations of classical mechanics take place in H-space?’ and ‘What is the physical meaning of H-space?’

  3. Atom Surprise: Using Theatre in Primary Science Education

    Science.gov (United States)

    Peleg, Ran; Baram-Tsabari, Ayelet

    2011-10-01

    Early exposure to science may have a lifelong effect on children's attitudes towards science and their motivation to learn science in later life. Out-of-class environments can play a significant role in creating favourable attitudes, while contributing to conceptual learning. Educational science theatre is one form of an out-of-class environment, which has received little research attention. This study aims to describe affective and cognitive learning outcomes of watching such a play and to point to connections between theatrical elements and specific outcomes. "Atom Surprise" is a play portraying several concepts on the topic of matter. A mixed methods approach was adopted to investigate the knowledge and attitudes of children (grades 1-6) from two different school settings who watched the play. Data were gathered using questionnaires and in-depth interviews. Analysis suggested that in both schools children's knowledge on the topic of matter increased after the play with younger children gaining more conceptual knowledge than their older peers. In the public school girls showed greater gains in conceptual knowledge than boys. No significant changes in students' general attitudes towards science were found, however, students demonstrated positive changes towards science learning. Theatrical elements that seemed to be important in children's recollection of the play were the narrative, props and stage effects, and characters. In the children's memory, science was intertwined with the theatrical elements. Nonetheless, children could distinguish well between scientific facts and the fictive narrative.

  4. X-rays from comets - a surprising discovery

    CERN Document Server

    CERN. Geneva

    2000-01-01

    Comets are kilometre-size aggregates of ice and dust, which remained from the formation of the solar system. It was not obvious to expect X-ray emission from such objects. Nevertheless, when comet Hyakutake (C/1996 B2) was observed with the ROSAT X-ray satellite during its close approach to Earth in March 1996, bright X-ray emission from this comet was discovered. This finding triggered a search in archival ROSAT data for comets, which might have accidentally crossed the field of view during observations of unrelated targets. To increase the surprise even more, X-ray emission was detected from four additional comets, which were optically 300 to 30 000 times fainter than Hyakutake. For one of them, comet Arai (C/1991 A2), X-ray emission was even found in data which were taken six weeks before the comet was optically discovered. These findings showed that comets represent a new class of celestial X-ray sources. The subsequent detection of X-ray emission from several other comets in dedicated observations confir...

  5. Self-organizing weights for Internet AS-graphs and surprisingly simple routing metrics

    DEFF Research Database (Denmark)

    Scholz, Jan Carsten; Greiner, Martin

    2011-01-01

    The transport capacity of Internet-like communication networks, and hence their efficiency, may be improved by a factor of 5-10 through the use of highly optimized routing metrics, as demonstrated previously. The numerical determination of such routing metrics can be computationally demanding to an extent that prohibits both investigation of and application to very large networks. In an attempt to find a numerically less expensive way of constructing a metric with a comparable performance increase, we propose a local, self-organizing iteration scheme and find two surprisingly simple and efficient metrics. The new metrics have negligible computational cost and result in an approximately 5-fold performance increase, providing distinguished competitiveness with the computationally costly counterparts. They are applicable to very large networks and easy to implement in today's Internet routing...
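
    To see why a cheap local metric can beat plain hop count, consider a toy hub-and-spoke network: weighting each edge by the product of its endpoint degrees steers shortest paths away from the congested hub. A small self-contained sketch (the degree-product metric and the toy topology are illustrative only, and are not claimed to be the metrics or networks studied in the paper):

```python
import heapq
from collections import defaultdict

def dijkstra(adj, weight, src):
    """Shortest-path predecessors under an edge-weight function weight(u, v)."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            nd = d + weight(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return prev

def node_loads(adj, weight):
    """Count how many source-target shortest paths pass through each node."""
    load = defaultdict(int)
    nodes = list(adj)
    for s in nodes:
        prev = dijkstra(adj, weight, s)
        for t in nodes:
            u = t
            while u != s and u in prev:
                load[u] += 1
                u = prev[u]
    return load

# Toy topology: hub node 0 connected to all, plus a ring among nodes 1..5
edges = [(0, i) for i in range(1, 6)] + [(i, i % 5 + 1) for i in range(1, 6)]
adj = defaultdict(list)
for u, v in edges:
    adj[u].append(v); adj[v].append(u)

deg = {u: len(adj[u]) for u in adj}
hop = lambda u, v: 1.0                        # plain hop-count metric
degree_metric = lambda u, v: deg[u] * deg[v]  # penalize routing via hubs

print("hub load, hop metric:   ", node_loads(adj, hop)[0])
print("hub load, degree metric:", node_loads(adj, degree_metric)[0])
```

    Under the degree metric the hub only carries the traffic addressed to it, while the hop metric funnels transit traffic through it; spreading load this way is what raises the network's transport capacity before congestion sets in.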

  6. A large scale survey reveals that chromosomal copy-number alterations significantly affect gene modules involved in cancer initiation and progression

    Directory of Open Access Journals (Sweden)

    Cigudosa Juan C

    2011-05-01

    Full Text Available Abstract Background Recent observations point towards the existence of a large number of neighborhoods composed of functionally related gene modules that lie together in the genome. This local component in the distribution of functionality across chromosomes probably shapes chromosomal architecture itself by limiting the ways in which genes can be arranged and distributed across the genome. As a direct consequence, it is presumable that diseases such as cancer, harboring DNA copy number alterations (CNAs), will have a symptomatology strongly dependent on modules of functionally related genes rather than on a single "important" gene. Methods We carried out a systematic analysis of more than 140,000 observations of CNAs in cancers and searched for enrichment of gene functional modules associated with high frequencies of losses or gains. Results The analysis of CNAs in cancers clearly demonstrates the existence of a significant pattern of loss of gene modules functionally related to cancer initiation and progression, along with amplification of modules of genes related to unspecific defense against xenobiotics (probably chemotherapeutic agents). By extending this analysis to an array-CGH dataset (glioblastomas from The Cancer Genome Atlas), we demonstrate the validity of this approach for investigating the functional impact of CNAs. Conclusions The presented results indicate promising clinical and therapeutic implications. Our findings also point directly to the necessity of adopting a function-centric, rather than gene-centric, view in the understanding of phenotypes or diseases harboring CNAs.

  7. Three-Dimensional Interaction of a Large Number of Dense DEP Particles on a Plane Perpendicular to an AC Electrical Field

    Directory of Open Access Journals (Sweden)

    Chuanchuan Xie

    2017-01-01

    Full Text Available The interaction of dielectrophoretic (DEP) particles in an electric field has been observed in many experiments, known as the "particle chains phenomenon". However, studies in 3D models (spherical particles) are rarely reported due to their complexity and significant computational cost. In this paper, we employed the iterative dipole moment (IDM) method to study the 3D interaction of a large number of dense DEP particles randomly distributed on a plane perpendicular to a uniform alternating current (AC) electric field in a bounded or unbounded space. The numerical results indicated that the particles cannot move out of the initial plane. Similar particles (either all positive or all negative DEP particles) always repelled each other and did not form a chain. Dissimilar particles (a mixture of positive and negative DEP particles) always attracted each other and formed particle chains consisting of alternately arranged positive and negative DEP particles. The resulting chain patterns vary widely depending on the initial particle distribution, the electric properties of the particles/fluid, the particle sizes and the number of particles. It was also found that the particle chain patterns can be effectively manipulated by tuning the frequency of the AC field, and that an almost uniform distribution of particles in a bounded plane chip can be achieved when all of the particles are similar, which may have potential applications in particle manipulation for microfluidics.

  8. CD3+/CD16+CD56+ cell numbers in peripheral blood are correlated with higher tumor burden in patients with diffuse large B-cell lymphoma

    Directory of Open Access Journals (Sweden)

    Anna Twardosz

    2011-04-01

    Full Text Available Diffuse large B-cell lymphoma is the commonest histological type of malignant lymphoma, and remains incurable in many cases. Developing more efficient immunotherapy strategies will require better understanding of the disorders of immune responses in cancer patients. NKT (natural killer-like T) cells were originally described as a unique population of T cells co-expressing NK cell markers. Apart from their role in protecting against microbial pathogens and controlling autoimmune diseases, NKT cells have recently been revealed as one of the key players in immune responses against tumors. The objective of this study was to evaluate the frequency of CD3+/CD16+CD56+ cells in the peripheral blood of 28 diffuse large B-cell lymphoma (DLBCL) patients in correlation with clinical and laboratory parameters. Median percentages of CD3+/CD16+CD56+ cells were significantly lower in patients with DLBCL compared to healthy donors (7.37% vs. 9.01%, p = 0.01; 4.60% vs. 5.81%, p = 0.03), although there were no differences in absolute counts. The frequency and absolute numbers of CD3+/CD16+CD56+ cells were lower in advanced clinical stages than in earlier ones. The median percentage of CD3+/CD16+CD56+ cells in patients in Ann Arbor stages 1–2 was 5.55% vs. 3.15% in stages 3–4 (p = 0.02), with median absolute counts of 0.26 G/L vs. 0.41 G/L, respectively (p = 0.02). The percentage and absolute numbers of CD3+/CD16+CD56+ cells were significantly higher in DLBCL patients without B-symptoms compared to the patients with B-symptoms (5.51% vs. 2.46%, p = 0.04; 0.21 G/L vs. 0.44 G/L, p = 0.04). The percentage of CD3+/CD16+CD56+ cells correlated inversely with serum lactate dehydrogenase (R = –0.445; p < 0.05), which might influence NKT count. These figures suggest a relationship between higher tumor burden and more aggressive disease and decreased NKT numbers. But it remains to be explained whether low NKT cell counts in the peripheral blood of patients with DLBCL are the result

  9. Beyond surprise : A longitudinal study on the experience of visual-tactual incongruities in products

    NARCIS (Netherlands)

    Ludden, G.D.S.; Schifferstein, H.N.J.; Hekkert, P.

    2012-01-01

    When people encounter products with visual-tactual incongruities, they are likely to be surprised because the product feels different than expected. In this paper, we investigate (1) the relationship between surprise and the overall liking of the products, (2) the emotions associated with surprise,

  10. Surprising Incentive: An Instrument for Promoting Safety Performance of Construction Employees

    Directory of Open Access Journals (Sweden)

    Fakhradin Ghasemi

    2015-09-01

    Conclusion: The results of this study showed that the surprising incentive improves employees' safety performance only in the short term, because the surprising value of the incentives dwindles over time. For this reason, and to maintain the surprising value of the incentive system, the amount and types of incentives need to be evaluated and modified annually or biannually.

  11. The Role of Surprise in Game-Based Learning for Mathematics

    NARCIS (Netherlands)

    Wouters, Pieter; van Oostendorp, Herre; ter Vrugte, Judith; Vandercruysse, Sylke; de Jong, Anthonius J.M.; Elen, Jan; De Gloria, Alessandro; Veltkamp, Remco

    2016-01-01

    In this paper we investigate the potential of surprise on learning with prevocational students in the domain of proportional reasoning. Surprise involves an emotional reaction, but it also serves a cognitive goal as it directs attention to explain why the surprising event occurred and to learn for

  12. Human amygdala response to dynamic facial expressions of positive and negative surprise.

    Science.gov (United States)

    Vrticka, Pascal; Lordier, Lara; Bediou, Benoît; Sander, David

    2014-02-01

    Although brain imaging evidence increasingly suggests that the amygdala plays a key role in the processing of novel stimuli, little is known about its role in processing expressed novelty conveyed by surprised faces, and even less about possible interactive encoding of novelty and valence. Those investigations that have already probed human amygdala involvement in the processing of surprised facial expressions either used static pictures displaying negative surprise (as contained in fear) or "neutral" surprise, and manipulated valence by contextually priming or subjectively associating static surprise with either negative or positive information. Therefore, it remains unresolved how the human amygdala differentially processes dynamic surprised facial expressions displaying either positive or negative surprise. Here, we created new artificial dynamic 3-dimensional facial expressions conveying surprise with an intrinsic positive (wonderment) or negative (fear) connotation, but also intrinsic positive (joy) or negative (anxiety) emotions not containing any surprise, in addition to neutral facial displays either containing ("typical surprise" expression) or not containing ("neutral") surprise. Results showed heightened amygdala activity to faces containing positive (vs. negative) surprise, which may either correspond to a specific wonderment effect as such, or to the computation of a negative expected value prediction error. Findings are discussed in the light of data obtained from a closely matched nonsocial lottery task, which revealed overlapping activity within the left amygdala to unexpected positive outcomes. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  13. Quiescent Galaxies in the 3D-HST Survey: Spectroscopic Confirmation of a Large Number of Galaxies with Relatively Old Stellar Populations at z ~ 2

    Science.gov (United States)

    Whitaker, Katherine E.; van Dokkum, Pieter G.; Brammer, Gabriel; Momcheva, Ivelina G.; Skelton, Rosalind; Franx, Marijn; Kriek, Mariska; Labbé, Ivo; Fumagalli, Mattia; Lundgren, Britt F.; Nelson, Erica J.; Patel, Shannon G.; Rix, Hans-Walter

    2013-06-01

    Quiescent galaxies at z ~ 2 have been identified in large numbers based on rest-frame colors, but only a small number of these galaxies have been spectroscopically confirmed, by showing that their rest-frame optical spectra exhibit either strong Balmer or metal absorption lines. Here, we median stack the rest-frame optical spectra for 171 photometrically quiescent galaxies at 1.4 < z < 2.2 from the 3D-HST grism survey. In addition to Hβ (λ4861 Å), we unambiguously identify metal absorption lines in the stacked spectrum, including the G band (λ4304 Å), Mg I (λ5175 Å), and Na I (λ5894 Å). This finding demonstrates that galaxies with relatively old stellar populations already existed when the universe was ~3 Gyr old, and that rest-frame color selection techniques can efficiently select them. We find an average age of 1.3 (+0.1/−0.3) Gyr when fitting a simple stellar population to the entire stack. We confirm our previous result from medium-band photometry that the stellar age varies with the colors of quiescent galaxies: the reddest 80% of galaxies are dominated by metal lines and have a relatively old mean age of 1.6 (+0.5/−0.4) Gyr, whereas the bluest (and brightest) galaxies have strong Balmer lines and a spectroscopic age of 0.9 (+0.2/−0.1) Gyr. Although the spectrum is dominated by an evolved stellar population, we also find [O III] and Hβ emission. Interestingly, this emission is more centrally concentrated than the continuum, with L([O III]) = (1.7 ± 0.3) × 10^40 erg s^-1, indicating residual central star formation or nuclear activity.
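Median stacking of the kind described in this record can be sketched in a few lines: each spectrum is normalized, resampled onto a common rest-frame wavelength grid, and the median is taken per pixel. The sketch below uses synthetic data (a flat continuum with an Hβ-like absorption dip), not 3D-HST spectra; the function and variable names are illustrative only.

```python
import numpy as np

def median_stack(wavelengths_list, fluxes_list, grid):
    """Interpolate each normalized spectrum onto a common rest-frame
    wavelength grid, then take the median across objects at each pixel."""
    resampled = []
    for wl, fl in zip(wavelengths_list, fluxes_list):
        fl_norm = fl / np.nanmedian(fl)              # normalize out overall brightness
        resampled.append(np.interp(grid, wl, fl_norm))
    return np.median(np.vstack(resampled), axis=0)

# Toy demo: three noisy flat spectra sharing an absorption dip near 4861 Å.
grid = np.linspace(4000, 6000, 501)
rng = np.random.default_rng(0)
spectra = []
for _ in range(3):
    wl = np.linspace(3900, 6100, 600)
    fl = 1.0 + 0.05 * rng.standard_normal(wl.size)
    fl -= 0.3 * np.exp(-0.5 * ((wl - 4861) / 15.0) ** 2)   # Hβ-like dip
    spectra.append((wl, fl))
stack = median_stack([s[0] for s in spectra], [s[1] for s in spectra], grid)
print(stack.shape)  # (501,)
```

The median (rather than mean) suppresses outliers such as sky residuals in any single spectrum, which is why it is commonly chosen for stacking faint objects.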

  14. Cloud Surprises Discovered in Moving NASA EOSDIS Applications into Amazon Web Services… and #6 Will Shock You!

    Science.gov (United States)

    McLaughlin, B. D.; Pawloski, A. W.

    2017-12-01

    NASA ESDIS has been moving a variety of data ingest, distribution, and science data processing applications into a cloud environment over the last 2 years. As expected, there have been a number of challenges in migrating primarily on-premises applications into a cloud-based environment, related to architecture and taking advantage of cloud-based services. What was not expected is a number of issues that were beyond purely technical application re-architectures. From surprising network policy limitations, billing challenges in a government-based cost model, and obtaining certificates in an NASA security-compliant manner to working with multiple applications in a shared and resource-constrained AWS account, these have been the relevant challenges in taking advantage of a cloud model. And most surprising of all… well, you'll just have to wait and see the "gotcha" that caught our entire team off guard!

  15. The use of mass spectrometry for analysing metabolite biomarkers in epidemiology: methodological and statistical considerations for application to large numbers of biological samples.

    Science.gov (United States)

    Lind, Mads V; Savolainen, Otto I; Ross, Alastair B

    2016-08-01

    Data quality is critical for epidemiology, and as scientific understanding expands, the range of data available for epidemiological studies and the types of tools used for measurement have also expanded. It is essential for the epidemiologist to have a grasp of the issues involved with different measurement tools. One tool that is increasingly being used for measuring biomarkers in epidemiological cohorts is mass spectrometry (MS), because of the high specificity and sensitivity of MS-based methods and the expanding range of biomarkers that can be measured. Further, the ability of MS to quantify many biomarkers simultaneously is an advantage compared with single-biomarker methods. However, as with all methods used to measure biomarkers, there are a number of pitfalls to consider which may have an impact on results when used in epidemiology. In this review we discuss the use of MS for biomarker analyses, focusing on metabolites and their application and potential issues related to large-scale epidemiology studies, the use of MS "omics" approaches for biomarker discovery and how MS-based results can be used for increasing biological knowledge gained from epidemiological studies. Better understanding of the possibilities and possible problems related to MS-based measurements will help the epidemiologist in their discussions with analytical chemists and lead to the use of the most appropriate statistical tools for these data.

  16. The application of the central limit theorem and the law of large numbers to facial soft tissue depths: T-Table robustness and trends since 2008.

    Science.gov (United States)

    Stephan, Carl N

    2014-03-01

    By pooling independent study means (x¯), the T-Tables use the central limit theorem and law of large numbers to average out study-specific sampling bias and instrument errors and, in turn, triangulate upon human population means (μ). Since their first publication in 2008, new data from >2660 adults have been collected (c.30% of the original sample) making a review of the T-Table's robustness timely. Updated grand means show that the new data have negligible impact on the previously published statistics: maximum change = 1.7 mm at gonion; and ≤1 mm at 93% of all landmarks measured. This confirms the utility of the 2008 T-Table as a proxy to soft tissue depth population means and, together with updated sample sizes (8851 individuals at pogonion), earmarks the 2013 T-Table as the premier mean facial soft tissue depth standard for craniofacial identification casework. The utility of the T-Table, in comparison with shorths and 75-shormaxes, is also discussed. © 2013 American Academy of Forensic Sciences.
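The pooling step this record relies on can be sketched numerically. The code below is a minimal illustration of how averaging many independent, individually biased study means converges on the population mean; it simply weights each study mean by its sample size, which is an assumption for illustration and not necessarily the T-Table's actual weighting scheme, and all data are simulated.

```python
import random

def pooled_mean(study_means, study_ns):
    """Sample-size-weighted grand mean across independent studies."""
    total_n = sum(study_ns)
    return sum(m * n for m, n in zip(study_means, study_ns)) / total_n

# Simulate studies measuring a tissue depth with true mean 10.0 mm, where each
# study carries its own systematic offset; pooling many studies averages the
# offsets out (the law of large numbers applied at the study level).
random.seed(1)
true_mu = 10.0
means, ns = [], []
for _ in range(200):
    bias = random.gauss(0, 0.5)          # study-specific instrument/sampling bias
    n = random.randint(20, 200)
    means.append(true_mu + bias)
    ns.append(n)
print(round(pooled_mean(means, ns), 2))  # prints a value close to 10.0
```

The key property is that study-level biases with zero expectation cancel as the number of pooled studies grows, even though no single study is unbiased.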

  17. Improved estimation of the noncentrality parameter distribution from a large number of t-statistics, with applications to false discovery rate estimation in microarray data analysis.

    Science.gov (United States)

    Qu, Long; Nettleton, Dan; Dekkers, Jack C M

    2012-12-01

    Given a large number of t-statistics, we consider the problem of approximating the distribution of noncentrality parameters (NCPs) by a continuous density. This problem is closely related to the control of false discovery rates (FDR) in massive hypothesis testing applications, e.g., microarray gene expression analysis. Our methodology is similar to, but improves upon, the existing approach by Ruppert, Nettleton, and Hwang (2007, Biometrics, 63, 483-495). We provide parametric, nonparametric, and semiparametric estimators for the distribution of NCPs, as well as estimates of the FDR and local FDR. In the parametric situation, we assume that the NCPs follow a distribution that leads to an analytically available marginal distribution for the test statistics. In the nonparametric situation, we use convex combinations of basis density functions to estimate the density of the NCPs. A sequential quadratic programming procedure is developed to maximize the penalized likelihood. The smoothing parameter is selected with the approximate network information criterion. A semiparametric estimator is also developed to combine both parametric and nonparametric fits. Simulations show that, under a variety of situations, our density estimates are closer to the underlying truth and our FDR estimates are improved compared with alternative methods. Data-based simulations and the analyses of two microarray datasets are used to evaluate the performance in realistic situations. © 2012, The International Biometric Society.
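For readers unfamiliar with FDR control, the classic Benjamini–Hochberg step-up procedure is the standard baseline against which NCP-density approaches like the one in this record are refinements. The sketch below implements only that textbook baseline on made-up p-values; it is not the parametric/semiparametric method of the abstract.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Classic BH step-up procedure: returns a boolean rejection mask.

    Baseline FDR control only; the NCP-density estimators described in
    the abstract are considerably more involved."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = p.size
    thresh = alpha * (np.arange(1, m + 1) / m)   # alpha * i / m for sorted p_(i)
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])         # largest i with p_(i) <= alpha*i/m
        reject[order[: k + 1]] = True            # reject all smaller p-values too
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.2, 0.5, 0.9, 0.99]
print(benjamini_hochberg(pvals, alpha=0.05))     # rejects the two smallest p-values here
```

In microarray settings with thousands of tests, such a mask is typically paired with q-value or local-FDR estimates of the kind the abstract improves upon.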

  18. Carbon Dioxide: Surprising Effects on Decision Making and Neurocognitive Performance

    Science.gov (United States)

    James, John T.

    2013-01-01

    The occupants of modern submarines and the International Space Station (ISS) have much in common as far as their air quality is concerned. Air is polluted by materials offgassing, use of utility compounds, leaks of systems chemicals, and anthropogenic sources. The primary anthropogenic compound of concern to submariners and astronauts has been carbon dioxide (CO2). NASA and the US Navy rely on the National Research Council Committee on Toxicology (NRC-COT) to help formulate exposure levels to CO2 that are thought to be safe for exposures of 3-6 months. NASA calls its limits Spacecraft Maximum Allowable Concentrations (SMACs). Years of experience aboard the ISS and a recent publication on deficits in decision making in ground-based subjects exposed briefly to 0.25% CO2 suggest that exposure levels that have been presumed acceptable to preserve health and performance need to be reevaluated. The current CO2 exposure limits for 3-6 months set by NASA and the UK Navy are 0.7%, and the limit for US submariners is 0.5%, although the NRC-COT recommended a 90-day level of 0.8% as safe a few years ago. NASA has set a 1000-day SMAC at 0.5% for exploration-class missions. Anecdotal experience with ISS operations approaching the current 180-day SMAC of 0.7% suggests that this limit is too high. Temporarily, NASA has limited exposures to 0.5% until further peer-reviewed data become available. In the meantime, a study published last year in the journal Environmental Health Perspectives (Satish U, et al. 2012) demonstrated that complex decision-making performance is somewhat affected at 0.1% CO2 and becomes "dysfunctional" for at least half of the 9 indices of performance at concentrations approaching 0.25% CO2. The investigators used the Strategic Management Simulation (SMS) method of testing for decision-making ability, and the results were so surprising that the investigators declared that their findings need to be independently confirmed. NASA has responded to the

  19. Are seismic hazard assessment errors and earthquake surprises unavoidable?

    Science.gov (United States)

    Kossobokov, Vladimir

    2013-04-01

    Why do earthquake occurrences bring us so many surprises? The answer seems evident if we review the relationships that are commonly used to assess seismic hazard. The time-span of physically reliable seismic history is still only a small portion of a rupture recurrence cycle at an earthquake-prone site, which makes any reliable probabilistic statement about narrowly localized seismic hazard premature. Moreover, the seismic evidence accumulated to date demonstrates clearly that most of the empirical relations commonly accepted in the early history of instrumental seismology prove erroneous when tested for statistical significance. Seismic events, including mega-earthquakes, cluster, displaying behaviors that are far from independent or periodic. Their distribution in space is possibly fractal and definitely far from uniform, even in a single segment of a fault zone. Such a situation contradicts the generally accepted assumptions used for analytically tractable models or computer simulations, and it complicates the design of reliable methodologies for realistic earthquake hazard assessment, as well as the search for and definition of precursory behaviors to be used for forecast/prediction purposes. As a result, the conclusions drawn from such simulations and analyses can mislead to scientifically groundless applications, which is unwise and extremely dangerous in assessing expected societal risks and losses. For example, a systematic comparison of the GSHAP peak ground acceleration estimates with those related to actual strong earthquakes unfortunately discloses the gross inadequacy of this "probabilistic" product, which appears unacceptable for any kind of responsible seismic risk evaluation and knowledgeable disaster prevention. The self-evident shortcomings and failures of GSHAP call on all earthquake scientists and engineers for an urgent revision of the global seismic hazard maps from first principles, including the background methodologies involved, such that there becomes: (a

  20. Segmentation, Diarization and Speech Transcription: Surprise Data Unraveled

    NARCIS (Netherlands)

    Huijbregts, M.A.H.

    2008-01-01

    In this thesis, research on large vocabulary continuous speech recognition for unknown audio conditions is presented. For automatic speech recognition systems based on statistical methods, it is important that the conditions of the audio used for training the statistical models match the conditions

  1. Identification of rare recurrent copy number variants in high-risk autism families and their prevalence in a large ASD population.

    Directory of Open Access Journals (Sweden)

    Nori Matsunami

    Full Text Available Structural variation is thought to play a major etiological role in the development of autism spectrum disorders (ASDs), and numerous studies documenting the relevance of copy number variants (CNVs) in ASD have been published since 2006. To determine whether large ASD families harbor high-impact CNVs that may have broader impact in the general ASD population, we used the Affymetrix genome-wide human SNP array 6.0 to identify 153 putative autism-specific CNVs present in 55 individuals with ASD from 9 multiplex ASD pedigrees. To evaluate the actual prevalence of these CNVs, as well as of 185 CNVs reportedly associated with ASD in published studies, many of which are insufficiently powered, we designed a custom Illumina array and used it to interrogate these CNVs in 3,000 ASD cases and 6,000 controls. Additional single nucleotide variant (SNV) probes on the array identified 25 CNVs that we had not detected in our family studies at the standard SNP array resolution. After molecular validation, our results demonstrated that 15 CNVs identified in high-risk ASD families were also found in two or more ASD cases with odds ratios greater than 2.0, strengthening their support as ASD risk variants. In addition, of the 25 CNVs identified using SNV probes on our custom array, 9 also had odds ratios greater than 2.0, suggesting that these CNVs are also ASD risk variants. Eighteen of the validated CNVs have not been reported previously in individuals with ASD, and three have been observed only once. Finally, we confirmed the association of 31 of the 185 published ASD-associated CNVs in our dataset with odds ratios greater than 2.0, suggesting they may be of clinical relevance in the evaluation of children with ASDs. Taken together, these data provide strong support for the existence and application of high-impact CNVs in the clinical genetic evaluation of children with ASD.
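The odds-ratio threshold used in this record can be illustrated with a minimal calculation. The sketch below computes an odds ratio and a Woolf (log-scale) 95% confidence interval from a 2×2 case/control table in a 3,000-case/6,000-control design; the carrier counts are invented for illustration and are not from the study.

```python
import math

def odds_ratio_ci(case_carriers, case_total, ctrl_carriers, ctrl_total, z=1.96):
    """Odds ratio and Woolf (log-scale) 95% CI from case/control carrier counts."""
    a = case_carriers                    # cases carrying the CNV
    b = case_total - case_carriers       # cases without it
    c = ctrl_carriers                    # controls carrying the CNV
    d = ctrl_total - ctrl_carriers       # controls without it
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical CNV carried by 30 of 3,000 cases vs. 12 of 6,000 controls.
or_, lo, hi = odds_ratio_ci(30, 3000, 12, 6000)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 5.04 2.58 9.86
```

With these invented counts the lower confidence bound exceeds 2.0, so the variant would clear the OR > 2.0 screen used above even after accounting for sampling uncertainty.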

  2. Mitigating Surprise Through Enhanced Operational Design: Civilian Conceptual Planning Models

    Science.gov (United States)

    2007-01-01

    political, historical, cultural, and economic contexts. If we are going to fight among the people, we must understand them.1 When evaluating US military...represents both the Christian and Yoruba minorities as well as the military, continues to compete against rival elites representing disparate elements...undeniably have to learn a new culture, the physical battlespace would not be alien since MS-13 maintains a large presence in areas that are home to

  3. A Neural Mechanism for Surprise-related Interruptions of Visuospatial Working Memory.

    Science.gov (United States)

    Wessel, Jan R

    2018-01-01

    Surprising perceptual events recruit a fronto-basal ganglia mechanism for inhibition, which suppresses motor activity following surprise. A recent study found that this inhibitory mechanism also disrupts the maintenance of verbal working memory (WM) after surprising tones. However, it is unclear whether this same mechanism also relates to surprise-related interruptions of non-verbal WM. We tested this hypothesis using a change-detection task, in which surprising tones impaired visuospatial WM. Participants also performed a stop-signal task (SST). We used independent component analysis and single-trial scalp-electroencephalogram to test whether the same inhibitory mechanism that reflects motor inhibition in the SST relates to surprise-related visuospatial WM decrements, as was the case for verbal WM. As expected, surprising tones elicited activity of the inhibitory mechanism, and this activity correlated strongly with the trial-by-trial level of surprise. However, unlike for verbal WM, the activity of this mechanism was unrelated to visuospatial WM accuracy. Instead, inhibition-independent activity that immediately succeeded the inhibitory mechanism was increased when visuospatial WM was disrupted. This shows that surprise-related interruptions of visuospatial WM are not effected by the same inhibitory mechanism that interrupts verbal WM, and instead provides evidence for a 2-stage model of distraction. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. On the calculation of line strengths, oscillator strengths and lifetimes for very large principal quantum numbers in hydrogenic atoms and ions by the McLean–Watson formula

    International Nuclear Information System (INIS)

    Hey, J D

    2014-01-01

    As a sequel to an earlier study (Hey 2009 J. Phys. B: At. Mol. Opt. Phys. 42 125701), we consider further the application of the line strength formula derived by Watson (2006 J. Phys. B: At. Mol. Opt. Phys. 39 L291) to transitions arising from states of very high principal quantum number in hydrogenic atoms and ions (Rydberg–Rydberg transitions, n > 1000). It is shown how apparent difficulties associated with the use of recurrence relations, derived (Hey 2006 J. Phys. B: At. Mol. Opt. Phys. 39 2641) by the ladder operator technique of Infeld and Hull (1951 Rev. Mod. Phys. 23 21), may be eliminated by a very simple numerical device, whereby this method may readily be applied up to n ≈ 10 000. Beyond this range, programming of the method may entail greater care and complexity. The use of the numerically efficient McLean–Watson formula for such cases is again illustrated by the determination of radiative lifetimes and comparison of present results with those from an asymptotic formula. The influence of omitting or including fine structure is examined by comparison with calculations based on the standard Condon–Shortley line strength formula. Interest in this work on the radial matrix elements for large n and n′ is related to measurements of radio recombination lines from tenuous space plasmas, e.g. Stepkin et al (2007 Mon. Not. R. Astron. Soc. 374 852), Bell et al (2011 Astrophys. Space Sci. 333 377), to the calculation of electron impact broadening parameters for such spectra (Watson 2006 J. Phys. B: At. Mol. Opt. Phys. 39 1889) and comparison with other theoretical methods (Peach 2014 Adv. Space Res. in press), to the modelling of physical processes in H II regions (Roshi et al 2012 Astrophys. J. 749 49), and to the evaluation of bound–bound transitions from states of high n during primordial cosmological recombination (Grin and Hirata 2010 Phys. Rev. D 81 083005, Ali-Haïmoud and Hirata 2010 Phys. Rev. D 82 063521

  5. The Surprising History of Claims for Life on the Sun

    Science.gov (United States)

    Crowe, Michael J.

    2011-11-01

    Because astronomers are now convinced that it is impossible for life, especially intelligent life, to exist on the Sun and stars, it might be assumed that astronomers have always held this view. This paper shows that throughout most of the history of astronomy, some intellectuals, including a number of well-known astronomers, have advocated the existence of intelligent life on our Sun and thereby on stars. Among the more prominent figures discussed are Nicolas of Cusa, Giordano Bruno, William Whiston, Johann Bode, Roger Boscovich, William Herschel, Auguste Comte, Carl Gauss, Thomas Dick, John Herschel, and François Arago. One aim in preparing this paper is to show differences between the astronomy of the past and that of the present.

  6. Latin America: how a region surprised the experts.

    Science.gov (United States)

    De Sherbinin, A

    1993-02-01

    In 1960-1970, family planning specialists and demographers worried that poverty, limited education, Latin machismo, and strong catholic ideals would obstruct family planning efforts to reduce high fertility in Latin America. It had the highest annual population growth rate in the world (2.8%), which would increase the population 2-fold in 25 years. Yet, the UN's 1992 population projection for Latin America and the Caribbean in the year 2000 was about 20% lower than its 1963 projection (just over 500 vs. 638 million). Since life expectancy increased simultaneously from 57 to 68 years, this reduced projection was caused directly by a large decline in fertility from 5.9 to 3. A regression analysis of 11 Latin American and Caribbean countries revealed that differences in the contraceptive prevalence rates accounted for 90% of the variation in the total fertility rate between countries. Thus, contraception played a key role in the fertility decline. The second most significant determinant of fertility decline was an increase in the average age at first marriage from about 20 to 23 years. Induced abortion and breast feeding did not contribute significantly to fertility decline. The major socioeconomic factors responsible for the decline included economic development and urbanization, resulting in improvements in health care, reduced infant and child mortality, and increases in female literacy, education, and labor force participation. Public and private family planning programs also contributed significantly to the decline. They expanded from cities to remote rural areas, thereby increasing access to contraception. By the early 1990s, Brazil, Mexico, and Colombia had among the lowest levels of unmet need (13-24%) in developing countries. Other key factors of fertility decline were political commitment, strong communication efforts, and stress on quality services. Latin America provides hope to other regions where religion and culture promote a large family size.

  7. Automated Atmospheric Composition Dataset Level Metadata Discovery. Difficulties and Surprises

    Science.gov (United States)

    Strub, R. F.; Falke, S. R.; Kempler, S.; Fialkowski, E.; Goussev, O.; Lynnes, C.

    2015-12-01

    The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on Essential Climate Variables identified by the Global Climate Observing System: CH4, CO, CO2, NO2, O3, SO2, and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data related to atmospheric composition science and applications. We harvested the GCMD, CWIC, and GEOSS metadata catalogs using machine-to-machine technologies (OpenSearch, Web Services). We also manually investigated the plethora of CEOS data provider portals and other catalogs where that data might be aggregated. This poster reports our experience of the excellence, variety, and challenges we encountered. Conclusions: 1. The significant benefits that the major catalogs provide are their machine-to-machine tools, like OpenSearch and Web Services, rather than any GUI usability improvements, owing to the large amount of data in their catalogs. 2. There is a trend at the large catalogs towards simulating small data provider portals through advanced services. 3. Populating metadata catalogs using ISO 19115 is too complex for users to do in a consistent way, difficult to parse visually or with XML libraries, and too complex for Java XML binders like Castor. 4. The ability to search for IDs first and then for data (GCMD and ECHO) is better for machine-to-machine operations than the timeouts experienced when returning the entire metadata entry at once. 5. Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication (this is currently being addressed). 6. Most (if not

  8. SVA retrotransposon insertion-associated deletion represents a novel mutational mechanism underlying large genomic copy number changes with non-recurrent breakpoints

    NARCIS (Netherlands)

    J. Vogt (Julia); K. Bengesser (Kathrin); K.B.M. Claes (Kathleen B.M.); K. Wimmer (Katharina); V.-F. Mautner (Victor-Felix); R. van Minkelen (Rick); E. Legius (Eric); H. Brems (Hilde); M. Upadhyaya (Meena); J. Högel (Josef); C. Lazaro (Conxi); T. Rosenbaum (Thorsten); S. Bammert (Simone); L. Messiaen (Ludwine); D.N. Cooper (David); H. Kehrer-Sawatzki (Hildegard)

    2014-01-01

    textabstractBackground: Genomic disorders are caused by copy number changes that may exhibit recurrent breakpoints processed by nonallelic homologous recombination. However, region-specific disease-associated copy number changes have also been observed which exhibit non-recurrent breakpoints. The

  9. Orbital melanocytoma: When a tumor becomes a relieving surprise

    Directory of Open Access Journals (Sweden)

    Haytham E. Nasr

    2018-06-01

    Purpose: Melanocytomas are rare pigmented tumors that arise from melanocytes and have been reported in the central nervous system. Orbital melanocytomas, also known as blue nevi, are rarely reported, and the co-occurrence of choroidal melanoma and orbital melanocytoma has not previously been described. Observations: This is a case of orbital melanocytoma in a 34-year-old female who presented with left proptosis and ecchymosis. She had had the right eye enucleated 6 years earlier to treat a large choroidal melanoma. Orbital metastasis was suspected. After orbital imaging and systemic evaluation, an incisional biopsy was planned, yet the mass could be totally excised, and it turned out to be a melanocytoma. The condition was not associated with nevus of Ota, and the patient is not known to have any predisposing condition for melanocytic lesions. Conclusion and importance: Melanocytoma and malignant melanoma share the same cell of origin. The benign course, the well-differentiated cells, the absence of anaplasia, and the positive reaction to Human Melanoma Black-45 (HMB-45) and S-100 proteins established the diagnosis of the former. Such a diagnosis was a relief for this one-eyed patient. Keywords: Orbit, Melanocytoma, Choroidal melanoma, HMB-45, S-100

  10. Ampullary Mixed Adenoneuroendocrine Carcinoma: Surprise Histology, Familiar Management.

    Science.gov (United States)

    Mahansaria, Shyam Sunder; Agrawal, Nikhil; Arora, Asit; Bihari, Chhagan; Appukuttan, Murali; Chattopadhyay, Tushar Kanti

    2017-10-01

    Mixed adenoneuroendocrine carcinoma (MANEC) was defined by the World Health Organization in 2010. These are rare tumors, and MANECs of the ampullary region are even rarer; only 19 cases have been reported in the literature. We present 3 cases: the largest series to date, including the second reported case of an amphicrine tumor and the first associated with chronic pancreatitis. We retrospectively reviewed 3 patients diagnosed with ampullary MANEC. All 3 patients were diagnosed preoperatively as neuroendocrine carcinoma and underwent margin-negative pancreaticoduodenectomy. Histopathology revealed MANECs of small cell, mixed type in 2 patients and large cell, amphicrine type in 1 patient. The neuroendocrine component was grade 3 in all; the tumor was T3 in 2 patients and T2 in 1, and all had nodal metastases. Two patients received adjuvant chemotherapy, and 2 had recurrence at 13 and 16 months. The median survival was 15 months. Ampullary MANECs are rare tumors, diagnosed on histopathologic examination of the resected specimen. Their clinical presentation, management, and prognosis in the literature are similar to those of ampullary adenocarcinoma.

  11. Rayleigh- and Prandtl-number dependence of the large-scale flow-structure in weakly-rotating turbulent thermal convection

    Science.gov (United States)

    Weiss, Stephan; Wei, Ping; Ahlers, Guenter

    2015-11-01

    Turbulent thermal convection under rotation shows a remarkable variety of different flow states. The Nusselt number (Nu) at slow rotation rates (expressed as the dimensionless inverse Rossby number 1/Ro), for example, is not a monotonic function of 1/Ro. Different 1/Ro ranges can be observed with different slopes ∂Nu/∂(1/Ro). Some of these ranges are connected by sharp transitions where ∂Nu/∂(1/Ro) changes discontinuously. We investigate different regimes in cylindrical samples of aspect ratio Γ = 1 by measuring temperatures at the sidewall of the sample for various Prandtl numbers in the range 3 … Deutsche Forschungsgemeinschaft.

  12. Distinct medial temporal networks encode surprise during motivation by reward versus punishment

    Science.gov (United States)

    Murty, Vishnu P.; LaBar, Kevin S.; Adcock, R. Alison

    2016-01-01

    Adaptive motivated behavior requires predictive internal representations of the environment, and surprising events are indications for encoding new representations of the environment. The medial temporal lobe memory system, including the hippocampus and surrounding cortex, encodes surprising events and is influenced by motivational state. Because behavior reflects the goals of an individual, we investigated whether motivational valence (i.e., pursuing rewards versus avoiding punishments) also impacts neural and mnemonic encoding of surprising events. During functional magnetic resonance imaging (fMRI), participants encountered perceptually unexpected events either during the pursuit of rewards or avoidance of punishments. Despite similar levels of motivation across groups, reward and punishment facilitated the processing of surprising events in different medial temporal lobe regions. Whereas during reward motivation, perceptual surprises enhanced activation in the hippocampus, during punishment motivation surprises instead enhanced activation in parahippocampal cortex. Further, we found that reward motivation facilitated hippocampal coupling with ventromedial PFC, whereas punishment motivation facilitated parahippocampal cortical coupling with orbitofrontal cortex. Behaviorally, post-scan testing revealed that reward, but not punishment, motivation resulted in greater memory selectivity for surprising events encountered during goal pursuit. Together these findings demonstrate that neuromodulatory systems engaged by anticipation of reward and punishment target separate components of the medial temporal lobe, modulating medial temporal lobe sensitivity and connectivity. Thus, reward and punishment motivation yield distinct neural contexts for learning, with distinct consequences for how surprises are incorporated into predictive mnemonic models of the environment. PMID:26854903

  14. A Numeric Scorecard Assessing the Mental Health Preparedness for Large-Scale Crises at College and University Campuses: A Delphi Study

    Science.gov (United States)

    Burgin, Rick A.

    2012-01-01

    Large-scale crises continue to surprise, overwhelm, and shatter college and university campuses. While the devastation to physical plants and persons is often evident and is addressed with crisis management plans, the number of emotional casualties left in the wake of these large-scale crises may not be apparent and are often not addressed with…

  15. SURPRISINGLY WEAK MAGNETISM ON YOUNG ACCRETING BROWN DWARFS

    International Nuclear Information System (INIS)

    Reiners, A.; Basri, G.; Christensen, U. R.

    2009-01-01

    We have measured the surface magnetic flux on four accreting young brown dwarfs and one nonaccreting young very low mass (VLM) star utilizing high-resolution spectra of absorption lines of the FeH molecule. A magnetic field of 1-2 kG had been proposed for one of the brown dwarfs, Two Micron All Sky Survey (2MASS) J1207334-393254, because of its similarities to higher mass T Tauri stars as manifested in accretion and the presence of a jet. We do not find clear evidence for a kilogauss field in any of our young brown dwarfs but do find a 2 kG field on the young VLM star. Our 3σ upper limit for the magnetic flux in 2MASS J1207334-393254 just reaches 1 kG. We estimate the magnetic field required for accretion in young brown dwarfs given the observed rotations, and find that fields of only a few hundred gauss are sufficient for magnetospheric accretion. This predicted value is less than our observed upper limit. We conclude that magnetic fields in young brown dwarfs are a factor of 5 or more lower than in young stars of about one solar mass, and in older stars with spectral types similar to our young brown dwarfs. It is interesting that, during the first few million years, the fields scale down with mass in line with what is needed for magnetospheric accretion, yet no such scaling is observed at later ages within the same effective temperature range. This scaling is opposite to the trend in rotation, with shorter rotation periods for very young accreting brown dwarfs compared with accreting solar-mass objects (and very low Rossby numbers in all cases). We speculate that in young objects a deeper intrinsic connection may exist between magnetospheric accretion and magnetic field strength, or that magnetic field generation in brown dwarfs may be less efficient than in stars. Neither of these currently has an easy physical explanation.

  16. Development of special machines for production of large number of superconducting coils for the spool correctors for the main dipole of LHC

    International Nuclear Information System (INIS)

    Puntambekar, A.M.; Karmarkar, M.G.

    2003-01-01

    Superconducting (Sc) spool correctors of different types, namely sextupole (MCS), decapole (MCD), and octupole (MCO), are incorporated in each of the main dipoles of the Large Hadron Collider (LHC). In all, 2464 MCS and 1232 MCDO magnets are required to equip the 1232 dipoles of LHC. The coils, wound from thin rectangular-section Sc wire, are the heart of the magnet assembly, and magnet performance in terms of field quality and cold quench training depends largely on their precise and robust construction. Under the DAE-CERN collaboration, CAT was entrusted with the responsibility of making these magnets for LHC. Starting with manual fixtures and soldered prototypes, more advanced special automatic coil-winding and ultrasonic welding (USW) systems for production of large numbers of coils and magnets were built at CAT. The paper briefly describes the various developments in this area. (author)

  17. Experimental observations of electron-backscatter effects from high-atomic-number anodes in large-aspect-ratio, electron-beam diodes

    Energy Technology Data Exchange (ETDEWEB)

    Cooperstein, G; Mosher, D; Stephanakis, S J; Weber, B V; Young, F C [Naval Research Laboratory, Washington, DC (United States); Swanekamp, S B [JAYCOR, Vienna, VA (United States)

    1997-12-31

    Backscattered electrons from anodes with high-atomic-number substrates cause early-time anode-plasma formation from the surface layer, leading to faster, more intense electron-beam pinching and lower diode impedance. A simple derivation of the Child-Langmuir current from a thin hollow cathode shows the same dependence on the diode aspect ratio as the critical current. Using this fact, it is shown that the diode voltage and current follow relativistic Child-Langmuir theory until the anode plasma is formed, and then follow the critical current after the beam pinches. With thin hollow cathodes, electron-beam pinching can be suppressed at low voltages (< 800 kV) even for high currents and high-atomic-number anodes. Electron-beam pinching can also be suppressed at high voltages for low-atomic-number anodes, as long as the electron current densities remain below the plasma turn-on threshold. (author). 8 figs., 2 refs.

  18. Large-scale studies of the HphI insulin gene variable-number-of-tandem-repeats polymorphism in relation to Type 2 diabetes mellitus and insulin release

    DEFF Research Database (Denmark)

    Hansen, S K; Gjesing, A P; Rasmussen, S K

    2004-01-01

    The class III allele of the variable-number-of-tandem-repeats polymorphism located 5' of the insulin gene (INS-VNTR) has been associated with Type 2 diabetes and altered birthweight. It has also been suggested, although inconsistently, that the class III allele plays a role in glucose-induced insulin release.

  19. Planning Alternative Organizational Frameworks For a Large Scale Educational Telecommunications System Served by Fixed/Broadcast Satellites. Memorandum Number 73/3.

    Science.gov (United States)

    Walkmeyer, John

    Considerations relating to the design of organizational structures for development and control of large scale educational telecommunications systems using satellites are explored. The first part of the document deals with four issues of system-wide concern. The first is user accessibility to the system, including proximity to entry points, ability…

  20. The number of extranodal sites assessed by PET/CT scan is a powerful predictor of CNS relapse for patients with diffuse large B-cell lymphoma

    DEFF Research Database (Denmark)

    El-Galaly, Tarec Christoffer; Villa, Diego; Michaelsen, Thomas Yssing

    2017-01-01

    Purpose: Development of secondary central nervous system involvement (SCNS) in patients with diffuse large B-cell lymphoma is associated with poor outcomes. The CNS International Prognostic Index (CNS-IPI) has been proposed for identifying patients at greatest risk, but the optimal model is unknown…

  1. Handling large numbers of observation units in three-way methods for the analysis of qualitative and quantitative two-way data

    NARCIS (Netherlands)

    Kiers, Henk A.L.; Marchetti, G.M.

    1994-01-01

    Recently, a number of methods have been proposed for the exploratory analysis of mixtures of qualitative and quantitative variables. In these methods for each variable an object by object similarity matrix is constructed, and these are consequently analyzed by means of three-way methods like

  2. A large increase of sour taste receptor cells in Skn-1-deficient mice does not alter the number of their sour taste signal-transmitting gustatory neurons.

    Science.gov (United States)

    Maeda, Naohiro; Narukawa, Masataka; Ishimaru, Yoshiro; Yamamoto, Kurumi; Misaka, Takumi; Abe, Keiko

    2017-05-01

    The connections between taste receptor cells (TRCs) and innervating gustatory neurons are formed in a mutually dependent manner during development. To investigate whether a change in the ratio of cell types that compose taste buds influences the number of innervating gustatory neurons, we analyzed the proportion of gustatory neurons that transmit sour taste signals in adult Skn-1a-/- mice, in which the number of sour TRCs is greatly increased. We generated polycystic kidney disease 1 like 3-wheat germ agglutinin (pkd1l3-WGA)/Skn-1a+/+ and pkd1l3-WGA/Skn-1a-/- mice by crossing Skn-1a-/- mice and pkd1l3-WGA transgenic mice, in which neural pathways of sour taste signals can be visualized. The number of WGA-positive cells in the circumvallate papillae is 3-fold higher in taste buds of pkd1l3-WGA/Skn-1a-/- mice relative to pkd1l3-WGA/Skn-1a+/+ mice. Intriguingly, the ratio of WGA-positive neurons to P2X2-expressing gustatory neurons in nodose/petrosal ganglia was similar between pkd1l3-WGA/Skn-1a+/+ and pkd1l3-WGA/Skn-1a-/- mice. In conclusion, an alteration in the ratio of cell types that compose taste buds does not influence the number of gustatory neurons that transmit sour taste signals. Copyright © 2017. Published by Elsevier B.V.

  3. A Contrast-Based Computational Model of Surprise and Its Applications.

    Science.gov (United States)

    Macedo, Luis; Cardoso, Amílcar

    2017-11-19

    We review our work on a contrast-based computational model of surprise and its applications. The review is contextualized within related research from psychology, philosophy, and particularly artificial intelligence. Influenced by psychological theories of surprise, the model assumes that surprise-eliciting events initiate a series of cognitive processes that begin with the appraisal of the event as unexpected, continue with the interruption of ongoing activity and the focusing of attention on the unexpected event, and culminate in the analysis and evaluation of the event and the revision of beliefs. It is assumed that the intensity of surprise elicited by an event is a nonlinear function of the difference or contrast between the subjective probability of the event and that of the most probable alternative event (which is usually the expected event); and that the agent's behavior is partly controlled by actual and anticipated surprise. We describe applications of artificial agents that incorporate the proposed surprise model in three domains: the exploration of unknown environments, creativity, and intelligent transportation systems. These applications demonstrate the importance of surprise for decision making, active learning, creative reasoning, and selective attention. Copyright © 2017 Cognitive Science Society, Inc.
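    The contrast idea described in this abstract can be sketched in a few lines. The exact nonlinearity belongs to the model; the log-based form below is one published variant of it, used here as an assumption for illustration:

    ```python
    import math

    def surprise(p_event, probs):
        """Contrast-based surprise: a log-scaled difference (contrast) between
        the probability of the most probable alternative event and that of the
        observed event. Zero when the expected event itself occurs."""
        p_max = max(probs)
        return math.log2(1 + p_max - p_event)

    beliefs = [0.05, 0.9, 0.05]          # agent's probability distribution over events
    print(surprise(0.05, beliefs))        # improbable event: high surprise
    print(surprise(0.9, beliefs))         # the expected event: 0.0
    ```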

  4. A Statistical Analysis of the Relationship between Harmonic Surprise and Preference in Popular Music.

    Science.gov (United States)

    Miles, Scott A; Rosen, David S; Grzywacz, Norberto M

    2017-01-01

    Studies have shown that some musical pieces may preferentially activate reward centers in the brain. Less is known, however, about the structural aspects of music that are associated with this activation. Based on the music cognition literature, we propose two hypotheses for why some musical pieces are preferred over others. The first, the Absolute-Surprise Hypothesis, states that unexpected events in music directly lead to pleasure. The second, the Contrastive-Surprise Hypothesis, proposes that the juxtaposition of unexpected events and subsequent expected events leads to an overall rewarding response. We tested these hypotheses within the framework of information theory, using the measure of "surprise." This information-theoretic variable mathematically describes how improbable an event is given a known distribution. We performed a statistical investigation of surprise in the harmonic structure of songs within a representative corpus of Western popular music, namely, the McGill Billboard Project corpus. We found that chords of songs in the top quartile of the Billboard chart showed greater average surprise than those in the bottom quartile. We also found that the different sections within top-quartile songs varied more in their average surprise than the sections within bottom-quartile songs. The results of this study are consistent with both the Absolute- and Contrastive-Surprise Hypotheses. Although these hypotheses seem contradictory to one another, we cannot yet discard the possibility that both absolute and contrastive types of surprise play roles in the enjoyment of popular music. We call this possibility the Hybrid-Surprise Hypothesis. The results of this statistical investigation have implications for both music cognition and the human neural mechanisms of esthetic judgments.
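    The information-theoretic measure used in this study, surprisal -log2 p(x), can be illustrated on a toy chord corpus (the corpus and chord names below are invented for the sketch; the actual analysis used the McGill Billboard Project corpus):

    ```python
    import math
    from collections import Counter

    def average_surprise(sequence, corpus):
        """Mean surprisal, -log2 p(x), of each event under the empirical
        distribution estimated from the corpus."""
        counts = Counter(corpus)
        total = sum(counts.values())
        return sum(-math.log2(counts[x] / total) for x in sequence) / len(sequence)

    corpus = ["C", "G", "Am", "F"] * 10 + ["Bb"]   # "Bb" is rare in this toy corpus
    print(average_surprise(["C", "G"], corpus))    # common chords: low surprise
    print(average_surprise(["Bb"], corpus))        # rare chord: high surprise
    ```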

  5. Forty-Five-Year Mortality Rate as a Function of the Number and Type of Psychiatric Diagnoses Found in a Large Danish Birth Cohort

    DEFF Research Database (Denmark)

    Madarasz, Wendy; Manzardo, Ann; Mortensen, Erik Lykke

    2012-01-01

    Objective: Psychiatric comorbidities are common among psychiatric patients and typically associated with poorer clinical prognoses. Subjects of a large Danish birth cohort were used to study the relation between mortality and co-occurring psychiatric diagnoses. Method: We searched the Danish Central Psychiatric Research Registry for 8109 birth cohort members aged 45 years. Lifetime psychiatric diagnoses (International Classification of Diseases, Revision 10, group F codes, Mental and Behavioural Disorders, and one Z code) for identified subjects were organized into 14 mutually exclusive…

  6. Load Frequency Control by use of a Number of Both Heat Pump Water Heaters and Electric Vehicles in Power System with a Large Integration of Renewable Energy Sources

    Science.gov (United States)

    Masuta, Taisuke; Shimizu, Koichiro; Yokoyama, Akihiko

    In Japan, from the viewpoints of global warming countermeasures and energy security, it is expected that a smart grid will be established: a power system into which a large amount of generation from renewable energy sources, such as wind power and photovoltaics, can be integrated. Measures for power system stability and reliability are necessary because a large integration of these renewable energy sources causes problems in power systems, e.g. frequency fluctuation and distribution voltage rise, and the Battery Energy Storage System (BESS) is one effective solution to these problems. Due to the high cost of BESS, our research group has studied the application of controllable loads such as Heat Pump Water Heaters (HPWH) and Electric Vehicles (EV) to power system control, in order to reduce the required BESS capacity. This paper proposes a new coordinated Load Frequency Control (LFC) method for the conventional power plants, the BESS, the HPWHs, and the EVs. The performance of the proposed LFC method is evaluated by numerical simulations conducted on a power system model with a large integration of wind power and photovoltaic generation.

  7. Surprises from a Deep ASCA Spectrum of the Broad Absorption Line Quasar PHL 5200

    Science.gov (United States)

    Mathur, Smita; Matt, G.; Green, P. J.; Elvis, M.; Singh, K. P.

    2002-01-01

    We present a deep (approx. 85 ks) ASCA observation of the prototype broad absorption line quasar (BALQSO) PHL 5200. This is the best X-ray spectrum of a BALQSO yet. We find the following: (1) The source is not intrinsically X-ray weak. (2) The line-of-sight absorption is very strong, with N_H = 5 × 10^23 cm^-2. (3) The absorber does not cover the source completely; the covering fraction is approx. 90%. This is consistent with the large optical polarization observed in this source, implying multiple lines of sight. The most surprising result of this observation is that (4) the spectrum of this BALQSO is not exactly similar to other radio-quiet quasars. The hard X-ray spectrum of PHL 5200 is steep, with the power-law spectral index α ≈ 1.5. This is similar to the steepest hard X-ray slopes observed so far. At low redshifts, such steep slopes are observed in narrow-line Seyfert 1 (NLS1) galaxies, believed to be accreting at a high Eddington rate. This observation strengthens the analogy between BALQSOs and NLS1 galaxies and supports the hypothesis that BALQSOs represent an early evolutionary state of quasars. It is well accepted that the orientation to the line of sight determines the appearance of a quasar; age seems to play a significant role as well.

  8. Summit surprises.

    Science.gov (United States)

    Myers, N

    1994-01-01

    A New Delhi Population Summit, organized by the Royal Society, the US National Academy of Sciences, the Royal Swedish Academy of Sciences, and the Indian National Science Academy, was convened with 120 scientists (only 10% of them women) from 50 countries, about 12 disciplines, and 43 national scientific academies. Despite the common assumption that scientists never agree, a 3000-word statement was signed by 50 prominent national figures and supported by 25 professional papers on diverse subjects. The statement proclaimed that a stable world population and "prodigious planning efforts" are required for dealing with global social, economic, and environmental problems; the target should be zero population growth by the next generation. The statement, although containing many uncompromising assertions, was not as strong as a statement by the Royal Society and the US National Academy of Sciences released the previous year: that, in the future, science and technology may not be able to prevent "irreversible degradation of the environment and continued poverty," and that the capacity to sustain life on the planet may be permanently jeopardized. The Delhi statement was backed by professional papers highlighting several important issues. Dr Mahmoud Fathalla of the Rockefeller Foundation claimed that the 500,000 annual maternal deaths worldwide, of which perhaps 33% are due to "coathanger" abortions, receive far less attention than a one-day political event with 500 deaths would. Although women are biologically endowed with a greater survival advantage, associated with their reproductive capacity, females are socially disadvantaged and relegated to low status. Females receive poorer nutrition and overall health care, female infanticide persists, and female fetuses are increasingly aborted in China, India, and other countries. The sex ratio in developed countries is 95-97 males to every 100 females, but in developing Asian countries the ratio is 105 males to 100 females; there are reports of 60-100 million missing females. The human species 12,000 years ago numbered 6 million, with a life expectancy of 20 years and a doubling time of 8000 years; high birth rates were then important for preservation of the species. Pro-fertility attitudes are still prevalent today, and insufficient funds go to contraceptive research.

  9. A robust variant of block Jacobi-Davidson for extracting a large number of eigenpairs: Application to grid-based real-space density functional theory

    Science.gov (United States)

    Lee, M.; Leiter, K.; Eisner, C.; Breuer, A.; Wang, X.

    2017-09-01

    In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort against most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev filter subspace (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× reduced matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold) where the wanted spectrum is quite narrow compared to the full spectrum, we observe 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.

  10. Modelling of natural convection flows with large temperature differences: a benchmark problem for low Mach number solvers. Part 1: reference solutions

    International Nuclear Information System (INIS)

    Le Quere, P.; Weisman, C.; Paillere, H.; Vierendeels, J.; Dick, E.; Becker, R.; Braack, M.; Locke, J.

    2005-01-01

    Heat transfer by natural convection and conduction in enclosures occurs in numerous practical situations, including the cooling of nuclear reactors. For large temperature differences, the flow becomes compressible, with a strong coupling between the continuity, momentum and energy equations through the equation of state, and the fluid properties (viscosity, heat conductivity) also vary with temperature, making the Boussinesq approximation inappropriate and inaccurate. There are very few reference solutions in the literature on non-Boussinesq natural convection flows. We propose here a test case problem which extends the well-known De Vahl Davis differentially heated square cavity problem to the case of large temperature differences for which the Boussinesq approximation is no longer valid. The paper is split into two parts: in this first part, we propose as yet unpublished reference solutions for cases characterized by a non-dimensional temperature difference of 0.6, Ra = 10^6 (constant property and variable property cases) and Ra = 10^7 (variable property case). These reference solutions were produced after a first international workshop organized by CEA and LIMSI in January 2000, in which the above authors volunteered to produce accurate numerical solutions from which the present reference solutions could be established. (authors)

  11. Prediction of the number of 14 MeV neutrons elastically scattered from large samples of aluminium using the Monte Carlo simulation method

    International Nuclear Information System (INIS)

    Husin Wagiran; Wan Mohd Nasir Wan Kadir

    1997-01-01

    In neutron scattering processes, the effect of multiple scattering is to cause an effective increase in the measured cross-sections, due to the increased probability of neutron scattering interactions in the sample. Analysis of how the effective cross-section varies with thickness is very complicated, owing to complicated sample geometries and the variation of the scattering cross-section with energy. The Monte Carlo method is one of the possible methods for treating multiple scattering processes in an extended sample. In this method many approximations have to be made, and accurate data on microscopic cross-sections are needed at various angles. In the present work, a Monte Carlo simulation programme suitable for a small computer was developed. The programme was capable of predicting the number of neutrons scattered from aluminium samples of various thicknesses at all possible angles between 0° and 360° in 10° increments. In order to keep the programme simple and capable of being run on a microcomputer in reasonable time, the calculations were done in a two-dimensional coordinate system. The number of neutrons predicted by this model shows good agreement with previous experimental results.
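As a rough illustration of the multiple-scattering treatment described above, the following sketch tracks neutron histories through a slab in a two-dimensional geometry. The mean free path, slab thickness, and isotropic scattering law are hypothetical placeholders for this sketch, not values or physics from the paper:

```python
import math
import random

random.seed(42)

# Hypothetical constants (illustrative only, not the paper's data)
MFP_CM = 3.0        # mean free path in the sample
THICK_CM = 2.0      # slab thickness

def track_neutron():
    """Follow one neutron through the slab; return the number of
    scatters it undergoes before escaping (simplified isotropic model)."""
    x, theta, n_scat = 0.0, 0.0, 0      # enter normal to the face
    while 0.0 <= x <= THICK_CM:
        step = -MFP_CM * math.log(random.random())   # sample free flight
        x += step * math.cos(theta)
        if not 0.0 <= x <= THICK_CM:
            break                                    # escaped the slab
        theta = random.uniform(0.0, 2 * math.pi)     # isotropic scatter
        n_scat += 1
    return n_scat

counts = [track_neutron() for _ in range(10000)]
multiple_frac = sum(c >= 2 for c in counts) / len(counts)
```

The fraction of histories with two or more collisions estimates the multiple-scattering contribution that inflates the measured cross-section as the sample gets thicker.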

  12. Implementation of genomic recursions in single-step genomic best linear unbiased predictor for US Holsteins with a large number of genotyped animals.

    Science.gov (United States)

    Masuda, Y; Misztal, I; Tsuruta, S; Legarra, A; Aguilar, I; Lourenco, D A L; Fragomeni, B O; Lawlor, T J

    2016-03-01

    The objectives of this study were to develop and evaluate an efficient implementation in the computation of the inverse of genomic relationship matrix with the recursion algorithm, called the algorithm for proven and young (APY), in single-step genomic BLUP. We validated genomic predictions for young bulls with more than 500,000 genotyped animals in final score for US Holsteins. Phenotypic data included 11,626,576 final scores on 7,093,380 US Holstein cows, and genotypes were available for 569,404 animals. Daughter deviations for young bulls with no classified daughters in 2009, but at least 30 classified daughters in 2014 were computed using all the phenotypic data. Genomic predictions for the same bulls were calculated with single-step genomic BLUP using phenotypes up to 2009. We calculated the inverse of the genomic relationship matrix GAPY(-1) based on a direct inversion of genomic relationship matrix on a small subset of genotyped animals (core animals) and extended that information to noncore animals by recursion. We tested several sets of core animals including 9,406 bulls with at least 1 classified daughter, 9,406 bulls and 1,052 classified dams of bulls, 9,406 bulls and 7,422 classified cows, and random samples of 5,000 to 30,000 animals. Validation reliability was assessed by the coefficient of determination from regression of daughter deviation on genomic predictions for the predicted young bulls. The reliabilities were 0.39 with 5,000 randomly chosen core animals, 0.45 with the 9,406 bulls, and 7,422 cows as core animals, and 0.44 with the remaining sets. With phenotypes truncated in 2009 and the preconditioned conjugate gradient to solve mixed model equations, the number of rounds to convergence for core animals defined by bulls was 1,343; defined by bulls and cows, 2,066; and defined by 10,000 random animals, at most 1,629. With complete phenotype data, the number of rounds decreased to 858, 1,299, and at most 1,092, respectively. 
Setting up GAPY(-1
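The APY inverse described above can be written down compactly: invert G directly for the core animals, express each noncore animal by recursion on the core, and give each noncore animal an independent diagonal residual. A hedged numpy sketch of the standard APY block formula (the function name and the matrices used to exercise it are illustrative, not data from the study):

```python
import numpy as np

def apy_inverse(G, core_idx, noncore_idx):
    """Sketch of the APY inverse of a genomic relationship matrix G.
    Core animals are inverted directly; noncore animals are expressed
    by recursion on the core with independent diagonal residuals."""
    Gcc = G[np.ix_(core_idx, core_idx)]
    Gnc = G[np.ix_(noncore_idx, core_idx)]
    Gcc_inv = np.linalg.inv(Gcc)
    P = Gnc @ Gcc_inv                       # recursion coefficients
    # residual variances m_i = g_ii - g_ic Gcc^{-1} g_ci
    m = np.array([G[i, i] - Gnc[k] @ Gcc_inv @ Gnc[k]
                  for k, i in enumerate(noncore_idx)])
    Minv = np.diag(1.0 / m)
    n_c, n_n = len(core_idx), len(noncore_idx)
    Ginv = np.zeros((n_c + n_n, n_c + n_n))
    Ginv[:n_c, :n_c] = Gcc_inv + P.T @ Minv @ P
    Ginv[:n_c, n_c:] = -P.T @ Minv
    Ginv[n_c:, :n_c] = -Minv @ P
    Ginv[n_c:, n_c:] = Minv
    return Ginv
```

The practical point of the study is that only the (relatively small) core block is ever inverted densely, so the cost scales with the number of core animals rather than with all 569,404 genotypes.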

  13. Distinct medial temporal networks encode surprise during motivation by reward versus punishment

    OpenAIRE

    Murty, Vishnu P.; LaBar, Kevin S.; Adcock, R. Alison

    2016-01-01

    Adaptive motivated behavior requires predictive internal representations of the environment, and surprising events are indications for encoding new representations of the environment. The medial temporal lobe memory system, including the hippocampus and surrounding cortex, encodes surprising events and is influenced by motivational state. Because behavior reflects the goals of an individual, we investigated whether motivational valence (i.e., pursuing rewards versus avoiding punishments) also...

  14. A modification to linearized theory for prediction of pressure loadings on lifting surfaces at high supersonic Mach numbers and large angles of attack

    Science.gov (United States)

    Carlson, H. W.

    1979-01-01

    A new linearized-theory pressure-coefficient formulation was studied. The new formulation is intended to provide more accurate estimates of detailed pressure loadings for improved stability analysis and for analysis of critical structural design conditions. The approach is based on the use of oblique-shock and Prandtl-Meyer expansion relationships for accurate representation of the variation of pressures with surface slopes in two-dimensional flow and linearized-theory perturbation velocities for evaluation of local three-dimensional aerodynamic interference effects. The applicability and limitations of the modification to linearized theory are illustrated through comparisons with experimental pressure distributions for delta wings covering a Mach number range from 1.45 to 4.60 and angles of attack from 0 to 25 degrees.
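The Prandtl-Meyer expansion relation the method draws on has a closed form. A small sketch of the textbook function (the standard gas-dynamics formula for a calorically perfect gas, not code from the report):

```python
import math

def prandtl_meyer(M, gamma=1.4):
    """Prandtl-Meyer angle nu(M) in degrees for M >= 1."""
    g = (gamma + 1.0) / (gamma - 1.0)
    nu = (math.sqrt(g) * math.atan(math.sqrt((M * M - 1.0) / g))
          - math.atan(math.sqrt(M * M - 1.0)))
    return math.degrees(nu)
```

For air (gamma = 1.4), `prandtl_meyer(2.0)` gives about 26.38°, the classical tabulated value; nu is zero at Mach 1 by construction.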

  15. High Frequency Design Considerations for the Large Detector Number and Small Form Factor Dual Electron Spectrometer of the Fast Plasma Investigation on NASA's Magnetospheric Multiscale Mission

    Science.gov (United States)

    Kujawski, Joseph T.; Gliese, Ulrik B.; Cao, N. T.; Zeuch, M. A.; White, D.; Chornay, D. J.; Lobell, J. V.; Avanov, L. A.; Barrie, A. C.; Mariano, A. J.

    2015-01-01

    Each half of the Dual Electron Spectrometer (DES) of the Fast Plasma Investigation (FPI) on NASA's Magnetospheric MultiScale (MMS) mission utilizes a microchannel plate Chevron stack feeding 16 separate detection channels each with a dedicated anode and amplifier/discriminator chip. The desire to detect events on a single channel with a temporal spacing of 100 ns and a fixed dead-time drove our decision to use an amplifier/discriminator with a very fast (GHz class) front end. Since the inherent frequency response of each pulse in the output of the DES microchannel plate system also has frequency components above a GHz, this produced a number of design constraints not normally expected in electronic systems operating at peak speeds of 10 MHz. Additional constraints are imposed by the geometry of the instrument requiring all 16 channels along with each anode and amplifier/discriminator to be packaged in a relatively small space. We developed an electrical model for board level interactions between the detector channels to allow us to design a board topology which gave us the best detection sensitivity and lowest channel to channel crosstalk. The amplifier/discriminator output was designed to prevent the outputs from one channel from producing triggers on the inputs of other channels. A number of Radio Frequency design techniques were then applied to prevent signals from other subsystems (e.g. the high voltage power supply, command and data handling board, and Ultraviolet stimulation for the MCP) from generating false events. These techniques enabled us to operate the board at its highest sensitivity when operated in isolation and at very high sensitivity when placed into the overall system.

  16. Large Diversity of Porcine Yersinia enterocolitica 4/O:3 in Eight European Countries Assessed by Multiple-Locus Variable-Number Tandem-Repeat Analysis.

    Science.gov (United States)

    Alakurtti, Sini; Keto-Timonen, Riikka; Virtanen, Sonja; Martínez, Pilar Ortiz; Laukkanen-Ninios, Riikka; Korkeala, Hannu

    2016-06-01

    A total of 253 multiple-locus variable-number tandem-repeat analysis (MLVA) types among 634 isolates were discovered while studying the genetic diversity of porcine Yersinia enterocolitica 4/O:3 isolates from eight different European countries. Six variable-number tandem-repeat (VNTR) loci, V2A, V4, V5, V6, V7, and V9, were used to study the isolates from 82 farms in Belgium (n = 93, 7 farms), England (n = 41, 8 farms), Estonia (n = 106, 12 farms), Finland (n = 70, 13 farms), Italy (n = 111, 20 farms), Latvia (n = 66, 3 farms), Russia (n = 60, 10 farms), and Spain (n = 87, 9 farms). Cluster analysis revealed mainly country-specific clusters, and only one MLVA type, consisting of two isolates, was found in two countries: Russia and Italy. Farm-specific clusters were also discovered, but the same MLVA types could likewise be found on different farms. Analysis of multiple isolates originating either from the same tonsils (n = 4) or from the same farm, but 6 months apart, revealed both identical and different MLVA types. MLVA showed a very good discriminatory ability, with a Simpson's discriminatory index (DI) of 0.989. DIs for VNTR loci V2A, V4, V5, V6, V7, and V9 were 0.916, 0.791, 0.901, 0.877, 0.912, and 0.785, respectively, when studying all isolates together, but variation was evident between isolates originating from different countries. Locus V4 in the Spanish isolates and locus V9 in the Latvian isolates did not differentiate at all (DI = 0.000), and locus V9 in the English isolates showed very low discriminatory power (DI = 0.049). The porcine Y. enterocolitica 4/O:3 isolates were diverse, but the variation in DI demonstrates that the well-discriminating loci V2A, V5, V6, and V7 should be included in the MLVA protocol when maximal discriminatory power is needed.
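The Simpson's discriminatory index quoted above is conventionally computed with the Hunter-Gaston form: the probability that two isolates drawn at random belong to different types. A short sketch (the type labels below are invented, not the study's data):

```python
from collections import Counter

def simpsons_di(type_labels):
    """Hunter-Gaston discriminatory index:
    D = 1 - (1 / (N * (N - 1))) * sum_j n_j * (n_j - 1),
    i.e. the probability that two randomly drawn isolates differ in type."""
    counts = Counter(type_labels).values()
    n = sum(counts)
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))
```

A DI of 0.000 (as for locus V4 in the Spanish isolates) means every isolate carries the same allele; a DI near 1 means almost every pair is distinguishable.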

  17. The One-carbon Carrier Methylofuran from Methylobacterium extorquens AM1 Contains a Large Number of α- and γ-Linked Glutamic Acid Residues*

    Science.gov (United States)

    Hemmann, Jethro L.; Saurel, Olivier; Ochsner, Andrea M.; Stodden, Barbara K.; Kiefer, Patrick; Milon, Alain; Vorholt, Julia A.

    2016-01-01

    Methylobacterium extorquens AM1 uses dedicated cofactors for one-carbon unit conversion. Based on the sequence identities of enzymes and activity determinations, a methanofuran analog was proposed to be involved in formaldehyde oxidation in Alphaproteobacteria. Here, we report the structure of the cofactor, which we termed methylofuran. Using an in vitro enzyme assay and LC-MS, methylofuran was identified in cell extracts and further purified. From the exact mass and MS-MS fragmentation pattern, the structure of the cofactor was determined to consist of a polyglutamic acid side chain linked to a core structure similar to the one present in archaeal methanofuran variants. NMR analyses showed that the core structure contains a furan ring. However, instead of the tyramine moiety that is present in methanofuran cofactors, a tyrosine residue is present in methylofuran, which was further confirmed by MS through the incorporation of a 13C-labeled precursor. Methylofuran was present as a mixture of different species with varying numbers of glutamic acid residues in the side chain ranging from 12 to 24. Notably, the glutamic acid residues were not solely γ-linked, as is the case for all known methanofurans, but were identified by NMR as a mixture of α- and γ-linked amino acids. Considering the unusual peptide chain, the elucidation of the structure presented here sets the basis for further research on this cofactor, which is probably the largest cofactor known so far. PMID:26895963

  18. The One-carbon Carrier Methylofuran from Methylobacterium extorquens AM1 Contains a Large Number of α- and γ-Linked Glutamic Acid Residues.

    Science.gov (United States)

    Hemmann, Jethro L; Saurel, Olivier; Ochsner, Andrea M; Stodden, Barbara K; Kiefer, Patrick; Milon, Alain; Vorholt, Julia A

    2016-04-22

    Methylobacterium extorquens AM1 uses dedicated cofactors for one-carbon unit conversion. Based on the sequence identities of enzymes and activity determinations, a methanofuran analog was proposed to be involved in formaldehyde oxidation in Alphaproteobacteria. Here, we report the structure of the cofactor, which we termed methylofuran. Using an in vitro enzyme assay and LC-MS, methylofuran was identified in cell extracts and further purified. From the exact mass and MS-MS fragmentation pattern, the structure of the cofactor was determined to consist of a polyglutamic acid side chain linked to a core structure similar to the one present in archaeal methanofuran variants. NMR analyses showed that the core structure contains a furan ring. However, instead of the tyramine moiety that is present in methanofuran cofactors, a tyrosine residue is present in methylofuran, which was further confirmed by MS through the incorporation of a (13)C-labeled precursor. Methylofuran was present as a mixture of different species with varying numbers of glutamic acid residues in the side chain ranging from 12 to 24. Notably, the glutamic acid residues were not solely γ-linked, as is the case for all known methanofurans, but were identified by NMR as a mixture of α- and γ-linked amino acids. Considering the unusual peptide chain, the elucidation of the structure presented here sets the basis for further research on this cofactor, which is probably the largest cofactor known so far. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.

  19. Ignorance, Vulnerability and the Occurrence of "Radical Surprises": Theoretical Reflections and Empirical Findings

    Science.gov (United States)

    Kuhlicke, C.

    2009-04-01

    By definition natural disasters always contain a moment of surprise. Their occurrence is mostly unforeseen and unexpected. They hit people unprepared, overwhelm them and expose their helplessness. Yet, surprisingly little is known about the reasons for this surprise. Aren't natural disasters expectable and foreseeable after all? Aren't the return rates of most hazards well known, and shouldn't people therefore be better prepared? The central question of this presentation is hence: Why do natural disasters so often radically surprise people (and how can we explain this surprise)? In the first part of the presentation, it is argued that most approaches to vulnerability are not able to grasp this moment of surprise. On the contrary, they have their strength in unravelling the expectable: a person who is marginalized or even oppressed in everyday life is also vulnerable during times of crisis and stress; at least this is the central assumption of most vulnerability studies. In the second part, an understanding of vulnerability is developed which allows taking such radical surprises into account. First, two forms of the unknown are differentiated: an area of the unknown an actor is more or less aware of (ignorance), and an area which is not even known to be not known (nescience). The discovery of the latter is mostly associated with a "radical surprise", since it is by definition impossible to prepare for it. Second, a definition of vulnerability is proposed which allows capturing the dynamics of surprises: people are vulnerable when they discover their nescience, which by definition exceeds previously established routines, stocks of knowledge and resources—in a general sense their capacities—to deal with their physical and/or social environment. This definition explicitly takes the view of different actors seriously and departs from their being surprised. In the third part, findings of a case study, the 2002 flood in Germany, are presented. It is shown

  20. Large Data Set Mining

    NARCIS (Netherlands)

    Leemans, I.B.; Broomhall, Susan

    2017-01-01

    Digital emotion research has yet to make history. Until now large data set mining has not been a very active field of research in early modern emotion studies. This is indeed surprising since first, the early modern field has such rich, copyright-free, digitized data sets and second, emotion studies

  1. Suspect screening of large numbers of emerging contaminants in environmental waters using artificial neural networks for chromatographic retention time prediction and high resolution mass spectrometry data analysis.

    Science.gov (United States)

    Bade, Richard; Bijlsma, Lubertus; Miller, Thomas H; Barron, Leon P; Sancho, Juan Vicente; Hernández, Felix

    2015-12-15

    The recent development of broad-scope high resolution mass spectrometry (HRMS) screening methods has resulted in a much improved capability for new compound identification in environmental samples. However, positive identifications at the ng/L concentration level rely on analytical reference standards for chromatographic retention time (tR) and mass spectral comparisons. Chromatographic tR prediction can play a role in increasing confidence in suspect screening efforts for new compounds in the environment, especially when standards are not available, but reliable methods are lacking. The current work focuses on the development of artificial neural networks (ANNs) for tR prediction in gradient reversed-phase liquid chromatography and applied along with HRMS data to suspect screening of wastewater and environmental surface water samples. Based on a compound tR dataset of >500 compounds, an optimized 4-layer back-propagation multi-layer perceptron model enabled predictions for 85% of all compounds to within 2min of their measured tR for training (n=344) and verification (n=100) datasets. To evaluate the ANN ability for generalization to new data, the model was further tested using 100 randomly selected compounds and revealed 95% prediction accuracy within the 2-minute elution interval. Given the increasing concern on the presence of drug metabolites and other transformation products (TPs) in the aquatic environment, the model was applied along with HRMS data for preliminary identification of pharmaceutically-related compounds in real samples. Examples of compounds where reference standards were subsequently acquired and later confirmed are also presented. To our knowledge, this work presents for the first time, the successful application of an accurate retention time predictor and HRMS data-mining using the largest number of compounds to preliminarily identify new or emerging contaminants in wastewater and surface waters.
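The model above is a 4-layer back-propagation perceptron trained on descriptors of >500 real compounds. As a much-reduced illustration of the same idea, here is a one-hidden-layer regressor trained by batch gradient descent on synthetic descriptor data; all data, sizes, and the learning rate are invented placeholders, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: 4 molecular descriptors -> retention time (min).
X = rng.normal(size=(200, 4))
true_w = np.array([2.0, -1.0, 0.5, 1.5])
y = X @ true_w + 0.1 * rng.normal(size=200)      # pseudo retention times

# Minimal 1-hidden-layer MLP, trained by full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(4, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=16);      b2 = 0.0

lr = 0.05
losses = []
for _ in range(500):
    h = np.tanh(X @ W1 + b1)                     # hidden layer
    pred = h @ W2 + b2                           # linear output
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # back-propagation of the squared-error gradient
    gW2 = h.T @ err / len(X); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)        # tanh derivative
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

A real tR predictor would, as in the paper, be judged by the fraction of held-out compounds predicted within a fixed elution window (here, 2 min), not by training loss alone.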

  2. Extensive unusual lesions on a large number of immersed human victims found to be from cookiecutter sharks (Isistius spp.): an examination of the Yemenia plane crash.

    Science.gov (United States)

    Ribéreau-Gayon, Agathe; Rando, Carolyn; Schuliar, Yves; Chapenoire, Stéphane; Crema, Enrico R; Claes, Julien; Seret, Bernard; Maleret, Vincent; Morgan, Ruth M

    2017-03-01

    Accurate determination of the origin and timing of trauma is key in medicolegal investigations when the cause and manner of death are unknown. However, distinction between criminal and accidental perimortem trauma and postmortem modifications can be challenging when facing unidentified trauma. Postmortem examination of the immersed victims of the Yemenia airplane crash (Comoros, 2009) demonstrated the challenges in diagnosing extensive unusual circular lesions found on the corpses. The objective of this study was to identify the origin and timing of occurrence (peri- or postmortem) of the lesions. A retrospective multidisciplinary study using autopsy reports (n = 113) and postmortem digital photos (n = 3,579) was conducted. Of the 113 victims recovered from the crash, 62 (54.9%) presented unusual lesions (n = 560), with a median number of 7 (IQR 3–13) and a maximum of 27 per corpse. The majority of lesions were elliptic (58%) and had an area smaller than 10 cm² (82.1%). Some lesions (6.8%) also showed clear tooth notches on their edges. These findings identified most of the lesions as consistent with postmortem bite marks from cookiecutter sharks (Isistius spp.). It suggests that cookiecutter sharks were important agents in the degradation of the corpses and thus introduced potential cognitive bias into the investigation of the cause and manner of death. A novel set of evidence-based identification criteria for cookiecutter bite marks on human bodies is developed to facilitate more accurate medicolegal diagnosis of cookiecutter bites.

  3. Analysis of the Latitudinal Variability of Tropospheric Ozone in the Arctic Using the Large Number of Aircraft and Ozonesonde Observations in Early Summer 2008

    Science.gov (United States)

    Ancellet, Gerard; Daskalakis, Nikos; Raut, Jean Christophe; Quennehen, Boris; Ravetta, Francois; Hair, Jonathan; Tarasick, David; Schlager, Hans; Weinheimer, Andrew J.; Thompson, Anne M.

    2016-01-01

    The goals of the paper are to: (1) present tropospheric ozone (O3) climatologies for summer 2008 based on a large number of measurements made during the International Polar Year, when the Polar Study using Aircraft, Remote Sensing, Surface Measurements, and Models of Climate Chemistry, Aerosols, and Transport (POLARCAT) campaigns were conducted; and (2) investigate the processes that determine O3 concentrations in two different regions (Canada and Greenland) that were thoroughly studied using measurements from 3 aircraft and 7 ozonesonde stations. This paper provides an integrated analysis of these observations, and the latitudinal and vertical variability of tropospheric ozone north of 55°N during this period is discussed using a regional model (WRF-Chem). Ozone, CO and potential vorticity (PV) distributions are extracted from the simulation at the measurement locations. The model is able to reproduce the O3 latitudinal and vertical variability, but a negative O3 bias of 6-15 ppbv is found in the free troposphere above 4 km, especially over Canada. Average ozone concentrations are of the order of 65 ppbv at altitudes above 4 km both over Canada and Greenland, while they are less than 50 ppbv in the lower troposphere. The relative influence of stratosphere-troposphere exchange (STE) and of ozone production related to local biomass burning (BB) emissions is discussed using differences between average values of O3, CO and PV for Southern and Northern Canada or Greenland and two vertical ranges in the troposphere: 0-4 km and 4-8 km. For Canada, the model CO distribution and the weak correlation (less than 30%) of O3 and PV suggest that STE is not the major contributor to average tropospheric ozone at latitudes below 70°N, due to the fact that local BB emissions were significant during the 2008 summer period. Conversely, over Greenland significant STE is found according to the better O3 versus PV

  4. ‘Surprise’: Outbreak of Campylobacter infection associated with chicken liver pâté at a surprise birthday party, Adelaide, Australia, 2012

    Science.gov (United States)

    Fearnley, Emily; Denehy, Emma

    2012-01-01

    Objective In July 2012, an outbreak of Campylobacter infection was investigated by the South Australian Communicable Disease Control Branch and Food Policy and Programs Branch. The initial notification identified illness at a surprise birthday party held at a restaurant on 14 July 2012. The objective of the investigation was to identify the potential source of infection and institute appropriate intervention strategies to prevent further illness. Methods A guest list was obtained and a retrospective cohort study undertaken. A combination of paper-based and telephone questionnaires was used to collect exposure and outcome information. An environmental investigation was conducted by Food Policy and Programs Branch at the implicated premises. Results All 57 guests completed the questionnaire (100% response rate), and 15 met the case definition. Analysis showed a significant association between illness and consumption of chicken liver pâté (relative risk: 16.7, 95% confidence interval: 2.4–118.6). No other food or beverage served at the party was associated with illness. Three guests submitted stool samples; all were positive for Campylobacter. The environmental investigation identified that the cooking process used in the preparation of chicken liver pâté may have been inconsistent, resulting in some portions not being cooked adequately to inactivate potential Campylobacter contamination. Discussion Chicken liver products are a known source of Campylobacter infection; therefore, education of food handlers remains a high priority. To better identify outbreaks among the large number of Campylobacter notifications, routine typing of Campylobacter isolates is recommended. PMID:23908933
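The relative risk and confidence interval reported above come from a standard 2×2 cohort-table calculation with the usual log-method interval. A sketch of that calculation (the example counts in the test are hypothetical, not the outbreak's actual table):

```python
import math

def relative_risk(a, b, c, d, z=1.96):
    """Relative risk for a 2x2 cohort table, with a z-based CI on the
    log scale.  a: exposed cases, b: exposed non-cases,
    c: unexposed cases, d: unexposed non-cases."""
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    rr = risk_exposed / risk_unexposed
    # standard error of ln(RR)
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi
```

Small cell counts (as in a 57-guest cohort) are why the reported interval is so wide: a single unexposed case dominates the standard error of ln(RR).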

  5. What is a surprise earthquake? The example of the 2002 San Giuliano (Italy) event

    Directory of Open Access Journals (Sweden)

    M. Mucciarelli

    2005-06-01

    Both in the scientific literature and in the mass media, some earthquakes are defined as «surprise earthquakes». Based on his own judgment, probably any geologist, seismologist or engineer may have his own list of past «surprise earthquakes». This paper tries to quantify the underlying individual perception that may lead a scientist to apply such a definition to a seismic event. The meaning is different depending on the disciplinary approach. For geologists, the Italian database of seismogenic sources is still too incomplete to allow for a quantitative estimate of the subjective degree of belief. For seismologists, quantification is possible by defining the distance between an earthquake and its closest previous neighbor. Finally, for engineers, the San Giuliano quake could not be considered a surprise, since probabilistic site hazard estimates reveal that the change before and after the earthquake is just 4%.

  6. Conference of “Uncertainty and Surprise: Questions on Working with the Unexpected and Unknowable”

    CERN Document Server

    McDaniel, Reuben R; Uncertainty and Surprise in Complex Systems : Questions on Working with the Unexpected

    2005-01-01

    Complexity science has been a source of new insight in physical and social systems and has demonstrated that unpredictability and surprise are fundamental aspects of the world around us. This book is the outcome of a discussion meeting of leading scholars and critical thinkers with expertise in complex systems sciences and leaders from a variety of organizations, sponsored by the Prigogine Center at The University of Texas at Austin and the Plexus Institute, to explore strategies for understanding uncertainty and surprise. Besides contributions to the conference, it includes a key digest by the editors as well as a commentary by the late Nobel laureate Ilya Prigogine, "Surprises in half of a century". The book is intended for researchers and scientists in complexity science as well as for a broad interdisciplinary audience of both practitioners and scholars. It will serve well those interested in the research issues and in the application of complexity science to physical and social systems.

  7. Those fascinating numbers

    CERN Document Server

    Koninck, Jean-Marie De

    2009-01-01

    Who would have thought that listing the positive integers along with their most remarkable properties could end up being such an engaging and stimulating adventure? The author uses this approach to explore elementary and advanced topics in classical number theory. A large variety of numbers are contemplated: Fermat numbers, Mersenne primes, powerful numbers, sublime numbers, Wieferich primes, insolite numbers, Sastry numbers, voracious numbers, to name only a few. The author also presents short proofs of miscellaneous results and constantly challenges the reader with a variety of old and new n

  8. Salience and attention in surprisal-based accounts of language processing

    Directory of Open Access Journals (Sweden)

    Alessandra eZarcone

    2016-06-01

    The notion of salience has been singled out as the explanatory factor for a diverse range of linguistic phenomena. In particular, perceptual salience (e.g. visual salience of objects in the world, acoustic prominence of linguistic sounds) and semantic-pragmatic salience (e.g. prominence of recently mentioned or topical referents) have been shown to influence language comprehension and production. A different line of research has sought to account for behavioral correlates of cognitive load during comprehension as well as for certain patterns in language usage using information-theoretic notions, such as surprisal. Surprisal and salience both affect language processing at different levels, but the relationship between the two has not been adequately elucidated, and the question of whether salience can be reduced to surprisal / predictability is still open. Our review identifies two main challenges in addressing this question: terminological inconsistency and lack of integration between high and low levels of representations in salience-based accounts and surprisal-based accounts. We capitalise upon work in visual cognition in order to orient ourselves in surveying the different facets of the notion of salience in linguistics and their relation with models of surprisal. We find that work on salience highlights aspects of linguistic communication that models of surprisal tend to overlook, namely the role of attention and relevance to current goals, and we argue that the Predictive Coding framework provides a unified view which can account for the role played by attention and predictability at different levels of processing and which can clarify the interplay between low and high levels of processes and between predictability-driven expectation and attention-driven focus.

  9. Salience and Attention in Surprisal-Based Accounts of Language Processing.

    Science.gov (United States)

    Zarcone, Alessandra; van Schijndel, Marten; Vogels, Jorrig; Demberg, Vera

    2016-01-01

    The notion of salience has been singled out as the explanatory factor for a diverse range of linguistic phenomena. In particular, perceptual salience (e.g., visual salience of objects in the world, acoustic prominence of linguistic sounds) and semantic-pragmatic salience (e.g., prominence of recently mentioned or topical referents) have been shown to influence language comprehension and production. A different line of research has sought to account for behavioral correlates of cognitive load during comprehension as well as for certain patterns in language usage using information-theoretic notions, such as surprisal. Surprisal and salience both affect language processing at different levels, but the relationship between the two has not been adequately elucidated, and the question of whether salience can be reduced to surprisal / predictability is still open. Our review identifies two main challenges in addressing this question: terminological inconsistency and lack of integration between high and low levels of representations in salience-based accounts and surprisal-based accounts. We capitalize upon work in visual cognition in order to orient ourselves in surveying the different facets of the notion of salience in linguistics and their relation with models of surprisal. We find that work on salience highlights aspects of linguistic communication that models of surprisal tend to overlook, namely the role of attention and relevance to current goals, and we argue that the Predictive Coding framework provides a unified view which can account for the role played by attention and predictability at different levels of processing and which can clarify the interplay between low and high levels of processes and between predictability-driven expectation and attention-driven focus.
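Surprisal as used in these accounts is simply the negative log-probability of a word given its context. A toy sketch with a maximum-likelihood bigram model (the corpus and the resulting probabilities are invented for illustration):

```python
import math
from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()

# MLE bigram model: P(w | prev) = count(prev, w) / count(prev)
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def surprisal(prev, word):
    """Surprisal in bits: -log2 P(word | prev)."""
    return -math.log2(bigrams[(prev, word)] / unigrams[prev])
```

In this corpus "the" is followed by four different words once each, so `surprisal("the", "cat")` is 2 bits, while "sat" is always followed by "on", so `surprisal("sat", "on")` is 0 bits: fully predictable continuations carry no surprisal, which is the quantity these accounts relate to processing load.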

  10. Salience and Attention in Surprisal-Based Accounts of Language Processing

    Science.gov (United States)

    Zarcone, Alessandra; van Schijndel, Marten; Vogels, Jorrig; Demberg, Vera

    2016-01-01

    The notion of salience has been singled out as the explanatory factor for a diverse range of linguistic phenomena. In particular, perceptual salience (e.g., visual salience of objects in the world, acoustic prominence of linguistic sounds) and semantic-pragmatic salience (e.g., prominence of recently mentioned or topical referents) have been shown to influence language comprehension and production. A different line of research has sought to account for behavioral correlates of cognitive load during comprehension as well as for certain patterns in language usage using information-theoretic notions, such as surprisal. Surprisal and salience both affect language processing at different levels, but the relationship between the two has not been adequately elucidated, and the question of whether salience can be reduced to surprisal / predictability is still open. Our review identifies two main challenges in addressing this question: terminological inconsistency and lack of integration between high and low levels of representations in salience-based accounts and surprisal-based accounts. We capitalize upon work in visual cognition in order to orient ourselves in surveying the different facets of the notion of salience in linguistics and their relation with models of surprisal. We find that work on salience highlights aspects of linguistic communication that models of surprisal tend to overlook, namely the role of attention and relevance to current goals, and we argue that the Predictive Coding framework provides a unified view which can account for the role played by attention and predictability at different levels of processing and which can clarify the interplay between low and high levels of processes and between predictability-driven expectation and attention-driven focus. PMID:27375525
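For readers unfamiliar with the information-theoretic notion: the surprisal of a word is −log₂ P(word | context), so less predictable words carry higher surprisal. A minimal sketch (the probabilities below are invented for illustration, not taken from the paper):

```python
import math

def surprisal(p: float) -> float:
    """Surprisal in bits of an event with probability p: -log2(p)."""
    return -math.log2(p)

# Hypothetical conditional probabilities P(word | context), for illustration only
probs = {"the": 0.4, "dog": 0.05, "platypus": 0.001}
for word, p in probs.items():
    print(f"{word}: {surprisal(p):.2f} bits")
```

    Rare continuations ("platypus") yield high surprisal, which in these accounts corresponds to higher processing cost.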

  11. Risk, surprises and black swans fundamental ideas and concepts in risk assessment and risk management

    CERN Document Server

    Aven, Terje

    2014-01-01

    Risk, Surprises and Black Swans provides an in-depth analysis of the risk concept, with a focus on the critical link to knowledge, and the lack of knowledge, that risk and probability judgements are based on. Based on technical scientific research, this book presents a new perspective to help you understand how to assess and manage surprising, extreme events, known as 'Black Swans'. This approach looks beyond the traditional probability-based principles to offer a broader insight into the important aspects of uncertain events and in doing so explores the ways to manage them.

  12. Gaming the Law of Large Numbers

    Science.gov (United States)

    Hoffman, Thomas R.; Snapp, Bart

    2012-01-01

    Many view mathematics as a rich and wonderfully elaborate game. In turn, games can be used to illustrate mathematical ideas. Fibber's Dice, an adaptation of the game Liar's Dice, is a fast-paced game that rewards gutsy moves and favors the underdog. It also brings to life concepts arising in the study of probability. In particular, Fibber's Dice…
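The law of large numbers that the game brings to life says that the average of many independent rolls converges to the expected value (3.5 for a fair die); a quick simulation sketch (illustrative, not from the article):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def mean_of_rolls(n: int) -> float:
    """Average of n fair six-sided die rolls."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

# The sample mean drifts toward the expected value 3.5 as n grows
for n in (10, 1000, 100000):
    print(n, mean_of_rolls(n))
```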

  13. Hupa Numbers.

    Science.gov (United States)

    Bennett, Ruth, Ed.; And Others

    An introduction to the Hupa number system is provided in this workbook, one in a series of numerous materials developed to promote the use of the Hupa language. The book is written in English with Hupa terms used only for the names of numbers. The opening pages present the numbers from 1-10, giving the numeral, the Hupa word, the English word, and…

  14. Triangular Numbers

    Indian Academy of Sciences (India)

    Admin

    Keywords: triangular number, figurate number, rangoli, Brahmagupta–Pell equation, Jacobi triple product identity. Figure 1: the first four triangular numbers. Anuradha S Garge completed her PhD from Pune University in 2008 under the supervision of Prof. S A Katre. Her research interests include K-theory and number theory.

  15. Proth Numbers

    Directory of Open Access Journals (Sweden)

    Schwarzweller Christoph

    2015-02-01

    Full Text Available In this article we introduce Proth numbers and prove two theorems on such numbers being prime [3]. We also give revised versions of Pocklington’s theorem and of the Legendre symbol. Finally, we prove Pepin’s theorem and that the fifth Fermat number is not prime.
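For context: a Proth number has the form N = k·2^n + 1 with k odd and k < 2^n, and Proth's theorem states that such an N is prime if and only if some base a satisfies a^((N−1)/2) ≡ −1 (mod N). A small sketch of both checks (plain Python for illustration, unrelated to the article's Mizar formalization):

```python
def is_proth_number(N: int) -> bool:
    """Check N = k * 2**n + 1 with k odd and k < 2**n."""
    if N < 3 or N % 2 == 0:
        return False
    m = N - 1
    n = (m & -m).bit_length() - 1   # exponent of the largest power of 2 dividing m
    k = m >> n
    return k < (1 << n)

def proth_prime(N: int) -> bool:
    """Proth's theorem: a Proth number N is prime iff some base a satisfies
    a**((N-1)//2) == N - 1 (mod N). A witness proves primality outright; for
    composite N no witness can exist, so False means composite (for prime N,
    half of all bases are witnesses, so scanning a few suffices in practice)."""
    if not is_proth_number(N):
        raise ValueError("not a Proth number")
    return any(pow(a, (N - 1) // 2, N) == N - 1 for a in range(2, 100))

print([p for p in range(3, 100) if is_proth_number(p) and proth_prime(p)])  # → [3, 5, 13, 17, 41, 97]
print(proth_prime(2**32 + 1))  # fifth Fermat number F5 → False
```

    The final line ties in with the article's Pepin result: F5 = 2³² + 1 is itself a Proth number, and the absence of any witness is consistent with its compositeness.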

  16. The Most Distant Mature Galaxy Cluster - Young, but surprisingly grown-up

    Science.gov (United States)

    2011-03-01

    Astronomers have used an armada of telescopes on the ground and in space, including the Very Large Telescope at ESO's Paranal Observatory in Chile to discover and measure the distance to the most remote mature cluster of galaxies yet found. Although this cluster is seen when the Universe was less than one quarter of its current age it looks surprisingly similar to galaxy clusters in the current Universe. "We have measured the distance to the most distant mature cluster of galaxies ever found", says the lead author of the study in which the observations from ESO's VLT have been used, Raphael Gobat (CEA, Paris). "The surprising thing is that when we look closely at this galaxy cluster it doesn't look young - many of the galaxies have settled down and don't resemble the usual star-forming galaxies seen in the early Universe." Clusters of galaxies are the largest structures in the Universe that are held together by gravity. Astronomers expect these clusters to grow through time and hence that massive clusters would be rare in the early Universe. Although even more distant clusters have been seen, they appear to be young clusters in the process of formation and are not settled mature systems. The international team of astronomers used the powerful VIMOS and FORS2 instruments on ESO's Very Large Telescope (VLT) to measure the distances to some of the blobs in a curious patch of very faint red objects first observed with the Spitzer space telescope. This grouping, named CL J1449+0856 [1], had all the hallmarks of being a very remote cluster of galaxies [2]. The results showed that we are indeed seeing a galaxy cluster as it was when the Universe was about three billion years old - less than one quarter of its current age [3]. Once the team knew the distance to this very rare object they looked carefully at the component galaxies using both the NASA/ESA Hubble Space Telescope and ground-based telescopes, including the VLT. They found evidence suggesting that most of the

  17. Sagan numbers

    OpenAIRE

    Mendonça, J. Ricardo G.

    2012-01-01

    We define a new class of numbers based on the first occurrence of certain patterns of zeros and ones in the expansion of irrational numbers in a given base, and call them Sagan numbers, since they were first mentioned, in a special case, by the North American astronomer Carl E. Sagan in his science-fiction novel "Contact." Sagan numbers hold connections with a wealth of mathematical ideas. We describe some properties of the newly defined numbers and indicate directions for further amusement.

  18. Dealing with unexpected events on the flight deck : A conceptual model of startle and surprise

    NARCIS (Netherlands)

    Landman, H.M.; Groen, E.L.; Paassen, M.M. van; Bronkhorst, A.W.; Mulder, M.

    2017-01-01

    Objective: A conceptual model is proposed in order to explain pilot performance in surprising and startling situations. Background: Today’s debate around loss of control following in-flight events and the implementation of upset prevention and recovery training has highlighted the importance of

  19. Bagpipes and Artichokes: Surprise as a Stimulus to Learning in the Elementary Music Classroom

    Science.gov (United States)

    Jacobi, Bonnie Schaffhauser

    2016-01-01

    Incorporating surprise into music instruction can stimulate student attention, curiosity, and interest. Novelty focuses attention in the reticular activating system, increasing the potential for brain memory storage. Elementary ages are ideal for introducing novel instruments, pieces, composers, or styles of music. Young children have fewer…

  20. The Educational Philosophies of Mordecai Kaplan and Michael Rosenak: Surprising Similarities and Illuminating Differences

    Science.gov (United States)

    Schein, Jeffrey; Caplan, Eric

    2014-01-01

    The thoughts of Mordecai Kaplan and Michael Rosenak present surprising commonalities as well as illuminating differences. Similarities include the perception that Judaism and Jewish education are in crisis, the belief that Jewish peoplehood must include commitment to meaningful content, the need for teachers to teach from a position of…

  1. Models of Automation surprise : results of a field survey in aviation

    NARCIS (Netherlands)

    De Boer, Robert; Dekker, Sidney

    2017-01-01

    Automation surprises in aviation continue to be a significant safety concern and the community’s search for effective strategies to mitigate them are ongoing. The literature has offered two fundamentally divergent directions, based on different ideas about the nature of cognition and collaboration

  2. Decision-making under surprise and uncertainty: Arsenic contamination of water supplies

    Science.gov (United States)

    Randhir, Timothy O.; Mozumder, Pallab; Halim, Nafisa

    2018-05-01

    With ignorance and potential surprise dominating decision making in water resources, a framework for dealing with such uncertainty is a critical need in hydrology. We operationalize the 'potential surprise' criterion proposed by Shackle, Vickers, and Katzner (SVK) to derive decision rules to manage water resources under uncertainty and ignorance. We apply this framework to managing water supply systems in Bangladesh that face severe, naturally occurring arsenic contamination. The uncertainty involved with arsenic in water supplies makes the application of conventional analysis of decision-making ineffective. Given the uncertainty and surprise involved in such cases, we find that optimal decisions tend to favor actions that avoid irreversible outcomes instead of conventional cost-effective actions. We observe that a diversification of the water supply system also emerges as a robust strategy to avert unintended outcomes of water contamination. Shallow wells had a slightly higher optimal allocation (36%) than deep wells and surface treatment, which had allocation levels of roughly 32% each. The approach can be applied in a variety of other cases that involve decision making under uncertainty and surprise, a frequent situation in natural resources management.

  3. Surprising results: HIV testing and changes in contraceptive practices among young women in Malawi

    Science.gov (United States)

    Sennott, Christie; Yeatman, Sara

    2015-01-01

    This study uses eight waves of data from the population-based Tsogolo la Thanzi study (2009–2011) in rural Malawi to examine changes in young women’s contraceptive practices, including the use of condoms, non-barrier contraceptive methods, and abstinence, following positive and negative HIV tests. The analysis factors in women’s prior perceptions of their HIV status that may already be shaping their behaviour and separates surprise HIV test results from those that merely confirm what was already believed. Fixed effects logistic regression models show that HIV testing frequently affects the contraceptive practices of young Malawian women, particularly when the test yields an unexpected result. Specifically, women who are surprised to test HIV positive increase their condom use and are more likely to use condoms consistently. Following an HIV negative test (whether a surprise or expected), women increase their use of condoms and decrease their use of non-barrier contraceptives; the latter may be due to an increase in abstinence following a surprise negative result. Changes in condom use following HIV testing are robust to the inclusion of potential explanatory mechanisms including fertility preferences, relationship status, and the perception that a partner is HIV positive. The results demonstrate that both positive and negative tests can influence women’s sexual and reproductive behaviours, and emphasise the importance of conceptualising HIV testing as offering new information only insofar as results deviate from prior perceptions of HIV status. PMID:26160156

  4. Surprise, Memory, and Retrospective Judgment Making: Testing Cognitive Reconstruction Theories of the Hindsight Bias Effect

    Science.gov (United States)

    Ash, Ivan K.

    2009-01-01

    Hindsight bias has been shown to be a pervasive and potentially harmful decision-making bias. A review of 4 competing cognitive reconstruction theories of hindsight bias revealed conflicting predictions about the role and effect of expectation or surprise in retrospective judgment formation. Two experiments tested these predictions examining the…

  5. Changes in the number of nesting pairs and breeding success of theWhite Stork Ciconia ciconia in a large city and a neighbouring rural area in South-West Poland

    Directory of Open Access Journals (Sweden)

    Kopij Grzegorz

    2017-12-01

    Full Text Available During the years 1994–2009, the number of White Stork pairs breeding in the city of Wrocław (293 km²) fluctuated between 5 pairs in 1999 and 19 pairs in 2004. Most nests were clumped in two sites in the Odra river valley. Two nests were located only ca. 1 km from the city hall. The fluctuations in numbers can be linked to the availability of feeding grounds and weather. In years when grass was mowed in the Odra valley, the number of White Storks was higher than in years when the grass was left unattended. Overall, the mean number of fledglings per successful pair during the years 1995–2009 was slightly higher in the rural than in the urban area. Contrary to expectation, the mean number of fledglings per successful pair was highest in the year of highest population density. In two rural counties adjacent to Wrocław, the number of breeding pairs was similar to that in the city in 1994/95 (15 vs. 13 pairs). However, in 2004 the number of breeding pairs in the city almost doubled compared to that in the neighboring counties (10 vs. 19 pairs). After a sharp decline between 2004 and 2008, populations in both areas were similar in 2009 (5 vs. 4 pairs), but much lower than in 1994–1995. Wrocław is probably the only large city (>100,000 people) in Poland where the White Stork has developed a sizeable, although fluctuating, breeding population. One of the most powerful roles city-nesting White Storks may play is engaging citizens directly with nature, thereby facilitating environmental education and awareness.

  6. Eulerian numbers

    CERN Document Server

    Petersen, T Kyle

    2015-01-01

    This text presents the Eulerian numbers in the context of modern enumerative, algebraic, and geometric combinatorics. The book first studies Eulerian numbers from a purely combinatorial point of view, then embarks on a tour of how these numbers arise in the study of hyperplane arrangements, polytopes, and simplicial complexes. Some topics include a thorough discussion of gamma-nonnegativity and real-rootedness for Eulerian polynomials, as well as the weak order and the shard intersection order of the symmetric group. The book also includes a parallel story of Catalan combinatorics, wherein the Eulerian numbers are replaced with Narayana numbers. Again there is a progression from combinatorics to geometry, including discussion of the associahedron and the lattice of noncrossing partitions. The final chapters discuss how both the Eulerian and Narayana numbers have analogues in any finite Coxeter group, with many of the same enumerative and geometric properties. There are four supplemental chapters throughout, ...
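The Eulerian number A(n, k), which counts permutations of {1, …, n} with exactly k descents, satisfies the standard recurrence A(n, k) = (k+1)·A(n−1, k) + (n−k)·A(n−1, k−1); a short sketch (illustrative code, not from the book):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def eulerian(n: int, k: int) -> int:
    """Number of permutations of {1..n} with exactly k descents."""
    if k < 0 or k >= n:
        return 0
    if n == 1:
        return 1  # only the identity permutation, with 0 descents
    return (k + 1) * eulerian(n - 1, k) + (n - k) * eulerian(n - 1, k - 1)

print([eulerian(4, k) for k in range(4)])  # → [1, 11, 11, 1]
```

    Each row sums to n!, since every permutation has some number of descents.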

  7. Transfinite Numbers

    Indian Academy of Sciences (India)

    Transfinite Numbers. What is Infinity? S M Srivastava. In a series of revolutionary articles written during the last quarter of the nineteenth century, the great German mathematician Georg Cantor removed the age-old mistrust of infinity and created an exceptionally beautiful and useful theory of transfinite numbers. This is.

  8. Room for wind. An investigation into the possibilities for the erection of large numbers of wind turbines. Ruimte voor wind. Een studie naar de plaatsingsmogelijkheden van grote aantallen windturbines

    Energy Technology Data Exchange (ETDEWEB)

    Arkesteijn, L; Van Huis, G; Reckman, E

    1987-01-01

    The Dutch government aims to realize a wind power capacity in The Netherlands of 1000 MW in the year 2000. Environmental impacts of the erection of a large number of 200 kW and 1 MW wind turbines are studied. Four siting models have been developed, paying attention to environmental and economic aspects, the possibilities of feeding the electric power into the national power grid, and the availability and reliability of sufficient wind. Noise pollution and danger to birds are to be avoided. The choice between wind parks, in which a number of wind turbines are concentrated in a small area, and a more dispersed arrangement is difficult if all relevant factors are taken into consideration. Without government intervention the target of 1000 MW in the year 2000 will probably not be attained. It is therefore desirable to pursue an active energy policy in favor of wind energy, for which many ways are possible.

  9. Chocolate Numbers

    OpenAIRE

    Ji, Caleb; Khovanova, Tanya; Park, Robin; Song, Angela

    2015-01-01

    In this paper, we consider a game played on a rectangular $m \\times n$ gridded chocolate bar. Each move, a player breaks the bar along a grid line. Each move after that consists of taking any piece of chocolate and breaking it again along existing grid lines, until just $mn$ individual squares remain. This paper enumerates the number of ways to break an $m \\times n$ bar, which we call chocolate numbers, and introduces four new sequences related to these numbers. Using various techniques, we p...

  10. Number theory

    CERN Document Server

    Andrews, George E

    1994-01-01

    Although mathematics majors are usually conversant with number theory by the time they have completed a course in abstract algebra, other undergraduates, especially those in education and the liberal arts, often need a more basic introduction to the topic.In this book the author solves the problem of maintaining the interest of students at both levels by offering a combinatorial approach to elementary number theory. In studying number theory from such a perspective, mathematics majors are spared repetition and provided with new insights, while other students benefit from the consequent simpl

  11. Models of Automation Surprise: Results of a Field Survey in Aviation

    Directory of Open Access Journals (Sweden)

    Robert De Boer

    2017-09-01

    Full Text Available Automation surprises in aviation continue to be a significant safety concern and the community’s search for effective strategies to mitigate them are ongoing. The literature has offered two fundamentally divergent directions, based on different ideas about the nature of cognition and collaboration with automation. In this paper, we report the results of a field study that empirically compared and contrasted two models of automation surprises: a normative individual-cognition model and a sensemaking model based on distributed cognition. Our data prove a good fit for the sense-making model. This finding is relevant for aviation safety, since our understanding of the cognitive processes that govern human interaction with automation drive what we need to do to reduce the frequency of automation-induced events.

  12. Human Amygdala Tracks a Feature-Based Valence Signal Embedded within the Facial Expression of Surprise.

    Science.gov (United States)

    Kim, M Justin; Mattek, Alison M; Bennett, Randi H; Solomon, Kimberly M; Shin, Jin; Whalen, Paul J

    2017-09-27

    Human amygdala function has been traditionally associated with processing the affective valence (negative vs positive) of an emotionally charged event, especially those that signal fear or threat. However, this account of human amygdala function can be explained by alternative views, which posit that the amygdala might be tuned to either (1) general emotional arousal (activation vs deactivation) or (2) specific emotion categories (fear vs happy). Delineating the pure effects of valence independent of arousal or emotion category is a challenging task, given that these variables naturally covary under many circumstances. To circumvent this issue and test the sensitivity of the human amygdala to valence values specifically, we measured the dimension of valence within the single facial expression category of surprise. Given the inherent valence ambiguity of this category, we show that surprised expression exemplars are attributed valence and arousal values that are uniquely and naturally uncorrelated. We then present fMRI data from both sexes, showing that the amygdala tracks these consensus valence values. Finally, we provide evidence that these valence values are linked to specific visual features of the mouth region, isolating the signal by which the amygdala detects this valence information. SIGNIFICANCE STATEMENT There is an open question as to whether human amygdala function tracks the valence value of cues in the environment, as opposed to either a more general emotional arousal value or a more specific emotion category distinction. Here, we demonstrate the utility of surprised facial expressions because exemplars within this emotion category take on valence values spanning the dimension of bipolar valence (positive to negative) at a consistent level of emotional arousal. Functional neuroimaging data showed that amygdala responses tracked the valence of surprised facial expressions, unconfounded by arousal. Furthermore, a machine learning classifier identified

  13. Prediction, Expectation, and Surprise: Methods, Designs, and Study of a Deployed Traffic Forecasting Service

    OpenAIRE

    Horvitz, Eric J.; Apacible, Johnson; Sarin, Raman; Liao, Lin

    2012-01-01

    We present research on developing models that forecast traffic flow and congestion in the Greater Seattle area. The research has led to the deployment of a service named JamBayes, that is being actively used by over 2,500 users via smartphones and desktop versions of the system. We review the modeling effort and describe experiments probing the predictive accuracy of the models. Finally, we present research on building models that can identify current and future surprises, via efforts on mode...

  14. The effect of emotionally valenced eye region images on visuocortical processing of surprised faces.

    Science.gov (United States)

    Li, Shuaixia; Li, Ping; Wang, Wei; Zhu, Xiangru; Luo, Wenbo

    2018-05-01

    In this study, we presented pictorial representations of happy, neutral, and fearful expressions projected in the eye regions to determine whether the eye region alone is sufficient to produce a context effect. Participants were asked to judge the valence of surprised faces that had been preceded by a picture of an eye region. Behavioral results showed that affective ratings of surprised faces were context dependent. Prime-related ERPs with presentation of happy eyes elicited a larger P1 than those for neutral and fearful eyes, likely due to the recognition advantage provided by a happy expression. Target-related ERPs showed that surprised faces in the context of fearful and happy eyes elicited dramatically larger C1 than those in the neutral context, which reflected the modulation by predictions during the earliest stages of face processing. N170 amplitudes were larger with neutral and fearful eye contexts than with the happy context, suggesting faces were being integrated with contextual threat information. The P3 component exhibited enhanced brain activity in response to faces preceded by happy and fearful eyes compared with neutral eyes, indicating motivated attention processing may be involved at this stage. Altogether, these results indicate for the first time that the influence of isolated eye regions on the perception of surprised faces involves preferential processing at the early stages and elaborate processing at the late stages. Moreover, higher cognitive processes such as predictions and attention can modulate face processing from the earliest stages in a top-down manner. © 2017 Society for Psychophysiological Research.

  15. Nice numbers

    CERN Document Server

    Barnes, John

    2016-01-01

    In this intriguing book, John Barnes takes us on a journey through aspects of numbers much as he took us on a geometrical journey in Gems of Geometry. Similarly originating from a series of lectures for adult students at Reading and Oxford University, this book touches a variety of amusing and fascinating topics regarding numbers and their uses both ancient and modern. The author intrigues and challenges his audience with both fundamental number topics such as prime numbers and cryptography, and themes of daily needs and pleasures such as counting one's assets, keeping track of time, and enjoying music. Puzzles and exercises at the end of each lecture offer additional inspiration, and numerous illustrations accompany the reader. Furthermore, a number of appendices provides in-depth insights into diverse topics such as Pascal’s triangle, the Rubik cube, Mersenne’s curious keyboards, and many others. A theme running through is the thought of what is our favourite number. Written in an engaging and witty sty...

  16. Analysis of physiological signals for recognition of boredom, pain, and surprise emotions.

    Science.gov (United States)

    Jang, Eun-Hye; Park, Byoung-Jun; Park, Mi-Sook; Kim, Sang-Hyeob; Sohn, Jin-Hun

    2015-06-18

    The aim of the study was to examine the differences among boredom, pain, and surprise, and to propose approaches for emotion recognition based on physiological signals. The three emotions were induced through the presentation of emotional stimuli, and electrocardiography (ECG), electrodermal activity (EDA), skin temperature (SKT), and photoplethysmography (PPG) were measured as physiological signals to collect a dataset from 217 participants experiencing the emotions. Twenty-seven physiological features were extracted from the signals to classify the three emotions. Discriminant function analysis (DFA), a statistical method, and five machine learning algorithms (linear discriminant analysis (LDA), classification and regression trees (CART), self-organizing maps (SOM), the Naïve Bayes algorithm, and support vector machines (SVM)) were used for classifying the emotions. The results show that the differences in physiological responses among the emotions are significant in heart rate (HR), skin conductance level (SCL), skin conductance response (SCR), mean skin temperature (meanSKT), blood volume pulse (BVP), and pulse transit time (PTT), and the highest recognition accuracy of 84.7% was obtained using DFA. This study demonstrates the differences among boredom, pain, and surprise and identifies the best emotion recognizer for the classification of the three emotions using physiological signals.
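As a toy illustration of this classification setup (the feature values below are synthetic, and a simple nearest-centroid rule stands in for the DFA/LDA/SVM classifiers the study actually used):

```python
# Toy sketch: classify emotions from physiological features.
# Synthetic (heart rate, skin conductance) vectors; nearest-centroid
# stands in for the study's DFA / LDA / SVM classifiers.
from statistics import mean

training = {
    "boredom":  [(62, 2.1), (60, 2.3), (64, 2.0)],
    "pain":     [(88, 6.5), (92, 7.0), (85, 6.2)],
    "surprise": [(75, 4.8), (78, 5.1), (73, 4.5)],
}

# Per-emotion centroid of the training feature vectors
centroids = {
    label: tuple(mean(v[i] for v in vecs) for i in range(2))
    for label, vecs in training.items()
}

def classify(sample):
    """Return the emotion whose centroid is nearest in squared Euclidean distance."""
    return min(
        centroids,
        key=lambda lbl: sum((s - c) ** 2 for s, c in zip(sample, centroids[lbl])),
    )

print(classify((90, 6.8)))  # → pain
```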

  17. Neutrino number of the universe

    International Nuclear Information System (INIS)

    Kolb, E.W.

    1981-01-01

    The influence of grand unified theories on the lepton number of the universe is reviewed. A scenario is presented for the generation of a large (>> 1) lepton number and a small (<< 1) baryon number. 15 references

  18. Number names and number understanding

    DEFF Research Database (Denmark)

    Ejersbo, Lisser Rye; Misfeldt, Morten

    2014-01-01

    This paper concerns the results from the first year of a three-year research project involving the relationship between Danish number names and their corresponding digits in the canonical base 10 system. The project aims to develop a system to help the students’ understanding of the base 10 system… the Danish number names are more complicated than in other languages. Keywords: a research project in grades 0 and 1 in a Danish school, base-10 system, two-digit number names, semiotic, cognitive perspectives.

  19. Funny Numbers

    Directory of Open Access Journals (Sweden)

    Theodore M. Porter

    2012-12-01

    Full Text Available The struggle over cure rate measures in nineteenth-century asylums provides an exemplary instance of how, when used for official assessments of institutions, these numbers become sites of contestation. The evasion of goals and corruption of measures tends to make these numbers “funny” in the sense of becoming dishonest, while the mismatch between boring, technical appearances and cunning backstage manipulations supplies dark humor. The dangers are evident in recent efforts to decentralize the functions of governments and corporations using incentives based on quantified targets.

  20. Transcendental numbers

    CERN Document Server

    Murty, M Ram

    2014-01-01

    This book provides an introduction to the topic of transcendental numbers for upper-level undergraduate and graduate students. The text is constructed to support a full course on the subject, including descriptions of both relevant theorems and their applications. While the first part of the book focuses on introducing key concepts, the second part presents more complex material, including applications of Baker’s theorem, Schanuel’s conjecture, and Schneider’s theorem. These later chapters may be of interest to researchers interested in examining the relationship between transcendence and L-functions. Readers of this text should possess basic knowledge of complex analysis and elementary algebraic number theory.

  1. Templates, Numbers & Watercolors.

    Science.gov (United States)

    Clemesha, David J.

    1990-01-01

    Describes how a second-grade class used large templates to draw and paint five-digit numbers. The lesson integrated artistic knowledge and vocabulary with their mathematics lesson in place value. Students learned how draftspeople use templates, and they studied number paintings by Charles Demuth and Jasper Johns. (KM)

  2. ‘Surprise’: Outbreak of Campylobacter infection associated with chicken liver pâté at a surprise birthday party, Adelaide, Australia, 2012

    Directory of Open Access Journals (Sweden)

    Emma Denehy

    2012-11-01

    Full Text Available Objective: In July 2012, an outbreak of Campylobacter infection was investigated by the South Australian Communicable Disease Control Branch and Food Policy and Programs Branch. The initial notification identified illness at a surprise birthday party held at a restaurant on 14 July 2012. The objective of the investigation was to identify the potential source of infection and institute appropriate intervention strategies to prevent further illness. Methods: A guest list was obtained and a retrospective cohort study undertaken. A combination of paper-based and telephone questionnaires was used to collect exposure and outcome information. An environmental investigation was conducted by the Food Policy and Programs Branch at the implicated premises. Results: All 57 guests completed the questionnaire (100% response rate), and 15 met the case definition. Analysis showed a significant association between illness and consumption of chicken liver pâté (relative risk: 16.7, 95% confidence interval: 2.4–118.6). No other food or beverage served at the party was associated with illness. Three guests submitted stool samples; all were positive for Campylobacter. The environmental investigation identified that the cooking process used in the preparation of chicken liver pâté may have been inconsistent, resulting in some portions not being cooked adequately to inactivate potential Campylobacter contamination. Discussion: Chicken liver products are a known source of Campylobacter infection; therefore, education of food handlers remains a high priority. To better identify outbreaks among the large number of Campylobacter notifications, routine typing of Campylobacter isolates is recommended.

  3. Transfinite Numbers

    Indian Academy of Sciences (India)

    This is a characteristic difference between finite and infinite sets, and Cantor created an immensely useful branch of mathematics based on this idea, one which had a great impact on the whole of mathematics. For example, the question of what a number (finite or infinite) is, is almost a philosophical one. However, Cantor's work turned it ...

  4. The influence of the surprising decay properties of element 108 on search experiments for new elements

    International Nuclear Information System (INIS)

    Hofmann, S.; Armbruster, P.; Muenzenberg, G.; Reisdorf, W.; Schmidt, K.H.; Burkhard, H.G.; Hessberger, F.P.; Schoett, H.J.; Agarwal, Y.K.; Berthes, G.; Gollerthan, U.; Folger, H.; Hingmann, J.G.; Keller, J.G.; Leino, M.E.; Lemmertz, P.; Montoya, M.; Poppensieker, K.; Quint, B.; Zychor, I.

    1986-01-01

    Results of experiments to synthesize the heaviest elements are reported. Surprising is the high stability against fission, not only of the odd and odd-odd nuclei but also of the even isotopes of even elements. Alpha-decay data indicate an increasing stabilization of nuclei by shell effects up to 266-109 (mass number 266 of element 109), the heaviest known nuclide. Theoretically, the high stability is explained by an island of nuclei with large quadrupole and hexadecapole deformations around Z=109 and N=162. Future experiments are planned to prove the island character of these heavy nuclei. (orig.)

  5. Asymptotic numbers: Pt.1

    International Nuclear Information System (INIS)

    Todorov, T.D.

    1980-01-01

    The set of asymptotic numbers A is introduced as a system of generalized numbers that includes the system of real numbers R as well as infinitely small (infinitesimal) and infinitely large numbers. The detailed algebraic properties of A, which are unusual compared with known algebraic structures, are studied. It is proved that the set of asymptotic numbers A cannot be isomorphically embedded as a subspace in any group, ring, or field, but some particular subsets of asymptotic numbers are shown to be groups, rings, and fields. The algebraic operations, their additive and multiplicative forms, and the algebraic properties are constructed in an appropriate way. It is shown that the asymptotic numbers give rise to a new type of generalized functions, quite analogous to the distributions of Schwartz, allowing, however, the operation of multiplication. A possible application of these functions to quantum theory is discussed.

  6. Would you be surprised if this patient died?: Preliminary exploration of first and second year residents' approach to care decisions in critically ill patients

    Directory of Open Access Journals (Sweden)

    Armstrong John D

    2003-01-01

    Full Text Available Abstract Background How physicians approach decision-making when caring for critically ill patients is poorly understood. This study aims to explore how residents think about prognosis and approach care decisions when caring for seriously ill, hospitalized patients. Methods Qualitative study in which we conducted structured discussions with first- and second-year internal medicine residents (n = 8) caring for critically ill patients during Medical Intensive Care Unit Ethics and Discharge Planning Rounds. Residents were asked to respond to questions beginning with "Would you be surprised if this patient died?" Results An equal number of residents responded that they would (n = 4) or would not (n = 4) be surprised if their patient died. Reasons for being surprised included the rapid onset of an acute illness, reversible disease, improving clinical course and the patient's prior survival under similar circumstances. Residents reported no surprise with a worsening clinical course. Based on the realization that their patient might die, residents cited potential changes in management that included clarifying treatment goals, improving communication with families, spending more time with patients and ordering fewer laboratory tests. Perceived or implied barriers to changes in management included limited time, competing clinical priorities, "not knowing" a patient, limited knowledge and experience, the presence of diagnostic or prognostic uncertainty and unclear treatment goals. Conclusions These junior-level residents appear to rely on clinical course, among other factors, when assessing prognosis and the possibility of death in severely ill patients. Further investigation is needed to understand how these factors impact decision-making and whether perceived barriers to changes in patient management influence approaches to care.

  7. Surprisal analysis of Glioblastoma Multiform (GBM) microRNA dynamics unveils tumor specific phenotype.

    Science.gov (United States)

    Zadran, Sohila; Remacle, Francoise; Levine, Raphael

    2014-01-01

    Glioblastoma multiform (GBM) is the most fatal form of all brain cancers in humans. Currently there are limited diagnostic tools for GBM detection. Here, we applied surprisal analysis, a theory grounded in thermodynamics, to unveil how biomolecule energetics, specifically a redistribution of free energy amongst microRNAs (miRNAs), results in a system deviating from a non-cancer state to the GBM cancer-specific phenotypic state. Utilizing global miRNA microarray expression data from normal and GBM patient tumors, surprisal analysis characterizes a miRNA system response capable of distinguishing GBM samples from normal tissue biopsy samples. We show that the miRNAs contributing to this system behavior define a disease phenotypic state specific to GBM and therefore constitute a unique GBM-specific thermodynamic signature. MiRNAs implicated in the regulation of stochastic signaling processes crucial to the hallmarks of human cancer dominate this GBM cancer phenotypic state. With this theory, we were able to distinguish GBM patients with high fidelity solely by monitoring the dynamics of miRNAs present in patients' biopsy samples. We anticipate that the GBM-specific thermodynamic signature will provide a critical translational tool in better characterizing cancer types and in the development of future therapeutics for GBM.
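    The bookkeeping behind surprisal analysis can be sketched in a few lines. In the full method, ln X_i = ln X_i^0 − Σ_α λ_α(t) G_iα, with the constraint patterns G and Lagrange multipliers λ extracted from the data (typically by singular value decomposition); the minimal sketch below computes only the per-miRNA log deviation from a baseline state that those constraint terms must account for. The expression values and the RMS summary score are hypothetical illustrations, not the paper's data or its actual procedure.

```python
import math

def log_deviation(sample, reference):
    """Per-miRNA deviation from the reference (baseline) state in log space.

    In surprisal analysis the deviation ln(X_i / X_i^0) is what the
    constraint terms -sum_a lambda_a * G_ia must account for.
    """
    return [math.log(x / x0) for x, x0 in zip(sample, reference)]

def deviation_score(sample, reference):
    """A crude one-number summary: root-mean-square log deviation."""
    d = log_deviation(sample, reference)
    return math.sqrt(sum(v * v for v in d) / len(d))

# Hypothetical expression levels (arbitrary units) for four miRNAs:
baseline = [100.0, 80.0, 120.0, 60.0]   # normal-tissue reference state
normal = [105.0, 78.0, 118.0, 62.0]     # another normal sample
tumour = [400.0, 20.0, 300.0, 15.0]     # GBM-like sample, strongly shifted

print(deviation_score(normal, baseline))  # small deviation from baseline
print(deviation_score(tumour, baseline))  # much larger deviation
```

    A sample whose miRNA profile has drifted far from the baseline free-energy state scores high, which is the intuition behind using the signature to separate tumor from normal biopsies.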

  8. Large deviations

    CERN Document Server

    Varadhan, S R S

    2016-01-01

    The theory of large deviations deals with rates at which probabilities of certain events decay as a natural parameter in the problem varies. This book, which is based on a graduate course on large deviations at the Courant Institute, focuses on three concrete sets of examples: (i) diffusions with small noise and the exit problem, (ii) large time behavior of Markov processes and their connection to the Feynman-Kac formula and the related large deviation behavior of the number of distinct sites visited by a random walk, and (iii) interacting particle systems, their scaling limits, and large deviations from their expected limits. For the most part the examples are worked out in detail, and in the process the subject of large deviations is developed. The book will give the reader a flavor of how large deviation theory can help in problems that are not posed directly in terms of large deviations. The reader is assumed to have some familiarity with probability, Markov processes, and interacting particle systems.
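    The book's central object, the exponential decay rate of a tail probability, can be seen numerically in the simplest case of fair coin flips. The sketch below (an illustration, not taken from the book) compares the exact tail probability of the sample mean with Cramér's rate function I(a) = a ln(a/p) + (1−a) ln((1−a)/(1−p)).

```python
import math

def tail_prob(n, a, p=0.5):
    """Exact P(S_n / n >= a) for S_n ~ Binomial(n, p)."""
    k0 = math.ceil(n * a - 1e-9)  # guard against float error in n*a
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k0, n + 1))

def rate_function(a, p=0.5):
    """Cramer rate function I(a): the exponential decay rate of the tail."""
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

a = 0.6
for n in (50, 200, 800):
    pn = tail_prob(n, a)
    # -(1/n) ln P(S_n/n >= a) approaches I(a) as n grows
    print(n, pn, -math.log(pn) / n)
print(rate_function(a))
```

    The tail probability itself shrinks super-exponentially fast in absolute terms, but the normalized quantity −(1/n) ln P converges to the rate function, which is the statement the theory makes precise.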

  9. Numbers and other math ideas come alive

    CERN Document Server

    Pappas, Theoni

    2012-01-01

    Most people don't think about numbers, or take them for granted. For the average person numbers are looked upon as cold, clinical, inanimate objects. Math ideas are viewed as something to get a job done or a problem solved. Get ready for a big surprise with Numbers and Other Math Ideas Come Alive. Pappas explores mathematical ideas by looking behind the scenes of what numbers, points, lines, and other concepts are saying and thinking. In each story, properties and characteristics of math ideas are entertainingly uncovered and explained through the dialogues and actions of its math

  10. Charge-density waves in alpha-uranium: A story of endless surprises

    International Nuclear Information System (INIS)

    Lander, G.H.

    1982-01-01

    The properties of element 92, uranium, at low temperature have remained an enigma since major anomalies in almost all physical property measurements were first reported over twenty years ago. By far the most dramatic measurements were those by Fisher on the elastic constants, which strongly suggested a structural phase transition at approximately 43 K. Initially no such phase transition was found. Recently, neutron inelastic experiments at Oak Ridge mapped out the phonon dispersion curves at room temperature, and in the process discovered an anomalous soft phonon of Σ4 symmetry along the [100] axis. On cooling, weak satellites were found to form near the position [0.5, 0, 0], thus signaling a periodic distortion. However, such a charge-density wave appeared to have a complex wave-vector relationship with the fundamental lattice, leading the authors to introduce a two-phase model for the phase transition. Simultaneously, by using a photographic technique designed to view large segments of reciprocal space, Marmeggi and Delapalme at the ILL discovered a completely new set of satellite reflections, indexable with wave vector [0.5, q_y, q_z], where q_y and q_z are incommensurate (approximately 0.18), not equal, and vary with temperature. We have now measured the intensities of a great number of these new satellites and been able to fit the results with a modulated α-U structure. The atoms are displaced in all three independent crystallographic directions according to a sinusoidal wave form. The overall agreement between the predicted and observed structure factors is excellent, suggesting that at least the static positions of the atoms at low temperature in this element are now understood. In this review the status of research on the structural phase transition will be presented. Neither the full details of the phase transition nor the reasons for it are understood at this time. A number of further experiments are suggested. (orig.)

  11. Vascular legacy: HOPE ADVANCEs to EMPA-REG and LEADER: A Surprising similarity

    Directory of Open Access Journals (Sweden)

    Sanjay Kalra

    2017-01-01

    Full Text Available Recently reported cardiovascular outcome studies on empagliflozin (EMPA-REG) and liraglutide (LEADER) have spurred interest in this field of diabetology. This commentary compares and contrasts these studies with two equally important outcome trials conducted using blood-pressure-lowering agents. A comparison with the MICROHOPE (ramipril) and ADVANCE (perindopril + indapamide) blood pressure arms throws up interesting facts. The degree of blood pressure lowering, the dissociation between cardiovascular and cerebrovascular benefits, and the discordance between renal and retinal outcomes are surprisingly similar in these trials, conducted using disparate molecules. The time taken to achieve such benefits is similar for all drugs except empagliflozin. Such discussion helps inform rational and evidence-based choice of therapy and forms the framework for future research.

  12. Probing Critical Point Energies of Transition Metal Dichalcogenides: Surprising Indirect Gap of Single Layer WSe2

    KAUST Repository

    Zhang, Chendong

    2015-09-21

    By using a comprehensive form of scanning tunneling spectroscopy, we have revealed detailed quasi-particle electronic structures in transition metal dichalcogenides, including the quasi-particle gaps, critical point energy locations, and their origins in the Brillouin zones. We show that single-layer WSe2 surprisingly has an indirect quasi-particle gap with the conduction band minimum located at the Q-point (instead of K), albeit the two states are nearly degenerate. We have further observed rich quasi-particle electronic structures of transition metal dichalcogenides as a function of atomic structures and spin-orbit couplings. Such a local probe for detailed electronic structures in conduction and valence bands will be ideal to investigate how electronic structures of transition metal dichalcogenides are influenced by variations of local environment.

  13. Probing Critical Point Energies of Transition Metal Dichalcogenides: Surprising Indirect Gap of Single Layer WSe2

    KAUST Repository

    Zhang, Chendong; Chen, Yuxuan; Johnson, Amber; Li, Ming-yang; Li, Lain-Jong; Mende, Patrick C.; Feenstra, Randall M.; Shih, Chih Kang

    2015-01-01

    By using a comprehensive form of scanning tunneling spectroscopy, we have revealed detailed quasi-particle electronic structures in transition metal dichalcogenides, including the quasi-particle gaps, critical point energy locations, and their origins in the Brillouin zones. We show that single-layer WSe2 surprisingly has an indirect quasi-particle gap with the conduction band minimum located at the Q-point (instead of K), albeit the two states are nearly degenerate. We have further observed rich quasi-particle electronic structures of transition metal dichalcogenides as a function of atomic structures and spin-orbit couplings. Such a local probe for detailed electronic structures in conduction and valence bands will be ideal to investigate how electronic structures of transition metal dichalcogenides are influenced by variations of local environment.

  14. Surprising judgments about robot drivers: Experiments on rising expectations and blaming humans

    Directory of Open Access Journals (Sweden)

    Peter Danielson

    2015-05-01

    Full Text Available N-Reasons is an experimental Internet survey platform designed to enhance public participation in applied ethics and policy. N-Reasons encourages individuals to generate reasons to support their judgments, and groups to converge on a common set of reasons for and against various issues. In the Robot Ethics Survey, some of the reasons contributed revealed surprising judgments about autonomous machines. Presented with a version of the trolley problem with an autonomous train as the agent, participants gave unexpected answers, revealing high expectations for the autonomous machine and shifting blame from the automated device to the humans in the scenario. Further experiments with a standard pair of human-only trolley problems refine these results: participants show the same high expectations even when no autonomous machine is involved, yet human bystanders are blamed only in the machine case. A third experiment explicitly aimed at responsibility for driverless cars confirms our findings about shifting blame in the case of autonomous machine agents. We conclude, methodologically, that both results point to the power of an experimental survey-based approach to public participation for exploring surprising assumptions and judgments in applied ethics. However, both results also support using caution when interpreting survey results in ethics, demonstrating the importance of qualitative data in providing further context for evaluating judgments revealed by surveys. On the ethics side, the result about shifting blame to humans interacting with autonomous machines suggests caution about the unintended consequences of intuitive principles requiring human responsibility. http://dx.doi.org/10.5324/eip.v9i1.1727

  15. Hillslope, river, and Mountain: some surprises in Landscape evolution (Ralph Alger Bagnold Medal Lecture)

    Science.gov (United States)

    Tucker, G. E.

    2012-04-01

    Geomorphology, like the rest of geoscience, has always had two major themes: a quest to understand the earth's history and 'products' - its landscapes and seascapes - and, in parallel, a quest to understand its formative processes. This dualism is manifest in the remarkable career of R. A. Bagnold, who was inspired by landforms such as dunes, and dedicated to understanding the physical processes that shaped them. His legacy inspires us to emulate two principles at the heart of his contributions: the benefits of rooting geomorphic theory in basic physics, and the importance of understanding geomorphic systems in terms of simple equations framed around energy or force. Today, following Bagnold's footsteps, the earth-surface process community is engaged in a quest to build, test, and refine an ever-improving body of theory to describe our planet's surface and its evolution. In this lecture, I review a small sample of some of the fruits of that quest, emphasizing the value of surprises encountered along the way. The first example involves models of long-term river incision into bedrock. When the community began to grapple with how to represent this process mathematically, several different ideas emerged. Some were based on the assumption that sediment transport is the limiting factor; others assumed that hydraulic stress on rock is the key, while still others treated rivers as first-order 'reactors.' Thanks in part to advances in digital topography and numerical computing, the predictions of these models can be tested using natural-experiment case studies. Examples from the King Range, USA, the Central Apennines, Italy, and the fold-thrust belt of Taiwan, illustrate that independent knowledge of history and/or tectonics makes it possible to quantify how the rivers have responded to external forcing. 
Some interesting surprises emerge, such as the finding that the relief-uplift relationship can be highly nonlinear in a steady-state landscape because of grain-entrainment thresholds ...

  16. Corn Ethanol: The Surprisingly Effective Route for Natural Gas Consumption in the Transportation Sector

    Energy Technology Data Exchange (ETDEWEB)

    Szybist, James P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Curran, Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-05-01

    Proven reserves and production of natural gas (NG) in the United States have increased dramatically in the last decade, due largely to the commercialization of hydraulic fracturing. This has led to a plentiful supply of NG, resulting in a significantly lower cost on a gallon-of-gasoline-equivalent (GGE) basis. Additionally, NG is a domestic, non-petroleum source of energy that is less carbon-intensive than coal or petroleum products, and thus can lead to lower greenhouse gas emissions. Because of these factors, there is a desire to increase the use of NG in the transportation sector in the United States (U.S.). However, using NG directly in the transportation sector requires that several non-trivial challenges be overcome. One of these issues is the fueling infrastructure: in 2014 there were only 1,375 NG fueling stations in the U.S. compared to 152,995 fueling stations for gasoline. Additionally, there are very few light-duty vehicles that can consume this fuel directly, as dedicated or bi-fuel options. For example, in model year 2013, Honda was the only OEM to offer a dedicated CNG sedan, while a number of others offered CNG options as a preparation package for LD trucks and vans. In total, there were 11 vehicle models available in 2013 that could use natural gas directly. There are additional potential issues associated with NG vehicles as well. Compared to commercial refueling stations, the at-home refueling time for NG vehicles is substantial – a result of the small compressors used for home refilling. Additionally, the methane emissions from both refueling (leakage) and from tailpipe emissions (slip) from these vehicles can add to their GHG footprint, and while these emissions are not currently regulated it could be a barrier in the future, especially in scenarios with broad-scale adoption of CNG vehicles. However, NG consumption already plays a large role in other sectors of the economy, including some that are important to

  17. Triangular Numbers, Gaussian Integers, and KenKen

    Science.gov (United States)

    Watkins, John J.

    2012-01-01

    Latin squares form the basis for the recreational puzzles sudoku and KenKen. In this article we show how useful several ideas from number theory are in solving a KenKen puzzle. For example, the simple notion of triangular number is surprisingly effective. We also introduce a variation of KenKen that uses the Gaussian integers in order to…

  18. Cerebral metastasis masquerading as cerebritis: A case of misguiding history and radiological surprise!

    Directory of Open Access Journals (Sweden)

    Ashish Kumar

    2013-01-01

    Full Text Available Cerebral metastases usually have a characteristic radiological appearance. They can be differentiated rather easily from any infective etiology. Similarly, a positive medical history also guides the neurosurgeon towards the possible diagnosis and adds to the diagnostic armamentarium. However, occasionally, similarities on imaging may be encountered where even the history can lead us in the wrong direction and tends to bias the clinician. We report a case of a 40-year-old female with a history of mastoidectomy for otitis media presenting to us with a space-occupying lesion in the right parietal region, which was thought pre-operatively to be an abscess with surrounding cerebritis. Surprisingly, histopathology proved it to be a metastatic adenocarcinoma. Hence, while a ring-enhancing lesion may be a high-grade neoplasm, metastasis, or abscess, significant gyral enhancement, a feature of cerebritis, is less often linked with a neoplastic etiology. This may lead to delayed diagnosis, incorrect prognostication and treatment in patients with a coincidental suggestive history of infection. We review the literature and highlight the key points that help to differentiate an infective from a neoplastic pathology, which may look similar at times.

  19. Beyond interests and institutions: US health policy reform and the surprising silence of big business.

    Science.gov (United States)

    Smyrl, Marc E

    2014-02-01

    Interest-based arguments do not provide satisfying explanations for the surprising reticence of major US employers to take a more active role in the debate surrounding the 2010 Patient Protection and Affordable Care Act (ACA). Through focused comparison with the Bismarckian systems of France and Germany, on the one hand, and with the 1950s and 1960s in the United States, on the other, this article concludes that while institutional elements do account for some of the observed behavior of big business, a necessary complement to this is a fuller understanding of the historically determined legitimating ideology of US firms. From the era of the "corporate commonwealth," US business inherited the principles of private welfare provision and of resistance to any expansion of government control. Once complementary, these principles are now mutually exclusive: employer-provided health insurance increasingly is possible only at the cost of ever-increasing government subsidy and regulation. Paralyzed by the uncertainty that followed from this clash of legitimate ideas, major employers found themselves unable to take a coherent and unified stand for or against the law. As a consequence, they failed either to oppose it successfully or to secure modifications to it that would have been useful to them.

  20. Surprise responses in the human brain demonstrate statistical learning under high concurrent cognitive demand

    Science.gov (United States)

    Garrido, Marta Isabel; Teng, Chee Leong James; Taylor, Jeremy Alexander; Rowe, Elise Genevieve; Mattingley, Jason Brett

    2016-06-01

    The ability to learn about regularities in the environment and to make predictions about future events is fundamental for adaptive behaviour. We have previously shown that people can implicitly encode statistical regularities and detect violations therein, as reflected in neuronal responses to unpredictable events that carry a unique prediction error signature. In the real world, however, learning about regularities will often occur in the context of competing cognitive demands. Here we asked whether learning of statistical regularities is modulated by concurrent cognitive load. We compared electroencephalographic metrics associated with responses to pure-tone sounds with frequencies sampled from narrow or wide Gaussian distributions. We showed that outliers evoked a larger response than tones in the centre of the stimulus distribution (i.e., an effect of surprise) and that this difference was greater for physically identical outliers in the narrow than in the broad distribution. These results demonstrate an early neurophysiological marker of the brain's ability to implicitly encode complex statistical structure in the environment. Moreover, we manipulated concurrent cognitive load by having participants perform a visual working memory task while listening to these streams of sounds. We again observed greater prediction error responses in the narrower distribution under both low and high cognitive load. Furthermore, there was no reliable reduction in prediction error magnitude under high relative to low cognitive load. Our findings suggest that statistical learning is not a capacity-limited process, and that it proceeds automatically even when cognitive resources are taxed by concurrent demands.
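    The key manipulation, an identical outlier being more surprising under a narrower distribution, follows directly from the Gaussian log-likelihood. A minimal sketch with hypothetical stimulus values (the abstract does not give the actual tone frequencies or distribution widths used):

```python
import math

def surprise(x, mu, sigma):
    """Shannon surprise (negative log-likelihood, in nats) of a tone of
    frequency x under a Gaussian belief N(mu, sigma^2)."""
    return 0.5 * ((x - mu) / sigma) ** 2 + math.log(sigma * math.sqrt(2 * math.pi))

# Hypothetical parameters: the same outlier tone heard against a narrow
# and a wide frequency distribution centred on the same mean.
mu, outlier = 500.0, 900.0   # Hz
narrow, wide = 50.0, 150.0   # standard deviations, Hz

print(surprise(outlier, mu, narrow))  # the identical tone is far more
print(surprise(outlier, mu, wide))    # surprising under the narrow belief
```

    The quadratic term dominates: the same 400 Hz deviation is 8 standard deviations under the narrow distribution but under 3 under the wide one, mirroring the larger prediction-error response the study reports for the narrow condition.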

  1. Pseudohalide (SCN(-))-Doped MAPbI3 Perovskites: A Few Surprises.

    Science.gov (United States)

    Halder, Ansuman; Chulliyil, Ramya; Subbiah, Anand S; Khan, Tuhin; Chattoraj, Shyamtanu; Chowdhury, Arindam; Sarkar, Shaibal K

    2015-09-03

    Pseudohalide thiocyanate anion (SCN(-)) has been used as a dopant in a methylammonium lead tri-iodide (MAPbI3) framework, aiming for its use as an absorber layer for photovoltaic applications. The substitution of the SCN(-) pseudohalide anion, as verified using Fourier transform infrared (FT-IR) spectroscopy, results in a comprehensive effect on the optical properties of the original material. Photoluminescence measurements at room temperature reveal a significant enhancement in the emission quantum yield of MAPbI3-x(SCN)x as compared to MAPbI3, suggestive of the suppression of nonradiative channels. This increased intensity is attributed to a highly edge-specific emission from MAPbI3-x(SCN)x microcrystals as revealed by photoluminescence microscopy. Fluorescence lifetime imaging measurements further established contrasting carrier recombination dynamics for grain boundaries and the bulk of the doped material. Spatially resolved emission spectroscopy on individual microcrystals of MAPbI3-x(SCN)x reveals that the optical bandgap and density of states at various (local) nanodomains are also nonuniform. Surprisingly, several (local) emissive regions within MAPbI3-x(SCN)x microcrystals are found to be optically unstable under photoirradiation, and display unambiguous temporal intermittency in emission (blinking), which is extremely unusual and intriguing. We find diverse blinking behaviors for the undoped MAPbI3 crystals as well, which leads us to speculate that blinking may be a common phenomenon for most hybrid perovskite materials.

  2. From Lithium-Ion to Sodium-Ion Batteries: Advantages, Challenges, and Surprises.

    Science.gov (United States)

    Nayak, Prasant Kumar; Yang, Liangtao; Brehm, Wolfgang; Adelhelm, Philipp

    2018-01-02

    Mobile and stationary energy storage by rechargeable batteries is a topic of broad societal and economic relevance. Lithium-ion battery (LIB) technology is at the forefront of the development, but a massively growing market will likely put severe pressure on resources and supply chains. Recently, sodium-ion batteries (SIBs) have been reconsidered with the aim of providing a lower-cost alternative that is less susceptible to resource and supply risks. On paper, the replacement of lithium by sodium in a battery seems straightforward at first, but unpredictable surprises are often found in practice. What happens when replacing lithium by sodium in electrode reactions? This review provides a state-of-the-art overview of the redox behavior of materials when used as electrodes in lithium-ion and sodium-ion batteries, respectively. Advantages and challenges related to the use of sodium instead of lithium are discussed.

  3. Effect of Temperature Shock and Inventory Surprises on Natural Gas and Heating Oil Futures Returns

    Science.gov (United States)

    Hu, John Wei-Shan; Lin, Chien-Yu

    2014-01-01

    The aim of this paper is to examine the impact of temperature shocks on both near-month and far-month natural gas and heating oil futures returns by extending the weather and storage models of previous studies. Several notable findings from the empirical studies are presented. First, the expected temperature shock significantly and positively affects both the near-month and far-month natural gas and heating oil futures returns. Next, temperature shock has a significant effect on both the conditional mean and volatility of natural gas and heating oil prices. The results indicate that expected inventory surprises significantly and negatively affect the far-month natural gas futures returns. Moreover, the volatility of natural gas futures returns is higher on Thursdays, and that of near-month heating oil futures returns is higher on Wednesdays, than on other days. Finally, it is found that the storage announcement for natural gas significantly affects near-month and far-month natural gas futures returns. Furthermore, both natural gas and heating oil futures returns are affected more by the weighted average temperature reported by multiple weather reporting stations than by that reported by a single weather reporting station. PMID:25133233

  4. Surprises from the resolution of operator mixing in N=4 SYM

    International Nuclear Information System (INIS)

    Bianchi, Massimo; Rossi, Giancarlo; Stanev, Yassen S.

    2004-01-01

    We reexamine the problem of operator mixing in N=4 SYM. Particular attention is paid to the correct definition of composite gauge invariant local operators, which is necessary for the computation of their anomalous dimensions beyond lowest order. As an application we reconsider the case of operators with naive dimension Δ_0 = 4, already studied in the literature. Stringent constraints from the resummation of logarithms in power behaviours are exploited and the role of the generalized N=4 Konishi anomaly in the mixing with operators involving fermions is discussed. A general method for the explicit (numerical) resolution of the operator mixing and the computation of anomalous dimensions is proposed. We then resolve the order g^2 mixing for the 15 (purely scalar) singlet operators of naive dimension Δ_0 = 6. Rather surprisingly we find one isolated operator which has a vanishing anomalous dimension up to order g^4, belonging to an apparently long multiplet. We also solve the order g^2 mixing for the 26 operators belonging to the representation 20' of SU(4). We find an operator with the same one-loop anomalous dimension as the Konishi multiplet.

  5. A conceptual geochemical model of the geothermal system at Surprise Valley, CA

    Science.gov (United States)

    Fowler, Andrew P. G.; Ferguson, Colin; Cantwell, Carolyn A.; Zierenberg, Robert A.; McClain, James; Spycher, Nicolas; Dobson, Patrick

    2018-03-01

    Characterizing the geothermal system at Surprise Valley (SV), northeastern California, is important for determining the sustainability of the energy resource, and mitigating hazards associated with hydrothermal eruptions that last occurred in 1951. Previous geochemical studies of the area attempted to reconcile different hot spring compositions on the western and eastern sides of the valley using scenarios of dilution, equilibration at low temperatures, surface evaporation, and differences in rock type along flow paths. These models were primarily supported using classical geothermometry methods, and generally assumed that fluids in the Lake City mud volcano area on the western side of the valley best reflect the composition of a deep geothermal fluid. In this contribution, we address controls on hot spring compositions using a different suite of geochemical tools, including optimized multicomponent geochemistry (GeoT) models, hot spring fluid major and trace element measurements, mineralogical observations, and stable isotope measurements of hot spring fluids and precipitated carbonates. We synthesize the results into a conceptual geochemical model of the Surprise Valley geothermal system, and show that high-temperature (quartz, Na/K, Na/K/Ca) classical geothermometers fail to predict maximum subsurface temperatures because fluids re-equilibrated at progressively lower temperatures during outflow, including in the Lake City area. We propose a model where hot spring fluids originate as a mixture between a deep thermal brine and modern meteoric fluids, with a seasonally variable mixing ratio. The deep brine has deuterium values at least 3 to 4‰ lighter than any known groundwater or high-elevation snow previously measured in and adjacent to SV, suggesting it was recharged during the Pleistocene when meteoric fluids had lower deuterium values. The deuterium values and compositional characteristics of the deep brine have only been identified in thermal springs and

  6. Explanatory models of health and disease: surprises from within the former Soviet Union

    Directory of Open Access Journals (Sweden)

    Tatiana I Andreeva

    2013-06-01

    Full Text Available Extract The review of anthropological theories as applied to public health by Jennifer J. Carroll (Carroll, 2013), published in this issue of TCPHEE, made me recollect my first and most surprising discoveries of how differently the same things can be understood in different parts of the world. Probably less unexpectedly, these impressions concern substance abuse and addiction behaviors, similarly to many examples deployed by Jennifer J. Carroll. The first of these events happened soon after the break-up of the Soviet Union, when some of the most active people from the West rushed to discover what was going on behind the opening iron curtain. A director of an addiction clinic, who had just come into contact with a Dutch counterpart, invited me to join the collaboration and the innovation process he planned to launch. As a participant in the exchange program started within this collaboration, I had an opportunity to discover how addictive behaviors were understood and explained in books (English, 1961; Kooyman, 1992; Viorst, 1986) recommended by the colleagues in the Netherlands and, as I could observe with my own eyes, addressed in everyday practice. This was a jaw-dropping contrast to what I had learnt at a Soviet medical university and some post-graduate courses, where all the diseases related to alcohol, tobacco, or drug abuse were considered predominantly a result of the substance intake. In the Soviet discourse, the intake itself was understood as 'willful and deliberate' or immoral behavior which, in some cases, was to be rectified in prison-like treatment facilities. In the West, quite the opposite, substance abuse was seen rather as a consequence of a constellation of life-course adversities thoroughly considered by developmental psychology. This approach was obviously deeply ingrained in how practitioners diagnosed and treated their patients.

  7. Metaproteomics of cellulose methanisation under thermophilic conditions reveals a surprisingly high proteolytic activity.

    Science.gov (United States)

    Lü, Fan; Bize, Ariane; Guillot, Alain; Monnet, Véronique; Madigou, Céline; Chapleur, Olivier; Mazéas, Laurent; He, Pinjing; Bouchez, Théodore

    2014-01-01

    Cellulose is the most abundant biopolymer on Earth. Optimising energy recovery from this renewable but recalcitrant material is a key issue. The metaproteome expressed by thermophilic communities during cellulose anaerobic digestion was investigated in microcosms. By multiplying the analytical replicates (65 protein fractions analysed by MS/MS) and relying solely on public protein databases, more than 500 non-redundant protein functions were identified. The taxonomic community structure as inferred from the metaproteomic data set was in good overall agreement with 16S rRNA gene tag pyrosequencing and fluorescent in situ hybridisation analyses. Numerous functions related to cellulose and hemicellulose hydrolysis and fermentation catalysed by bacteria related to Caldicellulosiruptor spp. and Clostridium thermocellum were retrieved, indicating their key role in the cellulose-degradation process and also suggesting their complementary action. Despite the abundance of acetate as a major fermentation product, key methanogenesis enzymes from the acetoclastic pathway were not detected. In contrast, enzymes from the hydrogenotrophic pathway affiliated to Methanothermobacter were almost exclusively identified for methanogenesis, suggesting a syntrophic acetate oxidation process coupled to hydrogenotrophic methanogenesis. Isotopic analyses confirmed the high dominance of the hydrogenotrophic methanogenesis. Very surprising was the identification of an abundant proteolytic activity from Coprothermobacter proteolyticus strains, probably acting as scavenger and/or predator performing proteolysis and fermentation. Metaproteomics thus appeared as an efficient tool to unravel and characterise metabolic networks as well as ecological interactions during methanisation bioprocesses. More generally, metaproteomics provides direct functional insights at a limited cost, and its attractiveness should increase in the future as sequence databases are growing exponentially.

  8. [Fall from height--surprising autopsy diagnosis in primarily unclear initial situations].

    Science.gov (United States)

    Schyma, Christian; Doberentz, Elke; Madea, Burkhard

    2012-01-01

    External post-mortem examination and first police assessments are often not consistent with subsequent autopsy results. This is all the more surprising the more serious the injuries found at autopsy are. Such discrepancies result especially from an absence of gross external injuries, as demonstrated by four examples. A 42-year-old, externally uninjured male was found at night in a helpless condition in the street and died in spite of resuscitation. Autopsy showed severe polytrauma with traumatic brain injury and lesions of the thoracic and abdominal organs. A jump from the third floor was identified as the cause. At dawn, a twenty-year-old male was found dead on the grounds of the adjacent house. Because of the blood-covered head, the police assumed a traumatic head injury caused by a blow. The external examination revealed only abrasions on the forehead and, to a minor extent, on the back. At autopsy, a midfacial fracture, trauma of the thorax and abdomen, and fractures of the spine and pelvis were detected. Subsequent investigations showed that the man, intoxicated with alcohol, had fallen from the flat roof of a multistoried house. A 77-year-old man was found unconscious on his terrace in the daytime; a cerebral seizure was assumed. He was transferred to emergency care, where he died. The corpse was externally inconspicuous. Autopsy revealed serious traumatic injuries of the brain, thorax, abdomen and pelvis, which could be explained by a fall from the balcony. A 47-year-old homeless person without any external injuries was found dead in a barn. Alcohol intoxication was assumed. At autopsy, severe injuries of the brain and cervical spine were found, which were the result of a fall from a height of 5 m. On the basis of an external post-mortem examination alone, gross blunt force trauma cannot be reliably excluded.

  9. Virtual Volatility, an Elementary New Concept with Surprising Stock Market Consequences

    Science.gov (United States)

    Prange, Richard; Silva, A. Christian

    2006-03-01

    Textbook investors start by predicting the future price distribution (PDF) of a candidate stock (or portfolio) at horizon T, e.g. a year hence. A (log)normal PDF with center (= drift = expected return) μT and width (= volatility) σT is often assumed on Central Limit Theorem grounds, i.e. by a random walk of daily (log)price increments δs. The standard deviation of historical (ex post) δs's is usually a fair predictor of the coming year's (ex ante) stdev(δs) = σ_daily, but the historical mean E(δs) at best roughly bounds the true, to-be-predicted drift: μ_true·T ≈ μ_hist·T ± σ_hist·T. Textbooks take a PDF with σ ≈ σ_daily and μ as somehow known, as if accurate predictions of μ were possible. It is elementary and presumably new to argue that an average of PDFs over a range of μ values should be taken, e.g. an average over forecasts by different analysts. We estimate that this leads to a PDF with a 'virtual' volatility σ ≈ 1.3 σ_daily. It is indeed clear that uncertainty in the value of the expected-gain parameter increases the risk of investment in that security by most measures; e.g. Sharpe's ratio μT/σT will be 30% smaller because of this effect. It is significant and surprising that there are investments which benefit from this 30% virtual increase in the volatility
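The widening effect can be checked numerically: averaging normal PDFs over a range of drifts yields a mixture whose standard deviation is √(σ² + σ_μ²). A sketch follows; the σ_μ ≈ 0.83 σ figure is an assumption chosen so the result reproduces the abstract's ≈1.3 factor, not a value taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.20              # assumed annual volatility (illustrative)
sigma_mu = 0.83 * sigma   # assumed uncertainty in the drift (hypothetical)

# Marginal return distribution when the drift itself is uncertain:
# draw an unknown drift mu first, then a return around that drift.
# The resulting mixture PDF is wider than any single-mu normal.
mu = rng.normal(0.05, sigma_mu, size=1_000_000)
r = rng.normal(mu, sigma)

virtual_ratio = r.std() / sigma
# analytically: sqrt(sigma**2 + sigma_mu**2) / sigma = sqrt(1 + 0.83**2) ≈ 1.30
```

With this amount of drift uncertainty, a Sharpe-type ratio μT/σT drops by roughly the 30% the abstract cites, because the denominator grows by the same virtual factor.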

  10. Technological monitoring radar: a weak signals interpretation tool for the identification of strategic surprises

    Directory of Open Access Journals (Sweden)

    Adalton Ozaki

    2011-07-01

    Full Text Available In the current competitive scenario, marked by rapid and constant change, it is vital that companies actively monitor the business environment in search of signs which might anticipate changes. This study proposes and discusses a tool called the Technological Monitoring Radar, which addresses the following query: “How can a company systematically monitor the environment and capture signs that anticipate opportunities and threats concerning a particular technology?”. The literature review covers Competitive Intelligence, Technological Intelligence, Environmental Analysis and Anticipative Monitoring. Based on a critical analysis of the literature, the Technological Monitoring Radar is proposed, comprising five environments to be monitored (political, economic, technological, social and competition), each with key topics for analysis. To exemplify the use of the tool, it is applied to the smartphone segment in an exclusively reflexive manner, without the participation of a specific company. One of the suggestions for future research is precisely the application of the proposed methodology in an actual company. Despite the limitation of this being a theoretical study, the example demonstrated the tool's applicability. The radar proved to be very useful for a company that needs to monitor the environment in search of signs of change. This study's main contribution is to relate different fields of study (technological intelligence, environmental analysis and anticipative monitoring) and different approaches to provide a practical tool that allows a manager to identify and better visualize opportunities and threats, thus avoiding strategic surprises in the technological arena. Keywords: Technological monitoring. Technological intelligence. Competitive intelligence. Weak signals.

  11. The genome of Pelobacter carbinolicus reveals surprising metabolic capabilities and physiological features

    Energy Technology Data Exchange (ETDEWEB)

    Aklujkar, Muktak [University of Massachusetts, Amherst; Haveman, Shelley [University of Massachusetts, Amherst; DiDonatoJr, Raymond [University of Massachusetts, Amherst; Chertkov, Olga [Los Alamos National Laboratory (LANL); Han, Cliff [Los Alamos National Laboratory (LANL); Land, Miriam L [ORNL; Brown, Peter [University of Massachusetts, Amherst; Lovley, Derek [University of Massachusetts, Amherst

    2012-01-01

    Background: The bacterium Pelobacter carbinolicus is able to grow by fermentation, syntrophic hydrogen/formate transfer, or electron transfer to sulfur from short-chain alcohols, hydrogen or formate; it does not oxidize acetate and is not known to ferment any sugars or grow autotrophically. The genome of P. carbinolicus was sequenced in order to understand its metabolic capabilities and physiological features in comparison with its relatives, acetate-oxidizing Geobacter species. Results: Pathways were predicted for catabolism of known substrates: 2,3-butanediol, acetoin, glycerol, 1,2-ethanediol, ethanolamine, choline and ethanol. Multiple isozymes of 2,3-butanediol dehydrogenase, ATP synthase and [FeFe]-hydrogenase were differentiated and assigned roles according to their structural properties and genomic contexts. The absence of asparagine synthetase and the presence of a mutant tRNA for asparagine encoded among RNA-active enzymes suggest that P. carbinolicus may make asparaginyl-tRNA in a novel way. Catabolic glutamate dehydrogenases were discovered, implying that the tricarboxylic acid (TCA) cycle can function catabolically. A phosphotransferase system for uptake of sugars was discovered, along with enzymes that function in 2,3-butanediol production. Pyruvate: ferredoxin/flavodoxin oxidoreductase was identified as a potential bottleneck in both the supply of oxaloacetate for oxidation of acetate by the TCA cycle and the connection of glycolysis to production of ethanol. The P. carbinolicus genome was found to encode autotransporters and various appendages, including three proteins with similarity to the geopilin of electroconductive nanowires. Conclusions: Several surprising metabolic capabilities and physiological features were predicted from the genome of P. carbinolicus, suggesting that it is more versatile than anticipated.

  12. A surprisingly simple correlation between the classical and quantum structural networks in liquid water

    Science.gov (United States)

    Hamm, Peter; Fanourgakis, George S.; Xantheas, Sotiris S.

    2017-08-01

    Nuclear quantum effects in liquid water have profound implications for several of its macroscopic properties related to the structure, dynamics, spectroscopy, and transport. Although several of water's macroscopic properties can be reproduced by classical descriptions of the nuclei using interaction potentials effectively parameterized for a narrow range of its phase diagram, a proper account of the nuclear quantum effects is required to ensure that the underlying molecular interactions are transferable across a wide temperature range covering different regions of that diagram. When performing an analysis of the hydrogen-bonded structural networks in liquid water resulting from the classical (class) and quantum (qm) descriptions of the nuclei with two interaction potentials that are at the two opposite ends of the range in describing quantum effects, namely the flexible, pair-wise additive q-TIP4P/F, and the flexible, polarizable TTM3-F, we found that the (class) and (qm) results can be superimposed over the temperature range T = 250-350 K using a surprisingly simple, linear scaling of the two temperatures according to T(qm) = α T(class) + ΔT, where α = 0.99 and ΔT = -6 K for q-TIP4P/F and α = 1.24 and ΔT = -64 K for TTM3-F. This simple relationship suggests that the structural networks resulting from the quantum and classical treatment of the nuclei with those two very different interaction potentials are essentially similar to each other over this extended temperature range once a model-dependent linear temperature scaling law is applied.

  13. A post-genomic surprise. The molecular reinscription of race in science, law and medicine.

    Science.gov (United States)

    Duster, Troy

    2015-03-01

    The completion of the first draft of the Human Genome Map in 2000 was widely heralded as the promise and future of genetics-based medicines and therapies - so much so that pundits began referring to the new century as 'The Century of Genetics'. Moreover, definitive assertions about the overwhelming similarities of all humans' DNA (99.9 per cent) by the leaders of the Human Genome Project were trumpeted as the end of racial thinking about racial taxonomies of human genetic differences. But the first decade of the new century brought unwelcome surprises. First, gene therapies turned out to be far more complicated than anyone had anticipated - and instead the pharmaceutical industry turned to a focus on drugs that might be 'related' to population differences based upon genetic markers. While the language of 'personalized medicine' dominated this frame, research on racially and ethnically designated populations' differential responsiveness to drugs dominated the empirical work in the field. Ancestry testing and 'admixture research' would play an important role in a new kind of molecular reification of racial categories. Moreover, the capacity of the super-computer to map differences reverberated into personal identification that would affect both the criminal justice system and forensic science, and generate new levels of concern about personal privacy. Social scientists in general, and sociologists in particular, have been caught short by these developments - relying mainly on assertions that racial categories are socially constructed, regionally and historically contingent, and politically arbitrary. While these assertions are true, the imprimatur of scientific legitimacy has shifted the burden, since now 'admixture research' can claim that its results get at the 'reality' of human differentiation, not the admittedly flawed social constructions of racial categories. Yet what was missing from this framing of the problem: 'admixture research' is itself based upon socially

  14. Modern Sedimentation along the SE Bangladesh Coast Reveals Surprisingly Low Accumulation Rates

    Science.gov (United States)

    McHugh, C.; Mustaque, S.; Mondal, D. R.; Akhter, S. H.; Iqbal, M.

    2016-12-01

    Recent sediments recovered along the SE coast of Bangladesh, from Teknaf to Cox's Bazar, together with drainage basin analyses, reveal sediment sources and very low sedimentation rates of 1 mm/year. These rates are surprisingly low given that this coast is adjacent to the Ganges-Brahmaputra Delta, with a yearly discharge of 1 GT. The Teknaf anticline (elevation 200 m), part of the western Burma fold-thrust belt, dominates the topography, extending across and along the Teknaf peninsula. It is thought to have been evolving since the Miocene (Alam et al. 2003; Allen et al. 2008). Presently the anticline foothills on the west are flanked by uplifted terraces, the youngest linked to coseismic displacement during the 1762 earthquake (Mondal et al. 2015), and a narrow beach 60-200 m in width. Petrography, semi-quantitative bulk mineralogy and SEM/EDX analyses were conducted on sediments recovered along the west coast from 1-4 m deep trenches and three 4-8 m deep drill holes. GIS mapping of drainage basins and quartz-feldspar-lithic (QFL) ternary plots based on grain counting show mixing of sediments from multiple sources: Himalayan provenance of metamorphic and igneous origin (garnet [mostly almandine], tourmaline, rutile, kyanite, zircon, sillimanite and clinopyroxene), similar to Uddin et al. (2007); Brahmaputra provenance of igneous and metamorphic origin (amphibole, epidote, plagioclase 40% Na and 60% Ca, apatite, ilmenite, magnetite, Cr-spinel and garnet [mostly grossular]), as indicated by Garzanti et al. (2010) and Rahman et al. (2016); and Burmese sources (cassiterite and wolframite) (Zaw 1990; Searle et al. 2007). The low sedimentation rates are the result of two main factors: 1. Strong longshore currents from the south-east that interact with high tidal ranges, as evidenced by the morphology of sand waves and ridge-and-runnel landforms along the beach. 2. Streams draining the Teknaf anticline are dry during the winter, and during summer monsoon rains the sediments bypass the narrow

  15. Earthquake number forecasts testing

    Science.gov (United States)

    Kagan, Yan Y.

    2017-10-01

    We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are especially required to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for the catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be shown to be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study upper statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values for the skewness and the kurtosis increase for the smaller magnitude threshold and increase with even greater intensity for small temporal subdivision of catalogues. 
    The Poisson distribution for large rate values approaches the Gaussian law; therefore its skewness
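The overdispersion argument above is easy to illustrate: a negative-binomial distribution matched to a Poisson's mean has strictly larger variance and skewness. A sketch using scipy, with illustrative parameter values that are not fitted to either catalogue:

```python
import numpy as np
from scipy import stats

lam = 10.0                 # illustrative mean event count per time window
pois = stats.poisson(lam)

# NBD with the same mean: mean = n*(1-p)/p, variance = mean/p > mean,
# so 1/p plays the role of an overdispersion factor.
n = 5.0
p = n / (n + lam)          # chosen so the NBD mean equals lam exactly
nbd = stats.nbinom(n, p)

assert np.isclose(nbd.mean(), pois.mean())
overdispersion = nbd.var() / pois.var()   # = 1/p = 3 for these parameters
pois_skew = pois.stats(moments='s')       # = 1/sqrt(lam), small
nbd_skew = nbd.stats(moments='s')         # exceeds the Poisson skewness
```

This mirrors the behaviour the abstract reports: empirical earthquake counts show variance and upper moments in excess of Poisson values, which the NBD's second parameter can absorb.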

  16. Surprise! Infants Consider Possible Bases of Generalization for a Single Input Example

    Science.gov (United States)

    Gerken, LouAnn; Dawson, Colin; Chatila, Razanne; Tenenbaum, Josh

    2015-01-01

    Infants have been shown to generalize from a small number of input examples. However, existing studies allow two possible means of generalization. One is via a process of noting similarities shared by several examples. Alternatively, generalization may reflect an implicit desire to explain the input. The latter view suggests that generalization…

  17. A Western Professor in Singapore: Cross-Cultural Readings, Expectations, and Surprises in the Classroom

    Science.gov (United States)

    Freeman, Bradley

    2015-01-01

    The educational field is seeing an increased growth in English-language teaching opportunities abroad. This situation gives rise to a number of interesting research inquiries. For example, can teaching experience in one cultural context translate well into another? What do studies tell us about cross-cultural awareness and effectiveness of those…

  18. Surprising results from a search for effective disinfectants for Tobacco mosaic virus-contaminated tools

    Science.gov (United States)

    Tobacco mosaic virus (TMV) and four other tobamoviruses infected multiple petunia cultivars without producing obvious viral symptoms. A single cutting event on a TMV-infected plant was sufficient for transmission to many plants subsequently cut with the same clippers. A number of 'old standbys' an...

  19. 3rd year final contractor report for: U.S. Department of Energy Stewardship Science Academic Alliances Program Project Title: Detailed Measurements of Rayleigh-Taylor Mixing at Large and Small Atwood Numbers

    International Nuclear Information System (INIS)

    Malcolm J. Andrews

    2006-01-01

    This project had two major tasks. Task 1: the construction of a new air/helium facility to collect detailed measurements of Rayleigh-Taylor (RT) mixing at high Atwood number, and the distribution of these data to LLNL, LANL, and Alliance members for code validation and design purposes. Task 2: the collection of initial condition data from the new air/helium facility, for use in validation of RT simulation codes at LLNL and LANL. This report describes work done in the last twelve (12) months of the project, and also contains a summary of the complete work done over the three (3) year life of the project. As of April 1, 2006, the air/helium facility (Task 1) is complete, and extensive testing and validation of diagnostics has been performed. The initial condition studies (Task 2) are also complete. Detailed experiments with air/helium at Atwood numbers up to 0.1 have been completed, as well as at Atwood numbers of 0.25. Within the last three (3) months we have been able to successfully run the facility at Atwood numbers of 0.5. The progress matches the project plan, as does the budget. We have finished the initial condition studies using the water channel, and this work has been accepted for publication in the Journal of Fluid Mechanics (the top fluid mechanics journal). Mr. Nick Mueschke and Mr. Wayne Kraft are continuing with their studies to obtain PhDs in the same field, and will also continue their collaboration visits to LANL and LLNL. Over its three (3) year life the project has supported two (2) Ph.D.'s and three (3) M.S.'s, and produced nine (9) international journal publications, twenty-four (24) conference publications, and numerous other reports. The highlight of the project has been our close collaboration with LLNL (Dr. Oleg Schilling) and LANL (Drs. Dimonte, Ristorcelli, Gore, and Harlow)

  20. A multiwavelength survey of H I-excess galaxies with surprisingly inefficient star formation

    Science.gov (United States)

    Geréb, K.; Janowiecki, S.; Catinella, B.; Cortese, L.; Kilborn, V.

    2018-05-01

    We present the results of a multiwavelength survey of H I-excess galaxies, an intriguing population with large H I reservoirs associated with little current star formation. These galaxies have stellar masses M⋆ > 1010 M⊙, and were identified as outliers in the gas fraction versus NUV-r colour and stellar mass surface density scaling relations based on the GALEX Arecibo SDSS Survey (GASS). We obtained H I interferometry with the Giant Metrewave Radio Telescope, Keck optical long-slit spectroscopy, and deep optical imaging (where available) for four galaxies. Our analysis reveals multiple possible reasons for the H I excess in these systems. One galaxy, AGC 10111, shows an H I disc that is counter-rotating with respect to the stellar bulge, a clear indication of external origin of the gas. Another galaxy appears to host a Malin 1-type disc, where a large specific angular momentum has to be invoked to explain the extreme M_{H I}/M⋆ ratio of 166 per cent. The other two galaxies have early-type morphology with very high gas fractions. The lack of merger signatures (unsettled gas, stellar shells, and streams) in these systems suggests that these gas-rich discs have been built several Gyr ago, but it remains unclear how the gas reservoirs were assembled. Numerical simulations of large cosmological volumes are needed to gain insight into the formation of these rare and interesting systems.

  1. Investigating the Randomness of Numbers

    Science.gov (United States)

    Pendleton, Kenn L.

    2009-01-01

    The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…

  2. Number Sense on the Number Line

    Science.gov (United States)

    Woods, Dawn Marie; Ketterlin Geller, Leanne; Basaraba, Deni

    2018-01-01

    A strong foundation in early number concepts is critical for students' future success in mathematics. Research suggests that visual representations, like a number line, support students' development of number sense by helping them create a mental representation of the order and magnitude of numbers. In addition, explicitly sequencing instruction…

  3. The influence of psychological resilience on the relation between automatic stimulus evaluation and attentional breadth for surprised faces.

    Science.gov (United States)

    Grol, Maud; De Raedt, Rudi

    2015-01-01

    The broaden-and-build theory relates positive emotions to resilience and cognitive broadening. The theory proposes that broadening effects underlie the relation between positive emotions and resilience, suggesting that resilient people can benefit more from positive emotions at the level of cognitive functioning. Research has investigated the influence of positive emotions on attentional broadening, but the stimulus that is the target of attention may also influence attentional breadth, depending on affective stimulus evaluation. Surprised faces are particularly interesting as they are valence-ambiguous; therefore, we investigated the relation between affective evaluation (using an affective priming task) and attentional breadth for surprised faces, and how this relation is influenced by resilience. Results show that more positive evaluations are related to more attentional broadening at high levels of resilience, while this relation is reversed at low levels. This indicates that resilient individuals can benefit more from attending to positively evaluated stimuli at the level of attentional broadening.

  4. Self-organizing weights for Internet AS-graphs and surprisingly simple routing metrics

    DEFF Research Database (Denmark)

    Scholz, Jan Carsten; Greiner, Martin

    The transport capacity of Internet-like communication networks and hence their efficiency may be improved by a factor of 5-10 through the use of highly optimized routing metrics, as demonstrated previously. Numerical determination of such routing metrics can be computationally demanding...... metrics. The new metrics have negligible computational cost and result in an approximately 5-fold performance increase, providing distinguished competitiveness with the computationally costly counterparts. They are applicable to very large networks and easy to implement in today's Internet routing...

  5. Team play with a powerful and independent agent: operational experiences and automation surprises on the Airbus A-320

    Science.gov (United States)

    Sarter, N. B.; Woods, D. D.

    1997-01-01

    Research and operational experience have shown that one of the major problems with pilot-automation interaction is a lack of mode awareness (i.e., awareness of the current and future status and behavior of the automation). As a result, pilots sometimes experience so-called automation surprises when the automation takes an unexpected action or fails to behave as anticipated. A lack of mode awareness and automation surprises can be viewed as symptoms of a mismatch between human and machine properties and capabilities. Changes in automation design can therefore be expected to affect the likelihood and nature of problems encountered by pilots. Previous studies have focused exclusively on early-generation "glass cockpit" aircraft that were designed based on a similar automation philosophy. To find out whether similar difficulties with maintaining mode awareness are encountered on more advanced aircraft, a corpus of automation surprises was gathered from pilots of the Airbus A-320, an aircraft characterized by high levels of autonomy, authority, and complexity. To understand the underlying reasons for reported breakdowns in human-automation coordination, we also asked pilots about their monitoring strategies and their experiences with and attitude toward the unique design of flight controls on this aircraft.

  6. The Super Patalan Numbers

    OpenAIRE

    Richardson, Thomas M.

    2014-01-01

    We introduce the super Patalan numbers, a generalization of the super Catalan numbers in the sense of Gessel, and prove a number of properties analogous to those of the super Catalan numbers. The super Patalan numbers generalize the super Catalan numbers similarly to how the Patalan numbers generalize the Catalan numbers.

  7. Testing an emerging paradigm in migration ecology shows surprising differences in efficiency between flight modes.

    Directory of Open Access Journals (Sweden)

    Adam E Duerr

    Full Text Available To maximize fitness, flying animals should maximize flight speed while minimizing energetic expenditure. Soaring speeds of large-bodied birds are determined by flight routes and tradeoffs between minimizing time and energetic costs. Large raptors migrating in eastern North America predominantly glide between thermals that provide lift or soar along slopes or ridgelines using orographic lift (slope soaring. It is usually assumed that slope soaring is faster than thermal gliding because forward progress is constant compared to interrupted progress when birds pause to regain altitude in thermals. We tested this slope-soaring hypothesis using high-frequency GPS-GSM telemetry devices to track golden eagles during northbound migration. In contrast to expectations, flight speed was slower when slope soaring and eagles also were diverted from their migratory path, incurring possible energetic costs and reducing speed of progress towards a migratory endpoint. When gliding between thermals, eagles stayed on track and fast gliding speeds compensated for lack of progress during thermal soaring. When thermals were not available, eagles minimized migration time, not energy, by choosing energetically expensive slope soaring instead of waiting for thermals to develop. Sites suited to slope soaring include ridges preferred for wind-energy generation, thus avian risk of collision with wind turbines is associated with evolutionary trade-offs required to maximize fitness of time-minimizing migratory raptors.

  8. Testing an emerging paradigm in migration ecology shows surprising differences in efficiency between flight modes.

    Science.gov (United States)

    Duerr, Adam E; Miller, Tricia A; Lanzone, Michael; Brandes, Dave; Cooper, Jeff; O'Malley, Kieran; Maisonneuve, Charles; Tremblay, Junior; Katzner, Todd

    2012-01-01

    To maximize fitness, flying animals should maximize flight speed while minimizing energetic expenditure. Soaring speeds of large-bodied birds are determined by flight routes and tradeoffs between minimizing time and energetic costs. Large raptors migrating in eastern North America predominantly glide between thermals that provide lift or soar along slopes or ridgelines using orographic lift (slope soaring). It is usually assumed that slope soaring is faster than thermal gliding because forward progress is constant compared to interrupted progress when birds pause to regain altitude in thermals. We tested this slope-soaring hypothesis using high-frequency GPS-GSM telemetry devices to track golden eagles during northbound migration. In contrast to expectations, flight speed was slower when slope soaring and eagles also were diverted from their migratory path, incurring possible energetic costs and reducing speed of progress towards a migratory endpoint. When gliding between thermals, eagles stayed on track and fast gliding speeds compensated for lack of progress during thermal soaring. When thermals were not available, eagles minimized migration time, not energy, by choosing energetically expensive slope soaring instead of waiting for thermals to develop. Sites suited to slope soaring include ridges preferred for wind-energy generation, thus avian risk of collision with wind turbines is associated with evolutionary trade-offs required to maximize fitness of time-minimizing migratory raptors.

  9. [The significance of a large number of health insurance funds and fusions for health services research with statutory health insurance data in Germany - experiences of the lidA study].

    Science.gov (United States)

    March, S; Powietzka, J; Stallmann, C; Swart, E

    2015-02-01

    Since 1970 the health insurance system in Germany has shrunk by more than 90%, to 132 statutory health insurance funds (SHI) at present. For studies using data from different SHI, this development means a reduction of contacts and a higher workload when requesting data. The latter is due to the fact that fusions bind resources in the health insurance funds. In order to avoid selection in studies among the insured, all SHI must be contacted. Additionally, 15 controlling institutions on the state and national level have to agree, as determined in § 75 of Book Ten of the German Social Code (SGB X). The lidA study, a German cohort study on work, age and health, intends to link primary and secondary data from all SHI of those insured who have given their agreement for participation. Since the beginning of the study in 2009 the number of SHI has been reduced by 70. Of the 6 585 interviews in 2011, approximately half of the interviewees agreed in written form that their individual health insurance data can be linked. This portion of the insured is dispersed among 95 SHI. At this point, 11 contracts with SHI are realised (approximately 50% of the insured) and 8 data controlling authorities have been contacted. The problems involved in the fusion of SHI and their meaning for research are explained in this article. The fusion of SHI makes sense in the long term. It will lead to a reduction of contacts and contracts that researchers have to establish in order to analyse the data. Therefore, this article also discusses the alternative of creating a meta-data set of all the data from the different SHI combined. © Georg Thieme Verlag KG Stuttgart · New York.

  10. Surprising discovery of high-grade pelitic schist on Cavalli Seamount, offshore Northland

    International Nuclear Information System (INIS)

    Mortimer, N.; Walker, N.H.; Herzer, R.H.; Calvert, A.; Seward, D.

    1999-01-01

    Serendipity waits in the world's unexplored places. In March 1999, GNS's ONSIDE expedition to the South Fiji Basin made two dredge hauls on Cavalli Seamount expecting to recover Miocene oceanic volcanic rocks. Instead dredge DIA recovered 1 kg of 10 cm slickensided and chloritised schist pieces and dredge DIB, which was 1.1 km distant, 100 kg of well-foliated biotite schist, mainly in the form of two large subangular boulders. The presence of calc-silicate bands initially suggested a correlation with Buller or Takaka Terranes, but the whole-rock chemical composition of the schist is more consistent with Murihiku, Waipapa or Pahau Terranes. U-Pb TIMS geochronometry of individual detrital zircons indicates a maximum 170 Ma depositional age for the schist protolith, thus clearly ruling out a Buller or Takaka correlation. Rb-Sr whole-rock data suggest a young (Cenozoic?) metamorphic age. (author)

  11. Thanks to CERN's team of surveyors, the Organization's stand at the Night of Science attracted a large number of visitors : the technology and tools used by the surveyors, such as the Terrameter shown here, attracted many visitors to the CERN stand

    CERN Multimedia

    2004-01-01

    Thanks to CERN's team of surveyors, the Organization's stand at the Night of Science attracted a large number of visitors : the technology and tools used by the surveyors, such as the Terrameter shown here, attracted many visitors to the CERN stand

  12. Universal power-law diet partitioning by marine fish and squid with surprising stability–diversity implications

    Science.gov (United States)

    Rossberg, Axel G.; Farnsworth, Keith D.; Satoh, Keisuke; Pinnegar, John K.

    2011-01-01

    A central question in community ecology is how the number of trophic links relates to community species richness. For simple dynamical food-web models, link density (the ratio of links to species) is bounded from above as the number of species increases; but empirical data suggest that it increases without bounds. We found a new empirical upper bound on link density in large marine communities with emphasis on fish and squid, using novel methods that avoid known sources of bias in traditional approaches. Bounds are expressed in terms of the diet-partitioning function (DPF): the average number of resources contributing more than a fraction f to a consumer's diet, as a function of f. All observed DPFs follow a functional form closely related to a power law, with power-law exponents independent of species richness within measurement accuracy. Results imply universal upper bounds on link density across the oceans. However, the inherently scale-free nature of power-law diet partitioning suggests that the DPF itself is a better defined characterization of network structure than link density. PMID:21068048
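    The DPF defined above can be computed directly from diet-fraction data. The sketch below uses an invented two-consumer diet matrix purely for illustration; it is not the authors' dataset or code.

```python
# Diet-partitioning function (DPF) as defined in the abstract:
# DPF(f) = average number of resources contributing more than a
# fraction f to a consumer's diet. Diet data here are invented.

def dpf(diets, f):
    """diets: list of per-consumer diet-fraction lists (each sums to 1)."""
    counts = [sum(1 for share in diet if share > f) for diet in diets]
    return sum(counts) / len(counts)

diets = [
    [0.5, 0.3, 0.2],  # consumer 1 eats three resources
    [0.9, 0.1],       # consumer 2 eats two resources
]

print(dpf(diets, 0.25))  # counts are 2 and 1, so the average is 1.5
```

Evaluating the DPF over a grid of f values and fitting a line on log-log axes is one simple way to check the power-law form the paper reports.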

  13. Mitigating Aviation Communication and Satellite Orbit Operations Surprises from Adverse Space Weather

    Science.gov (United States)

    Tobiska, W. Kent

    2008-01-01

    Adverse space weather affects operational activities in aviation and satellite systems. For example, large solar flares create highly variable enhanced neutral atmosphere and ionosphere electron density regions. These regions impact aviation communication frequencies as well as precision orbit determination. The natural space environment, with its dynamic space weather variability, is additionally changed by human activity. The increase in orbital debris in low Earth orbit (LEO), combined with lower-atmosphere CO2 that rises into the lower thermosphere and causes increased cooling that results in increased debris lifetime, adds to the environmental hazards of navigating in near-Earth space. This is at a time when commercial space endeavors are poised to begin more missions to LEO during the rise of the solar activity cycle toward the next maximum (2012). For satellite and aviation operators, adverse space weather results in greater expenses for orbit management, more communication outages for aviation and ground-based high-frequency radio users, and an inability to effectively plan missions or service customers with space-based communication, imagery, and data transferal during time-critical activities. Examples of some revenue-impacting conditions and solutions for mitigating adverse space weather are offered.

  14. Heuristics can produce surprisingly rational probability estimates: Comment on Costello and Watts (2014).

    Science.gov (United States)

    Nilsson, Håkan; Juslin, Peter; Winman, Anders

    2016-01-01

    Costello and Watts (2014) present a model assuming that people's knowledge of probabilities adheres to probability theory, but that their probability judgments are perturbed by a random noise in the retrieval from memory. Predictions for the relationships between probability judgments for constituent events and their disjunctions and conjunctions, as well as for sums of such judgments, were derived from probability theory. Costello and Watts (2014) report behavioral data showing that subjective probability judgments accord with these predictions. Based on the finding that subjective probability judgments follow probability theory, Costello and Watts (2014) conclude that the results imply that people's probability judgments embody the rules of probability theory and thereby refute theories of heuristic processing. Here, we demonstrate the invalidity of this conclusion by showing that all of the tested predictions follow straightforwardly from an account assuming heuristic probability integration (Nilsson, Winman, Juslin, & Hansson, 2009). We end with a discussion of a number of previous findings that harmonize very poorly with the predictions by the model suggested by Costello and Watts (2014). (c) 2015 APA, all rights reserved.

  15. Collaborative Resilience to Episodic Shocks and Surprises: A Very Long-Term Case Study of Zanjera Irrigation in the Philippines 1979–2010

    Directory of Open Access Journals (Sweden)

    Ruth Yabes

    2015-07-01

    Full Text Available This thirty-year case study uses surveys, semi-structured interviews, and content analysis to examine the adaptive capacity of Zanjera San Marcelino, an indigenous irrigation management system in the northern Philippines. This common pool resource (CPR) system exists within a turbulent social-ecological system (SES) characterized by episodic shocks such as large typhoons as well as novel surprises, such as national political regime change and the construction of large dams. The Zanjera nimbly responded to these challenges, although sometimes in ways that left its structure and function substantially altered. While a partial integration with the Philippine National Irrigation Agency was critical to the Zanjera’s success, this relationship required on-going improvisation and renegotiation. Over time, the Zanjera showed an increasing capacity to learn and adapt. A core contribution of this analysis is the integration of a CPR study within an SES framework to examine resilience, made possible by the occurrence of a wide range of challenges to the Zanjera’s function and survival over the long period of study. Long-term analyses like this one, however rare, are particularly useful for understanding the adaptive and transformative dimensions of resilience.

  16. Surprise in simplicity: an unusual spectral evolution of a single pulse GRB 151006A

    Science.gov (United States)

    Basak, R.; Iyyani, S.; Chand, V.; Chattopadhyay, T.; Bhattacharya, D.; Rao, A. R.; Vadawale, S. V.

    2017-11-01

    We present a detailed analysis of GRB 151006A, the first gamma-ray burst (GRB) detected by the AstroSat Cadmium-Zinc-Telluride Imager (CZTI). We study the long-term spectral evolution by exploiting the capabilities of the Fermi and Swift satellites at different phases, which is complemented by the polarization measurement with the CZTI. While the light curve of the GRB in different energy bands shows a simple pulse profile, the spectrum shows an unusual evolution. The first phase exhibits a hard-to-soft evolution until ∼16-20 s, followed by a sudden increase in the spectral peak reaching a few MeV. Such a dramatic change in the spectral evolution in the case of a single pulse burst is reported for the first time. This is captured by all models we used, namely the Band function, blackbody+Band, and two blackbodies+power law. Interestingly, the Fermi Large Area Telescope also detects its first photon (>100 MeV) during this time. This new injection of energy may be associated with either the beginning of the afterglow phase, or a second hard pulse of the prompt emission itself that, however, is not seen in the otherwise smooth pulse profile. By constructing Bayesian blocks and studying the hardness evolution we find good evidence for a second hard pulse. The Swift data at late epochs (>T90 of the GRB) also show a significant spectral evolution consistent with the early second phase. The CZTI data (100-350 keV), though having low significance (1σ), show high values of polarization in the two epochs (77-94 per cent), in agreement with our interpretation.

  17. Detection and Interpretation of Low-Level and High-Level Surprising and Important Events in Large-Scale Data Streams

    Science.gov (United States)

    2016-06-28

    a current (start) state to one or several desired (goal) state or states. Note that like in standard search in Artificial Intelligence, the goal may...datasets. Publications: J. Zhao, C. Siagian, L. Itti, Fixation Bank: Learning to Reweight Fixation Candidates, In: Proc. IEEE Conference on Computer...eyeglasses, In: Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5674-5679, Nov 2013. [2013 acceptance rate: 43%] A

  18. Communication Management and Trust: Their Role in Building Resilience to "Surprises" Such As Natural Disasters, Pandemic Flu, and Terrorism

    Directory of Open Access Journals (Sweden)

    P. H. Longstaff

    2008-06-01

    Full Text Available In times of public danger such as natural disasters and health emergencies, a country's communication systems will be some of its most important assets because access to information will make individuals and groups more resilient. Communication by those charged with dealing with the situation is often critical. We analyzed reports from a wide variety of crisis incidents and found a direct correlation between trust and an organization's preparedness and internal coordination of crisis communication and the effectiveness of its leadership. Thus, trust is one of the most important variables in effective communication management in times of "surprise."

  19. Number-unconstrained quantum sensing

    Science.gov (United States)

    Mitchell, Morgan W.

    2017-12-01

    Quantum sensing is commonly described as a constrained optimization problem: maximize the information gained about an unknown quantity using a limited number of particles. Important sensors including gravitational wave interferometers and some atomic sensors do not appear to fit this description, because there is no external constraint on particle number. Here, we develop the theory of particle-number-unconstrained quantum sensing, and describe how optimal particle numbers emerge from the competition of particle-environment and particle-particle interactions. We apply the theory to optical probing of an atomic medium modeled as a resonant, saturable absorber, and observe the emergence of well-defined finite optima without external constraints. The results contradict some expectations from number-constrained quantum sensing and show that probing with squeezed beams can give a large sensitivity advantage over classical strategies when each is optimized for particle number.

  20. Large transverse momentum phenomena

    International Nuclear Information System (INIS)

    Brodsky, S.J.

    1977-09-01

    It is pointed out that it is particularly significant that the quantum numbers of the leading particles are strongly correlated with the quantum numbers of the incident hadrons indicating that the valence quarks themselves are transferred to large p/sub t/. The crucial question is how they get there. Various hadron reactions are discussed covering the structure of exclusive reactions, inclusive reactions, normalization of inclusive cross sections, charge correlations, and jet production at large transverse momentum. 46 references

  1. Number words and number symbols a cultural history of numbers

    CERN Document Server

    Menninger, Karl

    1992-01-01

    Classic study discusses number sequence and language and explores written numerals and computations in many cultures. "The historian of mathematics will find much to interest him here both in the contents and viewpoint, while the casual reader is likely to be intrigued by the author's superior narrative ability."

  2. Visuospatial Priming of the Mental Number Line

    Science.gov (United States)

    Stoianov, Ivilin; Kramer, Peter; Umilta, Carlo; Zorzi, Marco

    2008-01-01

    It has been argued that numbers are spatially organized along a "mental number line" that facilitates left-hand responses to small numbers, and right-hand responses to large numbers. We hypothesized that whenever the representations of visual and numerical space are concurrently activated, interactions can occur between them, before response…

  3. teaching multiplication of large positive whole numbers using ...

    African Journals Online (AJOL)

    KEY WORDS: Grating Method, History of Mathematics, Long Multiplication. ... The Wolfram mathworld (n.d.) opined that the ... A further simple random sampling was carried out to select an intact class of 40 students from each of the sampled ...

  4. Boll weevil: experimental sterilization of large numbers by fractionated irradiation

    International Nuclear Information System (INIS)

    Haynes, J.W.; Wright, J.E.; Davich, T.B.; Roberson, J.; Griffin, J.G.; Darden, E.

    1978-01-01

    Boll weevils, Anthonomus grandis grandis Boheman, 9 days after egg implantation in the larval diet were transported from the Boll Weevil Research Laboratory, Mississippi State, MS, to the Comparative Animal Research Laboratory, Oak Ridge, TN, and irradiated with 6.9 krad (test 1) or 7.2 krad (test 2) of 60Co gamma rays delivered in 25 equal doses over 100 h. In test 1, from 600 individual pairs of T (treated) males x N (normal) females, only 114 eggs hatched from a sample of 950 eggs, and 47 adults emerged from a sample of 1042 eggs. Also, from 600 pairs of T females x N males, 6 eggs hatched of a sample of 6 eggs and 12 adults emerged from a sample of 20 eggs. In test 2, from 700 individual pairs of T males x N females, 54 eggs hatched from a sample of 1510, and 10 adults emerged from a sample of 1703 eggs. Also, in T females x N males matings, 1 egg hatched of a sample of 3, and no adults emerged from a sample of 4. Transportation and handling in the 2nd test reduced adult emergence an avg of 49%. Thus the 2 replicates in test 2 resulted in 3.4 × 10^5 and 4.3 × 10^5 irradiated weevils emerging/day for 7 days. Bacterial contamination of weevils was low.

  5. Files synchronization from a large number of insertions and deletions

    Science.gov (United States)

    Ellappan, Vijayan; Kumari, Savera

    2017-11-01

    Synchronization between different versions of files is becoming a major issue that most applications are facing. To make applications more efficient, an economical algorithm is developed from the previously used “File Loading Algorithm”. I am extending this algorithm in three ways: first, dealing with non-binary files; second, a backup is generated for uploaded files; and lastly, each file is synchronized with insertions and deletions. A user can reconstruct a file from the former file while minimizing error, and interactive communication is provided without disturbance. The drawback of the previous system is overcome by using synchronization, in which multiple copies of each file/record are created and stored in a backup database and efficiently restored in case of any unwanted deletion or loss of data. That is, we introduce a protocol that user B may use to reconstruct file X from file Y with suitably low probability of error. Synchronization algorithms find numerous areas of use, including data storage, file sharing, source code control systems, and cloud applications. For example, cloud storage services such as Dropbox synchronize between local copies and cloud backups each time users make changes to local versions. Similarly, synchronization tools are necessary in mobile devices. Specialized synchronization algorithms are used for video and sound editing. Synchronization tools are also capable of performing data duplication.
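    The "reconstruct file X from file Y" idea can be illustrated with a delta-based sketch. This is not the paper's protocol; it is a minimal stand-in using Python's standard difflib, where only the inserted bytes and copy instructions are shipped instead of the whole file.

```python
# Hedged sketch (not the authors' algorithm): rebuild a new file X
# from an older copy Y plus a compact delta of insertions/deletions,
# computed with difflib's edit opcodes.
import difflib

def make_delta(old, new):
    """Return copy/insert instructions sufficient to rebuild `new` from `old`."""
    sm = difflib.SequenceMatcher(a=old, b=new)
    delta = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == 'equal':
            delta.append(('copy', i1, i2))        # reuse a span of the old file
        else:
            delta.append(('insert', new[j1:j2]))  # ship only the new bytes
    return delta

def apply_delta(old, delta):
    parts = []
    for op in delta:
        if op[0] == 'copy':
            parts.append(old[op[1]:op[2]])
        else:
            parts.append(op[1])
    return ''.join(parts)

old = "the quick brown fox"
new = "the slow brown fox jumps"
assert apply_delta(old, make_delta(old, new)) == new  # round trip succeeds
```

Real synchronizers (rsync-style tools, Dropbox) use rolling checksums over blocks rather than character-level diffs, but the copy/insert structure of the delta is the same.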

  6. Large numbers hypothesis. IV - The cosmological constant and quantum physics

    Science.gov (United States)

    Adams, P. J.

    1983-01-01

    In standard physics quantum field theory is based on a flat vacuum space-time. This quantum field theory predicts a nonzero cosmological constant. Hence the gravitational field equations do not admit a flat vacuum space-time. This dilemma is resolved using the units covariant gravitational field equations. This paper shows that the field equations admit a flat vacuum space-time with nonzero cosmological constant if and only if the canonical LNH is valid. This allows an interpretation of the LNH phenomena in terms of a time-dependent vacuum state. If this is correct then the cosmological constant must be positive.

  7. The large Reynolds number - Asymptotic theory of turbulent boundary layers.

    Science.gov (United States)

    Mellor, G. L.

    1972-01-01

    A self-consistent, asymptotic expansion of the one-point, mean turbulent equations of motion is obtained. Results such as the velocity defect law and the law of the wall evolve in a relatively rigorous manner, and a systematic ordering of the mean velocity boundary layer equations and their interaction with the main stream flow are obtained. The analysis is extended to the turbulent energy equation and to a treatment of the small scale equilibrium range of Kolmogoroff; in velocity correlation space the two-thirds power law is obtained. Thus, the two well-known 'laws' of turbulent flow are imbedded in an analysis which provides a great deal of other information.

  8. Rabi-vibronic resonance with large number of vibrational quanta

    OpenAIRE

    Glenn, R.; Raikh, M. E.

    2011-01-01

    We study theoretically the Rabi oscillations of a resonantly driven two-level system linearly coupled to a harmonic oscillator (vibrational mode) with frequency, \\omega_0. We show that for weak coupling, \\omega_p \\ll \\omega_0, where \\omega_p is the polaronic shift, Rabi oscillations are strongly modified in the vicinity of the Rabi-vibronic resonance \\Omega_R = \\omega_0, where \\Omega_R is the Rabi frequency. The width of the resonance is (\\Omega_R-\\omega_0) \\sim \\omega_p^{2/3} \\omega_0^{1/3} ...

  9. Our prescription drugs kill us in large numbers

    DEFF Research Database (Denmark)

    Gøtzsche, Peter C

    2014-01-01

    Our prescription drugs are the third leading cause of death after heart disease and cancer in the United States and Europe. Around half of those who die have taken their drugs correctly; the other half die because of errors, such as too high a dose or use of a drug despite contraindications. Our...

  10. Diamond Fuzzy Number

    Directory of Open Access Journals (Sweden)

    T. Pathinathan

    2015-01-01

    Full Text Available In this paper we define diamond fuzzy number with the help of triangular fuzzy number. We include basic arithmetic operations like addition, subtraction of diamond fuzzy numbers with examples. We define diamond fuzzy matrix with some matrix properties. We have defined Nested diamond fuzzy number and Linked diamond fuzzy number. We have further classified Right Linked Diamond Fuzzy number and Left Linked Diamond Fuzzy number. Finally we have verified the arithmetic operations for the above mentioned types of Diamond Fuzzy Numbers.
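    Since diamond fuzzy numbers are built from triangular fuzzy numbers, the standard triangular arithmetic the paper starts from can be sketched as follows. This shows only the well-known triangular operations; the diamond-specific definitions are given in the paper itself.

```python
# Standard triangular fuzzy number (TFN) arithmetic, the building block
# for the diamond fuzzy numbers described above. A TFN is written
# (a, b, c) = (left support, peak, right support).

def tfn_add(x, y):
    """Componentwise addition of two TFNs."""
    return tuple(xi + yi for xi, yi in zip(x, y))

def tfn_sub(x, y):
    """Subtraction reverses the support ends of the subtrahend."""
    a, b, c = x
    d, e, f = y
    return (a - f, b - e, c - d)

A = (1, 2, 3)
B = (2, 4, 6)
print(tfn_add(A, B))  # (3, 6, 9)
print(tfn_sub(A, B))  # (-5, -2, 1)
```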

  11. Surprising finding on colonoscopy.

    Science.gov (United States)

    Griglione, Nicole; Naik, Jahnavi; Christie, Jennifer

    2010-02-01

    A 48-year-old man went to his primary care physician for his annual physical. He told his physician that for the past few years, he had intermittent, painless rectal bleeding consisting of small amounts of blood on the toilet paper after defecation. He also mentioned that he often spontaneously awoke, very early in the morning. His past medical history was unremarkable. The patient was born in Cuba but had lived in the United States for more than 30 years. He was divorced, lived alone, and had no children. He had traveled to Latin America-including Mexico, Brazil, and Cuba-off and on over the past 10 years. His last trip was approximately 2 years ago. His physical exam was unremarkable. Rectal examination revealed no masses or external hemorrhoids; stool was brown and Hemoccult negative. Labs were remarkable for eosinophilia ranging from 10% to 24% over the past several years (the white blood cell count ranged from 5200 to 5900/mcL). A subsequent colonoscopy revealed many white, thin, motile organisms dispersed throughout the colon. The organisms were most densely populated in the cecum. Of note, the patient also had nonbleeding internal hemorrhoids. An aspiration of the organisms was obtained and sent to the microbiology lab for further evaluation. What is your diagnosis? How would you manage this condition?

  12. Surprising quantum bounces

    CERN Document Server

    Nesvizhevsky, Valery

    2015-01-01

    This unique book demonstrates the undivided unity and infinite diversity of quantum mechanics using a single phenomenon: quantum bounces of ultra-cold particles. Various examples of such "quantum bounces" are: gravitational quantum states of ultra-cold neutrons (the first observed quantum states of matter in a gravitational field), the neutron whispering gallery (an observed matter-wave analog of the whispering gallery effect well known in acoustics and for electromagnetic waves), and gravitational and whispering gallery states for anti-matter atoms that remain to be observed. These quantum states are an invaluable tool in the search for additional fundamental short-range forces, for exploring the gravitational interaction and quantum effects of gravity, for probing physics beyond the standard model, and for furthering studies into the foundations of quantum mechanics, quantum optics, and surface science.

  13. More Supernova Surprises

    Science.gov (United States)

    2010-09-24

    originated in South America. Everyone appreciates the beauty of daisies, chrysanthemums, and sunflowers, and many of us enjoy eating lettuce ...few fossils. On page 1621 of this issue, Barreda et al. ( 1) describe an unusually well-preserved new fossil that sheds light on the history of

  14. Surprising radiation detectors

    CERN Document Server

    Fleischer, Robert

    2003-01-01

    Radiation doses received by the human body can be measured indirectly and retrospectively by counting the tracks left by particles in ordinary objects like a pair of spectacles, glassware, compact disks... This method has been successfully applied to determine neutron radiation doses received 50 years ago on the Hiroshima site. Neutrons themselves do not leave tracks in bulk matter, but glass contains atoms of uranium that may fission when hit by a neutron; the recoil of the fission fragments generates a track that is detectable. The most difficult part is to find adequate glass items and to evaluate the radiation shielding they benefited from at their original location. The same method has been used to determine the radiation dose due to the build-up of radon in houses. In that case the tracks left by alpha particles from the radioactive decay of polonium-210 have been counted on the superficial layer of the window panes. Other materials like polycarbonate plastics have been used to determine the radiation dose due to heavy io...

  15. More statistics, less surprise

    CERN Multimedia

    Antonella Del Rosso & the LHCb collaboration

    2013-01-01

    The LHCb collaboration has recently announced new results for a parameter that measures the CP violation effect in particles containing charm quarks. The new values obtained with a larger data set and with a new independent method are showing that the effect is smaller than previous measurements had suggested. The parameter is back into the Standard Model picture.   CP violation signals – in particles containing charm quarks, such as the D0 particle, is a powerful probe of new physics. Indeed, such effects could result in unexpected values of parameters whose expectation values in the Standard Model are known. Although less precise than similar approaches used in particles made of b quarks, the investigation of the charm system has proven to be intriguing. The LHCb collaboration has reported new measurements of ΔACP, the difference in CP violation between the D0→K+K– and D0→π+π– decays. The results are ob...

  16. From Calculus to Number Theory

    Indian Academy of Sciences (India)

    A. Raghuram

    2016-11-04

    Nov 4, 2016 ... diverges to infinity. This means that given any number M, however large, we can add sufficiently many terms in the above series to make the sum larger than M. This was first proved by Nicole Oresme (1323-1382), a brilliant French philosopher of his times.
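    The divergence claimed above is easy to check numerically: for any target M, keep adding terms 1/n until the partial sum of the harmonic series passes M (it always will, though very slowly). A short sketch:

```python
# Count how many terms of the harmonic series 1 + 1/2 + 1/3 + ...
# are needed before the partial sum first exceeds M.

def terms_needed(M):
    total, n = 0.0, 0
    while total <= M:
        n += 1
        total += 1.0 / n
    return n

print(terms_needed(3))  # 11 terms: H_10 ≈ 2.929, H_11 ≈ 3.020
```

The growth is logarithmic (H_n ≈ ln n), which is why Oresme's grouping argument, not direct computation, is the right proof for arbitrary M.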

  17. Building Numbers from Primes

    Science.gov (United States)

    Burkhart, Jerry

    2009-01-01

    Prime numbers are often described as the "building blocks" of natural numbers. This article shows how the author and his students took this idea literally by using prime factorizations to build numbers with blocks. In this activity, students explore many concepts of number theory, including the relationship between greatest common factors and…
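    The "building blocks" idea in the abstract, including the link between greatest common factors and prime factorizations, can be made concrete in a few lines. The helper names below are our own, not from the article.

```python
# Every natural number > 1 has a unique prime factorization, and the
# gcd of two numbers is read off by taking, for each shared prime,
# the smaller of its two exponents.
from collections import Counter
import math

def factorize(n):
    """Trial-division prime factorization: prime -> exponent."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def gcd_from_factors(a, b):
    fa, fb = factorize(a), factorize(b)
    g = 1
    for p in fa.keys() & fb.keys():   # primes shared by both numbers
        g *= p ** min(fa[p], fb[p])
    return g

print(dict(factorize(360)))                 # 360 = 2^3 * 3^2 * 5
assert gcd_from_factors(360, 84) == math.gcd(360, 84)  # both give 12
```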

  18. Introduction to number theory

    CERN Document Server

    Vazzana, Anthony; Garth, David

    2007-01-01

    One of the oldest branches of mathematics, number theory is a vast field devoted to studying the properties of whole numbers. Offering a flexible format for a one- or two-semester course, Introduction to Number Theory uses worked examples, numerous exercises, and two popular software packages to describe a diverse array of number theory topics.

  19. How to use the Fast Fourier Transform in Large Finite Fields

    OpenAIRE

    Petersen, Petur Birgir

    2011-01-01

    The article contains suggestions on how to perform the Fast Fourier Transform over large finite fields. The technique is to use the fact that the multiplicative groups of specific prime fields are surprisingly composite.
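    The underlying idea can be illustrated with a small number-theoretic transform. In the prime field F_p the nonzero elements form a cyclic group of order p-1; when p-1 is smooth (here p = 257, p-1 = 2^8) the field contains roots of unity of power-of-two order, so a DFT over the field exists and is invertible. The sketch below uses a naive O(n^2) evaluation rather than the fast butterfly, and is our illustration, not the article's construction.

```python
# Number-theoretic transform over F_257. Since 257 - 1 = 2^8 is highly
# composite, F_257 contains an element of multiplicative order 8:
# w = 64 (= 3^32 mod 257, where 3 is a primitive root of 257).
p = 257
n = 8
w = 64

def ntt(a, root):
    """Naive DFT over F_p: evaluate a at powers of `root`."""
    return [sum(a[j] * pow(root, i * j, p) for j in range(n)) % p
            for i in range(n)]

def intt(a):
    """Inverse transform: use the inverse root and divide by n mod p."""
    inv_n = pow(n, -1, p)
    b = ntt(a, pow(w, -1, p))
    return [(x * inv_n) % p for x in b]

x = [1, 2, 3, 4, 0, 0, 0, 0]
assert pow(w, n, p) == 1 and pow(w, n // 2, p) != 1  # w has order exactly 8
assert intt(ntt(x, w)) == x                          # transform is invertible
```

A genuine FFT would replace the double loop with the radix-2 butterfly, which is possible precisely because the order of w is a power of two.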

  20. On the number of special numbers

    Indian Academy of Sciences (India)

    without loss of any generality to be the first k primes), then the equation a + b = c has .... This is an elementary exercise in partial summation (see [12]). Thus ... This is easily done by inserting a stronger form of the prime number theorem into the.

  1. Large complex ovarian cyst managed by laparoscopy

    OpenAIRE

    Dipak J. Limbachiya; Ankit Chaudhari; Grishma P. Agrawal

    2017-01-01

    Complex ovarian cyst with secondary infection is a rare disease that hardly responds to the usual antibiotic treatment. Most of the time, it hampers the day-to-day activities of women. It is commonly known to cause pain and fever. To our surprise, in our case the cyst was large enough to compress the ureter and it was adherent to the surrounding structures. Laparoscopic removal of the cyst was done and the specimen was sent for histopathological examination.

  2. Classical theory of algebraic numbers

    CERN Document Server

    Ribenboim, Paulo

    2001-01-01

Gauss created the theory of binary quadratic forms in "Disquisitiones Arithmeticae", and Kummer invented ideals and the theory of cyclotomic fields in his attempt to prove Fermat's Last Theorem. These were the starting points for the theory of algebraic numbers, developed in the classical papers of Dedekind, Dirichlet, Eisenstein, Hermite and many others. This theory, enriched with more recent contributions, is of basic importance in the study of diophantine equations and arithmetic algebraic geometry, including methods in cryptography. This book gives a clear and thorough exposition of the classical theory of algebraic numbers, and contains a large number of exercises as well as worked-out numerical examples. The Introduction is a recapitulation of results about principal ideal domains, unique factorization domains and commutative fields. Part One is devoted to residue classes and quadratic residues. In Part Two one finds the study of algebraic integers, ideals, units, class numbers, the theory of decomposition, iner...

  3. It's a Girl! Random Numbers, Simulations, and the Law of Large Numbers

    Science.gov (United States)

    Goodwin, Chris; Ortiz, Enrique

    2015-01-01

    Modeling using mathematics and making inferences about mathematical situations are becoming more prevalent in most fields of study. Descriptive statistics cannot be used to generalize about a population or make predictions of what can occur. Instead, inference must be used. Simulation and sampling are essential in building a foundation for…
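The simulation idea behind the title can be demonstrated in a few lines (a sketch of my own, not from the article): simulate births, each a girl with probability 0.5, and watch the observed proportion approach 0.5 as the sample grows, as the law of large numbers predicts.

```python
import random

def proportion_girls(n_births, seed=42):
    """Simulate n_births births (girl with probability 0.5); return observed proportion."""
    rng = random.Random(seed)     # fixed seed keeps the run reproducible
    girls = sum(rng.random() < 0.5 for _ in range(n_births))
    return girls / n_births

# Small samples swing widely; large ones settle near the true probability.
for n in (10, 100, 10_000, 1_000_000):
    print(n, proportion_girls(n))
```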

  4. p-adic numbers

    OpenAIRE

    Grešak, Rozalija

    2015-01-01

The field of real numbers is usually constructed using Dedekind cuts. In this thesis we focus on constructing the real numbers as the metric completion of the rational numbers via Cauchy sequences. In a similar manner we construct the field of p-adic numbers and describe some of their basic and topological properties. We continue with a construction of the complex p-adic numbers and compare them with the ordinary complex numbers. We conclude the thesis by giving a motivation for the int...
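The p-adic metric that such a completion uses can be illustrated directly (my own sketch, not from the thesis): under the 2-adic absolute value, the partial sums of 1 + 2 + 4 + ⋯ form a Cauchy sequence converging to −1, because 1 + 2 + ⋯ + 2^(k−1) = 2^k − 1 differs from −1 by 2^k, which is 2-adically tiny.

```python
def vp(n, p):
    """p-adic valuation: the largest k with p^k dividing the nonzero integer n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def abs_p(n, p):
    """p-adic absolute value |n|_p = p^(-vp(n)), with |0|_p = 0."""
    return 0.0 if n == 0 else p ** -vp(n, p)

p = 2
partials = [sum(p**i for i in range(k)) for k in range(1, 8)]  # 1, 3, 7, 15, ...
# The 2-adic distance from -1 shrinks geometrically: |s_k - (-1)|_2 = |2^k|_2 = 2^-k
for k, s in enumerate(partials, start=1):
    assert abs_p(s + 1, p) == p ** -k
```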

  5. On the number of special numbers

    Indian Academy of Sciences (India)

We now apply the theory of the Thue equation to obtain an effective bound on m. Indeed, by Lemma 3.2, we can write m² = ba³ and m² − 4 = cd³ with b, c cubefree. By the above, both b and c are bounded since they are cubefree and all their prime factors are less than e^63727. Now we have a finite number of Thue equations.

  6. Surprising Ripple Effects: How Changing the SAT Score-Sending Policy for Low-Income Students Impacts College Access and Success

    Science.gov (United States)

    Hurwitz, Michael; Mbekeani, Preeya P.; Nipson, Margaret M.; Page, Lindsay C.

    2017-01-01

    Subtle policy adjustments can induce relatively large "ripple effects." We evaluate a College Board initiative that increased the number of free SAT score reports available to low-income students and changed the time horizon for using these score reports. Using a difference-in-differences analytic strategy, we estimate that targeted…

  7. Number projection method

    International Nuclear Information System (INIS)

    Kaneko, K.

    1987-01-01

A relationship between the number projection and the shell model methods is investigated in the case of a single-j shell. We find a one-to-one correspondence between the number-projected states and the shell model states.

  8. Numbers and brains.

    Science.gov (United States)

    Gallistel, C R

    2017-12-01

    The representation of discrete and continuous quantities appears to be ancient and pervasive in animal brains. Because numbers are the natural carriers of these representations, we may discover that in brains, it's numbers all the way down.

  9. Number in Dinka

    DEFF Research Database (Denmark)

    Andersen, Torben

    2014-01-01

    had a marked singular and an unmarked plural. Synchronically, however, the singular is arguably the basic member of the number category as revealed by the use of the two numbers. In addition, some nouns have a collective form, which is grammatically singular. Number also plays a role...

  10. Safety-in-numbers

    DEFF Research Database (Denmark)

    Elvik, Rune; Bjørnskau, Torkel

    2017-01-01

    Highlights •26 studies of the safety-in-numbers effect are reviewed. •The existence of a safety-in-numbers effect is confirmed. •Results are consistent. •Causes of the safety-in-numbers effect are incompletely known....

  11. Discovery: Prime Numbers

    Science.gov (United States)

    de Mestre, Neville

    2008-01-01

    Prime numbers are important as the building blocks for the set of all natural numbers, because prime factorisation is an important and useful property of all natural numbers. Students can discover them by using the method known as the Sieve of Eratosthenes, named after the Greek geographer and astronomer who lived from c. 276-194 BC. Eratosthenes…
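The Sieve of Eratosthenes described above is easy to state in code (a standard implementation, not taken from the article): cross out every multiple of each prime, and what survives is prime.

```python
def sieve_of_eratosthenes(limit):
    """Return all primes <= limit by repeatedly crossing out multiples."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]            # 0 and 1 are not prime
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # Start at n*n: smaller multiples were crossed out by smaller primes.
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n in range(limit + 1) if is_prime[n]]

print(sieve_of_eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```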

  12. Number to finger mapping is topological.

    NARCIS (Netherlands)

    Plaisier, M.A.; Smeets, J.B.J.

    2011-01-01

    It has been shown that humans associate fingers with numbers because finger counting strategies interact with numerical judgements. At the same time, there is evidence that there is a relation between number magnitude and space as small to large numbers seem to be represented from left to right. In

  13. Applied number theory

    CERN Document Server

    Niederreiter, Harald

    2015-01-01

    This textbook effectively builds a bridge from basic number theory to recent advances in applied number theory. It presents the first unified account of the four major areas of application where number theory plays a fundamental role, namely cryptography, coding theory, quasi-Monte Carlo methods, and pseudorandom number generation, allowing the authors to delineate the manifold links and interrelations between these areas.  Number theory, which Carl-Friedrich Gauss famously dubbed the queen of mathematics, has always been considered a very beautiful field of mathematics, producing lovely results and elegant proofs. While only very few real-life applications were known in the past, today number theory can be found in everyday life: in supermarket bar code scanners, in our cars’ GPS systems, in online banking, etc.  Starting with a brief introductory course on number theory in Chapter 1, which makes the book more accessible for undergraduates, the authors describe the four main application areas in Chapters...

  14. Surprisingly high specificity of the PPD skin test for M. tuberculosis infection from recent exposure in The Gambia.

    Science.gov (United States)

    Hill, Philip C; Brookes, Roger H; Fox, Annette; Jackson-Sillah, Dolly; Lugos, Moses D; Jeffries, David J; Donkor, Simon A; Adegbola, Richard A; McAdam, Keith P W J

    2006-12-20

Options for intervention against Mycobacterium tuberculosis infection are limited by the diagnostic tools available. The Purified Protein Derivative (PPD) skin test is thought to be non-specific, especially in tropical settings. We compared the PPD skin test with an ELISPOT test in The Gambia. Household contacts over six months of age of sputum smear positive TB cases and community controls were recruited. They underwent a PPD skin test and an ELISPOT test for the T cell response to PPD and ESAT-6/CFP10 antigens. Responsiveness to M. tuberculosis exposure was analysed according to sleeping proximity to an index case using logistic regression. 615 household contacts and 105 community controls were recruited. Positivity in all three tests increased significantly with increasing M. tuberculosis exposure, most dramatically for the PPD skin test (OR 15.7; 95% CI 6.6-35.3). While PPD skin test positivity continued to trend downwards in the community with increasing distance from a known case (61.9% to 14.3%), the PPD and ESAT-6/CFP-10 ELISPOT positivity did not. The PPD skin test was more in agreement with the ESAT-6/CFP-10 ELISPOT (75%, p = 0.01) than with the PPD ELISPOT (53%). With increasing exposure, the proportion of contacts who were PPD skin test positive increased and the proportion who were PPD skin test negative decreased. The PPD skin test thus has surprisingly high specificity for M. tuberculosis infection from recent exposure in The Gambia. In this setting, anti-tuberculous prophylaxis in PPD skin test positive individuals should be revisited.

  15. Surprising transformation of a block copolymer into a high performance polystyrene ultrafiltration membrane with a hierarchically organized pore structure

    KAUST Repository

    Shevate, Rahul

    2018-02-08

We describe the preparation of hierarchical polystyrene nanoporous membranes with a very narrow pore size distribution and an extremely high porosity. The nanoporous structure is formed as a result of unusual degradation of the poly(4-vinyl pyridine) block from self-assembled poly(styrene)-b-poly(4-vinyl pyridine) (PS-b-P4VP) membranes through the formation of an unstable pyridinium intermediate in an alkaline medium. During this process, the confined swelling and controlled degradation produced a tunable pore size. We unequivocally confirmed the successful elimination of the P4VP block from a PS-b-P4VP membrane using 1D/2D NMR spectroscopy and other characterization techniques. Surprisingly, the long-range ordered surface porosity was preserved even after degradation of the P4VP block from the main chain of the diblock copolymer, as revealed by SEM. Aside from a drastically improved water flux (∼67% increase) compared to the PS-b-P4VP membrane, the hydraulic permeability measurements validated the pH-independent behaviour of the isoporous PS membrane over a wide pH range from 3 to 10. The effect of the pore size on protein transport rate and selectivity (α) was investigated for lysozyme (Lys), bovine serum albumin (BSA) and globulin-γ (IgG). A high selectivity of 42 (Lys/IgG) and 30 (BSA/IgG) was attained, making the membranes attractive for size-selective separation of biomolecules from their synthetic model mixture solutions.

  16. How to reach clients of female sex workers: a survey by surprise in brothels in Dakar, Senegal.

    Science.gov (United States)

    Espirito Santo, M. E. Gomes do; Etheredge, G. D.

    2002-01-01

    OBJECTIVE: To describe the sampling techniques and survey procedures used in identifying male clients who frequent brothels to buy sexual services from female sex workers in Dakar, Senegal, with the aim of measuring the prevalence of human immunodeficiency virus (HIV) infection and investigating related risk behaviours. METHODS: Surveys were conducted in seven brothels in Dakar, Senegal. Clients were identified "by surprise" and interviewed and requested to donate saliva for HIV testing. RESULTS: Of the 1450 clients of prostitutes who were solicited to enter the study, 1140 (79.8%) agreed to be interviewed; 1083 (95%) of these clients provided saliva samples for testing. Of the samples tested, 47 were positive for HIV-1 or HIV-2, giving an HIV prevalence of 4.4%. CONCLUSION: The procedures adopted were successful in reaching the target population. Men present in the brothels could not deny being there, and it proved possible to explain the purpose of the study and to gain their confidence. Collection of saliva samples was shown to be an excellent method for performing HIV testing in difficult field conditions where it is hard to gain access to the population under study. The surveying of prostitution sites is recommended as a means of identifying core groups for HIV infection with a view to targeting education programmes more effectively. In countries such as Senegal, where the prevalence of HIV infection is still low, interventions among commercial sex workers and their clients may substantially delay the onset of a larger epidemic in the general population. PMID:12378288

  17. Large deviations

    CERN Document Server

    Deuschel, Jean-Dominique; Deuschel, Jean-Dominique

    2001-01-01

    This is the second printing of the book first published in 1988. The first four chapters of the volume are based on lectures given by Stroock at MIT in 1987. They form an introduction to the basic ideas of the theory of large deviations and make a suitable package on which to base a semester-length course for advanced graduate students with a strong background in analysis and some probability theory. A large selection of exercises presents important material and many applications. The last two chapters present various non-uniform results (Chapter 5) and outline the analytic approach that allow

18. The Impact of a Surprise Dividend Increase on a Stock's Performance: the Analysis of Companies Listed on the Warsaw Stock Exchange

    Directory of Open Access Journals (Sweden)

    Tomasz Słoński

    2012-01-01

Full Text Available The reaction of marginal investors to the announcement of a surprise dividend increase has been measured. Although the field research was performed on companies listed on the Warsaw Stock Exchange, the paper has important theoretical implications. Valuation theory gives many clues for the interpretation of changes in dividends. At the start of the literature review, the assumption of the irrelevance of dividends (to investment decisions) is described. This assumption is the basis for up-to-date valuation procedures leading to fundamental and fair market valuation of equity (shares). The paper is designed to verify whether the market value of stock is immune to the surprise announcement of a dividend increase. This study of the effect of a surprise dividend increase gives the chance to partially isolate such an event from dividend changes based on long-term expectations. The result of the research explicitly shows that a surprise dividend increase is on average welcomed by investors (an average abnormal return of 2.24% with an associated p-value of 0.001). Abnormal returns are realized by investors when there is a surprise increase in a dividend payout. The subsample of relatively high increases in a dividend payout enables investors to gain a 3.2% return on average. The results show that valuation models should be revised to take into account a possible impact of dividend changes on investors' behavior. (original abstract)

  19. Predicting Lotto Numbers

    DEFF Research Database (Denmark)

    Jørgensen, Claus Bjørn; Suetens, Sigrid; Tyran, Jean-Robert

We investigate the "law of small numbers" using a unique panel data set on lotto gambling. Because we can track individual players over time, we can measure how they react to outcomes of recent lotto drawings. We can therefore test whether they behave as if they believe they can predict lotto numbers based on recent drawings. While most players pick the same set of numbers week after week without regard to the numbers drawn or anything else, we find that those who do change act on average in the way predicted by the law of small numbers as formalized in recent behavioral theory. In particular, on average they move away from numbers that have recently been drawn, as suggested by the "gambler's fallacy", and move toward numbers that are on a streak, i.e. have been drawn several weeks in a row, consistent with the "hot-hand fallacy".

  20. Invitation to number theory

    CERN Document Server

    Ore, Oystein

    2017-01-01

    Number theory is the branch of mathematics concerned with the counting numbers, 1, 2, 3, … and their multiples and factors. Of particular importance are odd and even numbers, squares and cubes, and prime numbers. But in spite of their simplicity, you will meet a multitude of topics in this book: magic squares, cryptarithms, finding the day of the week for a given date, constructing regular polygons, pythagorean triples, and many more. In this revised edition, John Watkins and Robin Wilson have updated the text to bring it in line with contemporary developments. They have added new material on Fermat's Last Theorem, the role of computers in number theory, and the use of number theory in cryptography, and have made numerous minor changes in the presentation and layout of the text and the exercises.

  1. Experimental determination of Ramsey numbers.

    Science.gov (United States)

    Bian, Zhengbing; Chudak, Fabian; Macready, William G; Clark, Lane; Gaitan, Frank

    2013-09-27

    Ramsey theory is a highly active research area in mathematics that studies the emergence of order in large disordered structures. Ramsey numbers mark the threshold at which order first appears and are extremely difficult to calculate due to their explosive rate of growth. Recently, an algorithm that can be implemented using adiabatic quantum evolution has been proposed that calculates the two-color Ramsey numbers R(m,n). Here we present results of an experimental implementation of this algorithm and show that it correctly determines the Ramsey numbers R(3,3) and R(m,2) for 4≤m≤8. The R(8,2) computation used 84 qubits of which 28 were computational qubits. This computation is the largest experimental implementation of a scientifically meaningful adiabatic evolution algorithm that has been done to date.
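The base case R(3,3) = 6 reported above can be checked classically by brute force (a sketch of my own, unrelated to the quantum algorithm in the paper): K5 admits a 2-coloring of its edges with no monochromatic triangle, while every 2-coloring of K6 contains one.

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """coloring maps each edge (i, j) with i < j of K_n to color 0 or 1."""
    return any(
        coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_coloring_has_mono_triangle(n):
    """Exhaustively test all 2-colorings of the edges of K_n."""
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(edges, colors)))
        for colors in product((0, 1), repeat=len(edges))
    )

# K5 can avoid a monochromatic triangle; K6 cannot: hence R(3,3) = 6.
assert not every_coloring_has_mono_triangle(5)
assert every_coloring_has_mono_triangle(6)
```

The K6 check enumerates all 2^15 = 32,768 colorings, which is why brute force stops being viable almost immediately as m and n grow, exactly the explosive growth the abstract mentions.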

  2. The adventure of numbers

    CERN Document Server

    Godefroy, Gilles

    2004-01-01

    Numbers are fascinating. The fascination begins in childhood, when we first learn to count. It continues as we learn arithmetic, algebra, geometry, and so on. Eventually, we learn that numbers not only help us to measure the world, but also to understand it and, to some extent, to control it. In The Adventure of Numbers, Gilles Godefroy follows the thread of our expanding understanding of numbers to lead us through the history of mathematics. His goal is to share the joy of discovering and understanding this great adventure of the mind. The development of mathematics has been punctuated by a n

  3. Predicting Lotto Numbers

    DEFF Research Database (Denmark)

    Suetens, Sigrid; Galbo-Jørgensen, Claus B.; Tyran, Jean-Robert Karl

    2016-01-01

We investigate the 'law of small numbers' using a data set on lotto gambling that allows us to measure players' reactions to draws. While most players pick the same set of numbers week after week, we find that those who do change react on average as predicted by the law of small numbers as formalized in recent behavioral theory. In particular, players tend to bet less on numbers that have been drawn in the preceding week, as suggested by the 'gambler's fallacy', and bet more on a number if it was frequently drawn in the recent past, consistent with the 'hot-hand fallacy'.

  4. Beurling generalized numbers

    CERN Document Server

    Diamond, Harold G; Cheung, Man Ping

    2016-01-01

    "Generalized numbers" is a multiplicative structure introduced by A. Beurling to study how independent prime number theory is from the additivity of the natural numbers. The results and techniques of this theory apply to other systems having the character of prime numbers and integers; for example, it is used in the study of the prime number theorem (PNT) for ideals of algebraic number fields. Using both analytic and elementary methods, this book presents many old and new theorems, including several of the authors' results, and many examples of extremal behavior of g-number systems. Also, the authors give detailed accounts of the L^2 PNT theorem of J. P. Kahane and of the example created with H. L. Montgomery, showing that additive structure is needed for proving the Riemann hypothesis. Other interesting topics discussed are propositions "equivalent" to the PNT, the role of multiplicative convolution and Chebyshev's prime number formula for g-numbers, and how Beurling theory provides an interpretation of the ...

  5. Intuitive numbers guide decisions

    Directory of Open Access Journals (Sweden)

    Ellen Peters

    2008-12-01

Full Text Available Measuring reaction times to number comparisons is thought to reveal a processing stage in elementary numerical cognition linked to internal, imprecise representations of number magnitudes. These intuitive representations of the mental number line have been demonstrated across species and human development but have been little explored in decision making. This paper develops and tests hypotheses about the influence of such evolutionarily ancient, intuitive numbers on human decisions. We demonstrate that individuals with more precise mental-number-line representations are higher in numeracy (number skills), consistent with previous research with children. Individuals with more precise representations (compared to those with less precise representations) also were more likely to choose larger, later amounts over smaller, immediate amounts, particularly with a larger proportional difference between the two monetary outcomes. In addition, they were more likely to choose an option with a larger proportional but smaller absolute difference compared to those with less precise representations. These results are consistent with intuitive number representations underlying: (a) perceived differences between numbers, (b) the extent to which proportional differences are weighed in decisions, and, ultimately, (c) the valuation of decision options. Human decision processes involving numbers important to health and financial matters may be rooted in elementary, biological processes shared with other species.

  6. Numbers, sequences and series

    CERN Document Server

    Hirst, Keith

    1994-01-01

    Number and geometry are the foundations upon which mathematics has been built over some 3000 years. This book is concerned with the logical foundations of number systems from integers to complex numbers. The author has chosen to develop the ideas by illustrating the techniques used throughout mathematics rather than using a self-contained logical treatise. The idea of proof has been emphasised, as has the illustration of concepts from a graphical, numerical and algebraic point of view. Having laid the foundations of the number system, the author has then turned to the analysis of infinite proc

  7. Hyperreal Numbers for Infinite Divergent Series

    OpenAIRE

    Bartlett, Jonathan

    2018-01-01

    Treating divergent series properly has been an ongoing issue in mathematics. However, many of the problems in divergent series stem from the fact that divergent series were discovered prior to having a number system which could handle them. The infinities that resulted from divergent series led to contradictions within the real number system, but these contradictions are largely alleviated with the hyperreal number system. Hyperreal numbers provide a framework for dealing with divergent serie...

  8. A new in vivo model of pantothenate kinase-associated neurodegeneration reveals a surprising role for transcriptional regulation in pathogenesis.

    Directory of Open Access Journals (Sweden)

    Varun ePandey

    2013-09-01

Full Text Available Pantothenate Kinase-Associated Neurodegeneration (PKAN) is a neurodegenerative disorder with a poorly understood molecular mechanism. It is caused by mutations in Pantothenate Kinase, the first enzyme in the Coenzyme A (CoA) biosynthetic pathway. Here, we developed a Drosophila model of PKAN (tim-fbl flies) that allows us to continuously monitor the modeled disease in the brain. In tim-fbl flies, downregulation of fumble, the Drosophila PanK homologue, in the cells containing a circadian clock results in characteristic features of PKAN such as developmental lethality, hypersensitivity to oxidative stress, and diminished life span. Despite quasi-normal circadian transcriptional rhythms, tim-fbl flies display brain-specific aberrant circadian locomotor rhythms and a unique transcriptional signature. Comparison with expression data from flies exposed to paraquat demonstrates that, as previously suggested, pathways other than oxidative stress are affected by PanK downregulation. Surprisingly, we found a significant decrease in the expression of key components of the photoreceptor recycling pathways, which could lead to retinal degeneration, a hallmark of PKAN. Importantly, these defects are not accompanied by changes in structural eye genes, suggesting that changes in gene expression in the eye precede and may cause the retinal degeneration. Indeed tim-fbl flies have a diminished response to light transitions, and their altered day/night patterns of activity demonstrate defects in light perception. This suggests that retinal lesions are not solely due to oxidative stress and demonstrates a role for the transcriptional response to CoA deficiency in the defects observed in dPanK-deficient flies. Moreover, in the present study we developed a new fly model that can be applied to other diseases and that allows the assessment of neurodegeneration in the brains of living flies.

  9. Colleges Leverage Large Endowments to Benefit Some Donors and Employees

    Science.gov (United States)

    Hermes, J. J.

    2008-01-01

    College endowments have beaten the market so consistently in recent years, it is not surprising that individuals would like to take advantage of that institutional wisdom to invest their own money. Increasingly, many are. A small but growing number of universities are trying to entice donors to invest their trusts alongside college endowments,…

  10. The roles of large top predators in coastal ecosystems: new insights from long term ecological research

    Science.gov (United States)

    Rosenblatt, Adam E.; Heithaus, Michael R.; Mather, Martha E.; Matich, Philip; Nifong, James C.; Ripple, William J.; Silliman, Brian R.

    2013-01-01

    During recent human history, human activities such as overhunting and habitat destruction have severely impacted many large top predator populations around the world. Studies from a variety of ecosystems show that loss or diminishment of top predator populations can have serious consequences for population and community dynamics and ecosystem stability. However, there are relatively few studies of the roles of large top predators in coastal ecosystems, so that we do not yet completely understand what could happen to coastal areas if large top predators are extirpated or significantly reduced in number. This lack of knowledge is surprising given that coastal areas around the globe are highly valued and densely populated by humans, and thus coastal large top predator populations frequently come into conflict with coastal human populations. This paper reviews what is known about the ecological roles of large top predators in coastal systems and presents a synthesis of recent work from three coastal eastern US Long Term Ecological Research (LTER) sites where long-term studies reveal what appear to be common themes relating to the roles of large top predators in coastal systems. We discuss three specific themes: (1) large top predators acting as mobile links between disparate habitats, (2) large top predators potentially affecting nutrient and biogeochemical dynamics through localized behaviors, and (3) individual specialization of large top predator behaviors. We also discuss how research within the LTER network has led to enhanced understanding of the ecological roles of coastal large top predators. Highlighting this work is intended to encourage further investigation of the roles of large top predators across diverse coastal aquatic habitats and to better inform researchers and ecosystem managers about the importance of large top predators for coastal ecosystem health and stability.

  11. The MIXMAX random number generator

    Science.gov (United States)

    Savvidy, Konstantin G.

    2015-11-01

In this paper, we study the randomness properties of unimodular matrix random number generators. Under well-known conditions, these discrete-time dynamical systems have the highly desirable K-mixing properties which guarantee high quality random numbers. It is found that some widely used random number generators have poor Kolmogorov entropy and consequently fail in empirical tests of randomness. These tests show that the lowest acceptable value of the Kolmogorov entropy is around 50. Next, we provide a solution to the problem of determining the maximal period of unimodular matrix generators of pseudo-random numbers. We formulate the necessary and sufficient condition to attain the maximum period and present a family of specific generators in the MIXMAX family with superior performance and excellent statistical properties. Finally, we construct three efficient algorithms for operations with the MIXMAX matrix, which is a multi-dimensional generalization of the famous cat map: the first computes multiplication by the MIXMAX matrix in O(N) operations, the second recursively computes its characteristic polynomial in O(N²) operations, and the third applies skips of a large number of steps S to the sequence in O(N² log(S)) operations.
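The cat-map idea behind such generators can be sketched in a few lines (a toy of my own, using Arnold's 2×2 cat map rather than the actual MIXMAX matrix, with an assumed Mersenne-prime modulus): iterate a state vector under a fixed unimodular integer matrix modulo a prime and read off one coordinate.

```python
# Toy illustration (NOT the actual MIXMAX matrix): iterate a state vector
# under a unimodular integer matrix mod a prime, in the spirit of the
# multi-dimensional cat map described above.
P = 2**31 - 1  # a Mersenne prime modulus (an assumption for this sketch)

# Arnold's cat map matrix [[1, 1], [1, 2]], determinant 1 (unimodular).
M = [[1, 1],
     [1, 2]]

def step(state):
    """One iteration: state <- M @ state (mod P)."""
    return [sum(M[i][j] * state[j] for j in range(2)) % P for i in range(2)]

def generate(seed_state, count):
    """Yield `count` pseudo-random values in [0, 1) from the first coordinate."""
    state = list(seed_state)
    for _ in range(count):
        state = step(state)
        yield state[0] / P

values = list(generate([12345, 67890], 5))
assert all(0.0 <= v < 1.0 for v in values)
```

A real MIXMAX generator uses a much larger N×N matrix chosen for maximal period and high Kolmogorov entropy; this 2×2 toy only shows the mechanics of the iteration.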

  12. Random number generation

    International Nuclear Information System (INIS)

    Coveyou, R.R.

    1974-01-01

    The subject of random number generation is currently controversial. Differing opinions on this subject seem to stem from implicit or explicit differences in philosophy; in particular, from differing ideas concerning the role of probability in the real world of physical processes, electronic computers, and Monte Carlo calculations. An attempt is made here to reconcile these views. The role of stochastic ideas in mathematical models is discussed. In illustration of these ideas, a mathematical model of the use of random number generators in Monte Carlo calculations is constructed. This model is used to set up criteria for the comparison and evaluation of random number generators. (U.S.)

  13. Algebraic number theory

    CERN Document Server

    Weiss, Edwin

    1998-01-01

    Careful organization and clear, detailed proofs characterize this methodical, self-contained exposition of basic results of classical algebraic number theory from a relatively modem point of view. This volume presents most of the number-theoretic prerequisites for a study of either class field theory (as formulated by Artin and Tate) or the contemporary treatment of analytical questions (as found, for example, in Tate's thesis).Although concerned exclusively with algebraic number fields, this treatment features axiomatic formulations with a considerable range of applications. Modem abstract te

  14. Advanced number theory

    CERN Document Server

    Cohn, Harvey

    1980-01-01

    ""A very stimulating book ... in a class by itself."" - American Mathematical MonthlyAdvanced students, mathematicians and number theorists will welcome this stimulating treatment of advanced number theory, which approaches the complex topic of algebraic number theory from a historical standpoint, taking pains to show the reader how concepts, definitions and theories have evolved during the last two centuries. Moreover, the book abounds with numerical examples and more concrete, specific theorems than are found in most contemporary treatments of the subject.The book is divided into three parts

  15. The emergence of number

    CERN Document Server

    Crossley, John N

    1987-01-01

    This book presents detailed studies of the development of three kinds of number. In the first part the development of the natural numbers from Stone-Age times right up to the present day is examined not only from the point of view of pure history but also taking into account archaeological, anthropological and linguistic evidence. The dramatic change caused by the introduction of logical theories of number in the 19th century is also treated and this part ends with a non-technical account of the very latest developments in the area of Gödel's theorem. The second part is concerned with the deve

  16. Professor Stewart's incredible numbers

    CERN Document Server

    Stewart, Ian

    2015-01-01

    Ian Stewart explores the astonishing properties of numbers from 1 to 10 to zero and infinity, including one figure that, if you wrote it out, would span the universe. He looks at every kind of number you can think of - real, imaginary, rational, irrational, positive and negative - along with several you might have thought you couldn't think of. He explains the insights of the ancient mathematicians, shows how numbers have evolved through the ages, and reveals the way numerical theory enables everyday life. Under Professor Stewart's guidance you will discover the mathematics of codes,

  17. Fundamentals of number theory

    CERN Document Server

    LeVeque, William J

    1996-01-01

    This excellent textbook introduces the basics of number theory, incorporating the language of abstract algebra. A knowledge of such algebraic concepts as group, ring, field, and domain is not assumed, however; all terms are defined and examples are given - making the book self-contained in this respect. The author begins with an introductory chapter on number theory and its early history. Subsequent chapters deal with unique factorization and the GCD, quadratic residues, number-theoretic functions and the distribution of primes, sums of squares, quadratic equations and quadratic fields, diopha

  18. Numbers and computers

    CERN Document Server

    Kneusel, Ronald T

    2015-01-01

    This is a book about numbers and how those numbers are represented in and operated on by computers. It is crucial that developers understand this area because the numerical operations allowed by computers, and the limitations of those operations, especially in the area of floating point math, affect virtually everything people try to do with computers. This book aims to fill this gap by exploring, in sufficient but not overwhelming detail, just what it is that computers do with numbers. Divided into two parts, the first deals with standard representations of integers and floating point numb
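The floating-point limitations this blurb alludes to can be seen directly in a few lines of Python (a generic illustration, not an example taken from the book):

```python
import struct

# 0.1 has no exact binary64 (IEEE 754 double) representation,
# so decimal arithmetic accumulates tiny rounding errors.
print(0.1 + 0.2 == 0.3)          # False
print(f"{0.1 + 0.2:.17g}")       # 0.30000000000000004

# Inspect the raw IEEE 754 bit pattern of the double nearest to 0.1.
bits = struct.unpack(">Q", struct.pack(">d", 0.1))[0]
print(f"{bits:064b}")            # sign | 11-bit exponent | 52-bit fraction

# Integers, by contrast, are represented exactly up to 2**53:
# 2**53 + 1 rounds back down to 2**53.
print(float(2**53) == float(2**53 + 1))  # True
```

The `2**53` boundary is why mixing large integer IDs with floating-point code silently loses precision, one of the practical pitfalls the book's first part covers.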

  19. Elementary theory of numbers

    CERN Document Server

    Sierpinski, Waclaw

    1988-01-01

    Since the publication of the first edition of this work, considerable progress has been made in many of the questions examined. This edition has been updated and enlarged, and the bibliography has been revised. The variety of topics covered here includes divisibility, diophantine equations, prime numbers (especially Mersenne and Fermat primes), the basic arithmetic functions, congruences, the quadratic reciprocity law, expansion of real numbers into decimal fractions, decomposition of integers into sums of powers, some other problems of the additive theory of numbers and the theory of Gaussian

  20. On powerful numbers

    Directory of Open Access Journals (Sweden)

    R. A. Mollin

    1986-01-01

    Full Text Available A powerful number is a positive integer n satisfying the property that p^2 divides n whenever the prime p divides n; i.e., in the canonical prime decomposition of n, no prime appears with exponent 1. In [1], S.W. Golomb introduced and studied such numbers. In particular, he asked whether (25, 27) is the only pair of consecutive odd powerful numbers. This question was settled in [2] by W.A. Sentance, who gave necessary and sufficient conditions for the existence of such pairs. The first result of this paper is to provide a generalization of Sentance's result by giving necessary and sufficient conditions for the existence of pairs of powerful numbers spaced evenly apart. This result leads us naturally to consider integers which are representable as a proper difference of two powerful numbers, i.e. n = p1 − p2 where p1 and p2 are powerful numbers with g.c.d.(p1, p2) = 1. Golomb (op. cit.) conjectured that 6 is not a proper difference of two powerful numbers, and that there are infinitely many numbers which cannot be represented as a proper difference of two powerful numbers. The antithesis of this conjecture was proved by W.L. McDaniel [3], who verified that every non-zero integer is in fact a proper difference of two powerful numbers in infinitely many ways. McDaniel's proof is essentially an existence proof. The second result of this paper is a simpler proof of McDaniel's result as well as an effective algorithm (in the proof) for explicitly determining infinitely many such representations. However, in both our proof and McDaniel's proof one of the powerful numbers is almost always a perfect square (namely, one is always a perfect square when n ≢ 2 (mod 4)). We provide in §2 a proof that all even integers are representable in infinitely many ways as a proper nonsquare difference; i.e., a proper difference of two powerful numbers neither of which is a perfect square. This, in conjunction with the odd case in [4], shows that every integer is representable in
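The definition of a powerful number (every prime factor appears with exponent at least 2) is easy to check by trial-division factorization. The sketch below, a quick illustration not drawn from the paper, verifies Golomb's pair (25, 27) and searches for odd powerful numbers differing by 2:

```python
def is_powerful(n: int) -> bool:
    """True if every prime in n's factorization has exponent >= 2."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            exp = 0
            while n % p == 0:
                n //= p
                exp += 1
            if exp < 2:
                return False
        p += 1
    # Any factor remaining after trial division is a prime with exponent 1.
    return n == 1

# Golomb's pair of consecutive odd powerful numbers: 25 = 5^2, 27 = 3^3.
assert is_powerful(25) and is_powerful(27)

# Search for pairs of odd powerful numbers spaced 2 apart below a bound.
pairs = [(n, n + 2) for n in range(3, 100_000, 2)
         if is_powerful(n) and is_powerful(n + 2)]
print(pairs)
```

Such pairs are rare: (25, 27) is the first, and the search bound can be raised to probe Golomb's question empirically, though only Sentance's necessary-and-sufficient conditions settle it in general.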