WorldWideScience

Sample records for providing large numbers

  1. Large number discrimination in newborn fish.

    Directory of Open Access Journals (Sweden)

    Laura Piffer

    Full Text Available Quantitative abilities have been reported in a wide range of species, including fish. Recent studies have shown that adult guppies (Poecilia reticulata) can spontaneously select the larger number of conspecifics. In particular, the evidence collected in the literature suggests the existence of two distinct systems of number representation: a precise system up to 4 units, and an approximate system for larger numbers. Spontaneous numerical abilities, however, seem to be limited to 4 units at birth, and it is currently unclear whether or not the large number system is absent during the first days of life. In the present study, we investigated whether newborn guppies can be trained to discriminate between large quantities. Subjects were required to discriminate between groups of dots with a 0.50 ratio (e.g., 7 vs. 14) in order to obtain a food reward. To dissociate the roles of number and continuous quantities that co-vary with numerical information (such as cumulative surface area, space and density), three different experiments were set up: in Exp. 1 number and continuous quantities were simultaneously available; in Exp. 2 we controlled for continuous quantities and only numerical information was available; in Exp. 3 numerical information was made irrelevant and only continuous quantities were available. Subjects successfully solved the tasks in Exp. 1 and 2, providing the first evidence of large number discrimination in newborn fish. No discrimination was found in Exp. 3, meaning that number acuity is better than spatial acuity. A comparison with the onset of numerical abilities observed in shoal-choice tests suggests that training procedures can promote the development of numerical abilities in guppies.

  2. Thermal convection for large Prandtl numbers

    NARCIS (Netherlands)

    Grossmann, Siegfried; Lohse, Detlef

    2001-01-01

    The Rayleigh-Bénard theory by Grossmann and Lohse [J. Fluid Mech. 407, 27 (2000)] is extended towards very large Prandtl numbers Pr. The Nusselt number Nu is found here to be independent of Pr. However, for fixed Rayleigh numbers Ra a maximum in the Nu(Pr) dependence is predicted. We moreover offer

  3. Evidence for Knowledge of the Syntax of Large Numbers in Preschoolers

    Science.gov (United States)

    Barrouillet, Pierre; Thevenot, Catherine; Fayol, Michel

    2010-01-01

    The aim of this study was to provide evidence for knowledge of the syntax governing the verbal form of large numbers in preschoolers long before they are able to count up to these numbers. We reasoned that if such knowledge exists, it should facilitate the maintenance in short-term memory of lists of lexical primitives that constitute a number…

  4. Large number discrimination by mosquitofish.

    Directory of Open Access Journals (Sweden)

    Christian Agrillo

    Full Text Available BACKGROUND: Recent studies have demonstrated that fish display rudimentary numerical abilities similar to those observed in mammals and birds. The mechanisms underlying the discrimination of small quantities (<4) were recently investigated while, to date, no study has examined the discrimination of large numerosities in fish. METHODOLOGY/PRINCIPAL FINDINGS: Subjects were trained to discriminate between two sets of small geometric figures using social reinforcement. In the first experiment mosquitofish were required to discriminate 4 from 8 objects with or without experimental control of the continuous variables that co-vary with number (area, space, density, total luminance). Results showed that fish can use the sole numerical information to compare quantities but that they preferentially use cumulative surface area as a proxy of the number when this information is available. A second experiment investigated the influence of the total number of elements to discriminate large quantities. Fish proved to be able to discriminate up to 100 vs. 200 objects, without showing any significant decrease in accuracy compared with the 4 vs. 8 discrimination. The third experiment investigated the influence of the ratio between the numerosities. Performance was found to decrease when decreasing the numerical distance. Fish were able to discriminate numbers when ratios were 1:2 or 2:3 but not when the ratio was 3:4. The performance of a sample of undergraduate students, tested non-verbally using the same sets of stimuli, largely overlapped that of fish. CONCLUSIONS/SIGNIFICANCE: Fish are able to use pure numerical information when discriminating between quantities larger than 4 units. As observed in human and non-human primates, the numerical system of fish appears to have virtually no upper limit while the numerical ratio has a clear effect on performance. These similarities further reinforce the view of a common origin of non-verbal numerical systems in all

  5. Forecasting distribution of numbers of large fires

    Science.gov (United States)

    Haiganoush K. Preisler; Jeff Eidenshink; Stephen Howard; Robert E. Burgan

    2015-01-01

    Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however they indicate neither the chance that a large fire will occur, nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the...

  6. Thermocapillary Bubble Migration: Thermal Boundary Layers for Large Marangoni Numbers

    Science.gov (United States)

    Balasubramaniam, R.; Subramanian, R. S.

    1996-01-01

    The migration of an isolated gas bubble in an immiscible liquid possessing a temperature gradient is analyzed in the absence of gravity. The driving force for the bubble motion is the shear stress at the interface which is a consequence of the temperature dependence of the surface tension. The analysis is performed under conditions for which the Marangoni number is large, i.e. energy is transferred predominantly by convection. Velocity fields in the limit of both small and large Reynolds numbers are used. The thermal problem is treated by standard boundary layer theory. The outer temperature field is obtained in the vicinity of the bubble. A similarity solution is obtained for the inner temperature field. For both small and large Reynolds numbers, the asymptotic values of the scaled migration velocity of the bubble in the limit of large Marangoni numbers are calculated. The results show that the migration velocity has the same scaling for both low and large Reynolds numbers, but with a different coefficient. Higher order thermal boundary layers are analyzed for the large Reynolds number flow field and the higher order corrections to the migration velocity are obtained. Results are also presented for the momentum boundary layer and the thermal wake behind the bubble, for large Reynolds number conditions.

  7. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena

  8. Large numbers hypothesis. II - Electromagnetic radiation

    Science.gov (United States)

    Adams, P. J.

    1983-01-01

    This paper develops the theory of electromagnetic radiation in the units covariant formalism incorporating Dirac's large numbers hypothesis (LNH). A direct field-to-particle technique is used to obtain the photon propagation equation which explicitly involves the photon replication rate. This replication rate is fixed uniquely by requiring that the form of a free-photon distribution function be preserved, as required by the 2.7 K cosmic radiation. One finds that with this particular photon replication rate the units covariant formalism developed in Paper I actually predicts that the ratio of photon number to proton number in the universe varies as t to the 1/4, precisely in accord with LNH. The cosmological red-shift law is also derived and it is shown to differ considerably from the standard form of νR = const.

  9. Lepton number violation in theories with a large number of standard model copies

    International Nuclear Information System (INIS)

    Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich

    2011-01-01

    We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, the violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model, since, due to the low quantum gravity scale, black holes may induce TeV-scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_{B-L}. Then, due to the existence of a specific compensation mechanism between contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed, with rates far beyond experimental reach.

  10. The Application Law of Large Numbers That Predicts The Amount of Actual Loss in Insurance of Life

    Science.gov (United States)

    Tinungki, Georgina Maria

    2018-03-01

    The law of large numbers is a statistical concept used to calculate the average number of events or risks in a sample or population in order to predict something. The larger the population used in the calculation, the more accurate the prediction. In the field of insurance, the law of large numbers is used to predict the risk of loss or claims of participants so that the premium can be calculated appropriately. For example, if on average one out of every 100 insurance participants files an accident claim, then the premiums collected from 100 participants should be able to provide the sum assured for at least one accident claim. The larger the number of insurance participants included in the calculation, the more precise the prediction of claims and the calculation of the premium. Life insurance, as a tool for spreading risk, can only work if a life insurance company is able to bear similar risks in large numbers. Here the law of large numbers applies: it states that as the amount of exposure to losses increases, the predicted loss will be closer to the actual loss. The use of the law of large numbers therefore allows losses to be predicted more accurately.
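
    A minimal simulation sketch (illustrative only, not taken from the paper) of the principle described above: with a hypothetical claim probability of 1 in 100 and a hypothetical sum assured, the observed loss per policyholder approaches the expected loss as the portfolio grows.

```python
import numpy as np

rng = np.random.default_rng(0)

claim_prob = 0.01        # assumed: on average 1 claim per 100 insured lives
sum_assured = 100_000    # assumed benefit paid per claim

expected_loss = claim_prob * sum_assured   # theoretical loss per policyholder

for n_policyholders in (100, 1_000, 10_000, 100_000, 1_000_000):
    claims = rng.random(n_policyholders) < claim_prob             # which lives file a claim
    actual_loss = claims.sum() * sum_assured / n_policyholders    # observed loss per life
    print(f"n={n_policyholders:>9}: actual={actual_loss:9.2f}  expected={expected_loss:.2f}")
```

    As the number of insured lives grows, the per-policyholder loss stabilizes around the expected value, which is what lets a premium be set close to the expected claim cost.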

  11. 48 CFR 1652.204-74 - Large provider agreements.

    Science.gov (United States)

    2010-10-01

    ... FEDERAL EMPLOYEES HEALTH BENEFITS ACQUISITION REGULATION CLAUSES AND FORMS CONTRACT CLAUSES Texts of FEHBP... Large Provider Agreement; and (ii) Not less than 60 days before exercising a renewal or other option, or... exercising a simple renewal or other option contemplated by a Large Provider Agreement that OPM previously...

  12. Fatal crashes involving large numbers of vehicles and weather.

    Science.gov (United States)

    Wang, Ying; Liang, Liming; Evans, Leonard

    2017-12-01

    Adverse weather has been recognized as a significant threat to traffic safety. However, relationships between fatal crashes involving large numbers of vehicles and weather are rarely studied because of the low occurrence of crashes involving large numbers of vehicles. By using all 1,513,792 fatal crashes in the Fatality Analysis Reporting System (FARS) data, 1975-2014, we successfully described these relationships. We found: (a) fatal crashes involving more than 35 vehicles are most likely to occur in snow or fog; (b) fatal crashes in rain are three times as likely to involve 10 or more vehicles as fatal crashes in good weather; (c) fatal crashes in snow [or fog] are 24 times [35 times] as likely to involve 10 or more vehicles as fatal crashes in good weather. If the example had used 20 vehicles, the risk ratios would be 6 for rain, 158 for snow, and 171 for fog. To reduce the risk of involvement in fatal crashes with large numbers of vehicles, drivers should slow down more than they currently do under adverse weather conditions. Driver deaths per fatal crash increase slowly with increasing numbers of involved vehicles when it is snowing or raining, but more steeply when clear or foggy. We conclude that in order to reduce risk of involvement in crashes involving large numbers of vehicles, drivers must reduce speed in fog, and in snow or rain, reduce speed by even more than they already do. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
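
    A small worked example (with made-up counts, not FARS data) of how such a risk ratio is computed: the proportion of fatal crashes involving 10 or more vehicles under a given weather condition is divided by the same proportion in good weather.

```python
# Hypothetical crash counts; the ratios quoted in the abstract (3 for rain,
# 24 for snow, 35 for fog) come from the actual FARS data set.
counts = {
    #        (total fatal crashes, crashes involving >= 10 vehicles)
    "good": (1_000_000, 100),
    "rain": (100_000, 30),
}

p_good = counts["good"][1] / counts["good"][0]
p_rain = counts["rain"][1] / counts["rain"][0]
print(f"risk ratio, rain vs. good weather: {p_rain / p_good:.1f}")  # 3.0 with these counts
```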

  13. 48 CFR 1604.7201 - FEHB Program Large Provider Agreements.

    Science.gov (United States)

    2010-10-01

    ... FEDERAL EMPLOYEES HEALTH BENEFITS ACQUISITION REGULATION GENERAL ADMINISTRATIVE MATTERS Large Provider... into any Large Provider Agreement; and (ii) Not less than 60 days before exercising renewals or other...

  14. On a strong law of large numbers for monotone measures

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mohammadpour, A.; Mesiar, Radko; Ouyang, Y.

    2013-01-01

    Vol. 83, No. 4 (2013), pp. 1213-1218 ISSN 0167-7152 R&D Projects: GA ČR GAP402/11/0378 Institutional support: RVO:67985556 Keywords: capacity * Choquet integral * strong law of large numbers Subject RIV: BA - General Mathematics Impact factor: 0.531, year: 2013 http://library.utia.cas.cz/separaty/2013/E/mesiar-on a strong law of large numbers for monotone measures.pdf
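
    For readers unfamiliar with the Choquet integral named in the keywords, the following is a minimal sketch (illustrative values and capacity, not from the paper) of the discrete Choquet integral of a function with respect to a monotone measure (capacity).

```python
def choquet_integral(values, capacity):
    """Discrete Choquet integral of f: X -> R with respect to a capacity mu.

    values:   dict mapping each element x of X to f(x)
    capacity: function taking a frozenset A of X and returning mu(A),
              with mu(empty set) = 0 and mu monotone.
    """
    # Sort elements by increasing value: f(x_(1)) <= ... <= f(x_(n))
    items = sorted(values.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    for i, (_, v) in enumerate(items):
        upper = frozenset(x for x, _ in items[i:])   # {x_(i), ..., x_(n)}
        total += (v - prev) * capacity(upper)
        prev = v
    return total

# Illustrative non-additive capacity on X = {a, b}
mu = {frozenset(): 0.0, frozenset("a"): 0.4, frozenset("b"): 0.4, frozenset("ab"): 1.0}
print(choquet_integral({"a": 1.0, "b": 3.0}, lambda A: mu[A]))  # 1*1.0 + (3-1)*0.4 = 1.8
```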

  15. The large numbers hypothesis and a relativistic theory of gravitation

    International Nuclear Information System (INIS)

    Lau, Y.K.; Prokhovnik, S.J.

    1986-01-01

    A way to reconcile Dirac's large numbers hypothesis and Einstein's theory of gravitation was recently suggested by Lau (1985). It is characterized by the conjecture of a time-dependent cosmological term and gravitational term in Einstein's field equations. Motivated by this conjecture and the large numbers hypothesis, we formulate here a scalar-tensor theory in terms of an action principle. The cosmological term is required to be spatially dependent as well as time dependent in general. The theory developed is applied to a cosmological model compatible with the large numbers hypothesis. The time-dependent form of the cosmological term and the scalar potential are then deduced. A possible explanation of the smallness of the cosmological term is also given, and the possible significance of the scalar field is speculated upon.

  16. Automated flow cytometric analysis across large numbers of samples and cell types.

    Science.gov (United States)

    Chen, Xiaoyi; Hasan, Milena; Libri, Valentina; Urrutia, Alejandra; Beitz, Benoît; Rouilly, Vincent; Duffy, Darragh; Patin, Étienne; Chalmond, Bernard; Rogge, Lars; Quintana-Murci, Lluis; Albert, Matthew L; Schwikowski, Benno

    2015-04-01

    Multi-parametric flow cytometry is a key technology for characterization of immune cell phenotypes. However, robust high-dimensional post-analytic strategies for automated data analysis in large numbers of donors are still lacking. Here, we report a computational pipeline, called FlowGM, which minimizes operator input, is insensitive to compensation settings, and can be adapted to different analytic panels. A Gaussian Mixture Model (GMM)-based approach was utilized for initial clustering, with the number of clusters determined using Bayesian Information Criterion. Meta-clustering in a reference donor permitted automated identification of 24 cell types across four panels. Cluster labels were integrated into FCS files, thus permitting comparisons to manual gating. Cell numbers and coefficient of variation (CV) were similar between FlowGM and conventional gating for lymphocyte populations, but notably FlowGM provided improved discrimination of "hard-to-gate" monocyte and dendritic cell (DC) subsets. FlowGM thus provides rapid high-dimensional analysis of cell phenotypes and is amenable to cohort studies. Copyright © 2015. Published by Elsevier Inc.
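
    The GMM-with-BIC step described above can be sketched with scikit-learn; this is an illustrative reimplementation of the idea, not the authors' FlowGM code, and `fcs_matrix` stands in for an events-by-markers array read from an FCS file.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_events(fcs_matrix, max_components=30, seed=0):
    """Fit GMMs with 1..max_components clusters and keep the BIC-optimal model."""
    best_model, best_bic = None, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full", random_state=seed)
        gmm.fit(fcs_matrix)
        bic = gmm.bic(fcs_matrix)        # lower BIC = better trade-off of fit vs. complexity
        if bic < best_bic:
            best_model, best_bic = gmm, bic
    return best_model, best_model.predict(fcs_matrix)   # cluster label per event

# usage sketch (hypothetical file): events = np.loadtxt("panel1_events.csv", delimiter=",")
# model, labels = cluster_events(events)
```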

  17. Rotating thermal convection at very large Rayleigh numbers

    Science.gov (United States)

    Weiss, Stephan; van Gils, Dennis; Ahlers, Guenter; Bodenschatz, Eberhard

    2016-11-01

    The large-scale thermal convection systems in geo- and astrophysics are usually influenced by Coriolis forces caused by the rotation of their celestial bodies. To better understand the influence of rotation on the convective flow field and the heat transport under these conditions, we study Rayleigh-Bénard convection using pressurized sulfur hexafluoride (SF6) at up to 19 bars in a cylinder of diameter D = 1.12 m and height L = 2.24 m. The gas is heated from below and cooled from above, and the convection cell sits on a rotating table inside a large pressure vessel (the "Uboot of Göttingen"). With this setup Rayleigh numbers of up to Ra = 10^15 can be reached, while Ekman numbers as low as Ek = 10^-8 are possible. The Prandtl number in these experiments is kept constant at Pr = 0.8. We report on heat flux measurements (expressed by the Nusselt number Nu) as well as measurements from more than 150 temperature probes inside the flow. We thank the Deutsche Forschungsgemeinschaft (DFG) for financial support through SFB963: "Astrophysical Flow Instabilities and Turbulence". The work of GA was supported in part by the US National Science Foundation through Grant DMR11-58514.

  18. Hierarchies in Quantum Gravity: Large Numbers, Small Numbers, and Axions

    Science.gov (United States)

    Stout, John Eldon

    Our knowledge of the physical world is mediated by relatively simple, effective descriptions of complex processes. By their very nature, these effective theories obscure any phenomena outside their finite range of validity, discarding information crucial to understanding the full, quantum gravitational theory. However, we may gain enormous insight into the full theory by understanding how effective theories with extreme characteristics--for example, those which realize large-field inflation or have disparate hierarchies of scales--can be naturally realized in consistent theories of quantum gravity. The work in this dissertation focuses on understanding the quantum gravitational constraints on these "extreme" theories in well-controlled corners of string theory. Axion monodromy provides one mechanism for realizing large-field inflation in quantum gravity. These models spontaneously break an axion's discrete shift symmetry and, assuming that the corrections induced by this breaking remain small throughout the excursion, create a long, quasi-flat direction in field space. This weakly-broken shift symmetry has been used to construct a dynamical solution to the Higgs hierarchy problem, dubbed the "relaxion." We study this relaxion mechanism and show that--without major modifications--it can not be naturally embedded within string theory. In particular, we find corrections to the relaxion potential--due to the ten-dimensional backreaction of monodromy charge--that conflict with naive notions of technical naturalness and render the mechanism ineffective. The super-Planckian field displacements necessary for large-field inflation may also be realized via the collective motion of many aligned axions. However, it is not clear that string theory provides the structures necessary for this to occur. We search for these structures by explicitly constructing the leading order potential for C4 axions and computing the maximum possible field displacement in all compactifications of

  19. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number.

    Science.gov (United States)

    Klewicki, J C; Chini, G P; Gibson, J F

    2017-03-13

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier-Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted.This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  20. Forecasting distribution of numbers of large fires

    Science.gov (United States)

    Eidenshink, Jeffery C.; Preisler, Haiganoush K.; Howard, Stephen; Burgan, Robert E.

    2014-01-01

    Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however they indicate neither the chance that a large fire will occur, nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the Monitoring Trends in Burn Severity project, and satellite and surface observations of fuel conditions in the form of the Fire Potential Index, to estimate two aspects of fire danger: 1) the probability that a 1 acre ignition will result in a 100+ acre fire, and 2) the probabilities of having at least 1, 2, 3, or 4 large fires within a Predictive Services Area in the forthcoming week. These statistical processes are the main thrust of the paper and are used to produce two daily national forecasts that are available from the U.S. Geological Survey, Earth Resources Observation and Science Center and via the Wildland Fire Assessment System. A validation study of our forecasts for the 2013 fire season demonstrated good agreement between observed and forecasted values.
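
    A hedged sketch of the second forecast quantity: if the expected number of large fires in a Predictive Services Area for the coming week is λ, a simple Poisson count model gives the probability of at least 1, 2, 3 or 4 large fires. The λ value below is assumed for illustration, and the paper's actual statistical model is not reproduced here.

```python
from math import exp, factorial

def prob_at_least(k, lam):
    """P(N >= k) for a Poisson-distributed count N with mean lam."""
    return 1.0 - sum(exp(-lam) * lam**j / factorial(j) for j in range(k))

lam = 0.8  # assumed expected number of large fires in the area this week
for k in (1, 2, 3, 4):
    print(f"P(at least {k} large fires) = {prob_at_least(k, lam):.3f}")
```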

  1. Teaching Multiplication of Large Positive Whole Numbers Using ...

    African Journals Online (AJOL)

    This study investigated the teaching of multiplication of large positive whole numbers using the grating method and the effect of this method on students' performance in junior secondary schools. The study was conducted in Obio Akpor Local Government Area of Rivers state. It was quasi- experimental. Two research ...

  2. Law of Large Numbers: the Theory, Applications and Technology-based Education.

    Science.gov (United States)

    Dinov, Ivo D; Christou, Nicolas; Gould, Robert

    2009-03-01

    Modern approaches for technology-based blended education utilize a variety of recently developed novel pedagogical, computational and network resources. Such attempts employ technology to deliver integrated, dynamically-linked, interactive-content and heterogeneous learning environments, which may improve student comprehension and information retention. In this paper, we describe one such innovative effort of using technological tools to expose students in probability and statistics courses to the theory, practice and usability of the Law of Large Numbers (LLN). We base our approach on integrating pedagogical instruments with the computational libraries developed by the Statistics Online Computational Resource (www.SOCR.ucla.edu). To achieve this merger we designed a new interactive Java applet and a corresponding demonstration activity that illustrate the concept and the applications of the LLN. The LLN applet and activity have common goals - to provide graphical representation of the LLN principle, build lasting student intuition and present the common misconceptions about the law of large numbers. Both the SOCR LLN applet and activity are freely available online to the community to test, validate and extend (Applet: http://socr.ucla.edu/htmls/exp/Coin_Toss_LLN_Experiment.html, and Activity: http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_LLN).
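
    The coin-toss experiment behind the SOCR applet can be reproduced in a few lines; this sketch (a plain reimplementation of the idea, not the applet itself) tracks the running proportion of heads, which settles near the true probability as the number of tosses grows.

```python
import numpy as np

rng = np.random.default_rng(1)
tosses = rng.random(100_000) < 0.5                     # fair-coin tosses (True = heads)
running_proportion = np.cumsum(tosses) / np.arange(1, tosses.size + 1)
for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"after {n:>6} tosses: proportion of heads = {running_proportion[n - 1]:.4f}")
```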

  3. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number

    Science.gov (United States)

    Klewicki, J. C.; Chini, G. P.; Gibson, J. F.

    2017-01-01

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier–Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167585

  4. Lovelock inflation and the number of large dimensions

    CERN Document Server

    Ferrer, Francesc

    2007-01-01

    We discuss an inflationary scenario based on Lovelock terms. These higher order curvature terms can lead to inflation when there are more than three spatial dimensions. Inflation will end if the extra dimensions are stabilised, so that at most three dimensions are free to expand. This relates graceful exit to the number of large dimensions.

  5. [Dual process in large number estimation under uncertainty].

    Science.gov (United States)

    Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento

    2016-08-01

    According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such deliberative process by System 2 on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.

  6. Factors Affecting Number of Diabetes Management Activities Provided by Pharmacists.

    Science.gov (United States)

    Lo, Annie; Lorenz, Kathleen; Cor, Ken; Simpson, Scot H

    2016-12-01

    Legislative changes since 2007 have given Alberta pharmacists additional authorizations and new practice settings, which should enhance provision of clinical services to patients. This study examined whether these changes are related to the number of diabetes management activities provided by pharmacists. Cross-sectional surveys of Alberta pharmacists were conducted in 2006 and 2015. Both questionnaires contained 63 diabetes management activities, with response options to indicate how frequently the activity was provided. Respondents were grouped by survey year, practice setting, diabetes-specific training and additional authorizations. The number of diabetes management activities provided often or always were compared among groups by using analysis of variance. Data from 128 pharmacists participating in the 2006 survey were compared with 256 pharmacists participating in the 2015 survey; overall mean age was 41.6 (±10.9) years, 245 (64%) were women, mean duration of practice was 16.1 (±11.8) years, 280 (73%) were community pharmacists, 75 (20%) were certified diabetes educators (CDEs), and 100 (26%) had additional prescribing authorization (APA). Pharmacists provided a mean of 28.7 (95% CI 26.3 to 31.2) diabetes management activities in 2006 and 35.2 (95% CI 33.4-37.0) activities in 2015 (p<0.001). Pharmacists who were CDEs provided significantly more activities compared to other pharmacists (p<0.001). In 2015, working in a primary care network and having APA were also associated with provision of more activities (p<0.05 for both comparisons). Pharmacists provided more diabetes management activities in 2015 than in 2006. The number of diabetes management activities was also associated with being a CDE, working in a primary care network or having APA. Copyright © 2016 Canadian Diabetes Association. Published by Elsevier Inc. All rights reserved.

  7. Providing cell phone numbers and email addresses to Patients: the physician's perspective

    Science.gov (United States)

    2011-01-01

    Background The provision of cell phone numbers and email addresses enhances the accessibility of medical consultations, but can add to the burden of physicians' routine clinical practice and affect their free time. The objective was to assess the attitudes of physicians to providing their telephone number or email address to patients. Methods Primary care physicians in the southern region of Israel completed a structured questionnaire that related to the study objective. Results The study population included 120 primary care physicians with a mean age of 41.2 ± 8.5, 88 of them women (73.3%). Physicians preferred to provide their cell phone number rather than their email address (P = 0.0007). They preferred to answer their cell phones only during the daytime and at predetermined times, but would answer email most hours of the day, including weekends and holidays (P = 0.001). More physicians (79.7%) would have preferred allotted time for email communication than allotted time for cell phone communication (50%). However, they felt that email communication was more likely to lead to miscommunication than telephone calls (P = 0.0001). There were no differences between male and female physicians on the provision of cell phone numbers or email addresses to patients. Older physicians were more prepared to provide cell phone numbers than younger ones (P = 0.039). Conclusions The attitude of participating physicians was to provide their cell phone number or email address to some of their patients, but most of them preferred to give out their cell phone number. PMID:21426591

  8. Providing cell phone numbers and email addresses to Patients: the physician's perspective

    Directory of Open Access Journals (Sweden)

    Freud Tamar

    2011-03-01

    Full Text Available Abstract Background The provision of cell phone numbers and email addresses enhances the accessibility of medical consultations, but can add to the burden of physicians' routine clinical practice and affect their free time. The objective was to assess the attitudes of physicians to providing their telephone number or email address to patients. Methods Primary care physicians in the southern region of Israel completed a structured questionnaire that related to the study objective. Results The study population included 120 primary care physicians with a mean age of 41.2 ± 8.5, 88 of them women (73.3%). Physicians preferred to provide their cell phone number rather than their email address (P = 0.0007). They preferred to answer their cell phones only during the daytime and at predetermined times, but would answer email most hours of the day, including weekends and holidays (P = 0.001). More physicians (79.7%) would have preferred allotted time for email communication than allotted time for cell phone communication (50%). However, they felt that email communication was more likely to lead to miscommunication than telephone calls (P = 0.0001). There were no differences between male and female physicians on the provision of cell phone numbers or email addresses to patients. Older physicians were more prepared to provide cell phone numbers than younger ones (P = 0.039). Conclusions The attitude of participating physicians was to provide their cell phone number or email address to some of their patients, but most of them preferred to give out their cell phone number.

  9. On Independence for Capacities with Law of Large Numbers

    OpenAIRE

    Huang, Weihuan

    2017-01-01

    This paper introduces new notions of Fubini independence and Exponential independence of random variables under capacities to fit Ellsberg's model, and finds the relationships between Fubini independence, Exponential independence, Maccheroni and Marinacci's independence and Peng's independence. As an application, we give a weak law of large numbers for capacities under Exponential independence.

  10. Automatic trajectory measurement of large numbers of crowded objects

    Science.gov (United States)

    Li, Hui; Liu, Ye; Chen, Yan Qiu

    2013-06-01

    Complex motion patterns of natural systems, such as fish schools, bird flocks, and cell groups, have attracted great attention from scientists for years. Trajectory measurement of individuals is vital for quantitative and high-throughput study of their collective behaviors. However, such data are rare mainly due to the challenges of detection and tracking of large numbers of objects with similar visual features and frequent occlusions. We present an automatic and effective framework to measure trajectories of large numbers of crowded oval-shaped objects, such as fish and cells. We first use a novel dual ellipse locator to detect the coarse position of each individual and then propose a variance minimization active contour method to obtain the optimal segmentation results. For tracking, cost matrix of assignment between consecutive frames is trainable via a random forest classifier with many spatial, texture, and shape features. The optimal trajectories are found for the whole image sequence by solving two linear assignment problems. We evaluate the proposed method on many challenging data sets.
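
    The frame-to-frame linking step mentioned above (solving a linear assignment problem over a cost matrix) can be sketched with SciPy; the plain Euclidean-distance cost below is a placeholder for the random-forest-learned cost described in the abstract.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_detections(prev_positions, curr_positions):
    """Match detections in consecutive frames by minimizing total assignment cost."""
    # Cost matrix: plain Euclidean distance here; the paper trains a richer cost
    # from spatial, texture and shape features with a random forest classifier.
    cost = np.linalg.norm(prev_positions[:, None, :] - curr_positions[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)           # optimal one-to-one matching
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

# usage sketch with two detections per frame: object 0 moved to index 1, and vice versa
print(link_detections(np.array([[0.0, 0.0], [5.0, 5.0]]),
                      np.array([[5.2, 4.9], [0.1, -0.2]])))    # [(0, 1), (1, 0)]
```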

  11. A full picture of large lepton number asymmetries of the Universe

    Energy Technology Data Exchange (ETDEWEB)

    Barenboim, Gabriela [Departament de Física Teòrica and IFIC, Universitat de València-CSIC, C/ Dr. Moliner, 50, Burjassot, E-46100 Spain (Spain); Park, Wan-Il, E-mail: Gabriela.Barenboim@uv.es, E-mail: wipark@jbnu.ac.kr [Department of Science Education (Physics), Chonbuk National University, 567 Baekje-daero, Jeonju, 561-756 (Korea, Republic of)

    2017-04-01

    A large lepton number asymmetry of O(0.1−1) in the present Universe might not only be allowed but also necessary for consistency among cosmological data. We show that, if a sizeable lepton number asymmetry were produced before the electroweak phase transition, the requirement of not producing too much baryon number asymmetry through sphaleron processes forces the high-scale lepton number asymmetry to be larger than about 0.3. Therefore a mild entropy release causing O(10-100) suppression of pre-existing particle density should take place when the background temperature of the Universe is around T = O(10^{-2}-10^{2}) GeV for a large but experimentally consistent asymmetry to be present today. We also show that such a mild entropy production can be obtained by the late-time decays of the saxion, constraining the parameters of the Peccei-Quinn sector, such as the mass and the vacuum expectation value of the saxion field, to be m_φ ≳ O(10) TeV and φ_0 ≳ O(10^{14}) GeV, respectively.

  12. Similarities between 2D and 3D convection for large Prandtl number

    Indian Academy of Sciences (India)

    2016-06-18

    In Rayleigh–Bénard convection (RBC), we perform a comparative study of the spectra and fluxes of energy and entropy, and the scaling of large-scale quantities, for large and infinite Prandtl numbers in two (2D) and three (3D) dimensions. We observe close ...

  13. The three-large-primes variant of the number field sieve

    NARCIS (Netherlands)

    S.H. Cavallar

    2002-01-01

    The Number Field Sieve (NFS) is the asymptotically fastest known factoring algorithm for large integers. This method was proposed by John Pollard in 1988. Since then several variants have been implemented with the objective of improving the siever, which is the most time-consuming part of

  14. Secret Sharing Schemes with a large number of players from Toric Varieties

    DEFF Research Database (Denmark)

    Hansen, Johan P.

    A general theory for constructing linear secret sharing schemes over a finite field $\mathbb{F}_q$ from toric varieties is introduced. The number of players can be as large as $(q-1)^r-1$ for $r\geq 1$. We present general methods for obtaining the reconstruction and privacy thresholds as well as conditions for multiplication on the associated secret sharing schemes. In particular we apply the method on certain toric surfaces. The main results are ideal linear secret sharing schemes where the number of players can be as large as $(q-1)^2-1$. We determine bounds for the reconstruction and privacy thresholds...

  15. Quasi-isodynamic configuration with large number of periods

    International Nuclear Information System (INIS)

    Shafranov, V.D.; Isaev, M.Yu.; Mikhailov, M.I.; Subbotin, A.A.; Cooper, W.A.; Kalyuzhnyj, V.N.; Kasilov, S.V.; Nemov, V.V.; Kernbichler, W.; Nuehrenberg, C.; Nuehrenberg, J.; Zille, R.

    2005-01-01

    It has been previously reported that quasi-isodynamic (qi) stellarators with poloidal direction of the contours of B on the magnetic surface can exhibit very good fast-particle collisionless confinement. In addition, approaching the quasi-isodynamicity condition leads to diminished neoclassical transport and small bootstrap current. The calculations of local-mode stability show that there is a tendency toward an increasing beta limit with increasing number of periods. The consideration of the quasi-helically symmetric systems has demonstrated that with increasing aspect ratio (and number of periods) the optimized configuration approaches the straight symmetric counterpart, for which the optimal parameters and highest beta values were found by optimization of the boundary magnetic surface cross-section. The qi system considered here with zero net toroidal current does not have a symmetric analogue in the limit of large aspect ratio and finite rotational transform. Thus, it is not clear whether some invariant structure of the configuration period exists in the limit of negligible toroidal effect and what the best possible parameters for it are. In the present paper the results of an optimization of the configuration with N = 12 periods are presented. Such properties as fast-particle confinement, effective ripple, structural factor of bootstrap current and MHD stability are considered. It is shown that the MHD stability limit here is larger than in configurations with smaller numbers of periods considered earlier. Nevertheless, the toroidal effect in this configuration is still significant, so that a simple increase of the number of periods and proportional growth of aspect ratio do not conserve favourable neoclassical transport and ideal local-mode stability properties. (author)

  16. Recreating Raven's: software for systematically generating large numbers of Raven-like matrix problems with normed properties.

    Science.gov (United States)

    Matzen, Laura E; Benz, Zachary O; Dixon, Kevin R; Posey, Jamie; Kroger, James K; Speed, Ann E

    2010-05-01

    Raven's Progressive Matrices is a widely used test for assessing intelligence and reasoning ability (Raven, Court, & Raven, 1998). Since the test is nonverbal, it can be applied to many different populations and has been used all over the world (Court & Raven, 1995). However, relatively few matrices are in the sets developed by Raven, which limits their use in experiments requiring large numbers of stimuli. For the present study, we analyzed the types of relations that appear in Raven's original Standard Progressive Matrices (SPMs) and created a software tool that can combine the same types of relations according to parameters chosen by the experimenter, to produce very large numbers of matrix problems with specific properties. We then conducted a norming study in which the matrices we generated were compared with the actual SPMs. This study showed that the generated matrices both covered and expanded on the range of problem difficulties provided by the SPMs.

  17. Fluid Mechanics of Aquatic Locomotion at Large Reynolds Numbers

    OpenAIRE

    Govardhan, RN; Arakeri, JH

    2011-01-01

    There exists a huge range of fish species, besides other aquatic organisms like squids and salps, that locomote in water at large Reynolds numbers, a regime of flow where inertial forces dominate viscous forces. In the present review, we discuss the fluid mechanics governing the locomotion of such organisms. Most fishes propel themselves by periodic undulatory motions of the body and tail, and the typical classification of their swimming modes is based on the fraction of their body...

  18. Combining large number of weak biomarkers based on AUC.

    Science.gov (United States)

    Yan, Li; Tian, Lili; Liu, Song

    2015-12-20

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.
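
    A minimal sketch of scoring a linear combination of many weak markers by AUC (illustrative code, not the authors' pairwise algorithm); a logistic-regression fit stands in for the combination weights, and the simulated data are purely hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n, p = 200, 50                                   # modest sample size, many weak markers
X = rng.normal(size=(n, p))
y = (X[:, :10].sum(axis=1) + rng.normal(scale=4.0, size=n) > 0).astype(int)  # weak signal

weights = LogisticRegression(max_iter=1000).fit(X, y).coef_.ravel()  # stand-in combination
score = X @ weights                              # linear combination of the markers
print("AUC of the combined marker:", round(roc_auc_score(y, score), 3))
```

    In practice the combination would be estimated and validated on separate data, which is exactly the small-sample, weak-marker setting the paper's simulations examine.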

  19. Very Large Data Volumes Analysis of Collaborative Systems with Finite Number of States

    Science.gov (United States)

    Ivan, Ion; Ciurea, Cristian; Pavel, Sorin

    2010-01-01

    The collaborative system with finite number of states is defined. A very large database is structured. Operations on large databases are identified. Repetitive procedures for collaborative systems operations are derived. The efficiency of such procedures is analyzed. (Contains 6 tables, 5 footnotes and 3 figures.)

  20. Optimal number of coarse-grained sites in different components of large biomolecular complexes.

    Science.gov (United States)

    Sinitskiy, Anton V; Saunders, Marissa G; Voth, Gregory A

    2012-07-26

    The computational study of large biomolecular complexes (molecular machines, cytoskeletal filaments, etc.) is a formidable challenge facing computational biophysics and biology. To achieve biologically relevant length and time scales, coarse-grained (CG) models of such complexes usually must be built and employed. One of the important early stages in this approach is to determine an optimal number of CG sites in different constituents of a complex. This work presents a systematic approach to this problem. First, a universal scaling law is derived and numerically corroborated for the intensity of the intrasite (intradomain) thermal fluctuations as a function of the number of CG sites. Second, this result is used for derivation of the criterion for the optimal number of CG sites in different parts of a large multibiomolecule complex. In the zeroth-order approximation, this approach validates the empirical rule of taking one CG site per fixed number of atoms or residues in each biomolecule, previously widely used for smaller systems (e.g., individual biomolecules). The first-order corrections to this rule are derived and numerically checked by the case studies of the Escherichia coli ribosome and Arp2/3 actin filament junction. In different ribosomal proteins, the optimal number of amino acids per CG site is shown to differ by a factor of 3.5, and an even wider spread may exist in other large biomolecular complexes. Therefore, the method proposed in this paper is valuable for the optimal construction of CG models of such complexes.

  1. Improving CASINO performance for models with large number of electrons

    International Nuclear Information System (INIS)

    Anton, L.; Alfe, D.; Hood, R.Q.; Tanqueray, D.

    2009-01-01

    Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers which are straightforward to use on parallel computers. Nevertheless some computations have reached the limit of the memory resources for models with more than 1000 electrons because of the need to store a large amount of electronic-orbital data. Besides that, for systems with a large number of electrons, it is interesting to study whether the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) distributed orbital data done with MPI or Unix inter-process communication tools, (2) second level parallelism for configuration computation
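
    A hedged mpi4py sketch of the first strategy (distributing the orbital-coefficient array across MPI processes instead of replicating it on every rank); the array names and sizes are made up for illustration and do not come from CASINO.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_orbitals, n_coeffs = 1200, 4096                 # hypothetical problem size
my_rows = list(range(rank, n_orbitals, size))     # round-robin split of orbitals over ranks

# Each rank stores only its own slice of the orbital table rather than a full replica,
# so the per-process memory footprint drops roughly by a factor of `size`.
local_orbitals = np.zeros((len(my_rows), n_coeffs))

# When rank 0 needs orbital j owned by another rank, it is fetched on demand.
j = 7
owner = j % size
if rank == owner and owner != 0:
    comm.Send(local_orbitals[my_rows.index(j)], dest=0, tag=j)
if rank == 0 and owner != 0:
    orbital_j = np.empty(n_coeffs)
    comm.Recv(orbital_j, source=owner, tag=j)
```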

  2. Characterization of General TCP Traffic under a Large Number of Flows Regime

    National Research Council Canada - National Science Library

    Tinnakornsrisuphap, Peerapol; La, Richard J; Makowski, Armand M

    2002-01-01

    .... Accurate traffic modeling of a large number of short-lived TCP flows is extremely difficult due to the interaction between session, transport, and network layers, and the explosion of the size...

  3. Administrative and clinical denials by a large dental insurance provider

    Directory of Open Access Journals (Sweden)

    Geraldo Elias MIRANDA

    2015-01-01

    Full Text Available The objective of this study was to assess the prevalence and the type of claim denials (administrative, clinical or both) made by a large dental insurance plan. This was a cross-sectional, observational study, which retrospectively collected data from the claims and denial reports of a dental insurance company. The sample consisted of the payment claims submitted by network dentists, based on their procedure reports, reviewed in the third trimester of 2012. The denials were classified and grouped into ‘administrative’, ‘clinical’ or ‘both’. The data were tabulated and submitted to uni- and bivariate analyses. The confidence intervals were 95% and the level of significance was set at 5%. The overall frequency of denials was 8.2% of the total number of procedures performed. The frequency of administrative denials was 72.88%, whereas that of technical denials was 25.95% and that of both, 1.17% (p < 0.05). It was concluded that the overall prevalence of denials in the studied sample was low. Administrative denials were the most prevalent. This type of denial could be reduced if all dental insurance providers had unified clinical and administrative protocols, and if dentists submitted all of the required documentation in accordance with these protocols.

  4. Modified large number theory with constant G

    International Nuclear Information System (INIS)

    Recami, E.

    1983-01-01

    The inspiring "numerology" uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the "gravitational world" (cosmos) with the "strong world" (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the "Large Number Theory," cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R̄/r̄ of the cosmos typical length R̄ to the hadron typical length r̄ is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the "cyclical big-bang" hypothesis, then R̄ and r̄ can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavsic.

  5. Breakdown of the law of large numbers in Josephson junction series arrays

    International Nuclear Information System (INIS)

    Dominguez, D.; Cerdeira, H.A.

    1994-01-01

    We study underdamped Josephson junction series arrays that are globally coupled through a resistive shunting load and driven by an rf bias current. We find that they can be an experimental realization of many phenomena currently studied in globally coupled logistic maps. We find coherent, ordered, partially ordered and turbulent phases in the IV characteristics of the array. The ordered phase corresponds to giant Shapiro steps. In the turbulent phase there is a saturation of the broad band noise for a large number of junctions. This corresponds to a breakdown of the law of large numbers as seen in the globally coupled maps. Coexisting with this, we find an emergence of novel pseudo-steps in the IV characteristics. This effect can be experimentally distinguished from the Shapiro steps, which do not have broad band noise emission. (author). 21 refs, 5 figs

  6. A Characterization of Hypergraphs with Large Domination Number

    Directory of Open Access Journals (Sweden)

    Henning Michael A.

    2016-05-01

    Full Text Available Let H = (V, E) be a hypergraph with vertex set V and edge set E. A dominating set in H is a subset of vertices D ⊆ V such that for every vertex v ∈ V \ D there exists an edge e ∈ E for which v ∈ e and e ∩ D ≠ ∅. The domination number γ(H) is the minimum cardinality of a dominating set in H. It is known [Cs. Bujtás, M.A. Henning and Zs. Tuza, Transversals and domination in uniform hypergraphs, European J. Combin. 33 (2012) 62-71] that for k ≥ 5, if H is a hypergraph of order n and size m with all edges of size at least k and with no isolated vertex, then γ(H) ≤ (n + ⌊(k − 3)/2⌋m)/⌊3(k − 1)/2⌋. In this paper, we apply a recent result of the authors on hypergraphs with large transversal number [M.A. Henning and C. Löwenstein, A characterization of hypergraphs that achieve equality in the Chvátal-McDiarmid Theorem, Discrete Math. 323 (2014) 69-75] to characterize the hypergraphs achieving equality in this bound.
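
    A small sketch that makes the quoted bound concrete: it checks whether a vertex subset dominates a hypergraph and evaluates the right-hand side (n + ⌊(k − 3)/2⌋m)/⌊3(k − 1)/2⌋. The tiny example hypergraph and the function names are made up for illustration.

```python
from math import floor

def is_dominating(vertices, edges, D):
    """D dominates H if every vertex outside D lies in an edge that meets D."""
    D = set(D)
    return all(v in D or any(v in e and D & set(e) for e in edges) for v in vertices)

def domination_upper_bound(n, m, k):
    """Upper bound on gamma(H) for hypergraphs with all edges of size >= k (k >= 5)."""
    return (n + floor((k - 3) / 2) * m) / floor(3 * (k - 1) / 2)

# Illustrative hypergraph: 6 vertices, two 5-element edges
vertices = range(6)
edges = [{0, 1, 2, 3, 4}, {1, 2, 3, 4, 5}]
print(is_dominating(vertices, edges, D={1}))      # True: every other vertex shares an edge with 1
print(domination_upper_bound(n=6, m=2, k=5))      # (6 + 2)/6 ≈ 1.33, so gamma(H) <= 1 here
```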

  7. Direct and large eddy simulation of turbulent heat transfer at very low Prandtl number: Application to lead–bismuth flows

    International Nuclear Information System (INIS)

    Bricteux, L.; Duponcheel, M.; Winckelmans, G.; Tiselj, I.; Bartosiewicz, Y.

    2012-01-01

    Highlights: ► We perform direct and hybrid large-eddy simulations of high-Reynolds-number, low-Prandtl-number turbulent wall-bounded flows with heat transfer. ► We use state-of-the-art numerical methods with low energy dissipation and low dispersion. ► We use recent multiscale subgrid-scale models. ► Important results concerning the establishment of a near-wall modeling strategy in RANS are provided. ► The turbulent Prandtl number predicted by our simulations differs from that proposed by some correlations in the literature. - Abstract: This paper deals with the issue of modeling convective turbulent heat transfer of a liquid metal with a Prandtl number down to 0.01, which is the order of magnitude of lead–bismuth eutectic in a liquid metal reactor. This work presents a DNS (direct numerical simulation) and a LES (large eddy simulation) of a channel flow at two different Reynolds numbers, and the results are analyzed in the frame of best practice guidelines for RANS (Reynolds averaged Navier–Stokes) computations used in industrial applications. They primarily show that the turbulent Prandtl number concept should be used with care and that even recent proposed correlations may not be sufficient.

  8. Loss of locality in gravitational correlators with a large number of insertions

    Science.gov (United States)

    Ghosh, Sudip; Raju, Suvrat

    2017-09-01

    We review lessons from the AdS/CFT correspondence that indicate that the emergence of locality in quantum gravity is contingent upon considering observables with a small number of insertions. Correlation functions, where the number of insertions scales with a power of the central charge of the CFT, are sensitive to nonlocal effects in the bulk theory, which arise from a combination of the effects of the bulk Gauss law and a breakdown of perturbation theory. To examine whether a similar effect occurs in flat space, we consider the scattering of massless particles in the bosonic string and the superstring in the limit where the number of external particles, n, becomes very large. We use estimates of the volume of the Weil-Petersson moduli space of punctured Riemann surfaces to argue that string amplitudes grow factorially in this limit. We verify this factorial behavior through an extensive numerical analysis of string amplitudes at large n. Our numerical calculations rely on the observation that, in the large n limit, the string scattering amplitude localizes on the Gross-Mende saddle points, even though individual particle energies are small. This factorial growth implies the breakdown of string perturbation theory for n ~ (M_pl/E)^(d-2) in d dimensions, where E is the typical individual particle energy. We explore the implications of this breakdown for the black hole information paradox. We show that the loss of locality suggested by this breakdown is precisely sufficient to resolve the cloning and strong subadditivity paradoxes.

  9. Calculation of large Reynolds number two-dimensional flow using discrete vortices with random walk

    International Nuclear Information System (INIS)

    Milinazzo, F.; Saffman, P.G.

    1977-01-01

    The numerical calculation of two-dimensional rotational flow at large Reynolds number is considered. The method of replacing a continuous distribution of vorticity by a finite number, N, of discrete vortices is examined, where the vortices move under their mutually induced velocities plus a random component to simulate effects of viscosity. The accuracy of the method is studied by comparison with the exact solution for the decay of a circular vortex. It is found, and analytical arguments are produced in support, that the quantitative error is significant unless N is large compared with a characteristic Reynolds number. The mutually induced velocities are calculated by both direct summation and by the ''cloud in cell'' technique. The latter method is found to produce comparable error and to be much faster
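
    A heavily simplified sketch of the discrete-vortex idea described above (illustrative only, not the authors' scheme): point vortices advect one another through the 2D Biot–Savart kernel, and an added Gaussian random walk of variance 2νΔt per step mimics viscous diffusion.

```python
import numpy as np

def step(positions, strengths, nu, dt, rng):
    """Advance N point vortices one time step: induced velocity plus random walk."""
    dx = positions[:, None, :] - positions[None, :, :]      # pairwise separations
    r2 = (dx ** 2).sum(axis=-1)
    np.fill_diagonal(r2, np.inf)                            # a vortex does not advect itself
    # 2D Biot-Savart kernel: velocity contribution ~ Gamma_j/(2*pi) * (-dy, dx)/r^2
    kernel = np.stack([-dx[..., 1], dx[..., 0]], axis=-1) / r2[..., None]
    velocity = (strengths[None, :, None] / (2 * np.pi) * kernel).sum(axis=1)
    noise = rng.normal(scale=np.sqrt(2 * nu * dt), size=positions.shape)  # viscous diffusion
    return positions + velocity * dt + noise

rng = np.random.default_rng(3)
pos = rng.normal(size=(200, 2))          # 200 vortices seeded near the origin
gam = np.full(200, 1.0 / 200)            # equal strengths summing to unit circulation
for _ in range(100):
    pos = step(pos, gam, nu=1e-3, dt=0.01, rng=rng)
```

    As the comparison with the decaying circular vortex in the abstract indicates, the statistical error of such a scheme stays significant unless N is large compared with a characteristic Reynolds number.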

  10. A large number of stepping motor network construction by PLC

    Science.gov (United States)

    Mei, Lin; Zhang, Kai; Hongqiang, Guo

    2017-11-01

    In a flexible automatic line the equipment is complex and the control modes are varied, so realizing information interaction and orderly control for a large number of stepping and servo motors becomes a difficult control problem. Based on an existing flexible production line, this paper makes a comparative study of its network strategy. Following this research, an Ethernet + PROFIBUS communication configuration based on PROFINET IO and PROFIBUS is proposed, which can effectively improve the data interaction efficiency of the equipment and stabilize the exchanged data.

  11. The large number hypothesis and Einstein's theory of gravitation

    International Nuclear Information System (INIS)

    Yun-Kau Lau

    1985-01-01

    In an attempt to reconcile the large number hypothesis (LNH) with Einstein's theory of gravitation, a tentative generalization of Einstein's field equations with time-dependent cosmological and gravitational constants is proposed. A cosmological model consistent with the LNH is deduced. The coupling formula of the cosmological constant with matter is found, and as a consequence, the time-dependent formulae of the cosmological constant and the mean matter density of the Universe at the present epoch are then found. Einstein's theory of gravitation, whether with a zero or nonzero cosmological constant, becomes a limiting case of the new generalized field equations after the early epoch

  12. Radioimmunoassay of h-TSH - methodological suggestions for dealing with medium to large numbers of samples

    International Nuclear Information System (INIS)

    Mahlstedt, J.

    1977-01-01

    The article deals with practical aspects of establishing a TSH-RIA for patients, with particular regard to predetermined quality criteria. Methodological suggestions are made for medium to large numbers of samples, with the aim of reducing monotonous high-precision working steps by means of simple aids. The required quality criteria are well met, while the test procedure is well adapted to the rhythm of work and may be carried out without loss of precision even with large numbers of samples. (orig.) [de]

  13. Gentile statistics with a large maximum occupation number

    International Nuclear Information System (INIS)

    Dai Wusheng; Xie Mi

    2004-01-01

    In Gentile statistics the maximum occupation number can take on unrestricted integer values, 1 < n < ∞; however, the Bose-Einstein case is not recovered from Gentile statistics as n goes to N. Attention is also concentrated on the contribution of the ground state, which was ignored in the related literature. The thermodynamic behavior of a ν-dimensional Gentile ideal gas of particles with dispersion E = p^s/2m, where ν and s are arbitrary, is analyzed in detail. Moreover, we provide an alternative derivation of the partition function for Gentile statistics.

  14. Small genomes and large seeds: chromosome numbers, genome size and seed mass in diploid Aesculus species (Sapindaceae).

    Science.gov (United States)

    Krahulcová, Anna; Trávnícek, Pavel; Krahulec, František; Rejmánek, Marcel

    2017-04-01

    Aesculus L. (horse chestnut, buckeye) is a genus of 12-19 extant woody species native to the temperate Northern Hemisphere. This genus is known for unusually large seeds among angiosperms. While chromosome counts are available for many Aesculus species, only one has had its genome size measured. The aim of this study is to provide more genome size data and analyse the relationship between genome size and seed mass in this genus. Chromosome numbers in root tip cuttings were confirmed for four species and reported for the first time for three additional species. Flow cytometric measurements of 2C nuclear DNA values were conducted on eight species, and mean seed mass values were estimated for the same taxa. The same chromosome number, 2n = 40, was determined in all investigated taxa. Original measurements of 2C values for seven Aesculus species (eight taxa), added to just one reliable datum for A. hippocastanum, confirmed the notion that the genome size in this genus with relatively large seeds is surprisingly low, ranging from 0.955 pg/2C in A. parviflora to 1.275 pg/2C in A. glabra var. glabra. The chromosome number of 2n = 40 seems to be conclusively the universal 2n number for non-hybrid species in this genus. Aesculus genome sizes are relatively small, not only within its own family, Sapindaceae, but also within woody angiosperms. The genome sizes seem to be distinct and non-overlapping among the four major Aesculus clades. These results provide extra support for the most recent reconstruction of Aesculus phylogeny. The correlation between the 2C values and seed masses in the examined Aesculus species is slightly negative and not significant. However, when the four major clades are treated separately, there is a consistent positive association between larger genome size and larger seed mass within individual lineages. © The Author 2017. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved.

  15. Breakdown of the law of large numbers in Josephson junction series arrays

    International Nuclear Information System (INIS)

    Dominguez, D.; Cerdeira, H.A.

    1995-01-01

    We study underdamped Josephson junction series arrays that are globally coupled through a resistive shunting load and driven by an rf bias current. We find that they can be an experimental realization of many phenomena currently studied in globally coupled logistic maps. We find coherent, ordered, partially ordered and turbulent phases in the IV characteristics of the array. The ordered phase corresponds to giant Shapiro steps. In the turbulent phase there is a saturation of the broadband noise for a large number of junctions. This corresponds to a breakdown of the law of large numbers as seen in globally coupled maps. Coexisting with this, we find the emergence of novel pseudo-steps in the IV characteristics. This effect can be experimentally distinguished from the true Shapiro steps, which do not have broadband noise emission. (author). 21 refs, 5 figs
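    The breakdown of the law of large numbers referred to above is most easily illustrated in the globally coupled logistic maps that the abstract draws a parallel with: if the law held, the variance of the mean field would shrink like 1/N, whereas in the turbulent regime it is known to saturate. The sketch below uses the standard globally coupled map with illustrative parameters; it is not the Josephson-array model of the paper.

```python
# Globally coupled logistic maps: x_{t+1}(i) = (1-eps)*f(x_t(i)) + eps*<f(x_t)>,
# with f(x) = 1 - a*x^2. In the turbulent regime the variance of the mean field
# does not decay like 1/N, the "breakdown of the law of large numbers".
import numpy as np

def mean_field_variance(n_maps, a=1.99, eps=0.1, n_steps=4000, n_transient=1000, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n_maps)
    history = []
    for t in range(n_steps):
        fx = 1.0 - a * x**2                  # local logistic dynamics
        h = fx.mean()                        # global (mean-field) coupling term
        x = (1.0 - eps) * fx + eps * h
        if t >= n_transient:
            history.append(h)
    return np.var(history)

if __name__ == "__main__":
    for n in (10, 100, 1000, 10000):
        print(f"N = {n:6d}   var(mean field) = {mean_field_variance(n):.3e}")
```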

  16. The lore of large numbers: some historical background to the anthropic principle

    International Nuclear Information System (INIS)

    Barrow, J.D.

    1981-01-01

    A description is given of how the study of numerological coincidences in physics and cosmology led first to the Large Numbers Hypothesis of Dirac and then to the suggestion of the Anthropic Principle in a variety of forms. The early history of 'coincidences' is discussed together with the work of Weyl, Eddington and Dirac. (author)

  17. Conformal window in QCD for large numbers of colors and flavors

    International Nuclear Information System (INIS)

    Zhitnitsky, Ariel R.

    2014-01-01

    We conjecture that the phase transitions in QCD at a large number of colors, N ≫ 1, are triggered by a drastic change in the instanton density. As a result, all physical observables also experience a sharp modification in their θ behavior. This conjecture is motivated by the holographic model of QCD, where the confinement–deconfinement phase transition indeed happens precisely at the temperature T = T_c at which the θ-dependence of the vacuum energy experiences a sudden change in behavior: from N^2 cos(θ/N) at T < T_c to cos θ · exp(−N) at T > T_c. This conjecture is also supported by recent lattice studies. We employ this conjecture to study a possible phase transition as a function of κ ≡ N_f/N from the confinement to the conformal phase in the Veneziano limit N_f ∼ N, when the numbers of flavors and colors are large but the ratio κ is finite. Technically, we consider an operator which gets its expectation value solely from non-perturbative instanton effects. When κ exceeds some critical value, κ > κ_c, the integral over instanton size is dominated by small-size instantons, making the instanton computations reliable, with the expected exp(−N) behavior. However, when κ < κ_c, the integral over instanton size is dominated by large-size instantons, and the instanton expansion breaks down. This regime with κ < κ_c corresponds to the confinement phase. We also compute the variation of the critical κ_c(T, μ) when the temperature and chemical potential T, μ ≪ Λ_QCD slightly vary. We also discuss the scaling (x_i − x_j)^(−γ_det) in the conformal phase.

  18. A Multilayer Secure Biomedical Data Management System for Remotely Managing a Very Large Number of Diverse Personal Healthcare Devices

    Directory of Open Access Journals (Sweden)

    KeeHyun Park

    2015-01-01

    Full Text Available In this paper, a multilayer secure biomedical data management system for managing a very large number of diverse personal health devices is proposed. The system has the following characteristics: the system supports international standard communication protocols to achieve interoperability. The system is integrated in the sense that both a PHD communication system and a remote PHD management system work together as a single system. Finally, the system proposed in this paper provides user/message authentication processes to securely transmit biomedical data measured by PHDs based on the concept of a biomedical signature. Some experiments, including the stress test, have been conducted to show that the system proposed/constructed in this study performs very well even when a very large number of PHDs are used. For a stress test, up to 1,200 threads are made to represent the same number of PHD agents. The loss ratio of the ISO/IEEE 11073 messages in the normal system is as high as 14% when 1,200 PHD agents are connected. On the other hand, no message loss occurs in the multilayered system proposed in this study, which demonstrates the superiority of the multilayered system to the normal system with regard to heavy traffic.

  19. Arbitrarily large numbers of kink internal modes in inhomogeneous sine-Gordon equations

    Energy Technology Data Exchange (ETDEWEB)

    González, J.A., E-mail: jalbertgonz@yahoo.es [Department of Physics, Florida International University, Miami, FL 33199 (United States); Department of Natural Sciences, Miami Dade College, 627 SW 27th Ave., Miami, FL 33135 (United States); Bellorín, A., E-mail: alberto.bellorin@ucv.ve [Escuela de Física, Facultad de Ciencias, Universidad Central de Venezuela, Apartado Postal 47586, Caracas 1041-A (Venezuela, Bolivarian Republic of); García-Ñustes, M.A., E-mail: monica.garcia@pucv.cl [Instituto de Física, Pontificia Universidad Católica de Valparaíso, Casilla 4059 (Chile); Guerrero, L.E., E-mail: lguerre@usb.ve [Departamento de Física, Universidad Simón Bolívar, Apartado Postal 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of); Jiménez, S., E-mail: s.jimenez@upm.es [Departamento de Matemática Aplicada a las TT.II., E.T.S.I. Telecomunicación, Universidad Politécnica de Madrid, 28040-Madrid (Spain); Vázquez, L., E-mail: lvazquez@fdi.ucm.es [Departamento de Matemática Aplicada, Facultad de Informática, Universidad Complutense de Madrid, 28040-Madrid (Spain)

    2017-06-28

    We prove analytically the existence of an infinite number of internal (shape) modes of sine-Gordon solitons in the presence of some inhomogeneous long-range forces, provided some conditions are satisfied. - Highlights: • We have found exact kink solutions to the perturbed sine-Gordon equation. • We have been able to study analytically the kink stability problem. • A kink equilibrated by an exponentially-localized perturbation has a finite number of oscillation modes. • A sufficiently broad equilibrating perturbation supports an infinite number of soliton internal modes.

  20. A modified large number theory with constant G

    Science.gov (United States)

    Recami, Erasmo

    1983-03-01

    The inspiring “numerology” uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the “gravitational world” (cosmos) with the “strong world” (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the “Large Number Theory,” cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R̄/r̄ of the cosmos typical length R̄ to the hadron typical length r̄ is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle—according to the “cyclical big-bang” hypothesis—then R̄ and r̄ can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavšič.

  1. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases.

    Science.gov (United States)

    Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M

    2006-04-21

    Genetic epidemiologists have taken up the challenge of identifying genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation of large numbers of genetic and environmental predictors to disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis, neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN), and several non-parametric methods, which include the set association approach, the combinatorial partitioning method (CPM), the restricted partitioning method (RPM), the multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. Therefore, they are less useful than the non-parametric methods for approaching association studies with large numbers of predictor variables. GPNN, on the other hand, may be a useful approach to select and model important predictors, but its ability to select the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and the random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset with an important contribution to disease. The combinatorial methods give more insight into combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses we conclude that to approach genetic association
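    As a hedged illustration of one of the approaches named above (the random forests approach), the sketch below ranks SNPs by importance on a purely synthetic genotype matrix; the causal SNPs, effect sizes and parameters are invented for demonstration and do not come from the commentary.

```python
# Synthetic example: random-forest importance ranking over many SNP predictors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_subjects, n_snps = 500, 1000
genotypes = rng.integers(0, 3, size=(n_subjects, n_snps))      # 0/1/2 minor-allele counts

# Assume two causal SNPs plus an interaction term; everything else is noise.
risk = (0.8 * genotypes[:, 10] + 0.6 * genotypes[:, 42]
        + 0.5 * (genotypes[:, 10] * genotypes[:, 42] > 2))
p = 1.0 / (1.0 + np.exp(-(risk - risk.mean())))
disease = rng.random(n_subjects) < p

forest = RandomForestClassifier(n_estimators=300, min_samples_leaf=5, random_state=0)
forest.fit(genotypes, disease)

top = np.argsort(forest.feature_importances_)[::-1][:10]
print("top-ranked SNP indices:", top)        # the causal indices 10 and 42 should rank highly
```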

  2. Global repeat discovery and estimation of genomic copy number in a large, complex genome using a high-throughput 454 sequence survey

    Directory of Open Access Journals (Sweden)

    Varala Kranthi

    2007-05-01

    Full Text Available Abstract Background Extensive computational and database tools are available to mine genomic and genetic databases for model organisms, but little genomic data is available for many species of ecological or agricultural significance, especially those with large genomes. Genome surveys using conventional sequencing techniques are powerful, particularly for detecting sequences present in many copies per genome. However these methods are time-consuming and have potential drawbacks. High throughput 454 sequencing provides an alternative method by which much information can be gained quickly and cheaply from high-coverage surveys of genomic DNA. Results We sequenced 78 million base-pairs of randomly sheared soybean DNA which passed our quality criteria. Computational analysis of the survey sequences provided global information on the abundant repetitive sequences in soybean. The sequence was used to determine the copy number across regions of large genomic clones or contigs and discover higher-order structures within satellite repeats. We have created an annotated, online database of sequences present in multiple copies in the soybean genome. The low bias of pyrosequencing against repeat sequences is demonstrated by the overall composition of the survey data, which matches well with past estimates of repetitive DNA content obtained by DNA re-association kinetics (Cot analysis. Conclusion This approach provides a potential aid to conventional or shotgun genome assembly, by allowing rapid assessment of copy number in any clone or clone-end sequence. In addition, we show that partial sequencing can provide access to partial protein-coding sequences.
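    The copy-number idea described above can be sketched in a few lines: count k-mer occurrences in the survey reads and compare the depth over a query sequence with an assumed single-copy depth. The k-mer length, the placeholder reads and the single-copy depth below are illustrative assumptions, not the pipeline used in the study.

```python
# Toy k-mer depth estimate: copy number of a query sequence relative to an assumed
# genome-wide single-copy coverage, from a survey-read k-mer table.
from collections import Counter

K = 21

def kmers(seq, k=K):
    seq = seq.upper()
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

def count_survey_kmers(reads):
    counts = Counter()
    for read in reads:
        counts.update(kmers(read))
    return counts

def estimated_copy_number(query_seq, survey_counts, single_copy_depth):
    """Median k-mer depth over the query, divided by the assumed single-copy depth."""
    depths = sorted(survey_counts.get(km, 0) for km in kmers(query_seq))
    median_depth = depths[len(depths) // 2] if depths else 0
    return median_depth / single_copy_depth

if __name__ == "__main__":
    reads = ["ACGT" * 30, "TTGACGT" * 20]            # placeholder reads, not real survey data
    counts = count_survey_kmers(reads)
    print(estimated_copy_number("ACGT" * 15, counts, single_copy_depth=1.0))
```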

  3. Vicious random walkers in the limit of a large number of walkers

    International Nuclear Information System (INIS)

    Forrester, P.J.

    1989-01-01

    The vicious random walker problem on a line is studied in the limit of a large number of walkers. The multidimensional integral representing the probability that the p walkers will survive a time t (denoted P_t^(p)) is shown to be analogous to the partition function of a particular one-component Coulomb gas. By assuming the existence of the thermodynamic limit for the Coulomb gas, one can deduce asymptotic formulas for P_t^(p) in the large-p, large-t limit. A straightforward analysis gives rigorous asymptotic formulas for the probability that after a time t the walkers are in their initial configuration (this event is termed a reunion). Consequently, asymptotic formulas for the conditional probability of a reunion, given that all walkers survive, are derived. Also, an asymptotic formula for the conditional probability density that any walker will arrive at a particular point in time t, given that all p walkers survive, is calculated in the limit t >> p.
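    A simple Monte Carlo estimate of the survival probability P_t^(p) is sketched below for small p and t; the walk parameters are illustrative, and the asymptotic regime of the paper (large p, large t) is of course beyond such a brute-force check.

```python
# Monte Carlo estimate of P_t^(p): the probability that p "vicious" walkers on a line,
# started at distinct ordered positions, never meet during t simultaneous +/-1 steps.
import numpy as np

def survival_probability(p=4, t=200, n_trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    positions = np.tile(np.arange(p) * 2, (n_trials, 1))       # distinct, ordered starts
    alive = np.ones(n_trials, dtype=bool)
    for _ in range(t):
        positions += rng.choice((-1, 1), size=(n_trials, p))
        # A trial dies as soon as the ordering is violated, i.e. two walkers meet.
        alive &= np.all(np.diff(positions, axis=1) > 0, axis=1)
    return alive.mean()

if __name__ == "__main__":
    print("estimated P_t^(p) for p = 4, t = 200:", survival_probability())
```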

  4. System for high-voltage control detectors with large number photomultipliers

    International Nuclear Information System (INIS)

    Donskov, S.V.; Kachanov, V.A.; Mikhajlov, Yu.V.

    1985-01-01

    A simple and inexpensive on-line system for high-voltage control, designed for detectors with a large number of photomultipliers, is developed and manufactured. It has been developed for the GAMC type hodoscopic electromagnetic calorimeters, comprising up to 4 thousand photomultipliers. High voltage variation is performed by a high-speed potentiometer which is rotated by a microengine. Block diagrams of the computer control electronics are presented. The high-voltage control system has been used for five years in the IHEP and CERN accelerator experiments. Operating experience has shown that it is quite simple and convenient to use. With about 6 thousand controlled channels across both experiments, no potentiometer or microengine failures were observed.

  5. TO BE OR NOT TO BE: AN INFORMATIVE NON-SYMBOLIC NUMERICAL MAGNITUDE PROCESSING STUDY ABOUT SMALL VERSUS LARGE NUMBERS IN INFANTS

    Directory of Open Access Journals (Sweden)

    Annelies CEULEMANS

    2014-03-01

    Full Text Available Many studies tested the association between numerical magnitude processing and mathematical achievement, with conflicting findings reported for individuals with mathematical learning disorders. Some of the inconsistencies might be explained by the number of non-symbolic stimuli or dot collections used in studies. It has been hypothesized that there is an object-file system for ‘small’ and an analogue magnitude system for ‘large’ numbers. This two-system account has been supported by the set size limit of the object-file system (three items). A boundary was defined accordingly, categorizing numbers below four as ‘small’ and numbers from four and above as ‘large’. However, data on ‘small’ number processing and on the ‘boundary’ between small and large numbers are missing. In this contribution we provide data from infants discriminating between the number sets 4 vs. 8 and 1 vs. 4, both containing the number four, combined with a large and a small number respectively. Participants were 25 and 26 full-term 9-month-olds for 4 vs. 8 and 1 vs. 4 respectively. The stimuli (dots) were controlled for continuous variables. Eye-tracking was combined with the habituation paradigm. The results showed that the infants were successful in discriminating 1 from 4, but failed to discriminate 4 from 8 dots. This finding supports the assumption of the number four as a ‘small’ number and enlarges the object-file system’s limit. This study might help to explain inconsistencies between studies. Moreover, the information may be useful in answering parents’ questions about challenges that vulnerable children with number processing problems, such as children with mathematical learning disorders, might encounter. In addition, the study might give some information on the stimuli that can be used to effectively foster children’s magnitude processing skills.

  6. Numerical and analytical approaches to an advection-diffusion problem at small Reynolds number and large Péclet number

    Science.gov (United States)

    Fuller, Nathaniel J.; Licata, Nicholas A.

    2018-05-01

    Obtaining a detailed understanding of the physical interactions between a cell and its environment often requires information about the flow of fluid surrounding the cell. Cells must be able to effectively absorb and discard material in order to survive. Strategies for nutrient acquisition and toxin disposal, which have been evolutionarily selected for their efficacy, should reflect knowledge of the physics underlying this mass transport problem. Motivated by these considerations, in this paper we discuss the results from an undergraduate research project on the advection-diffusion equation at small Reynolds number and large Péclet number. In particular, we consider the problem of mass transport for a Stokesian spherical swimmer. We approach the problem numerically and analytically through a rescaling of the concentration boundary layer. A biophysically motivated first-passage problem for the absorption of material by the swimming cell demonstrates quantitative agreement between the numerical and analytical approaches. We conclude by discussing the connections between our results and the design of smart toxin disposal systems.
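    The regime studied above, Re ≪ 1 together with Pe ≫ 1, follows directly from the definitions Re = Ua/ν and Pe = Ua/D. The sketch below evaluates them for two assumed swimmers (a bacterium-sized cell and a larger colonial swimmer); the sizes, speeds and diffusivities are typical textbook values, not data from the paper.

```python
# Dimensionless numbers for a swimming cell in water; values are illustrative assumptions.

def reynolds_number(radius_m, speed_m_s, nu_m2_s=1e-6):
    """Re = U*a/nu, with a water-like kinematic viscosity by default."""
    return speed_m_s * radius_m / nu_m2_s

def peclet_number(radius_m, speed_m_s, diffusivity_m2_s):
    """Pe = U*a/D for a solute of diffusivity D."""
    return speed_m_s * radius_m / diffusivity_m2_s

if __name__ == "__main__":
    swimmers = {
        "bacterium-like":     (1e-6, 20e-6),     # assumed radius (m), swim speed (m/s)
        "colonial alga-like": (200e-6, 200e-6),
    }
    for name, (a, u) in swimmers.items():
        # Small-molecule diffusivity ~1e-9 m^2/s; Re stays tiny for both, while only the
        # larger swimmer reaches the Pe >> 1 regime the paper analyzes.
        print(f"{name:18s}  Re = {reynolds_number(a, u):.2e}  Pe = {peclet_number(a, u, 1e-9):.2e}")
```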

  7. Multiple-relaxation-time lattice Boltzmann model for incompressible miscible flow with large viscosity ratio and high Péclet number

    Science.gov (United States)

    Meng, Xuhui; Guo, Zhaoli

    2015-10-01

    A lattice Boltzmann model with a multiple-relaxation-time (MRT) collision operator is proposed for incompressible miscible flow with a large viscosity ratio as well as a high Péclet number in this paper. The equilibria in the present model are motivated by the lattice kinetic scheme previously developed by Inamuro et al. [Philos. Trans. R. Soc. London, Ser. A 360, 477 (2002), 10.1098/rsta.2001.0942]. The fluid viscosity and diffusion coefficient depend on both the corresponding relaxation times and additional adjustable parameters in this model. As a result, the corresponding relaxation times can be adjusted in proper ranges to enhance the performance of the model. Numerical validations of the Poiseuille flow and a diffusion-reaction problem demonstrate that the proposed model has second-order accuracy in space. Thereafter, the model is used to simulate flow through a porous medium, and the results show that the proposed model has the advantage to obtain a viscosity-independent permeability, which makes it a robust method for simulating flow in porous media. Finally, a set of simulations are conducted on the viscous miscible displacement between two parallel plates. The results reveal that the present model can be used to simulate, to a high level of accuracy, flows with large viscosity ratios and/or high Péclet numbers. Moreover, the present model is shown to provide superior stability in the limit of high kinematic viscosity. In summary, the numerical results indicate that the present lattice Boltzmann model is an ideal numerical tool for simulating flow with a large viscosity ratio and/or a high Péclet number.
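    A small helper makes explicit why large viscosity ratios and high Péclet numbers are demanding for lattice Boltzmann methods: in standard lattice units the transport coefficients are tied to relaxation times by ν = c_s²(τ_ν − 1/2) and D = c_s²(τ_D − 1/2), so a high Péclet number drives τ_D toward the stability limit of 1/2. These are the generic BGK-type relations, not the specific MRT model proposed in the paper, and the numbers below are illustrative.

```python
# Standard lattice-unit relations between transport coefficients and relaxation times,
# with cs^2 = 1/3 for the usual D2Q9/D3Q19 lattices. Inputs are illustrative.
CS2 = 1.0 / 3.0

def relaxation_time(transport_coefficient):
    """tau = coeff/cs^2 + 1/2 for either the viscosity or the scalar diffusivity."""
    return transport_coefficient / CS2 + 0.5

if __name__ == "__main__":
    nu, peclet, velocity, length = 0.1, 1000.0, 0.05, 100.0       # lattice units, assumed
    diffusivity = velocity * length / peclet
    print("tau_nu =", relaxation_time(nu))
    print("tau_D  =", relaxation_time(diffusivity), "(close to 0.5 -> stability issues for plain BGK)")
```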

  8. Tracking of large-scale structures in turbulent channel with direct numerical simulation of low Prandtl number passive scalar

    Science.gov (United States)

    Tiselj, Iztok

    2014-12-01

    Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ˜2300 wall units long and ˜750 wall units wide; its size was taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flow, was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows that the velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) agree within 1%-2%. Similar agreement is observed for the Pr = 1 temperature fields and also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, the second-order moments, i.e., the RMS temperature fluctuations, of the standard and large computational domains at Pr = 0.01 show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of the large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play a visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of large scales. These large thermal structures represent some kind of an echo of the large scale velocity structures: the highest temperature-velocity correlations are not observed between the instantaneous temperatures and

  9. Do neutron stars disprove multiplicative creation in Dirac's large number hypothesis

    International Nuclear Information System (INIS)

    Qadir, A.; Mufti, A.A.

    1980-07-01

    Dirac's cosmology, based on his large number hypothesis, took the gravitational coupling to be decreasing with time and matter to be created as the square of time. Since the effects predicted by Dirac's theory are very small, it is difficult to find a ''clean'' test for it. Here we show that the observed radiation from pulsars is inconsistent with Dirac's multiplicative creation model, in which the matter created is proportional to the density of matter already present. Of course, this discussion makes no comment on the ''additive creation'' model, or on the revised version of Dirac's theory. (author)

  10. A methodology for the synthesis of heat exchanger networks having large numbers of uncertain parameters

    International Nuclear Information System (INIS)

    Novak Pintarič, Zorka; Kravanja, Zdravko

    2015-01-01

    This paper presents a robust computational methodology for the synthesis and design of flexible HEN (Heat Exchanger Networks) having large numbers of uncertain parameters. This methodology combines several heuristic methods which progressively lead to a flexible HEN design at a specific level of confidence. During the first step, a HEN topology is generated under nominal conditions followed by determining those points critical for flexibility. A significantly reduced multi-scenario model for flexible HEN design is formulated at the nominal point with the flexibility constraints at the critical points. The optimal design obtained is tested by stochastic Monte Carlo optimization and the flexibility index through solving one-scenario problems within a loop. This presented methodology is novel regarding the enormous reduction of scenarios in HEN design problems, and computational effort. Despite several simplifications, the capability of designing flexible HENs with large numbers of uncertain parameters, which are typical throughout industry, is not compromised. An illustrative case study is presented for flexible HEN synthesis comprising 42 uncertain parameters. - Highlights: • Methodology for HEN (Heat Exchanger Network) design under uncertainty is presented. • The main benefit is solving HENs having large numbers of uncertain parameters. • Drastically reduced multi-scenario HEN design problem is formulated through several steps. • Flexibility of HEN is guaranteed at a specific level of confidence.

  11. The large numbers hypothesis and the Einstein theory of gravitation

    International Nuclear Information System (INIS)

    Dirac, P.A.M.

    1979-01-01

    A study of the relations between large dimensionless numbers leads to the belief that G, expressed in atomic units, varies with the epoch, while the Einstein theory requires G to be constant. These two requirements can be reconciled by supposing that the Einstein theory applies with a metric that differs from the atomic metric. The theory can be developed with conservation of mass by supposing that the continual increase in the mass of the observable universe arises from a continual slowing down of the velocity of recession of the galaxies. This leads to a model of the Universe that was first proposed by Einstein and de Sitter (the E.S. model). The observations of the microwave radiation fit in with this model. The static Schwarzschild metric has to be modified to fit in with the E.S. model for large r. The modification is worked out, and also the motion of planets with the new metric. It is found that there is a difference between ephemeris time and atomic time, and also that there should be an inward spiralling of the planets, referred to atomic units, superposed on the motion given by ordinary gravitational theory. These are effects that can be checked by observation, but there is no conclusive evidence up to the present. (author)

  12. Superposition of elliptic functions as solutions for a large number of nonlinear equations

    International Nuclear Information System (INIS)

    Khare, Avinash; Saxena, Avadh

    2014-01-01

    For a large number of nonlinear equations, both discrete and continuum, we demonstrate a kind of linear superposition. We show that whenever a nonlinear equation admits solutions in terms of both Jacobi elliptic functions cn(x, m) and dn(x, m) with modulus m, then it also admits solutions in terms of their sum as well as difference. We have checked this in the case of several nonlinear equations such as the nonlinear Schrödinger equation, MKdV, a mixed KdV-MKdV system, a mixed quadratic-cubic nonlinear Schrödinger equation, the Ablowitz-Ladik equation, the saturable nonlinear Schrödinger equation, λϕ^4, the discrete MKdV as well as for several coupled field equations. Further, for a large number of nonlinear equations, we show that whenever a nonlinear equation admits a periodic solution in terms of dn^2(x, m), it also admits solutions in terms of dn^2(x, m) ± √m cn(x, m) dn(x, m), even though cn(x, m) dn(x, m) is not a solution of these nonlinear equations. Finally, we also obtain superposed solutions of various forms for several coupled nonlinear equations.
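    The superposed profiles named above are easy to construct numerically, for example as initial conditions for a solver of one of the listed equations; the sketch below only builds dn^2(x, m) ± √m cn(x, m) dn(x, m) with SciPy (the modulus is an arbitrary illustrative value) and does not verify the equations themselves.

```python
# Constructing the superposed elliptic-function profiles with SciPy's Jacobi functions.
import numpy as np
from scipy.special import ellipj

def superposed_profiles(x, m):
    """Return dn^2(x, m) + sqrt(m)*cn*dn and dn^2(x, m) - sqrt(m)*cn*dn."""
    sn, cn, dn, _ = ellipj(x, m)
    base = dn**2
    cross = np.sqrt(m) * cn * dn
    return base + cross, base - cross

if __name__ == "__main__":
    x = np.linspace(-10.0, 10.0, 2001)
    plus, minus = superposed_profiles(x, m=0.6)        # m = 0.6 is an arbitrary choice
    print("range of dn^2 + sqrt(m) cn dn:", plus.min(), plus.max())
```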

  13. The holographic dual of a Riemann problem in a large number of dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, Christopher P.; Spillane, Michael [C.N. Yang Institute for Theoretical Physics, Department of Physics and Astronomy,Stony Brook University, Stony Brook, NY 11794 (United States); Yarom, Amos [Department of Physics, Technion,Haifa 32000 (Israel)

    2016-08-22

    We study properties of a non equilibrium steady state generated when two heat baths are initially in contact with one another. The dynamics of the system we study are governed by holographic duality in a large number of dimensions. We discuss the “phase diagram” associated with the steady state, the dual, dynamical, black hole description of this problem, and its relation to the fluid/gravity correspondence.

  14. Evaluation of two sweeping methods for estimating the number of immature Aedes aegypti (Diptera: Culicidae in large containers

    Directory of Open Access Journals (Sweden)

    Margareth Regina Dibo

    2013-07-01

    Full Text Available Introduction Here, we evaluated sweeping methods used to estimate the number of immature Aedes aegypti in large containers. Methods III/IV instars and pupae at a 9:1 ratio were placed in three types of containers, each with three different water levels. Two sweeping methods were tested: water-surface sweeping and five-sweep netting. The data were analyzed using linear regression. Results The five-sweep netting technique was more suitable for drums and water-tanks, while the water-surface sweeping method provided the best results for swimming pools. Conclusions Both sweeping methods are useful tools in epidemiological surveillance programs for the control of Aedes aegypti.

  15. ON AN EXPONENTIAL INEQUALITY AND A STRONG LAW OF LARGE NUMBERS FOR MONOTONE MEASURES

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mesiar, Radko

    2014-01-01

    Vol. 50, No. 5 (2014), pp. 804-813 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords: Choquet expectation * strong law of large numbers * exponential inequality * monotone probability Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/E/mesiar-0438052.pdf

  16. Modelling high Reynolds number wall-turbulence interactions in laboratory experiments using large-scale free-stream turbulence.

    Science.gov (United States)

    Dogan, Eda; Hearst, R Jason; Ganapathisubramani, Bharathram

    2017-03-13

    A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to 'simulate' high Reynolds number wall-turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows, as they demonstrate that these can be achieved at typical laboratory scales. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
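    The amplitude-modulation diagnostic referred to above is commonly computed as the correlation between the large-scale velocity signal and the low-pass-filtered envelope of the small scales. The sketch below applies that recipe to a synthetic signal; the cutoff, sampling rate and signal model are stand-ins, not the hot-wire data or the exact processing of the study.

```python
# Amplitude-modulation coefficient: correlate the large-scale signal with the low-pass
# filtered Hilbert envelope of the small-scale residual. Synthetic data for illustration.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def modulation_coefficient(u, fs, cutoff_hz):
    sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
    u = u - u.mean()
    u_large = sosfiltfilt(sos, u)                      # large-scale component
    u_small = u - u_large                              # small-scale residual
    envelope = np.abs(hilbert(u_small))
    envelope_lp = sosfiltfilt(sos, envelope - envelope.mean())
    return np.corrcoef(u_large, envelope_lp)[0, 1]

if __name__ == "__main__":
    fs = 10_000.0
    t = np.arange(0.0, 10.0, 1.0 / fs)
    rng = np.random.default_rng(0)
    large = np.sin(2 * np.pi * 2.0 * t)                          # synthetic large-scale motion
    small = (1.0 + 0.5 * large) * rng.standard_normal(t.size)    # small scales modulated by it
    print("AM coefficient:", modulation_coefficient(large + 0.2 * small, fs, cutoff_hz=20.0))
```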

  17. Large Eddy Simulation of an SD7003 Airfoil: Effects of Reynolds number and Subgrid-scale modeling

    DEFF Research Database (Denmark)

    Sarlak Chivaee, Hamid

    2017-01-01

    This paper presents results of a series of numerical simulations carried out to study the aerodynamic characteristics of the low Reynolds number Selig-Donovan airfoil, SD7003. The Large Eddy Simulation (LES) technique is used for all computations at chord-based Reynolds numbers of 10,000, 24,000 and 60,000 ... the Reynolds number, and the effect is visible even at a relatively low chord-Reynolds number of 60,000. Among the tested models, the dynamic Smagorinsky gives the poorest predictions of the flow, with overprediction of lift and a larger separation on the airfoil's suction side. Among various models, the implicit

  18. Chaotic scattering: the supersymmetry method for large number of channels

    International Nuclear Information System (INIS)

    Lehmann, N.; Saher, D.; Sokolov, V.V.; Sommers, H.J.

    1995-01-01

    We investigate a model of chaotic resonance scattering based on the random matrix approach. The hermitian part of the effective hamiltonian of resonance states is taken from the GOE whereas the amplitudes of coupling to decay channels are considered both random or fixed. A new version of the supersymmetry method is worked out to determine analytically the distribution of poles of the S-matrix in the complex energy plane as well as the mean value and two-point correlation function of its elements when the number of channels scales with the number of resonance states. Analytical formulae are compared with numerical simulations. All results obtained coincide in both models provided that the ratio m of the numbers of channels and resonances is small enough and remain qualitatively similar for larger values of m. The relation between the pole distribution and the fluctuations in scattering is discussed. It is shown in particular that the clouds of poles of the S-matrix in the complex energy plane are separated from the real axis by a finite gap Γ g which determines the correlation length in the scattering fluctuations and leads to the exponential asymptotics of the decay law of a complicated intermediate state. ((orig.))

  19. Chaotic scattering: the supersymmetry method for large number of channels

    Energy Technology Data Exchange (ETDEWEB)

    Lehmann, N. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Saher, D. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Sokolov, V.V. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Sommers, H.J. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik)

    1995-01-23

    We investigate a model of chaotic resonance scattering based on the random matrix approach. The hermitian part of the effective hamiltonian of resonance states is taken from the GOE whereas the amplitudes of coupling to decay channels are considered both random or fixed. A new version of the supersymmetry method is worked out to determine analytically the distribution of poles of the S-matrix in the complex energy plane as well as the mean value and two-point correlation function of its elements when the number of channels scales with the number of resonance states. Analytical formulae are compared with numerical simulations. All results obtained coincide in both models provided that the ratio m of the numbers of channels and resonances is small enough and remain qualitatively similar for larger values of m. The relation between the pole distribution and the fluctuations in scattering is discussed. It is shown in particular that the clouds of poles of the S-matrix in the complex energy plane are separated from the real axis by a finite gap Γ_g which determines the correlation length in the scattering fluctuations and leads to the exponential asymptotics of the decay law of a complicated intermediate state. ((orig.))

  20. Particle creation and Dirac's large number hypothesis; and Reply

    International Nuclear Information System (INIS)

    Canuto, V.; Adams, P.J.; Hsieh, S.H.; Tsiang, E.; Steigman, G.

    1976-01-01

    The claim made by Steigman (Nature; 261:479 (1976)), that the creation of matter as postulated by Dirac (Proc. R. Soc.; A338:439 (1974)) is unnecessary, is here shown to be incorrect. It is stated that Steigman's claim that Dirac's Large Number Hypothesis (LNH) does not require particle creation is wrong because he has assumed that which he was seeking to prove, that is, that ρ does not contain matter creation. Steigman's claim that Dirac's LNH leads to nonsensical results in the very early Universe is superficially correct, but this only supports Dirac's contention that the LNH may not be valid in the very early Universe. In a reply Steigman points out that in Dirac's original cosmology R ∼ t^(1/3), and using this model the results and conclusions of the present author's paper do apply, but using a variation chosen by Canuto et al (T ∼ t) Dirac's LNH cannot apply. Additionally it is observed that a cosmological theory which only predicts the present epoch is of questionable value. (U.K.)

  1. Hyperreal Numbers for Infinite Divergent Series

    OpenAIRE

    Bartlett, Jonathan

    2018-01-01

    Treating divergent series properly has been an ongoing issue in mathematics. However, many of the problems in divergent series stem from the fact that divergent series were discovered prior to having a number system which could handle them. The infinities that resulted from divergent series led to contradictions within the real number system, but these contradictions are largely alleviated with the hyperreal number system. Hyperreal numbers provide a framework for dealing with divergent serie...

  2. Strong Laws of Large Numbers for Arrays of Rowwise NA and LNQD Random Variables

    Directory of Open Access Journals (Sweden)

    Jiangfeng Wang

    2011-01-01

    Full Text Available Some strong laws of large numbers and strong convergence properties for arrays of rowwise negatively associated and linearly negative quadrant dependent random variables are obtained. The results obtained not only generalize the result of Hu and Taylor to negatively associated and linearly negative quadrant dependent random variables, but also improve it.

  3. Law of large numbers and central limit theorem for randomly forced PDE's

    CERN Document Server

    Shirikyan, A

    2004-01-01

    We consider a class of dissipative PDE's perturbed by an external random force. Under the condition that the distribution of perturbation is sufficiently non-degenerate, a strong law of large numbers (SLLN) and a central limit theorem (CLT) for solutions are established and the corresponding rates of convergence are estimated. It is also shown that the estimates obtained are close to being optimal. The proofs are based on the property of exponential mixing for the problem in question and some abstract SLLN and CLT for mixing-type Markov processes.

  4. On the Behavior of ECN/RED Gateways Under a Large Number of TCP Flows: Limit Theorems

    National Research Council Canada - National Science Library

    Tinnakornsrisuphap, Peerapol; Makowski, Armand M

    2005-01-01

    .... As the number of competing flows becomes large, the asymptotic queue behavior at the gateway can be described by a simple recursion and the throughput behavior of individual TCP flows becomes asymptotically independent...

  5. Service Provider Revenue Dependence of Offered Number of Service Classes

    Directory of Open Access Journals (Sweden)

    V. S. Aćimović-Raspopović

    2011-06-01

    Full Text Available In this paper, possible applications of a responsive pricing scheme and a Stackelberg game, with the service provider as the leader and users acting as followers, for pricing telecommunication services are analyzed. We have classified users according to an elasticity criterion into inelastic, partially elastic and elastic users. Their preferences are modelled through utility functions, which describe users’ sensitivity to changes in the quality of service and price. In the proposed algorithm a bandwidth management server is responsible for performing automatic optimal bandwidth allocation to each user’s session while maximizing its expected utility and the overall service provider’s revenue. The pricing algorithm is used for congestion control and more efficient network capacity utilization. We have analyzed different scenarios of the proposed usage-based pricing algorithm. In particular, the influence of the number of service classes on price setting, in terms of the service provider’s revenue and total users’ utility maximization, is discussed. The model is verified through numerous simulations performed with software that we have developed for that purpose.

  6. SECRET SHARING SCHEMES WITH STRONG MULTIPLICATION AND A LARGE NUMBER OF PLAYERS FROM TORIC VARIETIES

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    2017-01-01

    This article considers Massey's construction for constructing linear secret sharing schemes from toric varieties over a finite field F_q with q elements. The number of players can be as large as (q−1)^r − 1 for r ≥ 1. The schemes have strong multiplication; such schemes can be utilized in ...

  7. New approaches to phylogenetic tree search and their application to large numbers of protein alignments.

    Science.gov (United States)

    Whelan, Simon

    2007-10-01

    Phylogenetic tree estimation plays a critical role in a wide variety of molecular studies, including molecular systematics, phylogenetics, and comparative genomics. Finding the optimal tree relating a set of sequences using score-based (optimality criterion) methods, such as maximum likelihood and maximum parsimony, may require all possible trees to be considered, which is not feasible even for modest numbers of sequences. In practice, trees are estimated using heuristics that represent a trade-off between topological accuracy and speed. I present a series of novel algorithms suitable for score-based phylogenetic tree reconstruction that demonstrably improve the accuracy of tree estimates while maintaining high computational speeds. The heuristics function by allowing the efficient exploration of large numbers of trees through novel hill-climbing and resampling strategies. These heuristics, and other computational approximations, are implemented for maximum likelihood estimation of trees in the program Leaphy, and its performance is compared to other popular phylogenetic programs. Trees are estimated from 4059 different protein alignments using a selection of phylogenetic programs and the likelihoods of the tree estimates are compared. Trees estimated using Leaphy are found to have equal to or better likelihoods than trees estimated using other phylogenetic programs in 4004 (98.6%) families and provide a unique best tree that no other program found in 1102 (27.1%) families. The improvement is particularly marked for larger families (80 to 100 sequences), where Leaphy finds a unique best tree in 81.7% of families.

  8. Collaborating with a social housing provider supports a large cohort study of the health effects of housing conditions.

    Science.gov (United States)

    Baker, Michael G; Zhang, Jane; Blakely, Tony; Crane, Julian; Saville-Smith, Kay; Howden-Chapman, Philippa

    2016-02-16

    Despite the importance of adequate, un-crowded housing as a prerequisite for good health, few large cohort studies have explored the health effects of housing conditions. The Social Housing Outcomes Worth (SHOW) Study was established to assess the relationship between housing conditions and health, particularly between household crowding and infectious diseases. This paper reports on the methods and feasibility of using a large administrative housing database for epidemiological research and the characteristics of the social housing population. This prospective open cohort study was established in 2003 in collaboration with Housing New Zealand Corporation which provides housing for approximately 5% of the population. The Study measures health outcomes using linked anonymised hospitalisation and mortality records provided by the New Zealand Ministry of Health. It was possible to match the majority (96%) of applicant and tenant household members with their National Health Index (NHI) number allowing linkage to anonymised coded data on their hospitalisations and mortality. By December 2011, the study population consisted of 11,196 applicants and 196,612 tenants. Half were less than 21 years of age. About two-thirds identified as Māori or Pacific ethnicity. Household incomes were low. Of tenant households, 44% containing one or more smokers compared with 33% for New Zealand as a whole. Exposure to household crowding, as measured by a deficit of one or more bedrooms, was common for applicants (52%) and tenants (38%) compared with New Zealanders as whole (10%). This project has shown that an administrative housing database can be used to form a large cohort population and successfully link cohort members to their health records in a way that meets confidentiality and ethical requirements. This study also confirms that social housing tenants are a highly deprived population with relatively low incomes and high levels of exposure to household crowding and environmental

  9. Catering for large numbers of tourists: the McDonaldization of casual dining in Kruger National Park

    Directory of Open Access Journals (Sweden)

    Ferreira Sanette L.A.

    2016-09-01

    Full Text Available Since 2002 Kruger National Park (KNP) has been subject to a commercialisation strategy. Regarding income generation, SANParks (1) sees KNP as the goose that lays the golden eggs. As part of SANParks’ commercialisation strategy and in response to providing services that are efficient, predictable and calculable for a large number of tourists, SANParks has allowed well-known branded restaurants to be established in certain rest camps in KNP. This innovation has raised a range of different concerns and opinions among the public. This paper investigates the what and the where of casual dining experiences in KNP; describes how the catering services have evolved over the last 70 years; and evaluates current visitor perceptions of the introduction of franchised restaurants in the park. The main research instrument was a questionnaire survey. Survey findings confirmed that restaurant managers, park managers and visitors recognise franchised restaurants as positive contributors to the unique KNP experience. Park managers appraised the franchised restaurants as mechanisms for funding conservation.

  10. A large electrically excited synchronous generator

    DEFF Research Database (Denmark)

    2014-01-01

    This invention relates to a large electrically excited synchronous generator (100), comprising a stator (101), and a rotor or rotor coreback (102) comprising an excitation coil (103) generating a magnetic field during use, wherein the rotor or rotor coreback (102) further comprises a plurality...... adjacent neighbouring poles. In this way, a large electrically excited synchronous generator (EESG) is provided that readily enables a relatively large number of poles, compared to a traditional EESG, since the excitation coil in this design provides MMF for all the poles, whereas in a traditional EESG...... each pole needs its own excitation coil, which limits the number of poles as each coil will take up too much space between the poles....

  11. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2015-01-01

    In this paper, we investigate the performance of the point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite numbers of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of the transmit/receive antennas which are required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required number of antennas are obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.

  12. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz

    2015-11-12

    In this paper, we investigate the performance of the point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite numbers of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of the transmit/receive antennas which are required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required number of antennas are obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.

  13. Obstructions to the realization of distance graphs with large chromatic numbers on spheres of small radii

    Energy Technology Data Exchange (ETDEWEB)

    Kupavskii, A B; Raigorodskii, A M [M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Moscow (Russian Federation)

    2013-10-31

    We investigate in detail some properties of distance graphs constructed on the integer lattice. Such graphs find wide applications in problems of combinatorial geometry, in particular, such graphs were employed to answer Borsuk's question in the negative and to obtain exponential estimates for the chromatic number of the space. This work is devoted to the study of the number of cliques and the chromatic number of such graphs under certain conditions. Constructions of sequences of distance graphs are given, in which the graphs have unit length edges and contain a large number of triangles that lie on a sphere of radius 1/√3 (which is the minimum possible). At the same time, the chromatic numbers of the graphs depend exponentially on their dimension. The results of this work strengthen and generalize some of the results obtained in a series of papers devoted to related issues. Bibliography: 29 titles.

  14. Monitoring a large number of pesticides and transformation products in water samples from Spain and Italy.

    Science.gov (United States)

    Rousis, Nikolaos I; Bade, Richard; Bijlsma, Lubertus; Zuccato, Ettore; Sancho, Juan V; Hernandez, Felix; Castiglioni, Sara

    2017-07-01

    Assessing the presence of pesticides in environmental waters is particularly challenging because of the huge number of substances used which may end up in the environment. Furthermore, the occurrence of pesticide transformation products (TPs) and/or metabolites makes this task even harder. Most studies dealing with the determination of pesticides in water include only a small number of analytes and in many cases no TPs. The present study applied a screening method for the determination of a large number of pesticides and TPs in wastewater (WW) and surface water (SW) from Spain and Italy. Liquid chromatography coupled to high-resolution mass spectrometry (HRMS) was used to screen a database of 450 pesticides and TPs. Detection and identification were based on specific criteria, i.e. mass accuracy, fragmentation, and comparison of retention times when reference standards were available, or a retention time prediction model when standards were not available. Seventeen pesticides and TPs from different classes (fungicides, herbicides and insecticides) were found in WW in Italy and Spain, and twelve in SW. Generally, in both countries more compounds were detected in effluent WW than in influent WW, and in SW than WW. This might be due to the analytical sensitivity in the different matrices, but also to the presence of multiple sources of pollution. HRMS proved a good screening tool to determine a large number of substances in water and identify some priority compounds for further quantitative analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. New feature for an old large number

    International Nuclear Information System (INIS)

    Novello, M.; Oliveira, L.R.A.

    1986-01-01

    A new context for the appearance of the Eddington number (10^39), which is due to the examination of elastic scattering of scalar particles (ΠK → ΠK) non-minimally coupled to gravity, is presented. (author) [pt]

  16. Wall modeled large eddy simulations of complex high Reynolds number flows with synthetic inlet turbulence

    International Nuclear Information System (INIS)

    Patil, Sunil; Tafti, Danesh

    2012-01-01

    Highlights: large eddy simulation; wall layer modeling; synthetic inlet turbulence; swirl flows. Abstract: Large eddy simulations of complex high Reynolds number flows are carried out with the near wall region being modeled with a zonal two layer model. A novel formulation for solving the turbulent boundary layer equation for the effective tangential velocity in a generalized co-ordinate system is presented and applied in the near wall zonal treatment. This formulation reduces the computational time in the inner layer significantly compared to the conventional two layer formulations present in the literature and is most suitable for complex geometries involving body fitted structured and unstructured meshes. The cost effectiveness and accuracy of the proposed wall model, used with the synthetic eddy method (SEM) to generate inlet turbulence, is investigated in turbulent channel flow, flow over a backward facing step, and confined swirling flows at moderately high Reynolds numbers. Predictions are compared with available DNS, experimental LDV data, as well as wall resolved LES. In all cases, there is at least an order of magnitude reduction in computational cost with no significant loss in prediction accuracy.
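    For orientation, zonal two-layer wall models of this type solve a thin turbulent boundary-layer equation for the wall-parallel velocity on an embedded near-wall grid, with the outer LES supplying the boundary condition at the exchange location. A commonly used equilibrium form (a generic textbook statement under simplifying assumptions, not necessarily the exact generalized-coordinate formulation of this paper) is
    \[
      \frac{\partial}{\partial y}\!\left[(\nu+\nu_t)\,\frac{\partial u_\parallel}{\partial y}\right]
      = \frac{1}{\rho}\,\frac{\partial p}{\partial x_\parallel},
      \qquad
      \nu_t = \kappa\, y\, u_\tau\left(1-e^{-y^{+}/A^{+}}\right)^{2},
    \]
    integrated from the wall (u_parallel = 0) to the first off-wall LES node; the resulting wall shear stress is then fed back to the outer LES.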

  17. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    Science.gov (United States)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies as well as newly proposed FPGA implementations of two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
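    The following is an illustrative software sketch of the underlying recurrence, s[n] = (s[n-24] + s[n-55]) mod 2^32, with several independently seeded streams standing in for the hardware-parallel lag registers of a parallel ALFG; it is not the FPGA design evaluated in this study, and the lags, modulus and seeding are generic textbook choices.

    import numpy as np

    class ALFG:
        """Additive lagged Fibonacci generator with lags (24, 55) modulo 2^32."""
        def __init__(self, seed, short_lag=24, long_lag=55, modulus=2**32):
            rng = np.random.default_rng(seed)
            self.state = rng.integers(0, modulus, size=long_lag).tolist()
            self.state[0] |= 1                       # keep at least one odd word for a long period
            self.j, self.k, self.m = short_lag, long_lag, modulus

        def next(self):
            value = (self.state[-self.j] + self.state[-self.k]) % self.m
            self.state.append(value)
            self.state.pop(0)
            return value

    streams = [ALFG(seed=s) for s in range(4)]       # four independently seeded streams
    print([[g.next() for _ in range(3)] for g in streams])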

  18. Development and application of an optogenetic platform for controlling and imaging a large number of individual neurons

    Science.gov (United States)

    Mohammed, Ali Ibrahim Ali

    The understanding and treatment of brain disorders, as well as the development of intelligent machines, is hampered by the lack of knowledge of how the brain fundamentally functions. Over the past century, we have learned much about how individual neurons and neural networks behave; however, new tools are critically needed to interrogate how neural networks give rise to complex brain processes and disease conditions. Recent innovations in molecular techniques, such as optogenetics, have given neuroscientists unprecedented precision to excite, inhibit and record defined neurons. The impressive sensitivity of currently available optogenetic sensors and actuators has now made it possible to analyze a large number of individual neurons in the brains of behaving animals. To promote the use of these optogenetic tools, this thesis integrates cutting-edge optogenetic molecular sensors, which are ultrasensitive for imaging neuronal activity, with a custom wide-field optical microscope to analyze a large number of individual neurons in living brains. Wide-field microscopy provides a large field of view and spatial resolution approaching the Abbe diffraction limit of the fluorescence microscope. To demonstrate the advantages of this optical platform, we imaged a deep brain structure, the hippocampus, and tracked hundreds of neurons over time while the mouse performed a memory task, to investigate how those individual neurons related to behavior. In addition, we tested our optical platform by investigating transient neural network changes upon mechanical perturbation related to blast injuries. In this experiment, all blast-exposed mice showed a consistent change in the neural network: a small portion of neurons showed a sustained calcium increase for an extended period of time, whereas the majority lost their activity. Finally, using an optogenetic silencer to control selective motor cortex neurons, we examined their contributions to the network pathology of basal ganglia related to

  19. A comparison of three approaches to compute the effective Reynolds number of the implicit large-eddy simulations

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Ye [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Thornber, Ben [The Univ. of Sydney, Sydney, NSW (Australia)

    2016-04-12

    Here, the implicit large-eddy simulation (ILES) has been utilized as an effective approach for calculating many complex flows at high Reynolds numbers. Richtmyer–Meshkov instability (RMI)-induced flow can be viewed as homogeneous decaying turbulence (HDT) after the passage of the shock. In this article, a critical evaluation of three methods for estimating the effective Reynolds number and the effective kinematic viscosity is undertaken utilizing high-resolution ILES data. Effective Reynolds numbers based on the vorticity and dissipation rate, or the integral and inner-viscous length scales, are found to be the most self-consistent when compared to the expected phenomenology and wind tunnel experiments.

  20. SVA retrotransposon insertion-associated deletion represents a novel mutational mechanism underlying large genomic copy number changes with non-recurrent breakpoints

    Science.gov (United States)

    2014-01-01

    Background Genomic disorders are caused by copy number changes that may exhibit recurrent breakpoints processed by nonallelic homologous recombination. However, region-specific disease-associated copy number changes have also been observed which exhibit non-recurrent breakpoints. The mechanisms underlying these non-recurrent copy number changes have not yet been fully elucidated. Results We analyze large NF1 deletions with non-recurrent breakpoints as a model to investigate the full spectrum of causative mechanisms, and observe that they are mediated by various DNA double strand break repair mechanisms, as well as aberrant replication. Further, two of the 17 NF1 deletions with non-recurrent breakpoints, identified in unrelated patients, occur in association with the concomitant insertion of SINE/variable number of tandem repeats/Alu (SVA) retrotransposons at the deletion breakpoints. The respective breakpoints are refractory to analysis by standard breakpoint-spanning PCRs and are only identified by means of optimized PCR protocols designed to amplify across GC-rich sequences. The SVA elements are integrated within SUZ12P intron 8 in both patients, and were mediated by target-primed reverse transcription of SVA mRNA intermediates derived from retrotranspositionally active source elements. Both SVA insertions occurred during early postzygotic development and are uniquely associated with large deletions of 1 Mb and 867 kb, respectively, at the insertion sites. Conclusions Since active SVA elements are abundant in the human genome and the retrotranspositional activity of many SVA source elements is high, SVA insertion-associated large genomic deletions encompassing many hundreds of kilobases could constitute a novel and as yet under-appreciated mechanism underlying large-scale copy number changes in the human genome. PMID:24958239

  1. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases

    NARCIS (Netherlands)

    Heidema, A.G.; Boer, J.M.A.; Nagelkerke, N.; Mariman, E.C.M.; A, van der D.L.; Feskens, E.J.M.

    2006-01-01

    Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods

  2. Collaborating with a social housing provider supports a large cohort study of the health effects of housing conditions

    Directory of Open Access Journals (Sweden)

    Michael G. Baker

    2016-02-01

    Full Text Available Abstract Background Despite the importance of adequate, un-crowded housing as a prerequisite for good health, few large cohort studies have explored the health effects of housing conditions. The Social Housing Outcomes Worth (SHOW) Study was established to assess the relationship between housing conditions and health, particularly between household crowding and infectious diseases. This paper reports on the methods and feasibility of using a large administrative housing database for epidemiological research and the characteristics of the social housing population. Methods This prospective open cohort study was established in 2003 in collaboration with Housing New Zealand Corporation, which provides housing for approximately 5 % of the population. The Study measures health outcomes using linked anonymised hospitalisation and mortality records provided by the New Zealand Ministry of Health. Results It was possible to match the majority (96 %) of applicant and tenant household members with their National Health Index (NHI) number, allowing linkage to anonymised coded data on their hospitalisations and mortality. By December 2011, the study population consisted of 11,196 applicants and 196,612 tenants. Half were less than 21 years of age. About two-thirds identified as Māori or Pacific ethnicity. Household incomes were low. Of tenant households, 44 % contained one or more smokers, compared with 33 % for New Zealand as a whole. Exposure to household crowding, as measured by a deficit of one or more bedrooms, was common for applicants (52 %) and tenants (38 %) compared with New Zealanders as a whole (10 %). Conclusions This project has shown that an administrative housing database can be used to form a large cohort population and successfully link cohort members to their health records in a way that meets confidentiality and ethical requirements. This study also confirms that social housing tenants are a highly deprived population with relatively low

  3. Explaining the large numbers by a hierarchy of ''universes'': a unified theory of strong and gravitational interactions

    International Nuclear Information System (INIS)

    Caldirola, P.; Recami, E.

    1978-01-01

    By assuming covariance of physical laws under (discrete) dilatations, strong and gravitational interactions have been described in a unified way. In terms of the (additional, discrete) ''dilatational'' degree of freedom, our cosmos as well as hadrons can be considered as different states of the same system, or rather as similar systems. Moreover, a discrete hierarchy can be defined of ''universes'' which are governed by force fields with strengths inversely proportional to the ''universe'' radii. Inside each ''universe'' an equivalence principle holds, so that its characteristic field can be geometrized there. It is thus easy to derive a whole ''numerology'', i.e. relations among numbers analogous to the so-called Weyl-Eddington-Dirac ''large numbers''. For instance, the ''Planck mass'' happens to be nothing but the (average) magnitude of the strong charge of the hadron quarks. However, our ''numerology'' connects the (gravitational) macrocosmos with the (strong) microcosmos, rather than with the electromagnetic ones (as, e.g., in Dirac's version). Einstein-type scaled equations (with ''cosmological'' term) are suggested for the hadron interior, which - incidentally - yield a (classical) quark confinement in a very natural way and are compatible with the ''asymptotic freedom''. At last, within a ''bi-scale'' theory, further equations are proposed that provide a priori a classical field theory of strong interactions (between different hadrons). The relevant sections are 5.2, 7 and 8. (author)

  4. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Science.gov (United States)

    2010-10-01

    ... require software but not hardware changes to provide portability (“Hardware Capable Switches”), within 60... queries, so that they can deliver calls from their networks to any party that has retained its number after switching from one telecommunications carrier to another. (c) [Reserved] (d) In the event a...

  5. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
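    For reference, the three image-quality metrics named above are linked by the standard bias-variance decomposition (a general definition, not something specific to this study), evaluated per voxel and then averaged over the image:
    \[
      \mathrm{MSE}(\hat{x}) \;=\; \mathbb{E}\!\left[(\hat{x}-x)^{2}\right]
      \;=\; \left(\mathbb{E}[\hat{x}]-x\right)^{2} + \operatorname{Var}(\hat{x}).
    \]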

  6. Phases of a stack of membranes in a large number of dimensions of configuration space

    Science.gov (United States)

    Borelli, M. E.; Kleinert, H.

    2001-05-01

    The phase diagram of a stack of tensionless membranes with nonlinear curvature energy and vertical harmonic interaction is calculated exactly in a large number of dimensions of configuration space. At low temperatures, the system forms a lamellar phase with spontaneously broken translational symmetry in the vertical direction. At a critical temperature, the stack disorders vertically in a meltinglike transition. The critical temperature is determined as a function of the interlayer separation l.

  7. Talking probabilities: communicating probabilistic information with words and numbers

    NARCIS (Netherlands)

    Renooij, S.; Witteman, C.L.M.

    1999-01-01

    The number of knowledge-based systems that build on Bayesian belief networks is increasing. The construction of such a network however requires a large number of probabilities in numerical form. This is often considered a major obstacle, one of the reasons being that experts are reluctant to provide

  8. Early stage animal hoarders: are these owners of large numbers of adequately cared for cats?

    OpenAIRE

    Ramos, D.; da Cruz, N. O.; Ellis, Sarah; Hernandez, J. A. E.; Reche-Junior, A.

    2013-01-01

    Animal hoarding is a spectrum-based condition in which hoarders are often reported to have had normal and appropriate pet-keeping habits in childhood and early adulthood. Historically, research has focused largely on well established clinical animal hoarders with little work targeted towards the onset and development of animal hoarding. This study investigated whether a Brazilian population of owners of what might typically be considered an excessive number (20 or more) of cats were more like...

  9. Decision process in MCDM with large number of criteria and heterogeneous risk preferences

    Directory of Open Access Journals (Sweden)

    Jian Liu

    Full Text Available A new decision process is proposed to address the challenge posed by a large number of criteria in the multi-criteria decision making (MCDM) problem and by decision makers with heterogeneous risk preferences. First, from the perspective of objective data, the effective criteria are extracted based on the similarity relations between criterion values, and the criteria are weighted accordingly. Second, the corresponding types of theoretic models of risk-preference expectations are built, based on the possibility and similarity between criterion values, to resolve the problem of different interval numbers having the same expectation. Then, the risk preferences (risk-seeking, risk-neutral and risk-averse) are embedded in the decision process. Next, the optimal decision object is selected according to the risk preferences of the decision makers based on the corresponding theoretic model. Finally, a new information aggregation algorithm is proposed based on fairness maximization of decision results for group decisions, considering the coexistence of decision makers with heterogeneous risk preferences. The scientific rationality of this new method is verified through the analysis of a real case. Keywords: Heterogeneous, Risk preferences, Fairness, Decision process, Group decision

  10. On the Convergence and Law of Large Numbers for the Non-Euclidean Lp -Means

    Directory of Open Access Journals (Sweden)

    George Livadiotis

    2017-05-01

    Full Text Available This paper describes and proves two important theorems that compose the Law of Large Numbers for the non-Euclidean L_p-means, known to be true for the Euclidean L_2-means: let the L_p-mean estimator be the specific functional that estimates the L_p-mean of N independent and identically distributed random variables; then, (i) the expectation value of the L_p-mean estimator equals the mean of the distributions of the random variables; and (ii) the limit N → ∞ of the L_p-mean estimator also equals the mean of the distributions.
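    A compact restatement of the two claims, in notation that paraphrases the abstract rather than quoting the authors: writing the L_p-mean estimator of an i.i.d. sample X_1, ..., X_N as the minimizer of the summed p-th power deviations,
    \[
      \hat{\mu}_{p,N} \;=\; \arg\min_{\mu}\;\sum_{i=1}^{N}\lvert X_i-\mu\rvert^{p},
    \]
    the two theorems assert that
    \[
      \text{(i)}\quad \mathbb{E}\!\left[\hat{\mu}_{p,N}\right]=\mu_p,
      \qquad
      \text{(ii)}\quad \lim_{N\to\infty}\hat{\mu}_{p,N}=\mu_p,
    \]
    where \mu_p denotes the L_p-mean of the common distribution of the X_i.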

  11. Contralateral delay activity provides a neural measure of the number of representations in visual working memory.

    Science.gov (United States)

    Ikkai, Akiko; McCollough, Andrew W; Vogel, Edward K

    2010-04-01

    Visual working memory (VWM) helps to temporarily represent information from the visual environment and is severely limited in capacity. Recent work has linked various forms of neural activity to the ongoing representations in VWM. One piece of evidence comes from human event-related potential studies, which find a sustained contralateral negativity during the retention period of VWM tasks. This contralateral delay activity (CDA) has previously been shown to increase in amplitude as the number of memory items increases, up to the individual's working memory capacity limit. However, significant alternative hypotheses remain regarding the true nature of this activity. Here we test whether the CDA is modulated by the perceptual requirements of the memory items as well as whether it is determined by the number of locations that are being attended within the display. Our results provide evidence against these two alternative accounts and instead strongly support the interpretation that this activity reflects the current number of objects that are being represented in VWM.

  12. Effective field theories in the large-N limit

    International Nuclear Information System (INIS)

    Weinberg, S.

    1997-01-01

    Various effective field theories in four dimensions are shown to have exact nontrivial solutions in the limit as the number N of fields of some type becomes large. These include extended versions of the U(N) Gross-Neveu model, the nonlinear O(N) σ model, and the CP^{N-1} model. Although these models are not renormalizable in the usual sense, the infinite number of coupling types allows a complete cancellation of infinities. These models provide qualitative predictions of the form of scattering amplitudes for arbitrary momenta, but because of the infinite number of free parameters, it is possible to derive quantitative predictions only in the limit of small momenta. For small momenta the large-N limit provides only a modest simplification, removing at most a finite number of diagrams to each order in momenta, except near phase transitions, where it reduces the infinite number of diagrams that contribute for low momenta to a finite number. copyright 1997 The American Physical Society

  13. The Ramsey numbers of large cycles versus small wheels

    NARCIS (Netherlands)

    Surahmat,; Baskoro, E.T.; Broersma, H.J.

    2004-01-01

    For two given graphs G and H, the Ramsey number R(G;H) is the smallest positive integer N such that for every graph F of order N the following holds: either F contains G as a subgraph or the complement of F contains H as a subgraph. In this paper, we determine the Ramsey number R(C_n;W_m) for m = 4
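    In symbols, the definition just given reads
    \[
      R(G,H) \;=\; \min\left\{\, N \in \mathbb{N} \;:\; \text{every graph } F \text{ on } N \text{ vertices satisfies } G \subseteq F \ \text{or}\ H \subseteq \overline{F} \,\right\}.
    \]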

  14. Summary of experience from a large number of construction inspections; Wind power plant projects; Erfarenhetsaaterfoering fraan entreprenadbesiktningar

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Bertil; Holmberg, Rikard

    2010-08-15

    This report presents a summary of experience from a large number of construction inspections of wind power projects. The working method is based on the collection of construction experience in the form of questionnaires. The questionnaires were supplemented by a number of in-depth interviews to understand in more detail what is perceived to be a problem and whether there were suggestions for improvements. The results in this report are based on inspection protocols from 174 wind turbines, which corresponds to about one-third of the power plants built in the time period. In total the questionnaires included 4683 inspection remarks as well as about one hundred free-text comments. 52 of the 174 inspected power stations were rejected, corresponding to 30%. It has not been possible to identify any over-represented type of remark as a main cause of rejection, but the rejection is usually based on a total number of remarks that is too large. The average number of remarks for a power plant is 27. Most power stations have between 20 and 35 remarks. The most common remarks concern shortcomings in marking and documentation. These are easily adjusted and may be regarded as less serious. There are, however, a number of remarks which are recurrent and quite serious, mainly regarding the gearbox, education and lightning protection. Usually these are also easily adjusted, but the consequences if not corrected can be very large. The consequences may be either shortened life of expensive components, e.g. oil problems in gear boxes, or increased probability of serious accidents, e.g. maladjusted lightning protection. In the report, a comparison between power stations with various construction periods, sizes, suppliers, geography and topography is also presented. The general conclusion is that the differences are small. The results of the evaluation of the questionnaires correspond well with the results of the in-depth interviews with clients. The problem that clients agreed upon as the greatest is the lack

  15. A Genome-Wide Association Study in Large White and Landrace Pig Populations for Number Piglets Born Alive

    Science.gov (United States)

    Bergfelder-Drüing, Sarah; Grosse-Brinkhaus, Christine; Lind, Bianca; Erbe, Malena; Schellander, Karl; Simianer, Henner; Tholen, Ernst

    2015-01-01

    The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between Large White and Landrace breed were detected. PMID:25781935

  16. A genome-wide association study in large white and landrace pig populations for number piglets born alive.

    Directory of Open Access Journals (Sweden)

    Sarah Bergfelder-Drüing

    Full Text Available The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between Large White and Landrace breed were detected.

  17. Can genetic estimators provide robust estimates of the effective number of breeders in small populations?

    Directory of Open Access Journals (Sweden)

    Marion Hoehn

    Full Text Available The effective population size (N_e) is proportional to the loss of genetic diversity and the rate of inbreeding, and its accurate estimation is crucial for the monitoring of small populations. Here, we integrate temporal studies of the gecko Oedura reticulata to compare genetic and demographic estimators of N_e. Because geckos have overlapping generations, our goal was to demographically estimate N_bI, the inbreeding effective number of breeders, and to calculate the N_bI/N_a ratio (N_a = number of adults) for four populations. Demographically estimated N_bI ranged from 1 to 65 individuals. The mean reduction in the effective number of breeders relative to census size (N_bI/N_a) was 0.1 to 1.1. We identified the variance in reproductive success as the most important variable contributing to the reduction of this ratio. We used four methods to obtain the genetic-based inbreeding effective number of breeders N_bI(gen) and the variance effective population size N_eV(gen) estimates from the genotype data. Two of these methods - a temporal moment-based approach (MBT) and a likelihood-based approach (TM3) - require at least two samples in time, while the other two were single-sample estimators - the linkage disequilibrium method with bias correction (LDNe) and the program ONeSAMP. The genetic-based estimates were fairly similar across methods and also similar to the demographic estimates, excluding those estimates in which the upper confidence interval boundaries were uninformative. For example, LDNe and ONeSAMP estimates ranged from 14-55 and 24-48 individuals, respectively. However, temporal methods suffered from a large variation in confidence intervals and concerns about the prior information. We conclude that the single-sample estimators are an acceptable short-cut to estimate N_bI for species such as geckos and will be of great importance for the monitoring of species in fragmented landscapes.

  18. Impact factors for Reggeon-gluon transition in N=4 SYM with large number of colours

    Energy Technology Data Exchange (ETDEWEB)

    Fadin, V.S., E-mail: fadin@inp.nsk.su [Budker Institute of Nuclear Physics of SD RAS, 630090 Novosibirsk (Russian Federation); Novosibirsk State University, 630090 Novosibirsk (Russian Federation); Fiore, R., E-mail: roberto.fiore@cs.infn.it [Dipartimento di Fisica, Università della Calabria, and Istituto Nazionale di Fisica Nucleare, Gruppo collegato di Cosenza, Arcavacata di Rende, I-87036 Cosenza (Italy)

    2014-06-27

    We calculate impact factors for Reggeon-gluon transition in supersymmetric Yang–Mills theory with four supercharges at a large number of colours N_c. At next-to-leading order, impact factors are not uniquely defined and must accord with the BFKL kernels and energy scales. We obtain the impact factor corresponding to the kernel and the energy evolution parameter, which is invariant under Möbius transformation in momentum space, and show that it is also Möbius invariant up to terms taken into account in the BDS ansatz.

  19. Slepian simulation of distributions of plastic displacements of earthquake excited shear frames with a large number of stories

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Ditlevsen, Ove

    2005-01-01

    The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elasti...

  20. Space Situational Awareness of Large Numbers of Payloads From a Single Deployment

    Science.gov (United States)

    Segerman, A.; Byers, J.; Emmert, J.; Nicholas, A.

    2014-09-01

    The nearly simultaneous deployment of a large number of payloads from a single vehicle presents a new challenge for space object catalog maintenance and space situational awareness (SSA). Following two cubesat deployments last November, it took five weeks to catalog the resulting 64 orbits. The upcoming Kicksat mission will present an even greater SSA challenge, with its deployment of 128 chip-sized picosats. Although all of these deployments are in short-lived orbits, future deployments will inevitably occur at higher altitudes, with a longer term threat of collision with active spacecraft. With such deployments, individual scientific payload operators require rapid precise knowledge of their satellites' locations. Following the first November launch, the cataloguing did not initially associate a payload with each orbit, leaving this to the satellite operators. For short duration missions, the time required to identify an experiment's specific orbit may easily be a large fraction of the spacecraft's lifetime. For a Kicksat-type deployment, present tracking cannot collect enough observations to catalog each small object. The current approach is to treat the chip cloud as a single catalog object. However, the cloud dissipates into multiple subclouds and, ultimately, tiny groups of untrackable chips. One response to this challenge may be to mandate installation of a transponder on each spacecraft. Directional transponder transmission detections could be used as angle observations for orbit cataloguing. Of course, such an approach would only be employable with cooperative spacecraft. In other cases, a probabilistic association approach may be useful, with the goal being to establish the probability of an element being at a given point in space. This would permit more reliable assessment of the probability of collision of active spacecraft with any cloud element. This paper surveys the cataloguing challenges presented by large scale deployments of small spacecraft

  1. Precise large deviations of aggregate claims in a size-dependent renewal risk model with stopping time claim-number process

    Directory of Open Access Journals (Sweden)

    Shuo Zhang

    2017-04-01

    Full Text Available Abstract In this paper, we consider a size-dependent renewal risk model with stopping time claim-number process. In this model, we do not make any assumption on the dependence structure of claim sizes and inter-arrival times. We study large deviations of the aggregate amount of claims. For the subexponential heavy-tailed case, we obtain a precise large-deviation formula; our method substantially relies on a martingale for the structure of our models.

  2. Eosinophils may play regionally disparate roles in influencing IgA(+) plasma cell numbers during large and small intestinal inflammation.

    Science.gov (United States)

    Forman, Ruth; Bramhall, Michael; Logunova, Larisa; Svensson-Frej, Marcus; Cruickshank, Sheena M; Else, Kathryn J

    2016-05-31

    Eosinophils are innate immune cells present in the intestine during steady state conditions. An intestinal eosinophilia is a hallmark of many infections and an accumulation of eosinophils is also observed in the intestine during inflammatory disorders. Classically the function of eosinophils has been associated with tissue destruction, due to the release of cytotoxic granule contents. However, recent evidence has demonstrated that the eosinophil plays a more diverse role in the immune system than previously acknowledged, including shaping adaptive immune responses and providing plasma cell survival factors during the steady state. Importantly, it is known that there are regional differences in the underlying immunology of the small and large intestine, but whether there are differences in context of the intestinal eosinophil in the steady state or inflammation is not known. Our data demonstrates that there are fewer IgA(+) plasma cells in the small intestine of eosinophil-deficient ΔdblGATA-1 mice compared to eosinophil-sufficient wild-type mice, with the difference becoming significant post-infection with Toxoplasma gondii. Remarkably, and in complete contrast, the absence of eosinophils in the inflamed large intestine does not impact on IgA(+) cell numbers during steady state, and is associated with a significant increase in IgA(+) cells post-infection with Trichuris muris compared to wild-type mice. Thus, the intestinal eosinophil appears to be less important in sustaining the IgA(+) cell pool in the large intestine compared to the small intestine, and in fact, our data suggests eosinophils play an inhibitory role. The dichotomy in the influence of the eosinophil over small and large intestinal IgA(+) cells did not depend on differences in plasma cell growth factors, recruitment potential or proliferation within the different regions of the gastrointestinal tract (GIT). We demonstrate for the first time that there are regional differences in the requirement of

  3. On random number generators providing convergence more rapid than 1/√N

    International Nuclear Information System (INIS)

    Belov, V.A.

    1982-01-01

    To support the simulation of processes in high energy physics, a practical test of the efficiency of applying quasirandom numbers to multiple integration with the Monte Carlo method is presented, together with a comparison of the well-known generators of quasirandom and pseudorandom numbers [ru
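    As a generic illustration of this point (using a Halton sequence as the quasirandom source; the specific generators compared in the record are not reproduced here), the sketch below estimates a smooth two-dimensional integral with pseudorandom and quasirandom points of equal sample size.

    import numpy as np

    def halton(n, base):
        """Radical-inverse (van der Corput) sequence in the given base."""
        seq = np.empty(n)
        for i in range(n):
            f, r, k = 1.0, 0.0, i + 1
            while k > 0:
                f /= base
                r += f * (k % base)
                k //= base
            seq[i] = r
        return seq

    def integrand(x, y):
        return np.exp(-(x**2 + y**2))            # smooth test function on [0,1]^2

    n = 4096
    rng = np.random.default_rng(1)
    x_p, y_p = rng.random(n), rng.random(n)      # pseudorandom points
    x_q, y_q = halton(n, 2), halton(n, 3)        # quasirandom (Halton) points

    exact = 0.7468241328 ** 2                    # (integral of exp(-x^2) over [0,1]) squared
    for label, (x, y) in [("pseudo", (x_p, y_p)), ("quasi", (x_q, y_q))]:
        est = integrand(x, y).mean()
        print(f"{label:6s} estimate {est:.6f}   abs error {abs(est - exact):.2e}")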

  4. Source of vacuum electromagnetic zero-point energy and Dirac's large numbers hypothesis

    International Nuclear Information System (INIS)

    Simaciu, I.; Dumitrescu, G.

    1993-01-01

    The stochastic electrodynamics states that the zero-point fluctuation of the vacuum (ZPF) is an electromagnetic zero-point radiation with spectral density ρ(ω) = ℏω^3/(2π^2 c^3). Protons, free electrons and atoms are sources for this radiation. Each of them absorbs and emits energy by interacting with the ZPF. At equilibrium, ZPF radiation is scattered by dipoles. The scattered radiation spectral density is ρ(ω,r) = ρ(ω) c σ(ω)/(4πr^2). The spectral density of the dipole radiation of the Universe is ρ = ∫_0^R n ρ(ω,r) 4πr^2 dr. But if σ_atom ≈ σ_p ≈ σ_e ≈ σ_T, then ρ ≈ ρ(ω) σ_T R n. Moreover, if ρ = ρ(ω), then σ_T R n = 1. With R = GM/c^2 and σ_T ≅ (e^2/m_e c^2)^2 ∝ r_e^2, the relation σ_T R n ≈ 1 is equivalent to R/r_e = e^2/(G m_p m_e), i.e. the cosmological coincidence discussed in the context of Dirac's large-numbers hypothesis. (Author)
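    A quick numerical check of the coincidence quoted at the end of this record, using rounded CODATA-style SI constants (a back-of-the-envelope illustration, not part of the cited paper):

    import math

    e    = 1.602176634e-19    # elementary charge, C
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    G    = 6.67430e-11        # gravitational constant, m^3 kg^-1 s^-2
    m_p  = 1.67262192369e-27  # proton mass, kg
    m_e  = 9.1093837015e-31   # electron mass, kg

    coulomb = e**2 / (4.0 * math.pi * eps0)   # electrostatic coupling between electron and proton, J*m
    gravity = G * m_p * m_e                   # gravitational coupling between electron and proton, J*m
    print(f"e^2 / (G m_p m_e)  ~  {coulomb / gravity:.2e}")   # ~2.3e39, an Eddington-Dirac large number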

  5. Strong Law of Large Numbers for Hidden Markov Chains Indexed by an Infinite Tree with Uniformly Bounded Degrees

    Directory of Open Access Journals (Sweden)

    Huilin Huang

    2014-01-01

    Full Text Available We study strong limit theorems for hidden Markov chains fields indexed by an infinite tree with uniformly bounded degrees. We mainly establish the strong law of large numbers for hidden Markov chains fields indexed by an infinite tree with uniformly bounded degrees and give the strong limit law of the conditional sample entropy rate.

  6. Early Peritonitis in a Large Peritoneal Dialysis Provider System in Colombia.

    Science.gov (United States)

    Vargas, Edgar; Blake, Peter G; Sanabria, Mauricio; Bunch, Alfonso; López, Patricia; Vesga, Jasmín; Buitrago, Alberto; Astudillo, Kindar; Devia, Martha; Sánchez, Ricardo

    ♦ BACKGROUND: Peritonitis is the most important complication of peritoneal dialysis (PD), and early peritonitis rate is predictive of the subsequent course on PD. Our aim was to calculate the early peritonitis rate and to identify characteristics and predisposing factors in a large nationwide PD provider network in Colombia. ♦ METHODS: This was a historical observational cohort study of all adult patients starting PD between January 1, 2012, and December 31, 2013, in 49 renal facilities in the Renal Therapy Services in Colombia. We studied the peritonitis rate in the first 90 days of treatment, its causative micro-organisms, its predictors and its variation with time on PD and between individual facilities. ♦ RESULTS: A total of 3,525 patients initiated PD, with 176 episodes of peritonitis during 752 patient-years of follow-up for a rate of 0.23 episodes per patient year equivalent to 1 every 52 months. In 41 of 49 units, the rate was better than 1 per 33 months, and in 45, it was better than 1 per 24 months. Peritonitis rates did not differ with age, ethnicity, socioeconomic status, or PD modality. We identified high incidence risk periods at 2 to 5 weeks after initiation of PD and again at 10 to 12 weeks. ♦ CONCLUSION: An excellent peritonitis rate was achieved across a large nationwide network. This occurred in the context of high nationwide PD utilization and despite high rates of socioeconomic deprivation. We propose that a key factor in achieving this was a standardized approach to management of patients. Copyright © 2017 International Society for Peritoneal Dialysis.

  7. The impact of new forms of large-scale general practice provider collaborations on England's NHS: a systematic review.

    Science.gov (United States)

    Pettigrew, Luisa M; Kumpunen, Stephanie; Mays, Nicholas; Rosen, Rebecca; Posaner, Rachel

    2018-03-01

    Over the past decade, collaboration between general practices in England to form new provider networks and large-scale organisations has been driven largely by grassroots action among GPs. However, it is now being increasingly advocated for by national policymakers. Expectations of what scaling up general practice in England will achieve are significant. To review the evidence of the impact of new forms of large-scale general practice provider collaborations in England. Systematic review. Embase, MEDLINE, Health Management Information Consortium, and Social Sciences Citation Index were searched for studies reporting the impact on clinical processes and outcomes, patient experience, workforce satisfaction, or costs of new forms of provider collaborations between general practices in England. A total of 1782 publications were screened. Five studies met the inclusion criteria and four examined the same general practice networks, limiting generalisability. Substantial financial investment was required to establish the networks and the associated interventions that were targeted at four clinical areas. Quality improvements were achieved through standardised processes, incentives at network level, information technology-enabled performance dashboards, and local network management. The fifth study of a large-scale multisite general practice organisation showed that it may be better placed to implement safety and quality processes than conventional practices. However, unintended consequences may arise, such as perceptions of disenfranchisement among staff and reductions in continuity of care. Good-quality evidence of the impacts of scaling up general practice provider organisations in England is scarce. As more general practice collaborations emerge, evaluation of their impacts will be important to understand which work, in which settings, how, and why. © British Journal of General Practice 2018.

  8. Factors associated with self-reported number of teeth in a large national cohort of Thai adults

    Directory of Open Access Journals (Sweden)

    Yiengprugsawan Vasoontara

    2011-11-01

    Full Text Available Abstract Background Oral health in later life results from an individual's lifelong accumulation of experiences at the personal, community and societal levels. There is little information relating oral health outcomes to risk factors in Asian middle-income settings such as Thailand today. Methods Data were derived from a cohort of 87,134 adults enrolled at Sukhothai Thammathirat Open University who completed self-administered questionnaires in 2005. Cohort members were aged between 15 and 87 years and resided throughout Thailand. This is a large study of self-reported number of teeth among Thai adults. Bivariate and multivariate logistic regressions were used to analyse factors associated with self-reported number of teeth. Results After adjusting for covariates, being female (OR = 1.28), older age (OR = 10.6), having low income (OR = 1.45), having lower education (OR = 1.33), and being a lifetime urban resident (OR = 1.37) were statistically associated (p Conclusions This study addresses the gap in knowledge on factors associated with self-reported number of teeth. The promotion of healthy childhoods and adult lifestyles are important public health interventions to increase tooth retention in middle and older age.

  9. A NICE approach to managing large numbers of desktop PC's

    International Nuclear Information System (INIS)

    Foster, David

    1996-01-01

    The problems of managing desktop systems are far from resolved as we deploy increasing numbers of systems: PCs, Macintoshes and UN*X workstations. This paper concentrates on the solution adopted at CERN for the management of the rapidly increasing number of desktop PCs in use in all parts of the laboratory. (author)

  10. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    International Nuclear Information System (INIS)

    Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel; Cuevas, Sergio; Ramos, Eduardo

    2014-01-01

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM) which was recently introduced (P. Meunier and E. Villermaux, “The diffusive strip method for scalar mixing in two-dimensions,” J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the PDFs of the scalar to be predicted in agreement with numerical and experimental results. This model also indicates that the PDFs of the scalar are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors

  11. The future of large old trees in urban landscapes.

    Science.gov (United States)

    Le Roux, Darren S; Ikin, Karen; Lindenmayer, David B; Manning, Adrian D; Gibbons, Philip

    2014-01-01

    Large old trees are disproportionate providers of structural elements (e.g. hollows, coarse woody debris), which are crucial habitat resources for many species. The decline of large old trees in modified landscapes is of global conservation concern. Once large old trees are removed, they are difficult to replace in the short term due to typically prolonged time periods needed for trees to mature (i.e. centuries). Few studies have investigated the decline of large old trees in urban landscapes. Using a simulation model, we predicted the future availability of native hollow-bearing trees (a surrogate for large old trees) in an expanding city in southeastern Australia. In urban greenspace, we predicted that the number of hollow-bearing trees is likely to decline by 87% over 300 years under existing management practices. Under a worst case scenario, hollow-bearing trees may be completely lost within 115 years. Conversely, we predicted that the number of hollow-bearing trees will likely remain stable in semi-natural nature reserves. Sensitivity analysis revealed that the number of hollow-bearing trees perpetuated in urban greenspace over the long term is most sensitive to: (1) the maximum standing life of trees; (2) the number of regenerating seedlings ha(-1); and (3) the rate of hollow formation. We tested the efficacy of alternative urban management strategies and found that arresting the decline of large old trees requires a collective management strategy that ensures: (1) trees remain standing for at least 40% longer than currently tolerated lifespans; (2) the number of seedlings established is increased by at least 60%; and (3) the formation of habitat structures provided by large old trees is accelerated by at least 30% (e.g. artificial structures) to compensate for short term deficits in habitat resources. Immediate implementation of these recommendations is needed to avert long term risk to urban biodiversity.

  12. Introduction to the spectral distribution method. Application example to the subspaces with a large number of quasi particles

    International Nuclear Information System (INIS)

    Arvieu, R.

    The assumptions and principles of the spectral distribution method are reviewed. The object of the method is to deduce information on the nuclear spectra by constructing a frequency function which has the same first few moments as the exact frequency function, these moments being calculated exactly. The method is applied to subspaces containing a large number of quasi particles [fr

  13. Workflow management in large distributed systems

    International Nuclear Information System (INIS)

    Legrand, I; Newman, H; Voicu, R; Dobre, C; Grigoras, C

    2011-01-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.

  14. Workflow management in large distributed systems

    Science.gov (United States)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.

  15. Those fascinating numbers

    CERN Document Server

    Koninck, Jean-Marie De

    2009-01-01

    Who would have thought that listing the positive integers along with their most remarkable properties could end up being such an engaging and stimulating adventure? The author uses this approach to explore elementary and advanced topics in classical number theory. A large variety of numbers are contemplated: Fermat numbers, Mersenne primes, powerful numbers, sublime numbers, Wieferich primes, insolite numbers, Sastry numbers, voracious numbers, to name only a few. The author also presents short proofs of miscellaneous results and constantly challenges the reader with a variety of old and new n

  16. A course in mathematical statistics and large sample theory

    CERN Document Server

    Bhattacharya, Rabi; Patrangenaru, Victor

    2016-01-01

    This graduate-level textbook is primarily aimed at graduate students of statistics, mathematics, science, and engineering who have had an undergraduate course in statistics, an upper division course in analysis, and some acquaintance with measure theoretic probability. It provides a rigorous presentation of the core of mathematical statistics. Part I of this book constitutes a one-semester course on basic parametric mathematical statistics. Part II deals with the large sample theory of statistics — parametric and nonparametric, and its contents may be covered in one semester as well. Part III provides brief accounts of a number of topics of current interest for practitioners and other disciplines whose work involves statistical methods. Large Sample theory with many worked examples, numerical calculations, and simulations to illustrate theory Appendices provide ready access to a number of standard results, with many proofs Solutions given to a number of selected exercises from Part I Part II exercises with ...

  17. Modification of the large-scale features of high Reynolds number wall turbulence by passive surface obtrusions

    Energy Technology Data Exchange (ETDEWEB)

    Monty, J.P.; Lien, K.; Chong, M.S. [University of Melbourne, Department of Mechanical Engineering, Parkville, VIC (Australia); Allen, J.J. [New Mexico State University, Department of Mechanical Engineering, Las Cruces, NM (United States)

    2011-12-15

    A high Reynolds number boundary-layer wind-tunnel facility at New Mexico State University was fitted with a regularly distributed braille surface. The surface was such that braille dots were closely packed in the streamwise direction and sparsely spaced in the spanwise direction. This novel surface had an unexpected influence on the flow: the energy of the very large-scale features of wall turbulence (approximately six-times the boundary-layer thickness in length) became significantly attenuated, even into the logarithmic region. To the author's knowledge, this is the first experimental study to report a modification of 'superstructures' in a rough-wall turbulent boundary layer. The result gives rise to the possibility that flow control through very small, passive surface roughness may be possible at high Reynolds numbers, without the prohibitive drag penalty anticipated heretofore. Evidence was also found for the uninhibited existence of the near-wall cycle, well known to smooth-wall-turbulence researchers, in the spanwise space between roughness elements. (orig.)

  18. Strong Law of Large Numbers for Countable Markov Chains Indexed by an Infinite Tree with Uniformly Bounded Degree

    Directory of Open Access Journals (Sweden)

    Bao Wang

    2014-01-01

    Full Text Available We study the strong law of large numbers for the frequencies of occurrence of states and ordered couples of states for countable Markov chains indexed by an infinite tree with uniformly bounded degree, which extends the corresponding results of countable Markov chains indexed by a Cayley tree and generalizes the relative results of finite Markov chains indexed by a uniformly bounded tree.

  19. On the Use of Educational Numbers: Comparative Constructions of Hierarchies by Means of Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Daniel Pettersson

    2016-01-01

    later the growing importance of transnational agencies and international, regional and national assessments.

  20. Properties of sound attenuation around a two-dimensional underwater vehicle with a large cavitation number

    International Nuclear Information System (INIS)

    Ye Peng-Cheng; Pan Guang

    2015-01-01

    Due to the high speed of underwater vehicles, cavitation is inevitably generated, along with sound attenuation when the sound signal traverses the cavity region around the underwater vehicle. The linear wave propagation is studied to obtain the influence of the bubbly liquid on the acoustic wave propagation in the cavity region. The sound attenuation coefficient and the sound speed formula of the bubbly liquid are presented. Based on the sound attenuation coefficients for various vapor volume fractions, the attenuation of sound intensity is calculated under large cavitation number conditions. The result shows that the sound intensity attenuation is fairly small under certain conditions. Consequently, the intensity attenuation can be neglected in engineering practice. (paper)
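    For context, the strong effect of even a small vapor volume fraction α on the mixture sound speed is often captured by the Wood relation (a standard textbook expression for a bubbly liquid, not necessarily the specific formula derived in this record):
    \[
      \frac{1}{\rho_m c_m^{2}} = \frac{\alpha}{\rho_g c_g^{2}} + \frac{1-\alpha}{\rho_l c_l^{2}},
      \qquad
      \rho_m = \alpha\rho_g + (1-\alpha)\rho_l ,
    \]
    where the subscripts g, l and m denote the gas, the liquid and the mixture, respectively.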

  1. Dam risk reduction study for a number of large tailings dams in Ontario

    Energy Technology Data Exchange (ETDEWEB)

    Verma, N. [AMEC Earth and Environmental Ltd., Mississauga, ON (Canada); Small, A. [AMEC Earth and Environmental Ltd., Fredericton, NB (Canada); Martin, T. [AMEC Earth and Environmental, Burnaby, BC (Canada); Cacciotti, D. [AMEC Earth and Environmental Ltd., Sudbury, ON (Canada); Ross, T. [Vale Inco Ltd., Sudbury, ON (Canada)

    2009-07-01

    This paper discussed a risk reduction study conducted for 10 large tailings dams located at a central tailings facility in Ontario. Located near large industrial and urban developments, the tailings dams were built using an upstream method of construction that did not involve beach compaction or the provision of under-drainage. The study provided a historical background for the dam and presented results from investigations and instrumentation data. The methods used to develop the dam configurations were discussed, and remedial measures and risk assessment measures used on the dams were reviewed. The aim of the study was to address key sources of risk, which include the presence of high pore pressures and hydraulic gradients; the potential for liquefaction; slope instability; and the potential for overtopping. A borehole investigation was conducted and piezocone probes were used to obtain continuous data and determine soil and groundwater conditions. The study identified that the lower portion of the dam slopes were of concern. Erosion gullies could lead to larger scale failures, and elevated pore pressures could lead to the risk of seepage breakouts. It was concluded that remedial measures are now being conducted to ensure slope stability. 6 refs., 1 tab., 6 figs.

  2. The numbers game in wildlife conservation: changeability and framing of large mammal numbers in Zimbabwe

    NARCIS (Netherlands)

    Gandiwa, E.

    2013-01-01

    Wildlife conservation in terrestrial ecosystems requires an understanding of processes influencing population sizes. Top-down and bottom-up processes are important in large herbivore population dynamics, with strength of these processes varying spatially and temporally. However, up until

  3. The MIXMAX random number generator

    Science.gov (United States)

    Savvidy, Konstantin G.

    2015-11-01

    In this paper, we study the randomness properties of unimodular matrix random number generators. Under well-known conditions, these discrete-time dynamical systems have the highly desirable K-mixing properties which guarantee high-quality random numbers. It is found that some widely used random number generators have poor Kolmogorov entropy and consequently fail in empirical tests of randomness. These tests show that the lowest acceptable value of the Kolmogorov entropy is around 50. Next, we provide a solution to the problem of determining the maximal period of unimodular matrix generators of pseudo-random numbers. We formulate the necessary and sufficient condition to attain the maximum period and present a family of specific generators in the MIXMAX family with superior performance and excellent statistical properties. Finally, we construct three efficient algorithms for operations with the MIXMAX matrix, which is a multi-dimensional generalization of the famous cat map: the first computes multiplication by the MIXMAX matrix in O(N) operations, the second recursively computes its characteristic polynomial in O(N^2) operations, and the third applies skips of a large number of steps S to the sequence in O(N^2 log(S)) operations.
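
    A rough illustration of the recursion behind such generators, advancing a state vector by a unimodular matrix modulo a prime; the matrix, modulus, and output scaling below are illustrative choices, not the actual MIXMAX specification.

```python
import numpy as np

# Minimal sketch of a matrix-recursion generator: the state vector u is advanced
# by u <- A u (mod p) and the state is scaled to [0, 1) as output.
# The 3x3 matrix below is a hypothetical illustration, NOT the MIXMAX matrix;
# MIXMAX uses a special N x N unimodular matrix chosen for large Kolmogorov entropy.
P = 2**31 - 1                              # a Mersenne prime modulus (illustrative choice)
A = np.array([[1, 1, 1],
              [1, 2, 2],
              [1, 2, 3]], dtype=np.int64)  # determinant 1, i.e. unimodular (hypothetical matrix)

def mat_rng(seed, n_steps):
    """Return pseudo-random floats in [0, 1) from the matrix recursion."""
    u = np.array(seed, dtype=np.int64) % P
    out = []
    for _ in range(n_steps):
        u = (A @ u) % P                    # one step of the linear recursion mod p
        out.extend(u / P)                  # emit the whole state vector as outputs
    return out

print(mat_rng([1, 2, 3], 2))
```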

  4. On the chromatic number of triangle-free graphs of large minimum degree

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2002-01-01

    We prove that, for each fixed real number c > 1/3, the triangle-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdős and Simonovits in 1973, who pointed out that there is no such result for c < 1/3.

  5. The Love of Large Numbers: A Popularity Bias in Consumer Choice.

    Science.gov (United States)

    Powell, Derek; Yu, Jingqi; DeWolf, Melissa; Holyoak, Keith J

    2017-10-01

    Social learning-the ability to learn from observing the decisions of other people and the outcomes of those decisions-is fundamental to human evolutionary and cultural success. The Internet now provides social evidence on an unprecedented scale. However, properly utilizing this evidence requires a capacity for statistical inference. We examined how people's interpretation of online review scores is influenced by the numbers of reviews-a potential indicator both of an item's popularity and of the precision of the average review score. Our task was designed to pit statistical information against social information. We modeled the behavior of an "intuitive statistician" using empirical prior information from millions of reviews posted on Amazon.com and then compared the model's predictions with the behavior of experimental participants. Under certain conditions, people preferred a product with more reviews to one with fewer reviews even though the statistical model indicated that the latter was likely to be of higher quality than the former. Overall, participants' judgments suggested that they failed to make meaningful statistical inferences.
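
    A minimal sketch of the "intuitive statistician" idea, shrinking an observed review average toward a prior mean in proportion to the number of reviews; the prior values here are hypothetical placeholders, not the empirical Amazon prior used in the paper.

```python
def shrunk_quality(observed_mean, n_reviews, prior_mean=4.2, prior_strength=10):
    """Posterior-mean style estimate of product quality.

    A simple shrinkage estimator: with few reviews the estimate stays near the
    prior mean; with many reviews it approaches the observed average.
    prior_mean and prior_strength are hypothetical, not the paper's empirical prior.
    """
    return (prior_strength * prior_mean + n_reviews * observed_mean) / (prior_strength + n_reviews)

# A 5.0-star item with 3 reviews vs. a 4.4-star item with 300 reviews:
print(shrunk_quality(5.0, 3))    # pulled strongly toward the prior (~4.38)
print(shrunk_quality(4.4, 300))  # barely shrunk (~4.39); may rank higher despite the lower average
```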

  6. Atomic Number Dependence of Hadron Production at Large Transverse Momentum in 300 GeV Proton--Nucleus Collisions

    Science.gov (United States)

    Cronin, J. W.; Frisch, H. J.; Shochet, M. J.; Boymond, J. P.; Mermod, R.; Piroue, P. A.; Sumner, R. L.

    1974-07-15

    In an experiment at the Fermi National Accelerator Laboratory we have compared the production of large transverse momentum hadrons from targets of W, Ti, and Be bombarded by 300 GeV protons. The hadron yields were measured at 90 degrees in the proton-nucleon c.m. system with a magnetic spectrometer equipped with two Cerenkov counters and a hadron calorimeter. The production cross-sections have a dependence on the atomic number A that grows with p_⊥, eventually leveling off proportional to A^1.1.

  7. Analysis of a large number of clinical studies for breast cancer radiotherapy: estimation of radiobiological parameters for treatment planning

    International Nuclear Information System (INIS)

    Guerrero, M; Li, X Allen

    2003-01-01

    Numerous studies of early-stage breast cancer treated with breast conserving surgery (BCS) and radiotherapy (RT) have been published in recent years. Both external beam radiotherapy (EBRT) and/or brachytherapy (BT) with different fractionation schemes are currently used. The present RT practice is largely based on empirical experience and it lacks a reliable modelling tool to compare different RT modalities or to design new treatment strategies. The purpose of this work is to derive a plausible set of radiobiological parameters that can be used for RT treatment planning. The derivation is based on existing clinical data and is consistent with the analysis of a large number of published clinical studies on early-stage breast cancer. A large number of published clinical studies on the treatment of early breast cancer with BCS plus RT (including whole breast EBRT with or without a boost to the tumour bed, whole breast EBRT alone, brachytherapy alone) and RT alone are compiled and analysed. The linear quadratic (LQ) model is used in the analysis. Three of these clinical studies are selected to derive a plausible set of LQ parameters. The potential doubling time (T_pot) is set a priori in the derivation according to in vitro measurements from the literature. The impact of considering lower or higher T_pot is investigated. The effects of inhomogeneous dose distributions are considered using clinically representative dose volume histograms. The derived LQ parameters are used to compare a large number of clinical studies using different regimes (e.g., RT modality and/or different fractionation schemes with different prescribed dose) in order to validate their applicability. The values of the equivalent uniform dose (EUD) and biologically effective dose (BED) are used as a common metric to compare the biological effectiveness of each treatment regime. We have obtained a plausible set of radiobiological parameters for breast cancer. This set of parameters is consistent with in vitro...
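
    The BED metric mentioned above can be made concrete; a minimal sketch of the standard LQ-model biologically effective dose (ignoring repopulation), with a hypothetical α/β value rather than the one derived in the paper.

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose of the linear-quadratic model, ignoring
    repopulation: BED = n * d * (1 + d / (alpha/beta))."""
    d = dose_per_fraction
    return n_fractions * d * (1.0 + d / alpha_beta)

# Compare a conventional and a hypofractionated whole-breast schedule,
# using a hypothetical alpha/beta = 4 Gy (not the value derived in the paper).
print(bed(25, 2.0, 4.0))   # 25 x 2 Gy     -> 75.0 Gy
print(bed(16, 2.66, 4.0))  # 16 x 2.66 Gy  -> ~70.9 Gy
```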

  8. CERN experiment provides first glimpse inside cold antihydrogen

    CERN Multimedia

    2002-01-01

    "The ATRAP experiment at the Antiproton Decelerator at CERN has detected and measured large numbers of cold antihydrogen atoms. Relying on ionization of the cold antiatoms when they pass through a strong electric field gradient, the ATRAP measurement provides the first glimpse inside an antiatom, and the first information about the physics of antihydrogen. The results have been accepted for publication in Physical Review Letters" (1 page).

  9. Droplet Breakup in Asymmetric T-Junctions at Intermediate to Large Capillary Numbers

    Science.gov (United States)

    Sadr, Reza; Cheng, Way Lee

    2017-11-01

    Splitting of a parent droplet into multiple daughter droplets of desired sizes is often desired to enhance production and investigational efficiency in microfluidic devices. This can be done in an active or passive mode, depending on whether an external power source is used. In this study, three-dimensional simulations were done using the Volume-of-Fluid (VOF) method to analyze droplet splitting in asymmetric T-junctions with different outlet lengths. The parent droplet is divided into two uneven portions; the volumetric ratio of the daughter droplets, in theory, depends on the length ratios of the outlet branches. The study identified various breakup modes such as primary, transition, bubble and non-breakup under various flow conditions and configurations of the T-junctions. In addition, an analysis of the primary breakup regimes was conducted to study the breakup mechanisms. The results show that the way the droplet splits in an asymmetric T-junction is different from the process in a symmetric T-junction. A model for the asymmetric breakup criteria at intermediate or large Capillary number is presented. The proposed model is an extension of a theoretically derived model for symmetric droplet breakup under similar flow conditions.

  10. Templates, Numbers & Watercolors.

    Science.gov (United States)

    Clemesha, David J.

    1990-01-01

    Describes how a second-grade class used large templates to draw and paint five-digit numbers. The lesson integrated artistic knowledge and vocabulary with their mathematics lesson in place value. Students learned how draftspeople use templates, and they studied number paintings by Charles Demuth and Jasper Johns. (KM)

  11. Observer variability in estimating numbers: An experiment

    Science.gov (United States)

    Erwin, R.M.

    1982-01-01

    Census estimates of bird populations provide an essential framework for a host of research and management questions. However, with some exceptions, the reliability of numerical estimates and the factors influencing them have received insufficient attention. Independent of the problems associated with habitat type, weather conditions, cryptic coloration, etc., estimates may vary widely due only to intrinsic differences in observers' abilities to estimate numbers. Lessons learned in the field of perceptual psychology may be usefully applied to 'real world' problems in field ornithology. Based largely on dot discrimination tests in the laboratory, it was found that numerical abundance, density of objects, spatial configuration, color, background, and other variables influence individual accuracy in estimating numbers. The primary purpose of the present experiment was to assess the effects of observer, prior experience, and numerical range on accuracy in estimating numbers of waterfowl from black-and-white photographs. By using photographs of animals rather than black dots, I felt the results could be applied more meaningfully to field situations. Further, reinforcement was provided throughout some experiments to examine the influence of training on accuracy.

  12. Large-Eddy Simulation of a High Reynolds Number Flow Around a Cylinder Including Aeroacoustic Predictions

    Science.gov (United States)

    Spyropoulos, Evangelos T.; Holmes, Bayard S.

    1997-01-01

    The dynamic subgrid-scale model is employed in large-eddy simulations of flow over a cylinder at a Reynolds number, based on the diameter of the cylinder, of 90,000. The Centric SPECTRUM(trademark) finite element solver is used for the analysis. The far-field sound pressure is calculated from Lighthill-Curle's equation using the computed fluctuating pressure at the surface of the cylinder. The sound pressure level at a location 35 diameters away from the cylinder and at an angle of 90 deg with respect to the wake's downstream axis was found to have a peak value of approximately 110 dB. Slightly smaller peak values were predicted at the 60 deg and 120 deg locations. A grid refinement study suggests that the dynamic model demands mesh refinement beyond that used here.
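
    For reference, the quoted level follows the usual conversion from rms acoustic pressure to sound pressure level (re 20 µPa in air); a minimal sketch.

```python
from math import log10

def spl_db(p_rms, p_ref=20e-6):
    """Sound pressure level in dB re 20 micropascal: SPL = 20*log10(p_rms / p_ref)."""
    return 20.0 * log10(p_rms / p_ref)

# A peak level of ~110 dB corresponds to an rms pressure of roughly 6.3 Pa:
print(spl_db(6.3))
```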

  13. Aerodynamic Effects of High Turbulence Intensity on a Variable-Speed Power-Turbine Blade With Large Incidence and Reynolds Number Variations

    Science.gov (United States)

    Flegel, Ashlie B.; Giel, Paul W.; Welch, Gerard E.

    2014-01-01

    The effects of high inlet turbulence intensity on the aerodynamic performance of a variable-speed power-turbine blade are examined over large incidence and Reynolds number ranges. These results are compared to previous measurements made in a low-turbulence environment. Both the high- and low-turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The current study extends this to LPT-relevant turbulence levels while perhaps sacrificing transitional flow effects. Assessing the effects of turbulence at these large incidence and Reynolds number variations complements the existing database. Downstream total pressure and exit angle data were acquired for 10 incidence angles ranging from +15.8 deg to -51.0 deg. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12×10^5 to 2.12×10^6 and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hotwire located 0.415 axial chord upstream of the blade row. The inlet turbulence levels ranged from 8 to 15 percent for the current study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated as needed for computational inlet boundary conditions. Downstream flow field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7 percent axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction surface separation that was evident at many of the low-Tu conditions has been eliminated. At...

  14. Determination of the critical Shields number for particle erosion in laminar flow

    OpenAIRE

    Ouriemi , Malika; Aussillous , Pascale; Medale , Marc; Peysson , Yannick; Guazzelli , Élisabeth

    2007-01-01

    International audience; We present reproducible experimental measurements for the onset of grain motion in laminar flow and find a constant critical Shields number for particle erosion, θ_c = 0.12 ± 0.03, over a large range of small particle Reynolds numbers: 1.5×10^-5 ≤ Re_p ≤ 0.76. Comparison with previous studies found in the literature is provided.
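
    For orientation, a minimal sketch of how a Shields number is typically computed for viscous flow over a granular bed and compared with the critical value reported above; the definitions and parameter values are generic assumptions, not the paper's exact setup.

```python
def shields_number(tau, rho_p, rho_f, d, g=9.81):
    """Shields number: bed shear stress scaled by the apparent weight of a grain,
    theta = tau / ((rho_p - rho_f) * g * d)."""
    return tau / ((rho_p - rho_f) * g * d)

# Hypothetical values for a viscous channel flow over a granular bed (not the
# experiment's data); the bed shear stress follows from tau = mu * shear_rate.
mu, shear_rate = 1.0e-2, 5.0                 # Pa.s, 1/s
tau = mu * shear_rate                        # bed shear stress, Pa
theta = shields_number(tau, rho_p=2500.0, rho_f=1000.0, d=5.0e-4)
print(theta, theta > 0.12)                   # compare with the critical value 0.12 +/- 0.03
```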

  15. YBYRÁ facilitates comparison of large phylogenetic trees.

    Science.gov (United States)

    Machado, Denis Jacob

    2015-07-01

    The number and size of tree topologies that are being compared by phylogenetic systematists is increasing due to technological advancements in high-throughput DNA sequencing. However, we still lack tools to facilitate comparison among phylogenetic trees with a large number of terminals. The "YBYRÁ" project integrates software solutions for data analysis in phylogenetics. It comprises tools for (1) topological distance calculation based on the number of shared splits or clades, (2) sensitivity analysis and automatic generation of sensitivity plots and (3) clade diagnoses based on different categories of synapomorphies. YBYRÁ also provides (4) an original framework to facilitate the search for potential rogue taxa based on how much they affect average matching split distances (using MSdist). YBYRÁ facilitates comparison of large phylogenetic trees and outperforms competing software in terms of usability and time efficiency, especially for large data sets. The programs that comprise this toolkit are written in Python; hence they do not require installation and have minimal dependencies. The entire project is available under an open-source licence at http://www.ib.usp.br/grant/anfibios/researchSoftware.html.
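
    A minimal sketch of a shared-splits (Robinson-Foulds-style) distance of the kind listed under (1); this illustrates the idea only and is not the YBYRÁ implementation, and the example trees are hypothetical.

```python
def split_distance(splits_a, splits_b):
    """Robinson-Foulds-style distance: number of bipartitions present in one
    tree but not the other. Each split is given as a frozenset of the taxa on
    one side of an internal branch (already normalized so that comparison is
    side-independent)."""
    return len(splits_a ^ splits_b)          # size of the symmetric difference

# Two 5-taxon trees sharing one internal split out of two (hypothetical example):
tree1 = {frozenset({"A", "B"}), frozenset({"A", "B", "C"})}
tree2 = {frozenset({"A", "B"}), frozenset({"C", "D"})}
print(split_distance(tree1, tree2))          # -> 2
```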

  16. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System: Outage-Limited Scenario

    KAUST Repository

    Makki, Behrooz

    2016-03-22

    This paper investigates the performance of point-to-point multiple-input multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. Our results are obtained for different fading conditions, and the effect of the power amplifiers' efficiency and of the feedback error probability on the performance of the MIMO-HARQ systems is analyzed. Then, we use some recent results on the achievable rates of finite block-length codes to analyze the effect of the codeword lengths on the system performance. Moreover, we derive closed-form expressions for the asymptotic performance of the MIMO-HARQ systems when the number of antennas increases. Our analytical and numerical results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 1972-2012 IEEE.

  17. On the chromatic number of pentagon-free graphs of large minimum degree

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2007-01-01

    We prove that, for each fixed real number c > 0, the pentagon-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdős and Simonovits in 1973. A similar result holds for any other fixed odd cycle, except the triangle.

  18. Experimental observation of pulsating instability under acoustic field in downward-propagating flames at large Lewis number

    KAUST Repository

    Yoon, Sung Hwan

    2017-10-12

    According to previous theory, pulsating propagation in a premixed flame only appears when the reduced Lewis number, β(Le-1), is larger than a critical value (Sivashinsky criterion: 4(1 + √3) ≈ 11), where β represents the Zel'dovich number (for general premixed flames, β ≈ 10), which requires a Lewis number Le > 2.1. However, few experimental observations have been reported because the critical reduced Lewis number for the onset of pulsating instability is beyond what can be reached in experiments. Furthermore, the coupling with the unavoidable hydrodynamic instability limits the observation of pure pulsating instabilities in flames. Here, we describe a novel method to observe the pulsating instability. We utilize a thermoacoustic field caused by interaction between heat release and acoustic pressure fluctuations of the downward-propagating premixed flames in a tube to enhance conductive heat loss at the tube wall and radiative heat loss at the open end of the tube, owing to the extended flame residence time caused by the diminished flame surface area, i.e., a flat flame. The thermoacoustic field allowed pure observation of the pulsating motion since the primary acoustic force suppressed the intrinsic hydrodynamic instability resulting from thermal expansion. By employing this method, we have provided new experimental observations of the pulsating instability for premixed flames. The Lewis number (i.e., Le ≈ 1.86) was less than the critical value suggested previously.
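
    A short rearrangement of the criterion quoted above shows where the Le > 2.1 threshold comes from, assuming β ≈ 10 as stated in the record.

```latex
% Rearranging the Sivashinsky criterion for beta ~ 10:
\beta\,(\mathrm{Le}-1) \;>\; 4\bigl(1+\sqrt{3}\bigr) \approx 10.93
\quad\Longrightarrow\quad
\mathrm{Le} \;>\; 1+\frac{4\bigl(1+\sqrt{3}\bigr)}{\beta} \;\approx\; 1+\frac{10.93}{10} \;\approx\; 2.1 .
```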

  19. Neutrino number of the universe

    International Nuclear Information System (INIS)

    Kolb, E.W.

    1981-01-01

    The influence of grand unified theories on the lepton number of the universe is reviewed. A scenario is presented for the generation of a large (>> 1) lepton number and a small (<< 1) baryon number. 15 references

  20. CRISPR transcript processing: a mechanism for generating a large number of small interfering RNAs

    Directory of Open Access Journals (Sweden)

    Djordjevic Marko

    2012-07-01

    Full Text Available Abstract Background: CRISPR/Cas (Clustered Regularly Interspaced Short Palindromic Repeats/CRISPR-associated sequences) is a recently discovered prokaryotic defense system against foreign DNA, including viruses and plasmids. The CRISPR cassette is transcribed as a continuous transcript (pre-crRNA), which is processed by Cas proteins into small RNA molecules (crRNAs) that are responsible for defense against invading viruses. Experiments in E. coli report that overexpression of cas genes generates a large number of crRNAs from only a few pre-crRNAs. Results: We here develop a minimal model of CRISPR processing, which we parameterize based on available experimental data. From the model, we show that the system can generate a large amount of crRNAs based on only a small decrease in the amount of pre-crRNAs. The relationship between the decrease of pre-crRNAs and the increase of crRNAs corresponds to strong linear amplification. Interestingly, this strong amplification crucially depends on fast non-specific degradation of pre-crRNA by an unidentified nuclease. We show that overexpression of cas genes above a certain level does not result in further increase of crRNA, but that this saturation can be relieved if the rate of CRISPR transcription is increased. We furthermore show that a small increase of the CRISPR transcription rate can substantially decrease the extent of cas gene activation necessary to achieve a desired amount of crRNA. Conclusions: The simple mathematical model developed here is able to explain existing experimental observations on CRISPR transcript processing in Escherichia coli. The model shows that a competition between specific pre-crRNA processing and non-specific degradation determines the steady-state levels of crRNA and is responsible for strong linear amplification of crRNAs when cas genes are overexpressed. The model further shows how the disappearance of only a few pre-crRNA molecules normally present in the cell can lead to a large (two...
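
    A minimal kinetic sketch in the spirit of the model described (transcription, Cas-mediated processing, and fast non-specific degradation of pre-crRNA); the equations, rate constants, and spacer count below are illustrative assumptions, not the paper's parameterization.

```python
def simulate(T=2000.0, dt=0.01,
             k_transcribe=1.0,     # pre-crRNA synthesis rate (hypothetical units/min)
             k_process=0.05,       # Cas-mediated specific processing rate (hypothetical)
             k_deg_pre=0.5,        # fast non-specific degradation of pre-crRNA (hypothetical)
             k_deg_cr=0.01,        # slow decay of mature crRNA (hypothetical)
             n_spacers=10):        # crRNAs produced per processed pre-crRNA (hypothetical)
    """Forward-Euler integration of a minimal pre-crRNA -> crRNA scheme."""
    pre, cr = 0.0, 0.0
    for _ in range(int(T / dt)):
        d_pre = k_transcribe - (k_process + k_deg_pre) * pre
        d_cr = n_spacers * k_process * pre - k_deg_cr * cr
        pre += d_pre * dt
        cr += d_cr * dt
    return pre, cr

# Near steady state, a small pre-crRNA pool sustains a much larger crRNA pool,
# because each processed transcript yields many crRNAs (linear amplification).
print(simulate())
```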

  1. Growth of equilibrium structures built from a large number of distinct component types.

    Science.gov (United States)

    Hedges, Lester O; Mannige, Ranjan V; Whitelam, Stephen

    2014-09-14

    We use simple analytic arguments and lattice-based computer simulations to study the growth of structures made from a large number of distinct component types. Components possess 'designed' interactions, chosen to stabilize an equilibrium target structure in which each component type has a defined spatial position, as well as 'undesigned' interactions that allow components to bind in a compositionally-disordered way. We find that high-fidelity growth of the equilibrium target structure can happen in the presence of substantial attractive undesigned interactions, as long as the energy scale of the set of designed interactions is chosen appropriately. This observation may help explain why equilibrium DNA 'brick' structures self-assemble even if undesigned interactions are not suppressed [Ke et al. Science, 338, 1177, (2012)]. We also find that high-fidelity growth of the target structure is most probable when designed interactions are drawn from a distribution that is as narrow as possible. We use this result to suggest how to choose complementary DNA sequences in order to maximize the fidelity of multicomponent self-assembly mediated by DNA. We also comment on the prospect of growing macroscopic structures in this manner.

  2. Security and VO management capabilities in a large-scale Grid operating system

    OpenAIRE

    Aziz, Benjamin; Sporea, Ioana

    2014-01-01

    This paper presents a number of security and VO management capabilities in a large-scale distributed Grid operating system. The capabilities formed the basis of the design and implementation of a number of security and VO management services in the system. The main aim of the paper is to provide some idea of the various functionality cases that need to be considered when designing similar large-scale systems in the future.

  3. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Blejwas, T.E.

    2003-01-01

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost in particular limits its use. As computer models have grown in size (e.g., in the number of degrees of freedom), the advent of computer graphics has made possible very realistic representations of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is the validation of the analytical methods and underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  4. Human behaviour can trigger large carnivore attacks in developed countries.

    Science.gov (United States)

    Penteriani, Vincenzo; Delgado, María del Mar; Pinchera, Francesco; Naves, Javier; Fernández-Gil, Alberto; Kojola, Ilpo; Härkönen, Sauli; Norberg, Harri; Frank, Jens; Fedriani, José María; Sahlén, Veronica; Støen, Ole-Gunnar; Swenson, Jon E; Wabakken, Petter; Pellegrini, Mario; Herrero, Stephen; López-Bao, José Vicente

    2016-02-03

    The media and scientific literature are increasingly reporting an escalation of large carnivore attacks on humans in North America and Europe. Although rare compared to human fatalities by other wildlife, the media often overplay large carnivore attacks on humans, causing increased fear and negative attitudes towards coexisting with and conserving these species. Although large carnivore populations are generally increasing in developed countries, increased numbers are not solely responsible for the observed rise in the number of attacks by large carnivores. Here we show that an increasing number of people are involved in outdoor activities and, when doing so, some people engage in risk-enhancing behaviour that can increase the probability of a risky encounter and a potential attack. About half of the well-documented reported attacks have involved risk-enhancing human behaviours, the most common of which is leaving children unattended. Our study provides unique insight into the causes, and as a result the prevention, of large carnivore attacks on people. Prevention and information that can encourage appropriate human behaviour when sharing the landscape with large carnivores are of paramount importance to reduce both potentially fatal human-carnivore encounters and their consequences to large carnivores.

  5. Set size and culture influence children's attention to number.

    Science.gov (United States)

    Cantrell, Lisa; Kuwabara, Megumi; Smith, Linda B

    2015-03-01

    Much research evidences a system in adults and young children for approximately representing quantity. Here we provide evidence that the bias to attend to discrete quantity versus other dimensions may be mediated by set size and culture. Preschool-age English-speaking children in the United States and Japanese-speaking children in Japan were tested in a match-to-sample task where number was pitted against cumulative surface area in both large and small numerical set comparisons. Results showed that children from both cultures were biased to attend to the number of items for small sets. Large set responses also showed a general attention to number when ratio difficulty was easy. However, relative to the responses for small sets, attention to number decreased for both groups; moreover, both U.S. and Japanese children showed a significant bias to attend to total amount for difficult numerical ratio distances, although Japanese children shifted attention to total area at relatively smaller set sizes than U.S. children. These results add to our growing understanding of how quantity is represented and how such representation is influenced by context--both cultural and perceptual. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Chunking of Large Multidimensional Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Rotem, Doron; Otoo, Ekow J.; Seshadri, Sridhar

    2007-02-28

    Data-intensive scientific computations as well as on-line analytical processing applications are done on very large datasets that are modeled as k-dimensional arrays. The storage organization of such arrays on disks is done by partitioning the large global array into fixed-size hyper-rectangular sub-arrays called chunks or tiles that form the units of data transfer between disk and memory. Typical queries involve the retrieval of sub-arrays in a manner that accesses all chunks that overlap the query results. An important metric of the storage efficiency is the expected number of chunks retrieved over all such queries. The question that immediately arises is "what shapes of array chunks give the minimum expected number of chunks over a query workload?" In this paper we develop two probabilistic mathematical models of the problem and provide exact solutions using steepest descent and geometric programming methods. Experimental results, using synthetic workloads on real-life data sets, show that our chunking is much more efficient than the existing approximate solutions.
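
    A simplified version of the underlying cost model: for a box query placed uniformly at random over chunks of extent c_i, the expected number of chunks touched is roughly the product of (1 + q_i/c_i), minimized subject to a fixed chunk volume. The sketch below brute-forces this simplified objective; it is not the paper's steepest-descent or geometric-programming solution, and the query and chunk sizes are hypothetical.

```python
from itertools import product
from math import prod

def expected_chunks(query, chunk):
    """Expected number of chunks overlapped by a randomly placed box query,
    under the simplifying assumption of continuous uniform alignment:
    E = prod_i (1 + q_i / c_i)."""
    return prod(1.0 + q / c for q, c in zip(query, chunk))

def best_chunk_shape(query, volume, candidates=range(1, 257)):
    """Brute-force search over integer chunk shapes of exactly the given volume
    (only feasible for a small number of dimensions)."""
    best = None
    for shape in product(candidates, repeat=len(query)):
        if prod(shape) != volume:
            continue
        cost = expected_chunks(query, shape)
        if best is None or cost < best[1]:
            best = (shape, cost)
    return best

# Typical 2-D query of 100 x 4 cells, chunks of 256 cells each (hypothetical numbers):
print(best_chunk_shape((100, 4), 256))   # chunks elongated along the long query side win
```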

  7. Light U(1) gauge boson coupled to baryon number

    International Nuclear Information System (INIS)

    Carone, C.D.; Murayama, Hitoshi

    1995-06-01

    The authors discuss the phenomenology of a light U(1) gauge boson, γ_B, that couples only to baryon number. Gauging baryon number at high energies can prevent dangerous baryon-number violating operators that may be generated by Planck-scale physics. However, they assume that at low energies the new U(1) gauge symmetry is spontaneously broken and that the γ_B mass m_B is smaller than m_Z. They show, for m_B < m_Z, that the γ_B coupling α_B can be as large as ∼ 0.1 without conflicting with the current experimental constraints. The authors argue that α_B ∼ 0.1 is large enough to produce visible collider signatures and that evidence for the γ_B could be hidden in existing LEP data. They show that there are realistic models in which mixing between the γ_B and the electroweak gauge bosons occurs only as a radiative effect and does not lead to conflict with precision electroweak measurements. Such mixing may nevertheless provide a leptonic signal for models of this type at an upgraded Tevatron

  8. Q-factorial Gorenstein toric Fano varieties with large Picard number

    DEFF Research Database (Denmark)

    Nill, Benjamin; Øbro, Mikkel

    2010-01-01

    In dimension d, Q-factorial Gorenstein toric Fano varieties with Picard number ρ_X correspond to simplicial reflexive polytopes with ρ_X + d vertices. Casagrande showed that any d-dimensional simplicial reflexive polytope has at most 3d and 3d-1 vertices if d is even and odd, respectively. Moreover, for d even there is up to unimodular equivalence only one such polytope with 3d vertices, corresponding to the product of d/2 copies of a del Pezzo surface of degree six. In this paper we completely classify all d-dimensional simplicial reflexive polytopes having 3d-1 vertices, corresponding to d-dimensional Q-factorial Gorenstein toric Fano varieties with Picard number 2d-1. For d even, there exist three such varieties, with two being singular, while for d > 1 odd there exist precisely two, both being nonsingular toric fiber...

  9. The large lungs of elite swimmers: an increased alveolar number?

    Science.gov (United States)

    Armour, J; Donnelly, P M; Bye, P T

    1993-02-01

    In order to obtain further insight into the mechanisms relating to the large lung volumes of swimmers, tests of mechanical lung function, including lung distensibility (K) and elastic recoil, pulmonary diffusion capacity, and respiratory mouth pressures, together with anthropometric data (height, weight, body surface area, chest width, depth and surface area), were compared in eight elite male swimmers, eight elite male long distance athletes and eight control subjects. The differences in training profiles of each group were also examined. There was no significant difference in height between the subjects, but the swimmers were younger than both the runners and controls, and both the swimmers and controls were heavier than the runners. Of all the training variables, only the mean total distance in kilometers covered per week was significantly greater in the runners. Whether based on: (a) adolescent predicted values; or (b) adult male predicted values, swimmers had significantly increased total lung capacity ((a) 145 +/- 22%, (mean +/- SD) (b) 128 +/- 15%); vital capacity ((a) 146 +/- 24%, (b) 124 +/- 15%); and inspiratory capacity ((a) 155 +/- 33%, (b) 138 +/- 29%), but this was not found in the other two groups. Swimmers also had the largest chest surface area and chest width. Forced expiratory volume in one second (FEV1) was largest in the swimmers ((b) 122 +/- 17%) and FEV1 as a percentage of forced vital capacity (FEV1/FVC)% was similar for the three groups. Pulmonary diffusing capacity (DLCO) was also highest in the swimmers (117 +/- 18%). All of the other indices of lung function, including pulmonary distensibility (K), elastic recoil and diffusion coefficient (KCO), were similar. These findings suggest that swimmers may have achieved greater lung volumes than either runners or control subjects, not because of greater inspiratory muscle strength, or differences in height, fat free mass, alveolar distensibility, age at start of training or sternal length or

  10. Baryon number fluctuations in quasi-particle model

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Ameng [Southeast University Chengxian College, Department of Foundation, Nanjing (China); Luo, Xiaofeng [Central China Normal University, Key Laboratory of Quark and Lepton Physics (MOE), Institute of Particle Physics, Wuhan (China); Zong, Hongshi [Nanjing University, Department of Physics, Nanjing (China); Joint Center for Particle, Nuclear Physics and Cosmology, Nanjing (China); Institute of Theoretical Physics, CAS, State Key Laboratory of Theoretical Physics, Beijing (China)

    2017-04-15

    Baryon number fluctuations are sensitive to the QCD phase transition and the QCD critical point. According to the Feynman rules of finite-temperature field theory, we calculated various order moments and cumulants of the baryon number distributions in the quasi-particle model of the quark-gluon plasma. Furthermore, we compared our results with the experimental data measured by the STAR experiment at RHIC. It is found that the experimental data can be well described by the model for the colliding energies above 30 GeV and show large discrepancies at low energies. This puts a new constraint on the qQGP model and also provides a baseline for the QCD critical point search in heavy-ion collisions at low energies. (orig.)
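
    For orientation, a minimal sketch of the cumulants and volume-cancelling ratios typically compared with STAR data; here they are estimated from simulated samples of a Skellam-like baseline with hypothetical parameters, whereas the paper computes them from the quasi-particle model.

```python
import numpy as np

def baryon_cumulants(samples):
    """First four cumulants of a net-baryon-number distribution from samples:
    C1 = mean, C2 = variance, C3 = third central moment,
    C4 = fourth central moment - 3 * C2^2."""
    x = np.asarray(samples, dtype=float)
    d = x - x.mean()
    c2 = (d**2).mean()
    c3 = (d**3).mean()
    c4 = (d**4).mean() - 3.0 * c2**2
    return x.mean(), c2, c3, c4

# Toy example: difference of two Poisson variables (a Skellam-like baseline often
# used for net-baryon fluctuations); the mean multiplicities are hypothetical.
rng = np.random.default_rng(0)
nb, nbbar = rng.poisson(5.0, 100000), rng.poisson(3.0, 100000)
c1, c2, c3, c4 = baryon_cumulants(nb - nbbar)
print(c1, c2 / c1, c3 / c2, c4 / c2)   # ratios cancel the (volume-dependent) overall scale
```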

  11. Large-scale analysis of phosphorylation site occupancy in eukaryotic proteins

    DEFF Research Database (Denmark)

    Rao, R Shyama Prasad; Møller, Ian Max

    2012-01-01

    Many recent high-throughput technologies have enabled large-scale discoveries of new phosphorylation sites and phosphoproteins. Although they have provided a number of insights into protein phosphorylation and the related processes, an inclusive analysis of the nature of phosphorylated sites in proteins is currently lacking. We have therefore analyzed the occurrence and occupancy of phosphorylated sites (~ 100,281) in a large set of eukaryotic proteins (~ 22,995). Phosphorylation probability was found to be much higher at both termini of protein sequences, and this is much pronounced ... maximum randomness. An analysis of phosphorylation motifs indicated that just 40 motifs and a much lower number of associated kinases might account for nearly 50% of the known phosphorylations in eukaryotic proteins. Our results provide a broad picture of the phosphorylation sites in eukaryotic proteins.

  12. Turbulent flows at very large Reynolds numbers: new lessons learned

    International Nuclear Information System (INIS)

    Barenblatt, G I; Prostokishin, V M; Chorin, A J

    2014-01-01

    The universal (Reynolds-number-independent) von Kármán–Prandtl logarithmic law for the velocity distribution in the basic intermediate region of a turbulent shear flow is generally considered to be one of the fundamental laws of engineering science and is taught universally in fluid mechanics and hydraulics courses. We show here that this law is based on an assumption that cannot be considered to be correct and which does not correspond to experiment. Nor is Landau's derivation of this law quite correct. In this paper, an alternative scaling law explicitly incorporating the influence of the Reynolds number is discussed, as is the corresponding drag law. The study uses the concept of intermediate asymptotics and that of incomplete similarity in the similarity parameter. Yakov Borisovich Zeldovich played an outstanding role in the development of these ideas. This work is a tribute to his glowing memory. (100th anniversary of the birth of Ya B Zeldovich)
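
    For reference, the two competing descriptions discussed in the record, written in the usual wall variables u+ = u/u_* and y+ = u_* y/ν; the coefficients of the power law are left symbolic, since their specific Reynolds-number dependence is not quoted in this record.

```latex
% Classical universal log law (kappa: von Karman constant, B: additive constant):
u^{+} \;=\; \frac{1}{\kappa}\,\ln y^{+} \;+\; B ,
\qquad u^{+}=\frac{u}{u_*}, \quad y^{+}=\frac{u_* y}{\nu}.

% Reynolds-number-dependent scaling law of incomplete-similarity type
% (C and alpha left symbolic; alpha shrinks slowly as Re grows):
u^{+} \;=\; C(\mathrm{Re})\,\bigl(y^{+}\bigr)^{\alpha(\mathrm{Re})},
\qquad \alpha(\mathrm{Re}) \propto \frac{1}{\ln \mathrm{Re}} .
```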

  13. A very large number of GABAergic neurons are activated in the tuberal hypothalamus during paradoxical (REM) sleep hypersomnia.

    Directory of Open Access Journals (Sweden)

    Emilie Sapin

    Full Text Available We recently discovered, using Fos immunostaining, that the tuberal and mammillary hypothalamus contain a massive population of neurons specifically activated during paradoxical sleep (PS) hypersomnia. We further showed that some of the activated neurons of the tuberal hypothalamus express the melanin concentrating hormone (MCH) neuropeptide and that icv injection of MCH induces a strong increase in PS quantity. However, the chemical nature of the majority of the neurons activated during PS had not been characterized. To determine whether these neurons are GABAergic, we combined in situ hybridization of GAD67 mRNA with immunohistochemical detection of Fos in control, PS-deprived and PS-hypersomniac rats. We found that 74% of the very large population of Fos-labeled neurons located in the tuberal hypothalamus after PS hypersomnia were GAD-positive. We further demonstrated, combining MCH immunohistochemistry and GAD67 in situ hybridization, that 85% of the MCH neurons were also GAD-positive. Finally, based on the number of Fos-ir/GAD+, Fos-ir/MCH+, and GAD+/MCH+ double-labeled neurons counted from three sets of double-staining, we uncovered that around 80% of the large number of the Fos-ir/GAD+ neurons located in the tuberal hypothalamus after PS hypersomnia do not contain MCH. Based on these and previous results, we propose that the non-MCH Fos/GABAergic neuronal population could be involved in PS induction and maintenance while the Fos/MCH/GABAergic neurons could be involved in the homeostatic regulation of PS. Further investigations will be needed to corroborate this original hypothesis.

  14. Systematic control of large computer programs

    International Nuclear Information System (INIS)

    Goedbloed, J.P.; Klieb, L.

    1986-07-01

    A package of CCL, UPDATE, and FORTRAN procedures is described which facilitates the systematic control and development of large scientific computer programs. The package provides a general tool box for this purpose which contains many conveniences for the systematic administration of files, editing, reformatting of line printer output files, etc. In addition, a small number of procedures is devoted to the problem of structured development of a large computer program which is used by a group of scientists. The essence of the method is contained in three procedures N, R, and X for the creation of a new UPDATE program library, its revision, and execution, respectively, and a procedure REVISE which provides a joint editor-UPDATE session that combines the advantages of the two systems, viz. speed and rigor. (Auth.)

  15. Investor reaction to strategic emphasis on earnings numbers: An empirical study

    Directory of Open Access Journals (Sweden)

    M. Shibley Sadique

    2013-10-01

    Full Text Available We analyze the earnings information and stock prices of S&P500 firms and find that investors following S&P500 stocks (i) respond more to pro forma earnings than to GAAP earnings, (ii) respond to an emphasis on pro forma earnings, and (iii) are fixated on pro forma earnings. We provide the first direct evidence that a strategic emphasis on earnings numbers may affect return volatility. Further, our results do not support the argument that a larger investor response to Street earnings might be driven by large differences between the Street numbers and GAAP numbers.

  16. Slepian simulation of distributions of plastic displacements of earthquake excited shear frames with a large number of stories

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Ditlevsen, Ove

    2005-01-01

    The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elastically within the yield limits and ideally plastically outside these, without accumulating eigenstresses. Within the elastic domain the frame is modeled as a linearly damped oscillator. The white noise excitation acts on the mass of the first floor making the movement of the elastic bottom floors simulate a ground...

  17. Large-D gravity and low-D strings.

    Science.gov (United States)

    Emparan, Roberto; Grumiller, Daniel; Tanabe, Kentaro

    2013-06-21

    We show that in the limit of a large number of dimensions a wide class of nonextremal neutral black holes has a universal near-horizon limit. The limiting geometry is the two-dimensional black hole of string theory with a two-dimensional target space. Its conformal symmetry explains the properties of massless scalars found recently in the large-D limit. For black branes with string charges, the near-horizon geometry is that of the three-dimensional black strings of Horne and Horowitz. The analogies between the α' expansion in string theory and the large-D expansion in gravity suggest a possible effective string description of the large-D limit of black holes. We comment on applications to several subjects, in particular to the problem of critical collapse.

  18. Hupa Numbers.

    Science.gov (United States)

    Bennett, Ruth, Ed.; And Others

    An introduction to the Hupa number system is provided in this workbook, one in a series of numerous materials developed to promote the use of the Hupa language. The book is written in English with Hupa terms used only for the names of numbers. The opening pages present the numbers from 1-10, giving the numeral, the Hupa word, the English word, and…

  19. Optimization of MIMO Systems Capacity Using Large Random Matrix Methods

    Directory of Open Access Journals (Sweden)

    Philippe Loubaton

    2012-11-01

    Full Text Available This paper provides a comprehensive introduction to large random matrix methods for input covariance matrix optimization of the mutual information of MIMO systems. It is first recalled informally how large-system approximations of mutual information can be derived. Then, the optimization of the approximations is discussed, and important methodological points that are not necessarily covered by the existing literature are addressed, including the strict concavity of the approximation, the structure of the argument of its maximum, the accuracy of the large-system approach with regard to the number of antennas, and the justification of iterative water-filling optimization algorithms. While the existing papers have developed methods adapted to a specific model, this contribution tries to provide a unified view of the large-system approximation approach.
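
    A minimal sketch of the single-user water-filling step that such iterative algorithms repeat, allocating power over channel eigenvalues by bisecting on the water level; the channel gains and total power below are hypothetical.

```python
import numpy as np

def water_filling(gains, total_power, tol=1e-9):
    """Allocate power p_i = max(0, mu - 1/g_i) with sum p_i = P by bisecting on mu."""
    g = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + 1.0 / g.min()      # water level bracket
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / g).sum() > total_power:
            hi = mu
        else:
            lo = mu
    p = np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)
    return p, np.log2(1.0 + g * p).sum()           # powers and resulting capacity (bit/s/Hz)

# Eigenvalues of H^H H for a hypothetical 4x4 channel, total power P = 10:
powers, capacity = water_filling([2.0, 1.0, 0.5, 0.1], 10.0)
print(powers, capacity)
```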

  20. The Limits and Possibilities of International Large-Scale Assessments. Education Policy Brief. Volume 9, Number 2, Spring 2011

    Science.gov (United States)

    Rutkowski, David J.; Prusinski, Ellen L.

    2011-01-01

    The staff of the Center for Evaluation & Education Policy (CEEP) at Indiana University is often asked about how international large-scale assessments influence U.S. educational policy. This policy brief is designed to provide answers to some of the most frequently asked questions encountered by CEEP researchers concerning the three most popular…

  1. PROVIDING WOMEN, KEPT MEN

    Science.gov (United States)

    Mojola, Sanyu A

    2014-01-01

    This paper draws on ethnographic and interview based fieldwork to explore accounts of intimate relationships between widowed women and poor young men that emerged in the wake of economic crisis and a devastating HIV epidemic among the Luo ethnic group in Western Kenya. I show how the cooptation of widow inheritance practices in the wake of an overwhelming number of widows as well as economic crisis resulted in widows becoming providing women and poor young men becoming kept men. I illustrate how widows in this setting, by performing a set of practices central to what it meant to be a man in this society – pursuing and providing for their partners - were effectively doing masculinity. I will also show how young men, rather than being feminized by being kept, deployed other sets of practices to prove their masculinity and live in a manner congruent with cultural ideals. I argue that ultimately, women’s practice of masculinity in large part seemed to serve patriarchal ends. It not only facilitated the fulfillment of patriarchal expectations of femininity – to being inherited – but also served, in the end, to provide a material base for young men’s deployment of legitimizing and culturally valued sets of masculine practice. PMID:25489121

  2. Interaction between numbers and size during visual search

    OpenAIRE

    Krause, Florian; Bekkering, Harold; Pratt, Jay; Lindemann, Oliver

    2016-01-01

    The current study investigates an interaction between numbers and physical size (i.e. size congruity) in visual search. In three experiments, participants had to detect a physically large (or small) target item among physically small (or large) distractors in a search task comprising single-digit numbers. The relative numerical size of the digits was varied, such that the target item was either among the numerically large or small numbers in the search display and the relation between numeric...

  3. Photon number projection using non-number-resolving detectors

    International Nuclear Information System (INIS)

    Rohde, Peter P; Webb, James G; Huntington, Elanor H; Ralph, Timothy C

    2007-01-01

    Number-resolving photo-detection is necessary for many quantum optics experiments, especially in the application of entangled state preparation. Several schemes have been proposed for approximating number-resolving photo-detection using non-number-resolving detectors. Such techniques include multi-port detection and time-division multiplexing. We provide a detailed analysis and comparison of different number-resolving detection schemes, with a view to creating a useful reference for experimentalists. We show that the ideal architecture for projective measurements is a function of the detector's dark count and efficiency parameters. We also describe a process for selecting an appropriate topology given actual experimental component parameters
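
    A small illustration of why multiplexed schemes only approximate number resolution: with ideal, lossless, uniform splitting of n photons over N click/no-click detectors, the count is faithful only when all photons land on distinct detectors. The formula below assumes that idealized model (unit efficiency, no dark counts), which is a simplification of the schemes analyzed in the paper.

```python
from math import perm

def prob_faithful(n_photons, n_detectors):
    """Probability that n photons hit n distinct non-number-resolving detectors
    under ideal uniform splitting: P = N! / ((N - n)! * N^n)."""
    n, N = n_photons, n_detectors
    if n > N:
        return 0.0
    return perm(N, n) / N**n

# How many ports are needed before 4 photons are counted correctly most of the time?
for N in (4, 8, 16, 32):
    print(N, prob_faithful(4, N))
```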

  4. Number of core samples: Mean concentrations and confidence intervals

    International Nuclear Information System (INIS)

    Jensen, L.; Cromar, R.D.; Wilmarth, S.R.; Heasler, P.G.

    1995-01-01

    This document provides estimates of how well the mean concentration of analytes is known as a function of the number of core samples, composite samples, and replicate analyses. The estimates are based upon core composite data from nine recently sampled single-shell tanks. The results can be used when determining the number of core samples needed to ''characterize'' the waste from similar single-shell tanks. A standard way of expressing uncertainty in the estimate of a mean is with a 95% confidence interval (CI). The authors investigate how the width of a 95% CI on the mean concentration decreases as the number of observations increases. Specifically, the tables and figures show how the relative half-width (RHW) of a 95% CI decreases as the number of core samples increases. The RHW of a CI is a unit-less measure of uncertainty. The general conclusions are as follows: (1) the RHW decreases dramatically as the number of core samples is increased, whereas the decrease is much smaller when the number of composited samples or the number of replicate analyses is increased; (2) if the mean concentration of an analyte needs to be estimated with a small RHW, then a large number of core samples is required. The estimated numbers of core samples given in the tables and figures were determined by specifying different sizes of the RHW. Four nominal sizes were examined: 10%, 25%, 50%, and 100% of the observed mean concentration. For a majority of analytes the number of core samples required to achieve an accuracy within 10% of the mean concentration is extremely large. In many cases, however, two or three core samples are sufficient to achieve a RHW of approximately 50 to 100%.
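
    A minimal sketch of the tabulated quantity: the relative half-width of a 95% confidence interval on the mean as a function of the number of core samples, evaluated here for a hypothetical relative standard deviation between cores (the tank data themselves are not reproduced).

```python
from scipy.stats import t

def relative_half_width(n_cores, rsd):
    """Relative half-width of a 95% CI on the mean:
    RHW = t_{0.975, n-1} * (s / sqrt(n)) / xbar, written in terms of the
    relative standard deviation rsd = s / xbar."""
    return t.ppf(0.975, n_cores - 1) * rsd / n_cores**0.5

# With a hypothetical 50% relative standard deviation between cores:
for n in (2, 3, 5, 10, 30):
    print(n, round(relative_half_width(n, 0.5), 2))
```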

  5. Evaluation of list-mode ordered subset expectation maximization image reconstruction for pixelated solid-state compton gamma camera with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 106). For Compton camera, especially with a large number of readout channels, image reconstruction presents a big challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources and with a distance between the field of view and the first detector plane equal to 100 mm which corresponds to a realistic nuclear medicine environment.
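
    A minimal sketch of the EM update underlying (LM-)OSEM, shown in its basic MLEM form (OSEM cycles the same update over subsets of the measured events); the toy system matrix and counts are hypothetical and unrelated to the VIP camera geometry.

```python
import numpy as np

def mlem(A, counts, n_iter=50):
    """Maximum-likelihood EM for emission tomography:
    lambda_j <- lambda_j / s_j * sum_i A_ij * y_i / (A lambda)_i,
    with sensitivity s_j = sum_i A_ij."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(counts, dtype=float)
    lam = np.ones(A.shape[1])              # flat initial image
    sens = A.sum(axis=0)
    for _ in range(n_iter):
        proj = A @ lam                     # forward projection
        lam *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens
    return lam

# Toy 3-pixel "image" observed through a hypothetical 4-row system matrix:
A = [[0.8, 0.1, 0.1],
     [0.1, 0.8, 0.1],
     [0.1, 0.1, 0.8],
     [0.3, 0.3, 0.4]]
print(mlem(A, counts=[80, 15, 25, 35]))
```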

  6. Large spin systematics in CFT

    Energy Technology Data Exchange (ETDEWEB)

    Alday, Luis F.; Bissi, Agnese; Łukowski, Tomasz [Mathematical Institute, University of Oxford,Andrew Wiles Building, Radcliffe Observatory Quarter,Woodstock Road, Oxford, OX2 6GG (United Kingdom)

    2015-11-16

    Using conformal field theory (CFT) arguments we derive an infinite number of constraints on the large spin expansion of the anomalous dimensions and structure constants of higher spin operators. These arguments rely only on analyticity, unitarity, crossing-symmetry and the structure of the conformal partial wave expansion. We obtain results both for perturbative CFT, to all orders in the perturbation parameter, and non-perturbatively. For the case of conformal gauge theories this provides a proof of the reciprocity principle to all orders in perturbation theory and provides a new "reciprocity" principle for structure constants. We argue that these results extend also to non-conformal theories.

  7. Large spin systematics in CFT

    International Nuclear Information System (INIS)

    Alday, Luis F.; Bissi, Agnese; Łukowski, Tomasz

    2015-01-01

    Using conformal field theory (CFT) arguments we derive an infinite number of constraints on the large spin expansion of the anomalous dimensions and structure constants of higher spin operators. These arguments rely only on analyticity, unitarity, crossing-symmetry and the structure of the conformal partial wave expansion. We obtain results both for perturbative CFT, to all orders in the perturbation parameter, and non-perturbatively. For the case of conformal gauge theories this provides a proof of the reciprocity principle to all orders in perturbation theory and provides a new "reciprocity" principle for structure constants. We argue that these results extend also to non-conformal theories.

  8. A Condition Number for Non-Rigid Shape Matching

    KAUST Repository

    Ovsjanikov, Maks

    2011-08-01

    © 2011 The Author(s). Despite the large amount of work devoted in recent years to the problem of non-rigid shape matching, practical methods that can successfully be used for arbitrary pairs of shapes remain elusive. In this paper, we study the hardness of the problem of shape matching, and introduce the notion of the shape condition number, which captures the intuition that some shapes are inherently more difficult to match against than others. In particular, we make a connection between the symmetry of a given shape and the stability of any method used to match it while optimizing a given distortion measure. We analyze two commonly used classes of methods in deformable shape matching, and show that the stability of both types of techniques can be captured by the appropriate notion of a condition number. We also provide a practical way to estimate the shape condition number and show how it can be used to guide the selection of landmark correspondences between shapes. Thus we shed some light on the reasons why general shape matching remains difficult and provide a way to detect and mitigate such difficulties in practice.

  9. The SNARC effect in two dimensions: Evidence for a frontoparallel mental number plane.

    Science.gov (United States)

    Hesse, Philipp Nikolaus; Bremmer, Frank

    2017-01-01

    The existence of an association between numbers and space is known for a long time. The most prominent demonstration of this relationship is the spatial numerical association of response codes (SNARC) effect, describing the fact that participants' reaction times are shorter with the left hand for small numbers and with the right hand for large numbers, when being asked to judge the parity of a number (Dehaene et al., J. Exp. Psychol., 122, 371-396, 1993). The SNARC effect is commonly seen as support for the concept of a mental number line, i.e. a mentally conceived line where small numbers are represented more on the left and large numbers are represented more on the right. The SNARC effect has been demonstrated for all three cardinal axes and recently a transverse SNARC plane has been reported (Chen et al., Exp. Brain Res., 233(5), 1519-1528, 2015). Here, by employing saccadic responses induced by auditory or visual stimuli, we measured the SNARC effect within the same subjects along the horizontal (HM) and vertical meridian (VM) and along the two interspersed diagonals. We found a SNARC effect along HM and VM, which allowed predicting the occurrence of a SNARC effect along the two diagonals by means of linear regression. Importantly, significant differences in SNARC strength were found between modalities. Our results suggest the existence of a frontoparallel mental number plane, where small numbers are represented left and down, while large numbers are represented right and up. Together with the recently described transverse mental number plane our findings provide further evidence for the existence of a three-dimensional mental number space. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. The large-s field-reversed configuration experiment

    International Nuclear Information System (INIS)

    Hoffman, A.L.; Carey, L.N.; Crawford, E.A.; Harding, D.G.; DeHart, T.E.; McDonald, K.F.; McNeil, J.L.; Milroy, R.D.; Slough, J.T.; Maqueda, R.; Wurden, G.A.

    1993-01-01

    The Large-s Experiment (LSX) was built to study the formation and equilibrium properties of field-reversed configurations (FRCs) as the scale size increases. The dynamic, field-reversed theta-pinch method of FRC creation produces axial and azimuthal deformations and makes formation difficult, especially in large devices with large s (number of internal gyroradii) where it is difficult to achieve initial plasma uniformity. However, with the proper technique, these formation distortions can be minimized and are then observed to decay with time. This suggests that the basic stability and robustness of FRCs formed, and in some cases translated, in smaller devices may also characterize larger FRCs. Elaborate formation controls were included on LSX to provide the initial uniformity and symmetry necessary to minimize formation disturbances, and stable FRCs could be formed up to the design goal of s = 8. For s ≤ 4, the formation distortions decayed away completely, resulting in symmetric equilibrium FRCs with record confinement times up to 0.5 ms, agreeing with previous empirical scaling laws (τ∝sR). Above s = 4, reasonably long-lived (up to 0.3 ms) configurations could still be formed, but the initial formation distortions were so large that they never completely decayed away, and the equilibrium confinement was degraded from the empirical expectations. The LSX was only operational for 1 yr, and it is not known whether s = 4 represents a fundamental limit for good confinement in simple (no ion beam stabilization) FRCs or whether it simply reflects a limit of present formation technology. Ideally, s could be increased through flux buildup from neutral beams. Since the addition of kinetic or beam ions will probably be desirable for heating, sustainment, and further stabilization of magnetohydrodynamic modes at reactor-level s values, neutral beam injection is the next logical step in FRC development. 24 refs., 21 figs., 2 tabs

  11. Large Scale Self-Organizing Information Distribution System

    National Research Council Canada - National Science Library

    Low, Steven

    2005-01-01

    This project investigates issues in "large-scale" networks. Here "large-scale" refers to networks with a large number of high-capacity nodes and transmission links, shared by a large number of users...

  12. Act on Numbers: Numerical Magnitude Influences Selection and Kinematics of Finger Movement

    Directory of Open Access Journals (Sweden)

    Rosa Rugani

    2017-08-01

    In the past decade hand kinematics has been reliably adopted for investigating cognitive processes and disentangling debated topics. One of the most controversial issues in the numerical cognition literature regards the origin – cultural vs. genetically driven – of the mental number line (MNL), oriented from left (small numbers) to right (large numbers). To date, the majority of studies have investigated this effect by means of response times, whereas studies considering more culturally unbiased measures such as kinematic parameters are rare. Here, we present a new paradigm that combines a "free response" task with the kinematic analysis of movement. Participants were seated in front of two little soccer goals placed on a table, one on the left and one on the right side. They were presented with left- or right-directed arrows and were instructed to kick a small ball with their right index finger toward the goal indicated by the arrow. In a few test trials participants were also presented with a small (2) or a large (8) number, and they were allowed to choose the kicking direction. Participants performed more left responses with the small number and more right responses with the large number. The whole kicking movement was segmented into two temporal phases in order to allow a fine-grained analysis of hand kinematics. The Kick Preparation and Kick Finalization phases were selected on the basis of peak trajectory deviation from the virtual midline between the two goals. Results show an effect of both small and large numbers on action execution timing. Participants were faster to finalize the action when responding to small numbers toward the left and to large numbers toward the right. Here, we provide the first experimental demonstration of how numerical processing affects action execution in a new and not-overlearned context. The employment of this innovative and unbiased paradigm will make it possible to disentangle the roles of nature and culture

  13. Research and Teaching: Exploring the Use of an Online Quiz Game to Provide Formative Feedback in a Large-Enrollment, Introductory Biochemistry Course

    Science.gov (United States)

    Milner, Rachel; Parrish, Jonathan; Wright, Adrienne; Gnarpe, Judy; Keenan, Louanne

    2015-01-01

    In a large-enrollment, introductory biochemistry course for nonmajors, the authors provide students with formative feedback through practice questions in PDF format. Recently, they investigated possible benefits of providing the practice questions via an online game (Brainspan). Participants were randomly assigned to either the online game group…

  14. Fluctuations of nuclear cross sections in the region of strong overlapping resonances and at large number of open channels

    International Nuclear Information System (INIS)

    Kun, S.Yu.

    1985-01-01

    On the basis of the symmetrized Simonius representation of the S matrix, the statistical properties of its fluctuating component in the presence of direct reactions are investigated. The case is considered where the resonance levels are strongly overlapping and there are many open channels, assuming that the compound-nucleus cross sections which couple different channels are equal. It is shown that, using the averaged unitarity condition on the real energy axis, one can eliminate both resonance-resonance and channel-channel correlations from the partial transition amplitudes. As a result, we derive the basic results of the Ericson fluctuation theory of nuclear cross sections, independently of the relation between the resonance overlapping and the number of open channels, and the validity of the Hauser-Feshbach model is established. If the number of open channels is large, the time of uniform population of compound-nucleus configurations, for an open excited nuclear system, is much smaller than the Poincaré time. The lifetime of the compound nucleus is also discussed

  15. Low-Reynolds Number Effects in Ventilated Rooms

    DEFF Research Database (Denmark)

    Davidson, Lars; Nielsen, Peter V.; Topp, Claus

    In the present study, we use Large Eddy Simulation (LES), which is a suitable method for simulating the flow in ventilated rooms at low Reynolds number.

  16. Finite-Reynolds-number effects in turbulence using logarithmic expansions

    International Nuclear Information System (INIS)

    Sreenivasan, K.R.; Bershadskii, A.

    2006-12-01

    Experimental or numerical data in turbulence are invariably obtained at finite Reynolds numbers whereas theories of turbulence correspond to infinitely large Reynolds numbers. A proper merger of the two approaches is possible only if corrections for finite Reynolds numbers can be quantified. This paper heuristically considers examples in two classes of finite-Reynolds-number effects. Expansions in terms of logarithms of appropriate variables are shown to yield results in agreement with experimental and numerical data in the following instances: the third-order structure function in isotropic turbulence, the mixed-order structure function for the passive scalar and the Reynolds shear stress around its maximum point. Results suggestive of expansions in terms of the inverse logarithm of the Reynolds number, also motivated by experimental data, concern the tendency for turbulent structures to cluster along a line of observation and (more speculatively) for the longitudinal velocity derivative to become singular at some finite Reynolds number. We suggest an elementary hydrodynamical process that may provide a physical basis for the expansions considered here, but note that the formal justification remains tantalizingly unclear. (author)

  17. Gauge transformations with fractional winding numbers

    International Nuclear Information System (INIS)

    Abouelsaood, A.

    1996-01-01

    The role which gauge transformations of noninteger winding numbers might play in non-Abelian gauge theories is studied. The phase factor acquired by the semiclassical physical states in an arbitrary background gauge field when they undergo a gauge transformation of an arbitrary real winding number is calculated in the path integral formalism, assuming that a θFF term added to the Lagrangian plays the same role as in the case of integer winding numbers. Requiring that these states provide a representation of the group of "large" gauge transformations, a condition on the allowed backgrounds is obtained. It is shown that this representability condition is only satisfied in the monopole sector of a spontaneously broken gauge theory, but not in the vacuum sector of an unbroken or a spontaneously broken non-Abelian gauge theory. It is further shown that the recent proof of the vanishing of the θ parameter when gauge transformations of arbitrary fractional winding numbers are allowed breaks down in precisely those cases where the representability condition is obeyed, because certain gauge transformations needed for the proof, and whose existence is assumed, are either spontaneously broken or cannot be globally defined as a result of a topological obstruction. copyright 1996 The American Physical Society

  18. Outpatient provider concentration and commercial colonoscopy prices.

    Science.gov (United States)

    Pozen, Alexis

    2015-01-01

    The objective was to evaluate the magnitude of various contributors to outpatient commercial colonoscopy prices, including market- and provider-level factors, especially market share. We used adjudicated fee-for-service facility claims from a large commercial insurer for colonoscopies occurring in a hospital outpatient department or ambulatory surgery center from October 2005 to December 2012. Claims were matched to provider- and market-level data. Linear fixed-effects regressions of negotiated colonoscopy price were run on provider, system, and market characteristics. Markets were defined as counties. There were 178,433 claims from 169 providers (104 systems). The mean system market share was 76% (SD = 0.34) and the mean real (deflated) price was US$1363 (SD = 374), ranging from US$169 to US$2748. For every percentage point increase in a system or individual facility's bed share, relative price increased by 2 to 4 percentage points; this result was stable across a number of specifications. Market population and price were also consistently positively related, though this relation was small in magnitude. No other factor explained price as strongly as market share. Price variation for colonoscopy was driven primarily by market share, which is of particular concern as the number of mergers increases in the wake of the recession and the Affordable Care Act. Whether variation is justified by better quality care requires further research to determine whether quality is subsumed in prices. © The Author(s) 2015.
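
    As a rough illustration of the kind of model described above, a linear regression of negotiated price on provider, system, and market characteristics with market (county) fixed effects, the Python sketch below uses the statsmodels formula API. The column names (price, bed_share, log_population, county) and the file name are hypothetical placeholders, not variables from the study.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical claims-level table: one row per adjudicated colonoscopy claim.
    df = pd.read_csv("colonoscopy_claims.csv")  # columns: price, bed_share, log_population, county

    # Linear regression of negotiated price on a system's bed share and market size,
    # with county fixed effects absorbed via categorical dummies.
    model = smf.ols("price ~ bed_share + log_population + C(county)", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["county"]}  # cluster standard errors by market
    )
    print(model.params[["bed_share", "log_population"]])
    ```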

  19. How much can the number of jabiru stork (Ciconiidae) nests vary due to change of flood extension in a large Neotropical floodplain?

    Directory of Open Access Journals (Sweden)

    Guilherme Mourão

    2010-10-01

    The jabiru stork, Jabiru mycteria (Lichtenstein, 1819), a large, long-legged wading bird occurring in lowland wetlands from southern Mexico to northern Argentina, is considered endangered in a large portion of its distribution range. We conducted aerial surveys to estimate the number of active jabiru nests in the Brazilian Pantanal (140,000 km²) in September of 1991-1993, 1998, 2000-2002, and 2004. Corrected densities of active nests were regressed against the annual hydrologic index (AHI), an index of flood extension in the Pantanal based on the water level of the Paraguay River. Annual nest density was a non-linear function of the AHI, modeled by the equation 6.5 × 10⁻⁸ · AHI^1.99 (corrected r² = 0.72, n = 7). We applied this model to the AHI between 1900 and 2004. The results indicate that the number of jabiru nests may have varied from about 220 in 1971 to more than 23,000 in the nesting season of 1921, and the estimates for our study period (1991 to 2004) averaged about 12,400 nests. Our model indicates that inter-annual variations in flooding extent can determine dramatic changes in the number of active jabiru nests. Since the jabiru stork responds negatively to drier conditions in the Pantanal, direct human-induced changes in the hydrological patterns, as well as the effects of global climate change, may strongly jeopardize the population in the region.
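
    The fitted relationship above is a simple power law, so translating an AHI value into an expected nest density is a one-line computation. A minimal Python sketch follows; only the coefficients 6.5e-8 and 1.99 come from the abstract, while the example AHI value is a hypothetical placeholder.

    ```python
    def nest_density(ahi: float) -> float:
        """Expected active-nest density as a power law of the annual hydrologic
        index (AHI), using the coefficients from the abstract: 6.5e-8 * AHI**1.99."""
        return 6.5e-8 * ahi ** 1.99

    # With an exponent close to 2, doubling the flood-extension index roughly
    # quadruples the expected nest density, regardless of the (unspecified) AHI scale.
    ahi = 1000.0  # hypothetical AHI value, for illustration only
    print(nest_density(2 * ahi) / nest_density(ahi))  # ~3.97
    ```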

  1. Number to finger mapping is topological.

    NARCIS (Netherlands)

    Plaisier, M.A.; Smeets, J.B.J.

    2011-01-01

    It has been shown that humans associate fingers with numbers because finger counting strategies interact with numerical judgements. At the same time, there is evidence that there is a relation between number magnitude and space as small to large numbers seem to be represented from left to right. In

  2. Asymptotic numbers: Pt.1

    International Nuclear Information System (INIS)

    Todorov, T.D.

    1980-01-01

    The set of asymptotic numbers A as a system of generalized numbers including the system of real numbers R, as well as infinitely small (infinitesimals) and infinitely large numbers, is introduced. The detailed algebraic properties of A, which are unusual as compared with the known algebraic structures, are studied. It is proved that the set of asymptotic numbers A cannot be isomorphically embedded as a subspace in any group, ring or field, but some particular subsets of asymptotic numbers are shown to be groups, rings, and fields. The algebraic operations, additive and multiplicative forms, and the algebraic properties are constructed in an appropriate way. It is shown that the asymptotic numbers give rise to a new type of generalized functions, quite analogous to the distributions of Schwartz, allowing, however, the operation of multiplication. A possible application of these functions to quantum theory is discussed

  3. The algebras of large N matrix mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Halpern, M.B.; Schwartz, C.

    1999-09-16

    Extending early work, we formulate the large N matrix mechanics of general bosonic, fermionic and supersymmetric matrix models, including Matrix theory: The Hamiltonian framework of large N matrix mechanics provides a natural setting in which to study the algebras of the large N limit, including (reduced) Lie algebras, (reduced) supersymmetry algebras and free algebras. We find in particular a broad array of new free algebras which we call symmetric Cuntz algebras, interacting symmetric Cuntz algebras, symmetric Bose/Fermi/Cuntz algebras and symmetric Cuntz superalgebras, and we discuss the role of these algebras in solving the large N theory. Most important, the interacting Cuntz algebras are associated to a set of new (hidden!) local quantities which are generically conserved only at large N. A number of other new large N phenomena are also observed, including the intrinsic nonlocality of the (reduced) trace class operators of the theory and a closely related large N field identification phenomenon which is associated to another set (this time nonlocal) of new conserved quantities at large N.

  4. Small-scale dynamo at low magnetic Prandtl numbers

    Science.gov (United States)

    Schober, Jennifer; Schleicher, Dominik; Bovino, Stefano; Klessen, Ralf S.

    2012-12-01

    The present-day Universe is highly magnetized, even though the first magnetic seed fields were most probably extremely weak. To explain the growth of the magnetic field strength over many orders of magnitude, fast amplification processes need to operate. The most efficient mechanism known today is the small-scale dynamo, which converts turbulent kinetic energy into magnetic energy, leading to an exponential growth of the magnetic field. The efficiency of the dynamo depends on the type of turbulence indicated by the slope of the turbulence spectrum v(ℓ) ∝ ℓ^ϑ, where v(ℓ) is the eddy velocity at a scale ℓ. We explore turbulent spectra ranging from incompressible Kolmogorov turbulence with ϑ = 1/3 to highly compressible Burgers turbulence with ϑ = 1/2. In this work, we analyze the properties of the small-scale dynamo for low magnetic Prandtl numbers Pm, which denotes the ratio of the magnetic Reynolds number, Rm, to the hydrodynamical one, Re. We solve the Kazantsev equation, which describes the evolution of the small-scale magnetic field, using the WKB approximation. In the limit of low magnetic Prandtl numbers, the growth rate is proportional to Rm^((1-ϑ)/(1+ϑ)). We furthermore discuss the critical magnetic Reynolds number Rm_crit, which is required for small-scale dynamo action. The value of Rm_crit is roughly 100 for Kolmogorov turbulence and 2700 for Burgers. Furthermore, we discuss that Rm_crit provides a stronger constraint in the limit of low Pm than it does for large Pm. We conclude that the small-scale dynamo can operate in the regime of low magnetic Prandtl numbers if the magnetic Reynolds number is large enough. Thus, the magnetic field amplification on small scales can take place in a broad range of physical environments and amplify weak magnetic seed fields on short time scales.
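
    The growth-rate scaling quoted above depends on the turbulence slope ϑ only through the exponent (1 − ϑ)/(1 + ϑ). A minimal Python sketch of that relation, evaluated for the two limiting spectra mentioned in the abstract (the Rm_crit values are the approximate figures quoted there, not recomputed):

    ```python
    def growth_exponent(theta: float) -> float:
        """Exponent a in the low-Pm scaling Gamma ~ Rm**a, with a = (1 - theta) / (1 + theta)."""
        return (1.0 - theta) / (1.0 + theta)

    # Approximate critical magnetic Reynolds numbers quoted in the abstract.
    spectra = {"Kolmogorov (theta=1/3)": (1.0 / 3.0, 100), "Burgers (theta=1/2)": (0.5, 2700)}

    for name, (theta, rm_crit) in spectra.items():
        print(f"{name}: growth exponent = {growth_exponent(theta):.2f}, Rm_crit ~ {rm_crit}")
    ```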

  5. Small-scale dynamo at low magnetic Prandtl numbers.

    Science.gov (United States)

    Schober, Jennifer; Schleicher, Dominik; Bovino, Stefano; Klessen, Ralf S

    2012-12-01

    The present-day Universe is highly magnetized, even though the first magnetic seed fields were most probably extremely weak. To explain the growth of the magnetic field strength over many orders of magnitude, fast amplification processes need to operate. The most efficient mechanism known today is the small-scale dynamo, which converts turbulent kinetic energy into magnetic energy leading to an exponential growth of the magnetic field. The efficiency of the dynamo depends on the type of turbulence indicated by the slope of the turbulence spectrum v(ℓ)∝ℓ^{ϑ}, where v(ℓ) is the eddy velocity at a scale ℓ. We explore turbulent spectra ranging from incompressible Kolmogorov turbulence with ϑ=1/3 to highly compressible Burgers turbulence with ϑ=1/2. In this work, we analyze the properties of the small-scale dynamo for low magnetic Prandtl numbers Pm, which denotes the ratio of the magnetic Reynolds number, Rm, to the hydrodynamical one, Re. We solve the Kazantsev equation, which describes the evolution of the small-scale magnetic field, using the WKB approximation. In the limit of low magnetic Prandtl numbers, the growth rate is proportional to Rm^{(1-ϑ)/(1+ϑ)}. We furthermore discuss the critical magnetic Reynolds number Rm_{crit}, which is required for small-scale dynamo action. The value of Rm_{crit} is roughly 100 for Kolmogorov turbulence and 2700 for Burgers. Furthermore, we discuss that Rm_{crit} provides a stronger constraint in the limit of low Pm than it does for large Pm. We conclude that the small-scale dynamo can operate in the regime of low magnetic Prandtl numbers if the magnetic Reynolds number is large enough. Thus, the magnetic field amplification on small scales can take place in a broad range of physical environments and amplify weak magnetic seed fields on short time scales.

  6. Aerodynamic Effects of Turbulence Intensity on a Variable-Speed Power-Turbine Blade with Large Incidence and Reynolds Number Variations

    Science.gov (United States)

    Flegel, Ashlie Brynn; Giel, Paul W.; Welch, Gerard E.

    2014-01-01

    The effects of inlet turbulence intensity on the aerodynamic performance of a variable-speed power-turbine blade are examined over large incidence and Reynolds number ranges. Both high- and low-turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The high turbulence study extends this to LPT-relevant turbulence levels while perhaps sacrificing transitional flow effects. Downstream total pressure and exit angle data were acquired for ten incidence angles ranging from +15.8° to −51.0°. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12×10⁵ to 2.12×10⁶ and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hotwire located 0.415 axial chord upstream of the blade row. The inlet turbulence levels ranged from 0.25–0.4% for the low-Tu tests and 8–15% for the high-Tu study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated as needed for computational inlet boundary conditions. Downstream flow field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7% axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction surface separation that was evident at many of the low-Tu conditions has been eliminated. At the extreme positive and negative incidence angles, the data show substantial differences in the exit flow field. These differences are attributable both to the higher inlet Tu directly and to the thinner inlet endwall

  7. Computer-based data acquisition system in the Large Coil Test Facility

    International Nuclear Information System (INIS)

    Gould, S.S.; Layman, L.R.; Million, D.L.

    1983-01-01

    The utilization of computers for data acquisition and control is of paramount importance on large-scale fusion experiments because they feature the ability to acquire data from a large number of sensors at various sample rates and provide for flexible data interpretation, presentation, reduction, and analysis. In the Large Coil Test Facility (LCTF) a Digital Equipment Corporation (DEC) PDP-11/60 host computer with the DEC RSX-11M operating system coordinates the activities of five DEC LSI-11/23 front-end processors (FEPs) via direct memory access (DMA) communication links. This provides host control of scheduled data acquisition and FEP event-triggered data collection tasks. Four of the five FEPs have no operating system

  8. Computational domain length and Reynolds number effects on large-scale coherent motions in turbulent pipe flow

    Science.gov (United States)

    Feldmann, Daniel; Bauer, Christian; Wagner, Claus

    2018-03-01

    We present results from direct numerical simulations (DNS) of turbulent pipe flow at shear Reynolds numbers up to Reτ = 1500 using different computational domains with lengths up to ?. The objectives are to analyse the effect of the finite size of the periodic pipe domain on large flow structures as a function of Reτ and to assess the minimum ? required for relevant turbulent scales to be captured and the minimum Reτ for very large-scale motions (VLSM) to be analysed. Analysing one-point statistics revealed that the mean velocity profile is invariant for ?. The wall-normal location at which deviations occur in shorter domains changes strongly with increasing Reτ from the near-wall region to the outer layer, where VLSM are believed to live. The root mean square velocity profiles exhibit domain length dependencies for pipes shorter than 14R and 7R, depending on Reτ. For all Reτ, the higher-order statistical moments show only weak dependencies, and only for the shortest domain considered here. However, the analysis of one- and two-dimensional pre-multiplied energy spectra revealed that even for larger ?, not all physically relevant scales are fully captured, even though the aforementioned statistics are in good agreement with the literature. We found ? to be sufficiently large to capture VLSM-relevant turbulent scales in the considered range of Reτ based on our definition of an integral energy threshold of 10%. The requirement to capture at least 1/10 of the global maximum energy level is justified by a 14% increase of the streamwise turbulence intensity in the outer region between Reτ = 720 and 1500, which can be related to VLSM-relevant length scales. Based on this scaling anomaly, we found Reτ⪆1500 to be a necessary minimum requirement to investigate VLSM-related effects in pipe flow, even though the streamwise energy spectrum does not yet indicate sufficient scale separation between the most energetic and the very long motions.

  9. Optimizing the number of steps in learning tasks for complex skills.

    Science.gov (United States)

    Nadolski, Rob J; Kirschner, Paul A; van Merriënboer, Jeroen J G

    2005-06-01

    Carrying out whole tasks is often too difficult for novice learners attempting to acquire complex skills. The common solution is to split up the tasks into a number of smaller steps. The number of steps must be optimized for efficient and effective learning. The aim of the study is to investigate the relation between the number of steps provided to learners and the quality of their learning of complex skills. It is hypothesized that students receiving an optimized number of steps will learn better than those receiving either the whole task in only one step or those receiving a large number of steps. Participants were 35 sophomore law students studying at Dutch universities, mean age=22.8 years (SD=3.5), 63% were female. Participants were randomly assigned to 1 of 3 computer-delivered versions of a multimedia programme on how to prepare and carry out a law plea. The versions differed only in the number of learning steps provided. Videotaped plea-performance results were determined, various related learning measures were acquired and all computer actions were logged and analyzed. Participants exposed to an intermediate (i.e. optimized) number of steps outperformed all others on the compulsory learning task. No differences in performance on a transfer task were found. A high number of steps proved to be less efficient for carrying out the learning task. An intermediate number of steps is the most effective, proving that the number of steps can be optimized for improving learning.

  10. Investigating the Randomness of Numbers

    Science.gov (United States)

    Pendleton, Kenn L.

    2009-01-01

    The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…

  11. Enhancement of phase space density by increasing trap anisotropy in a magneto-optical trap with a large number of atoms

    International Nuclear Information System (INIS)

    Vengalattore, M.; Conroy, R.S.; Prentiss, M.G.

    2004-01-01

    The phase space density of dense, cylindrical clouds of atoms in a 2D magneto-optic trap is investigated. For a large number of trapped atoms (>10⁸), the density of a spherical cloud is limited by photon reabsorption. However, as the atom cloud is deformed to reduce the radial optical density, the temperature of the atoms decreases due to the suppression of multiple scattering, leading to an increase in the phase space density. A phase space density of 2×10⁻⁴ has been achieved in a magneto-optic trap containing 2×10⁸ atoms
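
    For reference, the dimensionless phase-space density quoted above is conventionally ρ = n·λ_dB³, the peak number density times the cube of the thermal de Broglie wavelength. The Python sketch below evaluates this standard formula for ⁸⁷Rb; the density and temperature values are hypothetical placeholders, not numbers from the experiment.

    ```python
    import math

    H = 6.62607015e-34                      # Planck constant (J s)
    KB = 1.380649e-23                       # Boltzmann constant (J/K)
    M_RB87 = 86.909 * 1.66053906660e-27     # mass of 87Rb (kg)

    def phase_space_density(n: float, temperature: float, mass: float = M_RB87) -> float:
        """Dimensionless phase-space density rho = n * lambda_dB**3,
        with lambda_dB = h / sqrt(2 * pi * m * kB * T)."""
        lam = H / math.sqrt(2.0 * math.pi * mass * KB * temperature)
        return n * lam ** 3

    # Hypothetical values for illustration only: n in m^-3, T in K.
    print(phase_space_density(n=5e17, temperature=30e-6))
    ```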

  12. Summing large-N towers in colour flow evolution

    International Nuclear Information System (INIS)

    Plaetzer, Simon

    2013-12-01

    We consider soft gluon evolution in the colour flow basis. We give explicit expressions for the colour structure of the (one-loop) soft anomalous dimension matrix for an arbitrary number of partons, and show how the successive exponentiation of classes of large-N contributions can be achieved to provide a systematic expansion of the evolution in terms of colour-suppressed contributions.

  13. Number of detectable kaon decays at LAMPF II

    International Nuclear Information System (INIS)

    Sanford, T.W.L.

    1982-04-01

    The maximum number of kaon decays detectable at LAMPF II is estimated for both in-flight and stopping decays. Under reasonable assumptions, the momentum of the kaon beam that optimizes the decay yield occurs at about 6 GeV/c and 600 MeV/c for in-flight and stopping decays, respectively. K⁺ decay yields are of the order of 7 × 10⁷ per 10¹⁴ interacting protons, with K⁻ yields being typically 5 times less. By measuring decays from such beams, a statistical limit of 10⁻¹⁵ on a branching ratio to a particular channel can be placed in a 100-day run. The large number of kaon decays available at LAMPF II thus provides a powerful tool for sensitively examining rare-decay processes of the kaon

  14. Identifying a Superfluid Reynolds Number via Dynamical Similarity.

    Science.gov (United States)

    Reeves, M T; Billam, T P; Anderson, B P; Bradley, A S

    2015-04-17

    The Reynolds number provides a characterization of the transition to turbulent flow, with wide application in classical fluid dynamics. Identifying such a parameter in superfluid systems is challenging due to their fundamentally inviscid nature. Performing a systematic study of superfluid cylinder wakes in two dimensions, we observe dynamical similarity of the frequency of vortex shedding by a cylindrical obstacle. The universality of the turbulent wake dynamics is revealed by expressing shedding frequencies in terms of an appropriately defined superfluid Reynolds number, Re(s), that accounts for the breakdown of superfluid flow through quantum vortex shedding. For large obstacles, the dimensionless shedding frequency exhibits a universal form that is well-fitted by a classical empirical relation. In this regime the transition to turbulence occurs at Re(s)≈0.7, irrespective of obstacle width.
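
    As a rough illustration of the idea of a superfluid Reynolds number: the quantum of circulation κ = h/m plays the role of the kinematic viscosity, and the flow is measured relative to the critical speed for vortex shedding. The Python sketch below assumes the definition Re_s = (v − v_c)·d/κ, one form used in the superfluid-wake literature; the particle mass and numerical values are illustrative placeholders, not data from the paper.

    ```python
    import math

    H = 6.62607015e-34   # Planck constant (J s)

    def superfluid_reynolds(v: float, v_c: float, d: float, particle_mass: float) -> float:
        """Re_s = (v - v_c) * d / kappa, with kappa = h / m the quantum of circulation.
        This definition is an assumption for illustration only."""
        kappa = H / particle_mass
        return (v - v_c) * d / kappa

    # Illustrative numbers only: a 87Rb condensate flowing past a 10-micron obstacle.
    m_rb87 = 86.909 * 1.66053906660e-27
    print(superfluid_reynolds(v=1.5e-3, v_c=0.5e-3, d=10e-6, particle_mass=m_rb87))
    ```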

  15. Number-unconstrained quantum sensing

    Science.gov (United States)

    Mitchell, Morgan W.

    2017-12-01

    Quantum sensing is commonly described as a constrained optimization problem: maximize the information gained about an unknown quantity using a limited number of particles. Important sensors including gravitational wave interferometers and some atomic sensors do not appear to fit this description, because there is no external constraint on particle number. Here, we develop the theory of particle-number-unconstrained quantum sensing, and describe how optimal particle numbers emerge from the competition of particle-environment and particle-particle interactions. We apply the theory to optical probing of an atomic medium modeled as a resonant, saturable absorber, and observe the emergence of well-defined finite optima without external constraints. The results contradict some expectations from number-constrained quantum sensing and show that probing with squeezed beams can give a large sensitivity advantage over classical strategies when each is optimized for particle number.

  16. Therapy Provider Phase Information

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Therapy Provider Phase Information dataset is a tool for providers to search by their National Provider Identifier (NPI) number to determine their phase for...

  17. A METHOD AND AN APPARATUS FOR PROVIDING TIMING SIGNALS TO A NUMBER OF CIRCUITS, AN INTEGRATED CIRCUIT AND A NODE

    DEFF Research Database (Denmark)

    2006-01-01

    A method of providing or transporting a timing signal between a number of circuits, electrical or optical, where each circuit is fed by a node. The nodes forward timing signals between each other, and at least one node is adapted not to transmit a timing signal before having received a timing signal from at least two nodes. In this manner, the direction of the timing skew between nodes and circuits is known, and data transport between the circuits is made easier.

  18. Robust simulations of viscoelastic flows at high Weissenberg numbers with the streamfunction/log-conformation formulation

    DEFF Research Database (Denmark)

    Comminal, Raphaël; Spangenberg, Jon; Hattel, Jesper Henri

    2015-01-01

    potential of the velocity field, and provides a pressureless formulation of the conservation laws, which automatically enforces the incompressibility. The resulting numerical method is free from velocity-pressure decoupling errors, and can achieve stable calculations for large Courant numbers, which improve...

  19. Visuospatial Priming of the Mental Number Line

    Science.gov (United States)

    Stoianov, Ivilin; Kramer, Peter; Umilta, Carlo; Zorzi, Marco

    2008-01-01

    It has been argued that numbers are spatially organized along a "mental number line" that facilitates left-hand responses to small numbers, and right-hand responses to large numbers. We hypothesized that whenever the representations of visual and numerical space are concurrently activated, interactions can occur between them, before response…

  20. Number theory

    CERN Document Server

    Andrews, George E

    1994-01-01

    Although mathematics majors are usually conversant with number theory by the time they have completed a course in abstract algebra, other undergraduates, especially those in education and the liberal arts, often need a more basic introduction to the topic.In this book the author solves the problem of maintaining the interest of students at both levels by offering a combinatorial approach to elementary number theory. In studying number theory from such a perspective, mathematics majors are spared repetition and provided with new insights, while other students benefit from the consequent simpl

  1. Production of large number of water-cooled excitation coils with improved techniques for multipole magnets of INDUS -2

    International Nuclear Information System (INIS)

    Karmarkar, M.G.; Sreeramulu, K.; Kulshreshta, P.K.

    2003-01-01

    Accelerator multipole magnets are characterized by high field gradients and are powered with relatively high-current excitation coils. Due to space limitations in the magnet core/poles, a compact coil geometry is also necessary. The coils are made of several insulated turns using hollow copper conductor. The high current densities in these coils require cooling with low-conductivity water. Additionally, during operation the coils are subjected to thermal fatigue stresses. A large number of coils (650 in total) having different geometries were required for all multipole magnets, such as the quadrupoles (QP) and sextupoles (SP). Improved techniques for winding, insulation and epoxy consolidation were developed in-house at M D Lab, and all coils have been successfully made. The improved technology and production techniques adopted for the magnet coils, and their inspection, are briefly discussed in this paper. (author)

  2. CrossRef Large numbers of cold positronium atoms created in laser-selected Rydberg states using resonant charge exchange

    CERN Document Server

    McConnell, R; Kolthammer, WS; Richerme, P; Müllers, A; Walz, J; Grzonka, D; Zielinski, M; Fitzakerley, D; George, MC; Hessels, EA; Storry, CH; Weel, M

    2016-01-01

    Lasers are used to control the production of highly excited positronium atoms (Ps*). The laser light excites Cs atoms to Rydberg states that have a large cross section for resonant charge-exchange collisions with cold trapped positrons. For each trial with 30 million trapped positrons, more than 700 000 of the created Ps* have trajectories near the axis of the apparatus, and are detected using Stark ionization. This number of Ps* is 500 times higher than realized in an earlier proof-of-principle demonstration (2004 Phys. Lett. B 597 257). A second charge exchange of these near-axis Ps* with trapped antiprotons could be used to produce cold antihydrogen, and this antihydrogen production is expected to be increased by a similar factor.

  3. Tools for the automation of large control systems

    CERN Document Server

    Gaspar, Clara

    2005-01-01

    The new LHC experiments at CERN will have very large numbers of channels to operate. In order to be able to configure and monitor such large systems, a high degree of parallelism is necessary. The control system is built as a hierarchy of sub-systems distributed over several computers. A toolkit, SMI++, combining two approaches – finite state machines and rule-based programming – allows the various sub-systems to be described as decentralized deciding entities, reacting in real time to changes in the system, thus providing for the automation of standard procedures and for the automatic recovery from error conditions in a hierarchical fashion. In this paper we will describe the principles and features of SMI++ as well as its integration with an industrial SCADA tool for use by the LHC experiments, and we will try to show that such tools can provide a very convenient mechanism for the automation of large-scale, high-complexity applications.

  4. LHCb: Managing Large Data Productions in LHCb

    CERN Multimedia

    Tsaregorodtsev, A

    2009-01-01

    LHC experiments are producing very large volumes of data, either accumulated from the detectors or generated via Monte-Carlo modelling. The data should be processed as quickly as possible to provide users with the input for their analysis. Processing multiple hundreds of terabytes of data necessitates generating, submitting and following a huge number of grid jobs running all over the Computing Grid. Manipulation of these large and complex workloads is impossible without powerful production management tools. In LHCb, the DIRAC Production Management System (PMS) is used to accomplish this task. It enables production managers and end-users to deal with all kinds of data generation, processing and storage. Application workflow tools allow jobs to be defined as complex sequences of elementary application steps expressed as Directed Acyclic Graphs. Specialized databases and a number of dedicated software agents ensure automated, data-driven job creation and submission. The productions are accomplished by thorough ...

  5. Large-scale structure observables in general relativity

    International Nuclear Information System (INIS)

    Jeong, Donghui; Schmidt, Fabian

    2015-01-01

    We review recent studies that rigorously define several key observables of the large-scale structure of the Universe in a general relativistic context. Specifically, we consider (i) redshift perturbation of cosmic clock events; (ii) distortion of cosmic rulers, including weak lensing shear and magnification; and (iii) observed number density of tracers of the large-scale structure. We provide covariant and gauge-invariant expressions of these observables. Our expressions are given for a linearly perturbed flat Friedmann–Robertson–Walker metric including scalar, vector, and tensor metric perturbations. While we restrict ourselves to linear order in perturbation theory, the approach can be straightforwardly generalized to higher order. (paper)

  6. Image segmentation evaluation for very-large datasets

    Science.gov (United States)

    Reeves, Anthony P.; Liu, Shuang; Xie, Yiting

    2016-03-01

    With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual markings do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes are achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.

  7. Providing the physical basis of SCS curve number method and its proportionality relationship from Richards' equation

    Science.gov (United States)

    Hooshyar, M.; Wang, D.

    2016-12-01

    The empirical proportionality relationship, which indicates that the ratios of cumulative surface runoff and infiltration to their corresponding potentials are equal, is the basis of the extensively used Soil Conservation Service Curve Number (SCS-CN) method. The objective of this paper is to provide the physical basis of the SCS-CN method and its proportionality hypothesis from the infiltration-excess runoff generation perspective. To achieve this purpose, an analytical solution of Richards' equation is derived for ponded infiltration in a shallow water table environment under the following boundary conditions: 1) the soil is saturated at the land surface; and 2) there is a no-flux boundary which moves downward. The solution is established based on the assumptions of negligible gravitational effect, constant soil water diffusivity, and a hydrostatic soil moisture profile between the no-flux boundary and the water table. Based on the derived analytical solution, the proportionality hypothesis is a reasonable approximation for rainfall partitioning at the early stage of ponded infiltration in areas with a shallow water table for coarse-textured soils.
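
    For context, the SCS-CN method turns the proportionality hypothesis F/S = Q/(P − Ia) into the familiar runoff equation Q = (P − Ia)² / (P − Ia + S), with S derived from the curve number. Below is a minimal Python sketch of that standard formulation, using the common Ia = 0.2·S initial-abstraction assumption and the inch-based S = 1000/CN − 10; the example rainfall and curve number are arbitrary illustrative values.

    ```python
    def scs_runoff(precip_in: float, curve_number: float, ia_ratio: float = 0.2) -> float:
        """Direct runoff Q (inches) from rainfall P (inches) via the SCS curve number method.

        S  = 1000 / CN - 10               (potential maximum retention, inches)
        Ia = ia_ratio * S                 (initial abstraction, commonly 0.2 * S)
        Q  = (P - Ia)**2 / (P - Ia + S)   for P > Ia, else 0
        """
        s = 1000.0 / curve_number - 10.0
        ia = ia_ratio * s
        if precip_in <= ia:
            return 0.0
        return (precip_in - ia) ** 2 / (precip_in - ia + s)

    # Example: 3 inches of rain on a CN = 80 watershed (arbitrary illustrative values).
    print(round(scs_runoff(3.0, 80.0), 2))  # ~1.25 inches of direct runoff
    ```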

  8. Nice numbers

    CERN Document Server

    Barnes, John

    2016-01-01

    In this intriguing book, John Barnes takes us on a journey through aspects of numbers much as he took us on a geometrical journey in Gems of Geometry. Similarly originating from a series of lectures for adult students at Reading and Oxford University, this book touches a variety of amusing and fascinating topics regarding numbers and their uses both ancient and modern. The author intrigues and challenges his audience with both fundamental number topics such as prime numbers and cryptography, and themes of daily needs and pleasures such as counting one's assets, keeping track of time, and enjoying music. Puzzles and exercises at the end of each lecture offer additional inspiration, and numerous illustrations accompany the reader. Furthermore, a number of appendices provides in-depth insights into diverse topics such as Pascal’s triangle, the Rubik cube, Mersenne’s curious keyboards, and many others. A theme running through is the thought of what is our favourite number. Written in an engaging and witty sty...

  9. Self-Perceived End-of-Life Care Competencies of Health-Care Providers at a Large Academic Medical Center.

    Science.gov (United States)

    Montagnini, Marcos; Smith, Heather M; Price, Deborah M; Ghosh, Bidisha; Strodtman, Linda

    2018-01-01

    In the United States, most deaths occur in hospitals, with approximately 25% of hospitalized patients having palliative care needs. Therefore, the provision of good end-of-life (EOL) care to these patients is a priority. However, research assessing staff preparedness for the provision of EOL care to hospitalized patients is lacking. To assess health-care professionals' self-perceived competencies regarding the provision of EOL care in hospitalized patients. Descriptive study of self-perceived EOL care competencies among health-care professionals. The study instrument (End-of-Life Questionnaire) contains 28 questions assessing knowledge, attitudes, and behaviors related to the provision of EOL care. Health-care professionals (nursing, medicine, social work, psychology, physical, occupational and respiratory therapist, and spiritual care) at a large academic medical center participated in the study. Means were calculated for each item, and comparisons of mean scores were conducted via t tests. Analysis of variance was used to identify differences among groups. A total of 1197 questionnaires was completed. The greatest self-perceived competency was in providing emotional support for patients/families, and the least self-perceived competency was in providing continuity of care. When compared to nurses, physicians had higher scores on EOL care attitudes, behaviors, and communication. Physicians and nurses had higher scores on most subscales than other health-care providers. Differences in self-perceived EOL care competencies were identified among disciplines, particularly between physicians and nurses. The results provide evidence for assessing health-care providers to identify their specific training needs before implementing educational programs on EOL care.

  10. Investigating the Variability in Cumulus Cloud Number as a Function of Subdomain Size and Organization using large-domain LES

    Science.gov (United States)

    Neggers, R.

    2017-12-01

    Recent advances in supercomputing have introduced a "grey zone" in the representation of cumulus convection in general circulation models, in which this process is partially resolved. Cumulus parameterizations need to be made scale-aware and scale-adaptive to be able to conceptually and practically deal with this situation. A potential way forward is schemes formulated in terms of discretized Cloud Size Densities, or CSDs. Advantages include i) the introduction of scale-awareness at the foundation of the scheme, and ii) the possibility to apply size-filtering of parameterized convective transport and clouds. The CSD is a new variable that requires closure; this concerns its shape, its range, but also variability in cloud number that can appear due to i) subsampling effects and ii) organization in a cloud field. The goal of this study is to gain insight by means of sub-domain analyses of various large-domain LES realizations of cumulus cloud populations. For a series of three-dimensional snapshots, each with a different degree of organization, the cloud size distribution is calculated in all subdomains, for a range of subdomain sizes. The standard deviation of the number of clouds of a certain size is found to decrease with the subdomain size, following a power-law scaling corresponding to an inverse-linear dependence. Cloud number variability also increases with cloud size; this reflects that subsampling affects the largest clouds first, due to their typically larger neighbor spacing. Rewriting this dependence in terms of two dimensionless groups, by dividing by cloud number and cloud size respectively, yields a data collapse. Organization in the cloud field is found to act on top of this primary dependence, by enhancing the cloud number variability at the smaller sizes. This behavior reflects that small clouds start to "live" on top of larger structures such as cold pools, favoring or inhibiting their formation (as illustrated by the attached figure of cloud mask
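
    The subdomain analysis described above is straightforward to prototype: label contiguous cloudy pixels in a 2-D cloud mask, count the clouds falling in square subdomains of varying size, and inspect how the spread of the cloud number per unit area scales with subdomain size. A rough Python sketch follows; the random field used here is a synthetic stand-in for an LES cloud mask, not data from the study.

    ```python
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    mask = rng.random((1024, 1024)) > 0.97   # synthetic binary "cloud mask" (stand-in for LES output)
    labels, _ = ndimage.label(mask)          # connected-component labelling of cloudy pixels

    def cloud_density_std(labels: np.ndarray, sub: int) -> float:
        """Std of the cloud number per unit area over all sub x sub subdomains.
        A cloud overlapping a subdomain is counted for that subdomain."""
        n = labels.shape[0] // sub
        densities = [
            (len(np.unique(labels[i*sub:(i+1)*sub, j*sub:(j+1)*sub])) - 1) / sub**2  # drop background label 0
            for i in range(n) for j in range(n)
        ]
        return float(np.std(densities))

    # For a spatially random (unorganized) field the spread is expected to fall roughly
    # inversely with subdomain size; organization in real cloud fields adds variability on top.
    for sub in (64, 128, 256, 512):
        print(sub, cloud_density_std(labels, sub))
    ```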

  11. Developing a case mix classification for child and adolescent mental health services: the influence of presenting problems, complexity factors and service providers on number of appointments.

    Science.gov (United States)

    Martin, Peter; Davies, Roger; Macdougall, Amy; Ritchie, Benjamin; Vostanis, Panos; Whale, Andy; Wolpert, Miranda

    2017-09-01

    Case-mix classification is a focus of international attention in considering how best to manage and fund services, by providing a basis for fairer comparison of resource utilization. Yet there is little evidence of the best ways to establish case mix for child and adolescent mental health services (CAMHS). To develop a case mix classification for CAMHS that is clinically meaningful and predictive of number of appointments attended and to investigate the influence of presenting problems, context and complexity factors and provider variation. We analysed 4573 completed episodes of outpatient care from 11 English CAMHS. Cluster analysis, regression trees and a conceptual classification based on clinical best practice guidelines were compared regarding their ability to predict number of appointments, using mixed effects negative binomial regression. The conceptual classification is clinically meaningful and did as well as data-driven classifications in accounting for number of appointments. There was little evidence for effects of complexity or context factors, with the possible exception of school attendance problems. Substantial variation in resource provision between providers was not explained well by case mix. The conceptually-derived classification merits further testing and development in the context of collaborative decision making.
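
    The outcome here, number of appointments attended, is an overdispersed count, which is why a negative binomial regression is the natural model. The Python sketch below uses statsmodels with hypothetical column names (appointments, case_mix_group, provider); it also simplifies the study's mixed-effects formulation by standing in provider indicator terms for between-provider variation.

    ```python
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical episode-level table: one row per completed episode of care.
    df = pd.read_csv("camhs_episodes.csv")  # columns: appointments, case_mix_group, provider

    # Negative binomial GLM: number of appointments as a function of case-mix group,
    # with provider dummies as a simple stand-in for between-provider variation.
    model = smf.glm(
        "appointments ~ C(case_mix_group) + C(provider)",
        data=df,
        family=sm.families.NegativeBinomial(alpha=1.0),  # fixed dispersion for simplicity
    ).fit()
    print(model.summary())
    ```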

  12. Source-Independent Quantum Random Number Generation

    Science.gov (United States)

    Cao, Zhu; Zhou, Hongyi; Yuan, Xiao; Ma, Xiongfeng

    2016-01-01

    Quantum random number generators can provide genuine randomness by appealing to the fundamental principles of quantum mechanics. In general, a physical generator contains two parts—a randomness source and its readout. The source is essential to the quality of the resulting random numbers; hence, it needs to be carefully calibrated and modeled to achieve information-theoretically provable randomness. However, in practice, the source is a complicated physical system, such as a light source or an atomic ensemble, and any deviations in the real-life implementation from the theoretical model may affect the randomness of the output. To close this gap, we propose a source-independent scheme for quantum random number generation in which output randomness can be certified, even when the source is uncharacterized and untrusted. In our randomness analysis, we make no assumptions about the dimension of the source. For instance, multiphoton emissions are allowed in optical implementations. Our analysis takes into account the finite-key effect with the composable security definition. In the limit of large data size, the length of the input random seed is exponentially small compared to that of the output random bit string. In addition, by modifying a quantum key distribution system, we experimentally demonstrate our scheme and achieve a randomness generation rate of over 5 × 10³ bit/s.

  13. Escape from washing out of baryon number in a two-zero-texture general Zee model compatible with the large mixing angle MSW solution

    International Nuclear Information System (INIS)

    Hasegawa, K.; Lim, C.S.; Ogure, K.

    2003-01-01

    We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario

  14. Escape from washing out of baryon number in a two-zero-texture general Zee model compatible with the large mixing angle MSW solution

    Science.gov (United States)

    Hasegawa, K.; Lim, C. S.; Ogure, K.

    2003-09-01

    We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario.

  15. Escape from washing out of baryon number in a two-zero-texture general Zee model compatible with the large mixing angle MSW solution

    OpenAIRE

    Hasegawa, K.; Lim, C. S.; Ogure, K.

    2003-01-01

    We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario.

  16. Large transverse momentum phenomena

    International Nuclear Information System (INIS)

    Brodsky, S.J.

    1977-09-01

    It is pointed out that it is particularly significant that the quantum numbers of the leading particles are strongly correlated with the quantum numbers of the incident hadrons indicating that the valence quarks themselves are transferred to large p/sub t/. The crucial question is how they get there. Various hadron reactions are discussed covering the structure of exclusive reactions, inclusive reactions, normalization of inclusive cross sections, charge correlations, and jet production at large transverse momentum. 46 references

  17. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

    Background: Large-scale molecular evolutionary analyses of protein coding sequences require a number of preparatory, inter-related steps, from finding gene families to generating alignments and phylogenetic trees and assessing selective pressure variation. Each phase of these analyses can represent significant challenges, particularly when working with entire proteomes (all protein coding sequences in a genome) from a large number of species. Methods: We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results: We have benchmarked VESPA and our results show that the method is consistent, performs well on both large-scale and smaller-scale datasets, and produces results in line with previously published datasets. Discussion: Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  18. Asymptotic Analysis of Large Cooperative Relay Networks Using Random Matrix Theory

    Directory of Open Access Journals (Sweden)

    H. Poor

    2008-04-01

    Cooperative transmission is an emerging communication technology that takes advantage of the broadcast nature of wireless channels. In cooperative transmission, the use of relays can create a virtual antenna array so that multiple-input/multiple-output (MIMO) techniques can be employed. Most existing work in this area has focused on the situation in which there are a small number of sources and relays and a destination. In this paper, cooperative relay networks with large numbers of nodes are analyzed, and in particular the asymptotic performance improvement of cooperative transmission over direct transmission and relay transmission is analyzed using random matrix theory. The key idea is to investigate the eigenvalue distributions related to channel capacity and to analyze the moments of this distribution in large wireless networks. A performance upper bound is derived, the performance in the low signal-to-noise-ratio regime is analyzed, and two approximations are obtained for high and low relay-to-destination link qualities, respectively. Finally, simulations are provided to validate the accuracy of the analytical results. The analysis in this paper provides important tools for the understanding and the design of large cooperative wireless networks.
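
    A quick way to build intuition for the random-matrix view described above is to sample i.i.d. complex Gaussian channel matrices for a growing number of relays and watch the per-dimension mutual information log2 det(I + (SNR/N)·H·Hᴴ)/N concentrate around a deterministic limit, which is the kind of eigenvalue-based asymptotics the paper analyzes. The Monte Carlo sketch below in Python uses a Rayleigh-fading model and parameter values that are illustrative assumptions, not the paper's exact setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def avg_mutual_info(n_relays: int, snr: float = 10.0, trials: int = 200) -> float:
        """Monte Carlo estimate of E[log2 det(I + (snr / n) H H^H)] / n for an
        n x n i.i.d. complex Gaussian (Rayleigh) channel matrix H."""
        total = 0.0
        for _ in range(trials):
            h = (rng.standard_normal((n_relays, n_relays)) +
                 1j * rng.standard_normal((n_relays, n_relays))) / np.sqrt(2.0)
            gram = np.eye(n_relays) + (snr / n_relays) * h @ h.conj().T
            _, logdet = np.linalg.slogdet(gram)      # log|det|, real for Hermitian PD matrices
            total += logdet / np.log(2.0)
        return total / (trials * n_relays)

    for n in (2, 8, 32, 128):
        print(n, round(avg_mutual_info(n), 3))       # per-dimension rate settles toward a fixed limit
    ```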

  19. Factors influencing business of mobile telecommunication service providers in Vietnam

    Directory of Open Access Journals (Sweden)

    Ha Thanh Hai

    2018-05-01

    Full Text Available According to the Ministry of Information and Communications in Vietnam, as of November 2015 the number of mobile subscribers exceeded 120 million, accounting for over 86% of the total number of telephone subscribers. With a total population of over 92 million Vietnamese citizens, a stable national economy, and a large population of young consumers, the mobile communication industry still has huge potential for future development. Telecommunication service providers in Vietnam are facing fierce competition. Subscribers are expecting OTT (Over the Top) applications, good quality of service, and handset subsidies. This study investigated whether legal frameworks, OTT applications, quality of service, and handset subsidy are important components of mobile telecommunication service in Vietnam. This study used a quantitative method, distributing surveys to mobile subscribers. The findings show that all four factors significantly influence mobile business in Vietnam. Thus, telecommunication service providers in Vietnam must continuously innovate to enhance operational competitiveness, improve business efficiency, expand business scale, and improve their position in the market in order to ensure sustainable development. Moreover, the Vietnamese government needs to develop a legal framework that helps mobile telecommunication service providers enhance the common interests and benefits of the entire society.

  20. Smart Energy Cryo-refrigerator Technology for the next generation Very Large Array

    Science.gov (United States)

    Spagna, Stefano

    2018-01-01

    We describe a “smart energy” cryocooler technology architecture for the next generation Very Large Array that makes use of multiple variable-frequency cold heads driven from a single variable-speed, air-cooled compressor. Preliminary experiments indicate that the compressor's variable flow control, advanced diagnostics, and the cryo-refrigerator's low vibration provide a uniquely energy-efficient capability for the very large number of antennas that will be employed in this array.

  1. Large deviations

    CERN Document Server

    Varadhan, S R S

    2016-01-01

    The theory of large deviations deals with rates at which probabilities of certain events decay as a natural parameter in the problem varies. This book, which is based on a graduate course on large deviations at the Courant Institute, focuses on three concrete sets of examples: (i) diffusions with small noise and the exit problem, (ii) large time behavior of Markov processes and their connection to the Feynman-Kac formula and the related large deviation behavior of the number of distinct sites visited by a random walk, and (iii) interacting particle systems, their scaling limits, and large deviations from their expected limits. For the most part the examples are worked out in detail, and in the process the subject of large deviations is developed. The book will give the reader a flavor of how large deviation theory can help in problems that are not posed directly in terms of large deviations. The reader is assumed to have some familiarity with probability, Markov processes, and interacting particle systems.

  2. Influence of Extrinsic Information Scaling Coefficient on Double-Iterative Decoding Algorithm for Space-Time Turbo Codes with Large Number of Antennas

    Directory of Open Access Journals (Sweden)

    TRIFINA, L.

    2011-02-01

    Full Text Available This paper analyzes the extrinsic information scaling coefficient influence on double-iterative decoding algorithm for space-time turbo codes with large number of antennas. The max-log-APP algorithm is used, scaling both the extrinsic information in the turbo decoder and the one used at the input of the interference-canceling block. Scaling coefficients of 0.7 or 0.75 lead to a 0.5 dB coding gain compared to the no-scaling case, for one or more iterations to cancel the spatial interferences.
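
    The scaling operation itself is simple to state in code. The sketch below shows only where a coefficient of about 0.7-0.75 would be applied to extrinsic log-likelihood ratios in a schematic double-iterative loop; it is not a full max-log-APP turbo decoder, and the variable names in the comments are hypothetical.

```python
import numpy as np

def scale_extrinsic(llr_extrinsic: np.ndarray, coeff: float = 0.75) -> np.ndarray:
    """Attenuate extrinsic LLRs before they are reused as a priori information.

    Max-log-APP decoding tends to over-estimate extrinsic reliability; multiplying by a
    coefficient below 1 (about 0.7-0.75 according to the abstract) compensates for this.
    """
    return coeff * llr_extrinsic

# Schematic placement inside one decoding iteration (hypothetical variable names):
#   apriori_for_dec2 = scale_extrinsic(extrinsic_from_dec1)   # into the second SISO decoder
#   apriori_for_dec1 = scale_extrinsic(extrinsic_from_dec2)   # back into the first decoder
#   ic_block_input   = scale_extrinsic(extrinsic_from_dec2)   # into the interference-canceling block
print(scale_extrinsic(np.array([4.0, -2.0, 8.0])))            # [ 3.  -1.5  6. ]
```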

  3. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  4. Tools for the Automation of Large Distributed Control Systems

    CERN Document Server

    Gaspar, Clara

    2005-01-01

    The new LHC experiments at CERN will have very large numbers of channels to operate. In order to be able to configure and monitor such large systems, a high degree of parallelism is necessary. The control system is built as a hierarchy of sub-systems distributed over several computers. A toolkit - SMI++, combining two approaches: finite state machines and rule-based programming - allows for the description of the various sub-systems as decentralized deciding entities, reacting in real time to changes in the system, thus providing for the automation of standard procedures and for the automatic recovery from error conditions in a hierarchical fashion. In this paper we will describe the principles and features of SMI++ as well as its integration with an industrial SCADA tool for use by the LHC experiments, and we will try to show that such tools can provide a very convenient mechanism for the automation of large-scale, high-complexity applications.
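
    To make the idea of hierarchically organized, rule-driven deciding entities concrete, here is a minimal sketch in Python. It is not SMI++ and does not reproduce its state-manager language; it only illustrates how a parent unit can react automatically to state changes reported by its children, with the unit names and the rule invented for the example.

```python
from typing import Callable, List, Optional

class ControlUnit:
    """Minimal sketch of a hierarchical, rule-driven control node (not SMI++ itself)."""

    def __init__(self, name: str, state: str = "NOT_READY") -> None:
        self.name = name
        self.state = state
        self.parent: Optional["ControlUnit"] = None
        self.children: List["ControlUnit"] = []
        self.rules: List[Callable[["ControlUnit"], None]] = []

    def add_child(self, child: "ControlUnit") -> None:
        child.parent = self
        self.children.append(child)

    def set_state(self, new_state: str) -> None:
        self.state = new_state
        if self.parent is not None:
            self.parent.on_child_change()      # propagate the change up the hierarchy

    def on_child_change(self) -> None:
        for rule in self.rules:                # each rule acts as a small deciding entity
            rule(self)

# Example rule: a parent declares itself READY only when every child is READY;
# automatic recovery rules would be written in exactly the same style.
def all_children_ready(unit: ControlUnit) -> None:
    unit.set_state("READY" if all(c.state == "READY" for c in unit.children) else "NOT_READY")

detector = ControlUnit("DETECTOR")
detector.rules.append(all_children_ready)
for name in ("HV", "DAQ"):
    detector.add_child(ControlUnit(name))
detector.children[0].set_state("READY")
detector.children[1].set_state("READY")
print(detector.name, detector.state)           # DETECTOR READY
```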

  5. Schmidt number for quantum operations

    International Nuclear Information System (INIS)

    Huang Siendong

    2006-01-01

    To understand how entangled states behave under local quantum operations is an open problem in quantum-information theory. The Jamiolkowski isomorphism provides a natural way to study this problem in terms of quantum states. We introduce the Schmidt number for quantum operations by this duality and clarify how the Schmidt number of a quantum state changes under a local quantum operation. Some characterizations of quantum operations with Schmidt number k are also provided
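
    For reference, the underlying notions can be stated compactly. The LaTeX block below recalls the standard Schmidt decomposition of a pure bipartite state and the usual definition of the Schmidt number of a mixed state, on which the operational extension described above builds.

```latex
% Schmidt decomposition of a pure bipartite state (Schmidt rank r) and the standard
% definition of the Schmidt number of a mixed state.
\begin{align}
  |\psi\rangle_{AB} &= \sum_{i=1}^{r} \sqrt{\lambda_i}\,|i\rangle_A \otimes |i\rangle_B,
  \qquad \lambda_i > 0,\ \sum_i \lambda_i = 1, && \text{(Schmidt rank } r\text{)},\\
  \mathrm{SN}(\rho) &= \min_{\{p_k,\,|\psi_k\rangle\}}\ \max_k\
  \operatorname{rank}_{\mathrm{Schmidt}}\!\left(|\psi_k\rangle\right),
  \qquad \rho = \sum_k p_k\, |\psi_k\rangle\langle\psi_k| .
\end{align}
```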

  6. Beurling generalized numbers

    CERN Document Server

    Diamond, Harold G; Cheung, Man Ping

    2016-01-01

    "Generalized numbers" is a multiplicative structure introduced by A. Beurling to study how independent prime number theory is from the additivity of the natural numbers. The results and techniques of this theory apply to other systems having the character of prime numbers and integers; for example, it is used in the study of the prime number theorem (PNT) for ideals of algebraic number fields. Using both analytic and elementary methods, this book presents many old and new theorems, including several of the authors' results, and many examples of extremal behavior of g-number systems. Also, the authors give detailed accounts of the L^2 PNT theorem of J. P. Kahane and of the example created with H. L. Montgomery, showing that additive structure is needed for proving the Riemann hypothesis. Other interesting topics discussed are propositions "equivalent" to the PNT, the role of multiplicative convolution and Chebyshev's prime number formula for g-numbers, and how Beurling theory provides an interpretation of the ...

  7. Providing the Public with Online Access to Large Bibliographic Data Bases.

    Science.gov (United States)

    Firschein, Oscar; Summit, Roger K.

    DIALOG, an interactive, computer-based information retrieval language, consists of a series of computer programs designed to make use of direct access memory devices in order to provide the user with a rapid means of identifying records within a specific memory bank. Using the system, a library user can be provided access to sixteen distinct and…

  8. Rational-number comparison across notation: Fractions, decimals, and whole numbers.

    Science.gov (United States)

    Hurst, Michelle; Cordes, Sara

    2016-02-01

    Although fractions, decimals, and whole numbers can be used to represent the same rational-number values, it is unclear whether adults conceive of these rational-number magnitudes as lying along the same ordered mental continuum. In the current study, we investigated whether adults' processing of rational-number magnitudes in fraction, decimal, and whole-number notation shows systematic ratio-dependent responding characteristic of an integrated mental continuum. Both reaction time (RT) and eye-tracking data from a number-magnitude comparison task revealed ratio-dependent performance when adults compared the relative magnitudes of rational numbers, both within the same notation (e.g., fractions vs. fractions) and across different notations (e.g., fractions vs. decimals), pointing to an integrated mental continuum for rational numbers across notation types. In addition, eye-tracking analyses provided evidence of an implicit whole-number bias when we compared values in fraction notation, and individual differences in this whole-number bias were related to the individual's performance on a fraction arithmetic task. Implications of our results for both cognitive development research and math education are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  9. Accurate, high-throughput typing of copy number variation using paralogue ratios from dispersed repeats.

    Science.gov (United States)

    Armour, John A L; Palla, Raquel; Zeeuwen, Patrick L J M; den Heijer, Martin; Schalkwijk, Joost; Hollox, Edward J

    2007-01-01

    Recent work has demonstrated an unexpected prevalence of copy number variation in the human genome, and has highlighted the part this variation may play in predisposition to common phenotypes. Some important genes vary in number over a high range (e.g. DEFB4, which commonly varies between two and seven copies), and have posed formidable technical challenges for accurate copy number typing, so that there are no simple, cheap, high-throughput approaches suitable for large-scale screening. We have developed a simple comparative PCR method based on dispersed repeat sequences, using a single pair of precisely designed primers to amplify products simultaneously from both test and reference loci, which are subsequently distinguished and quantified via internal sequence differences. We have validated the method for the measurement of copy number at DEFB4 by comparison of results from >800 DNA samples with copy number measurements by MAPH/REDVR, MLPA and array-CGH. The new Paralogue Ratio Test (PRT) method can require as little as 10 ng genomic DNA, appears to be comparable in accuracy to the other methods, and for the first time provides a rapid, simple and inexpensive method for copy number analysis, suitable for application to typing thousands of samples in large case-control association studies.
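
    The arithmetic at the heart of a paralogue ratio test is a simple proportion. The toy function below converts a test-to-reference amplification ratio into a copy-number estimate under the simplifying assumptions that both loci amplify with equal efficiency and that the reference locus is present at two copies per diploid genome; the published protocol additionally relies on calibration samples, which this sketch omits.

```python
def copy_number_from_ratio(test_signal: float, reference_signal: float,
                           reference_copies: int = 2) -> float:
    """Toy paralogue-ratio estimate of test-locus copy number relative to a reference locus.

    Hypothetical, simplified model: equal amplification efficiency at both loci and a
    reference locus fixed at `reference_copies` per diploid genome.
    """
    return reference_copies * test_signal / reference_signal

# e.g. a test/reference peak-area ratio of 2.6 would suggest about 5 copies of DEFB4
print(round(copy_number_from_ratio(2.6, 1.0)))   # -> 5
```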

  10. Stabilization of atoms with nonzero magnetic quantum numbers

    International Nuclear Information System (INIS)

    Sundaram, B.; Jensen, R.V.

    1993-01-01

    A classical analysis of the interaction of an atomic electron with an oscillating electric field with arbitrary initial quantum number, n, magnetic quantum number, m > 0, field strength, and frequency shows that the classical dynamics of the perturbed electron can be stabilized for large fields and high frequencies. Using a four-dimensional map approximation to the classical dynamics, explicit expressions are obtained for the full parameter dependence of the boundaries of stability surrounding the "death valley" of rapid classical ionization. A preliminary analysis of the quantum dynamics in terms of the quasienergy states associated with the corresponding quantum map is also included, with particular emphasis on the role of unstable classical structures in stabilizing atoms. Together, these results provide motivation and direction for further theoretical and experimental studies of stabilization of atoms (and molecules) in super-intense microwave and laser fields

  11. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai

    2014-01-01

    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  12. Exploring Stigma by Association among Front-Line Care Providers Serving Sex Workers

    OpenAIRE

    Phillips, Rachel; Benoit, Cecilia

    2013-01-01

    Stigma by association, also referred to as “courtesy stigma,” involves public disapproval evoked as a consequence of associating with stigmatized persons. While a small number of sociological studies have shown how stigma by association limits the social support and social opportunities available to family members, there is a paucity of research examining this phenomenon among the large network of persons who provide health and social services to stigmatized groups. This paper presents result...

  13. Provider software buyer's guide.

    Science.gov (United States)

    1994-03-01

    To help long term care providers find new ways to improve quality of care and efficiency, Provider magazine presents the fourth annual listing of software firms marketing computer programs for all areas of nursing facility operations. On the following five pages, more than 80 software firms display their wares, with programs such as minimum data set and care planning, dietary, accounting and financials, case mix, and medication administration records. The guide also charts compatible hardware, integration ability, telephone numbers, company contacts, and easy-to-use reader service numbers.

  14. Effective number of breeders provides a link between interannual variation in stream flow and individual reproductive contribution in a stream salmonid.

    Science.gov (United States)

    Whiteley, Andrew R; Coombs, Jason A; Cembrola, Matthew; O'Donnell, Matthew J; Hudy, Mark; Nislow, Keith H; Letcher, Benjamin H

    2015-07-01

    The effective number of breeders that give rise to a cohort (N_b) is a promising metric for genetic monitoring of species with overlapping generations; however, more work is needed to understand factors that contribute to variation in this measure in natural populations. We tested hypotheses related to interannual variation in N_b in two long-term studies of brook trout populations. We found no supporting evidence for our initial hypothesis that N_b reflects N_c (defined as the number of adults in a population at the time of reproduction). N_b was stable relative to N_c and did not follow trends in abundance (one stream negative, the other positive). We used stream flow estimates to test the alternative hypothesis that environmental factors constrain N_b. We observed an intermediate optimum autumn stream flow for both N_b (R^2 = 0.73, P = 0.02) and full-sibling family evenness (R^2 = 0.77, P = 0.01) in one population and a negative correlation between autumn stream flow and full-sib family evenness in the other population (r = -0.95, P = 0.02). Evidence for greater reproductive skew at the lowest and highest autumn flow was consistent with suboptimal conditions at flow extremes. A series of additional tests provided no supporting evidence for a related hypothesis that density-dependent reproductive success was responsible for the lack of relationship between N_b and N_c (so-called genetic compensation). This work provides evidence that N_b is a useful metric of population-specific individual reproductive contribution for genetic monitoring across populations and the link we provide between stream flow and N_b could be used to help predict population resilience to environmental change. © 2015 John Wiley & Sons Ltd.

  15. Some types of parent number talk count more than others: relations between parents' input and children's cardinal-number knowledge.

    Science.gov (United States)

    Gunderson, Elizabeth A; Levine, Susan C

    2011-09-01

    Before they enter preschool, children vary greatly in their numerical and mathematical knowledge, and this knowledge predicts their achievement throughout elementary school (e.g. Duncan et al., 2007; Ginsburg & Russell, 1981). Therefore, it is critical that we look to the home environment for parental inputs that may lead to these early variations. Recent work has shown that the amount of number talk that parents engage in with their children is robustly related to a critical aspect of mathematical development - cardinal-number knowledge (e.g. knowing that the word 'three' refers to sets of three entities; Levine, Suriyakham, Rowe, Huttenlocher & Gunderson, 2010). The present study characterizes the different types of number talk that parents produce and investigates which types are most predictive of children's later cardinal-number knowledge. We find that parents' number talk involving counting or labeling sets of present, visible objects is related to children's later cardinal-number knowledge, whereas other types of parent number talk are not. In addition, number talk that refers to large sets of present objects (i.e. sets of size 4 to 10 that fall outside children's ability to track individual objects) is more robustly predictive of children's later cardinal-number knowledge than talk about smaller sets. The relation between parents' number talk about large sets of present objects and children's cardinal-number knowledge remains significant even when controlling for factors such as parents' socioeconomic status and other measures of parents' number and non-number talk. © 2011 Blackwell Publishing Ltd.

  16. Fourier analysis in combinatorial number theory

    International Nuclear Information System (INIS)

    Shkredov, Il'ya D

    2010-01-01

    In this survey applications of harmonic analysis to combinatorial number theory are considered. Discussion topics include classical problems of additive combinatorics, colouring problems, higher-order Fourier analysis, theorems about sets of large trigonometric sums, results on estimates for trigonometric sums over subgroups, and the connection between combinatorial and analytic number theory. Bibliography: 162 titles.

  17. Fourier analysis in combinatorial number theory

    Energy Technology Data Exchange (ETDEWEB)

    Shkredov, Il'ya D [M. V. Lomonosov Moscow State University, Moscow (Russian Federation)]

    2010-09-16

    In this survey applications of harmonic analysis to combinatorial number theory are considered. Discussion topics include classical problems of additive combinatorics, colouring problems, higher-order Fourier analysis, theorems about sets of large trigonometric sums, results on estimates for trigonometric sums over subgroups, and the connection between combinatorial and analytic number theory. Bibliography: 162 titles.

  18. Large Display Interaction Using Mobile Devices

    OpenAIRE

    Bauer, Jens

    2015-01-01

    Large displays become more and more popular, due to dropping prices. Their size and high resolution facilitate collaboration, and they are capable of displaying even large datasets in one view. This becomes even more interesting as the number of big data applications increases. The increased screen size and other properties of large displays pose new challenges to the Human-Computer Interaction with these screens. This includes issues such as limited scalability to the number of users, diver...

  19. Accurate measurement of gene copy number for human alpha-defensin DEFA1A3.

    Science.gov (United States)

    Khan, Fayeza F; Carpenter, Danielle; Mitchell, Laura; Mansouri, Omniah; Black, Holly A; Tyson, Jess; Armour, John A L

    2013-10-20

    Multi-allelic copy number variants include examples of extensive variation between individuals in the copy number of important genes, most notably genes involved in immune function. The definition of this variation, and analysis of its impact on function, has been hampered by the technical difficulty of large-scale but accurate typing of genomic copy number. The copy-variable alpha-defensin locus DEFA1A3 on human chromosome 8 commonly varies between 4 and 10 copies per diploid genome, and presents considerable challenges for accurate high-throughput typing. In this study, we developed two paralogue ratio tests and three allelic ratio measurements that, in combination, provide an accurate and scalable method for measurement of DEFA1A3 gene number. We combined information from different measurements in a maximum-likelihood framework which suggests that most samples can be assigned to an integer copy number with high confidence, and applied it to typing 589 unrelated European DNA samples. Typing the members of three-generation pedigrees provided further reassurance that correct integer copy numbers had been assigned. Our results have allowed us to discover that the SNP rs4300027 is strongly associated with DEFA1A3 gene copy number in European samples. We have developed an accurate and robust method for measurement of DEFA1A3 copy number. Interrogation of rs4300027 and associated SNPs in Genome-Wide Association Study SNP data provides no evidence that alpha-defensin copy number is a strong risk factor for phenotypes such as Crohn's disease, type I diabetes, HIV progression and multiple sclerosis.
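
    The maximum-likelihood step described above can be illustrated with a deliberately simplified model: treat each ratio-derived measurement as a noisy estimate of the true copy number with Gaussian error and pick the integer with the highest combined likelihood. The noise level and candidate range below are hypothetical, and the sketch is not the authors' full statistical framework.

```python
import math

def assign_integer_copy_number(measurements, candidates=range(2, 13), sigma=0.35):
    """Toy maximum-likelihood integer assignment (not the authors' full model).

    Each measurement is treated as a noisy estimate of the true copy number with an
    assumed Gaussian standard deviation `sigma`; independent measurements are combined
    by summing log-likelihoods, and a crude confidence (posterior probability under a
    flat prior over the candidates) is reported alongside the best integer.
    """
    def loglik(n):
        return sum(-0.5 * ((m - n) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
                   for m in measurements)
    logls = {n: loglik(n) for n in candidates}
    best = max(logls, key=logls.get)
    norm = sum(math.exp(v - logls[best]) for v in logls.values())
    return best, 1.0 / norm

print(assign_integer_copy_number([6.8, 7.2, 7.1]))   # -> (7, ~1.0)
```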

  20. Source-Independent Quantum Random Number Generation

    Directory of Open Access Journals (Sweden)

    Zhu Cao

    2016-02-01

    Full Text Available Quantum random number generators can provide genuine randomness by appealing to the fundamental principles of quantum mechanics. In general, a physical generator contains two parts—a randomness source and its readout. The source is essential to the quality of the resulting random numbers; hence, it needs to be carefully calibrated and modeled to achieve information-theoretical provable randomness. However, in practice, the source is a complicated physical system, such as a light source or an atomic ensemble, and any deviations in the real-life implementation from the theoretical model may affect the randomness of the output. To close this gap, we propose a source-independent scheme for quantum random number generation in which output randomness can be certified, even when the source is uncharacterized and untrusted. In our randomness analysis, we make no assumptions about the dimension of the source. For instance, multiphoton emissions are allowed in optical implementations. Our analysis takes into account the finite-key effect with the composable security definition. In the limit of large data size, the length of the input random seed is exponentially small compared to that of the output random bit. In addition, by modifying a quantum key distribution system, we experimentally demonstrate our scheme and achieve a randomness generation rate of over 5×10^{3}  bit/s.

  1. Transitional boundary layer in low-Prandtl-number convection at high Rayleigh number

    Science.gov (United States)

    Schumacher, Joerg; Bandaru, Vinodh; Pandey, Ambrish; Scheel, Janet

    2016-11-01

    The boundary layer structure of the velocity and temperature fields in turbulent Rayleigh-Bénard flows in closed cylindrical cells of unit aspect ratio is revisited from a transitional and turbulent viscous boundary layer perspective. When the Rayleigh number is large enough, the boundary layer dynamics at the bottom and top plates can be separated into an impact region of downwelling plumes, an ejection region of upwelling plumes, and an interior region (away from side walls) that is dominated by a shear flow of varying orientation. This interior plate region is compared here to classical wall-bounded shear flows. The working fluid is liquid mercury or liquid gallium at a Prandtl number of Pr = 0.021, for a range of Rayleigh numbers starting at 3×10^5. Work supported by the Deutsche Forschungsgemeinschaft.

  2. A Theory of Evolving Natural Constants Based on the Unification of General Theory of Relativity and Dirac's Large Number Hypothesis

    International Nuclear Information System (INIS)

    Peng Huanwu

    2005-01-01

    Taking Dirac's large number hypothesis as true, we have shown [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703] the inconsistency of applying Einstein's theory of general relativity with fixed gravitation constant G to cosmology, and a modified theory for varying G is found, which reduces to Einstein's theory outside the gravitating body for phenomena of short duration in small distances, thereby agrees with all the crucial tests formerly supporting Einstein's theory. The modified theory, when applied to the usual homogeneous cosmological model, gives rise to a variable cosmological tensor term determined by the derivatives of G, in place of the cosmological constant term usually introduced ad hoc. Without any free parameter the theoretical Hubble's relation obtained from the modified theory seems not in contradiction to observations, as Dr. Wang's preliminary analysis of the recent data indicates [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703]. As a complement to Commun. Theor. Phys. (Beijing, China) 42 (2004) 703 we shall study in this paper the modification of electromagnetism due to Dirac's large number hypothesis in more detail to show that the approximation of geometric optics still leads to null geodesics for the path of light, and that the general relation between the luminosity distance and the proper geometric distance is still valid in our theory as in Einstein's theory, and give the equations for homogeneous cosmological model involving matter plus electromagnetic radiation. Finally we consider the impact of the modification to quantum mechanics and statistical mechanics, and arrive at a systematic theory of evolving natural constants including Planck's h-bar as well as Boltzmann's k B by finding out their cosmologically combined counterparts with factors of appropriate powers of G that may remain truly constant to cosmologically long time.

  3. Optimal Chunking of Large Multidimensional Arrays for Data Warehousing

    Energy Technology Data Exchange (ETDEWEB)

    Otoo, Ekow J.; Rotem, Doron; Seshadri, Sridhar

    2008-02-15

    Very large multidimensional arrays are commonly used in data-intensive scientific computations as well as on-line analytical processing applications referred to as MOLAP. The storage organization of such arrays on disks is done by partitioning the large global array into fixed-size sub-arrays called chunks or tiles that form the units of data transfer between disk and memory. Typical queries involve the retrieval of sub-arrays in a manner that accesses all chunks that overlap the query results. An important metric of the storage efficiency is the expected number of chunks retrieved over all such queries. The question that immediately arises is "what shapes of array chunks give the minimum expected number of chunks over a query workload?" The problem of optimal chunking was first introduced by Sarawagi and Stonebraker, who gave an approximate solution. In this paper we develop exact mathematical models of the problem and provide exact solutions using steepest descent and geometric programming methods. Experimental results, using synthetic and real-life workloads, show that our solutions are consistently within 2.0 percent of the true number of chunks retrieved for any number of dimensions. In contrast, the approximate solution of Sarawagi and Stonebraker can deviate considerably from the true result with an increasing number of dimensions and also may lead to suboptimal chunk shapes.
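
    The objective being minimized can be written down directly: under the usual continuous approximation, a box query of extent q_i per dimension overlaps about q_i/c_i + 1 chunks of edge c_i along that dimension, so the expected number of chunks retrieved is the product of these factors. The sketch below evaluates this cost and finds a good 2-D chunk shape by brute force; it is a toy stand-in for the steepest-descent and geometric-programming solutions developed in the paper, with an invented query workload.

```python
import math

def expected_chunks(query_extent, chunk_shape):
    """Approximate expected number of chunks overlapped by a randomly placed box query.

    Standard continuous approximation: along each dimension, a query of extent q over
    chunks of edge c touches about q/c + 1 chunks on average.
    """
    return math.prod(q / c + 1.0 for q, c in zip(query_extent, chunk_shape))

def best_chunk_shape_2d(query_extent, chunk_volume):
    """Toy brute force over 2-D integer chunk shapes with c1 * c2 <= chunk_volume."""
    best_shape, best_cost = None, float("inf")
    for c1 in range(1, chunk_volume + 1):
        c2 = max(1, chunk_volume // c1)
        cost = expected_chunks(query_extent, (c1, c2))
        if cost < best_cost:
            best_shape, best_cost = (c1, c2), cost
    return best_shape, best_cost

# Hypothetical workload: average query of 1000 x 10 cells, chunks of ~4096 cells each.
shape, cost = best_chunk_shape_2d((1000, 10), 4096)
print(f"chunk shape {shape} -> expected chunks per query ~ {cost:.1f}")
```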

  4. Global analysis of seagrass restoration: the importance of large-scale planting

    KAUST Repository

    van Katwijk, Marieke M.; Thorhaug, Anitra; Marbà, Núria; Orth, Robert J.; Duarte, Carlos M.; Kendrick, Gary A.; Althuizen, Inge H. J.; Balestri, Elena; Bernard, Guillaume; Cambridge, Marion L.; Cunha, Alexandra; Durance, Cynthia; Giesen, Wim; Han, Qiuying; Hosokawa, Shinya; Kiswara, Wawan; Komatsu, Teruhisa; Lardicci, Claudio; Lee, Kun-Seop; Meinesz, Alexandre; Nakaoka, Masahiro; O'Brien, Katherine R.; Paling, Erik I.; Pickerell, Chris; Ransijn, Aryan M. A.; Verduin, Jennifer J.

    2015-01-01

    In coastal and estuarine systems, foundation species like seagrasses, mangroves, saltmarshes or corals provide important ecosystem services. Seagrasses are globally declining and their reintroduction has been shown to restore ecosystem functions. However, seagrass restoration is often challenging, given the dynamic and stressful environment that seagrasses often grow in. From our world-wide meta-analysis of seagrass restoration trials (1786 trials), we describe general features and best practice for seagrass restoration. We confirm that removal of threats is important prior to replanting. Reduced water quality (mainly eutrophication), and construction activities led to poorer restoration success than, for instance, dredging, local direct impact and natural causes. Proximity to and recovery of donor beds were positively correlated with trial performance. Planting techniques can influence restoration success. The meta-analysis shows that both trial survival and seagrass population growth rate in trials that survived are positively affected by the number of plants or seeds initially transplanted. This relationship between restoration scale and restoration success was not related to trial characteristics of the initial restoration. The majority of the seagrass restoration trials have been very small, which may explain the low overall trial survival rate (i.e. estimated 37%). Successful regrowth of the foundation seagrass species appears to require crossing a minimum threshold of reintroduced individuals. Our study provides the first global field evidence for the requirement of a critical mass for recovery, which may also hold for other foundation species showing strong positive feedback to a dynamic environment. Synthesis and applications. For effective restoration of seagrass foundation species in its typically dynamic, stressful environment, introduction of large numbers is seen to be beneficial and probably serves two purposes. First, a large-scale planting

  5. Global analysis of seagrass restoration: the importance of large-scale planting

    KAUST Repository

    van Katwijk, Marieke M.

    2015-10-28

    In coastal and estuarine systems, foundation species like seagrasses, mangroves, saltmarshes or corals provide important ecosystem services. Seagrasses are globally declining and their reintroduction has been shown to restore ecosystem functions. However, seagrass restoration is often challenging, given the dynamic and stressful environment that seagrasses often grow in. From our world-wide meta-analysis of seagrass restoration trials (1786 trials), we describe general features and best practice for seagrass restoration. We confirm that removal of threats is important prior to replanting. Reduced water quality (mainly eutrophication), and construction activities led to poorer restoration success than, for instance, dredging, local direct impact and natural causes. Proximity to and recovery of donor beds were positively correlated with trial performance. Planting techniques can influence restoration success. The meta-analysis shows that both trial survival and seagrass population growth rate in trials that survived are positively affected by the number of plants or seeds initially transplanted. This relationship between restoration scale and restoration success was not related to trial characteristics of the initial restoration. The majority of the seagrass restoration trials have been very small, which may explain the low overall trial survival rate (i.e. estimated 37%). Successful regrowth of the foundation seagrass species appears to require crossing a minimum threshold of reintroduced individuals. Our study provides the first global field evidence for the requirement of a critical mass for recovery, which may also hold for other foundation species showing strong positive feedback to a dynamic environment. Synthesis and applications. For effective restoration of seagrass foundation species in its typically dynamic, stressful environment, introduction of large numbers is seen to be beneficial and probably serves two purposes. First, a large-scale planting

  6. A new pseudorandom number generator based on a complex number chaotic equation

    International Nuclear Information System (INIS)

    Liu Yang; Tong Xiao-Jun

    2012-01-01

    In recent years, various chaotic equation based pseudorandom number generators have been proposed. However, the chaotic equations are all defined in the real number field. In this paper, an equation is proposed and proved to be chaotic in the imaginary axis. And a pseudorandom number generator is constructed based on the chaotic equation. The alteration of the definitional domain of the chaotic equation from the real number field to the complex one provides a new approach to the construction of chaotic equations, and a new method to generate pseudorandom number sequences accordingly. Both theoretical analysis and experimental results show that the sequences generated by the proposed pseudorandom number generator possess many good properties
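
    As a generic illustration of how a chaotic map can drive a bit generator (using the real-valued logistic map, not the complex-domain equation proposed in the paper), the sketch below iterates the map and thresholds the trajectory. Raw bits produced this way are biased and correlated, which is why practical designs add post-processing; the seed and parameters here are arbitrary.

```python
def logistic_bits(seed: float, n_bits: int, r: float = 3.99, burn_in: int = 100):
    """Generic chaotic-map bit generator (real-valued logistic map, NOT the paper's
    complex-domain equation): iterate x -> r*x*(1-x) and threshold the trajectory."""
    x = seed
    for _ in range(burn_in):          # discard the transient so output sits on the attractor
        x = r * x * (1.0 - x)
    bits = []
    for _ in range(n_bits):
        x = r * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

print("".join(map(str, logistic_bits(0.123456789, 32))))
```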

  7. Nuclear refugees after large radioactive releases

    International Nuclear Information System (INIS)

    Pascucci-Cahen, Ludivine; Groell, Jérôme

    2016-01-01

    However improbable, large radioactive releases from a nuclear power plant would entail major consequences for the surrounding population. In Fukushima, 80,000 people had to evacuate the most contaminated areas around the NPP for a prolonged period of time. These people have been called “nuclear refugees”. The paper first argues that the number of nuclear refugees is a better measure of the severity of radiological consequences than the number of fatalities, although the latter is widely used to assess other catastrophic events such as earthquakes or tsunami. It is a valuable partial indicator in the context of comprehensive studies of overall consequences. Section 2 makes a clear distinction between long-term relocation and emergency evacuation and proposes a method to estimate the number of refugees. Section 3 examines the distribution of nuclear refugees with respect to weather and release site. The distribution is asymmetric and fat-tailed: unfavorable weather can lead to the contamination of large areas of land; large cities have in turn a higher probability of being contaminated. - Highlights: • Number of refugees is a good indicator of the severity of radiological consequences. • It is a better measure of the long-term consequences than the number of fatalities. • A representative meteorological sample should be sufficiently large. • The number of refugees highly depends on the release site in a country like France.

  8. Dogs Have the Most Neurons, Though Not the Largest Brain: Trade-Off between Body Mass and Number of Neurons in the Cerebral Cortex of Large Carnivoran Species

    Directory of Open Access Journals (Sweden)

    Débora Jardim-Messeder

    2017-12-01

    Full Text Available Carnivorans are a diverse group of mammals that includes carnivorous, omnivorous and herbivorous, domesticated and wild species, with a large range of brain sizes. Carnivory is one of several factors expected to be cognitively demanding for carnivorans due to a requirement to outsmart larger prey. On the other hand, large carnivoran species have high hunting costs and unreliable feeding patterns, which, given the high metabolic cost of brain neurons, might put them at risk of metabolic constraints regarding how many brain neurons they can afford, especially in the cerebral cortex. For a given cortical size, do carnivoran species have more cortical neurons than the herbivorous species they prey upon? We find they do not; carnivorans (cat, mongoose, dog, hyena, lion) share with non-primates, including artiodactyls (the typical prey of large carnivorans), roughly the same relationship between cortical mass and number of neurons, which suggests that carnivorans are subject to the same evolutionary scaling rules as other non-primate clades. However, there are a few important exceptions. Carnivorans stand out in that the usual relationship between larger body, larger cortical mass and larger number of cortical neurons only applies to small and medium-sized species, and not beyond dogs: we find that the golden retriever dog has more cortical neurons than the striped hyena, African lion and even brown bear, even though the latter species have up to three times larger cortices than dogs. Remarkably, the brown bear cerebral cortex, the largest examined, only has as many neurons as the ten times smaller cat cerebral cortex, although it does have the expected ten times as many non-neuronal cells in the cerebral cortex compared to the cat. We also find that raccoons have dog-like numbers of neurons in their cat-sized brain, which makes them comparable to primates in neuronal density. Comparison of domestic and wild species suggests that the neuronal

  9. Large Eddy Simulation study of the development of finite-channel lock-release currents at high Grashof numbers

    Science.gov (United States)

    Ooi, Seng-Keat

    2005-11-01

    Lock-exchange gravity current flows produced by the instantaneous release of a heavy fluid are investigated using well-resolved three-dimensional large eddy simulations (LES) at Grashof numbers up to 8×10^9. It is found that the 3-D simulations correctly predict a constant front velocity over the initial slumping phase and a front speed decrease proportional to t^(-1/3) (the time t is measured from the release) over the inviscid phase, in agreement with theory. The evolution of the current in the simulations is found to be similar to that observed experimentally by Hacker et al. (1996). The effect of the dynamic LES model on the solutions is discussed. The energy budget of the current is discussed and the contribution of the turbulent dissipation to the total dissipation is analyzed. The limitations of less expensive 2-D simulations are discussed; in particular, their failure to correctly predict the spatio-temporal distributions of the bed shear stresses, which is important in determining the amount of sediment the gravity current can entrain when it advances over a loose bed.
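
    The t^(-1/3) decay of the front speed quoted above follows from a standard box-model argument for a planar, constant-volume current, sketched here for orientation (the simulations themselves do not rely on it):

```latex
% Standard box-model argument for the inviscid phase of a planar, constant-volume
% lock-release current, consistent with the t^(-1/3) front-speed decay quoted above.
\begin{align}
  \frac{dx_f}{dt} &= \mathrm{Fr}\,\sqrt{g' h}, \qquad h\,x_f = A \ \text{(constant area per unit width)}\\
  \Rightarrow\quad \frac{dx_f}{dt} &= \mathrm{Fr}\,\sqrt{g' A}\; x_f^{-1/2}
  \;\;\Rightarrow\;\; x_f \propto t^{2/3}, \qquad u_f = \frac{dx_f}{dt} \propto t^{-1/3}.
\end{align}
```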

  10. Attenuation of contaminant plumes in homogeneous aquifers: Sensitivity to source function at moderate to large Peclet numbers

    International Nuclear Information System (INIS)

    Selander, W.N.; Lane, F.E.; Rowat, J.H.

    1995-05-01

    A groundwater mass transfer calculation is an essential part of the performance assessment for radioactive waste disposal facilities. AECL's IRUS (Intrusion Resistant Underground Structure) facility, which is designed for the near-surface disposal of low-level radioactive waste (LLRW), is to be situated in the sandy overburden at AECL's Chalk River Laboratories. Flow in the sandy aquifers at the proposed IRUS site is relatively homogeneous and advection-dominated (large Peclet numbers). Mass transfer along the mean direction of flow from the IRUS site may be described using the one-dimensional advection-dispersion equation, for which a Green's function representation of downstream radionuclide flux is convenient. This report shows that in advection-dominated aquifers, dispersive attenuation of initial contaminant releases depends principally on two time scales: the source duration and the pulse breakthrough time. Numerical investigation shows further that the maximum downstream flux or concentration depends on these time scales in a simple characteristic way that is minimally sensitive to the shape of the initial source pulse. (author). 11 refs., 2 tabs., 3 figs
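
    For orientation, one standard form of the one-dimensional advection-dispersion equation and its Green's function for an instantaneous unit release on an infinite domain is recalled below; the report's actual source functions and boundary conditions may differ, so this is only the generic building block behind the convolution representation mentioned above.

```latex
% 1-D advection-dispersion equation (velocity v, dispersion coefficient D) and its
% free-space Green's function for a unit instantaneous release at x = 0, t = 0.
\begin{align}
  \frac{\partial C}{\partial t} + v\,\frac{\partial C}{\partial x}
    &= D\,\frac{\partial^2 C}{\partial x^2},\\
  G(x,t) &= \frac{1}{\sqrt{4\pi D t}}\,
            \exp\!\left[-\frac{(x - v t)^2}{4 D t}\right],
  \qquad \mathrm{Pe} = \frac{vL}{D} \gg 1 \ \text{(advection-dominated)} .
\end{align}
% The downstream response to a general source s(t) follows by convolution:
%   C(x,t) = \int_0^t s(\tau)\, G(x, t - \tau)\, d\tau .
```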

  11. Funny Numbers

    Directory of Open Access Journals (Sweden)

    Theodore M. Porter

    2012-12-01

    Full Text Available The struggle over cure rate measures in nineteenth-century asylums provides an exemplary instance of how, when used for official assessments of institutions, these numbers become sites of contestation. The evasion of goals and corruption of measures tends to make these numbers “funny” in the sense of becoming dishonest, while the mismatch between boring, technical appearances and cunning backstage manipulations supplies dark humor. The dangers are evident in recent efforts to decentralize the functions of governments and corporations using incentives based on quantified targets.

  12. 5000 sustainable workplaces - Wood energy provides work

    International Nuclear Information System (INIS)

    Keel, A.

    2009-01-01

    This article presents the results of a study made by the Swiss Wood Energy Association on the regional and national added value resulting from large wood-fired installations in Switzerland. The number of workplaces created by these installations is also noted. Wood energy is quoted as not only being a way of using forest wastes but also as being a creator of employment. Large wood-fired heating installations are commented on and efforts to promote this type of energy supply even further are discussed. The study indicates which professions benefit from the use of wood energy and quantifies the number of workplaces per megawatt of installed power that result.

  13. Symmetry mappings concomitant to particle-number-conservation-baryon-number conservation

    International Nuclear Information System (INIS)

    Davis, W.R.

    1977-01-01

    Four theorems serve to demonstrate that matter fields in space-time admit certain timelike symmetry mappings concomitant to the familiar notion of particle number conservation, which can be more fundamentally accounted for by a type of projective invariance principle. These particular symmetry mappings include a family of symmetry properties that may be admitted by Riemannian space-times. In their strongest form, the results obtained provide some insight relating to the conservation of baryon number

  14. Asymptotic numbers, asymptotic functions and distributions

    International Nuclear Information System (INIS)

    Todorov, T.D.

    1979-07-01

    The asymptotic functions are a new type of generalized functions. However, they are not functionals on some space of test functions, as the distributions of Schwartz are. They are mappings of the set denoted by A into A, where A is the set of the asymptotic numbers introduced by Christov. For its part, A is a totally ordered set of generalized numbers including the system of real numbers R as well as infinitesimals and infinitely large numbers. Every two asymptotic functions can be multiplied. On the other hand, the distributions have realizations as asymptotic functions in a certain sense. (author)

  15. Large-Scale Flight Phase Identification from ADS-B Data Using Machine Learning Methods

    NARCIS (Netherlands)

    Sun, J.; Ellerbroek, J.; Hoekstra, J.M.; Lovell, D.; Fricke, H.

    2016-01-01

    With the increasing availability of ADS-B transponders on commercial aircraft, as well as the rapidly growing deployment of ground stations that provide public access to their data, accessing open aircraft flight data is becoming easier for researchers. Given the large number of operational
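
    Although the record above is truncated, the kind of features involved can be illustrated with a naive, threshold-based labeler for a single ADS-B sample. This is explicitly not the machine-learning approach of the paper; the thresholds are hypothetical and serve only to show which quantities (altitude, vertical rate, ground speed) such classifiers typically consume.

```python
def label_phase(altitude_ft: float, vertical_rate_fpm: float, ground_speed_kt: float) -> str:
    """Naive rule-of-thumb flight-phase labeler for a single ADS-B sample.

    NOT the machine-learning method of the paper; a simple baseline with hypothetical
    thresholds, shown only to illustrate the features such classifiers work from.
    """
    if ground_speed_kt < 50:
        return "ground"
    if altitude_ft > 25000 and abs(vertical_rate_fpm) < 500:
        return "cruise"
    if vertical_rate_fpm > 500:
        return "climb"
    if vertical_rate_fpm < -500:
        return "descent"
    return "level"

print(label_phase(36000, 0, 450))     # cruise
print(label_phase(8000, 2200, 280))   # climb
```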

  16. Simplified Deployment of Health Informatics Applications by Providing Docker Images.

    Science.gov (United States)

    Löbe, Matthias; Ganslandt, Thomas; Lotzmann, Lydia; Mate, Sebastian; Christoph, Jan; Baum, Benjamin; Sariyar, Murat; Wu, Jie; Stäubert, Sebastian

    2016-01-01

    Due to the specific needs of biomedical researchers, in-house development of software is widespread. A common problem is to maintain and enhance software after the funded project has ended. Even though many tools are made open source, only a few projects manage to attract a user base large enough to ensure sustainability. Reasons for this include the complex installation and configuration of biomedical software as well as ambiguous terminology for the features provided, all of which make evaluation of software laborious. Docker is a para-virtualization technology based on Linux containers that eases deployment of applications and facilitates evaluation. We investigated a suite of software developments funded by a large umbrella organization for networked medical research within the last 10 years and created Docker containers for a number of applications to support utilization and dissemination.

  17. Large inserts for big data: artificial chromosomes in the genomic era.

    Science.gov (United States)

    Tocchetti, Arianna; Donadio, Stefano; Sosio, Margherita

    2018-05-01

    The exponential increase in available microbial genome sequences coupled with predictive bioinformatic tools is underscoring the genetic capacity of bacteria to produce an unexpectedly large number of specialized bioactive compounds. Since most of the biosynthetic gene clusters (BGCs) present in microbial genomes are cryptic, i.e. not expressed under laboratory conditions, a variety of cloning systems and vectors have been devised to harbor DNA fragments large enough to carry entire BGCs and to allow their transfer into suitable heterologous hosts. This minireview provides an overview of the vectors and approaches that have been developed for cloning large BGCs, and successful examples of heterologous expression.

  18. Provider Use of a Novel EHR display in the Pediatric Intensive Care Unit. Large Customizable Interactive Monitor (LCIM).

    Science.gov (United States)

    Asan, Onur; Holden, Richard J; Flynn, Kathryn E; Yang, Yushi; Azam, Laila; Scanlon, Matthew C

    2016-07-20

    The purpose of this study was to explore providers' perspectives on the use of the "Large Customizable Interactive Monitor" (LCIM), a novel application of the electronic health record system implemented in a Pediatric Intensive Care Unit. We employed a qualitative approach to collect and analyze data from pediatric intensive care physicians, pediatric nurse practitioners, and acute care specialists. Using semi-structured interviews, we collected data from January to April 2015. The research team analyzed the transcripts using an iterative coding method to identify common themes. Study results highlight contextual data on providers' use routines of the LCIM. Findings from thirty-six interviews were classified into three groups: 1) providers' familiarity with the LCIM; 2) providers' use routines (i.e. when and how they use it); and 3) reasons why they use or do not use it. It is important to conduct baseline studies of the use of novel technologies. Training and orientation affect the adoption and use patterns of this new technology. This study is notable for being the first to investigate an LCIM system, a next-generation system implemented in the pediatric critical care setting. Our study revealed that this next-generation HIT might have great potential for family-centered rounds, team education during rounds, and family education/engagement in their child's health in the patient room.

  19. Number of deaths due to lung diseases: How large is the problem?

    International Nuclear Information System (INIS)

    Wagener, D.K.

    1990-01-01

    The importance of lung disease as an indicator of environmentally induced adverse health effects has been recognized by inclusion among the Health Objectives for the Nation. The 1990 Health Objectives for the Nation (US Department of Health and Human Services, 1986) includes an objective that there should be virtually no new cases among newly exposed workers for four preventable occupational lung diseases: asbestosis, byssinosis, silicosis, and coal workers' pneumoconiosis. This brief communication describes two types of cause-of-death statistics, underlying cause and multiple cause, and demonstrates the differences between the two statistics using lung disease deaths among adult men. The choice of statistic has a large impact on estimated lung disease mortality rates. The choice of statistic may also have a large effect on the estimated mortality rates due to other chronic diseases thought to be environmentally mediated. Issues of comorbidity and the way causes of death are reported become important in the interpretation of these statistics. The choice of which statistic to use when comparing data from a study population with national statistics may greatly affect the interpretations of the study findings

  20. Detailed Measurements of Rayleigh-Taylor Mixing at Large and Small Atwood Numbers

    International Nuclear Information System (INIS)

    Andrews, Malcolm J., Ph.D.

    2004-01-01

    This project has two major tasks: Task 1. The construction of a new air/helium facility to collect detailed measurements of Rayleigh-Taylor (RT) mixing at high Atwood number, and the distribution of these data to LLNL, LANL, and Alliance members for code validation and design purposes. Task 2. The collection of initial condition data from the new Air/Helium facility, for use with validation of RT simulation codes at LLNL and LANL. Also, studies of multi-layer mixing with the existing water channel facility. Over the last twelve (12) months there has been excellent progress, detailed in this report, with both tasks. As of December 10, 2004, the air/helium facility is now complete and extensive testing and validation of diagnostics has been performed. Currently, experiments with air/helium up to Atwood numbers of 0.25 (the maximum is 0.75, but the highest Reynolds numbers are at 0.25) are being performed. The progress matches the project plan, as does the budget, and we expect this to continue for 2005. With interest expressed by LLNL, we have continued with initial condition studies using the water channel. This work has also progressed well, with one of the graduate Research Assistants (Mr. Nick Mueschke) visiting LLNL the past two summers to work with Dr. O. Schilling. Several journal papers are in preparation that describe the work. Two M.Sc. degrees have been completed (Mr. Nick Mueschke and Mr. Wayne Kraft, 12/1/03). Nick and Wayne are both pursuing Ph.D.s funded by this DOE Alliances project. Presently three (3) Ph.D. graduate Research Assistants are supported on the project, and two (2) undergraduate Research Assistants. During the year two (2) journal papers and two (2) conference papers have been published, ten (10) presentations made at conferences, and three (3) invited presentations

  1. Low-Reynolds number compressible flow around a triangular airfoil

    Science.gov (United States)

    Munday, Phillip; Taira, Kunihiko; Suwa, Tetsuya; Numata, Daiju; Asai, Keisuke

    2013-11-01

    We report on the combined numerical and experimental effort to analyze the nonlinear aerodynamics of a triangular airfoil in low-Reynolds number compressible flow that is representative of wings on future Martian air vehicles. The flow field around this airfoil is examined for a wide range of angles of attack and Mach numbers with three-dimensional direct numerical simulations at Re = 3000. Companion experiments are conducted in a unique Martian wind tunnel that is placed in a vacuum chamber to simulate the Martian atmosphere. Computational findings are compared with pressure-sensitive paint and direct force measurements and are found to be in agreement. The separated flow from the leading edge is found to form a large leading-edge vortex that sits directly above the apex of the airfoil and provides enhanced lift at post-stall angles of attack. For higher subsonic flows, the vortical structures elongate in the streamwise direction, resulting in reduced lift enhancement. We also observe that the onset of spanwise instability for higher angles of attack is delayed at lower Mach numbers. (Current affiliation: Mitsubishi Heavy Industries, Ltd., Nagasaki.)

  2. Work-Related Musculoskeletal Symptoms and Job Factors Among Large-Herd Dairy Milkers.

    Science.gov (United States)

    Douphrate, David I; Nonnenmann, Matthew W; Hagevoort, Robert; Gimeno Ruiz de Porras, David

    2016-01-01

    Dairy production in the United States is moving towards large-herd milking operations, resulting in an increase in task specialization and work demands. The objective of this project was to provide preliminary evidence of the association of a number of specific job conditions that commonly characterize large-herd parlor milking operations with work-related musculoskeletal symptoms (MSS). A modified version of the Standardized Nordic Questionnaire was administered to assess MSS prevalence among 450 US large-herd parlor workers. Worker demographics and MSS prevalences were generated. Prevalence ratios were also generated to determine associations of a number of specific job conditions that commonly characterize large-herd parlor milking operations with work-related MSS. Work-related MSS are prevalent among large-herd parlor workers, since nearly 80% report 12-month prevalences of one or more symptoms, which are primarily located in the upper extremities, specifically shoulders and wrist/hand. Specific large-herd milking parlor job conditions are associated with MSS in multiple body regions, including performing the same task repeatedly, insufficient rest breaks, working when injured, static postures, adverse environmental conditions, and reaching overhead. These findings support the need for administrative and engineering solutions aimed at reducing exposure to job risk factors for work-related MSS among large-herd parlor workers.

  3. The Efficiency and Productivity Analysis of Large Logistics Providers Services in Korea

    Directory of Open Access Journals (Sweden)

    Hong Gyun Park

    2015-12-01

    Full Text Available In the fierce competition in the global logistics markets, Korean logistics providers were deemed more vulnerable than global logistics providers in terms of quality and price competitiveness. To strengthen their competitiveness, logistics providers in Korea have focused on delivering integrated logistics services. In this regard, the Korean government enacted the “Integrated Logistics Industry Certification Act” in 2006 to assist integrated logistics providers in offering logistics services based on their specialization and differentiation. It has been several years since the system was implemented, and an evaluation of its implementation was necessary. Hence, in our study, we examine the efficiency and productivity of fourteen certified Korean logistics providers employing the DEA (Data Envelopment Analysis) method with five-year panel data since the inception of the Act. Through our static and dynamic analyses, we found that Pantos Logistics and HYUNDAI Glovis run their businesses at the highest level of efficiency and that Hanjin Transportation was the most stable company in its logistics operation.
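
    For readers unfamiliar with DEA, the input-oriented CCR model used in studies of this kind reduces to one linear program per decision-making unit (DMU). The sketch below solves the multiplier form with SciPy on invented data; it is a textbook illustration, not the exact model specification or dataset of the study.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X: np.ndarray, Y: np.ndarray, o: int) -> float:
    """Input-oriented CCR efficiency of DMU `o` (multiplier form), solved as an LP.

    X: inputs, shape (m, n);  Y: outputs, shape (s, n);  n = number of DMUs.
    maximize u'y_o  subject to  v'x_o = 1,  u'y_j - v'x_j <= 0 for all j,  u, v >= 0.
    """
    m, n = X.shape
    s, _ = Y.shape
    c = np.concatenate([-Y[:, o], np.zeros(m)])                   # minimize -(u'y_o)
    A_ub = np.hstack([Y.T, -X.T])                                 # u'y_j - v'x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[:, o]]).reshape(1, -1)  # v'x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun

# Hypothetical data: 4 providers, 2 inputs (staff, fleet), 1 output (ton-km handled).
X = np.array([[20., 35., 50., 40.], [10., 20., 30., 15.]])
Y = np.array([[100., 160., 210., 190.]])
for j in range(X.shape[1]):
    print(f"DMU {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
```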

  4. Identifying Copy Number Variants under Selection in Geographically Structured Populations Based on -statistics

    Directory of Open Access Journals (Sweden)

    Hae-Hiang Song

    2012-06-01

    Full Text Available Large-scale copy number variants (CNVs) in the human genome provide the raw material for delineating population differences, as natural selection may have affected at least some of the CNVs thus far discovered. Although the examination of relatively large numbers of specific ethnic groups has recently started with regard to inter-ethnic differences in CNVs, identifying and understanding particular instances of natural selection have not yet been attempted. The traditional FST measure, obtained from differences in allele frequencies between populations, has been used to identify CNV loci subject to geographically varying selection. Here, we review advances in, and the application of, multinomial-Dirichlet likelihood methods of inference for identifying genome regions that have been subject to natural selection using FST estimates. The content presented is not new; however, this review clarifies how the methods can be applied to CNV data, which remain largely unexplored. A hierarchical Bayesian method, implemented via Markov chain Monte Carlo, estimates locus-specific FST and can identify outlying CNV loci with large values of FST. By applying this Bayesian method to publicly available CNV data, we identified CNV loci that show signals of natural selection, which may elucidate the genetic basis of human disease and diversity.
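
    The review describes a hierarchical Bayesian multinomial-Dirichlet model fitted by MCMC; as a back-of-envelope counterpart, the classical moment-based FST for a biallelic CNV allele can be computed directly from per-population frequencies. The frequencies and sample sizes below are hypothetical.

```python
# Simple Wright-style F_ST = (H_T - H_S) / H_T for a biallelic CNV allele,
# computed from per-population allele frequencies (hypothetical values).
# This is only the moment estimator; the reviewed method infers locus-specific
# F_ST in a hierarchical Bayesian (multinomial-Dirichlet) model via MCMC.
import numpy as np

def fst_biallelic(freqs, sizes=None) -> float:
    freqs = np.asarray(freqs, dtype=float)
    sizes = np.ones_like(freqs) if sizes is None else np.asarray(sizes, dtype=float)
    w = sizes / sizes.sum()
    p_bar = np.sum(w * freqs)                  # pooled allele frequency
    h_s = np.sum(w * 2 * freqs * (1 - freqs))  # mean within-population heterozygosity
    h_t = 2 * p_bar * (1 - p_bar)              # total heterozygosity
    return 0.0 if h_t == 0 else (h_t - h_s) / h_t

# Deletion-allele frequencies of one CNV locus in three populations (hypothetical):
print(round(fst_biallelic([0.10, 0.35, 0.60], sizes=[120, 90, 100]), 3))
```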

  5. The business end of health information technology. Can a fully integrated electronic health record increase provider productivity in a large community practice?

    Science.gov (United States)

    De Leon, Samantha; Connelly-Flores, Alison; Mostashari, Farzad; Shih, Sarah C

    2010-01-01

    Electronic health records (EHRs) are expected to transform and improve the way medicine is practiced. However, providers perceive many barriers toward implementing new health information technology. Specifically, they are most concerned about the potentially negative impact on their practice finances and productivity. This study compares the productivity of 75 providers at a large urban primary care practice from January 2005 to February 2009, before and after implementing an EHR system, using longitudinal mixed model analyses. While decreases in productivity were observed at the time the EHR system was implemented, most providers quickly recovered, showing increases in productivity per month shortly after EHR implementation. Overall, providers had significant productivity increases of 1.7% per month per provider from pre- to post-EHR adoption. The majority of the productivity gains occurred after the practice instituted a pay-for-performance program, enabled by the data capture of the EHRs. Coupled with pay-for-performance, EHRs can spur rapid gains in provider productivity.
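
    The record does not give the authors' exact model specification; a minimal longitudinal mixed model with a provider-level random intercept and a post-implementation change in trend might look like the sketch below (statsmodels). The file name and column names are invented for illustration.

```python
# Minimal longitudinal mixed-model sketch (assumed column names, not the study's data):
# monthly productivity per provider, random intercept per provider, and a change
# in slope after EHR go-live.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("provider_productivity.csv")  # hypothetical file with columns:
# provider_id, month (centered at go-live), post_ehr (0/1), visits_per_month

model = smf.mixedlm(
    "visits_per_month ~ month + post_ehr + month:post_ehr",
    data=df,
    groups=df["provider_id"],       # random intercept for each provider
)
result = model.fit()
print(result.summary())             # month:post_ehr captures the post-EHR trend change
```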

  6. Meta-ethnography 25 years on: challenges and insights for synthesising a large number of qualitative studies.

    Science.gov (United States)

    Toye, Francine; Seers, Kate; Allcock, Nick; Briggs, Michelle; Carr, Eloise; Barker, Karen

    2014-06-21

    Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients' experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team's reflexive statements to illustrate the development of our methods.

  7. GASPRNG: GPU accelerated scalable parallel random number generator library

    Science.gov (United States)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Computer: workstation with NVIDIA GPU (tested on Fermi GTX480, Tesla C1060, Tesla M2070). Operating system: Linux with CUDA version 4.0 or later; should also run on MacOS, Windows, or UNIX. Has the code been vectorized or parallelized?: Yes; parallelized using MPI directives. RAM: 512 MB to 732 MB of main memory on the host CPU, depending on the data type of the random numbers, plus 512 MB of GPU global memory. Classification: 4.13, 6.5. Nature of problem: Many computational science applications consume large numbers of random numbers. For example, Monte Carlo simulations can consume limitless random numbers as long as computing resources are available. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generation of independent streams of random numbers using graphics processing units (GPUs). Solution method: Multiple copies of random number generators on the GPU allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generator library that allows a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs. Running time: The tests provided take a few minutes to run.
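
    GASPRNG's own API is not reproduced in this record, so the sketch below uses NumPy's seed-sequence spawning merely to illustrate the underlying idea of handing each parallel worker its own statistically independent random-number stream; GASPRNG provides this capability on GPUs at much larger scale.

```python
# Sketch of independent parallel random-number streams (NumPy analogy, not GASPRNG's API).
# Each worker/GPU block would own one child stream so Monte Carlo draws never overlap.
import numpy as np

root = np.random.SeedSequence(20130401)      # one master seed for the whole job
children = root.spawn(8)                     # e.g., one stream per MPI rank or GPU block
streams = [np.random.default_rng(child) for child in children]

# Each stream now produces an independent sequence:
for rank, rng in enumerate(streams):
    print(f"rank {rank}: {rng.random(3)}")
```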

  8. Analyzing the Large Number of Variables in Biomedical and Satellite Imagery

    CERN Document Server

    Good, Phillip I

    2011-01-01

    This book grew out of an online interactive course offered through statcourse.com, and it soon became apparent to the author that the course was too limited in terms of time and length in light of the broad backgrounds of the enrolled students. The statisticians who took the course needed to be brought up to speed both on the biological context and on the specialized statistical methods needed to handle large arrays. Biologists and physicians, even though fully knowledgeable concerning the procedures used to generate microarrays, EEGs, or MRIs, needed a full introduction to the resampling met

  9. Film behaviour of vertical gas-liquid flow in a large diameter pipe

    OpenAIRE

    Zangana, Mohammed Haseeb Sedeeq

    2011-01-01

    Gas-liquid flow commonly occurs in oil and gas production and processing systems. Large-diameter vertical pipes can reduce pressure drops and so minimize operating costs. However, there is a need for research on two-phase flow in large-diameter pipes to provide confidence to designers of equipment such as deep-water risers. In this study a number of experimental campaigns were carried out to measure pressure drop, liquid film thickness and wall shear in a 127 mm vertical pipe. Total pressur...

  10. Detection of large numbers of novel sequences in the metatranscriptomes of complex marine microbial communities.

    Science.gov (United States)

    Gilbert, Jack A; Field, Dawn; Huang, Ying; Edwards, Rob; Li, Weizhong; Gilna, Paul; Joint, Ian

    2008-08-22

    Sequencing the expressed genetic information of an ecosystem (metatranscriptome) can provide information about the response of organisms to varying environmental conditions. Until recently, metatranscriptomics has been limited to microarray technology and random cloning methodologies. The application of high-throughput sequencing technology is now enabling access to both known and previously unknown transcripts in natural communities. We present a study of a complex marine metatranscriptome obtained from random whole-community mRNA using the GS-FLX Pyrosequencing technology. Eight samples, four DNA and four mRNA, were processed from two time points in a controlled coastal ocean mesocosm study (Bergen, Norway) involving an induced phytoplankton bloom producing a total of 323,161,989 base pairs. Our study confirms the finding of the first published metatranscriptomic studies of marine and soil environments that metatranscriptomics targets highly expressed sequences which are frequently novel. Our alternative methodology increases the range of experimental options available for conducting such studies and is characterized by an exceptional enrichment of mRNA (99.92%) versus ribosomal RNA. Analysis of corresponding metagenomes confirms much higher levels of assembly in the metatranscriptomic samples and a far higher yield of large gene families with >100 members, approximately 91% of which were novel. This study provides further evidence that metatranscriptomic studies of natural microbial communities are not only feasible, but when paired with metagenomic data sets, offer an unprecedented opportunity to explore both structure and function of microbial communities--if we can overcome the challenges of elucidating the functions of so many never-seen-before gene families.

  11. Law of large numbers for the SIR model with random vertex weights on Erdős-Rényi graph

    Science.gov (United States)

    Xue, Xiaofeng

    2017-11-01

    In this paper we are concerned with the SIR model with random vertex weights on the Erdős-Rényi graph G(n, p). The Erdős-Rényi graph G(n, p) is generated from the complete graph Cn with n vertices by independently deleting each edge with probability (1 - p). We assign i.i.d. copies of a positive random variable ρ to each vertex as the vertex weights. For the SIR model, each vertex is in one of the three states 'susceptible', 'infective' and 'removed'. An infective vertex infects a given susceptible neighbor at a rate proportional to the product of the weights of these two vertices. An infective vertex becomes removed at a constant rate. A removed vertex will never be infected again. We assume that at t = 0 there is no removed vertex and the number of infective vertices follows a Bernoulli distribution B(n, θ). Our main result is a law of large numbers for the model. We give two deterministic functions HS(ψt), HV(ψt) for t ≥ 0 and show that for any t ≥ 0, HS(ψt) is the limit proportion of susceptible vertices and HV(ψt) is the limit of the mean capability of an infective vertex to infect a given susceptible neighbor at moment t as n grows to infinity.
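
    A quick way to build intuition for the limit result is to simulate the described dynamics directly. The sketch below is a crude discrete-time (Euler) approximation of the weighted SIR process on G(n, p); the rates, weight distribution and time step are illustrative choices, not the paper's, and an exact continuous-time (Gillespie) scheme would be preferable for serious use.

```python
# Euler-style approximation of the weighted SIR dynamics on G(n, p) described above.
# Rates, weights and the time step dt are illustrative choices, not the paper's.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n, p, theta = 1000, 0.01, 0.01           # graph size, edge prob., initial infection prob.
lam, mu, dt, T = 0.5, 1.0, 0.02, 8.0     # infection scale, removal rate, step, horizon

G = nx.erdos_renyi_graph(n, p, seed=1)
rho = rng.exponential(1.0, size=n)       # i.i.d. positive vertex weights
state = np.where(rng.random(n) < theta, 1, 0)   # 0 = S, 1 = I, 2 = R

t = 0.0
while t < T and (state == 1).any():
    pressure = np.zeros(n)
    for u in np.flatnonzero(state == 1):         # weighted infection pressure on S-neighbors
        for v in G.neighbors(u):
            if state[v] == 0:
                pressure[v] += lam * rho[u] * rho[v]
    new_inf = (state == 0) & (rng.random(n) < 1.0 - np.exp(-pressure * dt))
    removed = (state == 1) & (rng.random(n) < 1.0 - np.exp(-mu * dt))
    state[new_inf], state[removed] = 1, 2
    t += dt

print("final susceptible fraction:", np.mean(state == 0))
```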

  12. Management systems for service providers

    International Nuclear Information System (INIS)

    Bolokonya, Herbert Chiwalo

    2015-02-01

    In the field of radiation safety and protection there are a number of institutions involved in achieving different goals and strategies. These goals and strategies are pursued using a number of tools and systems, one of which is a management system. This study aimed at reviewing the management system concept for Technical Service Providers in the field of radiation safety and protection. The main focus was on personal monitoring services provided by personal dosimetry laboratories. A number of key issues were found to be prominent in making a management system efficient: laboratory accreditation and approval; customer-driven operating criteria; and control of records and good reporting.

  13. Spatial compression algorithm for the analysis of very large multivariate images

    Science.gov (United States)

    Keenan, Michael R [Albuquerque, NM

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
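
    The patented algorithm's details are not given in this record; the sketch below only illustrates the generic idea behind wavelet-based spatial compression (transform, keep the largest coefficients, reconstruct) on a single random image channel, with the wavelet and keep-fraction chosen arbitrarily.

```python
# Minimal sketch of wavelet-based spatial compression (not the patented algorithm):
# transform, keep only the largest coefficients, reconstruct.
import numpy as np
import pywt

image = np.random.rand(256, 256)            # stand-in for one channel of a multivariate image
coeffs = pywt.wavedec2(image, wavelet="haar", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)  # flatten coefficients into one array

keep = 0.05                                 # retain the largest 5% of coefficients (arbitrary)
threshold = np.quantile(np.abs(arr), 1.0 - keep)
arr_compressed = np.where(np.abs(arr) >= threshold, arr, 0.0)

coeffs_compressed = pywt.array_to_coeffs(arr_compressed, slices, output_format="wavedec2")
reconstructed = pywt.waverec2(coeffs_compressed, wavelet="haar")
print("relative error:", np.linalg.norm(reconstructed - image) / np.linalg.norm(image))
```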

  14. Comparative efficacy of tulathromycin versus a combination of florfenicol-oxytetracycline in the treatment of undifferentiated respiratory disease in large numbers of sheep

    Directory of Open Access Journals (Sweden)

    Mohsen Champour

    2015-09-01

    Full Text Available The objective of this study was to compare the efficacy of tulathromycin (TUL) with a combination of florfenicol (FFC) and long-acting oxytetracycline (LAOTC) in the treatment of naturally occurring undifferentiated respiratory diseases in large numbers of sheep. In this study, seven natural outbreaks of sheep pneumonia in Garmsar, Iran were considered. From these outbreaks, 400 sheep exhibiting the signs of respiratory diseases were selected, and the sheep were randomly divided into two equal groups. The first group was treated with a single injection of TUL (dosed at 2.5 mg/kg body weight), and the second group was treated with concurrent injections of FFC (dosed at 40 mg/kg bwt) and LAOTC (dosed at 20 mg/kg bwt). In the first group, 186 (93%) sheep were found to be cured 5 days after the injection, and 14 (7%) sheep needed further treatment, of which 6 (3%) were cured, and 8 (4%) died. In the second group, 172 (86%) sheep were cured after the injections, but 28 (14%) sheep needed further treatment, of which 10 (5%) were cured, and 18 (9%) died. This study revealed that TUL was more efficacious as compared to the combined treatment using FFC and LAOTC. As the first report, this field trial describes the successful treatment of undifferentiated respiratory diseases in large numbers of sheep. Thus, TUL can be used for the treatment of undifferentiated respiratory diseases in sheep. [J Adv Vet Anim Res 2015; 2(3): 279-284]

  15. Can live tree size-density relationships provide a mechanism for predicting down and dead tree resources?

    Science.gov (United States)

    Christopher Woodall; James Westfall

    2009-01-01

    Live tree size-density relationships in forests have long provided a framework for understanding stand dynamics. There has been little examination of the relationship between the size-density attributes of live and standing/down dead trees (e.g., number and mean tree size per unit area); such information could help in large-scale efforts to estimate dead wood resources...

  16. Individual differences influence two-digit number processing, but not their analog magnitude processing: a large-scale online study.

    Science.gov (United States)

    Huber, Stefan; Nuerk, Hans-Christoph; Reips, Ulf-Dietrich; Soltanlou, Mojtaba

    2017-12-23

    Symbolic magnitude comparison is one of the most well-studied cognitive processes in research on numerical cognition. However, while the cognitive mechanisms of symbolic magnitude processing have been intensively studied, previous studies have paid less attention to individual differences influencing symbolic magnitude comparison. Employing a two-digit number comparison task in an online setting, we replicated previous effects, including the distance effect, the unit-decade compatibility effect, and the effect of cognitive control on the adaptation to filler items, in a large-scale study in 452 adults. Additionally, we observed that the most influential individual differences were participants' first language, time spent playing computer games and gender, followed by reported alcohol consumption, age and mathematical ability. Participants who used a first language with a left-to-right reading/writing direction were faster than those who read and wrote in the right-to-left direction. Reported playing time for computer games was correlated with faster reaction times. Female participants showed slower reaction times and a larger unit-decade compatibility effect than male participants. Participants who reported never consuming alcohol showed overall slower response times than others. Older participants were slower, but more accurate. Finally, higher grades in mathematics were associated with faster reaction times. We conclude that typical experiments on numerical cognition that employ a keyboard as an input device can also be run in an online setting. Moreover, while individual differences have no influence on domain-specific magnitude processing (apart from age, which increases the decade distance effect), they generally influence performance on a two-digit number comparison task.

  17. Implementation of provider-based electronic medical records and improvement of the quality of data in a large HIV program in Sub-Saharan Africa.

    Directory of Open Access Journals (Sweden)

    Barbara Castelnuovo

    Full Text Available INTRODUCTION: Starting in June 2010, the Infectious Diseases Institute (IDI) clinic (a large urban HIV out-patient facility) switched to provider-based Electronic Medical Records (EMR) from a paper-based system in which data-entry clerks entered information into the database. Standardized clinic forms were eliminated, but providers still record free-text clinical notes in patients' physical files. The objective of this study was to compare the rate of errors in the database before and after the introduction of the provider-based EMR. METHODS AND FINDINGS: Data in the database before and after the provider-based EMR were compared with the information in the patients' files and classified as correct, incorrect, or missing. We calculated the proportion of incorrect, missing and total errors for key variables (toxicities, opportunistic infections, reasons for treatment change and interruption). Proportions of total errors were compared using the chi-square test. A survey of the users of the EMR was also conducted. We compared data from 2,382 visits (from 100 individuals) of a retrospective validation conducted in 2007 with 34,957 visits (from 10,920 individuals) of a prospective validation conducted in April-August 2011. The total proportion of errors decreased from 66.5% in 2007 to 2.1% in 2011 for opportunistic infections, from 51.9% to 3.5% for ART toxicity, from 82.8% to 12.5% for reasons for ART interruption and from 94.1% to 0.9% for reasons for ART switch (all P<0.0001). The survey showed that 83% of the providers agreed that provider-based EMR led to improvement of clinical care, 80% reported improved access to patients' records, and 80% appreciated the automation of providers' tasks. CONCLUSIONS: The introduction of provider-based EMR improved the quality of data collected, with a significant reduction in missing and incorrect information. The majority of providers and clients expressed satisfaction with the new system. We recommend the use of provider-based EMR in large HIV programs in Sub-Saharan Africa.
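
    The record states only that proportions of total errors were compared with a chi-square test. A minimal version of that comparison for one variable, with counts reconstructed approximately from the percentages and visit numbers quoted above, could look like this:

```python
# Chi-square comparison of error proportions before vs. after provider-based EMR,
# reconstructed approximately from the figures quoted above (opportunistic infections:
# 66.5% of 2,382 visits in 2007 vs. 2.1% of 34,957 visits in 2011).
from scipy.stats import chi2_contingency

visits_pre, visits_post = 2382, 34957
err_pre = round(0.665 * visits_pre)
err_post = round(0.021 * visits_post)

table = [[err_pre, visits_pre - err_pre],      # [errors, correct] before EMR
         [err_post, visits_post - err_post]]   # [errors, correct] after EMR
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p_value:.2e}")
```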

  18. On the instabilities of supersonic mixing layers - A high-Mach-number asymptotic theory

    Science.gov (United States)

    Balsa, Thomas F.; Goldstein, M. E.

    1990-01-01

    The stability of a family of tanh mixing layers is studied at large Mach numbers using perturbation methods. It is found that the eigenfunction develops a multilayered structure, and the eigenvalue is obtained by solving a simplified version of the Rayleigh equation (with homogeneous boundary conditions) in one of these layers which lies in either of the external streams. This analysis leads to a simple hypersonic similarity law which explains how spatial and temporal phase speeds and growth rates scale with Mach number and temperature ratio. Comparisons are made with numerical results, and it is found that this similarity law provides a good qualitative guide for the behavior of the instability at high Mach numbers. In addition to this asymptotic theory, some fully numerical results are also presented (with no limitation on the Mach number) in order to explain the origin of the hypersonic modes (through mode splitting) and to discuss the role of oblique modes over a very wide range of Mach number and temperature ratio.

  19. On powerful numbers

    Directory of Open Access Journals (Sweden)

    R. A. Mollin

    1986-01-01

    Full Text Available A powerful number is a positive integer n satisfying the property that p² divides n whenever the prime p divides n; i.e., in the canonical prime decomposition of n, no prime appears with exponent 1. In [1], S.W. Golomb introduced and studied such numbers. In particular, he asked whether (25, 27) is the only pair of consecutive odd powerful numbers. This question was settled in [2] by W.A. Sentance, who gave necessary and sufficient conditions for the existence of such pairs. The first result of this paper is to provide a generalization of Sentance's result by giving necessary and sufficient conditions for the existence of pairs of powerful numbers spaced evenly apart. This result leads us naturally to consider integers which are representable as a proper difference of two powerful numbers, i.e. n = p1 − p2 where p1 and p2 are powerful numbers with g.c.d.(p1, p2) = 1. Golomb (op. cit.) conjectured that 6 is not a proper difference of two powerful numbers, and that there are infinitely many numbers which cannot be represented as a proper difference of two powerful numbers. The antithesis of this conjecture was proved by W.L. McDaniel [3], who verified that every non-zero integer is in fact a proper difference of two powerful numbers in infinitely many ways. McDaniel's proof is essentially an existence proof. The second result of this paper is a simpler proof of McDaniel's result as well as an effective algorithm (in the proof) for explicitly determining infinitely many such representations. However, in both our proof and McDaniel's proof one of the powerful numbers is almost always a perfect square (namely, one is always a perfect square when n ≢ 2 (mod 4)). We provide in §2 a proof that all even integers are representable in infinitely many ways as a proper nonsquare difference; i.e., a proper difference of two powerful numbers neither of which is a perfect square. This, in conjunction with the odd case in [4], shows that every integer is representable in
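
    The definitions above are easy to experiment with numerically. The brute-force sketch below (not the paper's constructive algorithm) tests whether an integer is powerful from its prime factorization and lists small coprime pairs of powerful numbers a fixed distance apart.

```python
# Brute-force exploration of powerful numbers (every prime in n's factorization
# appears with exponent >= 2). This is not the paper's constructive algorithm.
from math import gcd
from sympy import factorint

def is_powerful(n: int) -> bool:
    return n >= 1 and all(e >= 2 for e in factorint(n).values())

limit = 20000
powerful = [n for n in range(1, limit) if is_powerful(n)]
pset = set(powerful)

# Coprime pairs of powerful numbers that are k apart ("proper" spacing), k = 1..6:
for k in range(1, 7):
    pairs = [(a, a + k) for a in powerful if a + k in pset and gcd(a, a + k) == 1]
    print(k, pairs[:3])   # e.g., k=1 gives (8, 9); k=2 gives (25, 27)
```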

  20. Earthquake number forecasts testing

    Science.gov (United States)

    Kagan, Yan Y.

    2017-10-01

    We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are especially required to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for the catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be shown to be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study upper statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values for the skewness and the kurtosis increase for the smaller magnitude threshold and increase with even greater intensity for small temporal subdivision of catalogues. The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness
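
    The catalogues themselves are not reproduced here; the sketch below illustrates the moment comparison the abstract describes on synthetic overdispersed counts, fitting the negative-binomial distribution by the method of moments and comparing skewness and kurtosis with the Poisson prediction.

```python
# Moment comparison of Poisson vs. negative-binomial models for per-interval
# earthquake counts. The counts here are synthetic placeholders, not the
# GCMT/PDE catalogues used in the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
counts = stats.nbinom.rvs(n=3.0, p=0.2, size=500, random_state=rng)  # synthetic data

mean, var = counts.mean(), counts.var(ddof=1)
skew, kurt = stats.skew(counts), stats.kurtosis(counts)   # empirical (excess kurtosis)

# Method-of-moments NBD fit (scipy parameterization: n = r, p):
p_hat = mean / var
r_hat = mean**2 / (var - mean)

pois_skew, pois_kurt = stats.poisson.stats(mu=mean, moments="sk")
nbd_skew, nbd_kurt = stats.nbinom.stats(n=r_hat, p=p_hat, moments="sk")

print(f"empirical:  skew={skew:.2f}  kurtosis={kurt:.2f}")
print(f"Poisson:    skew={float(pois_skew):.2f}  kurtosis={float(pois_kurt):.2f}")
print(f"neg. binom: skew={float(nbd_skew):.2f}  kurtosis={float(nbd_kurt):.2f}")
```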

  1. Un/Paid Labor: Medicaid Home and Community Based Services Waivers That Pay Family as Personal Care Providers

    Science.gov (United States)

    Friedman, Carli; Rizzolo, Mary C.

    2016-01-01

    The United States long-term services and supports system is built on largely unpaid (informal) labor. There are a number of benefits to allowing family caregivers to serve as paid personal care providers including better health and satisfaction outcomes, expanded workforces, and cost effectiveness. The purpose of this study was to examine how…

  2. THE DEVELOPMENT OF SERVICES IN THE FIELD OF SOCIAL WORK WITH LARGE FAMILIES

    Directory of Open Access Journals (Sweden)

    Юлія Ібрагім

    2015-09-01

    Full Text Available The development of social services in the field of social work with large families is analyzed and their basic functions are defined in the article. The necessity of implementing a system of guaranteed social and educational support to families, tailored to the type of large family and provided by governmental and non-governmental organizations, including public organizations, is considered and analyzed. Statistical data on the number and proportion of large families in Ukraine in general, and in Kharkiv region in particular, are provided. The principle of subsidiarity, which underpins the full development of the state, is discussed: it rests on shifting responsibility for family welfare from the state to the family, with primary social and pedagogical support provided by public organizations when a family is unable to solve certain problems on its own. The article also presents the practical experience of providing social and educational support by the public organization Association of Large Families “AMMA” in Kharkiv.

  3. Timoides agassizii Bigelow, 1904, little-known hydromedusa (Cnidaria), appears briefly in large numbers off Oman, March 2011, with additional notes about species of the genus Timoides.

    Science.gov (United States)

    Purushothaman, Jasmine; Kharusi, Lubna Al; Mills, Claudia E; Ghielani, Hamed; Marzouki, Mohammad Al

    2013-12-11

    A bloom of the hydromedusan jellyfish, Timoides agassizii, occurred in February 2011 off the coast of Sohar, Al Batinah, Sultanate of Oman, in the Gulf of Oman. This species was first observed in 1902 in great numbers off Haddummati Atoll in the Maldive Islands in the Indian Ocean and has rarely been seen since. The species appeared briefly in large numbers off Oman in 2011, and subsequent observation of our 2009 samples of zooplankton from Sohar revealed that it was also present in low numbers (two collected) in one sample in 2009; these are the first records in the Indian Ocean north of the Maldives. Medusae collected off Oman were almost identical to those recorded previously from the Maldive Islands, Papua New Guinea, the Marshall Islands, Guam, the South China Sea, and Okinawa. T. agassizii is a species that likely lives for several months. It was present in our plankton samples together with large numbers of the oceanic siphonophore Physalia physalis only during a single month's samples, suggesting that the temporary bloom off Oman was likely due to the arrival of mature, open ocean medusae into nearshore waters. We see no evidence that T. agassizii has established a new population along Oman, since, if so, it would likely have been present in more than one sample period. We are unable to deduce further details of the life cycle of this species from blooms of many mature individuals nearshore, about a century apart. Examination of a single damaged T. agassizii medusa from Guam calls into question the existence of its congener, T. latistyla, known only from a single specimen.

  4. Large-scale DCMs for resting-state fMRI

    Directory of Open Access Journals (Sweden)

    Adeel Razi

    2017-01-01

    Full Text Available This paper considers the identification of large directed graphs for resting-state brain networks based on biophysical models of distributed neuronal activity, that is, effective connectivity. This identification can be contrasted with functional connectivity methods based on symmetric correlations that are ubiquitous in resting-state functional MRI (fMRI). We use spectral dynamic causal modeling (DCM) to invert large graphs comprising dozens of nodes or regions. The ensuing graphs are directed and weighted, hence providing a neurobiologically plausible characterization of connectivity in terms of excitatory and inhibitory coupling. Furthermore, we show that the use of Bayesian model reduction to discover the most likely sparse graph (or model) from a parent (e.g., fully connected) graph eschews the arbitrary thresholding often applied to large symmetric (functional connectivity) graphs. Using empirical fMRI data, we show that spectral DCM furnishes connectivity estimates on large graphs that correlate strongly with the estimates provided by stochastic DCM. Furthermore, we increase the efficiency of model inversion using functional connectivity modes to place prior constraints on effective connectivity. In other words, we use a small number of modes to finesse the potentially redundant parameterization of large DCMs. We show that spectral DCM, with functional connectivity priors, is ideally suited for directed graph theoretic analyses of resting-state fMRI. We envision that directed graphs will prove useful in understanding the psychopathology and pathophysiology of neurodegenerative and neurodevelopmental disorders. We will demonstrate the utility of large directed graphs in clinical populations in subsequent reports, using the procedures described in this paper.

  5. Efficient high speed communications over electrical powerlines for a large number of users

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J.; Tripathi, K.; Latchman, H.A. [Florida Univ., Gainesville, FL (United States). Dept. of Electrical and Computer Engineering

    2007-07-01

    Affordable broadband Internet access is currently available for residential use via cable modems and various forms of digital subscriber line (DSL). Powerline communication (PLC) systems were long not considered seriously for data communications because of their low speed and high development cost. However, thanks to technological advances, PLC is now spreading to local area networks and broadband-over-power-line systems. This paper presents a newly proposed modification of the standard HomePlug 1.0 MAC protocol that turns it into a constant-contention-window-based scheme. HomePlug 1.0 was developed based on orthogonal frequency division multiplexing (OFDM) and carrier sense multiple access with collision avoidance (CSMA/CA). It is currently the most commonly used power line communication technology, supporting a transmission rate of up to 14 Mbps on the power line. However, the throughput of the original scheme degrades markedly as the number of users increases. For that reason, a constant-contention-window-based medium access control protocol for HomePlug 1.0 was proposed under the assumption that the number of active stations is known. An analytical framework based on Markov chains was developed to model this modified protocol under saturation conditions. Modeling results accurately matched the actual performance of the system. This paper reveals that performance can be improved significantly if the protocol parameters are chosen as a function of the number of active stations. 15 refs., 1 tab., 6 figs.
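
    The paper's Markov-chain analysis is not reproduced in this record; a much simpler slotted approximation already shows why a constant contention window should be tied to the number of active stations: if each of n stations transmits in a slot with probability 1/W, the per-slot success probability is maximized when W is close to n. The toy model below is only that approximation, not the HomePlug 1.0 protocol itself.

```python
# Toy slotted model of a constant-contention-window MAC (not the HomePlug 1.0
# Markov-chain analysis): each of n active stations transmits in a slot with
# probability 1/W; a slot succeeds iff exactly one station transmits.
def success_probability(n: int, W: int) -> float:
    tau = 1.0 / W
    return n * tau * (1.0 - tau) ** (n - 1)

for n in (5, 10, 20, 50):
    best_W = max(range(2, 256), key=lambda W: success_probability(n, W))
    print(f"n={n:3d}  best W={best_W:3d}  "
          f"P(success)={success_probability(n, best_W):.3f}  "
          f"P(success) with fixed W=8: {success_probability(n, 8):.3f}")
```

    The output illustrates the abstract's point: the optimal window tracks the number of active stations (best_W comes out close to n), while a fixed window degrades sharply as the population grows.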

  6. Product-selective blot: a technique for measuring enzyme activities in large numbers of samples and in native electrophoresis gels

    International Nuclear Information System (INIS)

    Thompson, G.A.; Davies, H.M.; McDonald, N.

    1985-01-01

    A method termed product-selective blotting has been developed for screening large numbers of samples for enzyme activity. The technique is particularly well suited to detection of enzymes in native electrophoresis gels. The principle of the method was demonstrated by blotting samples from glutaminase or glutamate synthase reactions into an agarose gel embedded with ion-exchange resin under conditions favoring binding of product (glutamate) over substrates and other substances in the reaction mixture. After washes to remove these unbound substances, the product was measured using either fluorometric staining or radiometric techniques. Glutaminase activity in native electrophoresis gels was visualized by a related procedure in which substrates and products from reactions run in the electrophoresis gel were blotted directly into a resin-containing image gel. Considering the selective-binding materials available for use in the image gel, along with the possible detection systems, this method has potentially broad application

  7. Prandtl-number Effects in High-Rayleigh-number Spherical Convection

    Science.gov (United States)

    Orvedahl, Ryan J.; Calkins, Michael A.; Featherstone, Nicholas A.; Hindman, Bradley W.

    2018-03-01

    Convection is the predominant mechanism by which energy and angular momentum are transported in the outer portion of the Sun. The resulting overturning motions are also the primary energy source for the solar magnetic field. An accurate solar dynamo model therefore requires a complete description of the convective motions, but these motions remain poorly understood. Studying stellar convection numerically remains challenging; it occurs within a parameter regime that is extreme by computational standards. The fluid properties of the convection zone are characterized in part by the Prandtl number Pr = ν/κ, where ν is the kinematic viscosity and κ is the thermal diffusivity; in stars, Pr is extremely low, Pr ≈ 10⁻⁷. The influence of Pr on the convective motions at the heart of the dynamo is not well understood, since most numerical studies are limited to Pr ≈ 1. We systematically vary Pr and the degree of thermal forcing, characterized through a Rayleigh number, to explore their influence on the convective dynamics. For sufficiently large thermal driving, the simulations reach a so-called convective free-fall state where diffusion no longer plays an important role in the interior dynamics. Simulations with a lower Pr generate faster convective flows and broader ranges of scales for equivalent levels of thermal forcing. Characteristics of the spectral distribution of the velocity remain largely insensitive to changes in Pr. Importantly, we find that Pr plays a key role in determining when the free-fall regime is reached by controlling the thickness of the thermal boundary layer.
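
    For reference, the two control parameters mentioned above are written below in their standard Rayleigh-Bénard form; the paper's spherical-shell definitions may differ in detail.

```latex
\[
  \mathrm{Pr} = \frac{\nu}{\kappa},
  \qquad
  \mathrm{Ra} = \frac{g\,\alpha\,\Delta T\, d^{3}}{\nu\,\kappa}
\]
% \nu: kinematic viscosity, \kappa: thermal diffusivity, g: gravity,
% \alpha: thermal expansion coefficient, \Delta T: temperature contrast,
% d: layer (shell) depth. Stars: Pr ~ 1e-7; most simulations: Pr ~ 1.
```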

  8. Numerical analysis of jet impingement heat transfer at high jet Reynolds number and large temperature difference

    DEFF Research Database (Denmark)

    Jensen, Michael Vincent; Walther, Jens Honore

    2013-01-01

    was investigated at a jet Reynolds number of 1.66 × 10⁵ and a temperature difference between jet inlet and wall of 1600 K. The focus was on the convective heat transfer contribution, as thermal radiation was not included in the investigation. A considerable influence of the turbulence intensity at the jet inlet ... to about 100% were observed. Furthermore, the variation in stagnation point heat transfer was examined for jet Reynolds numbers in the range from 1.10 × 10⁵ to 6.64 × 10⁵. Based on the investigations, a correlation is suggested between the stagnation point Nusselt number, the jet Reynolds number ..., and the turbulence intensity at the jet inlet for impinging jet flows at high jet Reynolds numbers.

  9. Hydrodynamic interaction on large-Reynolds-number aligned bubbles: Drag effects

    International Nuclear Information System (INIS)

    Ramirez-Munoz, J.; Salinas-Rodriguez, E.; Soria, A.; Gama-Goicochea, A.

    2011-01-01

    Highlights: → The hydrodynamic interaction of a pair of aligned equal-sized bubbles is analyzed. → The leading bubble wake decreases the drag on the trailing bubble. → A new semi-analytical model for the trailing bubble's drag is presented. → The equilibrium distance between bubbles is predicted. - Abstract: The hydrodynamic interaction of two equal-sized spherical gas bubbles rising along a vertical line with a Reynolds number (Re) between 50 and 200 is analyzed. An approach to estimate the trailing bubble drag based on the search for a proper reference fluid velocity is proposed. Our main result is a new, simple semi-analytical model for the trailing bubble drag. Additionally, the equilibrium separation distance between bubbles is predicted. The proposed models agree quantitatively, down to small distances between bubbles, with reported data for 50 ≤ Re ≤ 200. The relative average error for the trailing bubble drag, Er, is found to be in the range 1.1 ≤ Er ≤ 1.7, i.e., it is of the same order as the analytical predictions in the literature.

  10. Hydrodynamic interaction on large-Reynolds-number aligned bubbles: Drag effects

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez-Munoz, J., E-mail: jrm@correo.azc.uam.mx [Departamento de Energia, Universidad Autonoma Metropolitana-Azcapotzalco, Av. San Pablo 180, Col. Reynosa Tamaulipas, 02200 Mexico D.F. (Mexico); Centro de Investigacion en Polimeros, Marcos Achar Lobaton No. 2, Tepexpan, 55885 Acolman, Edo. de Mexico (Mexico); Salinas-Rodriguez, E.; Soria, A. [Departamento de IPH, Universidad Autonoma Metropolitana-Iztapalapa, San Rafael Atlixco 186, Col. Vicentina, Iztapalapa, 09340 Mexico D.F. (Mexico); Gama-Goicochea, A. [Centro de Investigacion en Polimeros, Marcos Achar Lobaton No. 2, Tepexpan, 55885 Acolman, Edo. de Mexico (Mexico)

    2011-07-15

    Highlights: → The hydrodynamic interaction of a pair of aligned equal-sized bubbles is analyzed. → The leading bubble wake decreases the drag on the trailing bubble. → A new semi-analytical model for the trailing bubble's drag is presented. → The equilibrium distance between bubbles is predicted. - Abstract: The hydrodynamic interaction of two equal-sized spherical gas bubbles rising along a vertical line with a Reynolds number (Re) between 50 and 200 is analyzed. An approach to estimate the trailing bubble drag based on the search for a proper reference fluid velocity is proposed. Our main result is a new, simple semi-analytical model for the trailing bubble drag. Additionally, the equilibrium separation distance between bubbles is predicted. The proposed models agree quantitatively, down to small distances between bubbles, with reported data for 50 ≤ Re ≤ 200. The relative average error for the trailing bubble drag, Er, is found to be in the range 1.1 ≤ Er ≤ 1.7, i.e., it is of the same order as the analytical predictions in the literature.

  11. Essays on the theory of numbers

    CERN Document Server

    Dedekind, Richard

    1963-01-01

    Two classic essays by great German mathematician: one provides an arithmetic, rigorous foundation for the irrational numbers, the other is an attempt to give the logical basis for transfinite numbers and properties of the natural numbers.

  12. EUPAN enables pan-genome studies of a large number of eukaryotic genomes.

    Science.gov (United States)

    Hu, Zhiqiang; Sun, Chen; Lu, Kuang-Chen; Chu, Xixia; Zhao, Yue; Lu, Jinyuan; Shi, Jianxin; Wei, Chaochun

    2017-08-01

    Pan-genome analyses are routinely carried out for bacteria to interpret within-species gene presence/absence variations (PAVs). However, pan-genome analyses are rare for eukaryotes due to the large sizes and higher complexity of their genomes. Here we propose EUPAN, a eukaryotic pan-genome analysis toolkit enabling automatic large-scale eukaryotic pan-genome analyses and detection of gene PAVs at a relatively low sequencing depth. In previous studies, we demonstrated the effectiveness and high accuracy of EUPAN in the pan-genome analysis of 453 rice genomes, in which we also revealed widespread gene PAVs among individual rice genomes. Moreover, EUPAN can be applied directly to current re-sequencing projects primarily focusing on single nucleotide polymorphisms. EUPAN is implemented in Perl, R and C++. It runs under Linux and is best suited to a computer cluster with an LSF or SLURM job scheduling system. EUPAN, together with its standard operating procedure (SOP), is freely available for non-commercial use (CC BY-NC 4.0) at http://cgm.sjtu.edu.cn/eupan/index.html. Contact: ccwei@sjtu.edu.cn or jianxin.shi@sjtu.edu.cn. Supplementary data are available at Bioinformatics online.

  13. Transcendental numbers

    CERN Document Server

    Murty, M Ram

    2014-01-01

    This book provides an introduction to the topic of transcendental numbers for upper-level undergraduate and graduate students. The text is constructed to support a full course on the subject, including descriptions of both relevant theorems and their applications. While the first part of the book focuses on introducing key concepts, the second part presents more complex material, including applications of Baker’s theorem, Schanuel’s conjecture, and Schneider’s theorem. These later chapters may be of interest to researchers interested in examining the relationship between transcendence and L-functions. Readers of this text should possess basic knowledge of complex analysis and elementary algebraic number theory.

  14. From Calculus to Number Theory

    Indian Academy of Sciences (India)

    A. Raghuram

    2016-11-04

    Nov 4, 2016 ... diverges to infinity. This means given any number M, however large, we can add sufficiently many terms in the above series to make the sum larger than M. This was first proved by Nicole Oresme (1323-1382), a brilliant. French philosopher of his times.

  15. More Than a "Number": Perspectives of Prenatal Care Quality from Mothers of Color and Providers.

    Science.gov (United States)

    Coley, Sheryl L; Zapata, Jasmine Y; Schwei, Rebecca J; Mihalovic, Glen Ellen; Matabele, Maya N; Jacobs, Elizabeth A; Anderson, Cynthie K

    African American mothers and other mothers of historically underserved populations consistently have higher rates of adverse birth outcomes than White mothers. Increasing prenatal care use among these mothers may reduce these disparities. Most prenatal care research focuses on prenatal care adequacy rather than concepts of quality. Even less research examines the dual perspectives of African American mothers and prenatal care providers. In this qualitative study, we compared perceptions of prenatal care quality between African American and mixed race mothers and prenatal care providers. Prenatal care providers (n = 20) and mothers who recently gave birth (n = 19) completed semistructured interviews. Using a thematic analysis approach and Donabedian's conceptual model of health care quality, interviews were analyzed to identify key themes and summarize differences in perspectives between providers and mothers. Mothers and providers valued the tailoring of care based on individual needs and functional patient-provider relationships as key elements of prenatal care quality. Providers acknowledged the need for knowing the social context of patients, but mothers and providers differed in perspectives of "culturally sensitive" prenatal care. Although most mothers had positive prenatal care experiences, mothers also recalled multiple complications with providers' negative assumptions and disregard for mothers' options in care. Exploring strategies to strengthen patient-provider interactions and communication during prenatal care visits remains critical to address for facilitating continuity of care for mothers of color. These findings warrant further investigation of dual patient and provider perspectives of culturally sensitive prenatal care to address the service needs of African American and mixed race mothers. Copyright © 2017 Jacobs Institute of Women's Health. Published by Elsevier Inc. All rights reserved.

  16. Large scale Direct Numerical Simulation of premixed turbulent jet flames at high Reynolds number

    Science.gov (United States)

    Attili, Antonio; Luca, Stefano; Lo Schiavo, Ermanno; Bisetti, Fabrizio; Creta, Francesco

    2016-11-01

    A set of direct numerical simulations of turbulent premixed jet flames at different Reynolds and Karlovitz numbers is presented. The simulations feature finite-rate chemistry with 16 species and 73 reactions and up to 22 billion grid points. The jet consists of a methane/air mixture with equivalence ratio ϕ = 0.7 and temperature varying between 500 and 800 K. The temperature and species concentrations in the coflow correspond to the equilibrium state of the burnt mixture. All the simulations are performed at 4 atm. The flame length, normalized by the jet width, decreases significantly as the Reynolds number increases. This is consistent with an increase of the turbulent flame speed due to the increased integral scale of turbulence. This behavior is typical of flames in the thin-reaction-zone regime, which are affected by turbulent transport in the preheat layer. Fractal dimension and topology of the flame surface, statistics of temperature gradients, and flame structure are investigated, and the dependence of these quantities on the Reynolds number is assessed.

  17. LARGE-SCALE TOPOLOGICAL PROPERTIES OF MOLECULAR NETWORKS.

    Energy Technology Data Exchange (ETDEWEB)

    Maslov, S.; Sneppen, K.

    2003-11-17

    Bio-molecular networks lack top-down design. Instead, selective forces of biological evolution shape them from raw material provided by random events such as gene duplications and single gene mutations. As a result, individual connections in these networks are characterized by a large degree of randomness. One may wonder which connectivity patterns are indeed random, and which arose due to the network's growth, evolution, and/or its fundamental design principles and limitations. Here we introduce a general method allowing one to construct a random null-model version of a given network while preserving the desired set of its low-level topological features, such as, e.g., the number of neighbors of individual nodes, the average level of modularity, preferential connections between particular groups of nodes, etc. Such a null-model network can then be used to detect and quantify the non-random topological patterns present in large networks. In particular, we measured correlations between degrees of interacting nodes in protein interaction and regulatory networks in yeast. It was found that in both these networks, links between highly connected proteins are systematically suppressed. This effect decreases the likelihood of cross-talk between different functional modules of the cell, and increases the overall robustness of a network by localizing the effects of deleterious perturbations. It also teaches us about the overall computational architecture of such networks and points at the origin of large differences in the number of neighbors of individual nodes.
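
    The paper describes a general null-model construction; its most common special case, preserving every node's degree while randomizing who connects to whom, is available directly in NetworkX via repeated double-edge swaps, as sketched below on a synthetic scale-free graph.

```python
# Degree-preserving randomization of a network as a null model (a common special
# case of the approach described above), using repeated double-edge swaps.
import networkx as nx

G = nx.barabasi_albert_graph(500, 3, seed=7)   # stand-in for a protein interaction network
null = G.copy()
nx.double_edge_swap(null, nswap=10 * null.number_of_edges(),
                    max_tries=100 * null.number_of_edges(), seed=7)

# Degrees are preserved exactly; degree-degree correlations are not.
assert sorted(d for _, d in G.degree()) == sorted(d for _, d in null.degree())
print("assortativity original:", round(nx.degree_assortativity_coefficient(G), 3))
print("assortativity null:    ", round(nx.degree_assortativity_coefficient(null), 3))
```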

  18. Interaction between numbers and size during visual search

    NARCIS (Netherlands)

    Krause, F.; Bekkering, H.; Pratt, J.; Lindemann, O.

    2017-01-01

    The current study investigates an interaction between numbers and physical size (i.e. size congruity) in visual search. In three experiments, participants had to detect a physically large (or small) target item among physically small (or large) distractors in a search task comprising single-digit

  19. Number and size of nucleoli in the spermatocytes of chicken and Japanese quail.

    Science.gov (United States)

    Andraszek, Katarzyna; Gryzińska, Magdalena; Knaga, Sebastian; Wójcik, Ewa; Smalec, Elzbieta

    2012-01-01

    Nucleoli are the product of nucleolus organizing region (NOR) activity of specific chromosomes. Their basic function is to synthesize ribosomal RNA precursors and promote the maturation and assembly of preribosomal RNP molecules. Information on rRNA-coding gene activity can be provided by the analysis of the number and size of nucleoli in the prophase of the first meiotic division. The morphology and ultrastructure of a nucleolus depend, among other things, on the species and cell growth cycle as well as the physiological and pathological state of an organism. The purpose of this research was to determine the number and size of nucleoli in the spermatocytes of the domestic chicken and the Japanese quail. Diverse numbers and sizes of nucleoli were observed in the cells of the analysed birds. 1-4 nucleoli were identified in chicken cells (1.91 ± 0.63 on average) and 1-2 in quail cells (1.13 ± 0.33 on average). Of the total of 957 nucleoli observed in Gallus cells, 329 were classified as large and 628 as small. In Coturnix cells, 563 nucleoli were identified (66 large and 497 small ones). An analysis of the numbers and sizes of nucleoli can be performed at the cytogenetic level and serve as an alternative source of information on the activity of rRNA-encoding genes and nucleolus organizing regions.

  20. Efficient Similarity Search Using the Earth Mover's Distance for Large Multimedia Databases

    DEFF Research Database (Denmark)

    Assent, Ira; Wichterich, Marc; Meisen, Tobias

    2008-01-01

    Multimedia similarity search in large databases requires efficient query processing. The Earth mover's distance, introduced in computer vision, is successfully used as a similarity model in a number of small-scale applications. Its computational complexity hindered its adoption in large multimedia...... databases. We enable directly indexing the Earth mover's distance in structures such as the R-tree and the VA-file by providing the accurate 'MinDist' function to any bounding rectangle in the index. We exploit the computational structure of the new MinDist to derive a new lower bound for the EMD Min...
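
    The paper's index-level 'MinDist' bound is not reproduced in this record; for orientation, the sketch below computes the simplest special case of the Earth mover's distance, the one-dimensional Wasserstein-1 distance between two weighted histograms, using SciPy.

```python
# One-dimensional Earth mover's distance (Wasserstein-1) between two feature
# histograms -- the simplest special case of the similarity model discussed above.
# The index-level MinDist bound from the paper is not reproduced here.
import numpy as np
from scipy.stats import wasserstein_distance

bins = np.arange(8)                            # e.g., 8 color-histogram bins (hypothetical)
hist_a = np.array([0.30, 0.20, 0.15, 0.10, 0.10, 0.05, 0.05, 0.05])
hist_b = np.array([0.05, 0.05, 0.10, 0.10, 0.15, 0.20, 0.20, 0.15])

emd = wasserstein_distance(bins, bins, u_weights=hist_a, v_weights=hist_b)
print(f"EMD between the two histograms: {emd:.3f}")
```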

  1. Assessment of small versus large hydro-power developments - a Norwegian case study

    Energy Technology Data Exchange (ETDEWEB)

    Bakken, Tor Haakon; Harby, Atle

    2010-07-01

    Full text: The era of new, large hydro-power development projects seems to be over in Norway. Partly in response to this, a large number of applications for the development of small-scale hydro-power projects of up to 10 MW are flooding the Water Resources and Energy Directorate, resulting in extensive development of small tributaries and water courses in Norway. This study developed a framework, based on a multi-criteria analysis (MCA) approach, for assessing and comparing several small hydro-power projects against one large project, and tested this approach on planned or developed projects in the Helgeland region, Norway. Multi-criteria analysis is a decision-support tool aimed at providing a systematic approach for the comparison of various alternatives with often non-commensurable and conflicting attributes. At the same time, the technique enables complex problems and various alternatives to be assessed in a transparent and simple way. The MCA software was in our case set up with two overall criteria (objectives), each with a number of sub-criteria: production, with sub-criteria such as volume of energy production, installed capacity, storage capacity and economic profit; and environmental impacts, with sub-criteria such as fishing interests, biodiversity and protection of unexploited nature. The data used in the case study are based on the planned development of Vefsna (the large project), with the energy and capacity production estimated and the environmental impacts identified as part of the feasibility studies (the project never reached the authorities' licensing system with a formal EIA). The small-scale hydro-power projects used for comparison are based on realized projects in the Helgeland region and a number of proposed projects, up-scaled to the size of the proposed Vefsna development. The results from the study indicate that a large number of small-scale hydro-power projects need to be implemented in order to balance the volume of produced electricity/capacity from one

  2. Natural Alternatives to Natural Number: The Case of Ratio

    Directory of Open Access Journals (Sweden)

    Percival G. Matthews

    2018-06-01

    Full Text Available The overwhelming majority of efforts to cultivate early mathematical thinking rely primarily on counting and associated natural number concepts. Unfortunately, natural numbers and discretized thinking do not align well with a large swath of the mathematical concepts we wish for children to learn. This misalignment presents an important impediment to teaching and learning. We suggest that one way to circumvent these pitfalls is to leverage students' non-numerical experiences that can provide intuitive access to foundational mathematical concepts. Specifically, we advocate for explicitly leveraging (a) students' perceptually based intuitions about quantity and (b) students' reasoning about change and variation, and we address the affordances offered by this approach. We argue that it can support ways of thinking that may at times align better with to-be-learned mathematical ideas, and thus may serve as a productive alternative for particular mathematical concepts when compared to number. We illustrate this argument using the domain of ratio, and we do so from the distinct disciplinary lenses we employ respectively as a cognitive psychologist and as a mathematics education researcher. Finally, we discuss the potential for productive synthesis given the substantial differences in our preferred methods and general epistemologies.

  3. The Price per Prospective Consumer of Providing Therapist Training and Consultation in Seven Evidence-Based Treatments within a Large Public Behavioral Health System: An Example Cost-Analysis Metric

    Directory of Open Access Journals (Sweden)

    Kelsie H. Okamura

    2018-01-01

    Full Text Available Objective: Public-sector behavioral health systems seeking to implement evidence-based treatments (EBTs) may face challenges selecting EBTs given their limited resources. This study describes and illustrates one method to calculate costs related to training and consultation to assist system-level decisions about which EBTs to select. Methods: Training, consultation, and indirect labor costs were calculated for seven commonly implemented EBTs. Using extant literature, we then estimated the diagnoses and populations for which each EBT was indicated. Diagnostic and demographic information from Medicaid claims data was obtained from a large behavioral health payer organization and used to estimate the number of covered people with whom the EBT could be used and to calculate implementation-associated costs per consumer. Results: Findings suggest substantial costs to therapists and service systems related to EBT training and consultation. Training and consultation costs varied by EBT, from $238.07 per potential consumer served for Dialectical Behavior Therapy to $0.18 for Cognitive Behavioral Therapy. Total cost did not correspond with the number of prospective consumers served by an EBT. Conclusion: A cost metric that accounts for the prospective recipients of a given EBT within a given population may provide insight into how systems should prioritize training efforts. Future policy should consider the financial burden of EBT implementation in relation to the context of the population being served and begin a dialog on creating incentives for EBT use.

  4. Email-Based Informed Consent: Innovative Method for Reaching Large Numbers of Subjects for Data Mining Research

    Science.gov (United States)

    Lee, Lesley R.; Mason, Sara S.; Babiak-Vazquez, Adriana; Ray, Stacie L.; Van Baalen, Mary

    2015-01-01

    Since the 2010 NASA authorization to make the Life Sciences Data Archive (LSDA) and Lifetime Surveillance of Astronaut Health (LSAH) data archives more accessible by the research and operational communities, demand for data has greatly increased. Correspondingly, both the number and scope of requests have increased, from 142 requests fulfilled in 2011 to 224 in 2014, and with some datasets comprising up to 1 million data points. To meet the demand, the LSAH and LSDA Repositories project was launched, which allows active and retired astronauts to authorize full, partial, or no access to their data for research without individual, study-specific informed consent. A one-on-one personal informed consent briefing is required to fully communicate the implications of the several tiers of consent. Due to the need for personal contact to conduct Repositories consent meetings, the rate of consenting has not kept up with demand for individualized, possibly attributable data. As a result, other methods had to be implemented to allow the release of large datasets, such as release of only de-identified data. However the compilation of large, de-identified data sets places a significant resource burden on LSAH and LSDA and may result in diminished scientific usefulness of the dataset. As a result, LSAH and LSDA worked with the JSC Institutional Review Board Chair, Astronaut Office physicians, and NASA Office of General Counsel personnel to develop a "Remote Consenting" process for retrospective data mining studies. This is particularly useful since the majority of the astronaut cohort is retired from the agency and living outside the Houston area. Originally planned as a method to send informed consent briefing slides and consent forms only by mail, Remote Consenting has evolved into a means to accept crewmember decisions on individual studies via their method of choice: email or paper copy by mail. To date, 100 emails have been sent to request participation in eight HRP

  5. Large SNP arrays for genotyping in crop plants

    Indian Academy of Sciences (India)

    Genotyping with large numbers of molecular markers is now an indispensable tool within plant genetics and breeding. Especially through the identification of large numbers of single nucleotide polymorphism (SNP) markers using the novel high-throughput sequencing technologies, it is now possible to reliably identify many ...

  6. Technical Note: Improved CT number stability across patient size using dual-energy CT virtual monoenergetic imaging

    International Nuclear Information System (INIS)

    Michalak, Gregory; Grimes, Joshua; Fletcher, Joel; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia; Halaweish, Ahmed

    2016-01-01

    Purpose: The purpose of this study was to evaluate, over a wide range of phantom sizes, CT number stability achieved using two techniques for generating dual-energy computed tomography (DECT) virtual monoenergetic images. Methods: Water phantoms ranging in lateral diameter from 15 to 50 cm and containing a CT number test object were scanned on a DSCT scanner using both single-energy (SE) and dual-energy (DE) techniques. The SE tube potentials were 70, 80, 90, 100, 110, 120, 130, 140, and 150 kV; the DE tube potential pairs were 80/140, 70/150Sn, 80/150Sn, 90/150Sn, and 100/150Sn kV (Sn denotes that the 150 kV beam was filtered with a 0.6 mm tin filter). Virtual monoenergetic images at energies ranging from 40 to 140 keV were produced from the DECT data using two algorithms, monoenergetic (mono) and monoenergetic plus (mono+). Particularly in large phantoms, water CT number errors and/or artifacts were observed; thus, datasets with water CT numbers outside ±10 HU or with noticeable artifacts were excluded from the study. CT numbers were measured to determine CT number stability across all phantom sizes. Results: Data exclusions were generally limited to cases when a SE or DE technique with a tube potential of less than 90 kV was used to scan a phantom larger than 30 cm. The 90/150Sn DE technique provided the most accurate water background over the large range of phantom sizes evaluated. Mono and mono+ provided equally improved CT number stability as a function of phantom size compared to SE; the average deviation in CT number was only 1.4% using 40 keV and 1.8% using 70 keV, while SE had an average deviation of 11.8%. Conclusions: The authors’ report demonstrates, across all phantom sizes, the improvement in CT number stability achieved with mono and mono+ relative to SE

  7. Technical Note: Improved CT number stability across patient size using dual-energy CT virtual monoenergetic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Michalak, Gregory; Grimes, Joshua; Fletcher, Joel; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia, E-mail: mccollough.cynthia@mayo.edu [Department of Radiology, Mayo Clinic, Rochester, Minnesota 55905 (United States); Halaweish, Ahmed [Siemens Medical Solutions, Malvern, Pennsylvania 19355 (United States)

    2016-01-15

    Purpose: The purpose of this study was to evaluate, over a wide range of phantom sizes, CT number stability achieved using two techniques for generating dual-energy computed tomography (DECT) virtual monoenergetic images. Methods: Water phantoms ranging in lateral diameter from 15 to 50 cm and containing a CT number test object were scanned on a DSCT scanner using both single-energy (SE) and dual-energy (DE) techniques. The SE tube potentials were 70, 80, 90, 100, 110, 120, 130, 140, and 150 kV; the DE tube potential pairs were 80/140, 70/150Sn, 80/150Sn, 90/150Sn, and 100/150Sn kV (Sn denotes that the 150 kV beam was filtered with a 0.6 mm tin filter). Virtual monoenergetic images at energies ranging from 40 to 140 keV were produced from the DECT data using two algorithms, monoenergetic (mono) and monoenergetic plus (mono+). Particularly in large phantoms, water CT number errors and/or artifacts were observed; thus, datasets with water CT numbers outside ±10 HU or with noticeable artifacts were excluded from the study. CT numbers were measured to determine CT number stability across all phantom sizes. Results: Data exclusions were generally limited to cases when a SE or DE technique with a tube potential of less than 90 kV was used to scan a phantom larger than 30 cm. The 90/150Sn DE technique provided the most accurate water background over the large range of phantom sizes evaluated. Mono and mono+ provided equally improved CT number stability as a function of phantom size compared to SE; the average deviation in CT number was only 1.4% using 40 keV and 1.8% using 70 keV, while SE had an average deviation of 11.8%. Conclusions: The authors’ report demonstrates, across all phantom sizes, the improvement in CT number stability achieved with mono and mono+ relative to SE.
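    A plausible way to summarize the CT number stability reported in the two records above is to compare the test object's measured CT number in each phantom size against a reference value and report the mean absolute deviation. The sketch below uses invented HU values and a simple deviation definition that may differ from the papers' exact metric:

```python
# Illustrative CT number stability summary across phantom sizes.
# Values and the "deviation" definition are assumptions, not the papers' data.
import statistics

def mean_abs_deviation(measured_hu, reference_hu):
    """Mean absolute deviation (in HU) of measured CT numbers from a reference."""
    return statistics.mean(abs(m - reference_hu) for m in measured_hu)

# Hypothetical CT numbers (HU) of the same test object measured in phantoms of
# increasing diameter, for a single-energy series and a monoenergetic series.
single_energy = [300, 291, 278, 262, 243]
mono_70kev = [300, 299, 297, 296, 294]
reference = 300  # value in the smallest phantom, used here as the reference

print("SE   deviation:", mean_abs_deviation(single_energy, reference), "HU")
print("mono deviation:", mean_abs_deviation(mono_70kev, reference), "HU")
```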

  8. The Role of Large Enterprises in Museum Digitization

    Directory of Open Access Journals (Sweden)

    Ying Wang

    2014-12-01

    Full Text Available By actively promoting museum digitalization, Japan finds an idiosyncratic way to museum digitalization. The mode of collaboration, in which the government plays a leading role while large enterprises’ R&D capabilities and museum’s cultural dynamics are both allowed to give full play, turns these powerful enterprises into the solid backing of museum digitalization, provides a concrete solution to the common financial and technical challenges museums face in the process. In the course of such collaboration, large enterprises succeed in cultivating a number of talents who understand both business world and museum operation. Thanks to their experiences in the business world, compared with museum professionals, they play a more vital role in marketing the potential commercial exploitation of related digital technologies. The benefits large enterprises could possibly gain from such mode of collaboration - realizing social values, enhancing corporate image, for instance - help to motivate their active involvement, thereby forming a positive cycle of sustainable development.

  9. MPQS with three large primes

    NARCIS (Netherlands)

    Leyland, P.; Lenstra, A.K.; Dodson, B.; Muffett, A.; Wagstaff, S.; Fieker, C.; Kohel, D.R.

    2002-01-01

    We report the factorization of a 135-digit integer by the triple-large-prime variation of the multiple polynomial quadratic sieve. Previous workers [6][10] had suggested that using more than two large primes would be counterproductive, because of the greatly increased number of false reports from

  10. The necessity of and policy suggestions for implementing a limited number of large scale, fully integrated CCS demonstrations in China

    International Nuclear Information System (INIS)

    Li Zheng; Zhang Dongjie; Ma Linwei; West, Logan; Ni Weidou

    2011-01-01

    CCS is seen as an important and strategic technology option for China to reduce its CO2 emission, and has received tremendous attention both around the world and in China. Scholars are divided on the role CCS should play, making the future of CCS in China highly uncertain. This paper presents the overall circumstances for CCS development in China, including the threats and opportunities for large scale deployment of CCS, the initial barriers and advantages that China currently possesses, as well as the current progress of CCS demonstration in China. The paper proposes the implementation of a limited number of larger scale, fully integrated CCS demonstration projects and explains the potential benefits that could be garnered. The problems with China's current CCS demonstration work are analyzed, and some targeted policies are proposed based on those observations. These policy suggestions can effectively solve these problems, help China gain the benefits with CCS demonstration soon, and make great contributions to China's big CO2 reduction mission. - Highlights: → We analyze the overall circumstances for CCS development in China in detail. → China can garner multiple benefits by conducting several large, integrated CCS demos. → We present the current progress in CCS demonstration in China in detail. → Some problems exist with China's current CCS demonstration work. → Some focused policies are suggested to improve CCS demonstration in China.

  11. Elementary number theory with programming

    CERN Document Server

    Lewinter, Marty

    2015-01-01

    A successful presentation of the fundamental concepts of number theory and computer programming. Bridging an existing gap between mathematics and programming, Elementary Number Theory with Programming provides a unique introduction to elementary number theory with fundamental coverage of computer programming. Written by highly-qualified experts in the fields of computer science and mathematics, the book features accessible coverage for readers with various levels of experience and explores number theory in the context of programming without relying on advanced prerequisite knowledge and con

  12. Gravity Cutoff in Theories with Large Discrete Symmetries

    International Nuclear Information System (INIS)

    Dvali, Gia; Redi, Michele; Sibiryakov, Sergey; Vainshtein, Arkady

    2008-01-01

    We set an upper bound on the gravitational cutoff in theories with exact quantum numbers of large N periodicity, such as Z_N discrete symmetries. The bound stems from black hole physics. It is similar to the bound appearing in theories with N particle species, though a priori, a large discrete symmetry does not imply a large number of species. Thus, there emerges a potentially wide class of new theories that address the hierarchy problem by lowering the gravitational cutoff due to the existence of large Z_(10^32)-type symmetries

  13. Sample-path large deviations in credit risk

    NARCIS (Netherlands)

    Leijdekker, V.J.G.; Mandjes, M.R.H.; Spreij, P.J.C.

    2011-01-01

    The event of large losses plays an important role in credit risk. As these large losses are typically rare, and portfolios usually consist of a large number of positions, large deviation theory is the natural tool to analyze the tail asymptotics of the probabilities involved. We first derive a

  14. Entry control system for large populations

    International Nuclear Information System (INIS)

    Merillat, P.D.

    1982-01-01

    An Entry Control System has been developed which is appropriate for use at an installation with a large population requiring access over a large area. This is accomplished by centralizing the data base management and enrollment functions and decentralizing the guard-assisted, positive personnel identification and access functions. Current information pertaining to all enrollees is maintained through user-friendly enrollment stations. These stations may be used to enroll individuals, alter their area access authorizations, change expiration dates, and perform other similar functions. An audit trail of data base alterations is provided to the System Manager. Decentralized systems exist at each area to which access is controlled. The central system provides these systems with the necessary entry control information to allow them to operate microprocessor-driven entry control devices. The system is comprised of commercially available entry control components and is structured such that it will be able to incorporate improved devices as technology progresses. Currently, access is granted to individuals who possess a valid credential, have current access authorization, can supply a memorized personal identification number, and whose physical hand dimensions match their profile obtained during enrollment. The entry control devices report misuses as security violations to a Guard Alarm Display and Assessment System
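    The access rule described above combines four checks: a valid (unexpired) credential, current area authorization, a correct memorized PIN, and a hand-geometry match against the enrolled profile. A minimal sketch of that decision logic, with an assumed data model and tolerance (not the actual system's implementation):

```python
# Sketch of the access decision: admit only if credential is current, the area
# is authorized, the PIN matches, and hand geometry matches the enrolled profile.
from dataclasses import dataclass
from datetime import date

@dataclass
class Enrollee:
    credential_id: str
    authorized_areas: set
    expiration: date
    pin: str
    hand_profile: tuple  # enrolled hand dimensions (arbitrary units)

def hand_match(profile, measurement, tolerance=0.05):
    """True if every measured dimension is within `tolerance` of the profile."""
    return all(abs(p - m) / p <= tolerance for p, m in zip(profile, measurement))

def grant_access(enrollee, area, pin_entered, hand_measurement, today=None):
    today = today or date.today()
    checks = (
        enrollee.expiration >= today,
        area in enrollee.authorized_areas,
        pin_entered == enrollee.pin,
        hand_match(enrollee.hand_profile, hand_measurement),
    )
    granted = all(checks)
    if not granted:
        # Misuses are reported as security violations to the alarm system.
        print(f"Security violation reported for credential {enrollee.credential_id}")
    return granted
```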

  15. Large area synchrotron X-ray fluorescence mapping of biological samples

    International Nuclear Information System (INIS)

    Kempson, I.; Thierry, B.; Smith, E.; Gao, M.; De Jonge, M.

    2014-01-01

    Large area mapping of inorganic material in biological samples has suffered severely from prohibitively long acquisition times. With the advent of new detector technology we can now generate statistically relevant information for studying cell populations, inter-variability and bioinorganic chemistry in large specimens. We have been implementing ultrafast synchrotron-based XRF mapping afforded by the MAIA detector for large area mapping of biological material. For example, a 2.5 million pixel map can be acquired in 3 hours, compared to a typical synchrotron XRF set-up needing over 1 month of uninterrupted beamtime. Of particular focus to us is the fate of metals and nanoparticles in cells, 3D tissue models and animal tissues. The large area scanning has for the first time provided statistically significant information on sufficiently large numbers of cells to provide data on intercellular variability in uptake of nanoparticles. Techniques such as flow cytometry generally require analysis of thousands of cells for statistically meaningful comparison, due to the large degree of variability. Large area XRF now gives comparable information in a quantifiable manner. Furthermore, we can now image localised deposition of nanoparticles in tissues that would be highly improbable to 'find' by typical XRF imaging. In addition, the ultrafast nature also makes it viable to conduct 3D XRF tomography over large dimensions. This technology avails new opportunities in biomonitoring and understanding metal and nanoparticle fate ex-vivo. Following from this is extension to molecular imaging through specific antibody-targeted nanoparticles to label specific tissues and monitor cellular processes or biological consequences

  16. Providing trust and interoperability to federate distributed biobanks.

    Science.gov (United States)

    Lablans, Martin; Bartholomäus, Sebastian; Uckert, Frank

    2011-01-01

    Biomedical research requires large numbers of well annotated, quality-assessed samples which often cannot be provided by a single biobank. Connecting biobanks, researchers and service providers raises numerous challenges including trust among partners and towards the infrastructure as well as interoperability problems. Therefore we develop a holistic, open-source and easy-to-use IT infrastructure. Our federated approach allows partners to reflect their organizational structures and protect their data sovereignty. The search service and the contact arrangement processes increase data sovereignty without stigmatizing for rejecting a specific cooperation. The infrastructure supports daily processes with an integrated basic sample manager and user-definable electronic case report forms. Interfaces for existing IT systems avoid re-entering of data. Moreover, resource virtualization is supported to make underutilized resources of some partners accessible to those with insufficient equipment for mutual benefit. The functionality of the resulting infrastructure is outlined in a use-case to demonstrate collaboration within a translational research network. Compared to other existing or upcoming infrastructures, our approach has ultimately the same goals, but relies on gentle incentives rather than top-down imposed progress.

  17. Meta-ethnography 25 years on: challenges and insights for synthesising a large number of qualitative studies

    Science.gov (United States)

    2014-01-01

    Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients’ experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team’s reflexive statements to illustrate the development of our methods. PMID:24951054

  18. The Effectiveness of Instructor Personalized and Formative Feedback Provided by Instructor in an Online Setting: Some Unresolved Issues

    Science.gov (United States)

    Planar, Dolors; Moya, Soledad

    2016-01-01

    Formative feedback has great potential for teaching and learning in online undergraduate programmes. There is a large number of courses where the main source of feedback is provided by the instructor. This is particularly seen in subjects where assessments are designed based on specific activities which are the same for all students, and where the…

  19. Report number codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, R.N. (ed.)

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  20. Report number codes

    International Nuclear Information System (INIS)

    Nelson, R.N.

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name

  1. Large-scale patterns in Rayleigh-Benard convection

    International Nuclear Information System (INIS)

    Hardenberg, J. von; Parodi, A.; Passoni, G.; Provenzale, A.; Spiegel, E.A.

    2008-01-01

    Rayleigh-Benard convection at large Rayleigh number is characterized by the presence of intense, vertically moving plumes. Both laboratory and numerical experiments reveal that the rising and descending plumes aggregate into separate clusters so as to produce large-scale updrafts and downdrafts. The horizontal scales of the aggregates reported so far have been comparable to the horizontal extent of the containers, but it has not been clear whether that represents a limitation imposed by domain size. In this work, we present numerical simulations of convection at sufficiently large aspect ratio to ascertain whether there is an intrinsic saturation scale for the clustering process when that ratio is large enough. From a series of simulations of Rayleigh-Benard convection with Rayleigh numbers between 10^5 and 10^8 and with aspect ratios up to 12π, we conclude that the clustering process has a finite horizontal saturation scale with at most a weak dependence on Rayleigh number in the range studied

  2. Number Line Estimation: The Use of Number Line Magnitude Estimation to Detect the Presence of Math Disability in Postsecondary Students

    Science.gov (United States)

    McDonald, Steven A.

    2010-01-01

    This study arose from an interest in the possible presence of mathematics disabilities among students enrolled in the developmental math program at a large university in the Mid-Atlantic region. Research in mathematics learning disabilities (MLD) has included a focus on the construct of working memory and number sense. A component of number sense…

  3. Three Identities of the Catalan-Qi Numbers

    OpenAIRE

    Mansour Mahmoud; Feng Qi

    2016-01-01

    In the paper, the authors find three new identities of the Catalan-Qi numbers and provide alternative proofs of two identities of the Catalan numbers. The three identities of the Catalan-Qi numbers generalize three identities of the Catalan numbers.

  4. Strategies in filtering in the number field sieve

    NARCIS (Netherlands)

    S.H. Cavallar

    2000-01-01

    A critical step when factoring large integers by the Number Field Sieve consists of finding dependencies in a huge sparse matrix over the field GF(2), using a Block Lanczos algorithm. Both size and weight (the number of non-zero elements) of the matrix critically affect the running time
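    The dependency-finding step mentioned above amounts to locating a nontrivial set of matrix rows over GF(2) whose sum (XOR) is zero. Production NFS codes use Block Lanczos for this on huge sparse matrices; the sketch below uses plain Gaussian elimination on a tiny dense matrix purely to illustrate the task:

```python
# Find a nontrivial set of rows of a GF(2) matrix whose XOR is zero.
# Dense Gaussian elimination is a stand-in for the Block Lanczos solver
# used in real Number Field Sieve implementations.

def find_dependency(rows):
    """rows: list of ints, each encoding one GF(2) row as a bitmask.
    Returns a set of row indices whose XOR is 0, or None if rows are independent."""
    pivots = {}  # leading-bit position -> (reduced row, set of original indices)
    for i, row in enumerate(rows):
        combo = {i}
        while row:
            lead = row.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = (row, combo)
                break
            prow, pcombo = pivots[lead]
            row ^= prow
            combo ^= pcombo  # symmetric difference of index sets
        else:
            return combo  # row reduced to zero: these original rows XOR to zero
    return None

# Example: row 0 XOR row 1 equals row 2, so {0, 1, 2} is a dependency.
matrix = [0b1011, 0b0110, 0b1101, 0b0001]
print(find_dependency(matrix))  # {0, 1, 2}
```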

  5. Estimating the cost of skin cancer detection by dermatology providers in a large health care system.

    Science.gov (United States)

    Matsumoto, Martha; Secrest, Aaron; Anderson, Alyce; Saul, Melissa I; Ho, Jonhan; Kirkwood, John M; Ferris, Laura K

    2018-04-01

    Data on the cost and efficiency of skin cancer detection through total body skin examination are scarce. To determine the number needed to screen (NNS) and biopsy (NNB) and cost per skin cancer diagnosed in a large dermatology practice in patients undergoing total body skin examination. This is a retrospective observational study. During 2011-2015, a total of 20,270 patients underwent 33,647 visits for total body skin examination; 9956 lesion biopsies were performed yielding 2763 skin cancers, including 155 melanomas. The NNS to detect 1 skin cancer was 12.2 (95% confidence interval [CI] 11.7-12.6) and 1 melanoma was 215 (95% CI 185-252). The NNB to detect 1 skin cancer was 3.0 (95% CI 2.9-3.1) and 1 melanoma was 27.8 (95% CI 23.3-33.3). In a multivariable model for NNS, age and personal history of melanoma were significant factors. Age switched from a protective factor to a risk factor at 51 years of age. The estimated cost per melanoma detected was $32,594 (95% CI $27,326-$37,475). Data are from a single health care system and based on physician coding. Melanoma detection through total body skin examination is most efficient in patients ≥50 years of age and those with a personal history of melanoma. Our findings will be helpful in modeling the cost effectiveness of melanoma screening by dermatologists. Copyright © 2017 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
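    The NNS and NNB reported above are ratios of screening examinations and biopsies, respectively, to cancers detected, and the cost per melanoma divides total screening and biopsy cost by melanomas found. A toy calculation with invented inputs and a simplified cost model (not the study's data or costing method):

```python
# NNS = screening examinations per cancer detected; NNB = biopsies per cancer
# detected; cost per melanoma = (screening + biopsy cost) / melanomas found.
# All numbers below are hypothetical.

def detection_metrics(visits, biopsies, cancers, melanomas,
                      cost_per_visit, cost_per_biopsy):
    nns = visits / cancers
    nnb = biopsies / cancers
    total_cost = visits * cost_per_visit + biopsies * cost_per_biopsy
    return nns, nnb, total_cost / melanomas

nns, nnb, cpm = detection_metrics(visits=10_000, biopsies=3_000, cancers=800,
                                  melanomas=50, cost_per_visit=100,
                                  cost_per_biopsy=300)
print(f"NNS {nns:.1f}, NNB {nnb:.1f}, cost per melanoma ${cpm:,.0f}")
```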

  6. Technology interactions among low-carbon energy technologies: What can we learn from a large number of scenarios?

    International Nuclear Information System (INIS)

    McJeon, Haewon C.; Clarke, Leon; Kyle, Page; Wise, Marshall; Hackbarth, Andrew; Bryant, Benjamin P.; Lempert, Robert J.

    2011-01-01

    Advanced low-carbon energy technologies can substantially reduce the cost of stabilizing atmospheric carbon dioxide concentrations. Understanding the interactions between these technologies and their impact on the costs of stabilization can help inform energy policy decisions. Many previous studies have addressed this challenge by exploring a small number of representative scenarios that represent particular combinations of future technology developments. This paper uses a combinatorial approach in which scenarios are created for all combinations of the technology development assumptions that underlie a smaller, representative set of scenarios. We estimate stabilization costs for 768 runs of the Global Change Assessment Model (GCAM), based on 384 different combinations of assumptions about the future performance of technologies and two stabilization goals. Graphical depiction of the distribution of stabilization costs provides first-order insights about the full data set and individual technologies. We apply a formal scenario discovery method to obtain more nuanced insights about the combinations of technology assumptions most strongly associated with high-cost outcomes. Many of the fundamental insights from traditional representative scenario analysis still hold under this comprehensive combinatorial analysis. For example, the importance of carbon capture and storage (CCS) and the substitution effect among supply technologies are consistently demonstrated. The results also provide more clarity regarding insights not easily demonstrated through representative scenario analysis. For example, they show more clearly how certain supply technologies can provide a hedge against high stabilization costs, and that aggregate end-use efficiency improvements deliver relatively consistent stabilization cost reductions. Furthermore, the results indicate that a lack of CCS options combined with lower technological advances in the buildings sector or the transportation sector is
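    The combinatorial design described above, one scenario per combination of technology assumptions, can be generated mechanically. The sketch below uses invented technology dimensions and levels; the actual GCAM assumption set differs:

```python
# Enumerate all combinations of technology-development assumptions, as in a
# combinatorial scenario design. Dimensions and levels below are invented.
from itertools import product

assumptions = {
    "ccs":           ["unavailable", "reference", "advanced"],
    "nuclear":       ["phase-out", "reference", "advanced"],
    "renewables":    ["reference", "advanced"],
    "end_use_eff":   ["reference", "advanced"],
    "stabilization": ["550 ppm", "450 ppm"],
}

scenarios = [dict(zip(assumptions, combo)) for combo in product(*assumptions.values())]
print(len(scenarios), "scenarios")  # 3*3*2*2*2 = 72 model runs
print(scenarios[0])
```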

  7. The numbers game

    Directory of Open Access Journals (Sweden)

    Oli Brown

    2008-10-01

    Full Text Available Estimates of the potential number of ‘climate change migrants’ vary hugely. In order to persuade policymakers of the need to act and to provide a sound basis for appropriate responses, there is an urgent need for better analysis, better data and better predictions.

  8. Innate or Acquired? – Disentangling Number Sense and Early Number Competencies

    Directory of Open Access Journals (Sweden)

    Julia Siemann

    2018-04-01

    Full Text Available The clinical profile termed developmental dyscalculia (DD) is a fundamental disability affecting children already prior to arithmetic schooling, but the formal diagnosis is often only made during school years. The manifold associated deficits depend on age, education, developmental stage, and task requirements. Despite a large body of studies, the underlying mechanisms remain dubious. Conflicting findings have stimulated opposing theories, each presenting enough empirical support to remain a possible alternative. A so far unresolved question concerns the debate whether a putative innate number sense is required for successful arithmetic achievement as opposed to a pure reliance on domain-general cognitive factors. Here, we outline that the controversy arises due to ambiguous conceptualizations of the number sense. It is common practice to use early number competence as a proxy for innate magnitude processing, even though it requires knowledge of the number system. Therefore, such findings reflect the degree to which quantity is successfully transferred into symbols rather than informing about quantity representation per se. To solve this issue, we propose a three-factor account and incorporate it into the partly overlapping suggestions in the literature regarding the etiology of different DD profiles. The proposed view on DD is especially beneficial because it is applicable to more complex theories identifying a conglomerate of deficits as underlying cause of DD.

  9. Innate or Acquired? – Disentangling Number Sense and Early Number Competencies

    Science.gov (United States)

    Siemann, Julia; Petermann, Franz

    2018-01-01

    The clinical profile termed developmental dyscalculia (DD) is a fundamental disability affecting children already prior to arithmetic schooling, but the formal diagnosis is often only made during school years. The manifold associated deficits depend on age, education, developmental stage, and task requirements. Despite a large body of studies, the underlying mechanisms remain dubious. Conflicting findings have stimulated opposing theories, each presenting enough empirical support to remain a possible alternative. A so far unresolved question concerns the debate whether a putative innate number sense is required for successful arithmetic achievement as opposed to a pure reliance on domain-general cognitive factors. Here, we outline that the controversy arises due to ambiguous conceptualizations of the number sense. It is common practice to use early number competence as a proxy for innate magnitude processing, even though it requires knowledge of the number system. Therefore, such findings reflect the degree to which quantity is successfully transferred into symbols rather than informing about quantity representation per se. To solve this issue, we propose a three-factor account and incorporate it into the partly overlapping suggestions in the literature regarding the etiology of different DD profiles. The proposed view on DD is especially beneficial because it is applicable to more complex theories identifying a conglomerate of deficits as underlying cause of DD. PMID:29725316

  10. Experimental determination of Ramsey numbers.

    Science.gov (United States)

    Bian, Zhengbing; Chudak, Fabian; Macready, William G; Clark, Lane; Gaitan, Frank

    2013-09-27

    Ramsey theory is a highly active research area in mathematics that studies the emergence of order in large disordered structures. Ramsey numbers mark the threshold at which order first appears and are extremely difficult to calculate due to their explosive rate of growth. Recently, an algorithm that can be implemented using adiabatic quantum evolution has been proposed that calculates the two-color Ramsey numbers R(m,n). Here we present results of an experimental implementation of this algorithm and show that it correctly determines the Ramsey numbers R(3,3) and R(m,2) for 4≤m≤8. The R(8,2) computation used 84 qubits of which 28 were computational qubits. This computation is the largest experimental implementation of a scientifically meaningful adiabatic evolution algorithm that has been done to date.
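    On a classical computer, the smallest of these Ramsey numbers can be verified by brute force: some 2-coloring of the edges of K_5 avoids a monochromatic triangle, while every 2-coloring of K_6 contains one, so R(3,3) = 6. The exhaustive check below is classical and unrelated to the adiabatic algorithm in the record:

```python
# Brute-force check that R(3,3) = 6: K_5 admits a 2-coloring with no
# monochromatic triangle, but every 2-coloring of K_6 contains one.
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """coloring maps each edge (i, j) with i < j to 0 or 1."""
    return any(coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
               for a, b, c in combinations(range(n), 3))

def every_coloring_has_mono_triangle(n):
    edges = list(combinations(range(n), 2))
    return all(has_mono_triangle(n, dict(zip(edges, colors)))
               for colors in product((0, 1), repeat=len(edges)))

print(every_coloring_has_mono_triangle(5))  # False -> R(3,3) > 5
print(every_coloring_has_mono_triangle(6))  # True  -> R(3,3) <= 6
```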

  11. Classical theory of algebraic numbers

    CERN Document Server

    Ribenboim, Paulo

    2001-01-01

    Gauss created the theory of binary quadratic forms in "Disquisitiones Arithmeticae" and Kummer invented ideals and the theory of cyclotomic fields in his attempt to prove Fermat's Last Theorem These were the starting points for the theory of algebraic numbers, developed in the classical papers of Dedekind, Dirichlet, Eisenstein, Hermite and many others This theory, enriched with more recent contributions, is of basic importance in the study of diophantine equations and arithmetic algebraic geometry, including methods in cryptography This book has a clear and thorough exposition of the classical theory of algebraic numbers, and contains a large number of exercises as well as worked out numerical examples The Introduction is a recapitulation of results about principal ideal domains, unique factorization domains and commutative fields Part One is devoted to residue classes and quadratic residues In Part Two one finds the study of algebraic integers, ideals, units, class numbers, the theory of decomposition, iner...

  12. An introduction to Catalan numbers

    CERN Document Server

    Roman, Steven

    2015-01-01

    This textbook provides an introduction to the Catalan numbers and their remarkable properties, along with their various applications in combinatorics.  Intended to be accessible to students new to the subject, the book begins with more elementary topics before progressing to more mathematically sophisticated topics.  Each chapter focuses on a specific combinatorial object counted by these numbers, including paths, trees, tilings of a staircase, null sums in Zn+1, interval structures, partitions, permutations, semiorders, and more.  Exercises are included at the end of book, along with hints and solutions, to help students obtain a better grasp of the material.  The text is ideal for undergraduate students studying combinatorics, but will also appeal to anyone with a mathematical background who has an interest in learning about the Catalan numbers. “Roman does an admirable job of providing an introduction to Catalan numbers of a different nature from the previous ones.  He has made an excellent choice o...

  13. Provider-Independent Use of the Cloud

    Science.gov (United States)

    Harmer, Terence; Wright, Peter; Cunningham, Christina; Perrott, Ron

    Utility computing offers researchers and businesses the potential of significant cost-savings, making it possible for them to match the cost of their computing and storage to their demand for such resources. A utility compute provider enables the purchase of compute infrastructures on-demand; when a user requires computing resources a provider will provision a resource for them and charge them only for their period of use of that resource. There has been a significant growth in the number of cloud computing resource providers and each has a different resource usage model, application process and application programming interface (API); developing generic multi-resource provider applications is thus difficult and time consuming. We have developed an abstraction layer that provides a single resource usage model, user authentication model and API for compute providers that enables cloud-provider neutral applications to be developed. In this paper we outline the issues in using external resource providers, give examples of using a number of the most popular cloud providers and provide examples of developing provider neutral applications. In addition, we discuss the development of the API to create a generic provisioning model based on a common architecture for cloud computing providers.
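    The abstraction layer described above exposes one resource-usage model and API across heterogeneous providers. The sketch below shows what such a provider-neutral interface might look like; the class and method names are assumptions for illustration, not the authors' actual API:

```python
# Sketch of a provider-neutral provisioning interface: each concrete provider
# adapts its own API to a common contract so applications stay portable.
from abc import ABC, abstractmethod

class ComputeProvider(ABC):
    @abstractmethod
    def provision(self, cpu: int, memory_gb: int) -> str:
        """Start a resource and return an opaque resource identifier."""

    @abstractmethod
    def release(self, resource_id: str) -> None:
        """Stop the resource and end billing."""

class LocalSimulatedProvider(ComputeProvider):
    """Stand-in backend; a real adapter would call a cloud provider's API."""
    def __init__(self):
        self._next_id = 0

    def provision(self, cpu, memory_gb):
        self._next_id += 1
        return f"sim-{self._next_id} ({cpu} vCPU / {memory_gb} GB)"

    def release(self, resource_id):
        print(f"released {resource_id}")

def run_job(provider: ComputeProvider):
    """Application code depends only on the neutral interface."""
    rid = provider.provision(cpu=4, memory_gb=16)
    print(f"running on {rid}")
    provider.release(rid)

run_job(LocalSimulatedProvider())
```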

  14. Three Identities of the Catalan-Qi Numbers

    Directory of Open Access Journals (Sweden)

    Mansour Mahmoud

    2016-05-01

    Full Text Available In the paper, the authors find three new identities of the Catalan-Qi numbers and provide alternative proofs of two identities of the Catalan numbers. The three identities of the Catalan-Qi numbers generalize three identities of the Catalan numbers.
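    For context, the ordinary Catalan numbers that the Catalan-Qi numbers generalize satisfy the closed form C_n = C(2n, n)/(n+1) and the convolution recurrence C_{n+1} = sum_{i=0..n} C_i * C_{n-i}. A quick cross-check of the two definitions (generic background, not material from the paper):

```python
# Ordinary Catalan numbers via the closed form and via the convolution
# recurrence; the Catalan-Qi numbers discussed in the paper generalize these.
from math import comb

def catalan_closed(n):
    return comb(2 * n, n) // (n + 1)

def catalan_recurrence(n_max):
    c = [1]  # C_0 = 1
    for n in range(n_max):
        c.append(sum(c[i] * c[n - i] for i in range(n + 1)))  # C_{n+1}
    return c

print([catalan_closed(n) for n in range(8)])  # 1, 1, 2, 5, 14, 42, 132, 429
print(catalan_recurrence(7))                   # same sequence
```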

  15. CopyNumber450kCancer: baseline correction for accurate copy number calling from the 450k methylation array.

    Science.gov (United States)

    Marzouka, Nour-Al-Dain; Nordlund, Jessica; Bäcklin, Christofer L; Lönnerholm, Gudmar; Syvänen, Ann-Christine; Carlsson Almlöf, Jonas

    2016-04-01

    The Illumina Infinium HumanMethylation450 BeadChip (450k) is widely used for the evaluation of DNA methylation levels in large-scale datasets, particularly in cancer. The 450k design allows copy number variant (CNV) calling using existing bioinformatics tools. However, in cancer samples, numerous large-scale aberrations cause shifting in the probe intensities and thereby may result in erroneous CNV calling. Therefore, a baseline correction process is needed. We suggest the maximum peak of probe segment density to correct the shift in the intensities in cancer samples. CopyNumber450kCancer is implemented as an R package. The package with examples can be downloaded at http://cran.r-project.org. Contact: nour.marzouka@medsci.uu.se. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  16. Self-correcting random number generator

    Science.gov (United States)

    Humble, Travis S.; Pooser, Raphael C.

    2016-09-06

    A system and method for generating random numbers. The system may include a random number generator (RNG), such as a quantum random number generator (QRNG) configured to self-correct or adapt in order to substantially achieve randomness from the output of the RNG. By adapting, the RNG may generate a random number that may be considered random regardless of whether the random number itself is tested as such. As an example, the RNG may include components to monitor one or more characteristics of the RNG during operation, and may use the monitored characteristics as a basis for adapting, or self-correcting, to provide a random number according to one or more performance criteria.
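    One classical software analogue of the monitor-and-adapt idea in this abstract is to track the bias of a raw bit source and switch to a von Neumann extractor when the bias exceeds a threshold. The sketch below uses that well-known technique as a stand-in and does not reflect the patented quantum design:

```python
# Toy "self-correcting" RNG: monitor the bias of a raw (possibly skewed) bit
# source and, if bias exceeds a threshold, pass bits through a von Neumann
# extractor. Illustrates monitor-and-adapt only; the patented QRNG differs.
import random

def biased_source(p_one=0.7):
    while True:
        yield 1 if random.random() < p_one else 0

def von_neumann(bits):
    """Emit 0 for a (0,1) pair and 1 for a (1,0) pair; discard equal pairs."""
    while True:
        a, b = next(bits), next(bits)
        if a != b:
            yield a

def self_correcting(bits, window=1000, threshold=0.05):
    sample = [next(bits) for _ in range(window)]
    bias = abs(sum(sample) / window - 0.5)
    if bias > threshold:          # monitored characteristic out of spec
        return von_neumann(bits)  # adapt: switch to the debiased stream
    return bits

corrected = self_correcting(biased_source())
out = [next(corrected) for _ in range(10000)]
print("corrected mean:", sum(out) / len(out))  # close to 0.5
```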

  17. Reducing Information Overload in Large Seismic Data Sets

    Energy Technology Data Exchange (ETDEWEB)

    HAMPTON,JEFFERY W.; YOUNG,CHRISTOPHER J.; MERCHANT,BION J.; CARR,DORTHE B.; AGUILAR-CHANG,JULIO

    2000-08-02

    Event catalogs for seismic data can become very large. Furthermore, as researchers collect multiple catalogs and reconcile them into a single catalog that is stored in a relational database, the reconciled set becomes even larger. The sheer number of these events makes searching for relevant events to compare with events of interest problematic. Information overload in this form can lead to the data sets being under-utilized and/or used incorrectly or inconsistently. Thus, efforts have been initiated to research techniques and strategies for helping researchers to make better use of large data sets. In this paper, the authors present their efforts to do so in two ways: (1) the Event Search Engine, which is a waveform correlation tool and (2) some content analysis tools, which are a combination of custom-built and commercial off-the-shelf tools for accessing, managing, and querying seismic data stored in a relational database. The current Event Search Engine is based on a hierarchical clustering tool known as the dendrogram tool, which is written as a MatSeis graphical user interface. The dendrogram tool allows the user to build dendrogram diagrams for a set of waveforms by controlling phase windowing, down-sampling, filtering, enveloping, and the clustering method (e.g. single linkage, complete linkage, flexible method). It also allows the clustering to be based on two or more stations simultaneously, which is important to bridge gaps in the sparsely recorded event sets anticipated in such a large reconciled event set. Current efforts are focusing on tools to help the researcher winnow the clusters defined using the dendrogram tool down to the minimum optimal identification set. This will become critical as the number of reference events in the reconciled event set continually grows. The dendrogram tool is part of the MatSeis analysis package, which is available on the Nuclear Explosion Monitoring Research and Engineering Program Web Site. As part of the research
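    The Event Search Engine clusters events by waveform correlation and presents the result as a dendrogram. A generic sketch of that idea with SciPy's hierarchical clustering and synthetic waveforms (a stand-in, not the MatSeis dendrogram tool):

```python
# Hierarchical clustering of events by waveform correlation. Synthetic
# waveforms stand in for real seismograms; the actual Event Search Engine
# uses the MatSeis dendrogram tool.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
base_a = np.sin(2 * np.pi * 5 * t)           # "event family" A
base_b = np.sign(np.sin(2 * np.pi * 3 * t))  # "event family" B
waveforms = np.array([base_a + 0.2 * rng.standard_normal(t.size) for _ in range(3)] +
                     [base_b + 0.2 * rng.standard_normal(t.size) for _ in range(3)])

# Distance = 1 - correlation coefficient between waveform pairs.
corr = np.corrcoef(waveforms)
dist = squareform(1 - corr, checks=False)

tree = linkage(dist, method="complete")             # complete-linkage clustering
print(fcluster(tree, t=0.5, criterion="distance"))  # e.g. [1 1 1 2 2 2]
```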

  18. Production of Numbers about the Future

    DEFF Research Database (Denmark)

    Huikku, Jari; Mouritsen, Jan; Silvola, Hanna

    of prominent Finnish business managers, auditors, analysts, investors, financial supervisory authority, academics and media, the paper extends prior research which has used large data. The paper analyses impairment testing as a process where network of human and non-human actors produce numbers about...

  19. Center-stabilized Yang-Mills Theory:Confinement and Large N Volume Independence

    International Nuclear Information System (INIS)

    Unsal, Mithat; Yaffe, Laurence G.

    2008-01-01

    We examine a double trace deformation of SU(N) Yang-Mills theory which, for large N and large volume, is equivalent to unmodified Yang-Mills theory up to O(1/N^2) corrections. In contrast to the unmodified theory, large N volume independence is valid in the deformed theory down to arbitrarily small volumes. The double trace deformation prevents the spontaneous breaking of center symmetry which would otherwise disrupt large N volume independence in small volumes. For small values of N, if the theory is formulated on R^3 x S^1 with a sufficiently small compactification size L, then an analytic treatment of the non-perturbative dynamics of the deformed theory is possible. In this regime, we show that the deformed Yang-Mills theory has a mass gap and exhibits linear confinement. Increasing the circumference L or number of colors N decreases the separation of scales on which the analytic treatment relies. However, there are no order parameters which distinguish the small and large radius regimes. Consequently, for small N the deformed theory provides a novel example of a locally four-dimensional pure gauge theory in which one has analytic control over confinement, while for large N it provides a simple fully reduced model for Yang-Mills theory. The construction is easily generalized to QCD and other QCD-like theories

  20. Center-stabilized Yang-Mills theory: Confinement and large N volume independence

    International Nuclear Information System (INIS)

    Uensal, Mithat; Yaffe, Laurence G.

    2008-01-01

    We examine a double trace deformation of SU(N) Yang-Mills theory which, for large N and large volume, is equivalent to unmodified Yang-Mills theory up to O(1/N^2) corrections. In contrast to the unmodified theory, large N volume independence is valid in the deformed theory down to arbitrarily small volumes. The double trace deformation prevents the spontaneous breaking of center symmetry which would otherwise disrupt large N volume independence in small volumes. For small values of N, if the theory is formulated on R^3 x S^1 with a sufficiently small compactification size L, then an analytic treatment of the nonperturbative dynamics of the deformed theory is possible. In this regime, we show that the deformed Yang-Mills theory has a mass gap and exhibits linear confinement. Increasing the circumference L or number of colors N decreases the separation of scales on which the analytic treatment relies. However, there are no order parameters which distinguish the small and large radius regimes. Consequently, for small N the deformed theory provides a novel example of a locally four-dimensional pure-gauge theory in which one has analytic control over confinement, while for large N it provides a simple fully reduced model for Yang-Mills theory. The construction is easily generalized to QCD and other QCD-like theories.

  1. Long-term changes in nutrients and mussel stocks are related to numbers of breeding eiders Somateria mollissima at a large Baltic colony.

    Directory of Open Access Journals (Sweden)

    Karsten Laursen

    Full Text Available BACKGROUND: The Baltic/Wadden Sea eider Somateria mollissima flyway population is decreasing, and this trend is also reflected in the large eider colony at Christiansø situated in the Baltic Sea. This colony showed a 15-fold increase from 1925 until the mid-1990's, followed by a rapid decline in recent years, although the causes of this trend remain unknown. Most birds from the colony winter in the Wadden Sea, from which environmental data and information on the size of the main diet, the mussel Mytilus edulis stock exists. We hypothesised that changes in nutrients and water temperature in the Wadden Sea had an effect on the ecosystem affecting the size of mussel stocks, the principal food item for eiders, thereby influencing the number of breeding eider in the Christiansø colony. METHODOLOGY/PRINCIPAL FINDING: A positive relationship between the amount of fertilizer used by farmers and the concentration of phosphorus in the Wadden Sea (with a time lag of one year allowed analysis of the predictions concerning effects of nutrients for the period 1925-2010. There was (1 increasing amounts of fertilizer used in agriculture and this increased the amount of nutrients in the marine environment thereby increasing the mussel stocks in the Wadden Sea. (2 The number of eiders at Christiansø increased when the amount of fertilizer increased. Finally (3 the number of eiders in the colony at Christiansø increased with the amount of mussel stocks in the Wadden Sea. CONCLUSIONS/SIGNIFICANCE: The trend in the number of eiders at Christiansø is representative for the entire flyway population, and since nutrient reduction in the marine environment occurs in most parts of Northwest Europe, we hypothesize that this environmental candidate parameter is involved in the overall regulation of the Baltic/Wadden Sea eider population during recent decades.

  2. The Brothel Phone Number

    DEFF Research Database (Denmark)

    Korsby, Trine Mygind

    2017-01-01

    Taking a point of departure in negotiations for access to a phone number for a brothel abroad, the article demonstrates how a group of pimps in Eastern Romania attempt to extend their local business into the rest of the EU. The article shows how the phone number works as a micro-infrastructure in … and how the pimps in turn cultivate and maximize uncertainty about themselves in others. When making the move to go abroad into unknown terrains, accessing the infrastructure generated by the phone number can provide certainty and consolidate one's position within criminal networks abroad. However, at the same time, mishandling the phone number can be dangerous and in that sense produce new doubts and uncertainties.

  3. Induction Motor with Switchable Number of Poles and Toroidal Winding

    Directory of Open Access Journals (Sweden)

    MUNTEANU, A.

    2011-05-01

    Full Text Available This paper presents a study of an induction motor provided with a toroidal stator winding. The ring-type coils offer higher versatility in obtaining different numbers of pole pairs by means of delta/star and series/parallel connections, respectively. As a consequence, the developed torque can vary within large limits and the motor can be utilized for applications that require, for example, high load torque values for a short time. The study involves experimental tests and FEM simulation for an induction machine with three configurations of pole pairs. The conclusions attest to the superiority of the toroidal winding for certain applications such as electric vehicles or lifting machines.

  4. Use of Two-Body Correlated Basis Functions with van der Waals Interaction to Study the Shape-Independent Approximation for a Large Number of Trapped Interacting Bosons

    Science.gov (United States)

    Lekala, M. L.; Chakrabarti, B.; Das, T. K.; Rampho, G. J.; Sofianos, S. A.; Adam, R. M.; Haldar, S. K.

    2017-05-01

    We study the ground-state and the low-lying excitations of a trapped Bose gas in an isotropic harmonic potential for very small (˜ 3) to very large (˜ 10^7) particle numbers. We use the two-body correlated basis functions and the shape-dependent van der Waals interaction in our many-body calculations. We present an exhaustive study of the effect of inter-atomic correlations and the accuracy of the mean-field equations considering a wide range of particle numbers. We calculate the ground-state energy and the one-body density for different values of the van der Waals parameter C6. We compare our results with those of the modified Gross-Pitaevskii results, the correlated Hartree hypernetted-chain equations (which also utilize the two-body correlated basis functions), as well as of the diffusion Monte Carlo for hard sphere interactions. We observe the effect of the attractive tail of the van der Waals potential in the calculations of the one-body density over the truly repulsive zero-range potential as used in the Gross-Pitaevskii equation and discuss the finite-size effects. We also present the low-lying collective excitations which are well described by a hydrodynamic model in the large particle limit.

  5. Exploration of large, rare copy number variants associated with psychiatric and neurodevelopmental disorders in individuals with anorexia nervosa.

    Science.gov (United States)

    Yilmaz, Zeynep; Szatkiewicz, Jin P; Crowley, James J; Ancalade, NaEshia; Brandys, Marek K; van Elburg, Annemarie; de Kovel, Carolien G F; Adan, Roger A H; Hinney, Anke; Hebebrand, Johannes; Gratacos, Monica; Fernandez-Aranda, Fernando; Escaramis, Georgia; Gonzalez, Juan R; Estivill, Xavier; Zeggini, Eleftheria; Sullivan, Patrick F; Bulik, Cynthia M

    2017-08-01

    Anorexia nervosa (AN) is a serious and heritable psychiatric disorder. To date, studies of copy number variants (CNVs) have been limited and inconclusive because of small sample sizes. We conducted a case-only genome-wide CNV survey in 1983 female AN cases included in the Genetic Consortium for Anorexia Nervosa. Following stringent quality control procedures, we investigated whether pathogenic CNVs in regions previously implicated in psychiatric and neurodevelopmental disorders were present in AN cases. We observed two instances of the well-established pathogenic CNVs in AN cases. In addition, one case had a deletion in the 13q12 region, overlapping with a deletion reported previously in two AN cases. As a secondary aim, we also examined our sample for CNVs over 1 Mbp in size. Out of the 40 instances of such large CNVs that were not implicated previously for AN or neuropsychiatric phenotypes, two of them contained genes with previous neuropsychiatric associations, and only five of them had no associated reports in public CNV databases. Although ours is the largest study of its kind in AN, larger datasets are needed to comprehensively assess the role of CNVs in the etiology of AN.

  6. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations

    Science.gov (United States)

    Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto

    2018-04-01

    Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 · 10^4, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Ro_t = -0.0909 to Ro_t = 0.3 are simulated. First, the LES of TC is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of c_s = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase for increasing rotation. This is attributed to the increasing anisotropic character of the fluctuations. Second, “over-damped” LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.

  7. Evaluation of protection factors provided by full-face masks using man-test method at workplace

    International Nuclear Information System (INIS)

    Izumi, Yukio; Kinouchi, Nobuyuki; Ikezawa, Yoshio.

    1994-01-01

    From a practical point of view, to estimate the protection factors (PFs) provided by full-face masks, a large number of PFs were measured with a man-test apparatus just before the wearers started radiation work in a radiation controlled area. PFs were measured for a total of 2,279 cases under five simulated working conditions. The measured PFs were widely distributed, from 2.3 to 6,700. About 95% of workers obtained PFs of more than 50, and about 64% showed much higher PFs of more than 1,000 due to good fitting. For some persons, the measured PFs varied irregularly and changed to a large degree. This method is a reliable technique that has been confirmed to protect against unexpected internal exposure. From the results obtained, the method appears necessary to provide a better mask and a higher PF for each worker. (author)

  8. A Numbers Game

    DEFF Research Database (Denmark)

    Levin, Bruce R; McCall, Ingrid C.; Perrot, Veronique

    2017-01-01

    We postulate that the inhibition of growth and low rates of mortality of bacteria exposed to ribosome-binding antibiotics deemed bacteriostatic can be attributed almost uniquely to these drugs reducing the number of ribosomes contributing to protein synthesis, i.e., the number of effective......-targeting bacteriostatic antibiotics, the time before these bacteria start to grow again when the drugs are removed, referred to as the post-antibiotic effect (PAE), is markedly greater for constructs with fewer rrn operons than for those with more rrn operons. We interpret the results of these other experiments reported...... here as support for the hypothesis that the reduction in the effective number of ribosomes due to binding to these structures provides a sufficient explanation for the action of bacteriostatic antibiotics that target these structures....

  9. Laboratory Study of Magnetorotational Instability and Hydrodynamic Stability at Large Reynolds Numbers

    Science.gov (United States)

    Ji, H.; Burin, M.; Schartman, E.; Goodman, J.; Liu, W.

    2006-01-01

    Two plausible mechanisms have been proposed to explain rapid angular momentum transport during accretion processes in astrophysical disks: nonlinear hydrodynamic instabilities and magnetorotational instability (MRI). A laboratory experiment in a short Taylor-Couette flow geometry has been constructed in Princeton to study both mechanisms, with novel features for better controls of the boundary-driven secondary flows (Ekman circulation). Initial results on hydrodynamic stability have shown negligible angular momentum transport in Keplerian-like flows with Reynolds numbers approaching one million, casting strong doubt on the viability of nonlinear hydrodynamic instability as a source for accretion disk turbulence.

  10. On the strong law of large numbers for $\\varphi$-subgaussian random variables

    OpenAIRE

    Zajkowski, Krzysztof

    2016-01-01

    For $p\\ge 1$ let $\\varphi_p(x)=x^2/2$ if $|x|\\le 1$ and $\\varphi_p(x)=1/p|x|^p-1/p+1/2$ if $|x|>1$. For a random variable $\\xi$ let $\\tau_{\\varphi_p}(\\xi)$ denote $\\inf\\{a\\ge 0:\\;\\forall_{\\lambda\\in\\mathbb{R}}\\; \\ln\\mathbb{E}\\exp(\\lambda\\xi)\\le\\varphi_p(a\\lambda)\\}$; $\\tau_{\\varphi_p}$ is a norm in a space $Sub_{\\varphi_p}=\\{\\xi:\\;\\tau_{\\varphi_p}(\\xi)1$) there exist positive constants $c$ and $\\alpha$ such that for every natural number $n$ the following inequality $\\tau_{\\varphi_p}(\\sum_{i=1...

  11. Visual analysis of inter-process communication for large-scale parallel computing.

    Science.gov (United States)

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

    In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also communication between the different processes, which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.
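
    To make the scalability point concrete, the following small sketch (synthetic events and made-up parameters, not the authors' tool) aggregates communication events from 16,384 simulated processes into a single time series of "processes communicating", the kind of summary view that stays readable where a one-row-per-process Gantt chart would not:

```python
# Aggregate-view sketch: count how many processes are communicating at each time.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_procs, t_end = 16384, 100.0
starts = rng.uniform(0.0, t_end, size=(n_procs, 5))      # 5 events per process
durations = rng.exponential(1.0, size=(n_procs, 5))

t = np.linspace(0.0, t_end, 500)
active = np.zeros_like(t)
for s, d in zip(starts.ravel(), durations.ravel()):
    active += (t >= s) & (t < s + d)

plt.plot(t, active)
plt.xlabel("time")
plt.ylabel("processes communicating")
plt.title("Aggregate communication view (16,384 processes)")
plt.savefig("aggregate_comm.png")
```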

  12. 19 CFR 24.5 - Filing identification number.

    Science.gov (United States)

    2010-04-01

    ... TREASURY CUSTOMS FINANCIAL AND ACCOUNTING PROCEDURE § 24.5 Filing identification number. (a) Generally..., the Social Security number. (2) If neither an Internal Revenue Service employer identification number nor a Social Security number has been assigned, the word “None” shall be written on the line provided...

  13. Is environmental sustainability a strategic priority for logistics service providers?

    Science.gov (United States)

    Evangelista, Pietro; Colicchia, Claudia; Creazza, Alessandro

    2017-08-01

    Although an increasing number of third-party logistics service providers (3PLs) regard environmental sustainability as a key area of management, there is still great uncertainty on how 3PLs implement environmental strategies and on how they translate green efforts into practice. Through a multiple case study analysis, this paper explores the environmental strategies of a sample of medium-sized 3PLs operating in Italy and the UK, in terms of environmental organizational culture, initiatives, and influencing factors. Our analysis shows that, although environmental sustainability is generally recognised as a strategic priority, a certain degree of diversity in the deployment of environmental strategies still exists. This paper is original since the extant literature on green strategies of 3PLs provides findings predominantly from a single country perspective and mainly investigates large/multinational organizations. It also provides indications to help managers of medium-sized 3PLs in positioning their business. This is particularly meaningful in the 3PL industry, where medium-sized organizations significantly contribute to the generated turnover and market value. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Advanced manipulator system for large hot cells

    International Nuclear Information System (INIS)

    Vertut, J.; Moreau, C.; Brossard, J.P.

    1981-01-01

    Large hot cells can be treated as extrapolations of smaller ones, being wider, higher or longer while retaining the same concept of mechanical master-slave manipulators and high-density windows. This concept leads to a large number of work stations and corresponding equipment, with a number of penetrations through the biological protection. When the large cell does not need permanent operation of a number of work stations, in particular to serve PIE machines and maintain the facility, the use of servo manipulators with a large supporting unit and extensive use of television appears optimal. Advances in the MA 23 and its supports will be described, including the extra facilities related to manipulator introduction and maintenance. The possibility of combining a powered manipulator and an MA 23 (single or pair) on the same boom-crane system will be described. An advanced control system that minimizes the dead time in controlling support movement, in association with master-slave arm operation, is under development. The general television system includes overview cameras, associated with the limited number of windows, and manipulator cameras. A special new system will be described which provides automatic control of the manipulator cameras and saves operator load and dead time. Full-scale tests with the MA 23 and its support will be discussed. (author)

  15. Three-Dimensional Interaction of a Large Number of Dense DEP Particles on a Plane Perpendicular to an AC Electrical Field

    Directory of Open Access Journals (Sweden)

    Chuanchuan Xie

    2017-01-01

    The interaction of dielectrophoresis (DEP) particles in an electric field has been observed in many experiments, known as the “particle chains phenomenon”. However, the study in 3D models (spherical particles) is rarely reported due to its complexity and significant computational cost. In this paper, we employed the iterative dipole moment (IDM) method to study the 3D interaction of a large number of dense DEP particles randomly distributed on a plane perpendicular to a uniform alternating current (AC) electric field in a bounded or unbounded space. The numerical results indicated that the particles cannot move out of the initial plane. Similar particles (either all positive or all negative DEP particles) always repelled each other and did not form a chain. Dissimilar particles (a mixture of positive and negative DEP particles) always attracted each other and formed particle chains consisting of alternately arranged positive and negative DEP particles. The particle chain patterns can vary widely depending on the initial particle distribution, the electric properties of the particles/fluid, the particle sizes and the number of particles. It is also found that the particle chain patterns can be effectively manipulated by tuning the frequency of the AC field, and an almost uniform distribution of particles in a bounded plane chip can be achieved when all of the particles are similar, which may have potential applications in particle manipulation in microfluidics.
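
    The positive/negative DEP behaviour referred to above is set by the sign of the real part of the Clausius-Mossotti factor. The sketch below (illustrative particle and fluid properties, not values from the paper) shows how tuning the AC frequency can flip that sign, which is the handle used to manipulate chain patterns:

```python
# Real part of the Clausius-Mossotti factor K = (e*_p - e*_f) / (e*_p + 2 e*_f),
# with complex permittivities e* = eps*EPS0 - i*sigma/omega; Re(K) > 0 -> positive DEP.
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cm_factor(freq_hz, eps_p, sig_p, eps_f, sig_f):
    w = 2.0 * np.pi * freq_hz
    ep = eps_p * EPS0 - 1j * sig_p / w
    ef = eps_f * EPS0 - 1j * sig_f / w
    return (ep - ef) / (ep + 2.0 * ef)

for f in np.logspace(3, 8, 6):
    # polystyrene-like bead (eps_r ~ 2.5) in a weakly conducting aqueous medium
    k = cm_factor(f, eps_p=2.5, sig_p=1e-2, eps_f=78.0, sig_f=1e-4)
    print(f"{f:12.0f} Hz   Re(K) = {k.real:+.3f}   "
          + ("positive DEP" if k.real > 0 else "negative DEP"))
```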

  16. On designing of a low leakage patient-centric provider network.

    Science.gov (United States)

    Zheng, Yuchen; Lin, Kun; White, Thomas; Pickreign, Jeremy; Yuen-Reed, Gigi

    2018-03-27

    detect leakage, gaining insight on how to minimize leakage. We identify patient-driven provider organizations by surfacing providers who share a large number of patients. By analyzing the import-export behavior of each identified community using a novel approach, and by profiling the patient and provider composition of each community, we identify the key features of having a balanced number of PCPs and specialists and of provider heterogeneity.

  17. SU-E-T-230: Creating a Large Number of Focused Beams with Variable Patient Head Tilt to Improve Dose Fall-Off for Brain Radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Chiu, J; Ma, L [Department of Radiation Oncology, University of California San Francisco School of Medicine, San Francisco, CA (United States)

    2015-06-15

    Purpose: To develop a treatment delivery and planning strategy by increasing the number of beams to minimize dose to brain tissue surrounding a target, while maximizing dose coverage to the target. Methods: We analyzed 14 different treatment plans via Leksell PFX and 4C. For standardization, single tumor cases were chosen. Original treatment plans were compared with two optimized plans. The number of beams was increased in treatment plans by varying tilt angles of the patient head, while maintaining original isocenter and the beam positions in the x-, y- and z-axes, collimator size, and beam blocking. PFX optimized plans increased beam numbers with three pre-set tilt angles (70, 90, and 110 degrees), and 4C optimized plans increased beam numbers with tilt angles varying arbitrarily over a range of 30 to 150 degrees. Optimized treatment plans were compared dosimetrically with original treatment plans. Results: Comparing total normal tissue isodose volumes between original and optimized plans, the low-level percentage isodose volumes decreased in all plans. Despite the addition of multiple beams up to a factor of 25, beam-on times for 1 tilt angle versus 3 or more tilt angles were comparable (<1 min.). In 64% (9/14) of the studied cases, the volume percentage decreased by >5%, with the highest value reaching 19%. The addition of more tilt angles correlates with a greater decrease in normal brain irradiated volume. Selectivity and coverage for original and optimized plans remained comparable. Conclusion: Adding a large number of additional focused beams with variable patient head tilt improves dose fall-off for brain radiosurgery. The study demonstrates technical feasibility of adding beams to decrease target volume.

  18. SU-E-T-230: Creating a Large Number of Focused Beams with Variable Patient Head Tilt to Improve Dose Fall-Off for Brain Radiosurgery

    International Nuclear Information System (INIS)

    Chiu, J; Ma, L

    2015-01-01

    Purpose: To develop a treatment delivery and planning strategy by increasing the number of beams to minimize dose to brain tissue surrounding a target, while maximizing dose coverage to the target. Methods: We analyzed 14 different treatment plans via Leksell PFX and 4C. For standardization, single tumor cases were chosen. Original treatment plans were compared with two optimized plans. The number of beams was increased in treatment plans by varying tilt angles of the patient head, while maintaining original isocenter and the beam positions in the x-, y- and z-axes, collimator size, and beam blocking. PFX optimized plans increased beam numbers with three pre-set tilt angles (70, 90, and 110 degrees), and 4C optimized plans increased beam numbers with tilt angles varying arbitrarily over a range of 30 to 150 degrees. Optimized treatment plans were compared dosimetrically with original treatment plans. Results: Comparing total normal tissue isodose volumes between original and optimized plans, the low-level percentage isodose volumes decreased in all plans. Despite the addition of multiple beams up to a factor of 25, beam-on times for 1 tilt angle versus 3 or more tilt angles were comparable (<1 min.). In 64% (9/14) of the studied cases, the volume percentage decreased by >5%, with the highest value reaching 19%. The addition of more tilt angles correlates with a greater decrease in normal brain irradiated volume. Selectivity and coverage for original and optimized plans remained comparable. Conclusion: Adding a large number of additional focused beams with variable patient head tilt improves dose fall-off for brain radiosurgery. The study demonstrates technical feasibility of adding beams to decrease target volume.

  19. Monte Carlo reactor calculation with substantially reduced number of cycles

    International Nuclear Information System (INIS)

    Lee, M. J.; Joo, H. G.; Lee, D.; Smith, K.

    2012-01-01

    A new Monte Carlo (MC) eigenvalue calculation scheme that substantially reduces the number of cycles is introduced with the aid of a coarse mesh finite difference (CMFD) formulation. First, it is confirmed in terms of pin power errors that using extremely many particles, resulting in short active cycles, is beneficial even in the conventional MC scheme, although wasted operations in inactive cycles cannot be reduced with more particles. A CMFD-assisted MC scheme is then introduced in an effort to reduce the number of inactive cycles, and the fast convergence behavior and reduced inter-cycle effect of the CMFD-assisted MC calculation are investigated in detail. As a practical means of providing a good initial fission source distribution, an assembly-based few-group condensation and homogenization scheme is introduced, and it is shown that efficient MC eigenvalue calculations with fewer than 20 total cycles (including inactive cycles) are possible for large power reactor problems. (authors)

  20. Small on the Left, Large on the Right: Numbers Orient Visual Attention onto Space in Preverbal Infants

    Science.gov (United States)

    Bulf, Hermann; de Hevia, Maria Dolores; Macchi Cassia, Viola

    2016-01-01

    Numbers are represented as ordered magnitudes along a spatially oriented number line. While culture and formal education modulate the direction of this number-space mapping, it is a matter of debate whether its emergence is entirely driven by cultural experience. By registering 8-9-month-old infants' eye movements, this study shows that numerical…

  1. Experimental study on the Reynolds number dependence of turbulent mixing in a rod bundle

    International Nuclear Information System (INIS)

    Silin, Nicolas; Juanico, Luis

    2006-01-01

    An experimental study of the Reynolds number dependence of the turbulent mixing between fuel-bundle subchannels was performed. The measurements were done on a triangular-array bundle with a 1.20 pitch-to-diameter ratio and 10 mm rod diameter, in a low-pressure water loop, at Reynolds numbers between 1.4 × 10³ and 1.3 × 10⁵. The high accuracy of the results was obtained by improving a recently developed thermal tracing technique. The Reynolds exponent of the mixing rate correlation was obtained with two-digit accuracy for Reynolds numbers greater than 3 × 10³. A marked increase in the mixing rate was also found for lower Reynolds numbers. The weak theoretical basis of the accepted Reynolds dependence, as well as its ambiguous supporting experimental data, was pointed out in light of these findings. The present results also provide indirect information about dominant large-scale flow pulsations at different flow regimes

  2. An analytical solution of Richards' equation providing the physical basis of SCS curve number method and its proportionality relationship

    Science.gov (United States)

    Hooshyar, Milad; Wang, Dingbao

    2016-08-01

    The empirical proportionality relationship, which indicates that the ratios of cumulative surface runoff and infiltration to their corresponding potentials are equal, is the basis of the extensively used Soil Conservation Service Curve Number (SCS-CN) method. The objective of this paper is to provide the physical basis of the SCS-CN method and its proportionality hypothesis from the infiltration excess runoff generation perspective. To achieve this purpose, an analytical solution of Richards' equation is derived for ponded infiltration in a shallow water table environment under the following boundary conditions: (1) the soil is saturated at the land surface; and (2) there is a no-flux boundary which moves downward. The solution is established based on the assumptions of negligible gravitational effect, constant soil water diffusivity, and a hydrostatic soil moisture profile between the no-flux boundary and the water table. Based on the derived analytical solution, the proportionality hypothesis is a reasonable approximation for rainfall partitioning at the early stage of ponded infiltration in areas with a shallow water table for coarse-textured soils.
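
    For reference, the proportionality hypothesis discussed above is conventionally written as below (standard SCS-CN notation, not reproduced from the paper); combined with the water balance it yields the familiar curve-number runoff equation:

```latex
% Proportionality hypothesis and the SCS-CN runoff equation it implies.
\[
  \frac{Q}{P - I_a} = \frac{F}{S},
  \qquad
  P - I_a = Q + F
  \quad\Longrightarrow\quad
  Q = \frac{(P - I_a)^2}{P - I_a + S},
\]
% where $Q$ is cumulative surface runoff, $F$ cumulative retention (infiltration),
% $P$ cumulative rainfall, $I_a$ the initial abstraction, and $S$ the potential
% maximum retention, related to the curve number by $S = 1000/\mathrm{CN} - 10$ (inches).
```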

  3. The natural number bias and its role in rational number understanding in children with dyscalculia. Delay or deficit?

    Science.gov (United States)

    Van Hoof, Jo; Verschaffel, Lieven; Ghesquière, Pol; Van Dooren, Wim

    2017-12-01

    Previous research indicated that in several cases learners' errors on rational number tasks can be attributed to learners' tendency to (wrongly) apply natural number properties. There exists a large body of literature both on learners' struggle with understanding the rational number system and on the role of the natural number bias in this struggle. However, little is known about this phenomenon in learners with dyscalculia. We investigated the rational number understanding of learners with dyscalculia and compared it with the rational number understanding of learners without dyscalculia. Three groups of learners were included: sixth graders with dyscalculia, a chronological age match group, and an ability match group. The results showed that the rational number understanding of learners with dyscalculia is significantly lower than that of typically developing peers, but not significantly different from younger learners, even after statistically controlling for mathematics achievement. Next to a delay in their mathematics achievement, learners with dyscalculia seem to have an extra delay in their rational number understanding, compared with peers. This is especially the case in those rational number tasks where one has to inhibit natural number knowledge to come to the right answer. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. A large-scale survey of genetic copy number variations among Han Chinese residing in Taiwan

    Directory of Open Access Journals (Sweden)

    Wu Jer-Yuarn

    2008-12-01

    Background: Copy number variations (CNVs) have recently been recognized as important structural variations in the human genome. CNVs can affect gene expression and thus may contribute to phenotypic differences. The copy number inferring tool (CNIT) is an effective hidden Markov model-based algorithm for estimating allele-specific copy number and predicting chromosomal alterations from single nucleotide polymorphism microarrays. The CNIT algorithm, which was constructed using data from 270 HapMap multi-ethnic individuals, was applied to identify CNVs from 300 unrelated Han Chinese individuals in Taiwan. Results: Using stringent selection criteria, 230 regions with variable copy numbers were identified in the Han Chinese population; 133 (57.83%) had been reported previously, 64 displayed greater than 1% CNV allele frequency. The average size of the CNV regions was 322 kb (ranging from 1.48 kb to 5.68 Mb) and covered a total of 2.47% of the human genome. A total of 196 of the CNV regions were simple deletions and 27 were simple amplifications. There were 449 genes and 5 microRNAs within these CNV regions; some of these genes are known to be associated with diseases. Conclusion: The identified CNVs are characteristic of the Han Chinese population and should be considered when genetic studies are conducted. The CNV distribution in the human genome is still poorly characterized, and there is much diversity among different ethnic populations.

  5. Self-* and Adaptive Mechanisms for Large Scale Distributed Systems

    Science.gov (United States)

    Fragopoulou, P.; Mastroianni, C.; Montero, R.; Andrjezak, A.; Kondo, D.

    Large-scale distributed computing systems and infrastructure, such as Grids, P2P systems and desktop Grid platforms, are decentralized, pervasive, and composed of a large number of autonomous entities. The complexity of these systems is such that human administration is nearly impossible and centralized or hierarchical control is highly inefficient. These systems need to run on highly dynamic environments, where content, network topologies and workloads are continuously changing. Moreover, they are characterized by the high degree of volatility of their components and the need to provide efficient service management and to handle efficiently large amounts of data. This paper describes some of the areas for which adaptation emerges as a key feature, namely, the management of computational Grids, the self-management of desktop Grid platforms and the monitoring and healing of complex applications. It also elaborates on the use of bio-inspired algorithms to achieve self-management. Related future trends and challenges are described.

  6. IEEE Standard for Floating Point Numbers

    Indian Academy of Sciences (India)

    IAS Admin

    Floating point numbers are an important data type in computation which is used ... quite large! Integers are ... exp, the value of the exponent will be taken as (exp − 127). The ... bit which is truncated is 1, add 1 to the least significant bit, else.
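
    The biased-exponent rule quoted above (the stored field minus 127 gives the exponent of an IEEE 754 single-precision number) is easy to verify directly; the snippet below is a sketch written for this listing, not code from the article:

```python
# Unpack a float32 bit pattern into sign, stored (biased) exponent and mantissa.
import struct

def decode_float32(x: float):
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exp_field = (bits >> 23) & 0xFF       # stored exponent field
    mantissa = bits & 0x7FFFFF
    return sign, exp_field, exp_field - 127, mantissa   # exponent value = exp - 127

if __name__ == "__main__":
    print(decode_float32(1.0))   # (0, 127, 0, 0): stored field 127 encodes 2**0
    print(decode_float32(6.5))   # 6.5 = 1.625 * 2**2, so the stored field is 129
```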

  7. A large-scale multi-objective flights conflict avoidance approach supporting 4D trajectory operation

    OpenAIRE

    Guan, Xiangmin; Zhang, Xuejun; Lv, Renli; Chen, Jun; Weiszer, Michal

    2017-01-01

    Recently, long-term conflict avoidance approaches based on large-scale flight scheduling have attracted much attention due to their ability to provide solutions from a global point of view. However, the current approaches, which focus only on a single objective with the aim of minimizing the total delay and the number of conflicts, cannot provide controllers with a variety of optional solutions representing different trade-offs. Furthermore, the flight track error is often overlooked i...

  8. A large scale flexible real-time communications topology for the LHC accelerator

    CERN Document Server

    Lauckner, R J; Ribeiro, P; Wijnands, Thijs

    1999-01-01

    The LHC design parameters impose very stringent beam control requirements in order to reach the nominal performance. Prompted by the lack of accurate models to predict field behaviour in superconducting magnet systems, the control system of the accelerator will provide flexible feedback channels between monitors and magnets around the 27 km circumference machine. The implementation of feedback systems composed of a large number of sparsely located elements presents some interesting challenges. Our goal was to find a topology where the control loop requirements (number and distribution of nodes, latency and throughput) could be guaranteed without compromising flexibility. Our proposal is to federate a number of well-known technologies and concepts, namely ATM, WorldFIP and RTOS, into a general framework. (6 refs).

  9. The large Reynolds number - Asymptotic theory of turbulent boundary layers.

    Science.gov (United States)

    Mellor, G. L.

    1972-01-01

    A self-consistent, asymptotic expansion of the one-point, mean turbulent equations of motion is obtained. Results such as the velocity defect law and the law of the wall evolve in a relatively rigorous manner, and a systematic ordering of the mean velocity boundary layer equations and their interaction with the main stream flow are obtained. The analysis is extended to the turbulent energy equation and to a treatment of the small scale equilibrium range of Kolmogoroff; in velocity correlation space the two-thirds power law is obtained. Thus, the two well-known 'laws' of turbulent flow are imbedded in an analysis which provides a great deal of other information.
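
    The two classical "laws" the abstract refers to are, in their standard forms (given here for orientation, not quoted from the paper):

```latex
% Law of the wall and the Kolmogorov two-thirds law in the inertial range.
\[
  \frac{\bar u}{u_\tau} = \frac{1}{\kappa}\,\ln\!\left(\frac{y\,u_\tau}{\nu}\right) + B,
  \qquad
  \bigl\langle [\Delta u(r)]^2 \bigr\rangle \sim C\,(\varepsilon r)^{2/3},
\]
% with $u_\tau$ the friction velocity, $\kappa$ the von Karman constant, $\nu$ the
% kinematic viscosity, $\varepsilon$ the dissipation rate and $r$ an inertial-range separation.
```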

  10. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  11. Talking probabilities: communicating probabilistic information with words and numbers

    NARCIS (Netherlands)

    Renooij, S.; Witteman, C.L.M.

    1999-01-01

    The number of knowledge-based systems that build on Bayesian belief networks is increasing. The construction of such a network however requires a large number of probabilities in numerical form. This is often considered a major obstacle, one of the reasons being that experts are reluctant to

  12. Changing the Face of Traditional Education: A Framework for Adapting a Large, Residential Course to the Web

    Directory of Open Access Journals (Sweden)

    Maureen Ellis

    2007-07-01

    At large research universities, a common approach for teaching hundreds of undergraduate students at one time is the traditional, large, lecture-based course. Trends indicate that over the next decade there will be an increase in the number of large campus courses being offered as well as larger enrollments in courses currently offered. As universities investigate alternative means to accommodate more students and their learning needs, Web-based instruction provides an attractive delivery mode for teaching large, on-campus courses. This article explores a theoretical approach regarding how Web-based instruction can be designed and developed to provide quality education for traditional, on-campus, undergraduate students. The academic debate over the merit of Web-based instruction for traditional, on-campus students has not been resolved. This study identifies and discusses instructional design theory for adapting a large, lecture-based course to the Web.

  13. Kernel methods for large-scale genomic data analysis

    Science.gov (United States)

    Xing, Eric P.; Schaid, Daniel J.

    2015-01-01

    Machine learning, particularly kernel methods, has been demonstrated as a promising new tool to tackle the challenges imposed by today’s explosive data growth in genomics. They provide a practical and principled approach to learning how a large number of genetic variants are associated with complex phenotypes, to help reveal the complexity in the relationship between the genetic markers and the outcome of interest. In this review, we highlight the potential key role it will have in modern genomic data processing, especially with regard to integration with classical methods for gene prioritizing, prediction and data fusion. PMID:25053743
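
    As a minimal, hypothetical illustration of the kind of kernel method the review discusses (toy genotype data and made-up parameters, not an analysis from the paper), kernel ridge regression relating many markers to a phenotype might look like this:

```python
# Kernel ridge regression on simulated SNP genotypes (0/1/2) and a quantitative trait.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n_samples, n_snps = 200, 5000
X = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)
beta = np.zeros(n_snps)
beta[:50] = rng.normal(0.0, 0.3, 50)                 # 50 causal variants
y = X @ beta + rng.normal(0.0, 1.0, n_samples)

model = KernelRidge(kernel="rbf", alpha=1.0, gamma=1.0 / n_snps)
model.fit(X[:150], y[:150])                          # train on 150 samples
pred = model.predict(X[150:])                        # predict the held-out 50
print("held-out correlation:", np.corrcoef(pred, y[150:])[0, 1])
```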

  14. Providing open hydrological data for decision making and research - hypeweb.smhi.se

    Science.gov (United States)

    Strömbäck, Lena; Andersson, Jafet; Donnelly, Chantal; Gustafsson, David; Isberg, Kristina; Pechlivanidis, Ilias; Strömqvist, Johan; Arheimer, Berit

    2015-04-01

    Following the EU open data strategy, the Swedish Meteorological and Hydrological Institute (SMHI) is making large parts of its databases openly available. These data range from historical observations to climate predictions in various areas such as weather, oceanography and hydrology. In this presentation we will focus on the work on making hydrological data openly available. Hydrological modelling demands large amounts of spatial data, such as soil properties, land use, topography, lakes and reservoirs, ice and snow coverage, water management (e.g. irrigation patterns and regulations), meteorological data and observed water discharge in rivers. By using such data, the hydrological model will in turn provide new data that can be used for new purposes (i.e. re-purposing). In the presentation we will focus on how readily available open data from public portals have been re-purposed by using the Hydrological Predictions for the Environment (HYPE) model in a number of large-scale model applications covering numerous subbasins and rivers. HYPE is a dynamic, semi-distributed, process-based, and integrated catchment model. So far, the following regional domains have been modelled with different resolutions (number of subbasins in brackets): Sweden (37 000), Europe (35 000), Arctic basin (30 000), La Plata River (6 000), Niger River (800), Middle-East North-Africa (31 000), and the Indian subcontinent (6 000). The model output is launched as new Open Data at the web site www.hypeweb.smhi.se. The web site provides several interactive applications for exploring results from the models. The user can explore an overview of various water variables for historical and future conditions. Moreover, the user can explore and download historical time series of discharge for each basin and explore the performance of the model against observed river flow. The available results can be used for many different purposes, including: (i) Climate change impact assessments on water

  15. 76 FR 79607 - Local Number Portability Porting Interval and Validation Requirements; Telephone Number Portability

    Science.gov (United States)

    2011-12-22

    ... customer's account; a positive indication that the new service provider has the authority from the customer... comments. Email: [email protected] , and include the following words in the body of the message, ``get form.'' A... telephone number associated with the customer's account; a positive indication that the new service provider...

  16. Weak coupling large-N transitions at finite baryon density

    NARCIS (Netherlands)

    Hollowood, Timothy J.; Kumar, S. Prem; Myers, Joyce C.

    We study thermodynamics of free SU(N) gauge theory with a large number of colours and flavours on a three-sphere, in the presence of a baryon number chemical potential. Reducing the system to a holomorphic large-N matrix integral, paying specific attention to theories with scalar flavours (squarks),

  17. Networking for large-scale science: infrastructure, provisioning, transport and application mapping

    International Nuclear Information System (INIS)

    Rao, Nageswara S; Carter, Steven M; Wu Qishi; Wing, William R; Zhu Mengxia; Mezzacappa, Anthony; Veeraraghavan, Malathi; Blondin, John M

    2005-01-01

    Large-scale science computations and experiments require unprecedented network capabilities in the form of large bandwidth and dynamically stable connections to support data transfers, interactive visualizations, and monitoring and steering operations. A number of component technologies dealing with the infrastructure, provisioning, transport and application mappings must be developed and/or optimized to achieve these capabilities. We present a brief account of the following technologies that contribute toward achieving these network capabilities: (a) DOE UltraScienceNet and NSF CHEETAH network testbeds that provide on-demand and scheduled dedicated network connections; (b) experimental results on transport protocols that achieve close to 100% utilization on dedicated 1Gbps wide-area channels; (c) a scheme for optimally mapping a visualization pipeline onto a network to minimize the end-to-end delays; and (d) interconnect configuration and protocols that provides multiple Gbps flows from Cray X1 to external hosts

  18. Networking for large-scale science: infrastructure, provisioning, transport and application mapping

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Nageswara S [Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Carter, Steven M [Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Wu Qishi [Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Wing, William R [Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Zhu Mengxia [Department of Computer Science, Louisiana State University, Baton Rouge, LA 70803 (United States); Mezzacappa, Anthony [Physics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Veeraraghavan, Malathi [Department of Computer Science, University of Virginia, Charlottesville, VA 22904 (United States); Blondin, John M [Department of Physics, North Carolina State University, Raleigh, NC 27695 (United States)

    2005-01-01

    Large-scale science computations and experiments require unprecedented network capabilities in the form of large bandwidth and dynamically stable connections to support data transfers, interactive visualizations, and monitoring and steering operations. A number of component technologies dealing with the infrastructure, provisioning, transport and application mappings must be developed and/or optimized to achieve these capabilities. We present a brief account of the following technologies that contribute toward achieving these network capabilities: (a) DOE UltraScienceNet and NSF CHEETAH network testbeds that provide on-demand and scheduled dedicated network connections; (b) experimental results on transport protocols that achieve close to 100% utilization on dedicated 1Gbps wide-area channels; (c) a scheme for optimally mapping a visualization pipeline onto a network to minimize the end-to-end delays; and (d) interconnect configuration and protocols that provides multiple Gbps flows from Cray X1 to external hosts.

  19. Large-Scale Cooperative Task Distribution on Peer-to-Peer Networks

    Science.gov (United States)

    2012-01-01

    ... disadvantages of ML-Chord are its fixed size (two layers) and limited scalability for large-scale systems. RC-Chord extends ML-... configurable before runtime. This can be improved by incorporating a distributed learning algorithm to tune the number and range of the DLoE tracking

  20. The CTIO surveys for large redshift quasars

    International Nuclear Information System (INIS)

    Osmer, P.S.

    1978-01-01

    Lyman α emission in large redshift quasars is readily detectable on slitless spectrograms taken with an objective combination on the 4 m telescope. This provides a new survey method, independent of color, for finding radio-quiet quasars in large numbers. Surveys by Smith with the Curtis Schmidt and by Hoag and Smith with the 4 m telescope have produced more than 200 candidates with 1.5 < z < 3.5 and 16 < m < 21. Spectroscopic observations with the CTIO SIT vidicon system have been carried out for more than 50 of the candidates, with the result that the basic properties of the surveys are known. To date three 16th magnitude quasars with z ≈ 2.2 and six quasars with 3.0 < z < 3.25 have been found. One of the most important uses of the surveys will be the determination of the surface and space densities of large redshift quasars. A preliminary analysis of the data indicates that the space density of quasars is at least constant, if not increasing, over the interval 1.0 < z < 3.25. However, the Hoag-Smith sample has only one candidate with z < 3.2. (Auth.)

  1. Formation of free round jets with long laminar regions at large Reynolds numbers

    Science.gov (United States)

    Zayko, Julia; Teplovodskii, Sergey; Chicherina, Anastasia; Vedeneev, Vasily; Reshmin, Alexander

    2018-04-01

    The paper describes a new, simple method for the formation of free round jets with long laminar regions by a jet-forming device of ~1.5 jet diameters in size. Submerged jets of 0.12 m diameter at Reynolds numbers of 2000-12 560 are experimentally studied. It is shown that for the optimal regime the laminar region length reaches 5.5 diameters at a Reynolds number of ~10 000, which is not achievable with other methods of laminar jet formation. To explain the existence of the optimal regime, a steady flow calculation in the forming unit and a stability analysis of the outcoming jet velocity profiles are conducted. The shortening of the laminar regions, compared with the optimal regime, is explained by the higher incoming turbulence level at lower velocities and by the increase of perturbation growth rates at larger velocities. The initial laminar regions of free jets can be used to organise air curtains for the protection of objects in medicine and technology, by creating an air field with the desired properties that is not mixed with ambient air. Free jets with long laminar regions can also be used for detailed studies of perturbation growth and transition to turbulence in round jets.

  2. Large-Scale Survey Findings Inform Patients’ Experiences in Using Secure Messaging to Engage in Patient-Provider Communication and Self-Care Management: A Quantitative Assessment

    Science.gov (United States)

    Patel, Nitin R; Lind, Jason D; Antinori, Nicole

    2015-01-01

    Background: Secure email messaging is part of a national transformation initiative in the United States to promote new models of care that support enhanced patient-provider communication. To date, only a limited number of large-scale studies have evaluated users’ experiences in using secure email messaging. Objective: To quantitatively assess veteran patients’ experiences in using secure email messaging in a large patient sample. Methods: A cross-sectional mail-delivered paper-and-pencil survey study was conducted with a sample of respondents identified as registered for the Veteran Health Administrations’ Web-based patient portal (My HealtheVet) and opted to use secure messaging. The survey collected demographic data, assessed computer and health literacy, and secure messaging use. Analyses conducted on survey data include frequencies and proportions, chi-square tests, and one-way analysis of variance. Results: The majority of respondents (N=819) reported using secure messaging 6 months or longer (n=499, 60.9%). They reported secure messaging to be helpful for completing medication refills (n=546, 66.7%), managing appointments (n=343, 41.9%), looking up test results (n=350, 42.7%), and asking health-related questions (n=340, 41.5%). Notably, some respondents reported using secure messaging to address sensitive health topics (n=67, 8.2%). Survey responses indicated that younger age (P=.039) and higher levels of education (P=.025) and income (P=.003) were associated with more frequent use of secure messaging. Females were more likely to report using secure messaging more often, compared with their male counterparts (P=.098). Minorities were more likely to report using secure messaging more often, at least once a month, compared with nonminorities (P=.086). Individuals with higher levels of health literacy reported more frequent use of secure messaging (P=.007), greater satisfaction (P=.002), and indicated that secure messaging is a useful (P=.002) and easy

  3. Large-Scale Survey Findings Inform Patients' Experiences in Using Secure Messaging to Engage in Patient-Provider Communication and Self-Care Management: A Quantitative Assessment.

    Science.gov (United States)

    Haun, Jolie N; Patel, Nitin R; Lind, Jason D; Antinori, Nicole

    2015-12-21

    Secure email messaging is part of a national transformation initiative in the United States to promote new models of care that support enhanced patient-provider communication. To date, only a limited number of large-scale studies have evaluated users' experiences in using secure email messaging. To quantitatively assess veteran patients' experiences in using secure email messaging in a large patient sample. A cross-sectional mail-delivered paper-and-pencil survey study was conducted with a sample of respondents identified as registered for the Veteran Health Administrations' Web-based patient portal (My HealtheVet) and opted to use secure messaging. The survey collected demographic data, assessed computer and health literacy, and secure messaging use. Analyses conducted on survey data include frequencies and proportions, chi-square tests, and one-way analysis of variance. The majority of respondents (N=819) reported using secure messaging 6 months or longer (n=499, 60.9%). They reported secure messaging to be helpful for completing medication refills (n=546, 66.7%), managing appointments (n=343, 41.9%), looking up test results (n=350, 42.7%), and asking health-related questions (n=340, 41.5%). Notably, some respondents reported using secure messaging to address sensitive health topics (n=67, 8.2%). Survey responses indicated that younger age (P=.039) and higher levels of education (P=.025) and income (P=.003) were associated with more frequent use of secure messaging. Females were more likely to report using secure messaging more often, compared with their male counterparts (P=.098). Minorities were more likely to report using secure messaging more often, at least once a month, compared with nonminorities (P=.086). Individuals with higher levels of health literacy reported more frequent use of secure messaging (P=.007), greater satisfaction (P=.002), and indicated that secure messaging is a useful (P=.002) and easy-to-use (P≤.001) communication tool, compared

  4. TIME DISTRIBUTIONS OF LARGE AND SMALL SUNSPOT GROUPS OVER FOUR SOLAR CYCLES

    International Nuclear Information System (INIS)

    Kilcik, A.; Yurchyshyn, V. B.; Abramenko, V.; Goode, P. R.; Cao, W.; Ozguc, A.; Rozelot, J. P.

    2011-01-01

    Here we analyze solar activity by focusing on time variations of the number of sunspot groups (SGs) as a function of their modified Zurich class. We analyzed data for solar cycles 20-23 by using Rome (cycles 20 and 21) and Learmonth Solar Observatory (cycles 22 and 23) SG numbers. All SGs recorded during these time intervals were separated into two groups. The first group includes small SGs (A, B, C, H, and J classes by Zurich classification), and the second group consists of large SGs (D, E, F, and G classes). We then calculated small and large SG numbers from their daily mean numbers as observed on the solar disk during a given month. We report that the time variations of small and large SG numbers are asymmetric except for solar cycle 22. In general, large SG numbers appear to reach their maximum in the middle of the solar cycle (phases 0.45-0.5), while the international sunspot numbers and the small SG numbers generally peak much earlier (solar cycle phases 0.29-0.35). Moreover, the 10.7 cm solar radio flux, the facular area, and the maximum coronal mass ejection speed show better agreement with the large SG numbers than they do with the small SG numbers. Our results suggest that the large SG numbers are more likely to shed light on solar activity and its geophysical implications. Our findings may also influence our understanding of long-term variations of the total solar irradiance, which is thought to be an important factor in the Sun-Earth climate relationship.

  5. Altermatic number of categorical product of graphs

    DEFF Research Database (Denmark)

    Alishahi, Meysam; Hajiabolhassan, Hossein

    2018-01-01

    In this paper, we prove some relaxations of Hedetniemi's conjecture in terms of altermatic number and strong altermatic number of graphs, two combinatorial parameters introduced by the present authors Alishahi and Hajiabolhassan (2015) providing two sharp lower bounds for the chromatic number of ...

  6. Graphing Powers and Roots of Complex Numbers.

    Science.gov (United States)

    Embse, Charles Vonder

    1993-01-01

    Using De Moivre's theorem and a parametric graphing utility, examines powers and roots of complex numbers and allows students to establish connections between the visual and numerical representations of complex numbers. Provides a program to numerically verify the roots of complex numbers. (MDH)
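
    A short sketch in Python (hypothetical, not the program mentioned in the abstract) of the same idea: De Moivre's theorem gives all n-th roots of a complex number, and raising each root back to the n-th power verifies them numerically:

```python
# n-th roots via De Moivre: if z = r(cos t + i sin t), the roots are
# r**(1/n) * (cos((t + 2*pi*k)/n) + i sin((t + 2*pi*k)/n)) for k = 0..n-1.
import cmath

def nth_roots(z: complex, n: int):
    r, theta = cmath.polar(z)
    return [cmath.rect(r ** (1.0 / n), (theta + 2.0 * cmath.pi * k) / n)
            for k in range(n)]

if __name__ == "__main__":
    for w in nth_roots(-8j, 3):        # the three cube roots of -8i
        print(w, "->", w ** 3)         # each w**3 should return (close to) -8i
```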

  7. Contribution of large scale coherence to wind turbine power: A large eddy simulation study in periodic wind farms

    Science.gov (United States)

    Chatterjee, Tanmoy; Peet, Yulia T.

    2018-03-01

    Length scales of eddies involved in the power generation of infinite wind farms are studied by analyzing the spectra of the turbulent flux of mean kinetic energy (MKE) from large eddy simulations (LES). Large-scale structures with an order of magnitude bigger than the turbine rotor diameter (D ) are shown to have substantial contribution to wind power. Varying dynamics in the intermediate scales (D -10 D ) are also observed from a parametric study involving interturbine distances and hub height of the turbines. Further insight about the eddies responsible for the power generation have been provided from the scaling analysis of two-dimensional premultiplied spectra of MKE flux. The LES code is developed in a high Reynolds number near-wall modeling framework, using an open-source spectral element code Nek5000, and the wind turbines have been modelled using a state-of-the-art actuator line model. The LES of infinite wind farms have been validated against the statistical results from the previous literature. The study is expected to improve our understanding of the complex multiscale dynamics in the domain of large wind farms and identify the length scales that contribute to the power. This information can be useful for design of wind farm layout and turbine placement that take advantage of the large-scale structures contributing to wind turbine power.

  8. Measuring the topology of large-scale structure in the universe

    Science.gov (United States)

    Gott, J. Richard, III

    1988-11-01

    An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by wall of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.

  9. Measuring the topology of large-scale structure in the universe

    International Nuclear Information System (INIS)

    Gott, J.R. III

    1988-01-01

    An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by wall of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data. 45 references

  10. High Reynolds Number Turbulence

    National Research Council Canada - National Science Library

    Smits, Alexander J

    2007-01-01

    The objectives of the grant were to provide a systematic study to fill the gap between existing research on low Reynolds number turbulent flows and the kinds of turbulent flows encountered on full-scale vehicles...

  11. Analysis of ultrasonic beam profile due to change of elements' number for phased array transducer (part 2)

    International Nuclear Information System (INIS)

    Choi, Sang Woo; Lee, Joon Hyun

    1998-01-01

    The phased array offers many advantages and improvements over conventional single-element transducers such as straight-beam and angle-beam probes. The advantages of array sensors for large structures are twofold: first, array transducers provide a method of rapid beam steering and sequential addressing of a large area of interest without requiring mechanical or manual scanning, which is particularly important in real-time applications. Second, array transducers provide a method of dynamic focusing, in which the focal length of the ultrasonic beam varies as the pulse propagates through the material. There are several parameters involved in designing a phased array transducer, such as the number, size, and center-to-center spacing of the elements. In a previous study, the characteristics of beam steering and dynamic focusing were simulated for ultrasonic SH-waves with varying numbers of phased array transducer elements. In this study, the beam steering characteristics of a phased array transducer have been simulated for ultrasonic SH-waves on the basis of Huygens' principle, with varying center-to-center spacing of the elements. Ultrasonic beam directivity and focusing due to changes in the time delay of each element are discussed for varying center-to-center spacing of the elements.
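
    For orientation, the beam steering simulated above rests on a simple linear time-delay law across the array: delaying the n-th element by n·d·sin(θ)/c steers the beam to the angle θ. The sketch below uses assumed values (element count, pitch, SH-wave speed), not the parameters of the paper:

```python
# Element firing delays that steer a linear phased array to a given angle.
import math

def steering_delays(num_elements: int, pitch_m: float, angle_deg: float, c_m_s: float):
    theta = math.radians(angle_deg)
    return [n * pitch_m * math.sin(theta) / c_m_s for n in range(num_elements)]

if __name__ == "__main__":
    # 16 elements, 0.5 mm centre-to-centre spacing, SH-wave speed of roughly 3200 m/s in steel
    for n, t in enumerate(steering_delays(16, 0.5e-3, 30.0, 3200.0)):
        print(f"element {n:2d}: delay = {t * 1e9:7.1f} ns")
```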

  12. Improved estimation of the noncentrality parameter distribution from a large number of t-statistics, with applications to false discovery rate estimation in microarray data analysis.

    Science.gov (United States)

    Qu, Long; Nettleton, Dan; Dekkers, Jack C M

    2012-12-01

    Given a large number of t-statistics, we consider the problem of approximating the distribution of noncentrality parameters (NCPs) by a continuous density. This problem is closely related to the control of false discovery rates (FDR) in massive hypothesis testing applications, e.g., microarray gene expression analysis. Our methodology is similar to, but improves upon, the existing approach by Ruppert, Nettleton, and Hwang (2007, Biometrics, 63, 483-495). We provide parametric, nonparametric, and semiparametric estimators for the distribution of NCPs, as well as estimates of the FDR and local FDR. In the parametric situation, we assume that the NCPs follow a distribution that leads to an analytically available marginal distribution for the test statistics. In the nonparametric situation, we use convex combinations of basis density functions to estimate the density of the NCPs. A sequential quadratic programming procedure is developed to maximize the penalized likelihood. The smoothing parameter is selected with the approximate network information criterion. A semiparametric estimator is also developed to combine both parametric and nonparametric fits. Simulations show that, under a variety of situations, our density estimates are closer to the underlying truth and our FDR estimates are improved compared with alternative methods. Data-based simulations and the analyses of two microarray datasets are used to evaluate the performance in realistic situations. © 2012, The International Biometric Society.
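
    The flavor of such an estimator can be conveyed by a much-simplified, hypothetical sketch: model the NCP distribution as a discrete mixture over a grid of candidate values and fit the mixture weights by maximum likelihood on the observed t-statistics. This is not the authors' method (which uses smooth basis densities, a penalized likelihood and a sequential quadratic programming fit), only an illustration of the underlying idea:

```python
# Fit a discrete NCP mixture to many t-statistics by maximum likelihood.
import numpy as np
from scipy import stats, optimize

def fit_ncp_mixture(t_stats, df, grid):
    dens = np.array([stats.nct.pdf(t_stats, df, nc) for nc in grid])  # (k, n)

    def neg_loglik(z):                       # softmax keeps weights on the simplex
        w = np.exp(z - z.max()); w /= w.sum()
        return -np.log(w @ dens + 1e-300).sum()

    res = optimize.minimize(neg_loglik, np.zeros(len(grid)), method="L-BFGS-B")
    w = np.exp(res.x - res.x.max())
    return w / w.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_nc = np.where(rng.random(2000) < 0.8, 0.0, 2.5)   # 80% "null" statistics
    t_stats = stats.nct.rvs(10, true_nc, random_state=rng)
    grid = np.linspace(-4.0, 6.0, 21)
    w = fit_ncp_mixture(t_stats, 10, grid)
    print("estimated mass near NCP = 0:", w[np.abs(grid) < 0.3].sum())
```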

  13. Novel mitochondrial extensions provide evidence for a link between microtubule-directed movement and mitochondrial fission

    International Nuclear Information System (INIS)

    Bowes, Timothy; Gupta, Radhey S.

    2008-01-01

    Mitochondrial dynamics play an important role in a large number of cellular processes. Previously, we reported that treatment of mammalian cells with the cysteine-alkylators, N-ethylmaleimide and ethacrynic acid, induced rapid mitochondrial fusion forming a large reticulum approximately 30 min after treatment. Here, we further investigated this phenomenon using a number of techniques including live-cell confocal microscopy. In live cells, drug-induced fusion coincided with a cessation of fast mitochondrial movement which was dependent on microtubules. During this loss of movement, thin mitochondrial tubules extending from mitochondria were also observed, which we refer to as 'mitochondrial extensions'. The formation of these mitochondrial extensions, which were not observed in untreated cells, depended on microtubules and was abolished by pretreatment with nocodazole. In this study, we provide evidence that these extensions result from of a block in mitochondrial fission combined with continued application of motile force by microtubule-dependent motor complexes. Our observations strongly suggest the existence of a link between microtubule-based mitochondrial trafficking and mitochondrial fission

  14. Copy number variation analysis of matched ovarian primary tumors and peritoneal metastasis.

    Directory of Open Access Journals (Sweden)

    Joel A Malek

    Ovarian cancer is the most deadly gynecological cancer. The high rate of mortality is due to the large tumor burden with extensive metastatic lesions of the abdominal cavity. Despite initial chemosensitivity and improved surgical procedures, abdominal recurrence remains an issue and results in poor prognosis for patients. Transcriptomic and genetic studies have revealed significant genome pathologies in primary tumors and yielded important information regarding carcinogenesis. There are, however, few studies on genetic alterations and their consequences in peritoneal metastatic tumors when compared to their matched ovarian primary tumors. We used high-density SNP arrays to investigate copy number variations in matched primary and metastatic ovarian cancer from 9 patients. Here we show that copy number variations acquired by ovarian tumors are significantly different between matched primary and metastatic tumors and that these differences are likely due to different functional requirements. We show that these copy number variations differentially affect specific pathways, including the JAK/STAT and cytokine signaling pathways. While many have shown complex involvement of cytokines in the ovarian cancer environment, we provide evidence that ovarian tumors have specific copy number variation differences in many of these genes.

  15. Reynolds-number dependence of turbulence enhancement on collision growth

    Directory of Open Access Journals (Sweden)

    R. Onishi

    2016-10-01

    This study investigates the Reynolds-number dependence of turbulence enhancement on the collision growth of cloud droplets. The Onishi turbulent coagulation kernel proposed in Onishi et al. (2015) is updated by using the direct numerical simulation (DNS) results for the Taylor-microscale-based Reynolds number (Reλ) up to 1140. The DNS results for particles with a small Stokes number (St) show a consistent Reynolds-number dependence of the so-called clustering effect with the locality theory proposed by Onishi et al. (2015). It is confirmed that the present Onishi kernel is more robust for a wider St range and has better agreement with the Reynolds-number dependence shown by the DNS results. The present Onishi kernel is then compared with the Ayala–Wang kernel (Ayala et al., 2008a; Wang et al., 2008). At low and moderate Reynolds numbers, both kernels show similar values except for r2 ∼ r1, for which the Ayala–Wang kernel shows much larger values due to its large turbulence enhancement on collision efficiency. A large difference is observed for the Reynolds-number dependences between the two kernels. The Ayala–Wang kernel increases for the autoconversion region (r1, r2 < 40 µm) and for the accretion region (r1 < 40 µm and r2 > 40 µm; r1 > 40 µm and r2 < 40 µm) as Reλ increases. In contrast, the Onishi kernel decreases for the autoconversion region and increases for the rain–rain self-collection region (r1, r2 > 40 µm). Stochastic collision–coalescence equation (SCE) simulations are also conducted to investigate the turbulence enhancement on particle size evolutions. The SCE with the Ayala–Wang kernel (SCE-Ayala) and that with the present Onishi kernel (SCE-Onishi) are compared with results from the Lagrangian Cloud Simulator (LCS; Onishi et al., 2015), which tracks individual particle motions and size evolutions in homogeneous isotropic turbulence. The SCE-Ayala and SCE-Onishi kernels show consistent

  16. 28 CFR 16.53 - Use and collection of social security numbers.

    Science.gov (United States)

    2010-07-01

    ... 1974 § 16.53 Use and collection of social security numbers. Each component shall ensure that employees... privilege as a result of refusing to provide their social security numbers, unless the collection is... provide their social security numbers must be informed of: (1) Whether providing social security numbers...

  17. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that

  18. Podocyte Number in Children and Adults: Associations with Glomerular Size and Numbers of Other Glomerular Resident Cells

    Science.gov (United States)

    Puelles, Victor G.; Douglas-Denton, Rebecca N.; Cullen-McEwen, Luise A.; Li, Jinhua; Hughson, Michael D.; Hoy, Wendy E.; Kerr, Peter G.

    2015-01-01

    Increases in glomerular size occur with normal body growth and in many pathologic conditions. In this study, we determined associations between glomerular size and numbers of glomerular resident cells, with a particular focus on podocytes. Kidneys from 16 male Caucasian-Americans without overt renal disease, including 4 children (≤3 years old) to define baseline values of early life and 12 adults (≥18 years old), were collected at autopsy in Jackson, Mississippi. We used a combination of immunohistochemistry, confocal microscopy, and design-based stereology to estimate individual glomerular volume (IGV) and numbers of podocytes, nonepithelial cells (NECs; tuft cells other than podocytes), and parietal epithelial cells (PECs). Podocyte density was calculated. Data are reported as medians and interquartile ranges (IQRs). Glomeruli from children were small and contained 452 podocytes (IQR=335–502), 389 NECs (IQR=265–498), and 146 PECs (IQR=111–206). Adult glomeruli contained significantly more cells than glomeruli from children, including 558 podocytes (IQR=431–746; P<0.01), 1383 NECs (IQR=998–2042; P<0.001), and 367 PECs (IQR=309–673; P<0.001). However, large adult glomeruli showed markedly lower podocyte density (183 podocytes per 106 µm3) than small glomeruli from adults and children (932 podocytes per 106 µm3; P<0.001). In conclusion, large adult glomeruli contained more podocytes than small glomeruli from children and adults, raising questions about the origin of these podocytes. The increased number of podocytes in large glomeruli does not match the increase in glomerular size observed in adults, resulting in relative podocyte depletion. This may render hypertrophic glomeruli susceptible to pathology. PMID:25568174

  19. Quantum random number generator

    Science.gov (United States)

    Pooser, Raphael C.

    2016-05-10

    A quantum random number generator (QRNG) and a photon generator for a QRNG are provided. The photon generator may be operated in a spontaneous mode below a lasing threshold to emit photons. Photons emitted from the photon generator may have at least one random characteristic, which may be monitored by the QRNG to generate a random number. In one embodiment, the photon generator may include a photon emitter and an amplifier coupled to the photon emitter. The amplifier may enable the photon generator to be used in the QRNG without introducing significant bias in the random number and may enable multiplexing of multiple random numbers. The amplifier may also desensitize the photon generator to fluctuations in power supplied thereto while operating in the spontaneous mode. In one embodiment, the photon emitter and amplifier may be a tapered diode amplifier.
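
    Independently of the patented amplifier approach described above, residual bias in a raw bit stream from a physical source is often removed in post-processing. A classic, generic illustration (not the method of this patent) is the von Neumann extractor, which assumes the raw bits are independent:

```python
def von_neumann_extract(raw_bits):
    """Von Neumann unbiasing: examine non-overlapping bit pairs, emit the first
    bit of each unequal pair ('01' -> 0, '10' -> 1), discard '00' and '11'.
    Output bits are unbiased provided the raw bits are independent."""
    out = []
    for b1, b2 in zip(raw_bits[::2], raw_bits[1::2]):
        if b1 != b2:
            out.append(b1)
    return out

# toy raw stream standing in for digitized photon-detection bits
raw = [1, 0, 0, 0, 1, 1, 0, 1, 1, 0]
print(von_neumann_extract(raw))   # -> [1, 0, 1]
```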

  20. A large scale survey reveals that chromosomal copy-number alterations significantly affect gene modules involved in cancer initiation and progression

    Directory of Open Access Journals (Sweden)

    Cigudosa Juan C

    2011-05-01

    Full Text Available Abstract Background Recent observations point towards the existence of a large number of neighborhoods composed of functionally-related gene modules that lie together in the genome. This local component in the distribution of functionality across chromosomes probably affects the chromosomal architecture itself by limiting the ways in which genes can be arranged and distributed across the genome. As a direct consequence, it is presumable that diseases such as cancer, harboring DNA copy number alterations (CNAs), will have a symptomatology strongly dependent on modules of functionally-related genes rather than on a unique "important" gene. Methods We carried out a systematic analysis of more than 140,000 observations of CNAs in cancers and searched for enrichment of gene functional modules associated with high frequencies of losses or gains. Results The analysis of CNAs in cancers clearly demonstrates the existence of a significant pattern of loss of gene modules functionally related to cancer initiation and progression, along with the amplification of modules of genes related to unspecific defense against xenobiotics (probably chemotherapeutic agents). By extending this analysis to an array-CGH dataset (glioblastomas from The Cancer Genome Atlas), we demonstrate the validity of this approach for investigating the functional impact of CNAs. Conclusions The presented results indicate promising clinical and therapeutic implications. Our findings also point directly to the necessity of adopting a function-centric, rather than gene-centric, view in the understanding of phenotypes or diseases harboring CNAs.
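
    Module enrichment of this kind is commonly scored with a hypergeometric (one-sided Fisher) test: given the number of genes in a functional module and the number of genes falling in recurrently lost or gained regions, one asks how surprising their overlap is under random draws. A minimal sketch with made-up counts (none of the numbers below are taken from the study):

```python
from scipy.stats import hypergeom

# Hypothetical counts: N genes assayed, K of them in the module of interest,
# n genes located in recurrently altered (lost/gained) regions, k in the overlap.
N, K, n, k = 20000, 150, 1200, 25

# P(X >= k): chance of seeing at least k module genes among the altered genes
# if altered genes were drawn at random from the genome.
p_enrichment = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value = {p_enrichment:.3g}")
```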

  1. Plume structure in high-Rayleigh-number convection

    Science.gov (United States)

    Puthenveettil, Baburaj A.; Arakeri, Jaywant H.

    2005-10-01

    Near-wall structures in turbulent natural convection at Rayleigh numbers of 10^{10} to 10^{11} at a Schmidt number of 602 are visualized by a new method of driving the convection across a fine membrane using concentration differences of sodium chloride. The visualizations show the near-wall flow to consist of sheet plumes. A wide variety of large-scale flow cells, scaling with the cross-section dimension, are observed. Multiple large-scale flow cells are seen at aspect ratio (AR) = 0.65, while only a single circulation cell is detected at AR = 0.435. The cells (or the mean wind) are driven by plumes coming together to form columns of rising lighter fluid. The wind in turn aligns the sheet plumes along the direction of shear. The mean wind direction is seen to change with time. The near-wall dynamics show plumes initiated at points, which elongate to form sheets and then merge. An increase in Rayleigh number results in a larger number of closely and regularly spaced plumes. The plume spacings show a common log normal probability distribution function, independent of the Rayleigh number and the aspect ratio. We propose that the near-wall structure is made of laminar natural-convection boundary layers, which become unstable to give rise to sheet plumes, and show that the predictions of a model constructed on this hypothesis match the experiments. Based on these findings, we conclude that in the presence of a mean wind, the local near-wall boundary layers associated with each sheet plume in high-Rayleigh-number turbulent natural convection are likely to be of the laminar mixed convection type.

  2. Identification of rare recurrent copy number variants in high-risk autism families and their prevalence in a large ASD population.

    Directory of Open Access Journals (Sweden)

    Nori Matsunami

    Full Text Available Structural variation is thought to play a major etiological role in the development of autism spectrum disorders (ASDs), and numerous studies documenting the relevance of copy number variants (CNVs) in ASD have been published since 2006. To determine if large ASD families harbor high-impact CNVs that may have broader impact in the general ASD population, we used the Affymetrix genome-wide human SNP array 6.0 to identify 153 putative autism-specific CNVs present in 55 individuals with ASD from 9 multiplex ASD pedigrees. To evaluate the actual prevalence of these CNVs as well as 185 CNVs reportedly associated with ASD from published studies many of which are insufficiently powered, we designed a custom Illumina array and used it to interrogate these CNVs in 3,000 ASD cases and 6,000 controls. Additional single nucleotide variants (SNVs) on the array identified 25 CNVs that we did not detect in our family studies at the standard SNP array resolution. After molecular validation, our results demonstrated that 15 CNVs identified in high-risk ASD families also were found in two or more ASD cases with odds ratios greater than 2.0, strengthening their support as ASD risk variants. In addition, of the 25 CNVs identified using SNV probes on our custom array, 9 also had odds ratios greater than 2.0, suggesting that these CNVs also are ASD risk variants. Eighteen of the validated CNVs have not been reported previously in individuals with ASD and three have only been observed once. Finally, we confirmed the association of 31 of 185 published ASD-associated CNVs in our dataset with odds ratios greater than 2.0, suggesting they may be of clinical relevance in the evaluation of children with ASDs. Taken together, these data provide strong support for the existence and application of high-impact CNVs in the clinical genetic evaluation of children with ASD.
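
    The odds ratios quoted above compare how often a given CNV is carried by cases versus controls. A minimal sketch of that computation, using a standard 0.5 continuity correction for sparse cells (the carrier counts below are hypothetical, not taken from the study):

```python
import math

def odds_ratio(case_carriers, n_cases, control_carriers, n_controls):
    """Odds ratio for carrying a CNV in cases vs controls, with a 0.5
    continuity correction so zero cells do not produce infinities."""
    a = case_carriers + 0.5
    b = (n_cases - case_carriers) + 0.5
    c = control_carriers + 0.5
    d = (n_controls - control_carriers) + 0.5
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    ci = (math.exp(math.log(or_) - 1.96 * se_log_or),
          math.exp(math.log(or_) + 1.96 * se_log_or))
    return or_, ci

# hypothetical CNV observed in 9 of 3000 cases and 2 of 6000 controls
print(odds_ratio(9, 3000, 2, 6000))
```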

  3. Control and large deformations of marginal disordered structures

    Science.gov (United States)

    Murugan, Arvind; Pinson, Matthew; Chen, Elizabeth

    Designed deformations, such as origami patterns, provide a way to make easily controlled mechanical metamaterials with tailored responses to external forces. We focus on an often overlooked regime of origami - non-linear deformations of large disordered origami patterns with no symmetries. We find that practical questions of control in origami have counterintuitive answers, because of intimate connections to spin glasses and neural networks. For example, 1 degree of freedom origami structures are actually difficult to control about the flat state with a single actuator; the actuator is thrown off by an exponential number of `red herring' zero modes for small deformations, all but one of which disappear at larger deformations. Conversely, structures with multiple programmed motions are much easier to control than expected - in fact, they are as easy to control as a dedicated single-motion structure if the number of programmed motions is below a threshold (`memory capacity').

  4. A comment on "bats killed in large numbers at United States wind energy facilities"

    Science.gov (United States)

    Huso, Manuela M.P.; Dalthorp, Dan

    2014-01-01

    Widespread reports of bat fatalities caused by wind turbines have raised concerns about the impacts of wind power development. Reliable estimates of the total number killed and the potential effects on populations are needed, but it is crucial that they be based on sound data. In a recent BioScience article, Hayes (2013) estimated that over 600,000 bats were killed at wind turbines in the United States in 2012. The scientific errors in the analysis are numerous, with the two most serious being that the included sites constituted a convenience sample, not a representative sample, and that the individual site estimates are derived from such different methodologies that they are inherently not comparable. This estimate is almost certainly inaccurate, but whether the actual number is much smaller, much larger, or about the same is uncertain. An accurate estimate of total bat fatality is not currently possible, given the shortcomings of the available data.

  5. Decision support system of e-book provider selection for library using Simple Additive Weighting

    Science.gov (United States)

    Ciptayani, P. I.; Dewi, K. C.

    2018-01-01

    Each library has its own criteria, and its own weighting of how important each criterion is, when choosing an e-book provider. The large number of providers and the differing importance of each criterion make selecting an e-book provider a complex and time-consuming decision. The aim of this study was to implement a decision support system (DSS) to assist the library in selecting the best e-book provider based on its preferences. The DSS works by comparing the importance of each criterion with the attributes of each alternative. Simple Additive Weighting (SAW) is a DSS method that is simple, fast, and widely used. This study used 9 criteria and 18 providers to demonstrate how SAW works. With the DSS, decision-making time can be shortened and the results are more accurate than manual calculation.
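
    In SAW, each criterion column of the decision matrix is normalized (benefit criteria are divided by the column maximum; cost criteria use the column minimum divided by the value), and each alternative's score is the weighted sum of its normalized values. A minimal sketch with hypothetical providers, criteria and weights (none of them taken from the study):

```python
import numpy as np

def saw_rank(matrix, weights, is_benefit):
    """Simple Additive Weighting.
    matrix[i, j]  : score of alternative i on criterion j
    weights[j]    : importance of criterion j (normalized to sum to 1)
    is_benefit[j] : True for benefit criteria (larger is better),
                    False for cost criteria (smaller is better)."""
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    norm = np.empty_like(m)
    for j in range(m.shape[1]):
        if is_benefit[j]:
            norm[:, j] = m[:, j] / m[:, j].max()
        else:
            norm[:, j] = m[:, j].min() / m[:, j]
    scores = norm @ w
    return scores, np.argsort(-scores)      # scores and ranking (best first)

# hypothetical e-book providers scored on collection size, relevance and price
matrix = [[12000, 8, 4500],
          [30000, 6, 5200],
          [18000, 9, 6100]]
weights = [0.4, 0.4, 0.2]
is_benefit = [True, True, False]            # price is a cost criterion
scores, ranking = saw_rank(matrix, weights, is_benefit)
print(scores, ranking)
```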

  6. Dragon kings of the deep sea: marine particles deviate markedly from the common number-size spectrum.

    Science.gov (United States)

    Bochdansky, Alexander B; Clouse, Melissa A; Herndl, Gerhard J

    2016-03-04

    Particles are the major vector for the transfer of carbon from the upper ocean to the deep sea. However, little is known about their abundance, composition and role at depths greater than 2000 m. We present the first number-size spectrum of bathy- and abyssopelagic particles to a depth of 5500 m based on surveys performed with a custom-made holographic microscope. The particle spectrum was unusual in that particles of several millimetres in length were almost 100 times more abundant than expected from the number spectrum of smaller particles, thereby meeting the definition of "dragon kings." Marine snow particles overwhelmingly contributed to the total particle volume (95-98%). Approximately 1/3 of the particles in the dragon-king size domain contained large amounts of transparent exopolymers with little ballast, which likely either make them neutrally buoyant or cause them to sink slowly. Dragon-king particles thus provide large volumes of unique microenvironments that may help to explain discrepancies in deep-sea biogeochemical budgets.

  7. Association tests and software for copy number variant data

    Directory of Open Access Journals (Sweden)

    Plagnol Vincent

    2009-01-01

    Full Text Available Abstract Recent studies have suggested that copy number variation (CNV) significantly contributes to genetic predisposition to several common disorders. These findings, combined with the imperfect tagging of CNVs by single nucleotide polymorphisms (SNPs), have motivated the development of association studies directly targeting CNVs. Several assays, including comparative genomic hybridisation arrays, SNP genotyping arrays, or DNA quantification through real-time polymerase chain reaction analysis, allow direct assessment of CNV status in cohorts sufficiently large to provide adequate statistical power for association studies. When analysing data provided by these assays, association tests for CNV data are not fundamentally different from SNP-based association tests. The main difference arises when the quality of the CNV assay is not sufficient to convert unequivocally the raw measurement into discrete calls -- a common issue, given the technological limitations of current CNV assays. When this is the case, association tests are more appropriately based on the raw continuous measurement provided by the CNV assay, instead of potentially inaccurate discrete calls, thus motivating the development of new statistical methods. Here, the programs available for CNV association testing for case control or family data are reviewed, using either discrete calls or raw continuous data.
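
    When discrete calls are unreliable, the raw continuous measurement can be compared directly between cases and controls, for example with a two-sample test or a regression of intensity on phenotype. A minimal sketch on simulated intensities (the effect size, carrier fraction, sample sizes and noise level are all invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated raw CNV intensities: controls centred on two copies; a fraction of
# cases carry a deletion that shifts the (noisy) signal downwards.
controls = rng.normal(loc=2.0, scale=0.4, size=1000)
cases = np.concatenate([rng.normal(2.0, 0.4, 950),
                        rng.normal(1.0, 0.4, 50)])

# Association test on the raw measurement; no discrete copy-number calls made.
t_stat, p_value = stats.ttest_ind(cases, controls, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.2e}")
```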

  8. Beyond left and right: Automaticity and flexibility of number-space associations.

    Science.gov (United States)

    Antoine, Sophie; Gevers, Wim

    2016-02-01

    Close links exist between the processing of numbers and the processing of space: relatively small numbers are preferentially associated with a left-sided response while relatively large numbers are associated with a right-sided response (the SNARC effect). Previous work demonstrated that the SNARC effect is triggered in an automatic manner and is highly flexible. Besides the left-right dimension, numbers associate with other spatial response mappings such as close/far responses, where small numbers are associated with a close response and large numbers with a far response. In two experiments we investigate the nature of this association. Associations between magnitude and close/far responses were observed using a magnitude-irrelevant task (Experiment 1: automaticity) and using a variable referent task (Experiment 2: flexibility). While drawing a strong parallel between both response mappings, the present results are also informative with regard to the question about what type of processing mechanism underlies both the SNARC effect and the association between numerical magnitude and close/far response locations.

  9. Mapping Ad Hoc Communications Network of a Large Number Fixed-Wing UAV Swarm

    Science.gov (United States)

    2017-03-01

    shows like "Agents of S.H.I.E.L.D". Inspiration can come from the imaginative minds of people or from the world around us. Swarms have demonstrated a...high degree of success. Bees, ants, termites, and naked mole rats maintain large groups that distribute tasks among individuals in order to achieve...the application layer and not the transport layer. Real-world vehicle-to-vehicle packet delivery rates for the 50-UAV swarm event were described in

  10. Small numbers are sensed directly, high numbers constructed from size and density.

    Science.gov (United States)

    Zimmermann, Eckart

    2018-04-01

    Two theories compete to explain how we estimate the numerosity of visual object sets. The first suggests that the apparent numerosity is derived from an analysis of lower-level features such as the size and density of the set. The second theory suggests that numbers are sensed directly. Consistent with the latter claim is the existence of neurons in parietal cortex which are specialized for processing the numerosity of elements in the visual scene. However, recent evidence suggests that only low numbers can be sensed directly, whereas the perception of high numbers is supported by the analysis of low-level features. Processing of low and high numbers, being located at different levels of the neural hierarchy, should involve different receptive field sizes. Here, I tested this idea with visual adaptation. I measured the spatial spread of number adaptation for low and high numerosities. A focused adaptation spread for high numerosities suggested the involvement of early neural levels, where receptive fields are comparably small, whereas the broad spread for low numerosities was consistent with processing by number neurons, which have larger receptive fields. These results provide evidence for the claim that different mechanisms exist for generating the perception of visual numerosity. Whereas low numbers are sensed directly as a primary visual attribute, the estimation of high numbers likely depends on the size of the area over which the objects are spread. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Provider risk factors for medication administration error alerts: analyses of a large-scale closed-loop medication administration system using RFID and barcode.

    Science.gov (United States)

    Hwang, Yeonsoo; Yoon, Dukyong; Ahn, Eun Kyoung; Hwang, Hee; Park, Rae Woong

    2016-12-01

    To determine the risk factors and rate of medication administration error (MAE) alerts by analyzing large-scale medication administration data and related error logs automatically recorded in a closed-loop medication administration system using radio-frequency identification and barcodes. The subject hospital adopted a closed-loop medication administration system. All medication administrations in the general wards were automatically recorded in real-time using radio-frequency identification, barcodes, and hand-held point-of-care devices. MAE alert logs were recorded during the full year of 2012. We evaluated risk factors for MAE alerts, including administration time, order type, medication route, the number of medication doses administered, and factors associated with nurse practices, by logistic regression analysis. A total of 2 874 539 medication dose records from 30 232 patients (882.6 patient-years) were included in 2012. We identified 35 082 MAE alerts (1.22% of total medication doses). The MAE alerts were significantly related to administration at non-standard time [odds ratio (OR) 1.559, 95% confidence interval (CI) 1.515-1.604], emergency order (OR 1.527, 95%CI 1.464-1.594), and the number of medication doses administered (OR 0.993, 95%CI 0.992-0.993). Medication route, nurse's employment duration, and working schedule were also significantly related. The MAE alert rate was 1.22% over the 1-year observation period in the hospital examined in this study. The MAE alerts were significantly related to administration time, order type, medication route, the number of medication doses administered, nurse's employment duration, and working schedule. The real-time closed-loop medication administration system contributed to improving patient safety by preventing potential MAEs. Copyright © 2016 John Wiley & Sons, Ltd.
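
    Odds ratios of this kind are typically obtained by exponentiating logistic-regression coefficients. A minimal sketch on simulated administration records, assuming statsmodels is available (the variable names, effect sizes and alert rate below are invented stand-ins for the study's data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 50_000

# simulated predictors (hypothetical stand-ins for the study's variables)
non_standard_time = rng.integers(0, 2, n)     # administered off-schedule?
emergency_order = rng.integers(0, 2, n)       # emergency order?
doses_administered = rng.poisson(40, n)       # doses given that shift

# simulate alert outcomes with modest true effects
logit = (-4.5 + 0.45 * non_standard_time + 0.4 * emergency_order
         - 0.007 * doses_administered)
alert = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = sm.add_constant(np.column_stack([non_standard_time,
                                     emergency_order,
                                     doses_administered]))
fit = sm.Logit(alert.astype(int), X).fit(disp=False)
print(np.exp(fit.params))        # odds ratios (first entry is the intercept)
print(np.exp(fit.conf_int()))    # 95% confidence intervals on the OR scale
```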

  12. Detection of Supernova Neutrinos on the Earth for Large θ13

    Science.gov (United States)

    Xu, Jing; Huang, Ming-Yang; Hu, Li-Jun; Guo, Xin-Heng; Young, Bing-Lin

    2014-02-01

    Supernova (SN) neutrinos detected on the Earth are subject to the shock wave effects, the Mikheyev-Smirnov-Wolfenstein (MSW) effects, the neutrino collective effects and the Earth matter effects. Considering the recent experimental result for the large mixing angle θ13 (≃ 8.8°) provided by the Daya Bay Collaboration and applying the available knowledge of the neutrino conversion probability in the high resonance region of the SN, PH, which takes the form of a hypergeometric function in the case of large θ13, we deduce the expression of PH taking into account the shock wave effects. It is found that PH is not zero in a certain range of time due to the shock wave effects. After considering all four physical effects and scanning the relevant parameters, we calculate the event numbers of SN neutrinos for the "Garching" distribution of the neutrino energy spectrum. From the numerical results, it is found that the behaviors of the neutrino event numbers detected on the Earth depend on the neutrino mass hierarchy and the neutrino spectrum parameters, including the dimensionless pinching parameter βα (where α refers to neutrino flavor), the average energy ⟨Eα⟩, and the SN neutrino luminosities Lα. Finally, we give the ranges of SN neutrino event numbers that will be detected at the Daya Bay experiment.

  13. Higher-order Nielsen numbers

    Directory of Open Access Journals (Sweden)

    Saveliev Peter

    2005-01-01

    Full Text Available Suppose , are manifolds, are maps. The well-known coincidence problem studies the coincidence set . The number is called the codimension of the problem. More general is the preimage problem. For a map and a submanifold of , it studies the preimage set , and the codimension is . In case of codimension , the classical Nielsen number is a lower estimate of the number of points in changing under homotopies of , and for an arbitrary codimension, of the number of components of . We extend this theory to take into account other topological characteristics of . The goal is to find a "lower estimate" of the bordism group of . The answer is the Nielsen group defined as follows. In the classical definition, the Nielsen equivalence of points of based on paths is replaced with an equivalence of singular submanifolds of based on bordisms. We let , then the Nielsen group of order is the part of preserved under homotopies of . The Nielsen number of order is the rank of this group (then . These numbers are new obstructions to removability of coincidences and preimages. Some examples and computations are provided.

  14. Energy transfers in dynamos with small magnetic Prandtl numbers

    KAUST Repository

    Kumar, Rohit

    2015-06-25

    We perform a numerical simulation of a dynamo with magnetic Prandtl number Pm = 0.2 on a 1024³ grid, and compute the energy fluxes and the shell-to-shell energy transfers. These computations indicate that the magnetic energy growth takes place mainly due to the energy transfers from the large-scale velocity field to the large-scale magnetic field and that the magnetic energy flux is forward. The steady-state magnetic energy is much smaller than the kinetic energy, rather than in equipartition; this is because the magnetic Reynolds number is near the dynamo transition regime. We also contrast our results with those for a dynamo with Pm = 20 and for a decaying dynamo. © 2015 Taylor & Francis.

  15. Normal zone detectors for a large number of inductively coupled coils

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this paper uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent
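
    One way to picture the scheme is that the inductive part of each coil voltage can be predicted from the measured current derivatives and the known self- and mutual-inductance matrix; whatever remains is resistive normal-zone voltage, and a threshold on that residual locates the quenching coil. The sketch below illustrates this idea only, with invented inductances, currents and thresholds rather than the authors' bridge circuit:

```python
import numpy as np

# Hypothetical 4-coil system.  Measured coil voltages are modelled as
#   V = M @ dI/dt + v_nz + noise,
# where M is the self/mutual inductance matrix (henries, made-up values)
# and v_nz is the resistive normal-zone voltage to be isolated.
M = np.array([[1.20, 0.30, 0.10, 0.05],
              [0.30, 1.10, 0.25, 0.08],
              [0.10, 0.25, 1.30, 0.20],
              [0.05, 0.08, 0.20, 1.15]])
dI_dt = np.array([5.0, -3.0, 2.0, 1.0])        # A/s from current sensors
v_true = np.array([0.0, 0.0, 0.12, 0.0])       # 120 mV normal zone in coil 3
rng = np.random.default_rng(0)
V_meas = M @ dI_dt + v_true + rng.normal(0.0, 0.005, 4)

# Subtract the predicted inductive voltages, then threshold the residual.
residual = V_meas - M @ dI_dt
quenching = np.where(np.abs(residual) > 0.05)[0]
print("residual voltages:", np.round(residual, 3), "-> coil(s)", quenching + 1)
```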

  16. Employing online quantum random number generators for generating truly random quantum states in Mathematica

    Science.gov (United States)

    Miszczak, Jarosław Adam

    2013-01-01

    The presented package for the Mathematica computing system allows the harnessing of quantum random number generators (QRNG) for investigating the statistical properties of quantum states. The described package implements a number of functions for generating random states. The new version of the package adds the ability to use the on-line quantum random number generator service and implements new functions for retrieving lists of random numbers. Thanks to the introduced improvements, the new version provides faster access to high-quality sources of random numbers and can be used in simulations requiring large amounts of random data. New version program summary:
    Program title: TRQS
    Catalogue identifier: AEKA_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKA_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 18 134
    No. of bytes in distributed program, including test data, etc.: 2 520 49
    Distribution format: tar.gz
    Programming language: Mathematica, C.
    Computer: Any supporting Mathematica in version 7 or higher.
    Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit).
    RAM: Case-dependent
    Supplementary material: Fig. 1 mentioned below can be downloaded.
    Classification: 4.15.
    External routines: Quantis software library (http://www.idquantique.com/support/quantis-trng.html)
    Catalogue identifier of previous version: AEKA_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 183(2012)118
    Does the new version supersede the previous version?: Yes
    Nature of problem: Generation of random density matrices and utilization of high-quality random numbers for the purpose of computer simulation.
    Solution method: Use of a physical quantum random number generator and an on-line service providing access to the source of true random
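
    One common way to generate random density matrices from a stream of high-quality random numbers is the Ginibre construction: draw a complex Gaussian matrix G and normalize G·G† to unit trace. The Python sketch below illustrates that construction only, with NumPy's pseudo-random generator standing in for the on-line quantum source used by the package:

```python
import numpy as np

def random_density_matrix(dim, rng=None):
    """Random density matrix from the Ginibre (Hilbert-Schmidt) ensemble.
    The Gaussian deviates would come from a quantum RNG in the package;
    here NumPy's PRNG stands in for that source."""
    if rng is None:
        rng = np.random.default_rng()
    g = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

rho = random_density_matrix(4)
print(np.trace(rho).real)             # 1.0
print(np.linalg.eigvalsh(rho))        # non-negative eigenvalues summing to 1
```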

  17. Limits on hypothesizing new quantum numbers

    International Nuclear Information System (INIS)

    Goldstein, G.R.; Moravcsik, M.J.

    1986-01-01

    According to a recent theorem, for a general quantum-mechanical system undergoing a process, one can tell from measurements on this system whether or not it is characterized by a quantum number, the existence of which is unknown to the observer, even though the detecting equipment used by the observer is unable to distinguish among the various possible values of the ''secret'' quantum number and hence always averages over them. The present paper deals with situations in which this averaging is avoided and hence the ''secret'' quantum number remains ''secret.'' This occurs when a new quantum number is hypothesized in such a way that all the past measurements pertain to the system with one and the same value of the ''secret'' quantum number, or when the new quantum number is related to the old ones by a specific dynamical model providing a one-to-one correspondence. In the first of these cases, however, the one and the same state of the ''secret'' quantum number needs to be a nondegenerate one. If it is degenerate, the theorem can again be applied. This last feature provides a tool for experimentally testing symmetry breaking and the reestablishment of symmetries in asymptotic regions. The situation is illustrated on historical examples like isospin and strangeness, as well as on some contemporary schemes involving spaces of higher dimensionality

  18. Child center closures: Does nonprofit status provide a comparative advantage?

    Science.gov (United States)

    Lam, Marcus; Klein, Sacha; Freisthler, Bridget; Weiss, Robert E.

    2013-01-01

    Reliable access to dependable, high quality childcare services is a vital concern for large numbers of American families. The childcare industry consists of private nonprofit, private for-profit, and governmental providers that differ along many dimensions, including quality, clientele served, and organizational stability. Nonprofit providers are theorized to provide higher quality services given comparative tax advantages, higher levels of consumer trust, and management by mission driven entrepreneurs. This study examines the influence of ownership structure, defined as nonprofit, for-profit sole proprietors, for-profit companies, and governmental centers, on organizational instability, defined as childcare center closures. Using a cross sectional data set of 15724 childcare licenses in California for 2007, we model the predicted closures of childcare centers as a function of ownership structure as well as center age and capacity. Findings indicate that for small centers (capacity of 30 or less) nonprofits are more likely to close, but for larger centers (capacity 30+) nonprofits are less likely to close. This suggests that the comparative advantages available for nonprofit organizations may be better utilized by larger centers than by small centers. We consider the implications of our findings for parents, practitioners, and social policy. PMID:23543882

  19. Child center closures: Does nonprofit status provide a comparative advantage?

    Science.gov (United States)

    Lam, Marcus; Klein, Sacha; Freisthler, Bridget; Weiss, Robert E

    2013-03-01

    Reliable access to dependable, high quality childcare services is a vital concern for large numbers of American families. The childcare industry consists of private nonprofit, private for-profit, and governmental providers that differ along many dimensions, including quality, clientele served, and organizational stability. Nonprofit providers are theorized to provide higher quality services given comparative tax advantages, higher levels of consumer trust, and management by mission driven entrepreneurs. This study examines the influence of ownership structure, defined as nonprofit, for-profit sole proprietors, for-profit companies, and governmental centers, on organizational instability, defined as childcare center closures. Using a cross sectional data set of 15724 childcare licenses in California for 2007, we model the predicted closures of childcare centers as a function of ownership structure as well as center age and capacity. Findings indicate that for small centers (capacity of 30 or less) nonprofits are more likely to close, but for larger centers (capacity 30+) nonprofits are less likely to close. This suggests that the comparative advantages available for nonprofit organizations may be better utilized by larger centers than by small centers. We consider the implications of our findings for parents, practitioners, and social policy.

  20. Cold-water corals and large hydrozoans provide essential fish habitat for Lappanella fasciata and Benthocometes robustus

    Science.gov (United States)

    Gomes-Pereira, José Nuno; Carmo, Vanda; Catarino, Diana; Jakobsen, Joachim; Alvarez, Helena; Aguilar, Ricardo; Hart, Justin; Giacomello, Eva; Menezes, Gui; Stefanni, Sergio; Colaço, Ana; Morato, Telmo; Santos, Ricardo S.; Tempera, Fernando; Porteiro, Filipe

    2017-11-01

    Many fish species are well-known obligatory inhabitants of shallow-water tropical coral reefs but such associations are difficult to study in deep-water environments. We address the association between two deep-sea fish with low mobility and large sessile invertebrates using a compilation of 20 years of unpublished in situ observations. Data were collected on Northeast Atlantic (NEA) island slopes and seamounts, from the Azores to the Canary Islands, comprising 127 new records of the circalittoral Labridae Lappanella fasciata and 15 of the upper bathyal Ophiididae Benthocometes robustus. Observations by divers, remote operated vehicles (ROV SP, Luso, Victor, Falcon Seaeye), towed vehicles (Greenpeace) and manned submersibles (LULA, Nautile) validated the species association to cold water corals (CWC) and large hydrozoans. L. fasciata occurred from lower infralittoral (41 m) throughout the circalittoral, down to the upper bathyal at 398 m depth. Smaller fishes (fishes (10-15 cm) occurring alone or in smaller groups at greater depths. The labrids favoured areas with large sessile invertebrates (> 10 cm) occurring at habitat and this predator. Gathered evidence renders CWC and hydroid gardens as Essential Fish Habitats for both species, being therefore sensitive to environmental and anthropogenic impacts on these Vulnerable Marine Ecosystems. The Mediterranean distribution of L. fasciata is extended to NEA seamounts and island slopes and the amphi-Atlantic distribution of B. robustus is bridged with molecular data support. Both species are expected to occur throughout the Macaronesia and Mediterranean island slopes and shallow seamounts on habitats with large sessile invertebrates.

  1. Scale interactions in a mixing layer – the role of the large-scale gradients

    KAUST Repository

    Fiscaletti, D.

    2016-02-15

    © 2016 Cambridge University Press. The interaction between the large and the small scales of turbulence is investigated in a mixing layer, at a Reynolds number based on the Taylor microscale of , via direct numerical simulations. The analysis is performed in physical space, and the local vorticity root-mean-square (r.m.s.) is taken as a measure of the small-scale activity. It is found that positive large-scale velocity fluctuations correspond to large vorticity r.m.s. on the low-speed side of the mixing layer, whereas they correspond to low vorticity r.m.s. on the high-speed side. The relationship between large and small scales thus depends on position if the vorticity r.m.s. is correlated with the large-scale velocity fluctuations. On the contrary, the correlation coefficient is nearly constant throughout the mixing layer and close to unity if the vorticity r.m.s. is correlated with the large-scale velocity gradients. Therefore, the small-scale activity appears closely related to large-scale gradients, while the correlation between the small-scale activity and the large-scale velocity fluctuations is shown to reflect a property of the large scales. Furthermore, the vorticity from unfiltered (small scales) and from low pass filtered (large scales) velocity fields tend to be aligned when examined within vortical tubes. These results provide evidence for the so-called 'scale invariance' (Meneveau & Katz, Annu. Rev. Fluid Mech., vol. 32, 2000, pp. 1-32), and suggest that some of the large-scale characteristics are not lost at the small scales, at least at the Reynolds number achieved in the present simulation.

  2. Polynomial selection in number field sieve for integer factorization

    Directory of Open Access Journals (Sweden)

    Gireesh Pandey

    2016-09-01

    Full Text Available The general number field sieve (GNFS) is the fastest algorithm for factoring large composite integers that are the product of two primes. Polynomial selection is an important step of GNFS. The asymptotic runtime depends on the choice of good polynomial pairs. In this paper, we present a polynomial selection algorithm modelled on size and root properties. The correlation between polynomial coefficients and the number of relations is explored through experimental findings.
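
    A classical starting point for GNFS polynomial selection, before any size or root optimization, is the base-m method: pick a degree d, choose m with m^(d+1) > n, and write n in base m so that the resulting polynomial has m as a root modulo n. The sketch below shows only this first step (the toy composite is far too small for a real sieve and is used purely for illustration):

```python
def base_m_polynomial(n, d):
    """Base-m polynomial selection: choose m with m**(d+1) > n and expand n
    in base m.  The resulting polynomial f of degree <= d satisfies
    f(m) == n, so m is a root of f modulo n."""
    m = int(round(n ** (1.0 / (d + 1))))
    while m ** (d + 1) <= n:          # ensure n fits in d+1 base-m digits
        m += 1
    coeffs, r = [], n
    while r:
        coeffs.append(r % m)          # coeffs[i] is the coefficient of x**i
        r //= m
    return coeffs, m

# toy composite of two primes (far too small for GNFS, illustration only)
n = 1000003 * 1000033
coeffs, m = base_m_polynomial(n, d=4)
assert sum(c * m ** i for i, c in enumerate(coeffs)) == n
print(m, coeffs)
```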

  3. Apparatus for sensing radiation and providing electrical readout

    International Nuclear Information System (INIS)

    Eichelberger, C.W.; Engeler, W.E.; Tiemann, J.J.

    1975-01-01

    An array of radiation sensing devices each including a pair of closely coupled conductor-insulator-semiconductor cells, one a row line connected cell and the other a column line connected cell, is provided on a common semiconductor substrate connected to ground. Read out of a device is accomplished by reducing the voltage on the row line of the device to cause stored charge to flow to the column connected cell of the device and thereafter reducing the voltage on the column line of the device to inject the charge stored therein into the substrate. Circuit means is provided in series relationship with the addressed column line to integrate the current flow in the column line due to the injected charge. In another embodiment the column conductor lines are arranged in a plurality of consecutively numbered sets, each set including the same number of consecutively numbered column lines. A plurality of charge integrating means are provided each connected between a respective column line of a set and ground for simultaneous read out of charges through the column lines of a set. Switch means are provided for connecting each set of column lines in turn for read out. A plurality of video signals equal in number to the number of sets are obtained. The video signals may be multiplexed to obtain a composite video signal. (auth)

  4. Large field-of-view transmission line resonator for high field MRI

    DEFF Research Database (Denmark)

    Zhurbenko, Vitaliy; Johannesson, Kristjan Sundgaard; Boer, Vincent

    2016-01-01

    Transmission line resonators are often a preferable choice for coils in high field magnetic resonance imaging (MRI), because they provide a number of advantages over traditional loop coils. The size of such resonators, however, is limited to shorter than half a wavelength due to high standing wave.... The achieved magnetic field distribution is compared to that of the conventional transmission line resonator. Imaging experiments are performed using a 7 Tesla MRI system. The developed resonator is useful for building coils with a large field-of-view....

  5. Distribution of squares modulo a composite number

    OpenAIRE

    Aryan, Farzad

    2015-01-01

    In this paper we study the distribution of squares modulo a square-free number $q$. We also look at inverse questions for the large sieve in the distribution aspect and we make improvements on existing results on the distribution of $s$-tuples of reduced residues.

  6. Rayleigh- and Prandtl-number dependence of the large-scale flow-structure in weakly-rotating turbulent thermal convection

    Science.gov (United States)

    Weiss, Stephan; Wei, Ping; Ahlers, Guenter

    2015-11-01

    Turbulent thermal convection under rotation shows a remarkable variety of different flow states. The Nusselt number (Nu) at slow rotation rates (expressed as the dimensionless inverse Rossby number 1/Ro), for example, is not a monotonic function of 1/Ro. Different 1/Ro-ranges can be observed with different slopes ∂Nu / ∂ (1 / Ro). Some of these ranges are connected by sharp transitions where ∂Nu / ∂ (1 / Ro) changes discontinuously. We investigate different regimes in cylindrical samples of aspect ratio Γ = 1 by measuring temperatures at the sidewall of the sample for various Prandtl numbers in a range starting at 3. This work was supported by the Deutsche Forschungsgemeinschaft.

  7. The multilevel fast multipole algorithm (MLFMA) for solving large-scale computational electromagnetics problems

    CERN Document Server

    Ergul, Ozgur

    2014-01-01

    The Multilevel Fast Multipole Algorithm (MLFMA) for Solving Large-Scale Computational Electromagnetic Problems provides a detailed and instructional overview of implementing MLFMA. The book: presents a comprehensive treatment of the MLFMA algorithm, including basic linear algebra concepts, recent developments on the parallel computation, and a number of application examples; covers solutions of electromagnetic problems involving dielectric objects and perfectly-conducting objects; discusses applications including scattering from airborne targets, scattering from red

  8. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    Full Text Available A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction; whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly in the outer surface of the fire according the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities, which lead to the formation of large eddies, is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well-suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively-parallel computational resources are likely to be necessary in order to be able to adequately address the complex coupled phenomena to the level of detail that is necessary.

  9. Improving timeliness and efficiency in the referral process for safety net providers: application of the Lean Six Sigma methodology.

    Science.gov (United States)

    Deckard, Gloria J; Borkowski, Nancy; Diaz, Deisell; Sanchez, Carlos; Boisette, Serge A

    2010-01-01

    Designated primary care clinics largely serve low-income and uninsured patients who present a disproportionate number of chronic illnesses and face great difficulty in obtaining the medical care they need, particularly the access to specialty physicians. With limited capacity for providing specialty care, these primary care clinics generally refer patients to safety net hospitals' specialty ambulatory care clinics. A large public safety net health system successfully improved the effectiveness and efficiency of the specialty clinic referral process through application of Lean Six Sigma, an advanced process-improvement methodology and set of tools driven by statistics and engineering concepts.

  10. The enigma of number: why children find the meanings of even small number words hard to learn and how we can help them do better.

    Directory of Open Access Journals (Sweden)

    Michael Ramscar

    Full Text Available Although number words are common in everyday speech, learning their meanings is an arduous, drawn-out process for most children, and the source of this delay has long been the subject of inquiry. Children begin by identifying the few small numerosities that can be named without counting, and this has prompted further debate over whether there is a specific, capacity-limited system for representing these small sets, or whether smaller and larger sets are both represented by the same system. Here we present a formal, computational analysis of number learning that offers a possible solution to both puzzles. This analysis indicates that once the environment and the representational demands of the task of learning to identify sets are taken into consideration, a continuous system for learning, representing and discriminating set-sizes can give rise to effective discontinuities in processing. At the same time, our simulations illustrate how typical prenominal linguistic constructions ("there are three balls") structure information in a way that is largely unhelpful for discrimination learning, while suggesting that postnominal constructions ("balls, there are three") will facilitate such learning. A training experiment with three-year-olds confirms these predictions, demonstrating that rapid, significant gains in numerical understanding and competence are possible given appropriately structured postnominal input. Our simulations and results reveal how discrimination learning tunes children's systems for representing small sets, and how its capacity-limits result naturally out of a mixture of the learning environment and the increasingly complex task of discriminating and representing ever-larger number sets. They also explain why children benefit so little from the training that parents and educators usually provide. Given the efficacy of our intervention, the ease with which it can be implemented, and the large body of research showing how early
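
    The formal analysis described here rests on discrimination (error-driven) learning, in which cue-outcome associations are adjusted in proportion to prediction error. A generic Rescorla-Wagner-style update of the kind used in such simulations is sketched below; the cues, outcomes and trial structure are toy stand-ins, not the simulations reported in the paper:

```python
import numpy as np

def rescorla_wagner(trials, cues, outcomes, lr=0.1):
    """Error-driven learning of cue -> outcome associations.  trials is a list
    of (present_cues, present_outcomes) pairs; only weights of cues present on
    a trial are updated, in proportion to the shared prediction error."""
    W = np.zeros((len(cues), len(outcomes)))
    cue_idx = {c: i for i, c in enumerate(cues)}
    out_idx = {o: j for j, o in enumerate(outcomes)}
    for present_cues, present_outcomes in trials:
        ci = [cue_idx[c] for c in present_cues]
        target = np.zeros(len(outcomes))
        for o in present_outcomes:
            target[out_idx[o]] = 1.0
        prediction = W[ci].sum(axis=0)
        W[ci] += lr * (target - prediction)   # same error for all present cues
    return W

# toy illustration: object cues predicting number words as outcomes
cues = ["ball", "box"]
outcomes = ["three", "two"]
trials = [(["ball"], ["three"])] * 20 + [(["box"], ["two"])] * 20
print(rescorla_wagner(trials, cues, outcomes))
```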

  11. Large boson number IBM calculations and their relationship to the Bohr model

    International Nuclear Information System (INIS)

    Thiamova, G.; Rowe, D.J.

    2009-01-01

    Recently, the SO(5) Clebsch-Gordan (CG) coefficients up to the seniority v_max = 40 were computed in floating point arithmetic (T.A. Welsh, unpublished (2008)); and, in exact arithmetic, as square roots of rational numbers (M.A. Caprio et al., to be published in Comput. Phys. Commun.). It is shown in this paper that extending the QQQ model calculations set up in the work by D.J. Rowe and G. Thiamova (Nucl. Phys. A 760, 59 (2005)) to N = v_max = 40 is sufficient to obtain the IBM results converged to its Bohr contraction limit. This will be done by comparing some important matrix elements in both models, by looking at the seniority decomposition of low-lying states and at the behavior of the energy and B(E2) transition strengths ratios with increasing seniority. (orig.)

  12. Efficient Topology Estimation for Large Scale Optical Mapping

    CERN Document Server

    Elibol, Armagan; Garcia, Rafael

    2013-01-01

    Large scale optical mapping methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost ROVs usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predefined trajectory that provides several non time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable to obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This book contributes to the state-of-art in large area image mosaicing methods for underwater surveys using low-cost vehicles equipped with a very limited sensor suite. The main focus has been on global alignment...

  13. Open TG-GATEs: a large-scale toxicogenomics database

    Science.gov (United States)

    Igarashi, Yoshinobu; Nakatsu, Noriyuki; Yamashita, Tomoya; Ono, Atsushi; Ohno, Yasuo; Urushidani, Tetsuro; Yamada, Hiroshi

    2015-01-01

    Toxicogenomics focuses on assessing the safety of compounds using gene expression profiles. Gene expression signatures from large toxicogenomics databases are expected to perform better than small databases in identifying biomarkers for the prediction and evaluation of drug safety based on a compound's toxicological mechanisms in animal target organs. Over the past 10 years, the Japanese Toxicogenomics Project consortium (TGP) has been developing a large-scale toxicogenomics database consisting of data from 170 compounds (mostly drugs) with the aim of improving and enhancing drug safety assessment. Most of the data generated by the project (e.g. gene expression, pathology, lot number) are freely available to the public via Open TG-GATEs (Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System). Here, we provide a comprehensive overview of the database, including both gene expression data and metadata, with a description of experimental conditions and procedures used to generate the database. Open TG-GATEs is available from http://toxico.nibio.go.jp/english/index.html. PMID:25313160

  14. Normal zone detectors for a large number of inductively coupled coils

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this report uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent. An example of the detector design is given for four coils with realistic parameters. The effect on accuracy of changes in the system parameters is discussed

  15. Clutch frequency affects the offspring size-number trade-off in lizards.

    Directory of Open Access Journals (Sweden)

    Zheng Wang

    2011-01-01

    Full Text Available Studies of lizards have shown that offspring size cannot be altered by manipulating clutch size in species with a high clutch frequency. This raises a question of whether clutch frequency has a key role in influencing the offspring size-number trade-off in lizards. To test the hypothesis that females reproducing more frequently are less likely to trade off offspring size against offspring number, we applied the follicle ablation technique to female Eremias argus (Lacertidae) from Handan (HD) and Gonghe (GH), the two populations that differ in clutch frequency. Follicle ablation resulted in enlargement of egg size in GH females, but not in HD females. GH females switched from producing a larger number of smaller eggs in the first clutch to a smaller number of larger eggs in the second clutch; HD females showed a similar pattern of seasonal shifts in egg size, but kept clutch size constant between the first two clutches. Thus, the egg size-number trade-off was evident in GH females, but not in HD females. As HD females (mean = 3.1 clutches per year) reproduce more frequently than do GH females (mean = 1.6 clutches per year), our data therefore validate the hypothesis tested. Our data also provide an inference that maximization of maternal fitness could be achieved in females by diverting a large enough, rather than a higher-than-usual, fraction of the available energy to individual offspring in a given reproductive episode.

  16. Protein Based Molecular Markers Provide Reliable Means to Understand Prokaryotic Phylogeny and Support Darwinian Mode of Evolution

    Directory of Open Access Journals (Sweden)

    Vaibhav eBhandari

    2012-07-01

    Full Text Available The analyses of genome sequences have led to the proposal that lateral gene transfers (LGTs) among prokaryotes are so widespread that they disguise the interrelationships among these organisms. This has led to questioning whether the Darwinian model of evolution is applicable to the prokaryotic organisms. In this review, we discuss the usefulness of taxon-specific molecular markers such as conserved signature indels (CSIs) and conserved signature proteins (CSPs) for understanding the evolutionary relationships among prokaryotes and to assess the influence of LGTs on prokaryotic evolution. The analyses of genomic sequences have identified large numbers of CSIs and CSPs that are unique properties of different groups of prokaryotes ranging from phylum to genus levels. The species distribution patterns of these molecular signatures strongly support a tree-like vertical inheritance of the genes containing these molecular signatures that is consistent with phylogenetic trees. Recent detailed studies in this regard on Thermotogae and Archaea, which are reviewed here, have identified large numbers of CSIs and CSPs that are specific for the species from these two taxa and a number of their major clades. The genetic changes responsible for these CSIs (and CSPs) initially likely occurred in the common ancestors of these taxa and then vertically transferred to various descendants. Although some CSIs and CSPs in unrelated groups of prokaryotes were identified, their small numbers and random occurrence has no apparent influence on the consistent tree-like branching pattern emerging from other markers. These results provide evidence that although LGT is an important evolutionary force, it does not mask the tree-like branching pattern of prokaryotes or understanding of their evolutionary relationships. The identified CSIs and CSPs also provide novel and highly specific means for identification of different groups of microbes and for taxonomical and biochemical studies.

  17. Protein based molecular markers provide reliable means to understand prokaryotic phylogeny and support Darwinian mode of evolution.

    Science.gov (United States)

    Bhandari, Vaibhav; Naushad, Hafiz S; Gupta, Radhey S

    2012-01-01

    The analyses of genome sequences have led to the proposal that lateral gene transfers (LGTs) among prokaryotes are so widespread that they disguise the interrelationships among these organisms. This has led to questioning of whether the Darwinian model of evolution is applicable to prokaryotic organisms. In this review, we discuss the usefulness of taxon-specific molecular markers such as conserved signature indels (CSIs) and conserved signature proteins (CSPs) for understanding the evolutionary relationships among prokaryotes and to assess the influence of LGTs on prokaryotic evolution. The analyses of genomic sequences have identified large numbers of CSIs and CSPs that are unique properties of different groups of prokaryotes ranging from phylum to genus levels. The species distribution patterns of these molecular signatures strongly support a tree-like vertical inheritance of the genes containing these molecular signatures that is consistent with phylogenetic trees. Recent detailed studies in this regard on the Thermotogae and Archaea, which are reviewed here, have identified large numbers of CSIs and CSPs that are specific for the species from these two taxa and a number of their major clades. The genetic changes responsible for these CSIs (and CSPs) initially likely occurred in the common ancestors of these taxa and then vertically transferred to various descendants. Although some CSIs and CSPs in unrelated groups of prokaryotes were identified, their small numbers and random occurrence has no apparent influence on the consistent tree-like branching pattern emerging from other markers. These results provide evidence that although LGT is an important evolutionary force, it does not mask the tree-like branching pattern of prokaryotes or understanding of their evolutionary relationships. The identified CSIs and CSPs also provide novel and highly specific means for identification of different groups of microbes and for taxonomical and biochemical studies.

  18. Database Objects vs Files: Evaluation of alternative strategies for managing large remote sensing data

    Science.gov (United States)

    Baru, Chaitan; Nandigam, Viswanath; Krishnan, Sriram

    2010-05-01

    Increasingly, the geoscience user community expects modern IT capabilities to be available in service of their research and education activities, including the ability to easily access and process large remote sensing datasets via online portals such as GEON (www.geongrid.org) and OpenTopography (opentopography.org). However, serving such datasets via online data portals presents a number of challenges. In this talk, we will evaluate the pros and cons of alternative storage strategies for management and processing of such datasets using binary large object implementations (BLOBs) in database systems versus implementation in Hadoop files using the Hadoop Distributed File System (HDFS). The storage and I/O requirements for providing online access to large datasets dictate the need for declustering data across multiple disks, for capacity as well as bandwidth and response time performance. This requires partitioning larger files into a set of smaller files, and is accompanied by the concomitant requirement for managing large numbers of files. Storing these sub-files as blobs in a shared-nothing database implemented across a cluster provides the advantage that all the distributed storage management is done by the DBMS. Furthermore, subsetting and processing routines can be implemented as user-defined functions (UDFs) on these blobs and would run in parallel across the set of nodes in the cluster. On the other hand, there are both storage overheads and constraints, and software licensing dependencies created by such an implementation. Another approach is to store the files in an external filesystem with pointers to them from within database tables. The filesystem may be a regular UNIX filesystem, a parallel filesystem, or HDFS. In the HDFS case, HDFS would provide the file management capability, while the subsetting and processing routines would be implemented as Hadoop programs using the MapReduce model. Hadoop and its related software libraries are freely available
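
    The storage trade-off sketched above can be illustrated in a few lines. The snippet below is purely illustrative (SQLite stands in for a shared-nothing DBMS; the table names, tile IDs, and HDFS path are invented): one table holds partitioned tiles as BLOBs managed by the database, the other holds only pointers to files kept in an external filesystem such as HDFS.

```python
# Illustrative sketch: two strategies for registering a partitioned raster tile.
import sqlite3

conn = sqlite3.connect("tiles.db")
conn.execute("CREATE TABLE IF NOT EXISTS tile_blobs (tile_id TEXT PRIMARY KEY, data BLOB)")
conn.execute("CREATE TABLE IF NOT EXISTS tile_files (tile_id TEXT PRIMARY KEY, path TEXT)")

def store_as_blob(tile_id: str, payload: bytes) -> None:
    # Strategy 1: the DBMS manages the bytes; subsetting could run as a UDF.
    conn.execute("INSERT OR REPLACE INTO tile_blobs VALUES (?, ?)", (tile_id, payload))

def store_as_pointer(tile_id: str, path: str) -> None:
    # Strategy 2: an external filesystem (UNIX, parallel FS, or HDFS) holds the
    # bytes; the database keeps only a pointer for lookup.
    conn.execute("INSERT OR REPLACE INTO tile_files VALUES (?, ?)", (tile_id, path))

store_as_blob("tile_000", b"\x00" * 1024)                  # toy 1 kB payload
store_as_pointer("tile_001", "/hdfs/lidar/tile_001.bin")   # hypothetical path
conn.commit()
```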

  19. Tidal Love numbers of neutron and self-bound quark stars

    International Nuclear Information System (INIS)

    Postnikov, Sergey; Prakash, Madappa; Lattimer, James M.

    2010-01-01

    Gravitational waves from the final stages of inspiraling binary neutron stars are expected to be one of the most important sources for ground-based gravitational wave detectors. The masses of the components are determinable from the orbital and chirp frequencies during the early part of the evolution, and large finite-size (tidal) effects are measurable toward the end of inspiral, but the gravitational wave signal is expected to be very complex at this time. Tidal effects during the early part of the evolution will form a very small correction, but during this phase the signal is relatively clean. The accumulated phase shift due to tidal corrections is characterized by a single quantity related to a star's tidal Love number. The Love number is sensitive, in particular, to the compactness parameter M/R and the star's internal structure, and its determination could provide an important constraint to the neutron star radius. We show that Love numbers of self-bound strange quark matter stars are qualitatively different from those of normal neutron stars. Observations of the tidal signature from coalescing compact binaries could therefore provide an important, and possibly unique, way to distinguish self-bound strange quark stars from normal neutron stars. Tidal signatures from self-bound strange quark stars with masses smaller than 1 M☉ are substantially smaller than those of normal stars owing to their smaller radii. Thus tidal signatures of stars less massive than 1 M☉ are probably not detectable with Advanced LIGO. For stars with masses in the range 1-2 M☉, the anticipated efficiency of the proposed Einstein telescope would be required for the detection of tidal signatures.
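
    For reference, the textbook relation linking the quadrupole tidal Love number k2, the compactness C = M/R, and the tidal deformability that enters the accumulated phase shift is sketched below (geometric units G = c = 1; the notation is assumed and not taken from the paper itself).

```latex
\lambda = \frac{2}{3}\,k_2\,R^{5},
\qquad
\Lambda \equiv \frac{\lambda}{M^{5}}
        = \frac{2}{3}\,k_2\,\left(\frac{R}{M}\right)^{5}
        = \frac{2}{3}\,k_2\,C^{-5},
\qquad
C = \frac{M}{R}.
```
    The strong C^(-5) dependence is why the smaller radii of self-bound quark stars translate into substantially smaller tidal signatures.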

  20. Rainbow: a tool for large-scale whole-genome sequencing data analysis using cloud computing.

    Science.gov (United States)

    Zhao, Shanrong; Prenger, Kurt; Smith, Lance; Messina, Thomas; Fan, Hongtao; Jaeger, Edward; Stephens, Susan

    2013-06-27

    Technical improvements have decreased sequencing costs and, as a result, the size and number of genomic datasets have increased rapidly. Because of the lower cost, large amounts of sequence data are now being produced by small to midsize research groups. Crossbow is a software tool that can detect single nucleotide polymorphisms (SNPs) in whole-genome sequencing (WGS) data from a single subject; however, Crossbow has a number of limitations when applied to multiple subjects from large-scale WGS projects. The data storage and CPU resources that are required for large-scale whole genome sequencing data analyses are too large for many core facilities and individual laboratories to provide. To help meet these challenges, we have developed Rainbow, a cloud-based software package that can assist in the automation of large-scale WGS data analyses. Here, we evaluated the performance of Rainbow by analyzing 44 different whole-genome-sequenced subjects. Rainbow has the capacity to process genomic data from more than 500 subjects in two weeks using cloud computing provided by the Amazon Web Service. The time includes the import and export of the data using Amazon Import/Export service. The average cost of processing a single sample in the cloud was less than 120 US dollars. Compared with Crossbow, the main improvements incorporated into Rainbow include the ability: (1) to handle BAM as well as FASTQ input files; (2) to split large sequence files for better load balance downstream; (3) to log the running metrics in data processing and monitoring multiple Amazon Elastic Compute Cloud (EC2) instances; and (4) to merge SOAPsnp outputs for multiple individuals into a single file to facilitate downstream genome-wide association studies. Rainbow is a scalable, cost-effective, and open-source tool for large-scale WGS data analysis. For human WGS data sequenced by either the Illumina HiSeq 2000 or HiSeq 2500 platforms, Rainbow can be used straight out of the box. Rainbow is available
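
    As a purely illustrative sketch of the "split large sequence files for better load balance" step listed above (this is not Rainbow's code; the chunk size and file naming are invented), a FASTQ file can be cut on 4-line record boundaries before dispatching chunks to separate EC2 workers:

```python
# Split a FASTQ file into chunks of a fixed number of reads (4 lines per read).
from itertools import islice

def split_fastq(path: str, reads_per_chunk: int = 1_000_000) -> list:
    """Write fixed-size chunks of a FASTQ file and return the chunk file names."""
    chunk_paths = []
    with open(path) as fq:
        chunk_index = 0
        while True:
            lines = list(islice(fq, 4 * reads_per_chunk))
            if not lines:
                break
            out_path = f"{path}.part{chunk_index:04d}"
            with open(out_path, "w") as out:
                out.writelines(lines)
            chunk_paths.append(out_path)
            chunk_index += 1
    return chunk_paths
```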

  1. 5th Annual Provider Software Buyer's Guide.

    Science.gov (United States)

    1995-03-01

    To help long term care providers find new ways to improve quality of care and efficiency, PROVIDER presents the fifth annual listing of software firms marketing computer programs for all areas of long term care operations. On the following five pages, more than 70 software firms display their wares, with programs such as minimum data set and care planning, dietary, accounting and financials, case mix, and medication administration records. The guide also charts compatible hardware, integration ability, telephone numbers, company contacts, and easy-to-use reader service numbers.

  2. Turbulent mixing of a slightly supercritical van der Waals fluid at low-Mach number

    International Nuclear Information System (INIS)

    Battista, F.; Casciola, C. M.; Picano, F.

    2014-01-01

    Supercritical fluids near the critical point are characterized by liquid-like densities and gas-like transport properties. These features are purposely exploited in different contexts ranging from natural products extraction/fractionation to aerospace propulsion. A large part of these studies concerns this last context, focusing on the dynamics of supercritical fluids at high Mach number where compressibility and thermodynamics strictly interact. Despite their widespread use also at low Mach number, the turbulent mixing properties of slightly supercritical fluids have still not been investigated in detail in this regime. This topic is addressed here by dealing with Direct Numerical Simulations of a coaxial jet of a slightly supercritical van der Waals fluid. Since acoustic effects are irrelevant in the low Mach number conditions found in many industrial applications, the numerical model is based on a suitable low-Mach number expansion of the governing equations. According to experimental observations, the weakly supercritical regime is characterized by the formation of finger-like structures – the so-called ligaments – in the shear layers separating the two streams. The mechanism of ligament formation at vanishing Mach number is extracted from the simulations and a detailed statistical characterization is provided. Ligaments always form whenever a high density contrast occurs, independently of real or perfect gas behaviors. The difference between real and perfect gas conditions is found in the ligament small-scale structure. More intense density gradients and thinner interfaces characterize the near critical fluid in comparison with the smoother behavior of the perfect gas. A phenomenological interpretation is here provided on the basis of the real gas thermodynamics properties

  3. Turbulent mixing of a slightly supercritical van der Waals fluid at low-Mach number

    Energy Technology Data Exchange (ETDEWEB)

    Battista, F.; Casciola, C. M. [Department of Mechanical and Aerospace Engineering, Sapienza University, via Eudossiana 18, 00184 Rome (Italy); Picano, F. [Department of Industrial Engineering, University of Padova, via Venezia 1, 35131 Padova (Italy)

    2014-05-15

    Supercritical fluids near the critical point are characterized by liquid-like densities and gas-like transport properties. These features are purposely exploited in different contexts ranging from natural products extraction/fractionation to aerospace propulsion. A large part of these studies concerns this last context, focusing on the dynamics of supercritical fluids at high Mach number where compressibility and thermodynamics strictly interact. Despite their widespread use also at low Mach number, the turbulent mixing properties of slightly supercritical fluids have still not been investigated in detail in this regime. This topic is addressed here by dealing with Direct Numerical Simulations of a coaxial jet of a slightly supercritical van der Waals fluid. Since acoustic effects are irrelevant in the low Mach number conditions found in many industrial applications, the numerical model is based on a suitable low-Mach number expansion of the governing equations. According to experimental observations, the weakly supercritical regime is characterized by the formation of finger-like structures – the so-called ligaments – in the shear layers separating the two streams. The mechanism of ligament formation at vanishing Mach number is extracted from the simulations and a detailed statistical characterization is provided. Ligaments always form whenever a high density contrast occurs, independently of real or perfect gas behaviors. The difference between real and perfect gas conditions is found in the ligament small-scale structure. More intense density gradients and thinner interfaces characterize the near critical fluid in comparison with the smoother behavior of the perfect gas. A phenomenological interpretation is here provided on the basis of the real gas thermodynamics properties.
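
    The low-Mach-number expansion referred to in both records has, in its generic textbook form (the exact formulation used by the authors may differ), the pressure split

```latex
p(\mathbf{x},t) \;=\; p_0(t) \;+\; \gamma\,\mathrm{Ma}^{2}\,p_1(\mathbf{x},t) \;+\; O(\mathrm{Ma}^{4}),
```
    where only the spatially uniform thermodynamic pressure p_0 enters the equation of state (here of van der Waals type), while the hydrodynamic perturbation p_1 drives the momentum balance; acoustic waves are thereby filtered out of the governing equations.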

  4. Jet Impingement Heat Transfer at High Reynolds Numbers and Large Density Variations

    DEFF Research Database (Denmark)

    Jensen, Michael Vincent; Walther, Jens Honore

    2010-01-01

    Jet impingement heat transfer from a round gas jet to a flat wall has been investigated numerically in a configuration with H/D=2, where H is the distance from the jet inlet to the wall and D is the jet diameter. The jet Reynolds number was 361000 and the density ratio across the wall boundary layer was 3.3 due to a substantial temperature difference of 1600 K between jet and wall. Results are presented which indicate very high heat flux levels and it is demonstrated that the jet inlet turbulence intensity significantly influences the heat transfer results, especially in the stagnation region. The results also show a noticeable difference in the heat transfer predictions when applying different turbulence models. Furthermore, calculations were performed to study the effect of applying temperature dependent thermophysical properties versus constant properties and the effect of calculating the gas...

  5. Vegetation-Induced Roughness in Low-Reynold's Number Flows

    Science.gov (United States)

    Piercy, C. D.; Wynn, T. M.

    2008-12-01

    Wetlands are important ecosystems, providing habitat for wildlife and fish and shellfish production, water storage, erosion control, and water quality improvement and preservation. Models to estimate hydraulic resistance due to vegetation in emergent wetlands are crucial to good wetland design and analysis. The goal of this project is to improve modeling of emergent wetlands by linking properties of the vegetation to flow. Existing resistance equations such as Hoffmann (2004), Kadlec (1990), Moghadam and Kouwen (1997), Nepf (1999), and Stone and Shen (2002) were evaluated. A large outdoor vegetated flume was constructed at the Price's Fork Research Center near Blacksburg, Virginia to measure flow and water surface slope through woolgrass (Scirpus cyperinus), a common native emergent wetland plant. Measurements of clump and stem density, diameter, and volume, blockage factor, and stiffness were made after each set of flume runs. Flow rates through the flume were low (3-4 L/s), resulting in very low stem-Reynolds numbers (15-102). Since experimental flow conditions were in the laminar to transitional range, most of the models considered did not predict velocity or stage accurately except for conditions in which the stem-Reynolds number approached 100. At low stem-Reynolds numbers, the drag coefficient is inversely proportional to the Reynolds number and can vary greatly with flow conditions. Most of the models considered assumed a stem-Reynolds number in the 100-105 range in which the drag coefficient is relatively constant and as a result did not predict velocity or stage accurately except for conditions in which the stem-Reynolds number approached 100. The only model that accurately predicted stem layer velocity was the Kadlec (1990) model since it does not make assumptions about flow regime; instead, the parameters are adjusted according to the site conditions. Future work includes relating the parameters used to fit the Kadlec (1990) model to measured vegetation
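
    For orientation, the stem Reynolds number quoted above is conventionally defined as below (symbols assumed: u the approach velocity, d the stem diameter, ν the kinematic viscosity); in the laminar range the drag coefficient varies roughly inversely with it, whereas the constant drag coefficient assumed by most of the models holds only at much higher Reynolds numbers.

```latex
\mathrm{Re}_d = \frac{u\,d}{\nu},
\qquad
C_d \sim \mathrm{Re}_d^{-1} \quad (\text{laminar range}),
\qquad
C_d \approx \text{const.} \quad (\text{higher } \mathrm{Re}_d).
```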

  6. Atomic Fisher information versus atomic number

    International Nuclear Information System (INIS)

    Nagy, A.; Sen, K.D.

    2006-01-01

    It is shown that the Thomas-Fermi Fisher information is negative. A slightly more sophisticated model proposed by Gaspar provides a qualitatively correct expression for the Fisher information: Gaspar's Fisher information is proportional to the two-thirds power of the atomic number. Accurate numerical calculations show an almost linear dependence on the atomic number.
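
    Schematically (notation assumed, not taken from the paper), the two behaviours contrasted above are

```latex
I_{\text{Gaspar}}(Z) \;\propto\; Z^{2/3},
\qquad
I_{\text{numerical}}(Z) \;\approx\; a\,Z + b \quad (a,\, b \ \text{fit constants}).
```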

  7. Introduction: Scaling and structure in high Reynolds number wall-bounded flows

    International Nuclear Information System (INIS)

    McKeon, B.J.; Sreenivasan, K.R.

    2007-05-01

    The papers discussed in this report deal with the following aspects: fundamental scaling relations for canonical flows and the asymptotic approach to infinite Reynolds numbers; large and very large scales in near-wall turbulence; the influence of roughness and finite Reynolds number effects; comparison between internal and external flows and the universality of the near-wall region; qualitative and quantitative models of the turbulent boundary layer; the neutrally stable atmospheric surface layer as a model for a canonical zero-pressure-gradient boundary layer (author)

  8. The effects of large beach debris on nesting sea turtles

    Science.gov (United States)

    Fujisaki, Ikuko; Lamont, Margaret M.

    2016-01-01

    A field experiment was conducted to understand the effects of large beach debris on sea turtle nesting behavior as well as the effectiveness of large debris removal for habitat restoration. Large natural and anthropogenic debris were removed from one of three sections of a sea turtle nesting beach and distributions of nests and false crawls (non-nesting crawls) in pre- (2011–2012) and post- (2013–2014) removal years in the three sections were compared. The number of nests increased 200% and the number of false crawls increased 55% in the experimental section, whereas a corresponding increase in number of nests and false crawls was not observed in the other two sections where debris removal was not conducted. The proportion of nest and false crawl abundance in all three beach sections was significantly different between pre- and post-removal years. The nesting success, the percent of successful nests in total nesting attempts (number of nests + false crawls), also increased from 24% to 38%; however the magnitude of the increase was comparably small because both the number of nests and false crawls increased, and thus the proportion of the nesting success in the experimental beach in pre- and post-removal years was not significantly different. The substantial increase in sea turtle nesting activities after the removal of large debris indicates that large debris may have an adverse impact on sea turtle nesting behavior. Removal of large debris could be an effective restoration strategy to improve sea turtle nesting.

  9. Conflict Misleads Large Carnivore Management and Conservation: Brown Bears and Wolves in Spain.

    Science.gov (United States)

    Fernández-Gil, Alberto; Naves, Javier; Ordiz, Andrés; Quevedo, Mario; Revilla, Eloy; Delibes, Miguel

    2016-01-01

    Large carnivores inhabiting human-dominated landscapes often interact with people and their properties, leading to conflict scenarios that can mislead carnivore management and, ultimately, jeopardize conservation. In northwest Spain, brown bears Ursus arctos are strictly protected, whereas sympatric wolves Canis lupus are subject to lethal control. We explored ecological, economic and societal components of conflict scenarios involving large carnivores and damages to human properties. We analyzed the relation between complaints of depredations by bears and wolves on beehives and livestock, respectively, and bear and wolf abundance, livestock heads, number of culled wolves, amount of paid compensations, and media coverage. We also evaluated the efficiency of wolf culling to reduce depredations on livestock. Bear damages to beehives correlated positively to the number of female bears with cubs of the year. Complaints of wolf predation on livestock were unrelated to livestock numbers; instead, they correlated positively to the number of wild ungulates harvested during the previous season, the number of wolf packs, and to wolves culled during the previous season. Compensations for wolf complaints were fivefold higher than for bears, but media coverage of wolf damages was thirtyfold higher. Media coverage of wolf damages was unrelated to the actual costs of wolf damages, but the amount of news correlated positively to wolf culling. However, wolf culling was followed by an increase in compensated damages. Our results show that culling of the wolf population failed in its goal of reducing damages, and suggest that management decisions are at least partly mediated by press coverage. We suggest that our results provide insight to similar scenarios, where several species of large carnivores share the landscape with humans, and management may be reactive to perceived conflicts.

  10. Hot-ion Bernstein wave with large k parallel

    International Nuclear Information System (INIS)

    Ignat, D.W.; Ono, M.

    1995-01-01

    The complex roots of the hot plasma dispersion relation in the ion cyclotron range of frequencies have been surveyed. Progressing from low to high values of perpendicular wave number k perpendicular we find first the cold plasma fast wave and then the well-known Bernstein wave, which is characterized by large dispersion, or large changes in k perpendicular for small changes in frequency or magnetic field. At still higher k perpendicular there can be two hot plasma waves with relatively little dispersion. The latter waves exist only for relatively large k parallel, the wave number parallel to the magnetic field, and are strongly damped unless the electron temperature is low compared to the ion temperature. Up to three mode conversions appear to be possible, but two mode conversions are seen consistently

  11. The number of expats is rather stable

    DEFF Research Database (Denmark)

    Andersen, Torben

    2008-01-01

    Aggregate data from the Danish economists' and engineers' trade unions show that during the last decade there has been stagnation in the number of expatriates. Taking into consideration that the three trade unions cover the very large majority of Danish knowledge workers occupying foreign jobs...

  12. Radical Software. Number Two. The Electromagnetic Spectrum.

    Science.gov (United States)

    Korot, Beryl, Ed.; Gershuny, Phyllis, Ed.

    1970-01-01

    In an effort to foster the innovative uses of television technology, this tabloid format periodical details social, educational, and artistic experiments with television and lists a large number of experimental videotapes available from various television-centered groups and individuals. The principal areas explored in this issue include cable…

  13. A novel internet-based geriatric education program for emergency medical services providers.

    Science.gov (United States)

    Shah, Manish N; Swanson, Peter A; Nobay, Flavia; Peterson, Lars-Kristofer N; Caprio, Thomas V; Karuza, Jurgis

    2012-09-01

    Despite caring for large numbers of older adults, prehospital emergency medical services (EMS) providers receive minimal geriatrics-specific training while obtaining their certification. Studies have shown that they desire further training to improve their comfort level and knowledge in caring for older adults, but continuing education programs to address these needs must account for each EMS provider's specific needs, consider each provider's learning styles, and provide an engaging, interactive experience. A novel, Internet-based, video podcast-based geriatric continuing education program was developed and implemented for EMS providers, and their perceived value of the program was evaluated. They found this resource to be highly valuable and were strongly supportive of the modality and the specific training provided. Some reported technical challenges and the inability to engage in a discussion to clarify topics as barriers. It was felt that both of these barriers could be addressed through programmatic and technological revisions. This study demonstrates the proof of concept of video podcast training to address deficiencies in EMS education regarding the care of older adults, although further work is needed to demonstrate the educational effect of video podcasts on the knowledge and skills of trainees. © 2012, Copyright the Authors Journal compilation © 2012, The American Geriatrics Society.

  14. A spatio-temporally compensated acousto-optic scanner for two-photon microscopy providing large field of view.

    Science.gov (United States)

    Kremer, Y; Léger, J-F; Lapole, R; Honnorat, N; Candela, Y; Dieudonné, S; Bourdieu, L

    2008-07-07

    Acousto-optic deflectors (AOD) are promising ultrafast scanners for non-linear microscopy. Their use has been limited until now by their small scanning range and by the spatial and temporal dispersions of the laser beam going through the deflectors. We show that the use of AODs with a large aperture (13 mm) compared to standard deflectors gives access to a much larger field of view while minimizing spatio-temporal distortions. An acousto-optic modulator (AOM) placed at a distance from the AOD is used to compensate for spatial and temporal dispersions. Fine tuning of the AOM-AOD setup using a frequency-resolved optical gating (GRENOUILLE) allows elimination of pulse front tilt, whereas spatial chirp is minimized thanks to the large-aperture AOD.

  15. On the binary expansions of algebraic numbers

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.; Crandall, Richard E.; Pomerance, Carl

    2003-07-01

    Employing concepts from additive number theory, together with results on binary evaluations and partial series, we establish bounds on the density of 1's in the binary expansions of real algebraic numbers. A central result is that if a real y has algebraic degree D > 1, then the number #(|y|, N) of 1-bits in the expansion of |y| through bit position N satisfies #(|y|, N) > C N^(1/D) for a positive number C (depending on y) and sufficiently large N. This in itself establishes the transcendency of a class of reals Σ_{n≥0} 1/2^f(n) where the integer-valued function f grows sufficiently fast; say, faster than any fixed power of n. By these methods we re-establish the transcendency of the Kempner–Mahler number Σ_{n≥0} 1/2^(2^n), yet we can also handle numbers with a substantially denser occurrence of 1's. Though the number z = Σ_{n≥0} 1/2^(n^2) has too high a 1's density for application of our central result, we are able to invoke some rather intricate number-theoretical analysis and extended computations to reveal aspects of the binary structure of z^2.

  16. Interaction of Number Magnitude and Auditory Localization.

    Science.gov (United States)

    Golob, Edward J; Lewald, Jörg; Jungilligens, Johannes; Getzmann, Stephan

    2016-01-01

    The interplay of perception and memory is very evident when we perceive and then recognize familiar stimuli. Conversely, information in long-term memory may also influence how a stimulus is perceived. Prior work on number cognition in the visual modality has shown that in Western number systems long-term memory for the magnitude of smaller numbers can influence performance involving the left side of space, while larger numbers have an influence toward the right. Here, we investigated in the auditory modality whether a related effect may bias the perception of sound location. Subjects (n = 28) used a swivel pointer to localize noise bursts presented from various azimuth positions. The noise bursts were preceded by a spoken number (1-9) or, as a nonsemantic control condition, numbers that were played in reverse. The relative constant error in noise localization (forward minus reversed speech) indicated a systematic shift in localization toward more central locations when the number was smaller and toward more peripheral positions when the preceding number magnitude was larger. These findings do not support the traditional left-right number mapping. Instead, the results may reflect an overlap between codes for number magnitude and codes for sound location as implemented by two channel models of sound localization, or possibly a categorical mapping stage of small versus large magnitudes. © The Author(s) 2015.

  17. How small firms contrast with large firms regarding perceptions, practices, and needs in the U.S

    Science.gov (United States)

    Urs Buehlmann; Matthew Bumgardner; Michael. Sperber

    2013-01-01

    As many larger secondary woodworking firms have moved production offshore and been adversely impacted by the recent housing downturn, smaller firms have become important to driving U.S. hardwood demand. This study compared and contrasted small and large firms on a number of factors to help determine the unique characteristics of small firms and to provide insights into...

  18. GRIP LANGLEY AEROSOL RESEARCH GROUP EXPERIMENT (LARGE) V1

    Data.gov (United States)

    National Aeronautics and Space Administration — Langley Aerosol Research Group Experiment (LARGE) measures ultrafine aerosol number density, total and non-volatile aerosol number density, dry aerosol size...

  19. Number-conserving random phase approximation with analytically integrated matrix elements

    International Nuclear Information System (INIS)

    Kyotoku, M.; Schmid, K.W.; Gruemmer, F.; Faessler, A.

    1990-01-01

    In the present paper a number conserving random phase approximation is derived as a special case of the recently developed random phase approximation in general symmetry projected quasiparticle mean fields. All the occurring integrals induced by the number projection are performed analytically after writing the various overlap and energy matrices in the random phase approximation equation as polynomials in the gauge angle. In the limit of a large number of particles the well-known pairing vibration matrix elements are recovered. We also present a new analytically number projected variational equation for the number conserving pairing problem

  20. Large-Area Neutron Detector based on Li-6 Pulse Mode Ionization Chamber

    International Nuclear Information System (INIS)

    Chung, K.; Ianakiev, K.D.; Swinhoe, M.T.; Makela, M.F.

    2005-01-01

    Prototypes of a Li-6 Pulse Mode Ionization Chamber (LiPMIC) have been in development for the past two years for the purpose of providing a large-area neutron detector. This system would be suitable for remote deployment for homeland security and counterterrorism needs at borders, ports, and nuclear facilities. A prototype LiPMIC is expected to provide a similar level of performance to the current industry standard, He-3 proportional counters, while keeping the initial cost of procurement down by an order of magnitude, especially where large numbers of detectors are required. The overall design aspects and the efficiency optimization process are discussed. Specifically, MCNP simulations of a single-cell prototype were performed and benchmarked against the experimental results. MCNP simulations of a three-dimensional array design show intrinsic efficiency comparable to that of an array of He-3 proportional counters. LiPMIC has shown steady progress toward fulfilling the design expectations, and future design modifications and optimizations are discussed.

  1. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

    Science.gov (United States)

    Diwadkar, Amit; Vaidya, Umesh

    2016-01-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994
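
    Because the sufficient condition above is expressed through the Laplacian eigenvalues of the nominal interconnection, a small illustrative computation (not the paper's margin formula; the network size and neighbour counts are arbitrary) shows how those eigenvalues change with the number of neighbours in a nearest-neighbour ring:

```python
# Laplacian spectrum of a k-nearest-neighbour ring network (illustrative only).
import numpy as np

def ring_laplacian(n: int, k: int) -> np.ndarray:
    """Laplacian of a ring of n nodes, each linked to its k neighbours on either side."""
    A = np.zeros((n, n))
    for i in range(n):
        for offset in range(1, k + 1):
            A[i, (i + offset) % n] = A[i, (i - offset) % n] = 1.0
    return np.diag(A.sum(axis=1)) - A

n = 100
for k in (1, 2, 5, 10):
    eig = np.sort(np.linalg.eigvalsh(ring_laplacian(n, k)))
    print(f"k={k:2d}  lambda_2={eig[1]:.4f}  lambda_max={eig[-1]:.4f}")
```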

  2. Gaming the Law of Large Numbers

    Science.gov (United States)

    Hoffman, Thomas R.; Snapp, Bart

    2012-01-01

    Many view mathematics as a rich and wonderfully elaborate game. In turn, games can be used to illustrate mathematical ideas. Fibber's Dice, an adaptation of the game Liar's Dice, is a fast-paced game that rewards gutsy moves and favors the underdog. It also brings to life concepts arising in the study of probability. In particular, Fibber's Dice…
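
    The underlying law of large numbers is easy to demonstrate directly; the toy simulation below (ordinary fair dice, not the Fibber's Dice rules) shows the sample mean of die rolls settling toward the expected value of 3.5 as the number of rolls grows.

```python
# Running average of fair-die rolls converging to the expectation 3.5.
import random

random.seed(1)
for n in (10, 100, 1_000, 10_000, 100_000):
    rolls = [random.randint(1, 6) for _ in range(n)]
    print(f"n={n:>6}  sample mean={sum(rolls) / n:.3f}  (expected 3.5)")
```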

  3. Healthcare quality management in Switzerland--a survey among providers.

    Science.gov (United States)

    Kaderli, Reto; Pfortmueller, Carmen A; Businger, Adrian P

    2012-04-27

    In the last decade assessing the quality of healthcare has become increasingly important across the world. Switzerland lacks a detailed overview of how quality management is implemented and of its effects on medical procedures and patients' concerns. This study aimed to examine the systematics of quality management in Switzerland by assessing the providers and collected parameters of current quality initiatives. In summer 2011 we contacted all of the medical societies in Switzerland, the Federal Office of Public Health, the Swiss Medical Association (FMH) and the head of Swiss medical insurance providers, to obtain detailed information on current quality initiatives. All quality initiatives featuring standardised parameter assessment were included. Of the current 45 initiatives, 19 were powered by medical societies, five by hospitals, 11 by non-medical societies, two by the government, two by insurance companies or related institutions and six by unspecified institutions. In all, 24 medical registers, five seals of quality, five circles of quality, two self-assessment tools, seven superior entities, one checklist and one combined project existed. The cost of treatment was evaluated by four initiatives. A data report was released by 24 quality initiatives. The wide variety and the large number of 45 recorded quality initiatives provides a promising basis for effective healthcare quality management in Switzerland. However, an independent national supervisory authority should be appointed to provide an effective review of all quality initiatives and their transparency and coordination.

  4. Prospecting direction and favourable target areas for exploration of large and super-large uranium deposits in China

    International Nuclear Information System (INIS)

    Liu Xingzhong

    1993-01-01

    A host of large uranium deposits have been successively discovered abroad by means of geological exploration, metallogenetic model studies and the application of new geophysical and geochemical methods since the 1970s. Thorough geological research relevant to prospecting for super-large uranium deposits has attracted great attention from the worldwide geological community. The important task for the vast number of uranium geological workers is to make an effort to discover more large and super-large uranium deposits in China. The author comprehensively analyses the regional geological setting and geological metallogenetic conditions for super-large uranium deposits in the world. Comparative studies have been undertaken, and the prospecting direction and favourable target areas for the exploration of super-large uranium deposits in China have been proposed

  5. Multilocus lod scores in large pedigrees: combination of exact and approximate calculations.

    Science.gov (United States)

    Tong, Liping; Thompson, Elizabeth

    2008-01-01

    To detect the positions of disease loci, lod scores are calculated at multiple chromosomal positions given trait and marker data on members of pedigrees. Exact lod score calculations are often impossible when the size of the pedigree and the number of markers are both large. In this case, a Markov Chain Monte Carlo (MCMC) approach provides an approximation. However, to provide accurate results, mixing performance is always a key issue in these MCMC methods. In this paper, we propose two methods to improve MCMC sampling and hence obtain more accurate lod score estimates in shorter computation time. The first improvement generalizes the block-Gibbs meiosis (M) sampler to multiple meiosis (MM) sampler in which multiple meioses are updated jointly, across all loci. The second one divides the computations on a large pedigree into several parts by conditioning on the haplotypes of some 'key' individuals. We perform exact calculations for the descendant parts where more data are often available, and combine this information with sampling of the hidden variables in the ancestral parts. Our approaches are expected to be most useful for data on a large pedigree with a lot of missing data. (c) 2007 S. Karger AG, Basel

  6. Random numbers spring from alpha decay

    International Nuclear Information System (INIS)

    Frigerio, N.A.; Sanathanan, L.P.; Morley, M.; Clark, N.A.; Tyler, S.A.

    1980-05-01

    Congruential random number generators, which are widely used in Monte Carlo simulations, are deficient in that the numbers they generate are concentrated in a relatively small number of hyperplanes. While this deficiency may not be a limitation in small Monte Carlo studies involving a few variables, it introduces a significant bias in large simulations requiring high resolution. This bias was recognized and assessed during preparations for an accident analysis study of nuclear power plants. This report describes a random number device based on the radioactive decay of alpha particles from a 235U source in a high-resolution gas proportional counter. The signals were fed to a 4096-channel analyzer and for each channel the frequency of signals registered in a 20,000-microsecond interval was recorded. The parity bits of these frequency counts (0 for an even count and 1 for an odd count) were then assembled in sequence to form 31-bit binary random numbers and transcribed to a magnetic tape. This cycle was repeated as many times as were necessary to create 3 million random numbers. The frequency distribution of counts from the present device conforms to the Brockwell-Moyal distribution, which takes into account the dead time of the counter (both the dead time and decay constant of the underlying Poisson process were estimated). Analysis of the count data and tests of randomness on a sample set of the 31-bit binary numbers indicate that this random number device is a highly reliable source of truly random numbers. Its use is, therefore, recommended in Monte Carlo simulations for which the congruential pseudorandom number generators are found to be inadequate. 6 figures, 5 tables
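
    The parity-bit scheme described above can be sketched in a few lines; the counts below are synthetic stand-ins for the 4096-channel analyzer data, and the function name is invented.

```python
# Map per-channel counts to parity bits and pack them into 31-bit integers.
import random

random.seed(0)
counts = [random.randint(0, 50) for _ in range(31 * 4)]   # stand-in channel counts

def counts_to_random_numbers(counts, width=31):
    """Take the parity of each count (0 = even, 1 = odd) and pack successive bits."""
    bits = [c & 1 for c in counts]
    numbers = []
    for i in range(0, len(bits) - width + 1, width):
        word = 0
        for b in bits[i:i + width]:
            word = (word << 1) | b
        numbers.append(word)
    return numbers

print(counts_to_random_numbers(counts))   # four 31-bit integers
```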

  7. Initial condition effects on large scale structure in numerical simulations of plane mixing layers

    Science.gov (United States)

    McMullan, W. A.; Garrett, S. J.

    2016-01-01

    In this paper, Large Eddy Simulations are performed on the spatially developing plane turbulent mixing layer. The simulated mixing layers originate from initially laminar conditions. The focus of this research is on the effect of the nature of the imposed fluctuations on the large-scale spanwise and streamwise structures in the flow. Two simulations are performed; one with low-level three-dimensional inflow fluctuations obtained from pseudo-random numbers, the other with physically correlated fluctuations of the same magnitude obtained from an inflow generation technique. Where white-noise fluctuations provide the inflow disturbances, no spatially stationary streamwise vortex structure is observed, and the large-scale spanwise turbulent vortical structures grow continuously and linearly. These structures are observed to have a three-dimensional internal geometry with branches and dislocations. Where physically correlated fluctuations provide the inflow disturbances, a "streaky" streamwise structure that is spatially stationary is observed, with the large-scale turbulent vortical structures growing with the square-root of time. These large-scale structures are quasi-two-dimensional, on top of which the secondary structure rides. The simulation results are discussed in the context of the varying interpretations of mixing layer growth that have been postulated. Recommendations are made concerning the data required from experiments in order to produce accurate numerical simulation recreations of real flows.
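
    A toy contrast between the two inflow treatments compared above is sketched below (this is not the inflow generation technique used in the simulations; the AR(1) filter and its coefficient are arbitrary choices): pseudo-random white noise is essentially uncorrelated from sample to sample, while filtered noise carries a finite correlation length.

```python
# White-noise fluctuations versus AR(1)-filtered (correlated) fluctuations.
import numpy as np

rng = np.random.default_rng(3)
n = 4096
white = rng.normal(size=n)                    # uncorrelated pseudo-random signal

alpha = 0.95                                  # sets the correlation length
correlated = np.empty(n)
correlated[0] = white[0]
for i in range(1, n):
    correlated[i] = alpha * correlated[i - 1] + np.sqrt(1 - alpha**2) * white[i]

def autocorr(x, lag):
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

print("lag-10 autocorrelation, white:     ", round(autocorr(white, 10), 3))
print("lag-10 autocorrelation, correlated:", round(autocorr(correlated, 10), 3))
```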

  8. Clinical Trials With Large Numbers of Variables: Important Advantages of Canonical Analysis.

    Science.gov (United States)

    Cleophas, Ton J

    2016-01-01

    Canonical analysis assesses the combined effects of a set of predictor variables on a set of outcome variables, but it is little used in clinical trials despite the omnipresence of multiple variables. The aim of this study was to assess the performance of canonical analysis as compared with traditional multivariate methods using multivariate analysis of covariance (MANCOVA). As an example, a simulated data file with 12 gene expression levels and 4 drug efficacy scores was used. The correlation coefficient between the 12 predictor and 4 outcome variables was 0.87 (P = 0.0001), meaning that 76% of the variability in the outcome variables was explained by the 12 covariates. Repeated testing after the removal of 5 unimportant predictor variables and 1 outcome variable produced virtually the same overall result. The MANCOVA identified identical unimportant variables, but it was unable to provide overall statistics. (1) Canonical analysis is remarkable, because it can handle many more variables than traditional multivariate methods such as MANCOVA can. (2) At the same time, it accounts for the relative importance of the separate variables, their interactions and differences in units. (3) Canonical analysis provides overall statistics of the effects of sets of variables, whereas traditional multivariate methods only provide the statistics of the separate variables. (4) Unlike other methods for combining the effects of multiple variables such as factor analysis/partial least squares, canonical analysis is scientifically entirely rigorous. (5) Limitations include that it is less flexible than factor analysis/partial least squares, because only 2 sets of variables are used and because multiple solutions instead of one are offered. We do hope that this article will stimulate clinical investigators to start using this remarkable method.
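
    A minimal sketch of a canonical analysis in scikit-learn is shown below; the data are random stand-ins rather than the simulated file used in the study, and the variable counts merely mirror the 12-predictor/4-outcome setup.

```python
# Canonical correlation analysis of a predictor block X against an outcome block Y.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                                # 12 predictor variables
Y = 0.5 * X[:, :4] + rng.normal(scale=0.5, size=(200, 4))     # 4 outcome variables

cca = CCA(n_components=2).fit(X, Y)
X_c, Y_c = cca.transform(X, Y)
r1 = np.corrcoef(X_c[:, 0], Y_c[:, 0])[0, 1]                  # first canonical correlation
print(f"first canonical correlation: {r1:.2f}")
```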

  9. 78 FR 26244 - Updating of Employer Identification Numbers

    Science.gov (United States)

    2013-05-06

    ... Number, or EIN. Employers are required to know the identity of their responsible party. The amount of...-BK02 Updating of Employer Identification Numbers AGENCY: Internal Revenue Service (IRS), Treasury... assigned an employer identification number (EIN) to provide updated information to the IRS in the manner...

  10. Topological quantum numbers in nonrelativistic physics

    CERN Document Server

    Thouless, David James

    1998-01-01

    Topological quantum numbers are distinguished from quantum numbers based on symmetry because they are insensitive to the imperfections of the systems in which they are observed. They have become very important in precision measurements in recent years, and provide the best measurements of voltage and electrical resistance. This book describes the theory of such quantum numbers, starting with Dirac's argument for the quantization of electric charge, and continuing with discussions on the helium superfluids, flux quantization and the Josephson effect in superconductors, the quantum Hall effect,

  11. Predicting number of hospitalization days based on health insurance claims data using bagged regression trees.

    Science.gov (United States)

    Xie, Yang; Schreier, Günter; Chang, David C W; Neubauer, Sandra; Redmond, Stephen J; Lovell, Nigel H

    2014-01-01

    Healthcare administrators worldwide are striving to lower the cost of care while improving the quality of care given. Better clinical and administrative decision making is therefore needed to achieve both goals. Anticipating outcomes such as the number of hospitalization days could contribute to addressing this problem. In this paper, a method was developed, using large-scale health insurance claims data, to predict the number of hospitalization days in a population. We utilized a regression decision tree algorithm, along with insurance claim data from 300,000 individuals over three years, to provide predictions of the number of days in hospital in the third year, based on medical admissions and claims data from the first two years. Our method performs well in the general population. For the population aged 65 years and over, the predictive model significantly improves predictions over a baseline method (predicting a constant number of days for each patient), and achieved a specificity of 70.20% and sensitivity of 75.69% in classifying these subjects into two categories of 'no hospitalization' and 'at least one day in hospital'.
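
    An illustrative sketch of the bagged-regression-tree setup named in the title is given below; the synthetic features stand in for claims-derived variables from the first two years, and all hyperparameters are arbitrary.

```python
# Bagged regression trees predicting days in hospital from synthetic features.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 20))                                           # claims-derived features
y = np.maximum(0, 2 * X[:, 0] + X[:, 1] + rng.normal(size=5000)).round()  # days in year 3

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = BaggingRegressor(DecisionTreeRegressor(max_depth=8),
                         n_estimators=50, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("fraction predicted 'at least one day in hospital':", (pred >= 0.5).mean())
```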

  12. CD3+/CD16+CD56+ cell numbers in peripheral blood are correlated with higher tumor burden in patients with diffuse large B-cell lymphoma

    Directory of Open Access Journals (Sweden)

    Anna Twardosz

    2011-04-01

    Full Text Available Diffuse large B-cell lymphoma is the commonest histological type of malignant lymphoma, and remains incurable in many cases. Developing more efficient immunotherapy strategies will require better understanding of the disorders of immune responses in cancer patients. NKT (natural killer-like T) cells were originally described as a unique population of T cells with the co-expression of NK cell markers. Apart from their role in protecting against microbial pathogens and controlling autoimmune diseases, NKT cells have been recently revealed as one of the key players in the immune responses against tumors. The objective of this study was to evaluate the frequency of CD3+/CD16+CD56+ cells in the peripheral blood of 28 diffuse large B-cell lymphoma (DLBCL) patients in correlation with clinical and laboratory parameters. Median percentages of CD3+/CD16+CD56+ cells were significantly lower in patients with DLBCL compared to healthy donors (7.37% vs. 9.01%, p = 0.01; 4.60% vs. 5.81%, p = 0.03), although there were no differences in absolute counts. The frequency and the absolute numbers of CD3+/CD16+CD56+ cells were lower in advanced clinical stages than in earlier ones. The median percentage of CD3+/CD16+CD56+ cells in patients in Ann Arbor stages 1–2 was 5.55% vs. 3.15% in stages 3–4 (p = 0.02), with median absolute counts respectively 0.26 G/L vs. 0.41 G/L (p = 0.02). The percentage and absolute numbers of CD3+/CD16+CD56+ cells were significantly higher in DLBCL patients without B-symptoms compared to the patients with B-symptoms (5.51% vs. 2.46%, p = 0.04; 0.21 G/L vs. 0.44 G/L, p = 0.04). The percentage of CD3+/CD16+CD56+ cells correlated adversely with serum lactate dehydrogenase (R = –0.445; p < 0.05), which might influence NKT count. These figures suggest a relationship between higher tumor burden and more aggressive disease and decreased NKT numbers. But it remains to be explained whether low NKT cell counts in the peripheral blood of patients with DLBCL are the result

  13. Large Eddy Simulation (LES) for IC Engine Flows

    Directory of Open Access Journals (Sweden)

    Kuo Tang-Wei

    2013-10-01

    Full Text Available Numerical computations are carried out using an engineering-level Large Eddy Simulation (LES) model that is provided by a commercial CFD code CONVERGE. The analytical framework and experimental setup consist of a single cylinder engine with Transparent Combustion Chamber (TCC) under motored conditions. A rigorous working procedure for comparing and analyzing the results from simulation and high speed Particle Image Velocimetry (PIV) experiments is documented in this work. The following aspects of LES are analyzed using this procedure: number of cycles required for convergence with adequate accuracy; effect of mesh size, time step, sub-grid-scale (SGS) turbulence models and boundary condition treatments; application of the proper orthogonal decomposition (POD) technique.
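
    Of the items listed above, the proper orthogonal decomposition step is easy to sketch. The snapshot matrix below is a random stand-in for the PIV/LES velocity fields, and the implementation is the generic SVD-based POD rather than anything specific to CONVERGE.

```python
# SVD-based proper orthogonal decomposition of a snapshot matrix.
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.normal(size=(2000, 150))      # rows: grid points, columns: time snapshots

mean_field = snapshots.mean(axis=1, keepdims=True)
fluctuations = snapshots - mean_field
modes, singular_values, _ = np.linalg.svd(fluctuations, full_matrices=False)

energy = singular_values**2 / np.sum(singular_values**2)
print("energy captured by the first 5 POD modes:", energy[:5].sum())
```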

  14. DISCOVERY OF A LARGE NUMBER OF CANDIDATE PROTOCLUSTERS TRACED BY ∼15 Mpc-SCALE GALAXY OVERDENSITIES IN COSMOS

    International Nuclear Information System (INIS)

    Chiang, Yi-Kuan; Gebhardt, Karl; Overzier, Roderik

    2014-01-01

    To demonstrate the feasibility of studying the epoch of massive galaxy cluster formation in a more systematic manner using current and future galaxy surveys, we report the discovery of a large sample of protocluster candidates in the 1.62 deg² COSMOS/UltraVISTA field traced by optical/infrared selected galaxies using photometric redshifts. By comparing properly smoothed three-dimensional galaxy density maps of the observations and a set of matched simulations incorporating the dominant observational effects (galaxy selection and photometric redshift uncertainties), we first confirm that the observed ∼15 comoving Mpc-scale galaxy clustering is consistent with ΛCDM models. Using further the relation between high-z overdensity and the present day cluster mass calibrated in these matched simulations, we found 36 candidate structures at 1.6 < z < 3.1, showing overdensities consistent with the progenitors of M(z=0) ∼ 10^15 M☉ clusters. Taking into account the significant upward scattering of lower mass structures, the probabilities for the candidates to have at least M(z=0) ∼ 10^14 M☉ are ∼70%. For each structure, about 15%-40% of photometric galaxy candidates are expected to be true protocluster members that will merge into a cluster-scale halo by z = 0. With solely photometric redshifts, we successfully rediscover two spectroscopically confirmed structures in this field, suggesting that our algorithm is robust. This work generates a large sample of uniformly selected protocluster candidates, providing rich targets for spectroscopic follow-up and subsequent studies of cluster formation. Meanwhile, it demonstrates the potential for probing early cluster formation with upcoming redshift surveys such as the Hobby-Eberly Telescope Dark Energy Experiment and the Subaru Prime Focus Spectrograph survey

  15. Measuring happiness in large population

    Science.gov (United States)

    Wenas, Annabelle; Sjahputri, Smita; Takwin, Bagus; Primaldhi, Alfindra; Muhamad, Roby

    2016-01-01

    The ability to know emotional states for large number of people is important, for example, to ensure the effectiveness of public policies. In this study, we propose a measure of happiness that can be used in large scale population that is based on the analysis of Indonesian language lexicons. Here, we incorporate human assessment of Indonesian words, then quantify happiness on large-scale of texts gathered from twitter conversations. We used two psychological constructs to measure happiness: valence and arousal. We found that Indonesian words have tendency towards positive emotions. We also identified several happiness patterns during days of the week, hours of the day, and selected conversation topics.

  16. Gauge theory for baryon and lepton numbers with leptoquarks.

    Science.gov (United States)

    Duerr, Michael; Fileviez Pérez, Pavel; Wise, Mark B

    2013-06-07

    Models where the baryon (B) and lepton (L) numbers are local gauge symmetries that are spontaneously broken at a low scale are revisited. We find new extensions of the standard model which predict the existence of fermions that carry both baryon and lepton numbers (i.e., leptoquarks). The local baryonic and leptonic symmetries can be broken at a scale close to the electroweak scale and we do not need to postulate the existence of a large desert to satisfy the experimental constraints on baryon number violating processes like proton decay.

  17. Number theory in the spirit of Ramanujan

    CERN Document Server

    Berndt, Bruce C

    2006-01-01

    Ramanujan is recognized as one of the great number theorists of the twentieth century. Here now is the first book to provide an introduction to his work in number theory. Most of Ramanujan's work in number theory arose out of q-series and theta functions. This book provides an introduction to these two important subjects and to some of the topics in number theory that are inextricably intertwined with them, including the theory of partitions, sums of squares and triangular numbers, and the Ramanujan tau function. The majority of the results discussed here are originally due to Ramanujan or were rediscovered by him. Ramanujan did not leave us proofs of the thousands of theorems he recorded in his notebooks, and so it cannot be claimed that many of the proofs given in this book are those found by Ramanujan. However, they are all in the spirit of his mathematics. The subjects examined in this book have a rich history dating back to Euler and Jacobi, and they continue to be focal points of contemporary mathematic...

  18. Number Sense on the Number Line

    Science.gov (United States)

    Woods, Dawn Marie; Ketterlin Geller, Leanne; Basaraba, Deni

    2018-01-01

    A strong foundation in early number concepts is critical for students' future success in mathematics. Research suggests that visual representations, like a number line, support students' development of number sense by helping them create a mental representation of the order and magnitude of numbers. In addition, explicitly sequencing instruction…

  19. Large-eddy simulation of atmospheric flow over complex terrain

    Energy Technology Data Exchange (ETDEWEB)

    Bechmann, A.

    2006-11-15

    The present report describes the development and validation of a turbulence model designed for atmospheric flows based on the concept of Large-Eddy Simulation (LES). The background for the work is the high Reynolds number k-epsilon model, which has been implemented in a finite-volume code for the incompressible Reynolds-averaged Navier-Stokes equations (RANS). The k-epsilon model is traditionally used for RANS computations, but is here developed to also enable LES. LES is able to provide detailed descriptions of a wide range of engineering flows at low Reynolds numbers. For atmospheric flows, however, the high Reynolds numbers and the rough surface of the earth provide difficulties normally not compatible with LES. Since these issues are most severe near the surface, they are addressed by handling the near-surface region with RANS and using LES only above this region. Using this method, the developed turbulence model is able to handle both engineering and atmospheric flows and can be run in either RANS or LES mode. For LES simulations a time-dependent wind field that accurately represents the turbulent structures of a wind environment must be prescribed at the computational inlet. A method is implemented whereby the turbulent wind field from a separate LES simulation can be used as inflow. To avoid numerical dissipation of turbulence, special care is paid to the numerical method; e.g., the turbulence model is calibrated with the specific numerical scheme used. This is done by simulating decaying isotropic and homogeneous turbulence. Three atmospheric test cases are investigated in order to validate the behavior of the presented turbulence model. Simulation of the neutral atmospheric boundary layer illustrates the turbulence model's ability to generate and maintain the turbulent structures responsible for boundary layer transport processes. Velocity and turbulence profiles are in good agreement with measurements. Simulation of the flow over the Askervein hill is also

  20. Maps on large-scale air quality concentrations in the Netherlands. Report on 2008

    International Nuclear Information System (INIS)

    Velders, G.J.M.; Aben, J.M.M.; Blom, W.F.; Van Dam, J.D.; Elzenga, H.E.; Geilenkirchen, G.P.; Hammingh, P.; Hoen, A.; Jimmink, B.A.; Koelemeijer, R.B.A.; Matthijsen, J.; Peek, C.J.; Schilderman, C.B.W.; Van der Sluis, O.C.; De Vries, W.J.

    2008-01-01

    Decrease expected in the number of locations exceeding the air quality limit values. In the Netherlands, the number of locations where the European limit values for particulate matter and nitrogen dioxide will be exceeded is expected to decrease by 70-90% in the period up to 2011 and 2015, respectively. The limit value for particulate matter from 2011 onwards, and for nitrogen dioxide from 2015 onwards, is expected to be exceeded at a small number of locations in the Netherlands, based on standing and proposed Dutch and European policies. These locations are situated mainly in the Randstad, Netherlands, in the vicinity of motorways around the large cities and in the busiest streets in large cities. Whether the limit values will actually be exceeded depends also on local policies and meteorological fluctuations. This estimate is based on large-scale concentration maps (called GCN maps) of air quality components and on additional local contributions. The concentration maps provide the best possible estimate of large-scale air quality. The degree of uncertainty about the local concentrations of particulate matter and nitrogen dioxide is estimated to be approximately 20%. This report presents the methods used to produce the GCN maps and the included emissions. It also shows the differences with respect to the maps of 2007. These maps are used by local, provincial and other authorities. MNP emphasises that the uncertainties in the concentrations should be kept in mind when using these maps for planning, or when comparing concentrations with limit values. This also applies to the selection of local measures to improve air quality. The concentration maps are available online at http://www.mnp.nl/gcn.html