WorldWideScience

Sample records for comparing large numbers

  1. Large number discrimination by mosquitofish.

    Directory of Open Access Journals (Sweden)

    Christian Agrillo

    Full Text Available BACKGROUND: Recent studies have demonstrated that fish display rudimentary numerical abilities similar to those observed in mammals and birds. The mechanisms underlying the discrimination of small quantities (<4) were recently investigated while, to date, no study has examined the discrimination of large numerosities in fish. METHODOLOGY/PRINCIPAL FINDINGS: Subjects were trained to discriminate between two sets of small geometric figures using social reinforcement. In the first experiment mosquitofish were required to discriminate 4 from 8 objects with or without experimental control of the continuous variables that co-vary with number (area, space, density, total luminance). Results showed that fish can use numerical information alone to compare quantities, but that they preferentially use cumulative surface area as a proxy for number when this information is available. A second experiment investigated the influence of the total number of elements on the discrimination of large quantities. Fish proved able to discriminate up to 100 vs. 200 objects, without showing any significant decrease in accuracy compared with the 4 vs. 8 discrimination. The third experiment investigated the influence of the ratio between the numerosities. Performance was found to decrease with decreasing numerical distance. Fish were able to discriminate numbers when ratios were 1:2 or 2:3 but not when the ratio was 3:4. The performance of a sample of undergraduate students, tested non-verbally using the same sets of stimuli, largely overlapped that of fish. CONCLUSIONS/SIGNIFICANCE: Fish are able to use pure numerical information when discriminating between quantities larger than 4 units. As observed in human and non-human primates, the numerical system of fish appears to have virtually no upper limit, while the numerical ratio has a clear effect on performance. These similarities further reinforce the view of a common origin of non-verbal numerical systems in all

  2. On the Use of Educational Numbers: Comparative Constructions of Hierarchies by Means of Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Daniel Pettersson

    2016-01-01

    later the growing importance of transnational agencies and international, regional and national assessments. How to reference this article: Pettersson, D., Popkewitz, T. S., & Lindblad, S. (2016). On the Use of Educational Numbers: Comparative Constructions of Hierarchies by Means of Large-Scale Assessments. Espacio, Tiempo y Educación, 3(1), 177-202. doi: http://dx.doi.org/10.14516/ete.2016.003.001.10

  3. Thermal convection for large Prandtl numbers

    NARCIS (Netherlands)

    Grossmann, Siegfried; Lohse, Detlef

    2001-01-01

    The Rayleigh-Bénard theory by Grossmann and Lohse [J. Fluid Mech. 407, 27 (2000)] is extended towards very large Prandtl numbers Pr. The Nusselt number Nu is found here to be independent of Pr. However, for fixed Rayleigh numbers Ra a maximum in the Nu(Pr) dependence is predicted. We moreover offer

  4. Large number discrimination in newborn fish.

    Directory of Open Access Journals (Sweden)

    Laura Piffer

    Full Text Available Quantitative abilities have been reported in a wide range of species, including fish. Recent studies have shown that adult guppies (Poecilia reticulata) can spontaneously select the larger number of conspecifics. In particular, the evidence collected in the literature suggests the existence of two distinct systems of number representation: a precise system up to 4 units, and an approximate system for larger numbers. Spontaneous numerical abilities, however, seem to be limited to 4 units at birth, and it is currently unclear whether or not the large number system is absent during the first days of life. In the present study, we investigated whether newborn guppies can be trained to discriminate between large quantities. Subjects were required to discriminate between groups of dots with a 0.50 ratio (e.g., 7 vs. 14) in order to obtain a food reward. To dissociate the roles of number and of the continuous quantities that co-vary with numerical information (such as cumulative surface area, space and density), three different experiments were set up: in Exp. 1, number and continuous quantities were simultaneously available; in Exp. 2, we controlled for continuous quantities and only numerical information was available; in Exp. 3, numerical information was made irrelevant and only continuous quantities were available. Subjects successfully solved the tasks in Exp. 1 and 2, providing the first evidence of large number discrimination in newborn fish. No discrimination was found in Exp. 3, meaning that number acuity is better than spatial acuity. A comparison with the onset of numerical abilities observed in shoal-choice tests suggests that training procedures can promote the development of numerical abilities in guppies.

  5. Forecasting distribution of numbers of large fires

    Science.gov (United States)

    Haiganoush K. Preisler; Jeff Eidenshink; Stephen Howard; Robert E. Burgan

    2015-01-01

    Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however they indicate neither the chance that a large fire will occur, nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the...

  6. Calculation of large Reynolds number two-dimensional flow using discrete vortices with random walk

    International Nuclear Information System (INIS)

    Milinazzo, F.; Saffman, P.G.

    1977-01-01

    The numerical calculation of two-dimensional rotational flow at large Reynolds number is considered. The method of replacing a continuous distribution of vorticity by a finite number, N, of discrete vortices is examined, where the vortices move under their mutually induced velocities plus a random component to simulate the effects of viscosity. The accuracy of the method is studied by comparison with the exact solution for the decay of a circular vortex. It is found, and analytical arguments are produced in support, that the quantitative error is significant unless N is large compared with a characteristic Reynolds number. The mutually induced velocities are calculated both by direct summation and by the "cloud-in-cell" technique. The latter method is found to produce comparable error and to be much faster.
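
    The random-vortex idea summarized above (mutual induction by direct summation, plus a Gaussian random walk whose variance mimics viscosity) can be sketched as follows. The discretization of a unit circular vortex and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def step_vortices(z, gamma, nu, dt, rng):
    """One step of the random-vortex method: mutually induced velocities by
    direct O(N^2) Biot-Savart summation, plus a Gaussian random walk of
    variance 2*nu*dt per component to mimic viscous diffusion."""
    dz = z[:, None] - z[None, :]          # pairwise separations z_k - z_j
    np.fill_diagonal(dz, 1.0)             # placeholder, zeroed out below
    contrib = gamma[None, :] / dz
    np.fill_diagonal(contrib, 0.0)        # a vortex does not advect itself
    # u - i*v = (1 / 2*pi*i) * sum_j Gamma_j / (z - z_j); conjugate gives u + i*v
    w = np.conj(contrib.sum(axis=1) / (2j * np.pi))
    walk = rng.normal(0.0, np.sqrt(2.0 * nu * dt), size=(z.size, 2))
    return z + w * dt + walk[:, 0] + 1j * walk[:, 1]

# Decay of a circular vortex: N vortices sampled uniformly over the unit disk,
# total circulation 1 (hypothetical parameter values).
rng = np.random.default_rng(0)
N = 200
radius = np.sqrt(rng.uniform(0.0, 1.0, N))   # sqrt for uniform area density
angle = rng.uniform(0.0, 2.0 * np.pi, N)
z = radius * np.exp(1j * angle)
gamma = np.full(N, 1.0 / N)
for _ in range(100):
    z = step_vortices(z, gamma, nu=1e-3, dt=0.01, rng=rng)
```

    The paper's point about accuracy can be probed with exactly this setup: compare the evolving vortex-position statistics against the exact decaying Lamb-Oseen profile while varying N.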

  7. Thermocapillary Bubble Migration: Thermal Boundary Layers for Large Marangoni Numbers

    Science.gov (United States)

    Balasubramaniam, R.; Subramanian, R. S.

    1996-01-01

    The migration of an isolated gas bubble in an immiscible liquid possessing a temperature gradient is analyzed in the absence of gravity. The driving force for the bubble motion is the shear stress at the interface which is a consequence of the temperature dependence of the surface tension. The analysis is performed under conditions for which the Marangoni number is large, i.e. energy is transferred predominantly by convection. Velocity fields in the limit of both small and large Reynolds numbers are used. The thermal problem is treated by standard boundary layer theory. The outer temperature field is obtained in the vicinity of the bubble. A similarity solution is obtained for the inner temperature field. For both small and large Reynolds numbers, the asymptotic values of the scaled migration velocity of the bubble in the limit of large Marangoni numbers are calculated. The results show that the migration velocity has the same scaling for both low and large Reynolds numbers, but with a different coefficient. Higher order thermal boundary layers are analyzed for the large Reynolds number flow field and the higher order corrections to the migration velocity are obtained. Results are also presented for the momentum boundary layer and the thermal wake behind the bubble, for large Reynolds number conditions.

  8. Improving CASINO performance for models with large number of electrons

    International Nuclear Information System (INIS)

    Anton, L.; Alfe, D.; Hood, R.Q.; Tanqueray, D.

    2009-01-01

    Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers which are straightforward to use on parallel computers. Nevertheless some computations have reached the limit of the memory resources for models with more than 1000 electrons because of the need to store a large amount of electronic orbitals related data. Besides that, for systems with large number of electrons, it is interesting to study if the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) distributed orbital data done with MPI or Unix inter-process communication tools, (2) second level parallelism for configuration computation

  9. Comparative efficacy of tulathromycin versus a combination of florfenicol-oxytetracycline in the treatment of undifferentiated respiratory disease in large numbers of sheep

    Directory of Open Access Journals (Sweden)

    Mohsen Champour

    2015-09-01

    Full Text Available The objective of this study was to compare the efficacy of tulathromycin (TUL) with a combination of florfenicol (FFC) and long-acting oxytetracycline (LAOTC) in the treatment of naturally occurring undifferentiated respiratory diseases in large numbers of sheep. In this study, seven natural outbreaks of sheep pneumonia in Garmsar, Iran were considered. From these outbreaks, 400 sheep exhibiting signs of respiratory disease were selected and randomly divided into two equal groups. The first group was treated with a single injection of TUL (dosed at 2.5 mg/kg body weight), and the second group was treated with concurrent injections of FFC (dosed at 40 mg/kg bwt) and LAOTC (dosed at 20 mg/kg bwt). In the first group, 186 (93%) sheep were found to be cured 5 days after the injection, and 14 (7%) sheep needed further treatment, of which 6 (3%) were cured and 8 (4%) died. In the second group, 172 (86%) sheep were cured after the injections, but 28 (14%) sheep needed further treatment, of which 10 (5%) were cured and 18 (9%) died. This study revealed that TUL was more efficacious than the combined treatment with FFC and LAOTC. This field trial is the first report describing the successful treatment of undifferentiated respiratory diseases in large numbers of sheep. Thus, TUL can be used for the treatment of undifferentiated respiratory diseases in sheep. [J Adv Vet Anim Res 2015; 2(3): 279-284]

  10. A large number of stepping motor network construction by PLC

    Science.gov (United States)

    Mei, Lin; Zhang, Kai; Hongqiang, Guo

    2017-11-01

    In flexible automatic lines, the equipment is complex and the control modes are flexible; realizing orderly information interaction among a large number of stepper and servo motors is therefore a difficult control problem. Based on an existing flexible production line, this paper makes a comparative study of its network strategy. After this research, an Ethernet + PROFIBUS communication configuration based on PROFINET IO and PROFIBUS is proposed, which effectively improves the data-interaction efficiency of the equipment and stabilizes the exchanged data.

  11. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena

  12. Large numbers hypothesis. II - Electromagnetic radiation

    Science.gov (United States)

    Adams, P. J.

    1983-01-01

    This paper develops the theory of electromagnetic radiation in the units covariant formalism incorporating Dirac's large numbers hypothesis (LNH). A direct field-to-particle technique is used to obtain the photon propagation equation, which explicitly involves the photon replication rate. This replication rate is fixed uniquely by requiring that the form of a free-photon distribution function be preserved, as required by the 2.7 K cosmic radiation. One finds that with this particular photon replication rate the units covariant formalism developed in Paper I actually predicts that the ratio of photon number to proton number in the universe varies as t^(1/4), precisely in accord with LNH. The cosmological red-shift law is also derived and is shown to differ considerably from the standard form νR = const.

  13. Lepton number violation in theories with a large number of standard model copies

    International Nuclear Information System (INIS)

    Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich

    2011-01-01

    We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model, since, due to the low quantum gravity scale, black holes may induce TeV-scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_(B-L). Then, due to the existence of a specific compensation mechanism between the contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed, with rates far beyond experimental reach.

  14. Fatal crashes involving large numbers of vehicles and weather.

    Science.gov (United States)

    Wang, Ying; Liang, Liming; Evans, Leonard

    2017-12-01

    Adverse weather has been recognized as a significant threat to traffic safety. However, relationships between fatal crashes involving large numbers of vehicles and weather are rarely studied, owing to the low occurrence of such crashes. By using all 1,513,792 fatal crashes in the Fatality Analysis Reporting System (FARS) data, 1975-2014, we successfully described these relationships. We found: (a) fatal crashes involving more than 35 vehicles are most likely to occur in snow or fog; (b) fatal crashes in rain are three times as likely to involve 10 or more vehicles as fatal crashes in good weather; (c) fatal crashes in snow [or fog] are 24 times [35 times] as likely to involve 10 or more vehicles as fatal crashes in good weather. If the example had used 20 vehicles, the risk ratios would be 6 for rain, 158 for snow, and 171 for fog. To reduce the risk of involvement in fatal crashes with large numbers of vehicles, drivers should slow down more than they currently do under adverse weather conditions. Driver deaths per fatal crash increase slowly with increasing numbers of involved vehicles when it is snowing or raining, but more steeply when clear or foggy. We conclude that in order to reduce the risk of involvement in crashes with large numbers of vehicles, drivers must reduce speed in fog, and in snow or rain reduce speed by even more than they already do. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
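
    The risk ratios quoted above are relative risks: the proportion of fatal crashes involving 10 or more vehicles under a given weather condition, divided by the same proportion in good weather. A minimal sketch of that arithmetic (the counts below are invented for illustration, not FARS values):

```python
def risk_ratio(n_large_adverse, n_total_adverse, n_large_good, n_total_good):
    """Relative risk that a fatal crash involves 10+ vehicles in adverse
    weather versus good weather, from raw crash counts."""
    p_adverse = n_large_adverse / n_total_adverse
    p_good = n_large_good / n_total_good
    return p_adverse / p_good

# hypothetical counts: 30 of 100,000 adverse-weather fatal crashes involve
# 10+ vehicles, versus 100 of 1,000,000 in good weather
print(risk_ratio(30, 100_000, 100, 1_000_000))   # relative risk of about 3
```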

  15. On a strong law of large numbers for monotone measures

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mohammadpour, A.; Mesiar, Radko; Ouyang, Y.

    2013-01-01

    Vol. 83, No. 4 (2013), pp. 1213-1218 ISSN 0167-7152 R&D Projects: GA ČR GAP402/11/0378 Institutional support: RVO:67985556 Keywords: capacity * Choquet integral * strong law of large numbers Subject RIV: BA - General Mathematics Impact factor: 0.531, year: 2013 http://library.utia.cas.cz/separaty/2013/E/mesiar-on a strong law of large numbers for monotone measures.pdf

  16. The large numbers hypothesis and a relativistic theory of gravitation

    International Nuclear Information System (INIS)

    Lau, Y.K.; Prokhovnik, S.J.

    1986-01-01

    A way to reconcile Dirac's large numbers hypothesis and Einstein's theory of gravitation was recently suggested by Lau (1985). It is characterized by the conjecture of a time-dependent cosmological term and gravitational term in Einstein's field equations. Motivated by this conjecture and the large numbers hypothesis, we formulate here a scalar-tensor theory in terms of an action principle. The cosmological term is required to be spatially dependent as well as time dependent in general. The theory developed is applied to a cosmological model compatible with the large numbers hypothesis. The time-dependent form of the cosmological term and the scalar potential are then deduced. A possible explanation of the smallness of the cosmological term is also given, and the possible significance of the scalar field is speculated upon.

  17. Rotating thermal convection at very large Rayleigh numbers

    Science.gov (United States)

    Weiss, Stephan; van Gils, Dennis; Ahlers, Guenter; Bodenschatz, Eberhard

    2016-11-01

    The large-scale thermal convection systems in geo- and astrophysics are usually influenced by Coriolis forces caused by the rotation of their celestial bodies. To better understand the influence of rotation on the convective flow field and the heat transport under these conditions, we study Rayleigh-Bénard convection using pressurized sulfur hexafluoride (SF6) at up to 19 bars in a cylinder of diameter D = 1.12 m and height L = 2.24 m. The gas is heated from below and cooled from above, and the convection cell sits on a rotating table inside a large pressure vessel (the "Uboot of Göttingen"). With this setup Rayleigh numbers of up to Ra = 10^15 can be reached, while Ekman numbers as low as Ek = 10^-8 are possible. The Prandtl number in these experiments is kept constant at Pr = 0.8. We report on heat flux measurements (expressed by the Nusselt number Nu) as well as measurements from more than 150 temperature probes inside the flow. We thank the Deutsche Forschungsgemeinschaft (DFG) for financial support through SFB963: "Astrophysical Flow Instabilities and Turbulence". The work of GA was supported in part by the US National Science Foundation through Grant DMR11-58514.

  18. Forecasting distribution of numbers of large fires

    Science.gov (United States)

    Eidenshink, Jeffery C.; Preisler, Haiganoush K.; Howard, Stephen; Burgan, Robert E.

    2014-01-01

    Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however, they indicate neither the chance that a large fire will occur nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the Monitoring Trends in Burn Severity project, together with satellite and surface observations of fuel conditions in the form of the Fire Potential Index, to estimate two aspects of fire danger: 1) the probability that a 1-acre ignition will result in a 100+ acre fire, and 2) the probabilities of having at least 1, 2, 3, or 4 large fires within a Predictive Services Area in the forthcoming week. These statistical processes are the main thrust of the paper and are used to produce two daily national forecasts that are available from the U.S. Geological Survey, Earth Resources Observation and Science Center and via the Wildland Fire Assessment System. A validation study of our forecasts for the 2013 fire season demonstrated good agreement between observed and forecasted values.
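
    Forecasts of "at least 1, 2, 3, or 4 large fires" can be illustrated by converting an expected count of large fires into tail probabilities. The sketch below uses a Poisson count model purely for illustration; the cited work fits its own statistical model to MTBS occurrence data, and the rate value here is hypothetical.

```python
from math import exp, factorial

def prob_at_least(k, lam):
    """P(N >= k) for N ~ Poisson(lam): one simple way to turn an expected
    number of large fires (lam) into 'at least k' probabilities."""
    p_less = sum(exp(-lam) * lam**i / factorial(i) for i in range(k))
    return 1.0 - p_less

# e.g. an expected 1.5 large fires in a Predictive Services Area next week
print([round(prob_at_least(k, 1.5), 3) for k in (1, 2, 3, 4)])
# → [0.777, 0.442, 0.191, 0.066]
```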

  19. Teaching Multiplication of Large Positive Whole Numbers Using ...

    African Journals Online (AJOL)

    This study investigated the teaching of multiplication of large positive whole numbers using the grating method and the effect of this method on students' performance in junior secondary schools. The study was conducted in Obio Akpor Local Government Area of Rivers state. It was quasi- experimental. Two research ...

  20. Lovelock inflation and the number of large dimensions

    CERN Document Server

    Ferrer, Francesc

    2007-01-01

    We discuss an inflationary scenario based on Lovelock terms. These higher order curvature terms can lead to inflation when there are more than three spatial dimensions. Inflation will end if the extra dimensions are stabilised, so that at most three dimensions are free to expand. This relates graceful exit to the number of large dimensions.

  1. Wall modeled large eddy simulations of complex high Reynolds number flows with synthetic inlet turbulence

    International Nuclear Information System (INIS)

    Patil, Sunil; Tafti, Danesh

    2012-01-01

    Highlights: large eddy simulation; wall layer modeling; synthetic inlet turbulence; swirl flows. Abstract: Large eddy simulations of complex high Reynolds number flows are carried out with the near-wall region being modeled with a zonal two-layer model. A novel formulation for solving the turbulent boundary layer equation for the effective tangential velocity in a generalized co-ordinate system is presented and applied in the near-wall zonal treatment. This formulation reduces the computational time in the inner layer significantly compared to the conventional two-layer formulations present in the literature and is most suitable for complex geometries involving body-fitted structured and unstructured meshes. The cost effectiveness and accuracy of the proposed wall model, used with the synthetic eddy method (SEM) to generate inlet turbulence, are investigated in turbulent channel flow, flow over a backward-facing step, and confined swirling flows at moderately high Reynolds numbers. Predictions are compared with available DNS, experimental LDV data, as well as wall-resolved LES. In all cases, there is at least an order of magnitude reduction in computational cost with no significant loss in prediction accuracy.

  2. [Dual process in large number estimation under uncertainty].

    Science.gov (United States)

    Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento

    2016-08-01

    According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such deliberative process by System 2 on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.

  3. On Independence for Capacities with Law of Large Numbers

    OpenAIRE

    Huang, Weihuan

    2017-01-01

    This paper introduces new notions of Fubini independence and Exponential independence of random variables under capacities to fit Ellsberg's model, and establishes the relationships between Fubini independence, Exponential independence, Maccheroni and Marinacci's independence, and Peng's independence. As an application, we give a weak law of large numbers for capacities under Exponential independence.

  4. Automatic trajectory measurement of large numbers of crowded objects

    Science.gov (United States)

    Li, Hui; Liu, Ye; Chen, Yan Qiu

    2013-06-01

    Complex motion patterns of natural systems, such as fish schools, bird flocks, and cell groups, have attracted great attention from scientists for years. Trajectory measurement of individuals is vital for quantitative and high-throughput study of their collective behaviors. However, such data are rare mainly due to the challenges of detection and tracking of large numbers of objects with similar visual features and frequent occlusions. We present an automatic and effective framework to measure trajectories of large numbers of crowded oval-shaped objects, such as fish and cells. We first use a novel dual ellipse locator to detect the coarse position of each individual and then propose a variance minimization active contour method to obtain the optimal segmentation results. For tracking, cost matrix of assignment between consecutive frames is trainable via a random forest classifier with many spatial, texture, and shape features. The optimal trajectories are found for the whole image sequence by solving two linear assignment problems. We evaluate the proposed method on many challenging data sets.
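
    The tracking step described above, building a cost matrix between detections in consecutive frames and solving a linear assignment problem, can be sketched with a toy example. The detections and the plain squared-distance cost are invented for illustration; the cited framework learns its costs with a random forest classifier.

```python
from itertools import permutations

# Toy frame-to-frame linking: match detections between consecutive frames
# by minimizing the total squared distance. Brute force over permutations
# suffices for this tiny illustration.
prev = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]   # detections in frame t
curr = [(5.2, 4.9), (0.3, 0.1), (9.8, 0.4)]    # detections in frame t+1

def sqdist(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def link(prev, curr):
    """Return the permutation p minimizing sum of sqdist(prev[i], curr[p[i]])."""
    return min(permutations(range(len(curr))),
               key=lambda p: sum(sqdist(prev[i], curr[p[i]])
                                 for i in range(len(prev))))

best = link(prev, curr)
print(list(enumerate(best)))  # → [(0, 1), (1, 0), (2, 2)]
```

    For realistic numbers of objects one would solve the same problem with the Hungarian algorithm (e.g. scipy.optimize.linear_sum_assignment), which is polynomial rather than factorial in the number of detections.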

  5. A full picture of large lepton number asymmetries of the Universe

    Energy Technology Data Exchange (ETDEWEB)

    Barenboim, Gabriela [Departament de Física Teòrica and IFIC, Universitat de València-CSIC, C/ Dr. Moliner, 50, Burjassot, E-46100 Spain (Spain); Park, Wan-Il, E-mail: Gabriela.Barenboim@uv.es, E-mail: wipark@jbnu.ac.kr [Department of Science Education (Physics), Chonbuk National University, 567 Baekje-daero, Jeonju, 561-756 (Korea, Republic of)

    2017-04-01

    A large lepton number asymmetry of O(0.1-1) in the present Universe might not only be allowed but even necessary for consistency among cosmological data. We show that, if a sizeable lepton number asymmetry were produced before the electroweak phase transition, the requirement of not producing too much baryon number asymmetry through sphaleron processes forces the high-scale lepton number asymmetry to be larger than about 0.3. Therefore a mild entropy release causing O(10-100) suppression of the pre-existing particle density should take place when the background temperature of the Universe is around T = O(10^-2 to 10^2) GeV for a large but experimentally consistent asymmetry to be present today. We also show that such a mild entropy production can be obtained by the late-time decays of the saxion, constraining the parameters of the Peccei-Quinn sector, such as the mass and the vacuum expectation value of the saxion field, to be m_φ ≳ O(10) TeV and φ_0 ≳ O(10^14) GeV, respectively.

  6. Evidence for Knowledge of the Syntax of Large Numbers in Preschoolers

    Science.gov (United States)

    Barrouillet, Pierre; Thevenot, Catherine; Fayol, Michel

    2010-01-01

    The aim of this study was to provide evidence for knowledge of the syntax governing the verbal form of large numbers in preschoolers long before they are able to count up to these numbers. We reasoned that if such knowledge exists, it should facilitate the maintenance in short-term memory of lists of lexical primitives that constitute a number…

  7. A comparison of three approaches to compute the effective Reynolds number of the implicit large-eddy simulations

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Ye [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Thornber, Ben [The Univ. of Sydney, Sydney, NSW (Australia)

    2016-04-12

    Here, implicit large-eddy simulation (ILES) has been utilized as an effective approach for calculating many complex flows at high Reynolds numbers. Richtmyer-Meshkov instability (RMI) induced flow can be viewed as homogeneous decaying turbulence (HDT) after the passage of the shock. In this article, a critical evaluation of three methods for estimating the effective Reynolds number and the effective kinematic viscosity is undertaken utilizing high-resolution ILES data. Effective Reynolds numbers based on the vorticity and dissipation rate, or on the integral and inner-viscous length scales, are found to be the most self-consistent when compared to the expected phenomenology and wind tunnel experiments.
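
    One way to read off an effective viscosity from vorticity and dissipation statistics, as mentioned above, uses the homogeneous-turbulence identity eps = nu * <omega^2>. The sketch below assumes that identity and hypothetical values in code units; it is an illustration of the estimator class, not the cited paper's exact procedure.

```python
def effective_viscosity(dissipation, enstrophy):
    """nu_eff = eps / <omega^2>: for homogeneous turbulence the dissipation
    rate satisfies eps = nu * <omega^2>, so the ratio of measured dissipation
    to mean enstrophy recovers an effective viscosity from ILES statistics."""
    return dissipation / enstrophy

def effective_reynolds(u_rms, length, dissipation, enstrophy):
    """Re_eff = u' * L / nu_eff, with L an (assumed) integral length scale."""
    return u_rms * length / effective_viscosity(dissipation, enstrophy)

# hypothetical values in code units
re_eff = effective_reynolds(u_rms=1.0, length=0.5, dissipation=0.1, enstrophy=2500.0)
```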

  8. Comparing Data Sets: Implicit Summaries of the Statistical Properties of Number Sets

    Science.gov (United States)

    Morris, Bradley J.; Masnick, Amy M.

    2015-01-01

    Comparing datasets, that is, sets of numbers in context, is a critical skill in higher order cognition. Although much is known about how people compare single numbers, little is known about how number sets are represented and compared. We investigated how subjects compared datasets that varied in their statistical properties, including ratio of…

  9. Similarities between 2D and 3D convection for large Prandtl number

    Indian Academy of Sciences (India)

    2016-06-18

    RBC), we perform a comparative study of the spectra and fluxes of energy and entropy, and the scaling of large-scale quantities for large and infinite Prandtl numbers in two (2D) and three (3D) dimensions. We observe close ...

  10. The three-large-primes variant of the number field sieve

    NARCIS (Netherlands)

    S.H. Cavallar

    2002-01-01

    The Number Field Sieve (NFS) is the asymptotically fastest known factoring algorithm for large integers. This method was proposed by John Pollard in 1988. Since then several variants have been implemented with the objective of improving the siever, which is the most time-consuming part of

  11. Recreating Raven's: software for systematically generating large numbers of Raven-like matrix problems with normed properties.

    Science.gov (United States)

    Matzen, Laura E; Benz, Zachary O; Dixon, Kevin R; Posey, Jamie; Kroger, James K; Speed, Ann E

    2010-05-01

    Raven's Progressive Matrices is a widely used test for assessing intelligence and reasoning ability (Raven, Court, & Raven, 1998). Since the test is nonverbal, it can be applied to many different populations and has been used all over the world (Court & Raven, 1995). However, relatively few matrices are in the sets developed by Raven, which limits their use in experiments requiring large numbers of stimuli. For the present study, we analyzed the types of relations that appear in Raven's original Standard Progressive Matrices (SPMs) and created a software tool that can combine the same types of relations according to parameters chosen by the experimenter, to produce very large numbers of matrix problems with specific properties. We then conducted a norming study in which the matrices we generated were compared with the actual SPMs. This study showed that the generated matrices both covered and expanded on the range of problem difficulties provided by the SPMs.
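
    The generator idea, combining a small repertoire of relation types according to experimenter-chosen parameters to mass-produce matrix problems, can be sketched as below. The attribute names and relations are invented for illustration; the actual tool of Matzen et al. models the relation types identified in the SPMs and norms the output against them.

```python
import random

# Two toy relation types: a value either increments across a row or stays fixed.
RELATIONS = {
    "increment": lambda base, col: base + col,
    "constant":  lambda base, col: base,
}

def generate_matrix(attr_relations, rng):
    """Build a 3x3 Raven-like matrix; each cell maps attribute -> value.
    attr_relations maps an attribute name (e.g. 'count') to a relation type,
    so the experimenter controls which relations appear in the problem."""
    grid = []
    for _ in range(3):
        base = {attr: rng.randint(1, 3) for attr in attr_relations}
        row = [{attr: RELATIONS[rel](base[attr], col)
                for attr, rel in attr_relations.items()}
               for col in range(3)]
        grid.append(row)
    return grid

rng = random.Random(42)
m = generate_matrix({"count": "increment", "size": "constant"}, rng)
answer = m[2][2]   # the bottom-right cell a test-taker would have to infer
```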

  12. Secret Sharing Schemes with a large number of players from Toric Varieties

    DEFF Research Database (Denmark)

    Hansen, Johan P.

    A general theory for constructing linear secret sharing schemes over a finite field $\\Fq$ from toric varieties is introduced. The number of players can be as large as $(q-1)^r-1$ for $r\geq 1$. We present general methods for obtaining the reconstruction and privacy thresholds as well as conditions for multiplication on the associated secret sharing schemes. In particular we apply the method on certain toric surfaces. The main results are ideal linear secret sharing schemes where the number of players can be as large as $(q-1)^2-1$. We determine bounds for the reconstruction and privacy thresholds
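
    For readers unfamiliar with linear secret sharing, the classical one-dimensional instance is Shamir's threshold scheme over a prime field, where the number of players can be as large as q - 1; the toric-variety construction above generalizes this, reaching up to (q-1)^2 - 1 players on surfaces. A minimal sketch of the classical scheme (not the toric construction; the prime and parameters are arbitrary):

```python
import random

P = 7919  # an arbitrary prime; secrets and shares live in GF(P)

def make_shares(secret, k, n, rng=random):
    """Shamir's (k, n) threshold scheme: hide the secret as the constant
    term of a random degree-(k-1) polynomial, hand out evaluations."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]   # up to P - 1 players

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any k shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P   # den^-1 mod P
    return total

shares = make_shares(1234, k=3, n=5)
print(reconstruct(shares[:3]))  # → 1234
```

    Any k = 3 of the n = 5 shares reconstruct the secret, while fewer reveal nothing; the "linear" in linear secret sharing refers to exactly this structure of shares as linear functionals of the secret and randomness.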

  13. Quasi-isodynamic configuration with large number of periods

    International Nuclear Information System (INIS)

    Shafranov, V.D.; Isaev, M.Yu.; Mikhailov, M.I.; Subbotin, A.A.; Cooper, W.A.; Kalyuzhnyj, V.N.; Kasilov, S.V.; Nemov, V.V.; Kernbichler, W.; Nuehrenberg, C.; Nuehrenberg, J.; Zille, R.

    2005-01-01

    It has been previously reported that quasi-isodynamic (qi) stellarators with a poloidal direction of the contours of B on the magnetic surface can exhibit very good fast-particle collisionless confinement. In addition, approaching the quasi-isodynamicity condition leads to diminished neoclassical transport and a small bootstrap current. Calculations of local-mode stability show a tendency toward an increasing beta limit with an increasing number of periods. The study of quasi-helically symmetric systems has demonstrated that with increasing aspect ratio (and number of periods) the optimized configuration approaches its straight symmetric counterpart, for which the optimal parameters and highest beta values were found by optimization of the boundary magnetic surface cross-section. The qi system considered here, with zero net toroidal current, does not have a symmetric analogue in the limit of large aspect ratio and finite rotational transform. Thus, it is not clear whether some invariant structure of the configuration period exists in the limit of negligible toroidal effect, or what its best possible parameters are. In the present paper the results of an optimization of a configuration with N = 12 periods are presented. Properties such as fast-particle confinement, effective ripple, the structural factor of the bootstrap current and MHD stability are considered. It is shown that the MHD stability limit here is larger than in the configurations with fewer periods considered earlier. Nevertheless, the toroidal effect in this configuration is still significant, so that a simple increase of the number of periods with proportional growth of the aspect ratio does not conserve the favourable neoclassical transport and ideal local-mode stability properties. (author)

  14. Comparing spatial regression to random forests for large ...

    Science.gov (United States)

    Environmental data may be “large” due to number of records, number of covariates, or both. Random forests has a reputation for good predictive performance when using many covariates, whereas spatial regression, when using reduced rank methods, has a reputation for good predictive performance when using many records. In this study, we compare these two techniques using a data set containing the macroinvertebrate multimetric index (MMI) at 1859 stream sites with over 200 landscape covariates. Our primary goal is predicting MMI at over 1.1 million perennial stream reaches across the USA. For spatial regression modeling, we develop two new methods to accommodate large data: (1) a procedure that estimates optimal Box-Cox transformations to linearize covariate relationships; and (2) a computationally efficient covariate selection routine that takes into account spatial autocorrelation. We show that our new methods lead to cross-validated performance similar to random forests, but that there is an advantage for spatial regression when quantifying the uncertainty of the predictions. Simulations are used to clarify advantages for each method. This research investigates different approaches for modeling and mapping national stream condition. We use MMI data from the EPA's National Rivers and Streams Assessment and predictors from StreamCat (Hill et al., 2015). Previous studies have focused on modeling the MMI condition classes (i.e., good, fair, and po
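
The abstract does not spell out the authors' Box-Cox procedure; a minimal sketch of the textbook approach, choosing the Box-Cox parameter by maximizing the profile log-likelihood over a grid, is shown below (the data and the grid are assumptions, not from the study):

```python
import math
import random
import statistics

def boxcox(x, lam):
    """Box-Cox transform; lam = 0 is the log transform."""
    if abs(lam) < 1e-12:
        return [math.log(v) for v in x]
    return [(v ** lam - 1.0) / lam for v in x]

def boxcox_loglik(x, lam):
    """Profile log-likelihood of the Box-Cox parameter (normality assumed)."""
    y = boxcox(x, lam)
    return (-0.5 * len(x) * math.log(statistics.pvariance(y))
            + (lam - 1.0) * sum(math.log(v) for v in x))

def best_lambda(x):
    grid = [i / 10.0 for i in range(-20, 21)]  # search lambda in [-2, 2]
    return max(grid, key=lambda lam: boxcox_loglik(x, lam))

random.seed(42)
x = [random.lognormvariate(0.0, 1.0) for _ in range(500)]  # skewed covariate
lam = best_lambda(x)  # for lognormal data the optimum lies near 0 (log transform)
```

In practice one would apply such a transform to each skewed covariate before the spatial regression step, which is the linearization role the abstract describes.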

  15. Fluid Mechanics of Aquatic Locomotion at Large Reynolds Numbers

    OpenAIRE

    Govardhan, RN; Arakeri, JH

    2011-01-01

    Abstract | There exist a huge range of fish species besides other aquatic organisms like squids and salps that locomote in water at large Reynolds numbers, a regime of flow where inertial forces dominate viscous forces. In the present review, we discuss the fluid mechanics governing the locomotion of such organisms. Most fishes propel themselves by periodic undulatory motions of the body and tail, and the typical classification of their swimming modes is based on the fraction of their body...

  16. Combining large number of weak biomarkers based on AUC.

    Science.gov (United States)

    Yan, Li; Tian, Lili; Liu, Song

    2015-12-20

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.
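
As a hedged illustration of the setting (synthetic markers, not the MammaPrint data, and simple averaging rather than the paper's pairwise method), the rank-based Mann-Whitney estimate of the AUC shows how a combination of weak markers can outperform each marker alone:

```python
import random

def auc(scores, labels):
    """Rank-based (Mann-Whitney) estimate of the area under the ROC curve."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

random.seed(7)
n = 1000
labels = [i % 2 for i in range(n)]
# three weak markers: small shift (0.5) for cases, unit Gaussian noise
markers = [[0.5 * y + random.gauss(0.0, 1.0) for y in labels] for _ in range(3)]
combined = [sum(m[i] for m in markers) / 3.0 for i in range(n)]

aucs = [auc(m, labels) for m in markers]   # each marker alone is weak
auc_comb = auc(combined, labels)           # the average performs better
```

The interesting regime in the paper is when the number of such weak markers grows toward the number of observations, where simple strategies like this one begin to break down.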

  17. The Application Law of Large Numbers That Predicts The Amount of Actual Loss in Insurance of Life

    Science.gov (United States)

    Tinungki, Georgina Maria

    2018-03-01

    The law of large numbers is a statistical concept stating that the average number of events or risks observed in a sample or population can be used to predict outcomes; the larger the population considered, the more accurate the prediction. In insurance, the law of large numbers is used to predict the risk of loss or the claims of participants so that premiums can be calculated appropriately. For example, if on average one of every 100 insurance participants files an accident claim, then the premiums collected from 100 participants should be able to fund the sum assured for at least one accident claim. The more insurance participants are included in the calculation, the more precise the prediction of claims and the calculation of the premium. Life insurance, as a tool for spreading risk, can only work if a life insurance company is able to bear the same risk in large numbers; here the law of large numbers applies. The law of large numbers states that as the amount of exposure to losses increases, the predicted loss will come closer to the actual loss. Its use therefore allows losses to be predicted more accurately.
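
The convergence described above is easy to simulate. In this minimal sketch the claim probability and sum assured are assumed illustrative values, not figures from the paper:

```python
import random

random.seed(1)
p_claim = 0.01          # assumed: one claim per 100 participants on average
sum_assured = 100_000   # assumed payout per claim (hypothetical currency units)

for n in (100, 10_000, 1_000_000):
    claims = sum(1 for _ in range(n) if random.random() < p_claim)
    observed_rate = claims / n
    # the fair pure premium per participant approaches p_claim * sum_assured
    print(n, observed_rate, round(observed_rate * sum_assured, 2))
```

As n grows, the observed claim rate settles near p_claim, so the pure premium per participant stabilizes near p_claim × sum_assured, which is exactly the pricing argument the abstract makes.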

  18. Very Large Data Volumes Analysis of Collaborative Systems with Finite Number of States

    Science.gov (United States)

    Ivan, Ion; Ciurea, Cristian; Pavel, Sorin

    2010-01-01

    The collaborative system with finite number of states is defined. A very large database is structured. Operations on large databases are identified. Repetitive procedures for collaborative systems operations are derived. The efficiency of such procedures is analyzed. (Contains 6 tables, 5 footnotes and 3 figures.)

  19. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.

  20. Optimal number of coarse-grained sites in different components of large biomolecular complexes.

    Science.gov (United States)

    Sinitskiy, Anton V; Saunders, Marissa G; Voth, Gregory A

    2012-07-26

    The computational study of large biomolecular complexes (molecular machines, cytoskeletal filaments, etc.) is a formidable challenge facing computational biophysics and biology. To achieve biologically relevant length and time scales, coarse-grained (CG) models of such complexes usually must be built and employed. One of the important early stages in this approach is to determine an optimal number of CG sites in different constituents of a complex. This work presents a systematic approach to this problem. First, a universal scaling law is derived and numerically corroborated for the intensity of the intrasite (intradomain) thermal fluctuations as a function of the number of CG sites. Second, this result is used for derivation of the criterion for the optimal number of CG sites in different parts of a large multibiomolecule complex. In the zeroth-order approximation, this approach validates the empirical rule of taking one CG site per fixed number of atoms or residues in each biomolecule, previously widely used for smaller systems (e.g., individual biomolecules). The first-order corrections to this rule are derived and numerically checked by the case studies of the Escherichia coli ribosome and Arp2/3 actin filament junction. In different ribosomal proteins, the optimal number of amino acids per CG site is shown to differ by a factor of 3.5, and an even wider spread may exist in other large biomolecular complexes. Therefore, the method proposed in this paper is valuable for the optimal construction of CG models of such complexes.

  1. Characterization of General TCP Traffic under a Large Number of Flows Regime

    National Research Council Canada - National Science Library

    Tinnakornsrisuphap, Peerapol; La, Richard J; Makowski, Armand M

    2002-01-01

    .... Accurate traffic modeling of a large number of short-lived TCP flows is extremely difficult due to the interaction between session, transport, and network layers, and the explosion of the size...

  2. Modified large number theory with constant G

    International Nuclear Information System (INIS)

    Recami, E.

    1983-01-01

    The inspiring ''numerology'' uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the ''gravitational world'' (cosmos) with the ''strong world'' (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the ''Large Number Theory,'' cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R-bar/r-bar of the cosmos typical length R-bar to the hadron typical length r-bar is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the ''cyclical big-bang'' hypothesis, then R-bar and r-bar can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavsic

  3. Comparative analysis of the number of sheep in FYR and some European countries

    Directory of Open Access Journals (Sweden)

    Arsić Slavica

    2015-01-01

    Sheep farming in Serbia shows, year after year, a declining trend in the number of sheep as well as in the production of milk and meat. The main objective of this paper is the analysis of the number of sheep in Serbia and the surrounding countries (FYR). Comparing the total number of sheep in 2011 with the state in the former Yugoslavia shows that Serbia has 66% fewer sheep than in 1967 (the base year). According to the last census from 2012, the number of sheep in Serbia increased by 18.4% compared to the previous year (2011). The other former Yugoslav republics also show a decrease in the total number of sheep compared to 1967: Bosnia and Herzegovina by 76.5%, Montenegro by 64.3%, Croatia by 41.3% and Macedonia by 63.5%, with the exception of Slovenia, which shows an increase of 83,000 head. The paper also gives an overview of the number of sheep for selected European countries and parts of the world, for comparison with the situation in the FYR.

  4. Breakdown of the law of large numbers in Josephson junction series arrays

    International Nuclear Information System (INIS)

    Dominguez, D.; Cerdeira, H.A.

    1994-01-01

    We study underdamped Josephson junction series arrays that are globally coupled through a resistive shunting load and driven by an rf bias current. We find that they can be an experimental realization of many phenomena currently studied in globally coupled logistic maps. We find coherent, ordered, partially ordered and turbulent phases in the IV characteristics of the array. The ordered phase corresponds to giant Shapiro steps. In the turbulent phase there is a saturation of the broad band noise for a large number of junctions. This corresponds to a breakdown of the law of large numbers as seen in the globally coupled maps. Coexisting with this, we find an emergence of novel pseudo-steps in the IV characteristics. This effect can be experimentally distinguished from the Shapiro steps, which do not have broad band noise emission. (author). 21 refs, 5 figs

  5. A Characterization of Hypergraphs with Large Domination Number

    Directory of Open Access Journals (Sweden)

    Henning Michael A.

    2016-05-01

    Let H = (V, E) be a hypergraph with vertex set V and edge set E. A dominating set in H is a subset of vertices D ⊆ V such that for every vertex v ∈ V \ D there exists an edge e ∈ E for which v ∈ e and e ∩ D ≠ ∅. The domination number γ(H) is the minimum cardinality of a dominating set in H. It is known [Cs. Bujtás, M.A. Henning and Zs. Tuza, Transversals and domination in uniform hypergraphs, European J. Combin. 33 (2012) 62-71] that for k ≥ 5, if H is a hypergraph of order n and size m with all edges of size at least k and with no isolated vertex, then γ(H) ≤ (n + ⌊(k − 3)/2⌋m)/⌊3(k − 1)/2⌋. In this paper, we apply a recent result of the authors on hypergraphs with large transversal number [M.A. Henning and C. Löwenstein, A characterization of hypergraphs that achieve equality in the Chvátal-McDiarmid Theorem, Discrete Math. 323 (2014) 69-75] to characterize the hypergraphs achieving equality in this bound.
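
The bound can be checked mechanically on a toy example. The sketch below builds a hypothetical 5-uniform hypergraph (n = 10, m = 2, k = 5), brute-forces γ(H), and compares it with (n + ⌊(k − 3)/2⌋m)/⌊3(k − 1)/2⌋; this particular hypergraph in fact achieves equality:

```python
from itertools import combinations

def is_dominating(D, vertices, edges):
    """Every vertex outside D must share some edge with a vertex of D."""
    D = set(D)
    return all(v in D or any(v in e and e & D for e in edges)
               for v in vertices)

def domination_number(vertices, edges):
    """Brute-force gamma(H): smallest dominating set size."""
    for size in range(1, len(vertices) + 1):
        for D in combinations(vertices, size):
            if is_dominating(D, vertices, edges):
                return size

k = 5
vertices = list(range(10))                      # n = 10
edges = [set(range(5)), set(range(5, 10))]      # two disjoint 5-element edges
n, m = len(vertices), len(edges)

gamma = domination_number(vertices, edges)      # one vertex per edge suffices
bound = (n + ((k - 3) // 2) * m) / ((3 * (k - 1)) // 2)  # (10 + 2) / 6 = 2.0
```

Here a single vertex cannot dominate the second edge, so γ(H) = 2, matching the bound exactly.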

  6. Analysis of a large number of clinical studies for breast cancer radiotherapy: estimation of radiobiological parameters for treatment planning

    International Nuclear Information System (INIS)

    Guerrero, M; Li, X Allen

    2003-01-01

    Numerous studies of early-stage breast cancer treated with breast conserving surgery (BCS) and radiotherapy (RT) have been published in recent years. Both external beam radiotherapy (EBRT) and/or brachytherapy (BT) with different fractionation schemes are currently used. The present RT practice is largely based on empirical experience and it lacks a reliable modelling tool to compare different RT modalities or to design new treatment strategies. The purpose of this work is to derive a plausible set of radiobiological parameters that can be used for RT treatment planning. The derivation is based on existing clinical data and is consistent with the analysis of a large number of published clinical studies on early-stage breast cancer. A large number of published clinical studies on the treatment of early breast cancer with BCS plus RT (including whole breast EBRT with or without a boost to the tumour bed, whole breast EBRT alone, brachytherapy alone) and RT alone are compiled and analysed. The linear quadratic (LQ) model is used in the analysis. Three of these clinical studies are selected to derive a plausible set of LQ parameters. The potential doubling time T_pot is set a priori in the derivation according to in vitro measurements from the literature. The impact of considering lower or higher T_pot is investigated. The effects of inhomogeneous dose distributions are considered using clinically representative dose volume histograms. The derived LQ parameters are used to compare a large number of clinical studies using different regimes (e.g., RT modality and/or different fractionation schemes with different prescribed dose) in order to validate their applicability. The values of the equivalent uniform dose (EUD) and biologically effective dose (BED) are used as a common metric to compare the biological effectiveness of each treatment regime. We have obtained a plausible set of radiobiological parameters for breast cancer. This set of parameters is consistent with in vitro

  7. Atomic Number Dependence of Hadron Production at Large Transverse Momentum in 300 GeV Proton--Nucleus Collisions

    Science.gov (United States)

    Cronin, J. W.; Frisch, H. J.; Shochet, M. J.; Boymond, J. P.; Mermod, R.; Piroue, P. A.; Sumner, R. L.

    1974-07-15

    In an experiment at the Fermi National Accelerator Laboratory we have compared the production of large transverse momentum hadrons from targets of W, Ti, and Be bombarded by 300 GeV protons. The hadron yields were measured at 90 degrees in the proton-nucleon c.m. system with a magnetic spectrometer equipped with 2 Cerenkov counters and a hadron calorimeter. The production cross-sections have a dependence on the atomic number A that grows with P_⊥, eventually leveling off proportional to A^1.1.
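
The quoted exponent comes from fitting a power law σ ∝ A^α to yields from targets of different atomic number. With two targets the fit reduces to a ratio of logarithms, sketched here with made-up cross sections placed exactly on A^1.1 (the real measured yields, of course, scatter around the fit):

```python
import math

# Hypothetical cross sections constructed to lie exactly on sigma = c * A^1.1;
# only the mass numbers of the Be and W targets are taken from the abstract.
A_Be, A_W = 9, 184
c = 2.5                        # arbitrary normalization (hypothetical)
sigma_Be = c * A_Be ** 1.1
sigma_W = c * A_W ** 1.1

# two-target power-law fit: alpha = ln(sigma ratio) / ln(A ratio)
alpha = math.log(sigma_W / sigma_Be) / math.log(A_W / A_Be)
```

With three targets (W, Ti, Be), as in the experiment, the exponent would instead come from a least-squares fit of ln σ against ln A.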

  8. New approaches to phylogenetic tree search and their application to large numbers of protein alignments.

    Science.gov (United States)

    Whelan, Simon

    2007-10-01

    Phylogenetic tree estimation plays a critical role in a wide variety of molecular studies, including molecular systematics, phylogenetics, and comparative genomics. Finding the optimal tree relating a set of sequences using score-based (optimality criterion) methods, such as maximum likelihood and maximum parsimony, may require all possible trees to be considered, which is not feasible even for modest numbers of sequences. In practice, trees are estimated using heuristics that represent a trade-off between topological accuracy and speed. I present a series of novel algorithms suitable for score-based phylogenetic tree reconstruction that demonstrably improve the accuracy of tree estimates while maintaining high computational speeds. The heuristics function by allowing the efficient exploration of large numbers of trees through novel hill-climbing and resampling strategies. These heuristics, and other computational approximations, are implemented for maximum likelihood estimation of trees in the program Leaphy, and its performance is compared to other popular phylogenetic programs. Trees are estimated from 4059 different protein alignments using a selection of phylogenetic programs and the likelihoods of the tree estimates are compared. Trees estimated using Leaphy are found to have equal to or better likelihoods than trees estimated using other phylogenetic programs in 4004 (98.6%) families and provide a unique best tree that no other program found in 1102 (27.1%) families. The improvement is particularly marked for larger families (80 to 100 sequences), where Leaphy finds a unique best tree in 81.7% of families.
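
The combinatorial explosion motivating such heuristics is easy to quantify: the number of distinct unrooted binary tree topologies on n taxa is the double factorial (2n − 5)!!, a standard result not specific to Leaphy:

```python
def num_unrooted_trees(n):
    """Number of unrooted binary tree topologies on n >= 3 leaves: (2n - 5)!!"""
    count = 1
    for k in range(3, 2 * n - 4, 2):  # odd factors 3, 5, ..., 2n - 5
        count *= k
    return count

# 4 taxa admit 3 topologies; 10 taxa already admit 2,027,025; at the
# 80-100 sequences mentioned above, exhaustive search is hopeless,
# hence hill-climbing and resampling heuristics.
```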

  9. Loss of locality in gravitational correlators with a large number of insertions

    Science.gov (United States)

    Ghosh, Sudip; Raju, Suvrat

    2017-09-01

    We review lessons from the AdS/CFT correspondence that indicate that the emergence of locality in quantum gravity is contingent upon considering observables with a small number of insertions. Correlation functions, where the number of insertions scales with a power of the central charge of the CFT, are sensitive to nonlocal effects in the bulk theory, which arise from a combination of the effects of the bulk Gauss law and a breakdown of perturbation theory. To examine whether a similar effect occurs in flat space, we consider the scattering of massless particles in the bosonic string and the superstring in the limit where the number of external particles, n, becomes very large. We use estimates of the volume of the Weil-Petersson moduli space of punctured Riemann surfaces to argue that string amplitudes grow factorially in this limit. We verify this factorial behavior through an extensive numerical analysis of string amplitudes at large n. Our numerical calculations rely on the observation that, in the large n limit, the string scattering amplitude localizes on the Gross-Mende saddle points, even though individual particle energies are small. This factorial growth implies the breakdown of string perturbation theory for n ∼ (M_pl/E)^(d−2) in d dimensions, where E is the typical individual particle energy. We explore the implications of this breakdown for the black hole information paradox. We show that the loss of locality suggested by this breakdown is precisely sufficient to resolve the cloning and strong subadditivity paradoxes.

  10. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number.

    Science.gov (United States)

    Klewicki, J C; Chini, G P; Gibson, J F

    2017-03-13

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier-Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  11. The large number hypothesis and Einstein's theory of gravitation

    International Nuclear Information System (INIS)

    Yun-Kau Lau

    1985-01-01

    In an attempt to reconcile the large number hypothesis (LNH) with Einstein's theory of gravitation, a tentative generalization of Einstein's field equations with time-dependent cosmological and gravitational constants is proposed. A cosmological model consistent with the LNH is deduced. The coupling formula of the cosmological constant with matter is found, and as a consequence, the time-dependent formulae of the cosmological constant and the mean matter density of the Universe at the present epoch are then found. Einstein's theory of gravitation, whether with a zero or nonzero cosmological constant, becomes a limiting case of the new generalized field equations after the early epoch

  12. Comparative Analysis of Different Protocols to Manage Large Scale Networks

    OpenAIRE

    Anil Rao Pimplapure; Dr Jayant Dubey; Prashant Sen

    2013-01-01

    In recent years, large-scale networks have increased in number, complexity and size. The best examples of large-scale networks are the Internet and, more recently, the data centers of cloud environments. Managing them involves several tasks, such as traffic monitoring, security and performance optimization, which place a heavy burden on the network administrator. This research studies and compares different management protocols, i.e. conventional protocols such as the Simple Network Management Protocol and newer Gossip-bas...

  13. Radioimmunoassay of h-TSH - methodological suggestions for dealing with medium to large numbers of samples

    International Nuclear Information System (INIS)

    Mahlstedt, J.

    1977-01-01

    The article deals with practical aspects of establishing a TSH-RIA for patients, with particular regard to predetermined quality criteria. Methodological suggestions are made for medium to large numbers of samples with the target of reducing monotonous precision working steps by means of simple aids. The quality criteria required are well met, while the test procedure is well adapted to the rhythm of work and may be carried out without loss of precision even with large numbers of samples. (orig.) [de]

  14. Hierarchies in Quantum Gravity: Large Numbers, Small Numbers, and Axions

    Science.gov (United States)

    Stout, John Eldon

    Our knowledge of the physical world is mediated by relatively simple, effective descriptions of complex processes. By their very nature, these effective theories obscure any phenomena outside their finite range of validity, discarding information crucial to understanding the full, quantum gravitational theory. However, we may gain enormous insight into the full theory by understanding how effective theories with extreme characteristics--for example, those which realize large-field inflation or have disparate hierarchies of scales--can be naturally realized in consistent theories of quantum gravity. The work in this dissertation focuses on understanding the quantum gravitational constraints on these "extreme" theories in well-controlled corners of string theory. Axion monodromy provides one mechanism for realizing large-field inflation in quantum gravity. These models spontaneously break an axion's discrete shift symmetry and, assuming that the corrections induced by this breaking remain small throughout the excursion, create a long, quasi-flat direction in field space. This weakly-broken shift symmetry has been used to construct a dynamical solution to the Higgs hierarchy problem, dubbed the "relaxion." We study this relaxion mechanism and show that--without major modifications--it can not be naturally embedded within string theory. In particular, we find corrections to the relaxion potential--due to the ten-dimensional backreaction of monodromy charge--that conflict with naive notions of technical naturalness and render the mechanism ineffective. The super-Planckian field displacements necessary for large-field inflation may also be realized via the collective motion of many aligned axions. However, it is not clear that string theory provides the structures necessary for this to occur. 
We search for these structures by explicitly constructing the leading order potential for C4 axions and computing the maximum possible field displacement in all compactifications of

  15. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number

    Science.gov (United States)

    Klewicki, J. C.; Chini, G. P.; Gibson, J. F.

    2017-01-01

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier–Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167585

  16. Law of Large Numbers: the Theory, Applications and Technology-based Education.

    Science.gov (United States)

    Dinov, Ivo D; Christou, Nicolas; Gould, Robert

    2009-03-01

    Modern approaches for technology-based blended education utilize a variety of recently developed novel pedagogical, computational and network resources. Such attempts employ technology to deliver integrated, dynamically-linked, interactive-content and heterogeneous learning environments, which may improve student comprehension and information retention. In this paper, we describe one such innovative effort of using technological tools to expose students in probability and statistics courses to the theory, practice and usability of the Law of Large Numbers (LLN). We base our approach on integrating pedagogical instruments with the computational libraries developed by the Statistics Online Computational Resource (www.SOCR.ucla.edu). To achieve this merger we designed a new interactive Java applet and a corresponding demonstration activity that illustrate the concept and the applications of the LLN. The LLN applet and activity have common goals - to provide graphical representation of the LLN principle, build lasting student intuition and present the common misconceptions about the law of large numbers. Both the SOCR LLN applet and activity are freely available online to the community to test, validate and extend (Applet: http://socr.ucla.edu/htmls/exp/Coin_Toss_LLN_Experiment.html, and Activity: http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_LLN).

  17. Breakdown of the law of large numbers in Josephson junction series arrays

    International Nuclear Information System (INIS)

    Dominguez, D.; Cerdeira, H.A.

    1995-01-01

    We study underdamped Josephson junction series arrays that are globally coupled through a resistive shunting load and driven by an rf bias current. We find that they can be an experimental realization of many phenomena currently studied in globally coupled logistic maps. We find coherent, ordered, partially ordered and turbulent phases in the IV characteristics of the array. The ordered phase corresponds to giant Shapiro steps. In the turbulent phase there is a saturation of the broad-band noise for a large number of junctions. This corresponds to a breakdown of the law of large numbers as seen in globally coupled maps. Coexisting with this, we find an emergence of novel pseudo-steps in the IV characteristics. This effect can be experimentally distinguished from the true Shapiro steps, which do not have broad-band noise emission. (author). 21 refs, 5 figs

  18. The lore of large numbers: some historical background to the anthropic principle

    International Nuclear Information System (INIS)

    Barrow, J.D.

    1981-01-01

    A description is given of how the study of numerological coincidences in physics and cosmology led first to the Large Numbers Hypothesis of Dirac and then to the suggestion of the Anthropic Principle in a variety of forms. The early history of 'coincidences' is discussed together with the work of Weyl, Eddington and Dirac. (author)

  19. Conformal window in QCD for large numbers of colors and flavors

    International Nuclear Information System (INIS)

    Zhitnitsky, Ariel R.

    2014-01-01

    We conjecture that the phase transitions in QCD at a large number of colors N≫1 are triggered by a drastic change in the instanton density. As a result, all physical observables also experience some sharp modification in their θ behavior. This conjecture is motivated by the holographic model of QCD, where the confinement–deconfinement phase transition indeed happens precisely at the temperature T=T_c where the θ-dependence of the vacuum energy experiences a sudden change in behavior: from N^2 cos(θ/N) at T<T_c to cos θ exp(−N) at T>T_c. This conjecture is also supported by recent lattice studies. We employ this conjecture to study a possible phase transition as a function of κ≡N_f/N from the confinement to the conformal phase in the Veneziano limit N_f∼N, when the numbers of flavors and colors are large but the ratio κ is finite. Technically, we consider an operator which gets its expectation value solely from non-perturbative instanton effects. When κ exceeds some critical value, κ>κ_c, the integral over instanton size is dominated by small-size instantons, making the instanton computations reliable, with the expected exp(−N) behavior. However, when κ<κ_c, the integral over instanton size is dominated by large-size instantons, and the instanton expansion breaks down. This regime with κ<κ_c corresponds to the confinement phase. We also compute the variation of the critical κ_c(T,μ) when the temperature and chemical potential T,μ≪Λ_QCD slightly vary. We also discuss the scaling (x_i−x_j)^(−γ_det) in the conformal phase

  20. A modified large number theory with constant G

    Science.gov (United States)

    Recami, Erasmo

    1983-03-01

    The inspiring “numerology” uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the “gravitational world” (cosmos) with the “strong world” (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the “Large Number Theory,” cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R̄/r̄ of the cosmos typical length R̄ to the hadron typical length r̄ is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the “cyclical big-bang” hypothesis, then R̄ and r̄ can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavšič.

  1. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases.

    Science.gov (United States)

    Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M

    2006-04-21

    Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation of large numbers of genetic and environmental predictors to disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis; neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN); and several non-parametric methods, which include the set association approach, the combinatorial partitioning method (CPM), the restricted partitioning method (RPM), the multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. Therefore, they are less useful than the non-parametric methods for approaching association studies with large numbers of predictor variables. GPNN on the other hand may be a useful approach to select and model important predictors, but its performance in selecting the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and the random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset with an important contribution to disease. The combinatorial methods give more insight into combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses we conclude that to approach genetic association
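As a rough illustration of the single-marker screening step that often precedes the multi-marker methods discussed above, the sketch below ranks simulated SNPs by a simple per-marker association score and retains a small subset. The genotype frequencies, score and cutoff are all hypothetical choices, not taken from any of the cited methods.

```python
import numpy as np

rng = np.random.default_rng(42)
n_cases, n_controls, n_snps = 500, 500, 1000

# Simulated genotypes (0/1/2 minor-allele counts); SNP 0 is the one
# causal marker: cases carry its minor allele at an inflated frequency.
controls = rng.binomial(2, 0.3, size=(n_controls, n_snps))
cases = rng.binomial(2, 0.3, size=(n_cases, n_snps))
cases[:, 0] = rng.binomial(2, 0.55, size=n_cases)

# Per-SNP score: squared case-control difference in mean allele count,
# scaled by a pooled variance (a chi-square-flavoured single-marker test).
diff = cases.mean(axis=0) - controls.mean(axis=0)
pooled = np.concatenate([cases, controls]).var(axis=0) + 1e-12
scores = diff**2 / pooled

# Keep the top 1% of markers for downstream joint modelling.
top = np.argsort(scores)[::-1][:10]
print(0 in top)  # the causal SNP should survive the filter
```

With this effect size the causal marker's score exceeds the null markers' by an order of magnitude, so the reduction to a subset keeps the signal while discarding most of the noise, which is the point of such filtering.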

  2. Vicious random walkers in the limit of a large number of walkers

    International Nuclear Information System (INIS)

    Forrester, P.J.

    1989-01-01

    The vicious random walker problem on a line is studied in the limit of a large number of walkers. The multidimensional integral representing the probability that the p walkers will survive a time t (denoted P_t^(p)) is shown to be analogous to the partition function of a particular one-component Coulomb gas. By assuming the existence of the thermodynamic limit for the Coulomb gas, one can deduce asymptotic formulas for P_t^(p) in the large-p, large-t limit. A straightforward analysis gives rigorous asymptotic formulas for the probability that after a time t the walkers are in their initial configuration (this event is termed a reunion). Consequently, asymptotic formulas for the conditional probability of a reunion, given that all walkers survive, are derived. Also, an asymptotic formula for the conditional probability density that any walker will arrive at a particular point in time t, given that all p walkers survive, is calculated in the limit t >> p
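For small numbers of walkers, the survival probability discussed above can be probed with a naive Monte Carlo estimate. The sketch below is an illustration only; the initial spacing, step rule and trial count are arbitrary choices, and the decay with t is the qualitative behaviour the asymptotic formulas quantify.

```python
import random

def survival_probability(p, t, trials=10_000, seed=1):
    """Monte Carlo estimate that p vicious walkers (independent ±1 steps,
    started at 0, 2, 4, ...) keep their strict ordering for t steps."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        pos = [2 * i for i in range(p)]
        ok = True
        for _ in range(t):
            pos = [x + rng.choice((-1, 1)) for x in pos]
            # Walkers annihilate when any pair meets (ordering violated).
            if any(pos[i] >= pos[i + 1] for i in range(p - 1)):
                ok = False
                break
        survived += ok
    return survived / trials

# Survival decays as t grows, consistent with a power-law picture.
print(survival_probability(2, 10), survival_probability(2, 50))
```

Because all walkers start on even sites and step ±1 together, each gap changes by −2, 0 or +2, so a collision always appears as an exact meeting and the `>=` check catches every ordering violation.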

  3. A High-Throughput Computational Framework for Identifying Significant Copy Number Aberrations from Array Comparative Genomic Hybridisation Data

    Directory of Open Access Journals (Sweden)

    Ian Roberts

    2012-01-01

    Full Text Available Reliable identification of copy number aberrations (CNA) from comparative genomic hybridization data would be improved by the availability of a generalised method for processing large datasets. To this end, we developed swatCGH, a data analysis framework and region detection heuristic for computational grids. swatCGH analyses sequentially displaced (sliding) windows of neighbouring probes and applies adaptive thresholds of varying stringency to identify the 10% of each chromosome that contains the most frequently occurring CNAs. We used the method to analyse a published dataset, comparing data preprocessed using four different DNA segmentation algorithms, and two methods for prioritising the detected CNAs. The consolidated list of the most commonly detected aberrations confirmed the value of swatCGH as a simplified high-throughput method for identifying biologically significant CNA regions of interest.
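A minimal version of the sliding-window idea, with a single fixed threshold in place of swatCGH's adaptive ones, might look as follows. The window size, step, threshold and synthetic data are illustrative assumptions, not swatCGH's actual parameters.

```python
import numpy as np

def flag_cna_windows(log2_ratios, window=10, step=5, threshold=0.3):
    """Slide a fixed-size window of neighbouring probes along a chromosome
    and flag windows whose mean |log2 ratio| exceeds the threshold."""
    flagged = []
    for start in range(0, len(log2_ratios) - window + 1, step):
        mean = float(np.mean(log2_ratios[start:start + window]))
        if abs(mean) > threshold:
            flagged.append((start, start + window, mean))
    return flagged

rng = np.random.default_rng(0)
probes = rng.normal(0.0, 0.1, 500)   # diploid baseline noise
probes[200:260] += 0.8               # simulated copy-number gain
hits = flag_cna_windows(probes)
print(hits[0][0], hits[-1][1])       # flagged windows bracket the gain
```

Averaging over a window of neighbouring probes shrinks the noise by √window, which is why a per-window threshold separates a sustained aberration from single-probe outliers.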

  4. System for high-voltage control of detectors with a large number of photomultipliers

    International Nuclear Information System (INIS)

    Donskov, S.V.; Kachanov, V.A.; Mikhajlov, Yu.V.

    1985-01-01

    A simple and inexpensive on-line system for high-voltage control, designed for detectors with a large number of photomultipliers, has been developed and manufactured. It was developed for the GAMC-type hodoscopic electromagnetic calorimeters, comprising up to 4 thousand photomultipliers. High-voltage variation is performed by a high-speed potentiometer which is rotated by a microengine. Block diagrams of the computer-control electronics are presented. The high-voltage control system has been used for five years in the IHEP and CERN accelerator experiments. The operating experience has shown that it is quite simple and convenient in operation. With about 6 thousand controlled channels in the two experiments, no potentiometer or microengine failures were observed

  5. Numerical and analytical approaches to an advection-diffusion problem at small Reynolds number and large Péclet number

    Science.gov (United States)

    Fuller, Nathaniel J.; Licata, Nicholas A.

    2018-05-01

    Obtaining a detailed understanding of the physical interactions between a cell and its environment often requires information about the flow of fluid surrounding the cell. Cells must be able to effectively absorb and discard material in order to survive. Strategies for nutrient acquisition and toxin disposal, which have been evolutionarily selected for their efficacy, should reflect knowledge of the physics underlying this mass transport problem. Motivated by these considerations, in this paper we discuss the results from an undergraduate research project on the advection-diffusion equation at small Reynolds number and large Péclet number. In particular, we consider the problem of mass transport for a Stokesian spherical swimmer. We approach the problem numerically and analytically through a rescaling of the concentration boundary layer. A biophysically motivated first-passage problem for the absorption of material by the swimming cell demonstrates quantitative agreement between the numerical and analytical approaches. We conclude by discussing the connections between our results and the design of smart toxin disposal systems.
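The flow regime of the problem above, small Reynolds number combined with large Péclet number, can be checked with a back-of-the-envelope estimate. The parameter values below are assumed textbook figures for a large micro-swimmer (on the scale of a colonial alga), not numbers taken from the paper.

```python
# Illustrative order-of-magnitude estimate; all values are assumptions.
U = 100e-6    # swimming speed, m/s
L = 100e-6    # swimmer radius, m
nu = 1e-6     # kinematic viscosity of water, m^2/s
D = 1e-9      # diffusivity of a small-molecule nutrient in water, m^2/s

Re = U * L / nu   # inertia vs. viscosity: Stokes regime when Re << 1
Pe = U * L / D    # advection vs. diffusion: boundary layer when Pe >> 1
print(f"Re = {Re:.1e}, Pe = {Pe:.1e}")
```

These values land in the regime of the paper: Re of order 10⁻² (viscous, Stokesian flow) alongside Pe of order 10 (advection-dominated transport), which is exactly the combination that produces a thin concentration boundary layer around the swimmer.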

  6. Tracking of large-scale structures in turbulent channel with direct numerical simulation of low Prandtl number passive scalar

    Science.gov (United States)

    Tiselj, Iztok

    2014-12-01

    Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ˜2300 wall units long and ˜750 wall units wide; the size was taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flows, was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows that the velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) agree within 1%-2%. Similar agreement is observed for Pr = 1 temperature fields and can be observed also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations of the standard and large computational domains at Pr = 0.01, show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of the large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play a visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of the large scales. These large thermal structures represent some kind of an echo of the large-scale velocity structures: the highest temperature-velocity correlations are not observed between the instantaneous temperatures and

  7. Do neutron stars disprove multiplicative creation in Dirac's large number hypothesis

    International Nuclear Information System (INIS)

    Qadir, A.; Mufti, A.A.

    1980-07-01

    Dirac's cosmology, based on his large number hypothesis, took the gravitational coupling to be decreasing with time and matter to be created as the square of time. Since the effects predicted by Dirac's theory are very small, it is difficult to find a ''clean'' test for it. Here we show that the observed radiation from pulsars is inconsistent with Dirac's multiplicative creation model, in which the matter created is proportional to the density of matter already present. Of course, this discussion makes no comment on the ''additive creation'' model, or on the revised version of Dirac's theory. (author)

  8. Automated flow cytometric analysis across large numbers of samples and cell types.

    Science.gov (United States)

    Chen, Xiaoyi; Hasan, Milena; Libri, Valentina; Urrutia, Alejandra; Beitz, Benoît; Rouilly, Vincent; Duffy, Darragh; Patin, Étienne; Chalmond, Bernard; Rogge, Lars; Quintana-Murci, Lluis; Albert, Matthew L; Schwikowski, Benno

    2015-04-01

    Multi-parametric flow cytometry is a key technology for characterization of immune cell phenotypes. However, robust high-dimensional post-analytic strategies for automated data analysis in large numbers of donors are still lacking. Here, we report a computational pipeline, called FlowGM, which minimizes operator input, is insensitive to compensation settings, and can be adapted to different analytic panels. A Gaussian Mixture Model (GMM)-based approach was utilized for initial clustering, with the number of clusters determined using Bayesian Information Criterion. Meta-clustering in a reference donor permitted automated identification of 24 cell types across four panels. Cluster labels were integrated into FCS files, thus permitting comparisons to manual gating. Cell numbers and coefficient of variation (CV) were similar between FlowGM and conventional gating for lymphocyte populations, but notably FlowGM provided improved discrimination of "hard-to-gate" monocyte and dendritic cell (DC) subsets. FlowGM thus provides rapid high-dimensional analysis of cell phenotypes and is amenable to cohort studies. Copyright © 2015. Published by Elsevier Inc.
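The core of the clustering step described above, a Gaussian mixture fit with the number of components selected by an information criterion, can be sketched with a standard library. The synthetic two-dimensional "intensity" data and the use of scikit-learn below are illustrative assumptions, not the FlowGM implementation (which uses the Bayesian Information Criterion, as here).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic 2-D "cytometry" intensities: three well-separated populations.
rng = np.random.default_rng(0)
data = np.vstack([
    rng.normal(loc, 0.3, size=(300, 2))
    for loc in ([0, 0], [4, 0], [0, 4])
])

# Fit GMMs with increasing numbers of clusters; keep the BIC minimiser.
bics = []
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(data)
    bics.append(gmm.bic(data))
best_k = int(np.argmin(bics)) + 1
print(best_k)  # should recover the three simulated populations
```

BIC penalizes extra components, so for well-separated populations the minimum lands at the true count; in real cytometry data the populations overlap and the meta-clustering and labeling steps of the pipeline do the remaining work.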

  9. A methodology for the synthesis of heat exchanger networks having large numbers of uncertain parameters

    International Nuclear Information System (INIS)

    Novak Pintarič, Zorka; Kravanja, Zdravko

    2015-01-01

    This paper presents a robust computational methodology for the synthesis and design of flexible HENs (Heat Exchanger Networks) having large numbers of uncertain parameters. This methodology combines several heuristic methods which progressively lead to a flexible HEN design at a specific level of confidence. During the first step, a HEN topology is generated under nominal conditions, followed by determining those points critical for flexibility. A significantly reduced multi-scenario model for flexible HEN design is formulated at the nominal point with the flexibility constraints at the critical points. The optimal design obtained is tested by stochastic Monte Carlo optimization and by the flexibility index, through solving one-scenario problems within a loop. The presented methodology is novel in the enormous reduction it achieves in the number of scenarios in HEN design problems, and hence in computational effort. Despite several simplifications, the capability of designing flexible HENs with large numbers of uncertain parameters, which are typical throughout industry, is not compromised. An illustrative case study is presented for flexible HEN synthesis comprising 42 uncertain parameters. - Highlights: • Methodology for HEN (Heat Exchanger Network) design under uncertainty is presented. • The main benefit is solving HENs having large numbers of uncertain parameters. • A drastically reduced multi-scenario HEN design problem is formulated through several steps. • Flexibility of HEN is guaranteed at a specific level of confidence.
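The stochastic Monte Carlo test of a candidate design can be illustrated on a single hypothetical exchanger: sample the uncertain parameters, and report the fraction of scenarios in which the design remains feasible. Every parameter value and distribution below is invented for illustration; the paper's method applies this idea to whole networks.

```python
import numpy as np

def confidence_level(area, n_scenarios=50_000, seed=0):
    """Fraction of sampled uncertain scenarios in which a hypothetical
    exchanger of the given area meets the required duty:
    feasible when Q_required <= U * area * dT."""
    rng = np.random.default_rng(seed)
    U = 0.5                                       # kW/(m^2 K), assumed
    dT = rng.normal(40.0, 5.0, n_scenarios)       # uncertain driving force, K
    Q_req = rng.normal(800.0, 60.0, n_scenarios)  # uncertain duty, kW
    feasible = Q_req <= U * area * dT
    return float(feasible.mean())

# The nominal design, A = Q/(U*dT) = 800/(0.5*40) = 40 m^2, is feasible in
# only about half the scenarios; oversizing raises the confidence level.
print(confidence_level(40.0), confidence_level(55.0))
```

This is the sense in which a flexible design is guaranteed "at a specific level of confidence": the design variables are pushed until the feasible fraction reaches the target level.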

  10. The large numbers hypothesis and the Einstein theory of gravitation

    International Nuclear Information System (INIS)

    Dirac, P.A.M.

    1979-01-01

    A study of the relations between large dimensionless numbers leads to the belief that G, expressed in atomic units, varies with the epoch while the Einstein theory requires G to be constant. These two requirements can be reconciled by supposing that the Einstein theory applies with a metric that differs from the atomic metric. The theory can be developed with conservation of mass by supposing that the continual increase in the mass of the observable universe arises from a continual slowing down of the velocity of recession of the galaxies. This leads to a model of the Universe that was first proposed by Einstein and de Sitter (the E.S. model). The observations of the microwave radiation fit in with this model. The static Schwarzschild metric has to be modified to fit in with the E.S. model for large r. The modification is worked out, and also the motion of planets with the new metric. It is found that there is a difference between ephemeris time and atomic time, and also that there should be an inward spiralling of the planets, referred to atomic units, superposed on the motion given by ordinary gravitational theory. These are effects that can be checked by observation, but there is no conclusive evidence up to the present. (author)

  11. Superposition of elliptic functions as solutions for a large number of nonlinear equations

    International Nuclear Information System (INIS)

    Khare, Avinash; Saxena, Avadh

    2014-01-01

    For a large number of nonlinear equations, both discrete and continuum, we demonstrate a kind of linear superposition. We show that whenever a nonlinear equation admits solutions in terms of both Jacobi elliptic functions cn(x, m) and dn(x, m) with modulus m, then it also admits solutions in terms of their sum as well as difference. We have checked this in the case of several nonlinear equations such as the nonlinear Schrödinger equation, MKdV, a mixed KdV-MKdV system, a mixed quadratic-cubic nonlinear Schrödinger equation, the Ablowitz-Ladik equation, the saturable nonlinear Schrödinger equation, λϕ⁴, the discrete MKdV, as well as several coupled field equations. Further, for a large number of nonlinear equations, we show that whenever a nonlinear equation admits a periodic solution in terms of dn²(x, m), it also admits solutions in terms of dn²(x, m) ± √m cn(x, m) dn(x, m), even though cn(x, m) dn(x, m) is not a solution of these nonlinear equations. Finally, we also obtain superposed solutions of various forms for several coupled nonlinear equations
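The Jacobi elliptic functions involved in such superposition arguments are available in scipy (assumed here as `scipy.special.ellipj`), which makes the basic identities they satisfy easy to verify numerically:

```python
import numpy as np
from scipy.special import ellipj

# Evaluate sn, cn, dn on a grid and verify the standard identities that
# underlie manipulations of cn/dn-type solutions.
m = 0.6
u = np.linspace(-5, 5, 201)
sn, cn, dn, _ = ellipj(u, m)

assert np.allclose(sn**2 + cn**2, 1.0)       # sn^2 + cn^2 = 1
assert np.allclose(dn**2 + m * sn**2, 1.0)   # dn^2 + m sn^2 = 1

# In the m -> 1 limit, cn and dn both degenerate to sech, so sums and
# differences of cn and dn solutions collapse onto the solitonic limit.
sn1, cn1, dn1, _ = ellipj(u, 1.0 - 1e-12)
print(np.allclose(cn1, dn1, atol=1e-3))
```

Note that scipy's `ellipj(u, m)` takes the parameter m (not the modulus k = √m), matching the cn(x, m) and dn(x, m) notation used in the abstract.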

  12. Eosinophils may play regionally disparate roles in influencing IgA(+) plasma cell numbers during large and small intestinal inflammation.

    Science.gov (United States)

    Forman, Ruth; Bramhall, Michael; Logunova, Larisa; Svensson-Frej, Marcus; Cruickshank, Sheena M; Else, Kathryn J

    2016-05-31

    Eosinophils are innate immune cells present in the intestine during steady state conditions. An intestinal eosinophilia is a hallmark of many infections and an accumulation of eosinophils is also observed in the intestine during inflammatory disorders. Classically the function of eosinophils has been associated with tissue destruction, due to the release of cytotoxic granule contents. However, recent evidence has demonstrated that the eosinophil plays a more diverse role in the immune system than previously acknowledged, including shaping adaptive immune responses and providing plasma cell survival factors during the steady state. Importantly, it is known that there are regional differences in the underlying immunology of the small and large intestine, but whether there are differences in the context of the intestinal eosinophil in the steady state or inflammation is not known. Our data demonstrates that there are fewer IgA(+) plasma cells in the small intestine of eosinophil-deficient ΔdblGATA-1 mice compared to eosinophil-sufficient wild-type mice, with the difference becoming significant post-infection with Toxoplasma gondii. Remarkably, and in complete contrast, the absence of eosinophils in the inflamed large intestine does not impact on IgA(+) cell numbers during steady state, and is associated with a significant increase in IgA(+) cells post-infection with Trichuris muris compared to wild-type mice. Thus, the intestinal eosinophil appears to be less important in sustaining the IgA(+) cell pool in the large intestine compared to the small intestine, and in fact, our data suggests eosinophils play an inhibitory role. The dichotomy in the influence of the eosinophil over small and large intestinal IgA(+) cells did not depend on differences in plasma cell growth factors, recruitment potential or proliferation within the different regions of the gastrointestinal tract (GIT). We demonstrate for the first time that there are regional differences in the requirement of

  13. Chaotic scattering: the supersymmetry method for large number of channels

    International Nuclear Information System (INIS)

    Lehmann, N.; Saher, D.; Sokolov, V.V.; Sommers, H.J.

    1995-01-01

    We investigate a model of chaotic resonance scattering based on the random matrix approach. The Hermitian part of the effective Hamiltonian of resonance states is taken from the GOE, whereas the amplitudes of coupling to the decay channels are considered to be either random or fixed. A new version of the supersymmetry method is worked out to determine analytically the distribution of poles of the S-matrix in the complex energy plane, as well as the mean value and two-point correlation function of its elements, when the number of channels scales with the number of resonance states. Analytical formulae are compared with numerical simulations. All results obtained coincide in both models provided that the ratio m of the numbers of channels and resonances is small enough, and remain qualitatively similar for larger values of m. The relation between the pole distribution and the fluctuations in scattering is discussed. It is shown in particular that the clouds of poles of the S-matrix in the complex energy plane are separated from the real axis by a finite gap Γ_g, which determines the correlation length in the scattering fluctuations and leads to the exponential asymptotics of the decay law of a complicated intermediate state. ((orig.))

  14. Chaotic scattering: the supersymmetry method for large number of channels

    Energy Technology Data Exchange (ETDEWEB)

    Lehmann, N. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Saher, D. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Sokolov, V.V. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Sommers, H.J. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik)

    1995-01-23

    We investigate a model of chaotic resonance scattering based on the random matrix approach. The Hermitian part of the effective Hamiltonian of resonance states is taken from the GOE, whereas the amplitudes of coupling to the decay channels are considered to be either random or fixed. A new version of the supersymmetry method is worked out to determine analytically the distribution of poles of the S-matrix in the complex energy plane, as well as the mean value and two-point correlation function of its elements, when the number of channels scales with the number of resonance states. Analytical formulae are compared with numerical simulations. All results obtained coincide in both models provided that the ratio m of the numbers of channels and resonances is small enough, and remain qualitatively similar for larger values of m. The relation between the pole distribution and the fluctuations in scattering is discussed. It is shown in particular that the clouds of poles of the S-matrix in the complex energy plane are separated from the real axis by a finite gap Γ_g, which determines the correlation length in the scattering fluctuations and leads to the exponential asymptotics of the decay law of a complicated intermediate state. ((orig.))

  15. The holographic dual of a Riemann problem in a large number of dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, Christopher P.; Spillane, Michael [C.N. Yang Institute for Theoretical Physics, Department of Physics and Astronomy,Stony Brook University, Stony Brook, NY 11794 (United States); Yarom, Amos [Department of Physics, Technion,Haifa 32000 (Israel)

    2016-08-22

    We study properties of a non-equilibrium steady state generated when two heat baths are initially in contact with one another. The dynamics of the system we study are governed by holographic duality in a large number of dimensions. We discuss the “phase diagram” associated with the steady state, the dual, dynamical, black hole description of this problem, and its relation to the fluid/gravity correspondence.

  16. A framework for comparative evaluation of dosimetric methods to triage a large population following a radiological event

    International Nuclear Information System (INIS)

    Flood, Ann Barry; Nicolalde, Roberto J.; Demidenko, Eugene; Williams, Benjamin B.; Shapiro, Alla; Wiley, Albert L.; Swartz, Harold M.

    2011-01-01

    Background: To prepare for a possible major radiation disaster involving large numbers of potentially exposed people, it is important to be able to rapidly and accurately triage people for treatment or not, factoring in the likely conditions and available resources. To date, planners have had to create guidelines for triage based on methods for estimating dose that are clinically available and which use evidence extrapolated from unrelated conditions. Current guidelines consequently focus on measuring clinical symptoms (e.g., time-to-vomiting), which may not be subject to the same verification of standard methods and validation processes required for governmental approval processes of new and modified procedures. Biodosimeters under development have not yet been formally approved for this use. Neither set of methods has been tested in settings involving large-scale populations at risk for exposure. Objective: To propose a framework for comparative evaluation of methods for such triage and to evaluate biodosimetric methods that are currently recommended and new methods as they are developed. Methods: We adapt the NIH model of scientific evaluations and sciences needed for effective translational research to apply to biodosimetry for triaging very large populations following a radiation event. We detail criteria for translating basic science about dosimetry into effective multi-stage triage of large populations and illustrate it by analyzing 3 current guidelines and 3 advanced methods for biodosimetry. Conclusions: This framework for evaluating dosimetry in large populations is a useful technique to compare the strengths and weaknesses of different dosimetry methods. It can help policy-makers and planners not only to compare the methods' strengths and weaknesses for their intended use but also to develop an integrated approach to maximize their effectiveness. It also reveals weaknesses in methods that would benefit from further research and evaluation.

  17. A framework for comparative evaluation of dosimetric methods to triage a large population following a radiological event

    Energy Technology Data Exchange (ETDEWEB)

    Flood, Ann Barry, E-mail: Ann.B.Flood@Dartmouth.Edu [Dartmouth Physically Based Biodosimetry Center for Medical Countermeasures Against Radiation (Dart-Dose CMCR), Dartmouth Medical School, Hanover, NH 03768 (United States); Nicolalde, Roberto J., E-mail: Roberto.J.Nicolalde@Dartmouth.Edu [Dartmouth Physically Based Biodosimetry Center for Medical Countermeasures Against Radiation (Dart-Dose CMCR), Dartmouth Medical School, Hanover, NH 03768 (United States); Demidenko, Eugene, E-mail: Eugene.Demidenko@Dartmouth.Edu [Dartmouth Physically Based Biodosimetry Center for Medical Countermeasures Against Radiation (Dart-Dose CMCR), Dartmouth Medical School, Hanover, NH 03768 (United States); Williams, Benjamin B., E-mail: Benjamin.B.Williams@Dartmouth.Edu [Dartmouth Physically Based Biodosimetry Center for Medical Countermeasures Against Radiation (Dart-Dose CMCR), Dartmouth Medical School, Hanover, NH 03768 (United States); Shapiro, Alla, E-mail: Alla.Shapiro@fda.hhs.gov [Food and Drug Administration (FDA), Rockville, MD (United States); Wiley, Albert L., E-mail: Albert.Wiley@orise.orau.gov [Oak Ridge Institute for Science and Education (ORISE), Oak Ridge, TN (United States); Swartz, Harold M., E-mail: Harold.M.Swartz@Dartmouth.Edu [Dartmouth Physically Based Biodosimetry Center for Medical Countermeasures Against Radiation (Dart-Dose CMCR), Dartmouth Medical School, Hanover, NH 03768 (United States)

    2011-09-15

    Background: To prepare for a possible major radiation disaster involving large numbers of potentially exposed people, it is important to be able to rapidly and accurately triage people for treatment or not, factoring in the likely conditions and available resources. To date, planners have had to create guidelines for triage based on methods for estimating dose that are clinically available and which use evidence extrapolated from unrelated conditions. Current guidelines consequently focus on measuring clinical symptoms (e.g., time-to-vomiting), which may not be subject to the same verification of standard methods and validation processes required for governmental approval processes of new and modified procedures. Biodosimeters under development have not yet been formally approved for this use. Neither set of methods has been tested in settings involving large-scale populations at risk for exposure. Objective: To propose a framework for comparative evaluation of methods for such triage and to evaluate biodosimetric methods that are currently recommended and new methods as they are developed. Methods: We adapt the NIH model of scientific evaluations and sciences needed for effective translational research to apply to biodosimetry for triaging very large populations following a radiation event. We detail criteria for translating basic science about dosimetry into effective multi-stage triage of large populations and illustrate it by analyzing 3 current guidelines and 3 advanced methods for biodosimetry. Conclusions: This framework for evaluating dosimetry in large populations is a useful technique to compare the strengths and weaknesses of different dosimetry methods. It can help policy-makers and planners not only to compare the methods' strengths and weaknesses for their intended use but also to develop an integrated approach to maximize their effectiveness. It also reveals weaknesses in methods that would benefit from further research and evaluation.

  18. ON AN EXPONENTIAL INEQUALITY AND A STRONG LAW OF LARGE NUMBERS FOR MONOTONE MEASURES

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mesiar, Radko

    2014-01-01

    Roč. 50, č. 5 (2014), s. 804-813 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Choquet expectation * a strong law of large numbers * exponential inequality * monotone probability Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/E/mesiar-0438052.pdf

  19. Modelling high Reynolds number wall-turbulence interactions in laboratory experiments using large-scale free-stream turbulence.

    Science.gov (United States)

    Dogan, Eda; Hearst, R Jason; Ganapathisubramani, Bharathram

    2017-03-13

A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to 'simulate' high Reynolds number wall-turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows, as they demonstrate that these can be achieved at typical laboratory scales. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  20. Large Eddy Simulation of an SD7003 Airfoil: Effects of Reynolds number and Subgrid-scale modeling

    DEFF Research Database (Denmark)

    Sarlak Chivaee, Hamid

    2017-01-01

This paper presents results of a series of numerical simulations studying the aerodynamic characteristics of the low Reynolds number Selig-Donovan airfoil, SD7003. The Large Eddy Simulation (LES) technique is used for all computations at chord-based Reynolds numbers of 10,000, 24,000 and 60,000...... the Reynolds number, and the effect is visible even at a relatively low chord-Reynolds number of 60,000. Among the tested models, the dynamic Smagorinsky gives the poorest predictions of the flow, with overprediction of lift and a larger separation on the airfoil's suction side. Among various models, the implicit...

  1. Challenges and opportunities in coding the commons: problems, procedures, and potential solutions in large-N comparative case studies

    Directory of Open Access Journals (Sweden)

    Elicia Ratajczyk

    2016-09-01

Full Text Available On-going efforts to understand the dynamics of coupled social-ecological (or, more broadly, coupled infrastructure) systems and common pool resources have led to the generation of numerous datasets based on a large number of case studies. These data have facilitated the identification of important factors and fundamental principles that increase our understanding of such complex systems. However, the data at our disposal are often not easily comparable, have limited scope and scale, and are based on disparate underlying frameworks, inhibiting synthesis, meta-analysis, and the validation of findings. Research efforts are further hampered when case inclusion criteria, variable definitions, coding schema, and inter-coder reliability testing are not made explicit in the presentation of research and shared among the research community. This paper first outlines challenges experienced by researchers engaged in a large-scale coding project; then highlights valuable lessons learned; and finally discusses opportunities for further research on comparative case study analysis focusing on social-ecological systems and common pool resources.
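Inter-coder reliability testing, flagged above as something that should be made explicit, is commonly quantified with Cohen's kappa. A minimal sketch (the two coders' labels below are invented for illustration, not data from the study):

```python
def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders assigning nominal labels to the same cases."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    labels = set(coder_a) | set(coder_b)
    # Observed agreement: fraction of cases coded identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement given each coder's label frequencies.
    p_e = sum((coder_a.count(lab) / n) * (coder_b.count(lab) / n) for lab in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical presence/absence codes for 8 case studies.
coder_a = [1, 1, 0, 1, 0, 0, 1, 0]
coder_b = [1, 0, 0, 1, 0, 1, 1, 0]
print(cohens_kappa(coder_a, coder_b))  # 0.5
```

Values near 1 indicate strong agreement beyond chance; kappa of 0.5 here reflects the two disagreements out of eight cases.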

  2. Particle creation and Dirac's large number hypothesis; and Reply

    International Nuclear Information System (INIS)

    Canuto, V.; Adams, P.J.; Hsieh, S.H.; Tsiang, E.; Steigman, G.

    1976-01-01

The claim made by Steigman (Nature; 261:479 (1976)), that the creation of matter as postulated by Dirac (Proc. R. Soc.; A338:439 (1974)) is unnecessary, is here shown to be incorrect. It is stated that Steigman's claim that Dirac's Large Number Hypothesis (LNH) does not require particle creation is wrong because he has assumed that which he was seeking to prove, namely that ρ does not include matter creation. Steigman's claim that Dirac's LNH leads to nonsensical results in the very early Universe is superficially correct, but this only supports Dirac's contention that the LNH may not be valid in the very early Universe. In a reply, Steigman points out that in Dirac's original cosmology R ~ t^(1/3), and using this model the results and conclusions of the present authors' paper do apply, but using a variation chosen by Canuto et al (T ~ t) Dirac's LNH cannot apply. Additionally, it is observed that a cosmological theory which only predicts the present epoch is of questionable value. (U.K.)

  3. Strong Laws of Large Numbers for Arrays of Rowwise NA and LNQD Random Variables

    Directory of Open Access Journals (Sweden)

    Jiangfeng Wang

    2011-01-01

    Full Text Available Some strong laws of large numbers and strong convergence properties for arrays of rowwise negatively associated and linearly negative quadrant dependent random variables are obtained. The results obtained not only generalize the result of Hu and Taylor to negatively associated and linearly negative quadrant dependent random variables, but also improve it.

  4. Law of large numbers and central limit theorem for randomly forced PDE's

    CERN Document Server

    Shirikyan, A

    2004-01-01

    We consider a class of dissipative PDE's perturbed by an external random force. Under the condition that the distribution of perturbation is sufficiently non-degenerate, a strong law of large numbers (SLLN) and a central limit theorem (CLT) for solutions are established and the corresponding rates of convergence are estimated. It is also shown that the estimates obtained are close to being optimal. The proofs are based on the property of exponential mixing for the problem in question and some abstract SLLN and CLT for mixing-type Markov processes.

  5. SU-E-T-230: Creating a Large Number of Focused Beams with Variable Patient Head Tilt to Improve Dose Fall-Off for Brain Radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Chiu, J; Ma, L [Department of Radiation Oncology, University of California San Francisco School of Medicine, San Francisco, CA (United States)

    2015-06-15

Purpose: To develop a treatment delivery and planning strategy that increases the number of beams in order to minimize dose to brain tissue surrounding a target while maximizing dose coverage of the target. Methods: We analyzed 14 different treatment plans via Leksell PFX and 4C. For standardization, single-tumor cases were chosen. Original treatment plans were compared with two optimized plans. The number of beams was increased in treatment plans by varying the tilt angle of the patient head, while maintaining the original isocenter, the beam positions in the x-, y- and z-axes, collimator size, and beam blocking. PFX optimized plans increased beam numbers with three pre-set tilt angles (70, 90, and 110 degrees), and 4C optimized plans increased beam numbers with tilt angles varying arbitrarily over the range of 30 to 150 degrees. Optimized treatment plans were compared dosimetrically with the original treatment plans. Results: Comparing total normal tissue isodose volumes between original and optimized plans, the low-level percentage isodose volumes decreased in all plans. Despite the addition of multiple beams, up to a factor of 25, beam-on times for 1 tilt angle versus 3 or more tilt angles were comparable (<1 min.). In 64% (9/14) of the studied cases, the volume percentage decreased by >5%, with the highest value reaching 19%. The addition of more tilt angles correlates with a greater decrease in normal brain irradiated volume. Selectivity and coverage for original and optimized plans remained comparable. Conclusion: Adding a large number of additional focused beams with variable patient head tilt improves dose fall-off for brain radiosurgery. The study demonstrates the technical feasibility of adding beams to decrease the irradiated normal tissue volume.

  6. SU-E-T-230: Creating a Large Number of Focused Beams with Variable Patient Head Tilt to Improve Dose Fall-Off for Brain Radiosurgery

    International Nuclear Information System (INIS)

    Chiu, J; Ma, L

    2015-01-01

Purpose: To develop a treatment delivery and planning strategy that increases the number of beams in order to minimize dose to brain tissue surrounding a target while maximizing dose coverage of the target. Methods: We analyzed 14 different treatment plans via Leksell PFX and 4C. For standardization, single-tumor cases were chosen. Original treatment plans were compared with two optimized plans. The number of beams was increased in treatment plans by varying the tilt angle of the patient head, while maintaining the original isocenter, the beam positions in the x-, y- and z-axes, collimator size, and beam blocking. PFX optimized plans increased beam numbers with three pre-set tilt angles (70, 90, and 110 degrees), and 4C optimized plans increased beam numbers with tilt angles varying arbitrarily over the range of 30 to 150 degrees. Optimized treatment plans were compared dosimetrically with the original treatment plans. Results: Comparing total normal tissue isodose volumes between original and optimized plans, the low-level percentage isodose volumes decreased in all plans. Despite the addition of multiple beams, up to a factor of 25, beam-on times for 1 tilt angle versus 3 or more tilt angles were comparable (<1 min.). In 64% (9/14) of the studied cases, the volume percentage decreased by >5%, with the highest value reaching 19%. The addition of more tilt angles correlates with a greater decrease in normal brain irradiated volume. Selectivity and coverage for original and optimized plans remained comparable. Conclusion: Adding a large number of additional focused beams with variable patient head tilt improves dose fall-off for brain radiosurgery. The study demonstrates the technical feasibility of adding beams to decrease the irradiated normal tissue volume

  7. On the Behavior of ECN/RED Gateways Under a Large Number of TCP Flows: Limit Theorems

    National Research Council Canada - National Science Library

    Tinnakornsrisuphap, Peerapol; Makowski, Armand M

    2005-01-01

    .... As the number of competing flows becomes large, the asymptotic queue behavior at the gateway can be described by a simple recursion and the throughput behavior of individual TCP flows becomes asymptotically independent...

  8. Asymptotic numbers: Pt.1

    International Nuclear Information System (INIS)

    Todorov, T.D.

    1980-01-01

    The set of asymptotic numbers A as a system of generalized numbers including the system of real numbers R, as well as infinitely small (infinitesimals) and infinitely large numbers, is introduced. The detailed algebraic properties of A, which are unusual as compared with the known algebraic structures, are studied. It is proved that the set of asymptotic numbers A cannot be isomorphically embedded as a subspace in any group, ring or field, but some particular subsets of asymptotic numbers are shown to be groups, rings, and fields. The algebraic operation, additive and multiplicative forms, and the algebraic properties are constructed in an appropriate way. It is shown that the asymptotic numbers give rise to a new type of generalized functions quite analogous to the distributions of Schwartz allowing, however, the operation multiplication. A possible application of these functions to quantum theory is discussed

  9. Influence of Extrinsic Information Scaling Coefficient on Double-Iterative Decoding Algorithm for Space-Time Turbo Codes with Large Number of Antennas

    Directory of Open Access Journals (Sweden)

    TRIFINA, L.

    2011-02-01

    Full Text Available This paper analyzes the extrinsic information scaling coefficient influence on double-iterative decoding algorithm for space-time turbo codes with large number of antennas. The max-log-APP algorithm is used, scaling both the extrinsic information in the turbo decoder and the one used at the input of the interference-canceling block. Scaling coefficients of 0.7 or 0.75 lead to a 0.5 dB coding gain compared to the no-scaling case, for one or more iterations to cancel the spatial interferences.

  10. Comparing large covariance matrices under weak conditions on the dependence structure and its application to gene clustering.

    Science.gov (United States)

    Chang, Jinyuan; Zhou, Wen; Zhou, Wen-Xin; Wang, Lan

    2017-03-01

    Comparing large covariance matrices has important applications in modern genomics, where scientists are often interested in understanding whether relationships (e.g., dependencies or co-regulations) among a large number of genes vary between different biological states. We propose a computationally fast procedure for testing the equality of two large covariance matrices when the dimensions of the covariance matrices are much larger than the sample sizes. A distinguishing feature of the new procedure is that it imposes no structural assumptions on the unknown covariance matrices. Hence, the test is robust with respect to various complex dependence structures that frequently arise in genomics. We prove that the proposed procedure is asymptotically valid under weak moment conditions. As an interesting application, we derive a new gene clustering algorithm which shares the same nice property of avoiding restrictive structural assumptions for high-dimensional genomics data. Using an asthma gene expression dataset, we illustrate how the new test helps compare the covariance matrices of the genes across different gene sets/pathways between the disease group and the control group, and how the gene clustering algorithm provides new insights on the way gene clustering patterns differ between the two groups. The proposed methods have been implemented in an R-package HDtest and are available on CRAN. © 2016, The International Biometric Society.
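The paper's test is a max-type comparison of standardized covariance entries. As a rough illustration only, not the authors' procedure, one can compare two sample covariance matrices entrywise and calibrate the maximum discrepancy by permutation (all names and parameter choices below are assumptions for the sketch):

```python
import numpy as np

def max_cov_diff(x, y):
    """Maximum absolute entrywise difference of the two sample covariance matrices."""
    return np.max(np.abs(np.cov(x, rowvar=False) - np.cov(y, rowvar=False)))

def permutation_pvalue(x, y, n_perm=200, seed=0):
    """Permutation p-value for H0: the two groups share one covariance matrix."""
    rng = np.random.default_rng(seed)
    observed = max_cov_diff(x, y)
    pooled = np.vstack([x, y])
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        xs, ys = pooled[idx[:len(x)]], pooled[idx[len(x):]]
        count += max_cov_diff(xs, ys) >= observed
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
x = rng.standard_normal((60, 5))
y = rng.standard_normal((60, 5))  # same covariance, so no rejection expected
print(max_cov_diff(x, x))         # 0.0 by construction
```

The actual procedure in the paper standardizes each entry and derives the null distribution analytically, which is what makes it computationally fast for dimensions far larger than the sample size.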

  11. Large-Scale Network Analysis of Whole-Brain Resting-State Functional Connectivity in Spinal Cord Injury: A Comparative Study.

    Science.gov (United States)

    Kaushal, Mayank; Oni-Orisan, Akinwunmi; Chen, Gang; Li, Wenjun; Leschke, Jack; Ward, Doug; Kalinosky, Benjamin; Budde, Matthew; Schmit, Brian; Li, Shi-Jiang; Muqeet, Vaishnavi; Kurpad, Shekar

    2017-09-01

    Network analysis based on graph theory depicts the brain as a complex network that allows inspection of overall brain connectivity pattern and calculation of quantifiable network metrics. To date, large-scale network analysis has not been applied to resting-state functional networks in complete spinal cord injury (SCI) patients. To characterize modular reorganization of whole brain into constituent nodes and compare network metrics between SCI and control subjects, fifteen subjects with chronic complete cervical SCI and 15 neurologically intact controls were scanned. The data were preprocessed followed by parcellation of the brain into 116 regions of interest (ROI). Correlation analysis was performed between every ROI pair to construct connectivity matrices and ROIs were categorized into distinct modules. Subsequently, local efficiency (LE) and global efficiency (GE) network metrics were calculated at incremental cost thresholds. The application of a modularity algorithm organized the whole-brain resting-state functional network of the SCI and the control subjects into nine and seven modules, respectively. The individual modules differed across groups in terms of the number and the composition of constituent nodes. LE demonstrated statistically significant decrease at multiple cost levels in SCI subjects. GE did not differ significantly between the two groups. The demonstration of modular architecture in both groups highlights the applicability of large-scale network analysis in studying complex brain networks. Comparing modules across groups revealed differences in number and membership of constituent nodes, indicating modular reorganization due to neural plasticity.
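Global efficiency, one of the two metrics compared above, is the average inverse shortest-path length over all node pairs. A minimal BFS-based sketch on a toy unweighted graph (not the authors' pipeline, which works on thresholded correlation matrices over 116 ROIs):

```python
from collections import deque

def global_efficiency(adj):
    """Average of 1/d(i, j) over all ordered node pairs of an unweighted
    graph given as {node: set_of_neighbours}; unreachable pairs contribute 0."""
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for src in nodes:
        # Breadth-first search for shortest path lengths from src.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != src)
    return total / (n * (n - 1))

# Path graph 0 - 1 - 2: pairwise distances are 1, 1 and 2.
path = {0: {1}, 1: {0, 2}, 2: {1}}
print(global_efficiency(path))  # 2*(1 + 1 + 0.5) / 6 = 0.8333...
```

Local efficiency, the metric that distinguished the SCI group, is obtained by applying the same computation to the subgraph induced by each node's neighbours and averaging over nodes.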

  12. Effective atomic numbers of some tissue substitutes by different methods: A comparative study

    Directory of Open Access Journals (Sweden)

    Vishwanath P Singh

    2014-01-01

Full Text Available Effective atomic numbers of some human organ tissue substitutes such as polyethylene terephthalate, red articulation wax, paraffin 1, paraffin 2, bolus, pitch, polyphenylene sulfide, polysulfone, polyvinylchloride, and modeling clay have been calculated by four different methods: Auto-Zeff, direct, interpolation, and power law. It was found that the effective atomic numbers computed by the Auto-Zeff, direct and interpolation methods were in good agreement in the intermediate energy region (0.1 MeV < E < 5 MeV), where the Compton interaction dominates. A large difference between the effective atomic numbers given by the direct method and by Auto-Zeff was observed in the photo-electric and pair-production regions. Effective atomic numbers computed by the power law were found to be close to the direct method in the photo-electric absorption region. The Auto-Zeff, direct and interpolation methods were found to be in good agreement for computation of effective atomic numbers in the intermediate energy region (100 keV < E < 10 MeV). The direct method was found to be the appropriate method for computation of effective atomic numbers in the photo-electric region (10 keV < E < 100 keV). The tissue equivalence of the tissue substitutes can be represented by any of the methods for computing the effective atomic number mentioned in the present study. An accurate estimation of Rayleigh scattering is required to eliminate the effect of the molecular, chemical, or crystalline environment of the atom when estimating gamma interaction parameters.
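The power-law method mentioned above reduces a compound to a single exponent-weighted mean of its constituent atomic numbers. A sketch using the commonly quoted exponent 2.94 for the photo-electric region (the exponent and the water example are illustrative conventions, not values taken from this paper):

```python
def z_eff_power_law(fractions_and_z, m=2.94):
    """Effective atomic number via the power law:
    Z_eff = (sum_i a_i * Z_i**m) ** (1/m),
    where a_i is element i's fractional contribution to the electron count."""
    return sum(a * z**m for a, z in fractions_and_z) ** (1.0 / m)

# Water, H2O: 2 of 10 electrons come from hydrogen (Z=1), 8 from oxygen (Z=8).
water = [(0.2, 1), (0.8, 8)]
print(round(z_eff_power_law(water), 2))  # ~7.42, the textbook value for water
```

Because the exponent is tied to the photo-electric cross-section's Z-dependence, this single-number method is expected to track the direct method only in the photo-electric absorption region, consistent with the abstract's finding.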

  13. Marketing Library and Information Services: Comparing Experiences at Large Institutions.

    Science.gov (United States)

    Noel, Robert; Waugh, Timothy

    This paper explores some of the similarities and differences between publicizing information services within the academic and corporate environments, comparing the marketing experiences of Abbot Laboratories (Illinois) and Indiana University. It shows some innovative online marketing tools, including an animated gif model of a large, integrated…

  14. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight about the performance of the computers involved when used to solve large-scale scientific...... problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216...

  15. SECRET SHARING SCHEMES WITH STRONG MULTIPLICATION AND A LARGE NUMBER OF PLAYERS FROM TORIC VARIETIES

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    2017-01-01

This article considers Massey's construction for building linear secret sharing schemes from toric varieties over a finite field $\Fq$ with $q$ elements. The number of players can be as large as $(q-1)^r-1$ for $r\geq 1$. The schemes have strong multiplication; such schemes can be utilized in ...

  16. Aerodynamic Effects of High Turbulence Intensity on a Variable-Speed Power-Turbine Blade With Large Incidence and Reynolds Number Variations

    Science.gov (United States)

    Flegel, Ashlie B.; Giel, Paul W.; Welch, Gerard E.

    2014-01-01

The effects of high inlet turbulence intensity on the aerodynamic performance of a variable speed power turbine blade are examined over large incidence and Reynolds number ranges. These results are compared to previous measurements made in a low turbulence environment. Both high and low turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The current study extends this to LPT-relevant turbulence levels while perhaps sacrificing transitional flow effects. Assessing the effects of turbulence at these large incidence and Reynolds number variations complements the existing database. Downstream total pressure and exit angle data were acquired for 10 incidence angles ranging from +15.8 deg to -51.0 deg. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12×10^5 to 2.12×10^6 and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hotwire located 0.415 axial-chord upstream of the blade row. The inlet turbulence levels ranged from 8 to 15 percent for the current study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated as needed for computational inlet boundary conditions. Downstream flow field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7 percent axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction surface separation that was evident at many of the low Tu conditions has been eliminated. At

  17. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2015-01-01

In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas which are required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required numbers of antennas are obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.

  18. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz

    2015-11-12

In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas which are required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required numbers of antennas are obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.
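The outage constraint studied above can be illustrated with a plain Monte Carlo estimate of the MIMO outage probability under iid Rayleigh fading. This is not the paper's analytical method, and the SNR, target rate, and antenna counts below are arbitrary illustration values:

```python
import numpy as np

def outage_probability(nt, nr, snr, rate_bps_hz, trials=2000, seed=0):
    """Estimate P(capacity < rate) for an iid Rayleigh-fading MIMO channel with
    C = log2 det(I + (snr/nt) H H^*), i.e. equal power per transmit antenna."""
    rng = np.random.default_rng(seed)
    outages = 0
    for _ in range(trials):
        h = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        gram = np.eye(nr) + (snr / nt) * (h @ h.conj().T)
        capacity = np.log2(np.linalg.det(gram).real)
        outages += capacity < rate_bps_hz
    return outages / trials

p2 = outage_probability(2, 2, snr=10.0, rate_bps_hz=6.0)
p4 = outage_probability(4, 4, snr=10.0, rate_bps_hz=6.0)
print(p4 <= p2)  # more antennas -> lower outage at the same target rate
```

Sweeping the antenna count until the estimated outage drops below a target threshold mimics, in simulation, the minimum-antenna-number question the paper answers analytically.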

  19. Obstructions to the realization of distance graphs with large chromatic numbers on spheres of small radii

    Energy Technology Data Exchange (ETDEWEB)

    Kupavskii, A B; Raigorodskii, A M [M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Moscow (Russian Federation)

    2013-10-31

    We investigate in detail some properties of distance graphs constructed on the integer lattice. Such graphs find wide applications in problems of combinatorial geometry, in particular, such graphs were employed to answer Borsuk's question in the negative and to obtain exponential estimates for the chromatic number of the space. This work is devoted to the study of the number of cliques and the chromatic number of such graphs under certain conditions. Constructions of sequences of distance graphs are given, in which the graphs have unit length edges and contain a large number of triangles that lie on a sphere of radius 1/√3 (which is the minimum possible). At the same time, the chromatic numbers of the graphs depend exponentially on their dimension. The results of this work strengthen and generalize some of the results obtained in a series of papers devoted to related issues. Bibliography: 29 titles.

  20. Monitoring a large number of pesticides and transformation products in water samples from Spain and Italy.

    Science.gov (United States)

    Rousis, Nikolaos I; Bade, Richard; Bijlsma, Lubertus; Zuccato, Ettore; Sancho, Juan V; Hernandez, Felix; Castiglioni, Sara

    2017-07-01

    Assessing the presence of pesticides in environmental waters is particularly challenging because of the huge number of substances used which may end up in the environment. Furthermore, the occurrence of pesticide transformation products (TPs) and/or metabolites makes this task even harder. Most studies dealing with the determination of pesticides in water include only a small number of analytes and in many cases no TPs. The present study applied a screening method for the determination of a large number of pesticides and TPs in wastewater (WW) and surface water (SW) from Spain and Italy. Liquid chromatography coupled to high-resolution mass spectrometry (HRMS) was used to screen a database of 450 pesticides and TPs. Detection and identification were based on specific criteria, i.e. mass accuracy, fragmentation, and comparison of retention times when reference standards were available, or a retention time prediction model when standards were not available. Seventeen pesticides and TPs from different classes (fungicides, herbicides and insecticides) were found in WW in Italy and Spain, and twelve in SW. Generally, in both countries more compounds were detected in effluent WW than in influent WW, and in SW than WW. This might be due to the analytical sensitivity in the different matrices, but also to the presence of multiple sources of pollution. HRMS proved a good screening tool to determine a large number of substances in water and identify some priority compounds for further quantitative analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
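Screening by HRMS as described above hinges on mass-accuracy matching of measured peaks against a suspect database. A minimal sketch of matching m/z values within a ppm tolerance (the compound masses and 5 ppm tolerance below are illustrative assumptions, not values from the study):

```python
def ppm_error(measured_mz, theoretical_mz):
    """Signed mass error in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

def screen(measured_peaks, database, tol_ppm=5.0):
    """Return (peak, compound) pairs whose mass error is within tol_ppm."""
    hits = []
    for mz in measured_peaks:
        for name, theo in database.items():
            if abs(ppm_error(mz, theo)) <= tol_ppm:
                hits.append((mz, name))
    return hits

# Hypothetical [M+H]+ monoisotopic masses for two triazine herbicides.
database = {"atrazine": 216.1010, "terbuthylazine": 230.1167}
peaks = [216.1008, 230.1170, 150.0000]
print(screen(peaks, database))  # the 150.0000 peak matches nothing
```

In the actual workflow, a mass-accuracy hit is only the first identification criterion; fragmentation and retention time (measured or predicted) are then used to confirm, as the abstract describes.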

  1. Comparative guide to emerging diagnostic tools for large commercial HVAC systems

    Energy Technology Data Exchange (ETDEWEB)

    Friedman, Hannah; Piette, Mary Ann

    2001-05-01

    This guide compares emerging diagnostic software tools that aid detection and diagnosis of operational problems for large HVAC systems. We have evaluated six tools for use with energy management control system (EMCS) or other monitoring data. The diagnostic tools summarize relevant performance metrics, display plots for manual analysis, and perform automated diagnostic procedures. Our comparative analysis presents nine summary tables with supporting explanatory text and includes sample diagnostic screens for each tool.

  2. Comparative analyses of the neuron numbers and volumes of the amygdaloid complex in old and new world primates.

    Science.gov (United States)

    Carlo, C N; Stefanacci, L; Semendeferi, K; Stevens, C F

    2010-04-15

    The amygdaloid complex (AC), a key component of the limbic system, is a brain region critical for the detection and interpretation of emotionally salient information. Therefore, changes in its structure and function are likely to provide correlates of mood and emotion disorders, diseases that afflict a large portion of the human population. Previous gross comparisons of the AC in control and diseased individuals have, however, mainly failed to discover these expected correlations with diseases. We have characterized AC nuclei in different nonhuman primate species to establish a baseline for more refined comparisons between the normal and the diseased amygdala. AC nuclei volume and neuron number in 19 subdivisions are reported from 13 Old and New World primate brains, spanning five primate species, and compared with corresponding data from humans. Analysis of the four largest AC nuclei revealed that volume and neuron number of one component, the central nucleus, has a negative allometric relationship with total amygdala volume and neuron number, which is in contrast with the isometric relationship found in the other AC nuclei (for both neuron number and volume). Neuron density decreases across all four nuclei according to a single power law with an exponent of about minus one-half. Because we have included quantitative comparisons with great apes and humans, our conclusions apply to human brains, and our scaling laws can potentially be used to study the anatomical correlates of the amygdala in disorders involving pathological emotion processing. (c) 2009 Wiley-Liss, Inc.

  3. New feature for an old large number

    International Nuclear Information System (INIS)

    Novello, M.; Oliveira, L.R.A.

    1986-01-01

A new context for the appearance of the Eddington number (10^39), which is due to the examination of elastic scattering of scalar particles (πK → πK) non-minimally coupled to gravity, is presented. (author) [pt

  4. Comparative analysis on the selection of number of clusters in community detection

    Science.gov (United States)

    Kawamoto, Tatsuro; Kabashima, Yoshiyuki

    2018-02-01

    We conduct a comparative analysis on various estimates of the number of clusters in community detection. An exhaustive comparison requires testing of all possible combinations of frameworks, algorithms, and assessment criteria. In this paper we focus on the framework based on a stochastic block model, and investigate the performance of greedy algorithms, statistical inference, and spectral methods. For the assessment criteria, we consider modularity, map equation, Bethe free energy, prediction errors, and isolated eigenvalues. From the analysis, the tendency of overfit and underfit that the assessment criteria and algorithms have becomes apparent. In addition, we propose that the alluvial diagram is a suitable tool to visualize statistical inference results and can be useful to determine the number of clusters.
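Modularity, one of the assessment criteria compared above, can be computed directly from its definition Q = (1/2m) Σ_ij [A_ij − k_i k_j / (2m)] δ(c_i, c_j). A self-contained sketch on a toy graph (the graph and partition are invented for illustration):

```python
def modularity(edges, communities):
    """Newman-Girvan modularity of a partition of an undirected graph.
    edges: list of (u, v) pairs; communities: dict node -> community label."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    # Observed fraction of edges falling inside communities.
    internal = sum(communities[u] == communities[v] for u, v in edges) / m
    # Expected internal fraction under the configuration (degree-preserving) null model.
    expected = sum(
        (sum(d for node, d in degree.items() if communities[node] == c) / (2 * m)) ** 2
        for c in set(communities.values())
    )
    return internal - expected

# Two disjoint triangles, each assigned to its own community.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
communities = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
print(modularity(edges, communities))  # 0.5
```

Estimating the number of clusters by maximizing such a criterion over candidate partitions is exactly where the overfitting and underfitting tendencies analyzed in the paper arise.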

  5. The large lungs of elite swimmers: an increased alveolar number?

    Science.gov (United States)

    Armour, J; Donnelly, P M; Bye, P T

    1993-02-01

In order to obtain further insight into the mechanisms relating to the large lung volumes of swimmers, tests of mechanical lung function, including lung distensibility (K) and elastic recoil, pulmonary diffusion capacity, and respiratory mouth pressures, together with anthropometric data (height, weight, body surface area, chest width, depth and surface area), were compared in eight elite male swimmers, eight elite male long distance athletes and eight control subjects. The differences in training profiles of each group were also examined. There was no significant difference in height between the subjects, but the swimmers were younger than both the runners and controls, and both the swimmers and controls were heavier than the runners. Of all the training variables, only the mean total distance in kilometers covered per week was significantly greater in the runners. Whether based on (a) adolescent predicted values or (b) adult male predicted values, swimmers had significantly increased total lung capacity ((a) 145 +/- 22% (mean +/- SD); (b) 128 +/- 15%); vital capacity ((a) 146 +/- 24%; (b) 124 +/- 15%); and inspiratory capacity ((a) 155 +/- 33%; (b) 138 +/- 29%), but this was not found in the other two groups. Swimmers also had the largest chest surface area and chest width. Forced expiratory volume in one second (FEV1) was largest in the swimmers ((b) 122 +/- 17%) and FEV1 as a percentage of forced vital capacity (FEV1/FVC)% was similar for the three groups. Pulmonary diffusing capacity (DLCO) was also highest in the swimmers (117 +/- 18%). All of the other indices of lung function, including pulmonary distensibility (K), elastic recoil and diffusion coefficient (KCO), were similar. These findings suggest that swimmers may have achieved greater lung volumes than either runners or control subjects, not because of greater inspiratory muscle strength, or differences in height, fat free mass, alveolar distensibility, age at start of training or sternal length or

  6. A Comparative Study of Four Methods for the Detection of Nematode Eggs and Large Protozoan Cysts in Mandrill Faecal Material.

    Science.gov (United States)

    Pouillevet, Hanae; Dibakou, Serge-Ely; Ngoubangoye, Barthélémy; Poirotte, Clémence; Charpentier, Marie J E

    2017-01-01

    Coproscopical methods like sedimentation and flotation techniques are widely used in the field for studying simian gastrointestinal parasites. Four parasites of known zoonotic potential were studied in a free-ranging, non-provisioned population of mandrills (Mandrillus sphinx): 2 nematodes (Necator americanus/Oesophagostomum sp. complex and Strongyloides sp.) and 2 protozoan species (Balantidium coli and Entamoeba coli). Different coproscopical techniques are available but they are rarely compared to evaluate their efficiency to retrieve parasites. In this study, 4 different field-friendly methods were compared. A sedimentation method and 3 different McMaster methods (using sugar, salt, and zinc sulphate solutions) were performed on 47 faecal samples collected from different individuals of both sexes and all ages. First, we show that McMaster flotation methods are appropriate to detect and thus quantify large protozoan cysts. Second, zinc sulphate McMaster flotation allows the retrieval of a higher number of parasite taxa compared to the other 3 methods. This method further shows the highest probability to detect each of the studied parasite taxa. Altogether our results show that zinc sulphate McMaster flotation appears to be the best technique to use when studying nematodes and large protozoa. © 2017 S. Karger AG, Basel.
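    A McMaster count is converted to a parasite density (eggs or cysts per gram of faeces) with a simple dilution formula. A minimal sketch, assuming a conventional setup of 4 g of faeces in 60 ml of flotation solution and two 0.15 ml counting chambers; the exact volumes used in the study above may differ:

    ```python
    def eggs_per_gram(eggs_counted, sample_grams=4.0, suspension_ml=60.0, chamber_ml=0.3):
        """McMaster count to eggs-per-gram of faeces.

        eggs_counted : total eggs seen under the counting grid(s)
        chamber_ml   : suspension volume examined (two 0.15 ml chambers by default)
        """
        # Scale the examined volume up to the whole suspension,
        # then normalise by the faecal mass.
        return eggs_counted * (suspension_ml / chamber_ml) / sample_grams

    # With the default 4 g / 60 ml / 0.3 ml setup, each egg counted
    # represents 50 eggs per gram:
    print(eggs_per_gram(7))  # 350.0
    ```

    The multiplication factor (here 50) is what differs between McMaster variants; the flotation solution only affects which parasite stages rise into the chambers at all.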

  7. Transitional boundary layer in low-Prandtl-number convection at high Rayleigh number

    Science.gov (United States)

    Schumacher, Joerg; Bandaru, Vinodh; Pandey, Ambrish; Scheel, Janet

    2016-11-01

    The boundary layer structure of the velocity and temperature fields in turbulent Rayleigh-Bénard flows in closed cylindrical cells of unit aspect ratio is revisited from a transitional and turbulent viscous boundary layer perspective. When the Rayleigh number is large enough, the boundary layer dynamics at the bottom and top plates can be separated into an impact region of downwelling plumes, an ejection region of upwelling plumes and an interior region (away from side walls) that is dominated by a shear flow of varying orientation. This interior plate region is compared here to classical wall-bounded shear flows. The working fluid is liquid mercury or liquid gallium at a Prandtl number of Pr = 0.021 for a range of Rayleigh numbers from 3 × 10^5 upward. Funding: Deutsche Forschungsgemeinschaft.

  8. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    Science.gov (United States)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate array (FPGA) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGA presents both new constraints and new opportunities for the implementations of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all of the four RNGs used in previous FPGA based MC studies and newly proposed FPGA implementations for two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
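    The additive lagged Fibonacci generator named above follows the recurrence x[n] = x[n-j] + x[n-k] (mod 2^m). A minimal software sketch of that recurrence, using small illustrative lags (5, 17) rather than whatever lag pair the FPGA design actually used; parallel streams here simply come from independently seeded lag tables, which is one common (though not the only) parallelization strategy:

    ```python
    import random
    from collections import deque

    class ALFG:
        """Additive lagged Fibonacci generator: x[n] = x[n-j] + x[n-k] (mod 2^m).
        Lags (5, 17) are a small classic pair; production designs use much
        larger lags for longer periods."""
        def __init__(self, seed_words, j=5, k=17, m=32):
            assert len(seed_words) == k
            self.j, self.mask = j, (1 << m) - 1
            self.state = deque(seed_words, maxlen=k)  # last k outputs

        def next(self):
            # state[-j] is x[n-j]; state[0] (== state[-k]) is x[n-k].
            x = (self.state[-self.j] + self.state[0]) & self.mask
            self.state.append(x)  # deque drops the oldest word automatically
            return x

    # Independent parallel streams from independently seeded lag tables:
    master = random.Random(42)
    streams = [ALFG([master.getrandbits(32) for _ in range(17)]) for _ in range(4)]
    sample = [s.next() for s in streams]
    ```

    On an FPGA the same recurrence maps naturally to a shift register plus one adder per stream, which is why lagged Fibonacci generators are popular in that setting.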

  9. SVA retrotransposon insertion-associated deletion represents a novel mutational mechanism underlying large genomic copy number changes with non-recurrent breakpoints

    Science.gov (United States)

    2014-01-01

    Background Genomic disorders are caused by copy number changes that may exhibit recurrent breakpoints processed by nonallelic homologous recombination. However, region-specific disease-associated copy number changes have also been observed which exhibit non-recurrent breakpoints. The mechanisms underlying these non-recurrent copy number changes have not yet been fully elucidated. Results We analyze large NF1 deletions with non-recurrent breakpoints as a model to investigate the full spectrum of causative mechanisms, and observe that they are mediated by various DNA double strand break repair mechanisms, as well as aberrant replication. Further, two of the 17 NF1 deletions with non-recurrent breakpoints, identified in unrelated patients, occur in association with the concomitant insertion of SINE/variable number of tandem repeats/Alu (SVA) retrotransposons at the deletion breakpoints. The respective breakpoints are refractory to analysis by standard breakpoint-spanning PCRs and are only identified by means of optimized PCR protocols designed to amplify across GC-rich sequences. The SVA elements are integrated within SUZ12P intron 8 in both patients, and were mediated by target-primed reverse transcription of SVA mRNA intermediates derived from retrotranspositionally active source elements. Both SVA insertions occurred during early postzygotic development and are uniquely associated with large deletions of 1 Mb and 867 kb, respectively, at the insertion sites. Conclusions Since active SVA elements are abundant in the human genome and the retrotranspositional activity of many SVA source elements is high, SVA insertion-associated large genomic deletions encompassing many hundreds of kilobases could constitute a novel and as yet under-appreciated mechanism underlying large-scale copy number changes in the human genome. PMID:24958239

  10. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases

    NARCIS (Netherlands)

    Heidema, A.G.; Boer, J.M.A.; Nagelkerke, N.; Mariman, E.C.M.; A, van der D.L.; Feskens, E.J.M.

    2006-01-01

    Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods

  11. Dogs Have the Most Neurons, Though Not the Largest Brain: Trade-Off between Body Mass and Number of Neurons in the Cerebral Cortex of Large Carnivoran Species

    Directory of Open Access Journals (Sweden)

    Débora Jardim-Messeder

    2017-12-01

    Full Text Available Carnivorans are a diverse group of mammals that includes carnivorous, omnivorous and herbivorous, domesticated and wild species, with a large range of brain sizes. Carnivory is one of several factors expected to be cognitively demanding for carnivorans due to a requirement to outsmart larger prey. On the other hand, large carnivoran species have high hunting costs and unreliable feeding patterns, which, given the high metabolic cost of brain neurons, might put them at risk of metabolic constraints regarding how many brain neurons they can afford, especially in the cerebral cortex. For a given cortical size, do carnivoran species have more cortical neurons than the herbivorous species they prey upon? We find they do not; carnivorans (cat, mongoose, dog, hyena, lion) share with non-primates, including artiodactyls (the typical prey of large carnivorans), roughly the same relationship between cortical mass and number of neurons, which suggests that carnivorans are subject to the same evolutionary scaling rules as other non-primate clades. However, there are a few important exceptions. Carnivorans stand out in that the usual relationship between larger body, larger cortical mass and larger number of cortical neurons only applies to small and medium-sized species, and not beyond dogs: we find that the golden retriever dog has more cortical neurons than the striped hyena, African lion and even brown bear, even though the latter species have up to three times larger cortices than dogs. Remarkably, the brown bear cerebral cortex, the largest examined, only has as many neurons as the ten times smaller cat cerebral cortex, although it does have the expected ten times as many non-neuronal cells in the cerebral cortex compared to the cat. We also find that raccoons have dog-like numbers of neurons in their cat-sized brain, which makes them comparable to primates in neuronal density. Comparison of domestic and wild species suggests that the neuronal

  12. The natural number bias and its role in rational number understanding in children with dyscalculia. Delay or deficit?

    Science.gov (United States)

    Van Hoof, Jo; Verschaffel, Lieven; Ghesquière, Pol; Van Dooren, Wim

    2017-12-01

    Previous research indicated that in several cases learners' errors on rational number tasks can be attributed to learners' tendency to (wrongly) apply natural number properties. There exists a large body of literature both on learners' struggle with understanding the rational number system and on the role of the natural number bias in this struggle. However, little is known about this phenomenon in learners with dyscalculia. We investigated the rational number understanding of learners with dyscalculia and compared it with the rational number understanding of learners without dyscalculia. Three groups of learners were included: sixth graders with dyscalculia, a chronological age match group, and an ability match group. The results showed that the rational number understanding of learners with dyscalculia is significantly lower than that of typically developing peers, but not significantly different from younger learners, even after statistically controlling for mathematics achievement. Next to a delay in their mathematics achievement, learners with dyscalculia seem to have an extra delay in their rational number understanding, compared with peers. This is especially the case in those rational number tasks where one has to inhibit natural number knowledge to come to the right answer. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Phases of a stack of membranes in a large number of dimensions of configuration space

    Science.gov (United States)

    Borelli, M. E.; Kleinert, H.

    2001-05-01

    The phase diagram of a stack of tensionless membranes with nonlinear curvature energy and vertical harmonic interaction is calculated exactly in a large number of dimensions of configuration space. At low temperatures, the system forms a lamellar phase with spontaneously broken translational symmetry in the vertical direction. At a critical temperature, the stack disorders vertically in a meltinglike transition. The critical temperature is determined as a function of the interlayer separation l.

  14. The effective atomic numbers of some biomolecules calculated by two methods: A comparative study

    Energy Technology Data Exchange (ETDEWEB)

    Manohara, S. R.; Hanagodimath, S. M.; Gerward, L. [Department of Physics, Gulbarga University, Gulbarga, Karnataka 585 106 (India); Department of Physics, Technical University of Denmark, Lyngby DK-2800 (Denmark)

    2009-01-15

    The effective atomic numbers Z{sub eff} of some fatty acids and amino acids have been calculated by two numerical methods, a direct method and an interpolation method, in the energy range of 1 keV-20 MeV. The notion of Z{sub eff} is given a new meaning by using a modern database of photon interaction cross sections (WinXCom). The results of the two methods are compared and discussed. It is shown that for all biomolecules the direct method gives larger values of Z{sub eff} than the interpolation method, in particular at low energies (1-100 keV). At medium energies (0.1-5 MeV), Z{sub eff} for both methods is about constant and equal to the mean atomic number of the material. Wherever possible, the calculated values of Z{sub eff} are compared with experimental data.
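    One common formulation of the direct method takes the ratio of the mixture's total atomic cross section to its total electronic cross section; the exact weighting used by the authors may differ. A sketch under that assumption, with mass attenuation coefficients that would in practice come from WinXCom/XCOM:

    ```python
    def z_eff_direct(fractions, A, Z, mu_rho):
        """Direct-method effective atomic number at one photon energy.

        fractions : molar (number) fraction of each element, summing to 1
        A, Z      : atomic masses and atomic numbers of the elements
        mu_rho    : mass attenuation coefficients (cm^2/g) at that energy
        """
        # Total atomic cross section of the mixture ...
        sigma_atomic = sum(f * a * m for f, a, m in zip(fractions, A, mu_rho))
        # ... divided by its total electronic cross section.
        sigma_electronic = sum(f * (a / z) * m
                               for f, a, z, m in zip(fractions, A, Z, mu_rho))
        return sigma_atomic / sigma_electronic

    # Sanity check: a single element must return its own atomic number.
    print(z_eff_direct([1.0], [12.011], [6], [0.2]))  # close to 6.0
    ```

    Because mu_rho is energy dependent, repeating the call over a grid of energies reproduces the energy dependence of Z_eff discussed in the abstract.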

  15. The effective atomic numbers of some biomolecules calculated by two methods: A comparative study

    International Nuclear Information System (INIS)

    Manohara, S. R.; Hanagodimath, S. M.; Gerward, L.

    2009-01-01

    The effective atomic numbers Z eff of some fatty acids and amino acids have been calculated by two numerical methods, a direct method and an interpolation method, in the energy range of 1 keV-20 MeV. The notion of Z eff is given a new meaning by using a modern database of photon interaction cross sections (WinXCom). The results of the two methods are compared and discussed. It is shown that for all biomolecules the direct method gives larger values of Z eff than the interpolation method, in particular at low energies (1-100 keV). At medium energies (0.1-5 MeV), Z eff for both methods is about constant and equal to the mean atomic number of the material. Wherever possible, the calculated values of Z eff are compared with experimental data.

  16. Comparative guide to emerging diagnostic tools for large commercial HVAC systems; TOPICAL

    International Nuclear Information System (INIS)

    Friedman, Hannah; Piette, Mary Ann

    2001-01-01

    This guide compares emerging diagnostic software tools that aid detection and diagnosis of operational problems for large HVAC systems. We have evaluated six tools for use with energy management control system (EMCS) or other monitoring data. The diagnostic tools summarize relevant performance metrics, display plots for manual analysis, and perform automated diagnostic procedures. Our comparative analysis presents nine summary tables with supporting explanatory text and includes sample diagnostic screens for each tool.

  17. Early stage animal hoarders: are these owners of large numbers of adequately cared for cats?

    OpenAIRE

    Ramos, D.; da Cruz, N. O.; Ellis, Sarah; Hernandez, J. A. E.; Reche-Junior, A.

    2013-01-01

    Animal hoarding is a spectrum-based condition in which hoarders are often reported to have had normal and appropriate pet-keeping habits in childhood and early adulthood. Historically, research has focused largely on well established clinical animal hoarders with little work targeted towards the onset and development of animal hoarding. This study investigated whether a Brazilian population of owners of what might typically be considered an excessive number (20 or more) of cats were more like...

  18. Simulation study comparing the helmet-chin PET with a cylindrical PET of the same number of detectors

    Science.gov (United States)

    Ahmed, Abdella M.; Tashima, Hideaki; Yoshida, Eiji; Nishikido, Fumihiko; Yamaya, Taiga

    2017-06-01

    There is a growing interest in developing brain PET scanners with high sensitivity and high spatial resolution for early diagnosis of neurodegenerative diseases and studies of brain functions. Sensitivity of the PET scanner can be improved by increasing the solid angle. However, conventional PET scanners are designed based on a cylindrical geometry, which may not be the most efficient design for brain imaging in terms of the balance between sensitivity and cost. We proposed a dedicated brain PET scanner based on a hemispheric shape detector and a chin detector (referred to as the helmet-chin PET), which is designed to maximize the solid angle by increasing the number of lines-of-response in the hemisphere. The parallax error, which PET scanners with a large solid angle tend to have, can be suppressed by the use of depth-of-interaction detectors. In this study, we carry out a realistic evaluation of the helmet-chin PET using Monte Carlo simulation based on the 4-layer GSO detector which consists of a 16 × 16 × 4 array of crystals with dimensions of 2.8 × 2.8 × 7.5 mm3. The purpose of this simulation is to show the gain in imaging performance of the helmet-chin PET compared with the cylindrical PET using the same number of detectors in each configuration. The sensitivity of the helmet-chin PET evaluated with a cylindrical phantom shows a significant increase, especially at the top of the field-of-view (FOV). The peak-NECR of the helmet-chin PET is 1.4 times higher than that of the cylindrical PET. The helmet-chin PET provides relatively low noise images throughout the FOV compared to the cylindrical PET, which exhibits enhanced noise at the peripheral regions. The results show the helmet-chin PET can significantly improve the sensitivity and reduce the noise in the reconstructed images.

  19. Direct and large eddy simulation of turbulent heat transfer at very low Prandtl number: Application to lead–bismuth flows

    International Nuclear Information System (INIS)

    Bricteux, L.; Duponcheel, M.; Winckelmans, G.; Tiselj, I.; Bartosiewicz, Y.

    2012-01-01

    Highlights: ► We perform direct and hybrid-large eddy simulations of high-Reynolds, low-Prandtl turbulent wall-bounded flows with heat transfer. ► We use state-of-the-art numerical methods with low energy dissipation and low dispersion. ► We use recent multiscale subgrid-scale models. ► Important results concerning near-wall modeling strategies in RANS are provided. ► The turbulent Prandtl number predicted by our simulations differs from that proposed by some correlations in the literature. - Abstract: This paper deals with the issue of modeling convective turbulent heat transfer of a liquid metal with a Prandtl number down to 0.01, which is the order of magnitude of lead–bismuth eutectic in a liquid metal reactor. This work presents a DNS (direct numerical simulation) and a LES (large eddy simulation) of a channel flow at two different Reynolds numbers, and the results are analyzed in the frame of best practice guidelines for RANS (Reynolds averaged Navier–Stokes) computations used in industrial applications. They primarily show that the turbulent Prandtl number concept should be used with care and that even recently proposed correlations may not be sufficient.

  20. Decision process in MCDM with large number of criteria and heterogeneous risk preferences

    Directory of Open Access Journals (Sweden)

    Jian Liu

    Full Text Available A new decision process is proposed to address the challenge posed by a large number of criteria in multi-criteria decision making (MCDM) problems and by decision makers with heterogeneous risk preferences. First, from the perspective of objective data, the effective criteria are extracted based on the similarity relations between criterion values and the criteria are weighted, respectively. Second, the corresponding types of theoretic models of risk preference expectations are built, based on the possibility and similarity between criterion values, to solve the problem of different interval numbers with the same expectation. Then, the risk preferences (risk-seeking, risk-neutral and risk-averse) are embedded in the decision process. Later, the optimal decision object is selected according to the risk preferences of decision makers based on the corresponding theoretic model. Finally, a new algorithm of information aggregation model is proposed based on fairness maximization of decision results for the group decision, considering the coexistence of decision makers with heterogeneous risk preferences. The scientific rationality of this new method is verified through the analysis of a real case. Keywords: Heterogeneous, Risk preferences, Fairness, Decision process, Group decision

  1. On the Convergence and Law of Large Numbers for the Non-Euclidean Lp -Means

    Directory of Open Access Journals (Sweden)

    George Livadiotis

    2017-05-01

    Full Text Available This paper describes and proves two important theorems that compose the Law of Large Numbers for the non-Euclidean Lp-means, known to be true for the Euclidean L2-means: Let the Lp-mean estimator be the specific functional that estimates the Lp-mean of N independent and identically distributed random variables; then, (i) the expectation value of the Lp-mean estimator equals the mean of the distributions of the random variables; and (ii) the limit N → ∞ of the Lp-mean estimator also equals the mean of the distributions.
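    The Lp-mean of a sample can be defined as the value minimising the sum of p-th powers of absolute deviations (for p = 2 this recovers the arithmetic mean). A numerical sketch of the convergence claim, assuming a symmetric (Gaussian) distribution for illustration; the minimiser is found by ternary search, which is valid because the cost is convex for p ≥ 1:

    ```python
    import random

    def lp_mean(xs, p):
        """L_p-mean of a sample: the m minimising sum(|x - m|**p)."""
        cost = lambda m: sum(abs(x - m) ** p for x in xs)
        lo, hi = min(xs), max(xs)
        for _ in range(100):          # ternary search on a convex cost
            m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
            if cost(m1) < cost(m2):
                hi = m2
            else:
                lo = m1
        return (lo + hi) / 2

    # For i.i.d. samples from a symmetric distribution, the L_p-mean
    # estimator approaches the distribution mean (here 3.0) as N grows:
    rng = random.Random(0)
    xs = [rng.gauss(3.0, 1.0) for _ in range(5000)]
    print(abs(lp_mean(xs, 1.5) - 3.0) < 0.1)  # True
    ```

    For p = 1 the same code returns the sample median, which makes the role of p as an outlier-robustness knob concrete.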

  2. Number of deaths due to lung diseases: How large is the problem?

    International Nuclear Information System (INIS)

    Wagener, D.K.

    1990-01-01

    The importance of lung disease as an indicator of environmentally induced adverse health effects has been recognized by inclusion among the Health Objectives for the Nation. The 1990 Health Objectives for the Nation (US Department of Health and Human Services, 1986) includes an objective that there should be virtually no new cases among newly exposed workers for four preventable occupational lung diseases: asbestosis, byssinosis, silicosis, and coal workers' pneumoconiosis. This brief communication describes two types of cause-of-death statistics, underlying cause and multiple cause, and demonstrates the differences between the two statistics using lung disease deaths among adult men. The choice of statistic has a large impact on estimated lung disease mortality rates. The choice of statistic may also have a large effect on the estimated mortality rates due to other chronic diseases thought to be environmentally mediated. Issues of comorbidity and the way causes of death are reported become important in the interpretation of these statistics. The choice of which statistic to use when comparing data from a study population with national statistics may greatly affect the interpretation of the study findings
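    The difference between the two statistics can be made concrete with toy records (hypothetical ICD-style codes, not real data): an underlying-cause count credits a death only to the single underlying cause on the certificate, while a multiple-cause count credits every condition mentioned anywhere on it.

    ```python
    # Each record has one underlying cause and any number of contributing causes.
    deaths = [
        {"underlying": "J84", "contributing": ["I25"]},  # lung disease underlying
        {"underlying": "I25", "contributing": ["J84"]},  # lung disease contributing
        {"underlying": "I25", "contributing": []},       # lung disease not mentioned
    ]

    lung = "J84"
    # Underlying-cause statistic: deaths attributed to lung disease as THE cause.
    underlying_count = sum(d["underlying"] == lung for d in deaths)
    # Multiple-cause statistic: deaths where lung disease appears anywhere.
    multiple_count = sum(d["underlying"] == lung or lung in d["contributing"]
                         for d in deaths)
    print(underlying_count, multiple_count)  # 1 2
    ```

    The multiple-cause count is always at least as large as the underlying-cause count, which is exactly the comorbidity effect the abstract warns about when comparing study data against national statistics.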

  3. A comparative study of scale-adaptive and large-eddy simulations of highly swirling turbulent flow through an abrupt expansion

    International Nuclear Information System (INIS)

    Javadi, Ardalan; Nilsson, Håkan

    2014-01-01

    The strongly swirling turbulent flow through an abrupt expansion is investigated using highly resolved LES and SAS, to shed more light on the stagnation region and the helical vortex breakdown. The vortex breakdown in an abrupt expansion resembles the so-called vortex rope occurring in hydro power draft tubes. It is known that the large-scale helical vortex structures can be captured by regular RANS turbulence models. However, the spurious suppression of the small-scale structures should be avoided using less diffusive methods. The present work compares LES and SAS results with the experimental measurement of Dellenback et al. (1988). The computations are conducted using a general non-orthogonal finite-volume method with a fully collocated storage available in the OpenFOAM-2.1.x CFD code. The dynamics of the flow is studied at two Reynolds numbers, Re = 6.0×10^4 and Re = 10^5, at the almost constant high swirl numbers of Sr = 1.16 and Sr = 1.23, respectively. The time-averaged velocity and pressure fields and the root mean square of the velocity fluctuations, are captured and investigated qualitatively. The flow with the lower Reynolds number gives a much weaker outburst although the frequency of the structures seems to be constant for the plateau swirl number

  4. The Ramsey numbers of large cycles versus small wheels

    NARCIS (Netherlands)

    Surahmat,; Baskoro, E.T.; Broersma, H.J.

    2004-01-01

    For two given graphs G and H, the Ramsey number R(G;H) is the smallest positive integer N such that for every graph F of order N the following holds: either F contains G as a subgraph or the complement of F contains H as a subgraph. In this paper, we determine the Ramsey number R(Cn;Wm) for m = 4
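    The definition above lends itself to exhaustive verification for very small cases. A sketch checking the classical R(K3, K3) = 6 by enumerating every red/blue edge colouring (equivalently, every graph F and its complement); cycle-versus-wheel numbers like R(Cn; Wm) are far beyond such brute force, so this is purely an illustration of the definition:

    ```python
    from itertools import combinations, product

    def has_mono_triangle(n, coloring):
        """True if some triangle of K_n has all three edges the same colour."""
        return any(coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
                   for a, b, c in combinations(range(n), 3))

    def ramsey_33_holds(n):
        """True iff EVERY 2-colouring of K_n contains a monochromatic K_3."""
        edges = list(combinations(range(n), 2))
        return all(has_mono_triangle(n, dict(zip(edges, colors)))
                   for colors in product((0, 1), repeat=len(edges)))

    # R(3,3) = 6: a triangle-free-in-both-colours colouring exists on K_5
    # (the pentagon/pentagram split) but not on K_6.
    print(ramsey_33_holds(5), ramsey_33_holds(6))  # False True
    ```

    The doubly exponential growth (2^(n(n-1)/2) colourings) is why results such as R(Cn; Wm) are established by combinatorial proof rather than search.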

  5. Summary of experience from a large number of construction inspections; Wind power plant projects; Erfarenhetsaaterfoering fraan entreprenadbesiktningar

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Bertil; Holmberg, Rikard

    2010-08-15

    This report presents a summary of experience from a large number of construction inspections of wind power projects. The working method is based on the collection of construction experience in the form of questionnaires. The questionnaires were supplemented by a number of in-depth interviews to understand in more detail what is perceived to be a problem and whether there were suggestions for improvements. The results in this report are based on inspection protocols from 174 wind turbines, which corresponds to about one-third of the power plants built in the time period. In total the questionnaires included 4683 inspection remarks as well as about one hundred free text comments. 52 of the 174 inspected power stations were rejected, corresponding to 30%. It has not been possible to identify any overrepresented type of remark as a main cause of rejection; rejection is usually based on a total number of remarks that is too large. The average number of remarks for a power plant is 27. Most power stations have between 20 and 35 remarks. The most common remarks concern shortcomings in marking and documentation. These are easily adjusted, and may be regarded as less serious. There are, however, a number of remarks which are recurrent and quite serious, mainly regarding gearbox, education and lightning protection. Usually these are also easily adjusted, but the consequences if not corrected can be very large. The consequences may be either shortened life of expensive components, e.g. oil problems in gear boxes, or increased probability of serious accidents, e.g. maladjusted lightning protection. In the report, comparisons between power stations of various construction periods, sizes, suppliers, geographies and topographies are also presented. The general conclusion is that the differences are small. The results of the evaluation of questionnaires correspond well with the results of the in-depth interviews with clients.
The problem that clients agreed upon as the greatest is the lack

  6. A large electrically excited synchronous generator

    DEFF Research Database (Denmark)

    2014-01-01

    This invention relates to a large electrically excited synchronous generator (100), comprising a stator (101), and a rotor or rotor coreback (102) comprising an excitation coil (103) generating a magnetic field during use, wherein the rotor or rotor coreback (102) further comprises a plurality...... adjacent neighbouring poles. In this way, a large electrically excited synchronous generator (EESG) is provided that readily enables a relatively large number of poles, compared to a traditional EESG, since the excitation coil in this design provides MMF for all the poles, whereas in a traditional EESG...... each pole needs its own excitation coil, which limits the number of poles as each coil will take up too much space between the poles....

  7. A Genome-Wide Association Study in Large White and Landrace Pig Populations for Number Piglets Born Alive

    Science.gov (United States)

    Bergfelder-Drüing, Sarah; Grosse-Brinkhaus, Christine; Lind, Bianca; Erbe, Malena; Schellander, Karl; Simianer, Henner; Tholen, Ernst

    2015-01-01

    The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between Large White and Landrace breed were detected. PMID:25781935

  8. A genome-wide association study in large white and landrace pig populations for number piglets born alive.

    Directory of Open Access Journals (Sweden)

    Sarah Bergfelder-Drüing

    Full Text Available The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between Large White and Landrace breed were detected.

  9. A comparative analysis of the statistical properties of large mobile phone calling networks.

    Science.gov (United States)

    Li, Ming-Xia; Jiang, Zhi-Qiang; Xie, Wen-Jie; Miccichè, Salvatore; Tumminello, Michele; Zhou, Wei-Xing; Mantegna, Rosario N

    2014-05-30

    Mobile phone calling is one of the most widely used communication methods in modern society. The records of calls among mobile phone users provide us with a valuable proxy for the understanding of human communication patterns embedded in social networks. Mobile phone users call each other, forming a directed calling network. If only reciprocal calls are considered, we obtain an undirected mutual calling network. The preferential communication behavior between two connected users can be statistically tested and it results in two Bonferroni networks with statistically validated edges. We perform a comparative analysis of the statistical properties of these four networks, which are constructed from the calling records of more than nine million individuals in Shanghai over a period of 110 days. We find that these networks share many common structural properties and also exhibit idiosyncratic features when compared with previously studied large mobile calling networks. The empirical findings provide an intriguing picture of a representative large social network that might shed new light on the modelling of large social networks.
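    Deriving the first two networks from raw records is mechanical: every distinct (caller, callee) pair is an edge of the directed calling network, and only reciprocated pairs survive into the undirected mutual calling network. A minimal sketch with hypothetical records (the Bonferroni networks additionally require the statistical validation step described above, not shown here):

    ```python
    # Hypothetical call records as (caller, callee) pairs; repeats are allowed.
    calls = [("a", "b"), ("b", "a"), ("a", "c"),
             ("c", "d"), ("d", "c"), ("a", "b")]

    # Directed calling network: one edge per distinct ordered pair.
    directed = set(calls)

    # Undirected mutual calling network: keep only reciprocated pairs,
    # collapsing each into a single undirected edge.
    mutual = {frozenset(e) for e in directed if (e[1], e[0]) in directed}

    print(len(directed), len(mutual))  # 5 2
    ```

    On real data one would also keep call counts per edge, since the reciprocity test and the Bonferroni validation both depend on how often each pair communicates.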

  10. How comparable are size-resolved particle number concentrations from different instruments?

    Science.gov (United States)

    Hornsby, K. E.; Pryor, S. C.

    2012-12-01

    The need for comparability of particle size resolved measurements originates from multiple drivers including: (i) Recent suggestions that air quality standards for particulate matter should migrate from being mass-based to incorporating number concentrations. This move would necessarily be predicated on measurement comparability, which is absolutely critical to compliance determination. (ii) The need to quantify and diagnose causes of variability in nucleation and growth rates in nano-particle experiments conducted in different locations. (iii) Epidemiological research designed to identify key parameters in human health responses to fine particle exposure. Here we present results from a detailed controlled laboratory instrument inter-comparison experiment designed to investigate data comparability in the size range of 2.01-523.3 nm across a range of particle composition, modal diameter and absolute concentration. Particle size distributions were generated using a TSI model 3940 Aerosol Generation System (AGS) diluted using zero air, and sampled using four TSI Scanning Mobility Particle Sizer (SMPS) configurations and a TSI model 3091 Fast Mobility Particle Sizer (FMPS). The SMPS configurations used two Electrostatic Classifiers (EC) (model 3080) attached to either a Long DMA (LDMA) (model 3081) or a Nano DMA (NDMA) (model 3085), plumbed to either a TSI model 3025A butanol Condensation Particle Counter (CPC) or a TSI model 3788 water CPC. All four systems were run using both high and low flow conditions, and were operated with both the internal diffusion loss and multiple charge corrections turned on. The particle compositions tested were sodium chloride, ammonium nitrate and olive oil diluted in ethanol. Particles of all three were generated at three peak concentration levels (spanning the range observed at our experimental site), and three modal particle diameters. Experimental conditions were maintained for a period of 20 minutes to ensure experimental

  11. Impact factors for Reggeon-gluon transition in N=4 SYM with large number of colours

    Energy Technology Data Exchange (ETDEWEB)

    Fadin, V.S., E-mail: fadin@inp.nsk.su [Budker Institute of Nuclear Physics of SD RAS, 630090 Novosibirsk (Russian Federation); Novosibirsk State University, 630090 Novosibirsk (Russian Federation); Fiore, R., E-mail: roberto.fiore@cs.infn.it [Dipartimento di Fisica, Università della Calabria, and Istituto Nazionale di Fisica Nucleare, Gruppo collegato di Cosenza, Arcavacata di Rende, I-87036 Cosenza (Italy)

    2014-06-27

    We calculate impact factors for the Reggeon-gluon transition in supersymmetric Yang–Mills theory with four supercharges at a large number of colours N_c. In the next-to-leading order, impact factors are not uniquely defined and must accord with the BFKL kernels and energy scales. We obtain the impact factor corresponding to the kernel and the energy evolution parameter, which is invariant under Möbius transformation in momentum space, and show that it is also Möbius invariant up to terms taken into account in the BDS ansatz.

  12. Slepian simulation of distributions of plastic displacements of earthquake excited shear frames with a large number of stories

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Ditlevsen, Ove

    2005-01-01

    The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elasti...

  13. Space Situational Awareness of Large Numbers of Payloads From a Single Deployment

    Science.gov (United States)

    Segerman, A.; Byers, J.; Emmert, J.; Nicholas, A.

    2014-09-01

    The nearly simultaneous deployment of a large number of payloads from a single vehicle presents a new challenge for space object catalog maintenance and space situational awareness (SSA). Following two cubesat deployments last November, it took five weeks to catalog the resulting 64 orbits. The upcoming Kicksat mission will present an even greater SSA challenge, with its deployment of 128 chip-sized picosats. Although all of these deployments are in short-lived orbits, future deployments will inevitably occur at higher altitudes, with a longer term threat of collision with active spacecraft. With such deployments, individual scientific payload operators require rapid precise knowledge of their satellites' locations. Following the first November launch, the cataloguing did not initially associate a payload with each orbit, leaving this to the satellite operators. For short duration missions, the time required to identify an experiment's specific orbit may easily be a large fraction of the spacecraft's lifetime. For a Kicksat-type deployment, present tracking cannot collect enough observations to catalog each small object. The current approach is to treat the chip cloud as a single catalog object. However, the cloud dissipates into multiple subclouds and, ultimately, tiny groups of untrackable chips. One response to this challenge may be to mandate installation of a transponder on each spacecraft. Directional transponder transmission detections could be used as angle observations for orbit cataloguing. Of course, such an approach would only be employable with cooperative spacecraft. In other cases, a probabilistic association approach may be useful, with the goal being to establish the probability of an element being at a given point in space. This would permit more reliable assessment of the probability of collision of active spacecraft with any cloud element. This paper surveys the cataloguing challenges presented by large scale deployments of small spacecraft

  14. Algorithm for counting large directed loops

    Energy Technology Data Exchange (ETDEWEB)

    Bianconi, Ginestra [Abdus Salam International Center for Theoretical Physics, Strada Costiera 11, 34014 Trieste (Italy); Gulbahce, Natali [Theoretical Division and Center for Nonlinear Studies, Los Alamos National Laboratory, NM 87545 (United States)

    2008-06-06

    We derive a Belief-Propagation algorithm for counting large loops in a directed network. We evaluate the distribution of the number of small loops in a directed random network with given degree sequence. We apply the algorithm to a few characteristic directed networks of various network sizes and loop structures and compare the algorithm with exhaustive counting results when possible. The algorithm is adequate in estimating loop counts for large directed networks and can be used to compare the loop structure of directed networks and their randomized counterparts.
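
The exhaustive counting used as a reference point in the abstract can be sketched as a depth-first search that counts each simple directed cycle exactly once by anchoring it at its smallest node. This is a naive baseline with exponential worst-case cost, not the paper's Belief-Propagation algorithm; the example graph is illustrative.

```python
def count_directed_cycles(adj, max_len):
    """Count simple directed cycles of length <= max_len by exhaustive DFS.
    adj[i] is the list of successors of node i. Each cycle is counted once
    by only allowing its smallest node to serve as the starting point."""
    count = 0

    def dfs(start, node, depth, visited):
        nonlocal count
        for nxt in adj[node]:
            if nxt == start and depth >= 2:
                count += 1  # closed a cycle of 'depth' nodes
            elif nxt > start and nxt not in visited and depth < max_len:
                visited.add(nxt)
                dfs(start, nxt, depth + 1, visited)
                visited.remove(nxt)

    for s in range(len(adj)):
        dfs(s, s, 1, {s})
    return count

# Fully connected 3-node digraph: three 2-cycles and two 3-cycles
adj = [[1, 2], [0, 2], [0, 1]]
total = count_directed_cycles(adj, 3)
print(total)  # 5
```

Because the search space grows exponentially with `max_len`, this is feasible only for small graphs or short loops, which is exactly why an approximate message-passing estimate is attractive for large directed networks.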

  15. Precise large deviations of aggregate claims in a size-dependent renewal risk model with stopping time claim-number process

    Directory of Open Access Journals (Sweden)

    Shuo Zhang

    2017-04-01

    Full Text Available Abstract In this paper, we consider a size-dependent renewal risk model with stopping time claim-number process. In this model, we do not make any assumption on the dependence structure of claim sizes and inter-arrival times. We study large deviations of the aggregate amount of claims. For the subexponential heavy-tailed case, we obtain a precise large-deviation formula; our method substantially relies on a martingale for the structure of our models.
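
For context, precise large-deviation results of this type are usually stated in the following classical form. This is a generic sketch of the shape of such formulas, not the paper's exact statement; the paper's conditions, dependence structure and uniformity region differ.

```latex
% S(t): aggregate claims up to time t; lambda(t) = E N(t): mean claim number;
% mu: mean claim size; \overline{F} = 1 - F: subexponential claim-size tail.
\mathbb{P}\bigl(S(t) - \mu\lambda(t) > x\bigr) \;\sim\; \lambda(t)\,\overline{F}(x),
\qquad t \to \infty,
\quad \text{uniformly for } x \ge \gamma\lambda(t),\ \gamma > 0.
```

Informally: for heavy-tailed claims, a large deviation of the aggregate is most likely caused by a single very large claim, so the tail of the sum behaves like the expected number of claims times the tail of one claim.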

  16. A Comparative Analysis of Numbers and Biology Content Domains between Turkey and the USA

    Science.gov (United States)

    Incikabi, Lutfi; Ozgelen, Sinan; Tjoe, Hartono

    2012-01-01

    This study aimed to compare Mathematics and Science programs focusing on TIMSS content domains of Numbers and Biology that produced the largest achievement gap among students from Turkey and the USA. Specifically, it utilized the content analysis method within Turkish and New York State (NYS) frameworks. The procedures of study included matching…

  17. Large-scale patterns in Rayleigh-Benard convection

    International Nuclear Information System (INIS)

    Hardenberg, J. von; Parodi, A.; Passoni, G.; Provenzale, A.; Spiegel, E.A.

    2008-01-01

    Rayleigh-Benard convection at large Rayleigh number is characterized by the presence of intense, vertically moving plumes. Both laboratory and numerical experiments reveal that the rising and descending plumes aggregate into separate clusters so as to produce large-scale updrafts and downdrafts. The horizontal scales of the aggregates reported so far have been comparable to the horizontal extent of the containers, but it has not been clear whether that represents a limitation imposed by domain size. In this work, we present numerical simulations of convection at sufficiently large aspect ratio to ascertain whether there is an intrinsic saturation scale for the clustering process when that ratio is large enough. From a series of simulations of Rayleigh-Benard convection with Rayleigh numbers between 10^5 and 10^8 and with aspect ratios up to 12π, we conclude that the clustering process has a finite horizontal saturation scale with at most a weak dependence on Rayleigh number in the range studied

  18. Regime shifts in demersal assemblages of the Benguela Current Large Marine Ecosystem: a comparative assessment

    DEFF Research Database (Denmark)

    Kirkman, Stephen P.; Yemane, Dawit; Atkinson, Lara J.

    2015-01-01

    Using long‐term survey data, changes in demersal faunal communities in the Benguela Current Large Marine Ecosystem were analysed at community and population levels to provide a comparative overview of the occurrence and timing of regime shifts. For South Africa, the timing of a community‐level shift …

  19. Managerial span of control: a pilot study comparing departmental complexity and number of direct reports.

    Science.gov (United States)

    Merrill, Katreena Collette; Pepper, Ginette; Blegen, Mary

    2013-09-01

    Nurse managers play pivotal roles in hospitals. However, restructuring has resulted in nurse managers having a wider span of control and reduced visibility. The purpose of this pilot study was to compare two methods of measuring span of control: departmental complexity and number of direct reports. Forty-one nurse managers across nine hospitals completed The Ottawa Hospital Clinical Manager Span of Control Tool (TOH-SOC) and a demographic survey. A moderate positive relationship between number of direct reports and departmental complexity score was identified (r = .49) … managers' responsibility. Copyright © 2013 Longwoods Publishing.
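
The reported r = .49 is a Pearson correlation coefficient. A minimal sketch of that computation follows; the paired values are invented for illustration (the pilot's raw data, n = 41, are not given in the abstract).

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical manager-level data: direct reports vs. complexity score
direct_reports = [20, 35, 50, 65, 80, 95, 40, 70]
complexity = [30, 42, 38, 60, 75, 70, 55, 58]
r = pearson_r(direct_reports, complexity)
print(f"r = {r:.2f}")  # a moderate-to-strong positive relationship
```

A moderate r means the two span-of-control measures overlap but are not interchangeable, which is the pilot study's central observation.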

  20. Big Data and Total Hip Arthroplasty: How Do Large Databases Compare?

    Science.gov (United States)

    Bedard, Nicholas A; Pugely, Andrew J; McHugh, Michael A; Lux, Nathan R; Bozic, Kevin J; Callaghan, John J

    2018-01-01

    Use of large databases for orthopedic research has become extremely popular in recent years. Each database varies in the methods used to capture data and the population it represents. The purpose of this study was to evaluate how these databases differed in reported demographics, comorbidities, and postoperative complications for primary total hip arthroplasty (THA) patients. Primary THA patients were identified within National Surgical Quality Improvement Programs (NSQIP), Nationwide Inpatient Sample (NIS), Medicare Standard Analytic Files (MED), and Humana administrative claims database (HAC). NSQIP definitions for comorbidities and complications were matched to corresponding International Classification of Diseases, 9th Revision/Current Procedural Terminology codes to query the other databases. Demographics, comorbidities, and postoperative complications were compared. The number of patients from each database was 22,644 in HAC, 371,715 in MED, 188,779 in NIS, and 27,818 in NSQIP. Age and gender distribution were clinically similar. Overall, there was variation in prevalence of comorbidities and rates of postoperative complications between databases. As an example, NSQIP had more than twice the prevalence of obesity reported in NIS, and HAC and MED reported more than twice as many patients with diabetes as NSQIP. Rates of deep infection and stroke within 30 days of THA differed by more than 2-fold across databases. Among databases commonly used in orthopedic research, there is considerable variation in complication rates following THA depending upon the database used for analysis. It is important to consider these differences when critically evaluating database research. Additionally, with the advent of bundled payments, these differences must be considered in risk adjustment models. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. TO BE OR NOT TO BE: AN INFORMATIVE NON-SYMBOLIC NUMERICAL MAGNITUDE PROCESSING STUDY ABOUT SMALL VERSUS LARGE NUMBERS IN INFANTS

    Directory of Open Access Journals (Sweden)

    Annelies CEULEMANS

    2014-03-01

    Full Text Available Many studies have tested the association between numerical magnitude processing and mathematical achievement, with conflicting findings reported for individuals with mathematical learning disorders. Some of the inconsistencies might be explained by the number of non-symbolic stimuli or dot collections used in studies. It has been hypothesized that there is an object-file system for ‘small’ numbers and an analogue magnitude system for ‘large’ numbers. This two-system account has been supported by the set size limit of the object-file system (three items). A boundary was defined accordingly, categorizing numbers below four as ‘small’ and numbers from four upward as ‘large’. However, data on ‘small’ number processing and on the ‘boundary’ between small and large numbers are missing. In this contribution we provide data from infants discriminating between the number sets 4 vs. 8 and 1 vs. 4, both containing the number four combined with a small and a large number respectively. Participants were 25 and 26 full-term 9-month-olds for 4 vs. 8 and 1 vs. 4 respectively. The stimuli (dots) were controlled for continuous variables. Eye-tracking was combined with the habituation paradigm. The results showed that the infants were successful in discriminating 1 from 4, but failed to discriminate 4 from 8 dots. This finding supports the assumption of the number four as a ‘small’ number and extends the object-file system’s limit. This study might help to explain inconsistencies between previous studies. Moreover, the information may be useful in answering parents’ questions about challenges that vulnerable children with number processing problems, such as children with mathematical learning disorders, might encounter. In addition, the study might give some information on the stimuli that can be used to effectively foster children’s magnitude processing skills.

  2. Source of vacuum electromagnetic zero-point energy and Dirac's large numbers hypothesis

    International Nuclear Information System (INIS)

    Simaciu, I.; Dumitrescu, G.

    1993-01-01

    The stochastic electrodynamics states that the zero-point fluctuation of the vacuum (ZPF) is an electromagnetic zero-point radiation with spectral density ρ(ω) = ℏω³/2π²c³. Protons, free electrons and atoms are sources for this radiation. Each of them absorbs and emits energy by interacting with the ZPF. At equilibrium the ZPF radiation is scattered by dipoles. The scattered radiation has spectral density ρ(ω,r) = ρ(ω) c σ(ω)/4πr². The dipole-radiation spectral density of the Universe is ρ = ∫₀^R n ρ(ω,r) 4πr² dr. If the atomic scattering cross section equals the Thomson cross section, σ = σ_T, then ρ ≈ ρ(ω) σ_T R n. Moreover, if ρ = ρ(ω), then σ_T R n = 1. With R = GM/c² and σ_T ≅ (e²/m_e c²)² ∝ r_e², the condition σ_T R n = 1 is equivalent to R/r_e = e²/(G m_p m_e), i.e. the cosmological coincidence discussed in the context of Dirac's large-numbers hypothesis. (Author)
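
The coincidence invoked above can be checked numerically: the electric-to-gravitational force ratio for an electron-proton pair, e²/(G m_p m_e), is of order 10³⁹, the same large number that recurs in Dirac's hypothesis. A quick check with rounded CGS-Gaussian constants:

```python
# Rounded CGS-Gaussian values of the fundamental constants
e = 4.803e-10    # electron charge, esu
G = 6.674e-8     # gravitational constant, cm^3 g^-1 s^-2
m_p = 1.673e-24  # proton mass, g
m_e = 9.109e-28  # electron mass, g

# Ratio of electrostatic to gravitational attraction between e- and p
force_ratio = e**2 / (G * m_p * m_e)
print(f"e^2/(G m_p m_e) ~ {force_ratio:.2e}")  # about 2.3e39
```

Dirac's observation was that this dimensionless number is of the same order as the ratio of the Hubble radius to the classical electron radius, which is the coincidence the abstract derives from the ZPF equilibrium condition.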

  3. Strong Law of Large Numbers for Hidden Markov Chains Indexed by an Infinite Tree with Uniformly Bounded Degrees

    Directory of Open Access Journals (Sweden)

    Huilin Huang

    2014-01-01

    Full Text Available We study strong limit theorems for hidden Markov chain fields indexed by an infinite tree with uniformly bounded degrees. We mainly establish the strong law of large numbers for hidden Markov chain fields indexed by an infinite tree with uniformly bounded degrees and give the strong limit law of the conditional sample entropy rate.

  4. The effects of large beach debris on nesting sea turtles

    Science.gov (United States)

    Fujisaki, Ikuko; Lamont, Margaret M.

    2016-01-01

    A field experiment was conducted to understand the effects of large beach debris on sea turtle nesting behavior as well as the effectiveness of large debris removal for habitat restoration. Large natural and anthropogenic debris were removed from one of three sections of a sea turtle nesting beach and distributions of nests and false crawls (non-nesting crawls) in pre- (2011–2012) and post- (2013–2014) removal years in the three sections were compared. The number of nests increased 200% and the number of false crawls increased 55% in the experimental section, whereas a corresponding increase in number of nests and false crawls was not observed in the other two sections where debris removal was not conducted. The proportion of nest and false crawl abundance in all three beach sections was significantly different between pre- and post-removal years. The nesting success, the percent of successful nests in total nesting attempts (number of nests + false crawls), also increased from 24% to 38%; however the magnitude of the increase was comparably small because both the number of nests and false crawls increased, and thus the proportion of the nesting success in the experimental beach in pre- and post-removal years was not significantly different. The substantial increase in sea turtle nesting activities after the removal of large debris indicates that large debris may have an adverse impact on sea turtle nesting behavior. Removal of large debris could be an effective restoration strategy to improve sea turtle nesting.

  5. Factors associated with self-reported number of teeth in a large national cohort of Thai adults

    Directory of Open Access Journals (Sweden)

    Yiengprugsawan Vasoontara

    2011-11-01

    Full Text Available Abstract Background Oral health in later life results from an individual's lifelong accumulation of experiences at the personal, community and societal levels. There is little information relating oral health outcomes to risk factors in Asian middle-income settings such as Thailand today. Methods Data derived from a cohort of 87,134 adults enrolled in Sukhothai Thammathirat Open University who completed self-administered questionnaires in 2005. Cohort members were aged between 15 and 87 years and resided throughout Thailand. This is a large study of self-reported number of teeth among Thai adults. Bivariate and multivariate logistic regressions were used to analyse factors associated with self-reported number of teeth. Results After adjusting for covariates, being female (OR = 1.28), older age (OR = 10.6), having low income (OR = 1.45), having lower education (OR = 1.33), and being a lifetime urban resident (OR = 1.37) were statistically associated with self-reported number of teeth. Conclusions This study addresses the gap in knowledge on factors associated with self-reported number of teeth. The promotion of healthy childhoods and adult lifestyles are important public health interventions to increase tooth retention in middle and older age.
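
The odds ratios quoted above come from logistic regression. How a single unadjusted OR such as 1.28 arises from a 2x2 exposure-outcome table can be sketched with invented counts (these are not the cohort's data):

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Unadjusted odds ratio from a 2x2 table: odds in exposed / odds in unexposed."""
    return (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)

# Hypothetical counts: "case" = reporting fewer teeth, "exposed" = female
or_female = odds_ratio(640, 1000, 500, 1000)
print(f"OR = {or_female:.2f}")  # 1.28
```

The study's ORs are adjusted for covariates via multivariate logistic regression, so they are not simple 2x2 ratios, but the interpretation is the same: OR > 1 means higher odds of the outcome in the exposed group.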

  6. A NICE approach to managing large numbers of desktop PC's

    International Nuclear Information System (INIS)

    Foster, David

    1996-01-01

    The problems of managing desktop systems are far from resolved as we deploy increasing numbers of PCs, Macintoshes and UN*X workstations. This paper concentrates on the solution adopted at CERN for the management of the rapidly increasing number of desktop PCs in use in all parts of the laboratory. (author)

  7. A Multilayer Secure Biomedical Data Management System for Remotely Managing a Very Large Number of Diverse Personal Healthcare Devices

    Directory of Open Access Journals (Sweden)

    KeeHyun Park

    2015-01-01

    Full Text Available In this paper, a multilayer secure biomedical data management system for managing a very large number of diverse personal health devices (PHDs) is proposed. The system has the following characteristics: it supports international standard communication protocols to achieve interoperability; it is integrated, in the sense that both a PHD communication system and a remote PHD management system work together as a single system; and it provides user/message authentication processes to securely transmit biomedical data measured by PHDs, based on the concept of a biomedical signature. Some experiments, including a stress test, have been conducted to show that the system proposed/constructed in this study performs very well even when a very large number of PHDs are used. For the stress test, up to 1,200 threads are made to represent the same number of PHD agents. The loss ratio of ISO/IEEE 11073 messages in the normal system is as high as 14% when 1,200 PHD agents are connected. On the other hand, no message loss occurs in the multilayered system proposed in this study, which demonstrates the superiority of the multilayered system to the normal system with regard to heavy traffic.

  8. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    International Nuclear Information System (INIS)

    Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel; Cuevas, Sergio; Ramos, Eduardo

    2014-01-01

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octupoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, “The diffusive strip method for scalar mixing in two-dimensions,” J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, makes it possible to predict the PDFs of scalar concentration in agreement with numerical and experimental results. This model also indicates that the PDFs of scalar concentration are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors
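
The log-normal stretching model can be illustrated with a toy simulation: if stretching factors are log-normal and a strip's concentration decays inversely with its stretching, then the scalar concentration is itself log-normal. The parameters below are illustrative, not fitted to the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Log-normal stretching factors s (Batchelor-regime assumption); the mean and
# sigma of the underlying normal are arbitrary illustrative choices.
mu, sigma, c0 = 2.0, 0.8, 1.0
s = rng.lognormal(mu, sigma, 100_000)

# A strip stretched by factor s dilutes its scalar as c = c0 / s
c = c0 / s

# If log s ~ N(mu, sigma^2) then log c = log c0 - log s ~ N(-mu, sigma^2)
log_c = np.log(c)
print(f"mean(log c) = {log_c.mean():.2f}, std(log c) = {log_c.std():.2f}")
```

This closure is why the model predicts nearly log-normal scalar PDFs, with departures only at large concentrations where low stretching factors dominate.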

  9. Multiple-relaxation-time lattice Boltzmann model for incompressible miscible flow with large viscosity ratio and high Péclet number

    Science.gov (United States)

    Meng, Xuhui; Guo, Zhaoli

    2015-10-01

    A lattice Boltzmann model with a multiple-relaxation-time (MRT) collision operator is proposed for incompressible miscible flow with a large viscosity ratio as well as a high Péclet number in this paper. The equilibria in the present model are motivated by the lattice kinetic scheme previously developed by Inamuro et al. [Philos. Trans. R. Soc. London, Ser. A 360, 477 (2002), 10.1098/rsta.2001.0942]. The fluid viscosity and diffusion coefficient depend on both the corresponding relaxation times and additional adjustable parameters in this model. As a result, the corresponding relaxation times can be adjusted in proper ranges to enhance the performance of the model. Numerical validations of the Poiseuille flow and a diffusion-reaction problem demonstrate that the proposed model has second-order accuracy in space. Thereafter, the model is used to simulate flow through a porous medium, and the results show that the proposed model has the advantage of obtaining a viscosity-independent permeability, which makes it a robust method for simulating flow in porous media. Finally, a set of simulations are conducted on the viscous miscible displacement between two parallel plates. The results reveal that the present model can be used to simulate, to a high level of accuracy, flows with large viscosity ratios and/or high Péclet numbers. Moreover, the present model is shown to provide superior stability in the limit of high kinematic viscosity. In summary, the numerical results indicate that the present lattice Boltzmann model is an ideal numerical tool for simulating flow with a large viscosity ratio and/or a high Péclet number.
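
The dependence of transport coefficients on relaxation times mentioned above follows, in the simplest single-relaxation-time (BGK) setting, the standard lattice relation ν = c_s²(τ − 1/2)Δt; the MRT model in the paper adds adjustable parameters on top of this basic relation. A minimal sketch in lattice units (Δx = Δt = 1, c_s² = 1/3 for D2Q9):

```python
CS2 = 1.0 / 3.0  # lattice sound speed squared for D2Q9

def viscosity(tau, dt=1.0):
    """Kinematic viscosity from BGK relaxation time (lattice units)."""
    return CS2 * (tau - 0.5) * dt

def tau_for_viscosity(nu, dt=1.0):
    """Relaxation time needed to realize a target viscosity."""
    return nu / (CS2 * dt) + 0.5

# A viscosity ratio of 100 between two miscible fluids maps to:
tau_low = tau_for_viscosity(0.001)   # tau close to the 0.5 stability limit
tau_high = tau_for_viscosity(0.1)
print(tau_low, tau_high)
```

The point of the extra MRT parameters is visible here: with plain BGK, a large viscosity ratio forces one relaxation time toward the unstable τ → 1/2 limit, whereas decoupling the coefficient from a single τ lets both fluids be simulated in a well-behaved range.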

  10. The large-scale blast score ratio (LS-BSR pipeline: a method to rapidly compare genetic content between bacterial genomes

    Directory of Open Access Journals (Sweden)

    Jason W. Sahl

    2014-04-01

    Full Text Available Background. As whole genome sequence data from bacterial isolates becomes cheaper to generate, computational methods are needed to correlate sequence data with biological observations. Here we present the large-scale BLAST score ratio (LS-BSR) pipeline, which rapidly compares the genetic content of hundreds to thousands of bacterial genomes, and returns a matrix that describes the relatedness of all coding sequences (CDSs) in all genomes surveyed. This matrix can be easily parsed in order to identify genetic relationships between bacterial genomes. Although pipelines have been published that group peptides by sequence similarity, no other software performs the rapid, large-scale, full-genome comparative analyses carried out by LS-BSR. Results. To demonstrate the utility of the method, the LS-BSR pipeline was tested on 96 Escherichia coli and Shigella genomes; the pipeline ran in 163 min using 16 processors, which is a greater than 7-fold speedup compared to using a single processor. The BSR values for each CDS, which indicate a relative level of relatedness, were then mapped to each genome on an independent core genome single nucleotide polymorphism (SNP) based phylogeny. Comparisons were then used to identify clade-specific CDS markers and validate the LS-BSR pipeline based on molecular markers that delineate between classical E. coli pathogenic variant (pathovar) designations. Scalability tests demonstrated that the LS-BSR pipeline can process 1,000 E. coli genomes in 27–57 h, depending upon the alignment method, using 16 processors. Conclusions. LS-BSR is an open-source, parallel implementation of the BSR algorithm, enabling rapid comparison of the genetic content of large numbers of genomes. The results of the pipeline can be used to identify specific markers between user-defined phylogenetic groups, and to identify the loss and/or acquisition of genetic information between bacterial isolates.
Taxa-specific genetic markers can then be translated
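
The BLAST score ratio at the core of LS-BSR divides each CDS's alignment bit score against a genome by the CDS's self-alignment bit score, yielding a 0-1 relatedness value per CDS per genome. A minimal sketch with hypothetical bit scores standing in for real BLAST/BLAT output; the 0.8 presence/absence threshold is an assumption for illustration, not a value taken from the paper.

```python
import numpy as np

# Self-alignment bit score of each CDS (perfect match against itself)
self_scores = np.array([500.0, 320.0, 410.0])

# Bit score of each CDS (rows) aligned against each genome (columns);
# hypothetical values for 3 CDSs x 3 genomes
genome_scores = np.array([
    [500.0, 495.0, 120.0],
    [320.0,  40.0, 318.0],
    [405.0, 400.0, 398.0],
])

# BSR matrix: each row normalized by that CDS's self score
bsr = genome_scores / self_scores[:, None]

# Assumed threshold: call a CDS "present" in a genome when BSR >= 0.8
present = bsr >= 0.8
print(np.round(bsr, 2))
```

Rows of the `bsr` matrix with high values in one clade and low values elsewhere are exactly the clade-specific markers the pipeline is designed to surface.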

  11. Introduction to the spectral distribution method. Application example to the subspaces with a large number of quasi particles

    International Nuclear Information System (INIS)

    Arvieu, R.

    The assumptions and principles of the spectral distribution method are reviewed. The object of the method is to deduce information on the nuclear spectra by constructing a frequency function which has the same first few moments, as the exact frequency function, these moments being then exactly calculated. The method is applied to subspaces containing a large number of quasi particles [fr

  12. Small genomes and large seeds: chromosome numbers, genome size and seed mass in diploid Aesculus species (Sapindaceae).

    Science.gov (United States)

    Krahulcová, Anna; Trávnícek, Pavel; Krahulec, František; Rejmánek, Marcel

    2017-04-01

    Aesculus L. (horse chestnut, buckeye) is a genus of 12-19 extant woody species native to the temperate Northern Hemisphere. This genus is known for unusually large seeds among angiosperms. While chromosome counts are available for many Aesculus species, only one has had its genome size measured. The aim of this study is to provide more genome size data and analyse the relationship between genome size and seed mass in this genus. Chromosome numbers in root tip cuttings were confirmed for four species and reported for the first time for three additional species. Flow cytometric measurements of 2C nuclear DNA values were conducted on eight species, and mean seed mass values were estimated for the same taxa. The same chromosome number, 2n = 40, was determined in all investigated taxa. Original measurements of 2C values for seven Aesculus species (eight taxa), added to just one reliable datum for A. hippocastanum, confirmed the notion that the genome size in this genus with relatively large seeds is surprisingly low, ranging from 0·955 pg 2C⁻¹ in A. parviflora to 1·275 pg 2C⁻¹ in A. glabra var. glabra. The chromosome number of 2n = 40 seems to be conclusively the universal number for non-hybrid species in this genus. Aesculus genome sizes are relatively small, not only within its own family, Sapindaceae, but also within woody angiosperms. The genome sizes seem to be distinct and non-overlapping among the four major Aesculus clades. These results provide extra support for the most recent reconstruction of Aesculus phylogeny. The correlation between the 2C values and seed masses in examined Aesculus species is slightly negative and not significant. However, when the four major clades are treated separately, there is consistent positive association between larger genome size and larger seed mass within individual lineages. © The Author 2017. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For

  13. Those fascinating numbers

    CERN Document Server

    Koninck, Jean-Marie De

    2009-01-01

    Who would have thought that listing the positive integers along with their most remarkable properties could end up being such an engaging and stimulating adventure? The author uses this approach to explore elementary and advanced topics in classical number theory. A large variety of numbers are contemplated: Fermat numbers, Mersenne primes, powerful numbers, sublime numbers, Wieferich primes, insolite numbers, Sastry numbers, voracious numbers, to name only a few. The author also presents short proofs of miscellaneous results and constantly challenges the reader with a variety of old and new n

  14. An initial comparative map of copy number variations in the goat (Capra hircus genome

    Directory of Open Access Journals (Sweden)

    Casadio Rita

    2010-11-01

    Full Text Available Abstract Background The goat (Capra hircus) represents one of the most important farm animal species. It is reared in all continents, with an estimated world population of about 800 million animals. Despite its importance, studies on the goat genome are still in their infancy compared to those in other farm animal species. Comparative mapping between cattle and goat showed only a few rearrangements, in agreement with the similarity of chromosome banding. We carried out a cross-species cattle-goat array comparative genome hybridization (aCGH) experiment in order to identify copy number variations (CNVs) in the goat genome, analysing animals of different breeds (Saanen, Camosciata delle Alpi, Girgentana, and Murciano-Granadina) using a tiling oligonucleotide array with ~385,000 probes designed on the bovine genome. Results We identified a total of 161 CNVs (an average of 17.9 CNVs per goat), with the largest number in the Saanen breed and the lowest in the Camosciata delle Alpi goat. By aggregating overlapping CNVs identified in different animals we determined CNV regions (CNVRs): on the whole, we identified 127 CNVRs covering about 11.47 Mb of the virtual goat genome referred to the bovine genome (0.435% of the latter genome). These 127 CNVRs included 86 losses and 41 gains and ranged from about 24 kb to about 1.07 Mb, with a mean and median equal to 90,292 bp and 49,530 bp, respectively. To evaluate whether the identified goat CNVRs overlap with those reported in the cattle genome, we compared our results with those obtained in four independent cattle experiments. Overlapping between goat and cattle CNVRs was highly significant. Conclusions We describe a first map of goat CNVRs. This provides information on a comparative basis with the cattle genome by identifying putative recurrent interspecies CNVs between these two ruminant species. Several goat CNVs affect genes with important biological functions. Further studies are needed to evaluate the
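
Aggregating overlapping per-animal CNVs into CNV regions, as described above, is an interval-merging operation: sort the calls by start coordinate and fuse any that overlap. A minimal sketch with illustrative coordinates (not the study's actual CNV calls):

```python
def merge_cnvs(intervals):
    """Merge overlapping (start, end) intervals into CNV regions (CNVRs)."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps the last region: extend it
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [tuple(region) for region in merged]

# Hypothetical CNV calls from different animals on one chromosome
cnvs = [(100_000, 150_000), (140_000, 200_000), (500_000, 560_000)]
print(merge_cnvs(cnvs))  # [(100000, 200000), (500000, 560000)]
```

Summing the merged region lengths then gives the total genome fraction covered by CNVRs, the 11.47 Mb (0.435%) figure reported in the abstract.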

  15. A comment on "bats killed in large numbers at United States wind energy facilities"

    Science.gov (United States)

    Huso, Manuela M.P.; Dalthorp, Dan

    2014-01-01

    Widespread reports of bat fatalities caused by wind turbines have raised concerns about the impacts of wind power development. Reliable estimates of the total number killed and the potential effects on populations are needed, but it is crucial that they be based on sound data. In a recent BioScience article, Hayes (2013) estimated that over 600,000 bats were killed at wind turbines in the United States in 2012. The scientific errors in the analysis are numerous, with the two most serious being that the included sites constituted a convenience sample, not a representative sample, and that the individual site estimates are derived from such different methodologies that they are inherently not comparable. This estimate is almost certainly inaccurate, but whether the actual number is much smaller, much larger, or about the same is uncertain. An accurate estimate of total bat fatality is not currently possible, given the shortcomings of the available data.

  16. Modification of the large-scale features of high Reynolds number wall turbulence by passive surface obtrusions

    Energy Technology Data Exchange (ETDEWEB)

    Monty, J.P.; Lien, K.; Chong, M.S. [University of Melbourne, Department of Mechanical Engineering, Parkville, VIC (Australia); Allen, J.J. [New Mexico State University, Department of Mechanical Engineering, Las Cruces, NM (United States)

    2011-12-15

    A high Reynolds number boundary-layer wind-tunnel facility at New Mexico State University was fitted with a regularly distributed braille surface. The surface was such that braille dots were closely packed in the streamwise direction and sparsely spaced in the spanwise direction. This novel surface had an unexpected influence on the flow: the energy of the very large-scale features of wall turbulence (approximately six times the boundary-layer thickness in length) became significantly attenuated, even into the logarithmic region. To the authors' knowledge, this is the first experimental study to report a modification of 'superstructures' in a rough-wall turbulent boundary layer. The result gives rise to the possibility that flow control through very small, passive surface roughness may be possible at high Reynolds numbers, without the prohibitive drag penalty anticipated heretofore. Evidence was also found for the uninhibited existence of the near-wall cycle, well known to smooth-wall-turbulence researchers, in the spanwise space between roughness elements. (orig.)

  17. Reynolds-number dependence of turbulence enhancement on collision growth

    Directory of Open Access Journals (Sweden)

    R. Onishi

    2016-10-01

    Full Text Available This study investigates the Reynolds-number dependence of turbulence enhancement on the collision growth of cloud droplets. The Onishi turbulent coagulation kernel proposed in Onishi et al. (2015) is updated by using direct numerical simulation (DNS) results for Taylor-microscale-based Reynolds numbers (Reλ) up to 1140. The DNS results for particles with a small Stokes number (St) show a consistent Reynolds-number dependence of the so-called clustering effect with the locality theory proposed by Onishi et al. (2015). It is confirmed that the present Onishi kernel is more robust for a wider St range and has better agreement with the Reynolds-number dependence shown by the DNS results. The present Onishi kernel is then compared with the Ayala–Wang kernel (Ayala et al., 2008a; Wang et al., 2008). At low and moderate Reynolds numbers, both kernels show similar values except for r2 ∼ r1, for which the Ayala–Wang kernel shows much larger values due to its large turbulence enhancement on collision efficiency. A large difference is observed for the Reynolds-number dependences between the two kernels. The Ayala–Wang kernel increases for the autoconversion region (r1, r2 < 40 µm) and for the accretion region (r1 < 40 and r2 > 40 µm; r1 > 40 and r2 < 40 µm) as Reλ increases. In contrast, the Onishi kernel decreases for the autoconversion region and increases for the rain–rain self-collection region (r1, r2 > 40 µm). Stochastic collision–coalescence equation (SCE) simulations are also conducted to investigate the turbulence enhancement on particle size evolutions. The SCE with the Ayala–Wang kernel (SCE-Ayala) and that with the present Onishi kernel (SCE-Onishi) are compared with results from the Lagrangian Cloud Simulator (LCS; Onishi et al., 2015), which tracks individual particle motions and size evolutions in homogeneous isotropic turbulence. The SCE-Ayala and SCE-Onishi kernels show consistent

  18. Strong Law of Large Numbers for Countable Markov Chains Indexed by an Infinite Tree with Uniformly Bounded Degree

    Directory of Open Access Journals (Sweden)

    Bao Wang

    2014-01-01

    Full Text Available We study the strong law of large numbers for the frequencies of occurrence of states and ordered couples of states for countable Markov chains indexed by an infinite tree with uniformly bounded degree, which extends the corresponding results of countable Markov chains indexed by a Cayley tree and generalizes the relative results of finite Markov chains indexed by a uniformly bounded tree.
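The flavor of this law can be illustrated numerically: for an ergodic chain indexed by a full binary tree, the empirical state frequencies over the whole tree approach the chain's stationary distribution as the tree grows. A sketch under assumed parameters; the two-state transition matrix `P` and the tree depth are illustrative, not from the paper:

```python
import random

def simulate_tree_chain(P, depth, seed=0):
    """Simulate a {0,1}-valued Markov chain indexed by a full binary tree:
    each vertex's state is drawn from row P[parent_state] of the
    transition matrix. Returns the empirical state frequencies."""
    rng = random.Random(seed)
    level = [0]                      # root state
    counts = [1, 0]                  # root counted as state 0
    for _ in range(depth):
        nxt = []
        for s in level:
            for _ in range(2):       # two children per vertex
                c = 1 if rng.random() < P[s][1] else 0
                counts[c] += 1
                nxt.append(c)
        level = nxt
    total = sum(counts)
    return [c / total for c in counts]

# Stationary distribution of this chain is (4/7, 3/7) ~ (0.571, 0.429).
P = [[0.7, 0.3], [0.4, 0.6]]
freq = simulate_tree_chain(P, depth=14)   # 2^15 - 1 = 32767 vertices
```

With a uniformly bounded degree (here, exactly 2 children everywhere), the frequencies settle near the stationary distribution, which is what the strong law asserts almost surely.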

  19. Use of Two-Body Correlated Basis Functions with van der Waals Interaction to Study the Shape-Independent Approximation for a Large Number of Trapped Interacting Bosons

    Science.gov (United States)

    Lekala, M. L.; Chakrabarti, B.; Das, T. K.; Rampho, G. J.; Sofianos, S. A.; Adam, R. M.; Haldar, S. K.

    2017-05-01

    We study the ground-state and the low-lying excitations of a trapped Bose gas in an isotropic harmonic potential for very small (˜ 3) to very large (˜ 10^7) particle numbers. We use the two-body correlated basis functions and the shape-dependent van der Waals interaction in our many-body calculations. We present an exhaustive study of the effect of inter-atomic correlations and the accuracy of the mean-field equations considering a wide range of particle numbers. We calculate the ground-state energy and the one-body density for different values of the van der Waals parameter C6. We compare our results with those of the modified Gross-Pitaevskii results, the correlated Hartree hypernetted-chain equations (which also utilize the two-body correlated basis functions), as well as of the diffusion Monte Carlo for hard sphere interactions. We observe the effect of the attractive tail of the van der Waals potential in the calculations of the one-body density over the truly repulsive zero-range potential as used in the Gross-Pitaevskii equation and discuss the finite-size effects. We also present the low-lying collective excitations which are well described by a hydrodynamic model in the large particle limit.

  20. The Love of Large Numbers: A Popularity Bias in Consumer Choice.

    Science.gov (United States)

    Powell, Derek; Yu, Jingqi; DeWolf, Melissa; Holyoak, Keith J

    2017-10-01

    Social learning, the ability to learn from observing the decisions of other people and the outcomes of those decisions, is fundamental to human evolutionary and cultural success. The Internet now provides social evidence on an unprecedented scale. However, properly utilizing this evidence requires a capacity for statistical inference. We examined how people's interpretation of online review scores is influenced by the number of reviews, a potential indicator both of an item's popularity and of the precision of the average review score. Our task was designed to pit statistical information against social information. We modeled the behavior of an "intuitive statistician" using empirical prior information from millions of reviews posted on Amazon.com and then compared the model's predictions with the behavior of experimental participants. Under certain conditions, people preferred a product with more reviews to one with fewer reviews even though the statistical model indicated that the latter was likely to be of higher quality than the former. Overall, participants' judgments suggested that they failed to make meaningful statistical inferences.
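The "intuitive statistician" contrast can be sketched with a simple shrinkage estimate: the observed average is pulled toward a prior mean in proportion to how few reviews it rests on. The prior values below are illustrative placeholders, not the empirical Amazon priors fitted in the study:

```python
def shrunk_quality(mean_score, n_reviews, prior_mean=4.2, prior_strength=20):
    """Posterior-mean style estimate of latent quality: the observed
    average is shrunk toward a prior mean. Few reviews -> heavy
    shrinkage; many reviews -> trust the observed average.
    (Prior parameters are illustrative, not the paper's fitted values.)"""
    return (prior_strength * prior_mean + n_reviews * mean_score) / (
        prior_strength + n_reviews)

# A 5.0-star item with only 3 reviews vs. a 4.4-star item with 300 reviews:
a = shrunk_quality(5.0, 3)     # pulled strongly toward the prior mean
b = shrunk_quality(4.4, 300)   # barely moved; estimated higher than `a`
```

Despite the lower raw average, the heavily reviewed item gets the higher quality estimate, which is the statistically sound preference that participants often failed to make.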

  1. Large boson number IBM calculations and their relationship to the Bohr model

    International Nuclear Information System (INIS)

    Thiamova, G.; Rowe, D.J.

    2009-01-01

    Recently, the SO(5) Clebsch-Gordan (CG) coefficients up to the seniority v_max = 40 were computed in floating point arithmetic (T.A. Welsh, unpublished (2008)); and, in exact arithmetic, as square roots of rational numbers (M.A. Caprio et al., to be published in Comput. Phys. Commun.). It is shown in this paper that extending the QQQ model calculations set up in the work by D.J. Rowe and G. Thiamova (Nucl. Phys. A 760, 59 (2005)) to N = v_max = 40 is sufficient to obtain the IBM results converged to its Bohr contraction limit. This will be done by comparing some important matrix elements in both models, by looking at the seniority decomposition of low-lying states and at the behavior of the energy and B(E2) transition strength ratios with increasing seniority. (orig.)

  2. Properties of sound attenuation around a two-dimensional underwater vehicle with a large cavitation number

    International Nuclear Information System (INIS)

    Ye Peng-Cheng; Pan Guang

    2015-01-01

    Due to the high speed of underwater vehicles, cavitation is generated inevitably, along with sound attenuation when the sound signal traverses the cavity region around the underwater vehicle. The linear wave propagation is studied to obtain the influence of bubbly liquid on acoustic wave propagation in the cavity region. The sound attenuation coefficient and the sound speed formula of the bubbly liquid are presented. Based on the sound attenuation coefficients with various vapor volume fractions, the attenuation of sound intensity is calculated under large cavitation number conditions. The result shows that the sound intensity attenuation is fairly small under such conditions; consequently, the intensity attenuation can be neglected in engineering. (paper)

  3. Prospecting direction and favourable target areas for exploration of large and super-large uranium deposits in China

    International Nuclear Information System (INIS)

    Liu Xingzhong

    1993-01-01

    A host of large uranium deposits have been successively discovered abroad by means of geological exploration, metallogenetic model studies and the application of new geophysical and geochemical methods since the 1970s. Geological research relevant to prospecting for super-large uranium deposits has therefore attracted great attention in the worldwide geological community. The important task for uranium geological workers is to make an effort to discover more large and super-large uranium deposits in China. The author comprehensively analyses the regional geological setting and geological metallogenetic conditions of the super-large uranium deposits of the world. Comparative studies have been undertaken, and the prospecting direction and favourable target areas for the exploration of super-large uranium deposits in China are proposed

  4. Hot-ion Bernstein wave with large k_∥

    International Nuclear Information System (INIS)

    Ignat, D.W.; Ono, M.

    1995-01-01

    The complex roots of the hot plasma dispersion relation in the ion cyclotron range of frequencies have been surveyed. Progressing from low to high values of the perpendicular wavenumber k_⊥, we find first the cold plasma fast wave and then the well-known Bernstein wave, which is characterized by large dispersion, or large changes in k_⊥ for small changes in frequency or magnetic field. At still higher k_⊥ there can be two hot plasma waves with relatively little dispersion. The latter waves exist only for relatively large k_∥, the wavenumber parallel to the magnetic field, and are strongly damped unless the electron temperature is low compared to the ion temperature. Up to three mode conversions appear to be possible, but two mode conversions are seen consistently

  5. The numbers game in wildlife conservation: changeability and framing of large mammal numbers in Zimbabwe

    NARCIS (Netherlands)

    Gandiwa, E.

    2013-01-01

    Wildlife conservation in terrestrial ecosystems requires an understanding of processes influencing population sizes. Top-down and bottom-up processes are important in large herbivore population dynamics, with strength of these processes varying spatially and temporally. However, up until

  6. Comparative analysis of non-destructive methods to control fissile materials in large-size containers

    Directory of Open Access Journals (Sweden)

    Batyaev V.F.

    2017-01-01

    Full Text Available The analysis of various non-destructive methods to control fissile materials (FM) in large-size containers filled with radioactive waste (RAW) has been carried out. The difficulty of applying passive gamma-neutron monitoring of FM in large containers filled with concreted RAW is shown. Selection of an active non-destructive assay technique depends on the container contents; in the case of a concrete or iron matrix with very-low-activity and low-activity RAW, the neutron radiation method appears to be preferable to the photonuclear one.

  7. On the chromatic number of triangle-free graphs of large minimum degree

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2002-01-01

    We prove that, for each fixed real number c > 1/3, the triangle-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdős and Simonovits in 1973, who pointed out that there is no such result for c < 1/3.

  8. Formation of free round jets with long laminar regions at large Reynolds numbers

    Science.gov (United States)

    Zayko, Julia; Teplovodskii, Sergey; Chicherina, Anastasia; Vedeneev, Vasily; Reshmin, Alexander

    2018-04-01

    The paper describes a new, simple method for the formation of free round jets with long laminar regions by a jet-forming device of ˜1.5 jet diameters in size. Submerged jets of 0.12 m diameter at Reynolds numbers of 2000-12 560 are experimentally studied. It is shown that for the optimal regime, the laminar region length reaches 5.5 diameters for Reynolds number ˜10 000 which is not achievable for other methods of laminar jet formation. To explain the existence of the optimal regime, a steady flow calculation in the forming unit and a stability analysis of outcoming jet velocity profiles are conducted. The shortening of the laminar regions, compared with the optimal regime, is explained by the higher incoming turbulence level for lower velocities and by the increase of perturbation growth rates for larger velocities. The initial laminar regions of free jets can be used for organising air curtains for the protection of objects in medicine and technologies by creating the air field with desired properties not mixed with ambient air. Free jets with long laminar regions can also be used for detailed studies of perturbation growth and transition to turbulence in round jets.

  9. Gentile statistics with a large maximum occupation number

    International Nuclear Information System (INIS)

    Dai Wusheng; Xie Mi

    2004-01-01

    In Gentile statistics the maximum occupation number can take on unrestricted integers: 1 < n < ∞. For a system of N particles, the Bose-Einstein case is not recovered from Gentile statistics as n goes to N. Attention is also concentrated on the contribution of the ground state, which was ignored in the related literature. The thermodynamic behavior of a ν-dimensional Gentile ideal gas of particles of dispersion E = p^s/2m, where ν and s are arbitrary, is analyzed in detail. Moreover, we provide an alternative derivation of the partition function for Gentile statistics

  10. Droplet Breakup in Asymmetric T-Junctions at Intermediate to Large Capillary Numbers

    Science.gov (United States)

    Sadr, Reza; Cheng, Way Lee

    2017-11-01

    Splitting of a parent droplet into multiple daughter droplets of desired sizes is usually desired to enhance production and investigational efficiency in microfluidic devices. This can be done in an active or passive mode, depending on whether an external power source is used. In this study, three-dimensional simulations were performed using the Volume-of-Fluid (VOF) method to analyze droplet splitting in asymmetric T-junctions with different outlet lengths. The parent droplet is divided into two uneven portions; in theory, the volumetric ratio of the daughter droplets depends on the length ratio of the outlet branches. The study identified various breakup modes, such as primary, transition, bubble and non-breakup, under various flow conditions and configurations of the T-junctions. In addition, an analysis of the primary breakup regime was conducted to study the breakup mechanisms. The results show that the way the droplet splits in an asymmetric T-junction differs from the process in a symmetric T-junction. A model for the asymmetric breakup criteria at intermediate to large Capillary numbers is presented. The proposed model is an expanded version of a theoretically derived model for symmetric droplet breakup under similar flow conditions.

  11. Normal zone detectors for a large number of inductively coupled coils

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this paper uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent
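A toy numerical version of the detection idea, assuming hypothetical inductance values and a series current: the measured coil voltages are the inductive terms plus any resistive normal-zone drop, so subtracting the predicted inductive part (the role of the hardware bridges) leaves a residual that both locates and sizes the zone:

```python
# Toy model: V = M @ dI/dt + R_nz * I on the faulted coil only.
# All numbers below are hypothetical, chosen only to illustrate the idea.
M = [[1.0,  0.2,  0.1,  0.05],
     [0.2,  1.2,  0.2,  0.1],
     [0.1,  0.2,  1.1,  0.2],
     [0.05, 0.1,  0.2,  0.9]]    # self/mutual inductances (H)
dIdt = [5.0, -3.0, 2.0, 1.0]     # coil current ramp rates (A/s)
I = 1000.0                       # series transport current (A)
R_nz, faulted = 2e-3, 2          # a 2 mOhm normal zone in coil index 2

# "Measured" coil voltages: inductive part plus the resistive drop.
V = [sum(M[i][j] * dIdt[j] for j in range(4)) for i in range(4)]
V[faulted] += R_nz * I

# Detector: subtract the predicted inductive voltages (what the bridge
# circuits do in hardware); the residual locates and sizes the zone.
residual = [V[i] - sum(M[i][j] * dIdt[j] for j in range(4)) for i in range(4)]
located = max(range(4), key=lambda i: abs(residual[i]))
estimated_R = residual[located] / I
```

In the real detector the inductive cancellation is done by analog bridges and the combined equation sets handle simultaneous zones in several coils; this sketch shows only the single-zone case.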

  12. Comparative analysis of non-destructive methods to control fissile materials in large-size containers

    Science.gov (United States)

    Batyaev, V. F.; Sklyarov, S. V.

    2017-09-01

    The analysis of various non-destructive methods to control fissile materials (FM) in large-size containers filled with radioactive waste (RAW) has been carried out. The difficulty of applying passive gamma-neutron monitoring of FM in large containers filled with concreted RAW is shown. Selection of an active non-destructive assay technique depends on the container contents; in the case of a concrete or iron matrix with very-low-activity and low-activity RAW, the neutron radiation method appears to be preferable to the photonuclear one. Note to the reader: the pdf file has been changed on September 22, 2017.

  13. Large-scale influences in near-wall turbulence.

    Science.gov (United States)

    Hutchins, Nicholas; Marusic, Ivan

    2007-03-15

    Hot-wire data acquired in a high Reynolds number facility are used to illustrate the need for adequate scale separation when considering the coherent structure in wall-bounded turbulence. It is found that a large-scale motion in the log region becomes increasingly comparable in energy to the near-wall cycle as the Reynolds number increases. Through decomposition of fluctuating velocity signals, it is shown that this large-scale motion has a distinct modulating influence on the small-scale energy (akin to amplitude modulation). Reassessment of DNS data, in light of these results, shows similar trends, with the rate and intensity of production due to the near-wall cycle subject to a modulating influence from the largest-scale motions.

  14. CD3+/CD16+CD56+ cell numbers in peripheral blood are correlated with higher tumor burden in patients with diffuse large B-cell lymphoma

    Directory of Open Access Journals (Sweden)

    Anna Twardosz

    2011-04-01

    Full Text Available Diffuse large B-cell lymphoma is the commonest histological type of malignant lymphoma, and remains incurable in many cases. Developing more efficient immunotherapy strategies will require better understanding of the disorders of immune responses in cancer patients. NKT (natural killer-like T) cells were originally described as a unique population of T cells with the co-expression of NK cell markers. Apart from their role in protecting against microbial pathogens and controlling autoimmune diseases, NKT cells have recently been revealed as one of the key players in the immune responses against tumors. The objective of this study was to evaluate the frequency of CD3+/CD16+CD56+ cells in the peripheral blood of 28 diffuse large B-cell lymphoma (DLBCL) patients in correlation with clinical and laboratory parameters. Median percentages of CD3+/CD16+CD56+ cells were significantly lower in patients with DLBCL compared to healthy donors (7.37% vs. 9.01%, p = 0.01; 4.60% vs. 5.81%, p = 0.03), although there were no differences in absolute counts. The frequency and the absolute numbers of CD3+/CD16+CD56+ cells were lower in advanced clinical stages than in earlier ones. The median percentage of CD3+/CD16+CD56+ cells in patients in Ann Arbor stages 1–2 was 5.55% vs. 3.15% in stages 3–4 (p = 0.02), with median absolute counts of 0.26 G/L vs. 0.41 G/L, respectively (p = 0.02). The percentage and absolute numbers of CD3+/CD16+CD56+ cells were significantly higher in DLBCL patients without B-symptoms compared to the patients with B-symptoms (5.51% vs. 2.46%, p = 0.04; 0.21 G/L vs. 0.44 G/L, p = 0.04). The percentage of CD3+/CD16+CD56+ cells correlated adversely with serum lactate dehydrogenase (R = –0.445; p < 0.05), which might influence NKT count. These figures suggest a relationship between higher tumor burden and more aggressive disease and decreased NKT numbers. But it remains to be explained whether low NKT cell counts in the peripheral blood of patients with DLBCL are the result

  15. Earthquake number forecasts testing

    Science.gov (United States)

    Kagan, Yan Y.

    2017-10-01

    We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are especially required to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for the catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be shown to be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study upper statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values for the skewness and the kurtosis increase for the smaller magnitude threshold and increase with even greater intensity for small temporal subdivision of catalogues. The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness
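The moment comparison described above has closed forms: a Poisson(λ) count has variance λ, skewness λ^(-1/2) and excess kurtosis 1/λ, while a negative-binomial count with the same mean is overdispersed and more skewed. A sketch with illustrative parameters, not values fitted to the catalogues:

```python
import math

def poisson_moments(lam):
    """(variance, skewness, excess kurtosis) of Poisson(lambda)."""
    return lam, 1 / math.sqrt(lam), 1 / lam

def nbd_moments(r, p):
    """(variance, skewness, excess kurtosis) of a negative binomial
    with dispersion parameter r and success probability p
    (mean = r*(1-p)/p)."""
    q = 1 - p
    var = r * q / p**2
    skew = (2 - p) / math.sqrt(r * q)
    kurt = 6 / r + p**2 / (r * q)
    return var, skew, kurt

# Match the means: Poisson(10) vs. NBD with r=5, p=1/3 (mean = 10).
pv, ps, pk = poisson_moments(10.0)
nv, ns, nk = nbd_moments(5.0, 1/3)
# The NBD has larger variance, skewness and kurtosis at the same mean,
# the overdispersion signature seen in earthquake counts.
```

Comparing these theoretical moments with the empirical skewness and kurtosis of catalogue counts is exactly the diagnostic the abstract describes.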

  16. YBYRÁ facilitates comparison of large phylogenetic trees.

    Science.gov (United States)

    Machado, Denis Jacob

    2015-07-01

    The number and size of tree topologies that are being compared by phylogenetic systematists is increasing due to technological advancements in high-throughput DNA sequencing. However, we still lack tools to facilitate comparison among phylogenetic trees with a large number of terminals. The "YBYRÁ" project integrates software solutions for data analysis in phylogenetics. It comprises tools for (1) topological distance calculation based on the number of shared splits or clades, (2) sensitivity analysis and automatic generation of sensitivity plots and (3) clade diagnoses based on different categories of synapomorphies. YBYRÁ also provides (4) an original framework to facilitate the search for potential rogue taxa based on how much they affect average matching split distances (using MSdist). YBYRÁ facilitates comparison of large phylogenetic trees and outperforms competing software in terms of usability and time efficiency, especially for large data sets. The programs that comprise this toolkit are written in Python, hence they do not require installation and have minimal dependencies. The entire project is available under an open-source licence at http://www.ib.usp.br/grant/anfibios/researchSoftware.html.
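A shared-clades topological distance of the kind item (1) describes can be sketched in a few lines; the nested-tuple trees below are hypothetical and this is not YBYRÁ's own implementation:

```python
def clades(tree):
    """Return the set of non-trivial leaf sets (clades) induced by a
    rooted tree given as nested tuples, e.g. ((("A","B"),"C"),("D","E"))."""
    out = set()
    def walk(node):
        if isinstance(node, tuple):
            leaves = frozenset().union(*(walk(child) for child in node))
            out.add(leaves)
            return leaves
        return frozenset([node])   # a leaf label
    walk(tree)
    return {c for c in out if len(c) > 1}

def shared_clade_distance(t1, t2):
    """Count clades present in one tree but not the other
    (a Robinson-Foulds-style symmetric difference)."""
    return len(clades(t1) ^ clades(t2))

# Two hypothetical five-taxon trees differing in one clade:
t1 = ((("A", "B"), "C"), ("D", "E"))
t2 = ((("A", "C"), "B"), ("D", "E"))
d = shared_clade_distance(t1, t2)   # {A,B} vs {A,C} differ; rest shared
```

Trees sharing all clades get distance 0; each clade unique to one tree adds 1.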

  17. Templates, Numbers & Watercolors.

    Science.gov (United States)

    Clemesha, David J.

    1990-01-01

    Describes how a second-grade class used large templates to draw and paint five-digit numbers. The lesson integrated artistic knowledge and vocabulary with their mathematics lesson in place value. Students learned how draftspeople use templates, and they studied number paintings by Charles Demuth and Jasper Johns. (KM)

  18. Number of X-ray examinations performed on paediatric and geriatric patients compared with adult patients

    International Nuclear Information System (INIS)

    Aroua, A.; Bochud, F. O.; Valley, J. F.; Vader, J. P.; Verdun, F. R.

    2007-01-01

    The age of the patient is of prime importance when assessing the radiological risk to patients due to medical X-ray exposures and the total detriment to the population due to radiodiagnostics. In order to take into account the age-specific radiosensitivity, three age groups are considered: children, adults and the elderly. In this work, the relative number of examinations carried out on paediatric and geriatric patients, compared with adult patients, is established for radiodiagnostics as a whole, for dental and medical radiology, for 8 radiological modalities, and for 40 types of X-ray examinations. The relative numbers of X-ray examinations are determined based on the corresponding age distributions of patients and that of the general population. Two broad groups of X-ray examinations may be defined. Group A comprises conventional radiography, fluoroscopy and computed tomography; for this group a paediatric patient undergoes half the number of examinations of an adult, and a geriatric patient undergoes 2.5 times more. Group B comprises angiography and interventional procedures; for this group a paediatric patient undergoes one-fourth of the number of examinations carried out on an adult, and a geriatric patient undergoes five times more. (authors)
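The computation described (relative examination numbers from the age distributions of patients and of the general population) reduces to a ratio of shares. A sketch with made-up illustrative shares, not the paper's data:

```python
def relative_frequency(patient_share, population_share):
    """Examinations per person in an age group, relative to the
    population-wide average rate:
    (group's share of examinations) / (group's share of the population)."""
    return patient_share / population_share

# Illustrative (hypothetical) shares for one imaging modality:
population = {"children": 0.20, "adults": 0.60, "elderly": 0.20}
patients   = {"children": 0.10, "adults": 0.50, "elderly": 0.40}

rel = {g: relative_frequency(patients[g], population[g]) for g in population}
# Here children undergo 0.5x and the elderly 2.0x the average rate,
# the same kind of ratios the abstract reports for its Group A.
```

A value of 1.0 means the group is examined at the population-average rate; values below or above 1 mean under- or over-representation.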

  19. Large-Eddy Simulation of a High Reynolds Number Flow Around a Cylinder Including Aeroacoustic Predictions

    Science.gov (United States)

    Spyropoulos, Evangelos T.; Holmes, Bayard S.

    1997-01-01

    The dynamic subgrid-scale model is employed in large-eddy simulations of flow over a cylinder at a Reynolds number, based on the diameter of the cylinder, of 90,000. The Centric SPECTRUM(trademark) finite element solver is used for the analysis. The far-field sound pressure is calculated from Lighthill-Curle's equation using the computed fluctuating pressure at the surface of the cylinder. The sound pressure level at a location 35 diameters away from the cylinder and at an angle of 90 deg with respect to the wake's downstream axis was found to have a peak value of approximately 110 dB. Slightly smaller peak values were predicted at the 60 deg and 120 deg locations. A grid refinement study suggests that the dynamic model demands mesh refinement beyond that used here.

  20. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System: Outage-Limited Scenario

    KAUST Repository

    Makki, Behrooz

    2016-03-22

    This paper investigates the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. Our results are obtained for different fading conditions, and the effect of the power amplifier efficiency and feedback error probability on the performance of the MIMO-HARQ systems is analyzed. Then, we use some recent results on the achievable rates of finite block-length codes to analyze the effect of the codeword lengths on the system performance. Moreover, we derive closed-form expressions for the asymptotic performance of the MIMO-HARQ systems when the number of antennas increases. Our analytical and numerical results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 1972-2012 IEEE.
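The minimum-antenna question can be approximated by Monte Carlo for a simple special case. The sketch below uses a 1 x N receive-diversity Rayleigh link with maximum-ratio combining rather than the paper's full MIMO-HARQ setting, and the SNR, rate and outage target are illustrative:

```python
import math
import random

def outage_probability(n_rx, snr_db, rate_bps_hz, trials=20000, seed=1):
    """Monte Carlo outage estimate for a 1 x n_rx Rayleigh link with
    maximum-ratio combining: outage when log2(1 + SNR * sum|h_i|^2) < rate."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    fails = 0
    for _ in range(trials):
        # |h_i|^2 for unit-power Rayleigh fading is exponentially distributed.
        gain = sum(rng.expovariate(1.0) for _ in range(n_rx))
        if math.log2(1 + snr * gain) < rate_bps_hz:
            fails += 1
    return fails / trials

def min_antennas(target_outage, snr_db=0.0, rate_bps_hz=2.0, max_n=16):
    """Smallest antenna count whose estimated outage meets the target."""
    for n in range(1, max_n + 1):
        if outage_probability(n, snr_db, rate_bps_hz) <= target_outage:
            return n
    return None

n_needed = min_antennas(1e-2)   # antennas needed for 1% outage at 0 dB, 2 b/s/Hz
```

Even in this toy setting the count saturates quickly: tightening the outage target by an order of magnitude adds only a few antennas, consistent with the paper's "relatively few antennas suffice" conclusion.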

  1. Human behaviour can trigger large carnivore attacks in developed countries.

    Science.gov (United States)

    Penteriani, Vincenzo; Delgado, María del Mar; Pinchera, Francesco; Naves, Javier; Fernández-Gil, Alberto; Kojola, Ilpo; Härkönen, Sauli; Norberg, Harri; Frank, Jens; Fedriani, José María; Sahlén, Veronica; Støen, Ole-Gunnar; Swenson, Jon E; Wabakken, Petter; Pellegrini, Mario; Herrero, Stephen; López-Bao, José Vicente

    2016-02-03

    The media and scientific literature are increasingly reporting an escalation of large carnivore attacks on humans in North America and Europe. Although rare compared to human fatalities by other wildlife, the media often overplay large carnivore attacks on humans, causing increased fear and negative attitudes towards coexisting with and conserving these species. Although large carnivore populations are generally increasing in developed countries, increased numbers are not solely responsible for the observed rise in the number of attacks by large carnivores. Here we show that an increasing number of people are involved in outdoor activities and, when doing so, some people engage in risk-enhancing behaviour that can increase the probability of a risky encounter and a potential attack. About half of the well-documented reported attacks have involved risk-enhancing human behaviours, the most common of which is leaving children unattended. Our study provides unique insight into the causes, and as a result the prevention, of large carnivore attacks on people. Prevention and information that can encourage appropriate human behaviour when sharing the landscape with large carnivores are of paramount importance to reduce both potentially fatal human-carnivore encounters and their consequences to large carnivores.

  2. On the chromatic number of pentagon-free graphs of large minimum degree

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2007-01-01

    We prove that, for each fixed real number c > 0, the pentagon-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdős and Simonovits in 1973. A similar result holds for any other fixed odd cycle, except the triangle.

  3. Normal zone detectors for a large number of inductively coupled coils

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this report uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication by a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent. An example of the detector design is given for four coils with realistic parameters. The effect on accuracy of changes in the system parameters is discussed.
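
    The hypothesis-testing idea in this abstract can be sketched as a least-squares residual test: for each candidate normal zone location, fit the measured bridge outputs to the coupling pattern that location would produce, and keep the best fit. The coupling matrix and magnitudes below are invented for illustration and are not the report's actual coil parameters.

```python
import numpy as np

# Hypothetical coupling matrix: column h is the pattern of bridge outputs
# produced by a unit normal zone in coil h (made-up values).
M = np.array([[1.00, 0.20, 0.10, 0.05],
              [0.20, 1.00, 0.20, 0.10],
              [0.10, 0.20, 1.00, 0.20],
              [0.05, 0.10, 0.20, 1.00]])

def locate_normal_zone(measured):
    """Pick the single-zone hypothesis whose one-column least-squares fit
    leaves the smallest residual; return (coil index, zone size)."""
    best = None
    for h in range(M.shape[1]):
        a = M[:, h]
        size = (a @ measured) / (a @ a)            # scalar least-squares fit
        resid = np.linalg.norm(measured - size * a)
        if best is None or resid < best[2]:
            best = (h, size, resid)
    return best[0], best[1]

true_size = 0.3
measured = true_size * M[:, 2]                     # normal zone in coil 2
coil, size = locate_normal_zone(measured)
```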

  4. Neutrino number of the universe

    International Nuclear Information System (INIS)

    Kolb, E.W.

    1981-01-01

    The influence of grand unified theories on the lepton number of the universe is reviewed. A scenario is presented for the generation of a large (>> 1) lepton number and a small (<< 1) baryon number. 15 references

  5. CRISPR transcript processing: a mechanism for generating a large number of small interfering RNAs

    Directory of Open Access Journals (Sweden)

    Djordjevic Marko

    2012-07-01

    Background: CRISPR/Cas (Clustered Regularly Interspaced Short Palindromic Repeats/CRISPR-associated sequences) is a recently discovered prokaryotic defense system against foreign DNA, including viruses and plasmids. A CRISPR cassette is transcribed as a continuous transcript (pre-crRNA), which is processed by Cas proteins into small RNA molecules (crRNAs) that are responsible for defense against invading viruses. Experiments in E. coli report that overexpression of cas genes generates a large number of crRNAs from only a few pre-crRNAs. Results: We here develop a minimal model of CRISPR processing, which we parameterize based on available experimental data. From the model, we show that the system can generate a large amount of crRNAs based on only a small decrease in the amount of pre-crRNAs. The relationship between the decrease of pre-crRNAs and the increase of crRNAs corresponds to strong linear amplification. Interestingly, this strong amplification crucially depends on fast non-specific degradation of pre-crRNA by an unidentified nuclease. We show that overexpression of cas genes above a certain level does not result in further increase of crRNA, but that this saturation can be relieved if the rate of CRISPR transcription is increased. We furthermore show that a small increase of the CRISPR transcription rate can substantially decrease the extent of cas gene activation necessary to achieve a desired amount of crRNA. Conclusions: The simple mathematical model developed here is able to explain existing experimental observations on CRISPR transcript processing in Escherichia coli. The model shows that a competition between specific pre-crRNA processing and non-specific degradation determines the steady-state levels of crRNA and is responsible for strong linear amplification of crRNAs when cas genes are overexpressed. The model further shows how disappearance of only a few pre-crRNA molecules normally present in the cell can lead to a large (two
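
    A minimal steady-state version of the two-species model described above (production, specific processing, non-specific degradation) can be written down directly. All rate constants below are illustrative assumptions, not the paper's fitted values.

```python
# Minimal steady-state sketch of the CRISPR processing model:
#   pre-crRNA: dP/dt = T - (k_proc + k_deg) * P
#   crRNA:     dC/dt = n_crRNA * k_proc * P - lam_c * C
def steady_state(T, k_proc, k_deg, n_crRNA, lam_c):
    P = T / (k_proc + k_deg)           # pre-crRNA steady state
    C = n_crRNA * k_proc * P / lam_c   # crRNA steady state
    return P, C

# With fast non-specific degradation (k_deg >> k_proc), raising k_proc
# tenfold (cas overexpression) barely lowers the pre-crRNA pool but
# strongly amplifies the crRNA level.
P0, C0 = steady_state(T=1.0, k_proc=0.01, k_deg=1.0, n_crRNA=10, lam_c=0.1)
P1, C1 = steady_state(T=1.0, k_proc=0.10, k_deg=1.0, n_crRNA=10, lam_c=0.1)
```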

  6. Growth of equilibrium structures built from a large number of distinct component types.

    Science.gov (United States)

    Hedges, Lester O; Mannige, Ranjan V; Whitelam, Stephen

    2014-09-14

    We use simple analytic arguments and lattice-based computer simulations to study the growth of structures made from a large number of distinct component types. Components possess 'designed' interactions, chosen to stabilize an equilibrium target structure in which each component type has a defined spatial position, as well as 'undesigned' interactions that allow components to bind in a compositionally-disordered way. We find that high-fidelity growth of the equilibrium target structure can happen in the presence of substantial attractive undesigned interactions, as long as the energy scale of the set of designed interactions is chosen appropriately. This observation may help explain why equilibrium DNA 'brick' structures self-assemble even if undesigned interactions are not suppressed [Ke et al. Science, 338, 1177, (2012)]. We also find that high-fidelity growth of the target structure is most probable when designed interactions are drawn from a distribution that is as narrow as possible. We use this result to suggest how to choose complementary DNA sequences in order to maximize the fidelity of multicomponent self-assembly mediated by DNA. We also comment on the prospect of growing macroscopic structures in this manner.

  7. Aero-Acoustic Modelling using Large Eddy Simulation

    International Nuclear Information System (INIS)

    Shen, W Z; Soerensen, J N

    2007-01-01

    The splitting technique for aero-acoustic computations is extended to simulate three-dimensional flow and acoustic waves from airfoils. The aero-acoustic model is coupled to a sub-grid-scale turbulence model for Large-Eddy Simulations. In the first test case, the model is applied to compute laminar flow past a NACA 0015 airfoil at a Reynolds number of 800, a Mach number of 0.2 and an angle of attack of 20 deg. The model is then applied to compute turbulent flow past a NACA 0015 airfoil at a Reynolds number of 100 000, a Mach number of 0.2 and an angle of attack of 20 deg. The predicted noise spectrum is compared to experimental data

  8. Rational-number comparison across notation: Fractions, decimals, and whole numbers.

    Science.gov (United States)

    Hurst, Michelle; Cordes, Sara

    2016-02-01

    Although fractions, decimals, and whole numbers can be used to represent the same rational-number values, it is unclear whether adults conceive of these rational-number magnitudes as lying along the same ordered mental continuum. In the current study, we investigated whether adults' processing of rational-number magnitudes in fraction, decimal, and whole-number notation show systematic ratio-dependent responding characteristic of an integrated mental continuum. Both reaction time (RT) and eye-tracking data from a number-magnitude comparison task revealed ratio-dependent performance when adults compared the relative magnitudes of rational numbers, both within the same notation (e.g., fractions vs. fractions) and across different notations (e.g., fractions vs. decimals), pointing to an integrated mental continuum for rational numbers across notation types. In addition, eye-tracking analyses provided evidence of an implicit whole-number bias when we compared values in fraction notation, and individual differences in this whole-number bias were related to the individual's performance on a fraction arithmetic task. Implications of our results for both cognitive development research and math education are discussed.

  9. Accurate, high-throughput typing of copy number variation using paralogue ratios from dispersed repeats.

    Science.gov (United States)

    Armour, John A L; Palla, Raquel; Zeeuwen, Patrick L J M; den Heijer, Martin; Schalkwijk, Joost; Hollox, Edward J

    2007-01-01

    Recent work has demonstrated an unexpected prevalence of copy number variation in the human genome, and has highlighted the part this variation may play in predisposition to common phenotypes. Some important genes vary in number over a wide range (e.g. DEFB4, which commonly varies between two and seven copies), and have posed formidable technical challenges for accurate copy number typing, so that there are no simple, cheap, high-throughput approaches suitable for large-scale screening. We have developed a simple comparative PCR method based on dispersed repeat sequences, using a single pair of precisely designed primers to amplify products simultaneously from both test and reference loci, which are subsequently distinguished and quantified via internal sequence differences. We have validated the method for the measurement of copy number at DEFB4 by comparison of results from >800 DNA samples with copy number measurements by MAPH/REDVR, MLPA and array-CGH. The new Paralogue Ratio Test (PRT) method can require as little as 10 ng of genomic DNA, appears to be comparable in accuracy to the other methods, and for the first time provides a rapid, simple and inexpensive method for copy number analysis, suitable for application to typing thousands of samples in large case-control association studies.
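
    The arithmetic behind the paralogue-ratio idea is simple enough to sketch: assuming equal amplification efficiency at test and reference loci and a two-copy reference, the copy number is the product ratio scaled by two. The peak areas below are made-up numbers, not data from the study.

```python
# Hedged sketch of the paralogue-ratio calculation: one primer pair
# amplifies a test locus and a fixed 2-copy reference locus together,
# and the test copy number is read off the product ratio.
def prt_copy_number(test_peak_area, ref_peak_area, ref_copies=2):
    """Estimate diploid copy number from the test/reference product ratio,
    assuming equal amplification efficiency at both loci."""
    ratio = test_peak_area / ref_peak_area
    return ref_copies * ratio

estimate = prt_copy_number(test_peak_area=1520.0, ref_peak_area=760.0)
rounded = round(estimate)   # integer copy-number call
```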

  10. Baryon number fluctuations in quasi-particle model

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Ameng [Southeast University Chengxian College, Department of Foundation, Nanjing (China); Luo, Xiaofeng [Central China Normal University, Key Laboratory of Quark and Lepton Physics (MOE), Institute of Particle Physics, Wuhan (China); Zong, Hongshi [Nanjing University, Department of Physics, Nanjing (China); Joint Center for Particle, Nuclear Physics and Cosmology, Nanjing (China); Institute of Theoretical Physics, CAS, State Key Laboratory of Theoretical Physics, Beijing (China)

    2017-04-15

    Baryon number fluctuations are sensitive to the QCD phase transition and the QCD critical point. According to the Feynman rules of finite-temperature field theory, we calculated various order moments and cumulants of the baryon number distributions in the quasi-particle model of the quark-gluon plasma. Furthermore, we compared our results with the experimental data measured by the STAR experiment at RHIC. It is found that the experimental data are well described by the model at colliding energies above 30 GeV but show large discrepancies at low energies. This puts a new constraint on the qQGP model and also provides a baseline for the QCD critical point search in heavy-ion collisions at low energies. (orig.)
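
    The moments-to-cumulants step mentioned above is standard and can be sketched for any discrete distribution; the toy net-baryon distribution below is an illustration, not the quasi-particle model's output.

```python
import numpy as np

def cumulants(values, probs):
    """First four cumulants of a discrete distribution via central
    moments: C1 = mu, C2 = m2, C3 = m3, C4 = m4 - 3*m2^2."""
    values, probs = np.asarray(values, float), np.asarray(probs, float)
    mu = np.sum(values * probs)
    d = values - mu
    m2 = np.sum(d**2 * probs)
    m3 = np.sum(d**3 * probs)
    m4 = np.sum(d**4 * probs)
    return mu, m2, m3, m4 - 3 * m2**2

# Toy net-baryon distribution: B = -1, 0, +1 with given probabilities
C1, C2, C3, C4 = cumulants([-1, 0, 1], [0.25, 0.5, 0.25])
```

    Ratios such as C3/C2 and C4/C2 are the quantities usually compared with STAR data, since volume factors largely cancel in them.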

  11. Arbitrarily large numbers of kink internal modes in inhomogeneous sine-Gordon equations

    Energy Technology Data Exchange (ETDEWEB)

    González, J.A., E-mail: jalbertgonz@yahoo.es [Department of Physics, Florida International University, Miami, FL 33199 (United States); Department of Natural Sciences, Miami Dade College, 627 SW 27th Ave., Miami, FL 33135 (United States); Bellorín, A., E-mail: alberto.bellorin@ucv.ve [Escuela de Física, Facultad de Ciencias, Universidad Central de Venezuela, Apartado Postal 47586, Caracas 1041-A (Venezuela, Bolivarian Republic of); García-Ñustes, M.A., E-mail: monica.garcia@pucv.cl [Instituto de Física, Pontificia Universidad Católica de Valparaíso, Casilla 4059 (Chile); Guerrero, L.E., E-mail: lguerre@usb.ve [Departamento de Física, Universidad Simón Bolívar, Apartado Postal 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of); Jiménez, S., E-mail: s.jimenez@upm.es [Departamento de Matemática Aplicada a las TT.II., E.T.S.I. Telecomunicación, Universidad Politécnica de Madrid, 28040-Madrid (Spain); Vázquez, L., E-mail: lvazquez@fdi.ucm.es [Departamento de Matemática Aplicada, Facultad de Informática, Universidad Complutense de Madrid, 28040-Madrid (Spain)

    2017-06-28

    We prove analytically the existence of an infinite number of internal (shape) modes of sine-Gordon solitons in the presence of some inhomogeneous long-range forces, provided some conditions are satisfied. - Highlights: • We have found exact kink solutions to the perturbed sine-Gordon equation. • We have been able to study analytically the kink stability problem. • A kink equilibrated by an exponentially-localized perturbation has a finite number of oscillation modes. • A sufficiently broad equilibrating perturbation supports an infinite number of soliton internal modes.

  12. Development and application of an optogenetic platform for controlling and imaging a large number of individual neurons

    Science.gov (United States)

    Mohammed, Ali Ibrahim Ali

    The understanding and treatment of brain disorders, as well as the development of intelligent machines, is hampered by the lack of knowledge of how the brain fundamentally functions. Over the past century, we have learned much about how individual neurons and neural networks behave; however, new tools are critically needed to interrogate how neural networks give rise to complex brain processes and disease conditions. Recent innovations in molecular techniques, such as optogenetics, have given neuroscientists unprecedented precision to excite, inhibit and record defined neurons. The impressive sensitivity of currently available optogenetic sensors and actuators now makes it possible to analyze a large number of individual neurons in the brains of behaving animals. To promote the use of these optogenetic tools, this thesis integrates cutting-edge optogenetic molecular sensors, which are ultrasensitive for imaging neuronal activity, with a custom wide-field optical microscope to analyze a large number of individual neurons in living brains. Wide-field microscopy provides a large field of view and a spatial resolution approaching the Abbe diffraction limit of the fluorescence microscope. To demonstrate the advantages of this optical platform, we imaged a deep brain structure, the hippocampus, and tracked hundreds of neurons over time while a mouse performed a memory task, to investigate how those individual neurons related to behavior. In addition, we tested our optical platform by investigating transient neural network changes upon mechanical perturbation related to blast injuries. In this experiment, all blasted mice showed a consistent change in their neural networks: a small portion of neurons showed a sustained calcium increase for an extended period of time, whereas the majority lost their activity. Finally, using an optogenetic silencer to control selected motor cortex neurons, we examined their contributions to the network pathology of the basal ganglia related to

  13. An approach to solve group-decision-making problems with ordinal interval numbers.

    Science.gov (United States)

    Fan, Zhi-Ping; Liu, Yang

    2010-10-01

    The ordinal interval number is a form of uncertain preference information in group decision making (GDM), while it is seldom discussed in the existing research. This paper investigates how the ranking order of alternatives is determined based on preference information of ordinal interval numbers in GDM problems. When ranking a large quantity of ordinal interval numbers, the efficiency and accuracy of the ranking process are critical. A new approach is proposed to rank alternatives using ordinal interval numbers when every ranking ordinal in an ordinal interval number is thought to be uniformly and independently distributed in its interval. First, we give the definition of possibility degree on comparing two ordinal interval numbers and the related theory analysis. Then, to rank alternatives, by comparing multiple ordinal interval numbers, a collective expectation possibility degree matrix on pairwise comparisons of alternatives is built, and an optimization model based on this matrix is constructed. Furthermore, an algorithm is also presented to rank alternatives by solving the model. Finally, two examples are used to illustrate the use of the proposed approach.
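
    Under the stated assumption that each ranking ordinal is uniformly and independently distributed in its interval, a possibility degree for comparing two ordinal interval numbers can be computed by enumeration. The tie-splitting convention below is one natural choice; the paper's exact definition may differ.

```python
from fractions import Fraction
from itertools import product

def possibility_degree(a, b):
    """P(A beats B) for ranking ordinals drawn uniformly and independently
    from integer intervals a = (lo, hi), b = (lo, hi); ties count half.
    Illustrative definition only."""
    wins = Fraction(0)
    pairs = 0
    for x, y in product(range(a[0], a[1] + 1), range(b[0], b[1] + 1)):
        pairs += 1
        if x < y:               # smaller ordinal = better rank
            wins += 1
        elif x == y:
            wins += Fraction(1, 2)
    return wins / pairs

# Alternative A ranked 1st-2nd vs. alternative B ranked 2nd-4th
p = possibility_degree((1, 2), (2, 4))
```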

  14. Solar cycle 24: curious changes in the relative numbers of sunspot group types

    International Nuclear Information System (INIS)

    Kilcik, A.; Yurchyshyn, V. B.; Ozguc, A.; Rozelot, J. P.

    2014-01-01

    Here, we analyze different sunspot group (SG) behaviors from the points of view of both the sunspot counts (SSCs) and the number of SGs, in four categories, for the time period of 1982 January-2014 May. These categories include data from simple (A and B), medium (C), large (D, E, and F), and decaying (H) SGs. We investigate temporal variations of all data sets used in this study and find the following results. (1) There is a very significant decrease in the large groups' SSCs and the number of SGs in solar cycle 24 (cycle 24) compared to cycles 21-23. (2) There is no strong variation in the decaying groups' data sets for the entire investigated time interval. (3) Medium group data show a gradual decrease for the last three cycles. (4) A significant decrease occurred in the small groups during solar cycle 23, while no strong changes appear in the current cycle (cycle 24) compared to the previous ones. We confirm that the temporal behavior of all categories is quite different from cycle to cycle, and this is especially striking in solar cycle 24. Thus, we argue that the reduced absolute number of the large SGs is largely, if not solely, responsible for the weak cycle 24. These results might be important for long-term space weather predictions to understand the rate of formation of different groups of sunspots during a solar cycle and the possible consequences for the long-term geomagnetic activity.

  15. Q-factorial Gorenstein toric Fano varieties with large Picard number

    DEFF Research Database (Denmark)

    Nill, Benjamin; Øbro, Mikkel

    2010-01-01

    In dimension $d$, ${\boldsymbol Q}$-factorial Gorenstein toric Fano varieties with Picard number $\rho_X$ correspond to simplicial reflexive polytopes with $\rho_X + d$ vertices. Casagrande showed that any $d$-dimensional simplicial reflexive polytope has at most $3d$ and $3d-1$ vertices if $d$ is even and odd, respectively. Moreover, for $d$ even there is up to unimodular equivalence only one such polytope with $3d$ vertices, corresponding to the product of $d/2$ copies of a del Pezzo surface of degree six. In this paper we completely classify all $d$-dimensional simplicial reflexive polytopes having $3d-1$ vertices, corresponding to $d$-dimensional ${\boldsymbol Q}$-factorial Gorenstein toric Fano varieties with Picard number $2d-1$. For $d$ even, there exist three such varieties, with two being singular, while for $d > 1$ odd there exist precisely two, both being nonsingular toric fiber...

  16. Turbulent flows at very large Reynolds numbers: new lessons learned

    International Nuclear Information System (INIS)

    Barenblatt, G I; Prostokishin, V M; Chorin, A J

    2014-01-01

    The universal (Reynolds-number-independent) von Kármán–Prandtl logarithmic law for the velocity distribution in the basic intermediate region of a turbulent shear flow is generally considered to be one of the fundamental laws of engineering science and is taught universally in fluid mechanics and hydraulics courses. We show here that this law is based on an assumption that cannot be considered to be correct and which does not correspond to experiment. Nor is Landau's derivation of this law quite correct. In this paper, an alternative scaling law explicitly incorporating the influence of the Reynolds number is discussed, as is the corresponding drag law. The study uses the concept of intermediate asymptotics and that of incomplete similarity in the similarity parameter. Yakov Borisovich Zeldovich played an outstanding role in the development of these ideas. This work is a tribute to his glowing memory. (100th anniversary of the birth of Ya B Zeldovich)

  17. A very large number of GABAergic neurons are activated in the tuberal hypothalamus during paradoxical (REM) sleep hypersomnia.

    Directory of Open Access Journals (Sweden)

    Emilie Sapin

    We recently discovered, using Fos immunostaining, that the tuberal and mammillary hypothalamus contain a massive population of neurons specifically activated during paradoxical sleep (PS) hypersomnia. We further showed that some of the activated neurons of the tuberal hypothalamus express the melanin concentrating hormone (MCH) neuropeptide and that icv injection of MCH induces a strong increase in PS quantity. However, the chemical nature of the majority of the neurons activated during PS had not been characterized. To determine whether these neurons are GABAergic, we combined in situ hybridization of GAD(67) mRNA with immunohistochemical detection of Fos in control, PS-deprived and PS-hypersomniac rats. We found that 74% of the very large population of Fos-labeled neurons located in the tuberal hypothalamus after PS hypersomnia were GAD-positive. We further demonstrated, combining MCH immunohistochemistry and GAD(67) in situ hybridization, that 85% of the MCH neurons were also GAD-positive. Finally, based on the number of Fos-ir/GAD(+), Fos-ir/MCH(+), and GAD(+)/MCH(+) double-labeled neurons counted from three sets of double-staining, we uncovered that around 80% of the large number of the Fos-ir/GAD(+) neurons located in the tuberal hypothalamus after PS hypersomnia do not contain MCH. Based on these and previous results, we propose that the non-MCH Fos/GABAergic neuronal population could be involved in PS induction and maintenance, while the Fos/MCH/GABAergic neurons could be involved in the homeostatic regulation of PS. Further investigations will be needed to corroborate this original hypothesis.

  18. Higher first Chern numbers in one-dimensional Bose-Fermi mixtures

    Science.gov (United States)

    Knakkergaard Nielsen, Kristian; Wu, Zhigang; Bruun, G. M.

    2018-02-01

    We propose to use a one-dimensional system consisting of identical fermions in a periodically driven lattice immersed in a Bose gas, to realise topological superfluid phases with Chern numbers larger than 1. The bosons mediate an attractive induced interaction between the fermions, and we derive a simple formula to analyse the topological properties of the resulting pairing. When the coherence length of the bosons is large compared to the lattice spacing and there is a significant next-nearest neighbour hopping for the fermions, the system can realise a superfluid with Chern number ±2. We show that this phase is stable in a large region of the phase diagram as a function of the filling fraction of the fermions and the coherence length of the bosons. Cold atomic gases offer the possibility to realise the proposed system using well-known experimental techniques.

  19. Slepian simulation of distributions of plastic displacements of earthquake excited shear frames with a large number of stories

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Ditlevsen, Ove

    2005-01-01

    The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elastically within the yield limits and ideally plastically outside these, without accumulating eigenstresses. Within the elastic domain the frame is modeled as a linearly damped oscillator. The white noise excitation acts on the mass of the first floor, making the movement of the elastic bottom floors simulate a ground...

  20. Large-D gravity and low-D strings.

    Science.gov (United States)

    Emparan, Roberto; Grumiller, Daniel; Tanabe, Kentaro

    2013-06-21

    We show that in the limit of a large number of dimensions a wide class of nonextremal neutral black holes has a universal near-horizon limit. The limiting geometry is the two-dimensional black hole of string theory with a two-dimensional target space. Its conformal symmetry explains the properties of massless scalars found recently in the large-D limit. For black branes with string charges, the near-horizon geometry is that of the three-dimensional black strings of Horne and Horowitz. The analogies between the α' expansion in string theory and the large-D expansion in gravity suggest a possible effective string description of the large-D limit of black holes. We comment on applications to several subjects, in particular to the problem of critical collapse.

  1. Comparing direct and iterative equation solvers in a large structural analysis software system

    Science.gov (United States)

    Poole, E. L.

    1991-01-01

    Two direct Choleski equation solvers and two iterative preconditioned conjugate gradient (PCG) equation solvers used in a large structural analysis software system are described. The two direct solvers are implementations of the Choleski method for variable-band matrix storage and sparse matrix storage. The two iterative PCG solvers include the Jacobi conjugate gradient method and an incomplete Choleski conjugate gradient method. The performance of the direct and iterative solvers is compared by solving several representative structural analysis problems. Some key factors affecting the performance of the iterative solvers relative to the direct solvers are identified.
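
    The two solver families compared above can be contrasted in a few lines of NumPy: a direct Cholesky factor-and-substitute versus a Jacobi-preconditioned conjugate gradient loop. This is a generic sketch on a toy SPD matrix, not the paper's variable-band or sparse implementations.

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=500):
    """Conjugate gradients with a Jacobi (diagonal) preconditioner."""
    M_inv = 1.0 / np.diag(A)          # preconditioner: inverse of diag(A)
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy SPD (tridiagonal, diagonally dominant) test matrix
n = 50
A = (np.diag(np.full(n, 4.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
b = np.ones(n)

# Direct solve: Cholesky factorization, then two triangular substitutions
L = np.linalg.cholesky(A)
x_direct = np.linalg.solve(L.T, np.linalg.solve(L, b))
x_iter = jacobi_pcg(A, b)
```

    For well-conditioned systems like this one the two answers agree to solver tolerance; the trade-offs discussed in the abstract appear at scale, in factorization fill-in versus iteration counts.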

  2. Cloud computing for comparative genomics.

    Science.gov (United States)

    Wall, Dennis P; Kudtarkar, Parul; Fusaro, Vincent A; Pivovarov, Rimma; Patil, Prasad; Tonellato, Peter J

    2010-05-18

    Large comparative genomics studies and tools are becoming increasingly more compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to rise. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time took just under 70 hours and cost a total of $6,302 USD. The effort to transform existing comparative genomics algorithms from local compute infrastructures to the cloud is not trivial. However, the speed and flexibility of cloud computing environments provides a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems.

  3. Interaction between numbers and size during visual search

    OpenAIRE

    Krause, Florian; Bekkering, Harold; Pratt, Jay; Lindemann, Oliver

    2016-01-01

    The current study investigates an interaction between numbers and physical size (i.e. size congruity) in visual search. In three experiments, participants had to detect a physically large (or small) target item among physically small (or large) distractors in a search task comprising single-digit numbers. The relative numerical size of the digits was varied, such that the target item was either among the numerically large or small numbers in the search display and the relation between numeric...

  4. Effect of unequal fuel and oxidizer Lewis numbers on flame dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Shamim, Tariq [Department of Mechanical Engineering, The University of Michigan-Dearborn, Dearborn, MI 48128-1491 (United States)

    2006-12-15

    The interaction of non-unity Lewis number (due to preferential diffusion and/or unequal rates of heat and mass transfer) with the coupled effect of radiation, chemistry and unsteadiness alters several characteristics of a flame. The present study numerically investigates this interaction with a particular emphasis on the effect of unequal and non-unity fuel and oxidizer Lewis numbers in a transient diffusion flame. The unsteadiness is simulated by considering the flame subjected to modulations in reactant concentration. Flames with different Lewis numbers (ranging from 0.5 to 2) and subjected to different modulating frequencies are considered. The results show that the coupled effect of Lewis number and unsteadiness strongly influences the flame dynamics. The impact is stronger at high modulating frequencies and strain rates, particularly for large values of Lewis numbers. Compared to the oxidizer side Lewis number, the fuel side Lewis number has greater influence on flame dynamics. (author)

  5. Evaluation of list-mode ordered subset expectation maximization image reconstruction for pixelated solid-state compton gamma camera with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10⁶). For a Compton camera, especially one with a large number of readout channels, image reconstruction presents a big challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources and with a distance between the field of view and the first detector plane equal to 100 mm, which corresponds to a realistic nuclear medicine environment.
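
    The LM-OSEM update can be sketched in its simplest one-subset form, where OSEM reduces to list-mode MLEM: forward-project the current image along each recorded event, backproject the reciprocals, and rescale by the sensitivity image. The tiny system below is illustrative, not the VIP geometry.

```python
import numpy as np

def lm_mlem(events_A, sensitivity, n_iter=50):
    """List-mode MLEM. events_A[e, j] = probability that an emission from
    voxel j produced recorded event e; sensitivity[j] = sum of a_ej over
    all possible events. One subset, so OSEM reduces to MLEM (toy sketch)."""
    lam = np.ones(events_A.shape[1])
    for _ in range(n_iter):
        fwd = events_A @ lam                  # expected rate per event
        back = events_A.T @ (1.0 / fwd)       # backproject reciprocals
        lam *= back / sensitivity             # multiplicative update
    return lam

# Tiny toy system: 2 voxels, 3 detection channels (made-up probabilities)
A_full = np.array([[0.8, 0.1],
                   [0.1, 0.8],
                   [0.1, 0.1]])               # channel x voxel
sensitivity = A_full.sum(axis=0)
counts = np.array([8, 2, 2])                  # channel 0 fired 8 times, etc.
events_A = np.repeat(A_full, counts, axis=0)  # one row per recorded event
lam = lm_mlem(events_A, sensitivity)
```

    A useful sanity check on any MLEM implementation is that the sensitivity-weighted total activity equals the number of recorded events after the first update.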

  6. p-adic numbers

    OpenAIRE

    Grešak, Rozalija

    2015-01-01

    The field of real numbers is usually constructed using Dedekind cuts. In this thesis we focus on the construction of the field of real numbers as the metric completion of the rational numbers using Cauchy sequences. In a similar manner we construct the field of p-adic numbers and describe some of their basic and topological properties. We follow with a construction of the complex p-adic numbers and compare them with the ordinary complex numbers. We conclude the thesis by giving a motivation for the int...
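
    The p-adic absolute value underlying the completion described above is easy to compute for rationals: |x|_p = p^(-v_p(x)), where v_p counts the net power of p in x. The sketch below shows the characteristic behaviour that powers of p shrink p-adically even as they grow in the usual metric.

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation of a nonzero rational x."""
    x = Fraction(x)
    n, d, v = x.numerator, x.denominator, 0
    while n % p == 0:
        n //= p
        v += 1
    while d % p == 0:
        d //= p
        v -= 1
    return v

def p_norm(x, p):
    """p-adic absolute value |x|_p = p**(-vp(x)), with |0|_p = 0."""
    x = Fraction(x)
    return 0.0 if x == 0 else float(p) ** (-vp(x, p))

# In the 5-adic metric the sequence 5, 25, 125, ... tends to zero,
# while in the usual metric it diverges.
norms = [p_norm(5**k, 5) for k in range(1, 4)]
```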

  7. Evaluation of two sweeping methods for estimating the number of immature Aedes aegypti (Diptera: Culicidae) in large containers

    Directory of Open Access Journals (Sweden)

    Margareth Regina Dibo

    2013-07-01

    Introduction: Here, we evaluated sweeping methods used to estimate the number of immature Aedes aegypti in large containers. Methods: III/IV instars and pupae at a 9:1 ratio were placed in three types of containers, each one with three different water levels. Two sweeping methods were tested: water-surface sweeping and five-sweep netting. The data were analyzed using linear regression. Results: The five-sweep netting technique was more suitable for drums and water-tanks, while the water-surface sweeping method provided the best results for swimming pools. Conclusions: Both sweeping methods are useful tools in epidemiological surveillance programs for the control of Aedes aegypti.

  8. Large Scale Self-Organizing Information Distribution System

    National Research Council Canada - National Science Library

    Low, Steven

    2005-01-01

    This project investigates issues in "large-scale" networks. Here "large-scale" refers to networks with a large number of high-capacity nodes and transmission links, shared by a large number of users...

  9. Signals of lepton number violation

    CERN Document Server

    Panella, O; Srivastava, Y N

    1999-01-01

    The production of like-sign dileptons (LSD) in the high-energy lepton-number-violating (ΔL = +2) reaction pp → 2 jets + l⁺l⁺ (l = e, μ, τ), of interest for the experiments to be performed at the forthcoming Large Hadron Collider (LHC), is reported, taking up a composite model scenario in which the exchanged virtual composite neutrino is assumed to be a Majorana particle. Numerical estimates of the corresponding signal cross-section, implementing kinematical cuts needed to suppress the standard model background, are presented which show that in some regions of the parameter space the total number of LSD events is well above the background. Assuming non-observation of the LSD signal, it is found that the LHC would exclude a composite Majorana neutrino up to 700 GeV (if one requires 10 events for discovery). The sensitivity of LHC experiments to the parameter space is then compared to that of the next-generation neutrinoless double beta decay (ββ0ν) experiment, GENIUS, and i...

  10. Fluctuations of nuclear cross sections in the region of strong overlapping resonances and at large number of open channels

    International Nuclear Information System (INIS)

    Kun, S.Yu.

    1985-01-01

    On the basis of the symmetrized Simonius representation of the S matrix, the statistical properties of its fluctuating component in the presence of direct reactions are investigated. The case is considered where the resonance levels are strongly overlapping and there are many open channels, assuming that the compound-nucleus cross sections coupling different channels are equal. It is shown that using the averaged unitarity condition on the real energy axis one can eliminate both resonance-resonance and channel-channel correlations from the partial transition amplitudes. As a result, we derive the basic points of the Ericson fluctuation theory of nuclear cross sections, independently of the relation between the resonance overlapping and the number of open channels, and the validity of the Hauser-Feshbach model is established. If the number of open channels is large, the time of uniform population of compound-nucleus configurations, for an open excited nuclear system, is much smaller than the Poincare time. The lifetime of the compound nucleus is discussed.

  11. Low-Reynolds Number Effects in Ventilated Rooms

    DEFF Research Database (Denmark)

    Davidson, Lars; Nielsen, Peter V.; Topp, Claus

    In the present study, we use Large Eddy Simulation (LES), which is a suitable method for simulating the flow in ventilated rooms at low Reynolds numbers.

  12. [Intel random number generator-based true random number generator].

    Science.gov (United States)

    Huang, Feng; Shen, Hong

    2004-09-01

    To establish a true random number generator on the basis of certain Intel chips. The random numbers were acquired by programming using Microsoft Visual C++ 6.0 via register reading from the random number generator (RNG) unit of an Intel 815 chipset-based computer with the Intel Security Driver (ISD). We tested the generator with 500 random numbers using the NIST FIPS 140-1 tests and a χ² R-squared test, and the results showed that the random numbers it generated satisfied the requirements of independence and uniform distribution. We also statistically compared the random numbers generated by the Intel RNG-based true random number generator with those from a random number table, using the same amount of 7,500 random numbers in the same value domain; the SD, SE and CV of the Intel RNG-based generator were all less than those of the random number table. A u test of the two CVs revealed no significant difference between the two methods. The Intel RNG-based random number generator can produce high-quality random numbers with good independence and uniform distribution, and solves some problems encountered with random number tables in the acquisition of random numbers.
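    The uniformity check mentioned above can be illustrated with a short sketch that bins samples and computes a χ² goodness-of-fit statistic against a uniform distribution. The bin count, sample size and 5% critical value below are illustrative choices, and Python's `random` module stands in for the hardware generator:

```python
import random

def chi_square_uniformity(samples, bins=10, lo=0.0, hi=1.0):
    """Chi-squared goodness-of-fit statistic of `samples` against a
    uniform distribution over [lo, hi)."""
    counts = [0] * bins
    for x in samples:
        idx = min(int((x - lo) / (hi - lo) * bins), bins - 1)
        counts[idx] += 1
    expected = len(samples) / bins
    return sum((c - expected) ** 2 / expected for c in counts)

random.seed(1)  # software stand-in for the hardware RNG described above
stat = chi_square_uniformity([random.random() for _ in range(500)])
# With 10 bins there are 9 degrees of freedom; the 5% critical value is 16.92.
print(f"chi2 = {stat:.2f}, consistent with uniform at 5%: {stat < 16.92}")
```

    A generator that repeatedly failed such a check at the 5% level would not satisfy the uniformity requirement described above.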

  13. Noise Pulses in Large Area Optical Modules

    International Nuclear Information System (INIS)

    Aiello, Sebastiano; Leonora, Emanuele; Giordano, Valentina

    2013-06-01

    Large-area photomultipliers are widely used in neutrino and astroparticle detectors to measure Cherenkov light in media such as water or ice. The key element of these detectors is the so-called 'optical module', which consists of a photodetector enclosed in a transparent pressure-resistant container that protects it and ensures good light transmission. The noise pulses present on the anode of each photomultiplier strongly affect the performance of the detector. A large study was conducted on the noise pulses of large-area photomultipliers, considering the time and charge distributions of dark pulses, prepulses, delayed pulses, and afterpulses. The contribution to noise pulses due to the presence of the external glass spheres was also studied, including a comparison of two vessels from different manufacturers. (authors)

  14. Methods and models used in comparative risk studies

    International Nuclear Information System (INIS)

    Devooght, J.

    1983-01-01

    Comparative risk studies make use of a large number of methods and models based upon a set of incompletely formulated assumptions or value judgements. Owing to the multidimensionality of risks and benefits, the economic and social context may notably influence the final result. Five classes of models are briefly reviewed: accounting of fluxes of effluents, radiation and energy; transport models and health effects; systems reliability and Bayesian analysis; economic analysis of reliability and cost-risk-benefit analysis; and decision theory in the presence of uncertainty and multiple objectives. The purpose and prospects of comparative studies are assessed in view of probable diminishing returns for large generic comparisons [fr]

  15. Cloud computing for comparative genomics

    Directory of Open Access Journals (Sweden)

    Pivovarov Rimma

    2010-05-01

    Full Text Available Abstract Background Large comparative genomics studies and tools are becoming increasingly compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to rise. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. Results We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high-capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time took just under 70 hours and cost a total of $6,302 USD. Conclusions The effort to transform existing comparative genomics algorithms from local compute infrastructures to the cloud is not trivial. However, the speed and flexibility of cloud computing environments provide a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems.
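    The reported figures support some back-of-envelope rates. The inputs below are rounded, since the abstract says "more than 300,000" processes and "just under 70 hours":

```python
# Rounded figures from the abstract above.
processes, nodes, hours, cost_usd = 300_000, 100, 70, 6302

print(round(cost_usd / processes, 4))      # → 0.021 (USD per RSD-cloud process)
print(round(processes / (nodes * hours)))  # → 43 (processes per node-hour)
```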

  16. Comparative Assessments of the Seasonality in "The Total Number of Overnight Stays" in Romania, Bulgaria and the European Union

    Directory of Open Access Journals (Sweden)

    Jugănaru Ion Dănuț

    2017-01-01

    For the quantitative research carried out in this study, we processed a database consisting of the monthly values of “the total number of overnight stays” indicator, recorded between January 2005 and December 2016, using the moving average method, the seasonality coefficient and EViews 5. The results led to the formulation of comparative assessments regarding the seasonality in the tourism activities from Romania and Bulgaria and their situation compared to the average of the seasonality recorded in the EU.
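    The moving-average seasonality computation described above can be sketched as follows. The monthly overnight-stay figures are hypothetical (the study's 2005-2016 series is not reproduced here); the seasonality coefficient for each month is the average ratio of the observed value to a 12-month centred moving average:

```python
# Hypothetical monthly overnight stays for three years (Jan..Dec repeated);
# illustrative values only, not the study's data.
series = [80, 70, 90, 100, 130, 180, 240, 260, 170, 120, 90, 85] * 3

def centered_ma12(xs):
    """2x12 centred moving average, the usual trend estimate for monthly data."""
    return {t: (sum(xs[t - 6:t + 6]) + sum(xs[t - 5:t + 7])) / 24.0
            for t in range(6, len(xs) - 6)}

trend = centered_ma12(series)
ratios = {m: [] for m in range(12)}
for t, tr in trend.items():
    ratios[t % 12].append(series[t] / tr)  # observed / trend

# Seasonality coefficient per calendar month (1 = January .. 12 = December).
coeff = {m + 1: sum(r) / len(r) for m, r in ratios.items() if r}
print(max(coeff, key=coeff.get))  # → 8 (the hypothetical August peak)
```

    A coefficient well above 1 for the summer months is exactly the kind of seasonality pattern the study quantifies for Romanian, Bulgarian and EU tourism.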

  17. How much can the number of jabiru stork (Ciconiidae) nests vary due to change of flood extension in a large Neotropical floodplain?

    Directory of Open Access Journals (Sweden)

    Guilherme Mourão

    2010-10-01

    Full Text Available The jabiru stork, Jabiru mycteria (Lichtenstein, 1819, a large, long-legged wading bird occurring in lowland wetlands from southern Mexico to northern Argentina, is considered endangered in a large portion of its distribution range. We conducted aerial surveys to estimate the number of jabiru active nests in the Brazilian Pantanal (140,000 km²) in September of 1991-1993, 1998, 2000-2002, and 2004. Corrected densities of active nests were regressed against the annual hydrologic index (AHI), an index of flood extension in the Pantanal based on the water level of the Paraguay River. Annual nest density was a non-linear function of the AHI, modeled by the equation 6.5 × 10⁻⁸ · AHI^1.99 (corrected r² = 0.72, n = 7). We applied this model to the AHI between 1900 and 2004. The results indicate that the number of jabiru nests may have varied from about 220 in 1971 to more than 23,000 in the nesting season of 1921, and the estimates for our study period (1991 to 2004) averaged about 12,400 nests. Our model indicates that the inter-annual variations in flooding extent can determine dramatic changes in the number of active jabiru nests. Since the jabiru stork responds negatively to drier conditions in the Pantanal, direct human-induced changes in the hydrological patterns, as well as the effects of global climate change, may strongly jeopardize the population in the region.
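    The reported regression can be expressed as a small helper. The AHI values below are hypothetical, chosen only to illustrate the near-quadratic response of nest density to flood extension:

```python
def nest_density(ahi):
    """Corrected density of active jabiru nests as a function of the annual
    hydrologic index (AHI), from the regression reported above:
    density = 6.5e-8 * AHI^1.99 (corrected r^2 = 0.72, n = 7)."""
    return 6.5e-8 * ahi ** 1.99

# Hypothetical wet-year vs. dry-year AHI values (illustrative only):
ratio = nest_density(50_000) / nest_density(10_000)
print(round(ratio, 1))  # → 24.6, i.e. 5x the AHI gives roughly 25x the nests
```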

  18. Comparing centralised and decentralised anaerobic digestion of stillage from a large-scale bioethanol plant to animal feed production.

    Science.gov (United States)

    Drosg, B; Wirthensohn, T; Konrad, G; Hornbachner, D; Resch, C; Wäger, F; Loderer, C; Waltenberger, R; Kirchmayr, R; Braun, R

    2008-01-01

    A comparison of stillage treatment options for large-scale bioethanol plants was based on the data of an existing plant producing approximately 200,000 t/yr of bioethanol and 1,400,000 t/yr of stillage. Animal feed production--the state-of-the-art technology at the plant--was compared to anaerobic digestion. The latter was simulated in two different scenarios: digestion in small-scale biogas plants in the surrounding area versus digestion in a large-scale biogas plant at the bioethanol production site. Emphasis was placed on a holistic simulation balancing chemical parameters and calculating logistic algorithms to compare the efficiency of the stillage treatment solutions. For central anaerobic digestion, different digestate handling solutions were considered because of the large amount of digestate. For land application, a minimum of 36,000 ha of available agricultural area and 600,000 m³ of storage volume would be needed. Secondly, membrane purification of the digestate was investigated, consisting of a decanter, microfiltration, and reverse osmosis. As a third option, aerobic wastewater treatment of the digestate was discussed. The final outcome was an economic evaluation of the three mentioned stillage treatment options, as a guide to stillage management for operators of large-scale bioethanol plants. Copyright IWA Publishing 2008.

  19. Number to finger mapping is topological.

    NARCIS (Netherlands)

    Plaisier, M.A.; Smeets, J.B.J.

    2011-01-01

    It has been shown that humans associate fingers with numbers because finger counting strategies interact with numerical judgements. At the same time, there is evidence that there is a relation between number magnitude and space as small to large numbers seem to be represented from left to right. In

  20. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Science.gov (United States)

    Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire

    2016-01-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...

  1. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically realized as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this work, we introduce a discrete event-based simulation tool that models the data flow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers, resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error of simulation when comparing the results to a large amount of real-world ope...

  2. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically implemented as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this paper, we introduce a discrete event-based simulation tool that models the dataflow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers; resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error in simulation when comparing the results to a large amount of real-world ...
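    The buffer-occupancy measurement described in these records can be illustrated, at a vastly reduced scale, by a sketch of a single FIFO buffer feeding one processing unit with a fixed service time. The arrival times and service time below are invented; the ATLAS tool models far more structure than this:

```python
def buffer_occupancy(arrivals, service_time):
    """Track occupancy of one buffer in front of a single processing unit.
    `arrivals` must be sorted; returns (time, level) after each event."""
    finish, events = 0.0, []
    for t in arrivals:
        events.append((t, +1))                  # element enters the buffer
        finish = max(t, finish) + service_time  # FIFO service completion
        events.append((finish, -1))             # element leaves the buffer
    level, trace = 0, []
    for t, delta in sorted(events):
        level += delta
        trace.append((t, level))
    return trace

trace = buffer_occupancy(arrivals=[0.0, 0.1, 0.2, 1.0], service_time=0.3)
print(max(level for _, level in trace))  # → 3 (peak buffer occupancy)
```

    Sweeping the arrival rate or service time in such a model is the discrete-event analogue of the buffer-occupancy and utilization studies described above.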

  3. Comparative study of 6 MV and 15 MV treatment plans for large chest wall irradiation

    International Nuclear Information System (INIS)

    Prasana Sarathy, N.; Kothanda Raman, S.; Sen, Dibyendu; Pal, Bipasha

    2007-01-01

    Conventionally, opposed tangential fields are used for the treatment of chest wall irradiation. If the chest wall is treated on the linac, 4 or 6 MV photons will be the energy of choice. It is a well-established rule that for chest wall separations up to 22 cm one can use mid-energies, with an acceptable volume of hot spots. For larger patient sizes (22 cm and above), mid-energy beams produce hot spots over large volumes. The purpose of this work is to compare plans made with 6 and 15 MV photons for patients with large chest wall separations. The obvious disadvantage in using high-energy photons for chest wall irradiation is inadequate dose to the skin, but this can be compensated for by using a bolus of suitable thickness.

  4. Comparative analysis of machine learning methods in ligand-based virtual screening of large compound libraries.

    Science.gov (United States)

    Ma, Xiao H; Jia, Jia; Zhu, Feng; Xue, Ying; Li, Ze R; Chen, Yu Z

    2009-05-01

    Machine learning methods have been explored as ligand-based virtual screening tools for facilitating drug lead discovery. These methods predict compounds of specific pharmacodynamic, pharmacokinetic or toxicological properties based on their structure-derived structural and physicochemical properties. Increasing attention has been directed at these methods because of their capability in predicting compounds of diverse structures and complex structure-activity relationships without requiring knowledge of the target 3D structure. This article reviews current progress in using machine learning methods for virtual screening of pharmacodynamically active compounds from large compound libraries, and analyzes and compares the reported performances of machine learning tools with those of structure-based and other ligand-based (such as pharmacophore and clustering) virtual screening methods. The feasibility of improving the performance of machine learning methods in screening large libraries is discussed.
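    As a minimal stand-in for the machine-learning screens reviewed above, the sketch below labels compounds as active or inactive from structure-derived descriptors using a k-nearest-neighbour vote. The three descriptors and both activity clusters are synthetic; real screens use far richer descriptor sets and models:

```python
import math
import random

# Synthetic structure-derived descriptors (e.g. scaled weight, logP, ring
# count): actives cluster around 0.7, inactives around 0.3 in each dimension.
random.seed(0)
actives   = [[random.gauss(0.7, 0.1) for _ in range(3)] for _ in range(50)]
inactives = [[random.gauss(0.3, 0.1) for _ in range(3)] for _ in range(50)]
train = [(x, 1) for x in actives] + [(x, 0) for x in inactives]

def predict(query, k=5):
    """Majority vote among the k nearest training compounds in descriptor
    space -- a minimal ligand-based virtual screening classifier."""
    nearest = sorted(train, key=lambda xy: math.dist(xy[0], query))[:k]
    return int(sum(label for _, label in nearest) > k / 2)

print(predict([0.72, 0.68, 0.75]), predict([0.25, 0.31, 0.28]))  # → 1 0
```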

  5. Catering for large numbers of tourists: the McDonaldization of casual dining in Kruger National Park

    Directory of Open Access Journals (Sweden)

    Ferreira Sanette L.A.

    2016-09-01

    Full Text Available Since 2002 Kruger National Park (KNP) has been subject to a commercialisation strategy. Regarding income generation, SANParks sees KNP as the goose that lays the golden eggs. As part of SANParks' commercialisation strategy, and in response to the need to provide services that are efficient, predictable and calculable for a large number of tourists, SANParks has allowed well-known branded restaurants to be established in certain rest camps in KNP. This innovation has raised a range of different concerns and opinions among the public. This paper investigates the what and the where of casual dining experiences in KNP; describes how the catering services have evolved over the last 70 years; and evaluates current visitor perceptions of the introduction of franchised restaurants in the park. The main research instrument was a questionnaire survey. Survey findings confirmed that restaurant managers, park managers and visitors recognise franchised restaurants as positive contributors to the unique KNP experience. Park managers appraised the franchised restaurants as mechanisms for funding conservation.

  6. Normal zone detectors for a large number of inductively coupled coils. Revision 1

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this paper uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent. The effect on accuracy of changes in the system parameters is discussed
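    The detection principle, a bridge that balances out the mutual- and self-induced voltages so that only the small resistive normal-zone signal remains, can be sketched numerically. The inductance matrix, ramp rates and normal-zone voltage below are invented for illustration and are not taken from the paper:

```python
# Three inductively coupled coils; all values invented for illustration.
M = [[1.0, 0.3, 0.1],    # self (diagonal) and mutual inductances, H
     [0.3, 1.2, 0.2],
     [0.1, 0.2, 0.9]]
dIdt = [5.0, -2.0, 1.0]  # coil current ramp rates, A/s
r = [0.0, 0.004, 0.0]    # resistive normal-zone voltages, V (zone in coil 2)

# Measured coil voltages: inductive coupling plus any normal-zone voltage.
V = [sum(M[i][j] * dIdt[j] for j in range(3)) + r[i] for i in range(3)]

# A balanced bridge subtracts the predicted inductive voltage, leaving only
# the small resistive signal that locates and sizes the normal zone.
bridge = [V[i] - sum(M[i][j] * dIdt[j] for j in range(3)) for i in range(3)]
print([round(b, 6) for b in bridge])  # → [0.0, 0.004, 0.0]
```

    This illustrates why a normal-zone voltage far smaller than the induced voltages can still be located: the inductive terms cancel by construction.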

  7. Aerodynamic Effects of Turbulence Intensity on a Variable-Speed Power-Turbine Blade with Large Incidence and Reynolds Number Variations

    Science.gov (United States)

    Flegel, Ashlie Brynn; Giel, Paul W.; Welch, Gerard E.

    2014-01-01

    The effects of inlet turbulence intensity on the aerodynamic performance of a variable speed power turbine blade are examined over large incidence and Reynolds number ranges. Both high and low turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The high turbulence study extends this to LPT-relevant turbulence levels while perhaps sacrificing transitional flow effects. Downstream total pressure and exit angle data were acquired for ten incidence angles ranging from +15.8° to −51.0°. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12×10⁵ to 2.12×10⁶ and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hotwire located 0.415 axial-chord upstream of the blade row. The inlet turbulence levels ranged from 0.25%-0.4% for the low Tu tests and 8%-15% for the high Tu study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated as needed for computational inlet boundary conditions. Downstream flow field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7 axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction surface separation that was evident at many of the low Tu conditions has been eliminated. At the extreme positive and negative incidence angles, the data show substantial differences in the exit flow field. These differences are attributable to both the higher inlet Tu directly and to the thinner inlet endwall

  8. Computational domain length and Reynolds number effects on large-scale coherent motions in turbulent pipe flow

    Science.gov (United States)

    Feldmann, Daniel; Bauer, Christian; Wagner, Claus

    2018-03-01

    We present results from direct numerical simulations (DNS) of turbulent pipe flow at shear Reynolds numbers up to Reτ = 1500 using different computational domains with lengths up to ?. The objectives are to analyse the effect of the finite size of the periodic pipe domain on large flow structures in dependence on Reτ and to assess a minimum ? required for relevant turbulent scales to be captured and a minimum Reτ for very large-scale motions (VLSM) to be analysed. Analysing one-point statistics revealed that the mean velocity profile is invariant for ?. The wall-normal location at which deviations occur in shorter domains changes strongly with increasing Reτ from the near-wall region to the outer layer, where VLSM are believed to live. The root mean square velocity profiles exhibit domain length dependencies for pipes shorter than 14R and 7R, depending on Reτ. For all Reτ, the higher-order statistical moments show only weak dependencies, and only for the shortest domain considered here. However, the analysis of one- and two-dimensional pre-multiplied energy spectra revealed that even for larger ?, not all physically relevant scales are fully captured, even though the aforementioned statistics are in good agreement with the literature. We found ? to be sufficiently large to capture VLSM-relevant turbulent scales in the considered range of Reτ based on our definition of an integral energy threshold of 10%. The requirement to capture at least 1/10 of the global maximum energy level is justified by a 14% increase of the streamwise turbulence intensity in the outer region between Reτ = 720 and 1500, which can be related to VLSM-relevant length scales. Based on this scaling anomaly, we found Reτ ⪆ 1500 to be a necessary minimum requirement to investigate VLSM-related effects in pipe flow, even though the streamwise energy spectra do not yet indicate sufficient scale separation between the most energetic and the very long motions.

  9. Clinical significance of rare copy number variations in epilepsy: a case-control survey using microarray-based comparative genomic hybridization.

    Science.gov (United States)

    Striano, Pasquale; Coppola, Antonietta; Paravidino, Roberta; Malacarne, Michela; Gimelli, Stefania; Robbiano, Angela; Traverso, Monica; Pezzella, Marianna; Belcastro, Vincenzo; Bianchi, Amedeo; Elia, Maurizio; Falace, Antonio; Gazzerro, Elisabetta; Ferlazzo, Edoardo; Freri, Elena; Galasso, Roberta; Gobbi, Giuseppe; Molinatto, Cristina; Cavani, Simona; Zuffardi, Orsetta; Striano, Salvatore; Ferrero, Giovanni Battista; Silengo, Margherita; Cavaliere, Maria Luigia; Benelli, Matteo; Magi, Alberto; Piccione, Maria; Dagna Bricarelli, Franca; Coviello, Domenico A; Fichera, Marco; Minetti, Carlo; Zara, Federico

    2012-03-01

    To perform an extensive search for genomic rearrangements by microarray-based comparative genomic hybridization in patients with epilepsy. Prospective cohort study. Epilepsy centers in Italy. Two hundred seventy-nine patients with unexplained epilepsy, 265 individuals with nonsyndromic mental retardation but no epilepsy, and 246 healthy control subjects were screened by microarray-based comparative genomic hybridization. Identification of copy number variations (CNVs) and gene enrichment. Rare CNVs occurred in 26 patients (9.3%) and 16 healthy control subjects (6.5%) (P = .26). The CNVs identified in patients were larger (P = .03) and showed higher gene content (P = .02) than those in control subjects. The CNVs larger than 1 megabase (P = .002) and including more than 10 genes (P = .005) occurred more frequently in patients than in control subjects. Nine patients (34.6%) among those harboring rare CNVs showed rearrangements associated with emerging microdeletion or microduplication syndromes. Mental retardation and neuropsychiatric features were associated with rare CNVs (P = .004), whereas epilepsy type was not. The CNV rate in patients with epilepsy and mental retardation or neuropsychiatric features is not different from that observed in patients with mental retardation only. Moreover, significant enrichment of genes involved in ion transport was observed within CNVs identified in patients with epilepsy. Patients with epilepsy show a significantly increased burden of large, rare, gene-rich CNVs, particularly when associated with mental retardation and neuropsychiatric features. The limited overlap between CNVs observed in the epilepsy group and those observed in the group with mental retardation only as well as the involvement of specific (ion channel) genes indicate a specific association between the identified CNVs and epilepsy. Screening for CNVs should be performed for diagnostic purposes preferentially in patients with epilepsy and mental retardation or
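    The headline comparison above (rare CNVs in 26 of 279 patients vs. 16 of 246 controls, P = .26) can be checked with a two-sided Fisher exact test. The pure-Python implementation below is a sketch; the study does not state which test produced its P value, so the result may differ slightly:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    sums the probabilities of all tables at least as extreme (i.e. no more
    likely) than the observed one, with row and column totals fixed."""
    n, r1, c1 = a + b + c + d, a + b, a + c
    def p(x):  # hypergeometric probability of a table with top-left cell x
        return comb(r1, x) * comb(n - r1, c1 - x) / comb(n, c1)
    p_obs = p(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    return sum(px for px in (p(x) for x in range(lo, hi + 1))
               if px <= p_obs * (1 + 1e-9))

# Rare-CNV carriers vs. non-carriers in patients and controls (from the text):
p_val = fisher_exact_two_sided(26, 279 - 26, 16, 246 - 16)
print(round(p_val, 2))
```

    A non-significant P value here matches the study's point that the overall carrier rate did not differ, whereas CNV size and gene content did.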

  10. Investigating the Randomness of Numbers

    Science.gov (United States)

    Pendleton, Kenn L.

    2009-01-01

    The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…

  11. The redshift number density evolution of Mg II absorption systems

    International Nuclear Information System (INIS)

    Chen Zhi-Fu

    2013-01-01

    We make use of the recent large sample of 17 042 Mg II absorption systems from Quider et al. to analyze the evolution of the redshift number density. Regardless of the strength of the absorption line, we find that the evolution of the redshift number density can be clearly distinguished into three different phases. In the intermediate redshift epoch (0.6 ≲ z ≲ 1.6), the evolution of the redshift number density is consistent with the non-evolution curve, however, the non-evolution curve over-predicts the values of the redshift number density in the early (z ≲ 0.6) and late (z ≳ 1.6) epochs. Based on the invariant cross-section of the absorber, the lack of evolution in the redshift number density compared to the non-evolution curve implies the galaxy number density does not evolve during the middle epoch. The flat evolution of the redshift number density tends to correspond to a shallow evolution in the galaxy merger rate during the late epoch, and the steep decrease of the redshift number density might be ascribed to the small mass of halos during the early epoch.
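    The "non-evolution curve" referred to above has a standard form: for absorbers of constant comoving number density and cross-section, dN/dz ∝ (1+z)²/E(z) with E(z) = sqrt(Ωm(1+z)³ + ΩΛ). The sketch below assumes a flat ΛCDM cosmology with Ωm = 0.3, a choice of ours that is not stated in the abstract:

```python
import math

def dn_dz(z, omega_m=0.3, omega_l=0.7):
    """Shape of the no-evolution redshift number density for absorbers of
    constant comoving density and cross-section (normalisation arbitrary):
    dN/dz ~ (1+z)^2 / E(z), with E(z) = sqrt(omega_m (1+z)^3 + omega_l)."""
    return (1 + z) ** 2 / math.sqrt(omega_m * (1 + z) ** 3 + omega_l)

# Compare the two epoch boundaries quoted above, z = 0.6 and z = 1.6:
print(round(dn_dz(1.6) / dn_dz(0.6), 2))  # → 1.5
```

    An observed redshift number density that tracks this curve between z ≈ 0.6 and z ≈ 1.6 but falls below it outside that range is exactly the three-phase behaviour described above.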

  12. Enhancement of phase space density by increasing trap anisotropy in a magneto-optical trap with a large number of atoms

    International Nuclear Information System (INIS)

    Vengalattore, M.; Conroy, R.S.; Prentiss, M.G.

    2004-01-01

    The phase space density of dense, cylindrical clouds of atoms in a 2D magneto-optic trap is investigated. For a large number of trapped atoms (>10⁸), the density of a spherical cloud is limited by photon reabsorption. However, as the atom cloud is deformed to reduce the radial optical density, the temperature of the atoms decreases due to the suppression of multiple scattering, leading to an increase in the phase space density. A density of 2×10⁻⁴ has been achieved in a magneto-optic trap containing 2×10⁸ atoms

  13. Extension of electronic speckle correlation interferometry to large deformations

    Science.gov (United States)

    Sciammarella, Cesar A.; Sciammarella, Federico M.

    1998-07-01

    The process of fringe formation under simultaneous illumination in two orthogonal directions is analyzed. Procedures to extend the applicability of this technique to large deformation and high density of fringes are introduced. The proposed techniques are applied to a number of technical problems. Good agreement is obtained when the experimental results are compared with results obtained by other methods.

  14. Explaining the large numbers by a hierarchy of ''universes'': a unified theory of strong and gravitational interactions

    International Nuclear Information System (INIS)

    Caldirola, P.; Recami, E.

    1978-01-01

    By assuming covariance of physical laws under (discrete) dilatations, strong and gravitational interactions have been described in a unified way. In terms of the (additional, discrete) ''dilatational'' degree of freedom, our cosmos as well as hadrons can be considered as different states of the same system, or rather as similar systems. Moreover, a discrete hierarchy can be defined of ''universes'' which are governed by force fields with strengths inversely proportional to the ''universe'' radii. Inside each ''universe'' an equivalence principle holds, so that its characteristic field can be geometrized there. It is thus easy to derive a whole ''numerology'', i.e. relations among numbers analogous to the so-called Weyl-Eddington-Dirac ''large numbers''. For instance, the ''Planck mass'' happens to be nothing but the (average) magnitude of the strong charge of the hadron quarks. However, our ''numerology'' connects the (gravitational) macrocosmos with the (strong) microcosmos, rather than with the electromagnetic ones (as, e.g., in Dirac's version). Einstein-type scaled equations (with ''cosmological'' term) are suggested for the hadron interior, which - incidentally - yield a (classical) quark confinement in a very natural way and are compatible with the ''asymptotic freedom''. At last, within a ''bi-scale'' theory, further equations are proposed that provide a priori a classical field theory of strong interactions (between different hadrons). The relevant sections are 5.2, 7 and 8. (author)

  15. COMPARATIVE CHARACTERISTIC OF TOURIST POTENTIAL OF MUSEUMS OF KRASNODAR REGION

    Directory of Open Access Journals (Sweden)

    Svetlana V. Kirilicheva

    2017-01-01

    Full Text Available Aim. The article describes the tourist potential of two large museums of the Krasnodar Territory: the Krasnodar State Historical and Archaeological Museum-Reserve named after E.D. Felitsyn and the Krasnodar Regional Art Museum named after F.A. Kovalenko. Much attention is paid to the classification of museums in the Krasnodar Territory. Methods. The study used a comparative-geographical method, a systematic approach, analysis of statistical-mathematical materials, and an analysis of the leisure profile of citizens. Findings. A comparative assessment of the potential of the two large museums of the region is given. We also analyzed survey data on the leisure profile of townspeople in the city of Krasnodar in order to identify which of the museums is more popular. The main indicators, such as the number of storage units, the total exposition and exhibition area, the number of sightseeing visits and mass events, the number of educational programs and exhibitions, and the number of employees, were examined and analyzed. Differences between the museums are also noted. Conclusions. An analysis of these data showed that both museums have sufficient tourist potential to represent the city and to acquaint visitors with it. The results of an analysis of events held in the museums to attract visitors are presented. The sufficient tourist potential of the two large museums for representation of the city and region is established, and directions for their development as objects of tourism are proposed.


  16. Changes in the number of nesting pairs and breeding success of the White Stork Ciconia ciconia in a large city and a neighbouring rural area in South-West Poland

    Directory of Open Access Journals (Sweden)

    Kopij Grzegorz

    2017-12-01

    Full Text Available During the years 1994–2009, the number of White Stork pairs breeding in the city of Wrocław (293 km²) fluctuated between 5 pairs in 1999 and 19 pairs in 2004. Most nests were clumped at two sites in the Odra river valley; two nests were located only ca. 1 km from the city hall. The fluctuations in numbers can be linked to the availability of feeding grounds and to weather. In years when the grass in the Odra valley was mowed, the number of White Storks was higher than in years when it was left unattended. Overall, the mean number of fledglings per successful pair during the years 1995–2009 was slightly higher in the rural than in the urban area. Contrary to expectation, the mean number of fledglings per successful pair was highest in the year of highest population density. In two rural counties adjacent to Wrocław, the number of breeding pairs in 1994/95 was similar to that in the city (15 vs. 13 pairs). However, in 2004 the number of breeding pairs in the city had almost doubled compared to that in the neighboring counties (10 vs. 19 pairs). After a sharp decline between 2004 and 2008, the populations in both areas were similar in 2009 (5 vs. 4 pairs), but much lower than in 1994–1995. Wrocław is probably the only large city (>100,000 people) in Poland where the White Stork has developed a sizeable, although fluctuating, breeding population. One of the most powerful roles city-nesting White Storks may play is engaging citizens directly with nature, thereby facilitating environmental education and awareness.

  17. Extensive error in the number of genes inferred from draft genome assemblies.

    Directory of Open Access Journals (Sweden)

    James F Denton

    2014-12-01

    Full Text Available Current sequencing methods produce large amounts of data, but genome assemblies based on these data are often woefully incomplete. These incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. In this paper we investigate the magnitude of the problem, both in terms of total gene number and the number of copies of genes in specific families. To do this, we compare multiple draft assemblies against higher-quality versions of the same genomes, using several new assemblies of the chicken genome based on both traditional and next-generation sequencing technologies, as well as published draft assemblies of chimpanzee. We find that upwards of 40% of all gene families are inferred to have the wrong number of genes in draft assemblies, and that these incorrect assemblies both add and subtract genes. Using simulated genome assemblies of Drosophila melanogaster, we find that the major cause of increased gene numbers in draft genomes is the fragmentation of genes onto multiple individual contigs. Finally, we demonstrate the usefulness of RNA-Seq in improving the gene annotation of draft assemblies, largely by connecting genes that have been fragmented in the assembly process.
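The fragmentation mechanism identified here, in which a gene split across contig boundaries is annotated as multiple genes, can be illustrated with a minimal toy model (the coordinates and function are hypothetical, not the paper's pipeline):

```python
def inferred_gene_count(gene_bounds, contig_breaks):
    """Toy model of annotation on a fragmented assembly.

    Each contiguous piece of a gene that ends up on its own contig is
    assumed to be annotated as a separate gene, so every assembly break
    falling inside a gene inflates the inferred count by one.
    """
    count = 0
    for start, end in gene_bounds:
        cuts_inside = sum(1 for b in contig_breaks if start < b < end)
        count += 1 + cuts_inside
    return count

# Three true genes; one assembly break lands inside the second gene.
genes = [(0, 900), (1000, 2200), (2500, 3000)]
breaks = [1600]  # contig boundary at position 1600
print(inferred_gene_count(genes, breaks))  # one gene is double-counted
```

RNA-Seq scaffolding, as the abstract notes, effectively removes breaks that fall inside transcribed genes, deflating the inferred count back toward the true value.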

  18. Number-unconstrained quantum sensing

    Science.gov (United States)

    Mitchell, Morgan W.

    2017-12-01

    Quantum sensing is commonly described as a constrained optimization problem: maximize the information gained about an unknown quantity using a limited number of particles. Important sensors including gravitational wave interferometers and some atomic sensors do not appear to fit this description, because there is no external constraint on particle number. Here, we develop the theory of particle-number-unconstrained quantum sensing, and describe how optimal particle numbers emerge from the competition of particle-environment and particle-particle interactions. We apply the theory to optical probing of an atomic medium modeled as a resonant, saturable absorber, and observe the emergence of well-defined finite optima without external constraints. The results contradict some expectations from number-constrained quantum sensing and show that probing with squeezed beams can give a large sensitivity advantage over classical strategies when each is optimized for particle number.

  19. Topologies of multiterminal HVDC-VSC transmission for large offshore wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Gomis-Bellmunt, Oriol [Centre d' Innovacio Tecnologica en Convertidors Estatics i Accionaments (CITCEA-UPC), Universitat Politecnica de Catalunya UPC, Av. Diagonal, 647, Pl. 2. 08028 Barcelona (Spain); IREC Catalonia Institute for Energy Research, Barcelona (Spain); Liang, Jun; Ekanayake, Janaka; King, Rosemary; Jenkins, Nicholas [School of Engineering, Cardiff University, Queen' s Buildings, The Parade, Cardiff CF24 3AA, Wales (United Kingdom)

    2011-02-15

    Topologies of multiterminal HVDC-VSC transmission systems for large offshore wind farms are investigated. System requirements for multiterminal HVDC are described, particularly the maximum power loss allowed in the event of a fault. Alternative control schemes and HVDC circuit topologies are reviewed, including the need for HVDC circuit breakers. Various topologies are analyzed and compared according to a number of criteria: number and capacity of HVDC circuits, number of HVDC circuit breakers, maximum power loss, flexibility, redundancy, line utilization, the need for offshore switching platforms, and fast communications. (author)

  20. Quantum random-number generator based on a photon-number-resolving detector

    International Nuclear Information System (INIS)

    Ren Min; Wu, E; Liang Yan; Jian Yi; Wu Guang; Zeng Heping

    2011-01-01

    We demonstrated a high-efficiency quantum random number generator that takes advantage of the inherent randomness of the photon-number distribution of a coherent light source. The scheme was realized by comparing the photon flux of consecutive pulses with a photon-number-resolving detector. The random bit generation rate could reach 2.4 MHz with a system clock of 6.0 MHz, corresponding to a random bit generation efficiency as high as 40%. The random number files passed all the stringent statistical tests.
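The comparison scheme described above can be sketched in a toy simulation: a coherent source has Poissonian photon statistics, and pairing consecutive pulses and comparing their counts yields one unbiased bit per pair, with ties discarded (which is one reason bit-generation efficiency stays below 100%). This is an illustrative model, not the authors' apparatus:

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's algorithm; adequate for modest means such as lam ~ 20
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def random_bits(mean_photons, n_pulses, rng):
    """Compare photon counts of disjoint consecutive pulse pairs."""
    counts = [sample_poisson(mean_photons, rng) for _ in range(n_pulses)]
    bits = []
    for a, b in zip(counts[0::2], counts[1::2]):
        if a > b:
            bits.append(1)
        elif a < b:
            bits.append(0)
        # equal counts: no bit emitted, so efficiency < 100%
    return bits

rng = random.Random(42)
bits = random_bits(20, 2000, rng)
print(len(bits), sum(bits) / len(bits))  # pairs minus ties, mean near 0.5
```

By symmetry of two independent, identically distributed counts, ones and zeros are equally likely, so no post-processing is needed for bias in this idealized model.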

  1. Visuospatial Priming of the Mental Number Line

    Science.gov (United States)

    Stoianov, Ivilin; Kramer, Peter; Umilta, Carlo; Zorzi, Marco

    2008-01-01

    It has been argued that numbers are spatially organized along a "mental number line" that facilitates left-hand responses to small numbers, and right-hand responses to large numbers. We hypothesized that whenever the representations of visual and numerical space are concurrently activated, interactions can occur between them, before response…

  2. Comparative Approaches to Genetic Discrimination: Chasing Shadows?

    Science.gov (United States)

    Joly, Yann; Feze, Ida Ngueng; Song, Lingqiao; Knoppers, Bartha M

    2017-05-01

    Genetic discrimination (GD) is one of the most pervasive issues associated with genetic research and its large-scale implementation. An increasing number of countries have adopted public policies to address this issue. Our research presents a worldwide comparative review and typology of these approaches. We conclude with suggestions for public policy development. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Production of large number of water-cooled excitation coils with improved techniques for multipole magnets of INDUS -2

    International Nuclear Information System (INIS)

    Karmarkar, M.G.; Sreeramulu, K.; Kulshreshta, P.K.

    2003-01-01

    Accelerator multipole magnets are characterized by high field gradients powered with relatively high current excitation coils. Due to space limitations in the magnet core/poles, compact coil geometry is also necessary. The coils are made of several insulated turns using hollow copper conductor. High current densities in these require cooling with low conductivity water. Additionally during operation, these are subjected to thermal fatigue stresses. A large number of coils ( Qty: 650 nos.) having different geometries were required for all multipole magnets like quadrupole (QP), sextupole (SP). Improved techniques for winding, insulation and epoxy consolidation were developed in-house at M D Lab and all coils have been successfully made. Improved technology, production techniques adopted for magnet coils and their inspection are briefly discussed in this paper. (author)

  4. CrossRef Large numbers of cold positronium atoms created in laser-selected Rydberg states using resonant charge exchange

    CERN Document Server

    McConnell, R; Kolthammer, WS; Richerme, P; Müllers, A; Walz, J; Grzonka, D; Zielinski, M; Fitzakerley, D; George, MC; Hessels, EA; Storry, CH; Weel, M

    2016-01-01

    Lasers are used to control the production of highly excited positronium atoms (Ps*). The laser light excites Cs atoms to Rydberg states that have a large cross section for resonant charge-exchange collisions with cold trapped positrons. For each trial with 30 million trapped positrons, more than 700 000 of the created Ps* have trajectories near the axis of the apparatus, and are detected using Stark ionization. This number of Ps* is 500 times higher than realized in an earlier proof-of-principle demonstration (2004 Phys. Lett. B 597 257). A second charge exchange of these near-axis Ps* with trapped antiprotons could be used to produce cold antihydrogen, and this antihydrogen production is expected to be increased by a similar factor.

  5. Validation of the flux number as scaling parameter for top-spray fluidised bed systems

    DEFF Research Database (Denmark)

    Hede, Peter Dybdahl; Bach, P.; Jensen, Anker Degn

    2008-01-01

    2SO4 using Dextrin as binder in three top-spray fluidised bed scales, i.e. a small-scale (type: GEA Aeromatic-Fielder Strea-1), medium-scale (type: Niro MP-1) and large-scale (type: GEA MP-2/3). Following the parameter guidelines adapted from the original patent description, the flux number… Coating conditions with flux number values of 4.5 and 4.7 were, however, successful in terms of agglomeration tendency and match of particle size fractions, but indicated in addition a strong influence of nozzle pressure. The present paper suggests even narrower boundaries for the flux number compared…

  6. Comparative characteristics of electronic, cash and cashless money

    Directory of Open Access Journals (Sweden)

    Ксенія Романівна Петрофанова

    2017-12-01

    The study of the peculiarities of electronic money reveals a large number of theoretical and practical problems, as well as discussion issues of important applied significance. As the number of e-money users increases with the development of e-commerce, protecting their interests requires proper civil and financial regulation. Comparing electronic money with cash and cashless money, we found that electronic money, by combining the benefits of the other two forms, has in effect become a third, specific monetary form.

  7. Source-Independent Quantum Random Number Generation

    Science.gov (United States)

    Cao, Zhu; Zhou, Hongyi; Yuan, Xiao; Ma, Xiongfeng

    2016-01-01

    Quantum random number generators can provide genuine randomness by appealing to the fundamental principles of quantum mechanics. In general, a physical generator contains two parts: a randomness source and its readout. The source is essential to the quality of the resulting random numbers; hence, it needs to be carefully calibrated and modeled to achieve information-theoretically provable randomness. However, in practice, the source is a complicated physical system, such as a light source or an atomic ensemble, and any deviations in the real-life implementation from the theoretical model may affect the randomness of the output. To close this gap, we propose a source-independent scheme for quantum random number generation in which output randomness can be certified, even when the source is uncharacterized and untrusted. In our randomness analysis, we make no assumptions about the dimension of the source. For instance, multiphoton emissions are allowed in optical implementations. Our analysis takes into account the finite-key effect with the composable security definition. In the limit of large data size, the length of the input random seed is exponentially small compared to that of the output random bits. In addition, by modifying a quantum key distribution system, we experimentally demonstrate our scheme and achieve a randomness generation rate of over 5 × 10³ bits/s.

  8. Hyperreal Numbers for Infinite Divergent Series

    OpenAIRE

    Bartlett, Jonathan

    2018-01-01

    Treating divergent series properly has been an ongoing issue in mathematics. However, many of the problems with divergent series stem from the fact that they were discovered prior to the development of a number system that could handle them. The infinities that resulted from divergent series led to contradictions within the real number system, but these contradictions are largely alleviated with the hyperreal number system. Hyperreal numbers provide a framework for dealing with divergent series…

  9. Approximate number and approximate time discrimination each correlate with school math abilities in young children.

    Science.gov (United States)

    Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin

    2016-01-01

    What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar) and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, and comparative psychology, and in computational neuroscience, has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6- to 8-year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. However, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math than time does and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics.
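Discrimination tasks of this kind are standardly modeled with a Weber fraction w: the probability of correctly judging which of two numerosities is larger depends on their ratio, not their absolute difference. A minimal sketch of that standard psychophysical model (the specific w value is illustrative, not an estimate from this study):

```python
import math

def p_correct(n1, n2, w):
    """Standard ANS discrimination model: probability of correctly
    comparing two approximate magnitudes with Weber fraction w."""
    d_prime = abs(n1 - n2) / (w * math.sqrt(n1**2 + n2**2))
    return 0.5 * (1 + math.erf(d_prime / math.sqrt(2)))  # Phi(d')

w = 0.25  # illustrative Weber fraction
print(round(p_correct(8, 4, w), 3))   # 1:2 ratio, easy
print(round(p_correct(8, 6, w), 3))   # 3:4 ratio, harder
```

Because d' depends only on the ratio of the two numerosities, the model predicts that 100 vs. 200 is discriminated as easily as 4 vs. 8, and that smaller individual w values (more precise representations) yield higher accuracy at every ratio.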

  10. Large Scale Community Detection Using a Small World Model

    Directory of Open Access Journals (Sweden)

    Ranjan Kumar Behera

    2017-11-01

    Full Text Available In a social network, small or large communities within the network play a major role in deciding the functionality of the network. Despite diverse definitions, communities in a network may be defined as groups of nodes that are more densely connected to each other than to nodes outside the group. Revealing such hidden communities is a challenging research problem. Real-world social networks exhibit the small-world phenomenon: any two social entities are reachable from each other in a small number of steps. In this paper, nodes are mapped into communities based on random walks in the network. Uncovering communities in large-scale networks, however, is a challenging task owing to the unprecedented growth in the size of social networks. A good number of community detection algorithms based on random walks exist in the literature, but when large-scale social networks are considered, these algorithms are observed to take considerably longer. In this work, with the objective of improving efficiency, a parallel programming framework such as Map-Reduce is used to uncover the hidden communities in a social network. The proposed approach is compared with standard existing community detection algorithms on both synthetic and real-world datasets in order to examine its performance, and it is observed to be more efficient than the existing ones.
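The random-walk idea can be sketched in a minimal, single-machine toy (the graph and function names are invented for illustration, and the paper's Map-Reduce parallelization is not reproduced): short walks started from a node tend to stay inside its community, so visit-frequency signatures separate communities.

```python
import random
from collections import Counter

def walk_signature(graph, start, n_walks, length, rng):
    # Visit frequencies of many short random walks from `start`.
    counts = Counter()
    for _ in range(n_walks):
        node = start
        for _ in range(length):
            node = rng.choice(graph[node])
            counts[node] += 1
    total = n_walks * length
    return {v: c / total for v, c in counts.items()}

def assign_communities(graph, seeds, n_walks=500, length=3, seed=0):
    rng = random.Random(seed)
    sigs = {u: walk_signature(graph, u, n_walks, length, rng) for u in graph}

    def sq_dist(a, b):
        keys = set(a) | set(b)
        return sum((a.get(k, 0.0) - b.get(k, 0.0)) ** 2 for k in keys)

    # Label each node with the seed whose signature it resembles most.
    return {u: min(seeds, key=lambda s: sq_dist(sigs[u], sigs[s])) for u in graph}

# Two triangles joined by a single bridge edge (2-3).
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
     3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = assign_communities(g, seeds=[0, 5])
print(labels)  # e.g. nodes 0 and 1 share seed 0's label; 4 and 5 share seed 5's
```

In a Map-Reduce setting, the per-node walk simulations are independent and map naturally onto parallel tasks, which is the source of the efficiency gain the abstract describes.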

  11. Comparative Genomics of Chrysochromulina Ericina Virus and Other Microalga-Infecting Large DNA Viruses Highlights Their Intricate Evolutionary Relationship with the Established Mimiviridae Family.

    Science.gov (United States)

    Gallot-Lavallée, Lucie; Blanc, Guillaume; Claverie, Jean-Michel

    2017-07-15

    Chrysochromulina ericina virus CeV-01B (CeV) was isolated from Norwegian coastal waters in 1998. Its icosahedral particle is 160 nm in diameter and encloses a 474-kb double-stranded DNA (dsDNA) genome. This virus, although infecting a microalga (the haptophyceae Haptolina ericina , formerly Chrysochromulina ericina ), is phylogenetically related to members of the Mimiviridae family, initially established with the acanthamoeba-infecting mimivirus and megavirus as prototypes. This family was later split into two genera ( Mimivirus and Cafeteriavirus ) following the characterization of a virus infecting the heterotrophic stramenopile Cafeteria roenbergensis (CroV). CeV, as well as two of its close relatives, which infect the unicellular photosynthetic eukaryotes Phaeocystis globosa (Phaeocystis globosa virus [PgV]) and Aureococcus anophagefferens (Aureococcus anophagefferens virus [AaV]), are currently unclassified by the International Committee on Viral Taxonomy (ICTV). The detailed comparative analysis of the CeV genome presented here confirms the phylogenetic affinity of this emerging group of microalga-infecting viruses with the Mimiviridae but argues in favor of their classification inside a distinct clade within the family. Although CeV, PgV, and AaV share more common features among them than with the larger Mimiviridae , they also exhibit a large complement of unique genes, attesting to their complex evolutionary history. We identified several gene fusion events and cases of convergent evolution involving independent lateral gene acquisitions. Finally, CeV possesses an unusual number of inteins, some of which are closely related despite being inserted in nonhomologous genes. This appears to contradict the paradigm of allele-specific inteins and suggests that the Mimiviridae are especially efficient in spreading inteins while enlarging their repertoire of homing genes. IMPORTANCE Although it infects the microalga Chrysochromulina ericina , CeV is more closely

  12. Intuitive numbers guide decisions

    Directory of Open Access Journals (Sweden)

    Ellen Peters

    2008-12-01

    Full Text Available Measuring reaction times to number comparisons is thought to reveal a processing stage in elementary numerical cognition linked to internal, imprecise representations of number magnitudes. These intuitive representations of the mental number line have been demonstrated across species and human development but have been little explored in decision making. This paper develops and tests hypotheses about the influence of such evolutionarily ancient, intuitive numbers on human decisions. We demonstrate that individuals with more precise mental-number-line representations are higher in numeracy (number skills), consistent with previous research with children. Individuals with more precise representations (compared to those with less precise representations) also were more likely to choose larger, later amounts over smaller, immediate amounts, particularly with a larger proportional difference between the two monetary outcomes. In addition, they were more likely to choose an option with a larger proportional but smaller absolute difference compared to those with less precise representations. These results are consistent with intuitive number representations underlying (a) perceived differences between numbers, (b) the extent to which proportional differences are weighed in decisions, and, ultimately, (c) the valuation of decision options. Human decision processes involving numbers important to health and financial matters may be rooted in elementary, biological processes shared with other species.

  13. Total radiative width (Γγ) as a function of mass number A

    International Nuclear Information System (INIS)

    Huynh, V.D.; Barros, S. De; Chevillon, P.L.; Julien, J.; Poittevin, G. Le; Morgenstern, J.; Samour, C.

    1967-01-01

    The total radiative width Γγ was measured accurately for a large number of nuclei. These values, which are important for reactor calculations, are difficult to determine. The fluctuations in Γγ from resonance to resonance in the same nucleus are discussed in terms of level parity and the de-excitation scheme. The authors compare the experimental values with those predicted by theory. (author) [fr

  14. Method for the radioimmunoassay of large numbers of samples using quantitative autoradiography of multiple-well plates

    International Nuclear Information System (INIS)

    Luner, S.J.

    1978-01-01

    A double antibody assay for thyroxine using ¹²⁵I as the label was carried out on 10-μl samples in Microtiter V-plates. After an additional centrifugation to compact the precipitates, the plates were placed in contact with x-ray film overnight and the spots were scanned. In the 20 to 160 ng/ml range, the average coefficient of variation for thyroxine concentration determined from film-spot optical density was 11 percent, compared to 4.8 percent obtained using a standard gamma counter. Eliminating the need for each sample to spend on the order of 1 min in a crystal-well detector makes the method convenient for large-scale applications involving more than 3000 samples per day.

  15. Investigating the Variability in Cumulus Cloud Number as a Function of Subdomain Size and Organization using large-domain LES

    Science.gov (United States)

    Neggers, R.

    2017-12-01

    Recent advances in supercomputing have introduced a "grey zone" in the representation of cumulus convection in general circulation models, in which this process is partially resolved. Cumulus parameterizations need to be made scale-aware and scale-adaptive to deal, conceptually and practically, with this situation. A potential way forward are schemes formulated in terms of discretized Cloud Size Densities, or CSDs. Advantages include (i) the introduction of scale-awareness at the foundation of the scheme, and (ii) the possibility of size-filtering parameterized convective transport and clouds. The CSD is a new variable that requires closure; this concerns its shape and its range, but also the variability in cloud number that can appear due to (i) subsampling effects and (ii) organization in a cloud field. The goal of this study is to gain insight by means of sub-domain analyses of various large-domain LES realizations of cumulus cloud populations. For a series of three-dimensional snapshots, each with a different degree of organization, the cloud size distribution is calculated in all subdomains, for a range of subdomain sizes. The standard deviation of the number of clouds of a given size is found to decrease with subdomain size, following a power-law scaling corresponding to an inverse-linear dependence. Cloud number variability also increases with cloud size; this reflects that subsampling affects the largest clouds first, due to their typically larger neighbor spacing. Rewriting this dependence in terms of two dimensionless groups, obtained by dividing by cloud number and cloud size respectively, yields a data collapse. Organization in the cloud field is found to act on top of this primary dependence, by enhancing the cloud number variability at the smaller sizes. This behavior reflects that small clouds start to "live" on top of larger structures such as cold pools, favoring or inhibiting their formation (as illustrated by the attached figure of cloud mask
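The subsampling contribution to this variability can be reproduced with a spatially random (Poisson) point field, absent any organization: the relative spread of counts in a subdomain falls off inversely with subdomain size, and organization then adds variability on top of this baseline. A toy sketch under that assumption, not the LES analysis itself:

```python
import random

def relative_count_std(points, box, n_samples, rng):
    """Std/mean of point counts over randomly placed square subdomains
    of edge length `box` inside the unit square."""
    counts = []
    for _ in range(n_samples):
        x0 = rng.random() * (1.0 - box)
        y0 = rng.random() * (1.0 - box)
        c = sum(1 for x, y in points
                if x0 <= x < x0 + box and y0 <= y < y0 + box)
        counts.append(c)
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return (var ** 0.5) / mean

rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(2000)]
small = relative_count_std(pts, 0.1, 400, rng)
big = relative_count_std(pts, 0.3, 400, rng)
print(small, big)  # relative variability drops as the subdomain grows
```

For Poisson statistics the count in a box of edge L has mean proportional to L² and standard deviation proportional to L, so the relative spread scales as 1/L, consistent with the inverse-linear dependence reported in the abstract.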

  16. Large-scale Comparative Sentiment Analysis of News Articles

    OpenAIRE

    Wanner, Franz; Rohrdantz, Christian; Mansmann, Florian; Stoffel, Andreas; Oelke, Daniela; Krstajic, Milos; Keim, Daniel; Luo, Dongning; Yang, Jing; Atkinson, Martin

    2009-01-01

    Online media offer great possibilities to retrieve more news items than ever. Human capabilities to read all of these news items, however, have not increased to match these technical developments. To bridge this gap, this poster presents a visual analytics tool for conducting semi-automatic sentiment analysis of large news feeds. The tool retrieves and analyzes the news of two categories (Terrorist Attack and Natural Disasters), as well as news belonging to both categories, of the Europe Media ...

  17. GenoSets: visual analytic methods for comparative genomics.

    Directory of Open Access Journals (Sweden)

    Aurora A Cain

    Full Text Available Many important questions in biology are, fundamentally, comparative, and this extends to our analysis of a growing number of sequenced genomes. Existing genomic analysis tools are often organized around literal views of genomes as linear strings. Even when information is highly condensed, these views grow cumbersome as larger numbers of genomes are added. Data aggregation and summarization methods from the field of visual analytics can provide abstracted comparative views, suitable for sifting large multi-genome datasets to identify critical similarities and differences. We introduce a software system for visual analysis of comparative genomics data. The system automates the process of data integration, and provides the analysis platform to identify and explore features of interest within these large datasets. GenoSets borrows techniques from business intelligence and visual analytics to provide a rich interface of interactive visualizations supported by a multi-dimensional data warehouse. In GenoSets, visual analytic approaches are used to enable querying based on orthology, functional assignment, and taxonomic or user-defined groupings of genomes. GenoSets links this information together with coordinated, interactive visualizations for both detailed and high-level categorical analysis of summarized data. GenoSets has been designed to simplify the exploration of multiple genome datasets and to facilitate reasoning about genomic comparisons. Case examples are included showing the use of this system in the analysis of 12 Brucella genomes. GenoSets software and the case study dataset are freely available at http://genosets.uncc.edu. We demonstrate that the integration of genomic data using a coordinated multiple view approach can simplify the exploration of large comparative genomic data sets, and facilitate reasoning about comparisons and features of interest.

  18. Large Eddy Simulation of Film-Cooling Jets

    Science.gov (United States)

    Iourokina, Ioulia

    2005-11-01

    Large Eddy Simulation of inclined jets issuing into a turbulent boundary-layer crossflow has been performed. The simulation models the film-cooling experiments of Pietrzyk et al. (J. of Turb., 1989), consisting of a large plenum feeding an array of jets inclined at 35° to the flat surface, with a pitch of 3D and L/D=3.5. The blowing ratio is 0.5 with unity density ratio. The numerical method is a hybrid combining an external compressible solver with a low-Mach-number code for the plenum and film holes. Vorticity dynamics pertinent to jet-in-crossflow interactions is analyzed and three-dimensional vortical structures are revealed. Turbulence statistics are compared to the experimental data. The turbulence production due to shearing in the crossflow is compared to that within the jet hole. The influence of three-dimensional coherent structures on the wall heat transfer is investigated and strategies to increase film-cooling performance are discussed.

  19. A comparative economic assessment of hydrogen production from large central versus smaller distributed plant in a carbon constrained world

    International Nuclear Information System (INIS)

    Nguyen, Y.V.; Ngo, Y.A.; Tinkler, M.J.; Cowan, N.

    2003-01-01

    This paper compares the economics of producing hydrogen at large central plants versus smaller distributed plants at user sites. The economics of two types of central plant, each producing 100 million standard cubic feet of hydrogen per day, based on electrolysis and natural-gas steam-reforming technologies, are discussed. The additional cost of controlling CO₂ emissions from the natural-gas steam-reforming plant is included in the analysis, in order to satisfy the need to live in a future carbon-constrained world. The cost of delivering hydrogen from the large central plant to user sites in a large metropolitan area is highlighted, and the delivered cost is compared to the cost from on-site distributed-generation plants. Five types of distributed-generation plant, based on proton-exchange-membrane electrolysis, alkaline electrolysis and advanced steam reforming, are analysed and discussed. Two criteria were used to rank the various hydrogen production options: the cost of production and the price of hydrogen needed to achieve an acceptable return on investment. (author)

  20. The influence of the Kubo number on the transport of energetic particles

    International Nuclear Information System (INIS)

    Shalchi, A

    2016-01-01

    We discuss the interaction between charged energetic particles and magnetized plasmas by using analytical theory. Based on the unified nonlinear transport (UNLT) theory we compute the diffusion coefficient across a large-scale magnetic field. To achieve analytical tractability we use a simple Gaussian approach to model the turbulent magnetic fields. We show that the perpendicular diffusion coefficient depends only on two parameters, namely the Kubo number and the parallel mean free path. We combine the aforementioned turbulence model with the UNLT theory and we solve the corresponding integral equation numerically to show how these two parameters control the perpendicular diffusion coefficient. Furthermore, we consider two extreme cases, namely strong and suppressed pitch-angle scattering. For each case we consider small and large Kubo numbers to achieve a further simplification. All our analytical findings are compared with formulas which are known in diffusion theory. (paper)
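
    The Kubo number referred to above is conventionally defined in the magnetic-turbulence literature as (standard definition, supplied here for the reader, not taken from the abstract):

```latex
K \;=\; \frac{\delta B}{B_0}\,\frac{\ell_\parallel}{\ell_\perp}\,,
```

    where δB is the turbulent field strength, B0 the mean field, and ℓ∥, ℓ⊥ the parallel and perpendicular correlation lengths of the turbulence; K ≪ 1 corresponds to the quasilinear regime and K ≫ 1 to the nonlinear (percolative) regime.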

  1. Escape from washing out of baryon number in a two-zero-texture general Zee model compatible with the large mixing angle MSW solution

    International Nuclear Information System (INIS)

    Hasegawa, K.; Lim, C.S.; Ogure, K.

    2003-01-01

    We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario

  2. Escape from washing out of baryon number in a two-zero-texture general Zee model compatible with the large mixing angle MSW solution

    Science.gov (United States)

    Hasegawa, K.; Lim, C. S.; Ogure, K.

    2003-09-01

    We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario.

  3. Escape from washing out of baryon number in a two-zero-texture general Zee model compatible with the large mixing angle MSW solution

    OpenAIRE

    Hasegawa, K.; Lim, C. S.; Ogure, K.

    2003-01-01

    We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario.

  4. Navigating the complexities of qualitative comparative analysis: case numbers, necessity relations, and model ambiguities.

    Science.gov (United States)

    Thiem, Alrik

    2014-12-01

    In recent years, the method of Qualitative Comparative Analysis (QCA) has been enjoying increasing levels of popularity in evaluation and directly neighboring fields. Its holistic approach to causal data analysis resonates with researchers whose theories posit complex conjunctions of conditions and events. However, due to QCA's relative immaturity, some of its technicalities and objectives have not yet been well understood. In this article, I seek to raise awareness of six pitfalls of employing QCA with regard to the following three central aspects: case numbers, necessity relations, and model ambiguities. Most importantly, I argue that case numbers are irrelevant to the methodological choice of QCA or any of its variants, that necessity is not as simple a concept as many methodologists have suggested, and that doubt must be cast on the determinacy of virtually all results presented in past QCA research. By means of empirical examples from published articles, I explain the background of these pitfalls and introduce appropriate procedures, partly with reference to current software, that help avoid them. QCA carries great potential for scholars in evaluation and directly neighboring areas interested in the analysis of complex dependencies in configurational data. If users are alert to the pitfalls introduced in this article and avoid mechanistic adherence to doubtful "standards of good practice" at this stage of development, research with QCA will gain in quality, creating a more solid foundation for cumulative knowledge generation and well-informed policy decisions. © The Author(s) 2014.

  5. Large transverse momentum phenomena

    International Nuclear Information System (INIS)

    Brodsky, S.J.

    1977-09-01

    It is particularly significant that the quantum numbers of the leading particles are strongly correlated with the quantum numbers of the incident hadrons, indicating that the valence quarks themselves are transferred to large p_T. The crucial question is how they get there. Various hadron reactions are discussed, covering the structure of exclusive reactions, inclusive reactions, normalization of inclusive cross sections, charge correlations, and jet production at large transverse momentum. 46 references

  6. Global repeat discovery and estimation of genomic copy number in a large, complex genome using a high-throughput 454 sequence survey

    Directory of Open Access Journals (Sweden)

    Varala Kranthi

    2007-05-01

    Full Text Available Abstract Background Extensive computational and database tools are available to mine genomic and genetic databases for model organisms, but little genomic data is available for many species of ecological or agricultural significance, especially those with large genomes. Genome surveys using conventional sequencing techniques are powerful, particularly for detecting sequences present in many copies per genome. However these methods are time-consuming and have potential drawbacks. High throughput 454 sequencing provides an alternative method by which much information can be gained quickly and cheaply from high-coverage surveys of genomic DNA. Results We sequenced 78 million base-pairs of randomly sheared soybean DNA which passed our quality criteria. Computational analysis of the survey sequences provided global information on the abundant repetitive sequences in soybean. The sequence was used to determine the copy number across regions of large genomic clones or contigs and discover higher-order structures within satellite repeats. We have created an annotated, online database of sequences present in multiple copies in the soybean genome. The low bias of pyrosequencing against repeat sequences is demonstrated by the overall composition of the survey data, which matches well with past estimates of repetitive DNA content obtained by DNA re-association kinetics (Cot analysis. Conclusion This approach provides a potential aid to conventional or shotgun genome assembly, by allowing rapid assessment of copy number in any clone or clone-end sequence. In addition, we show that partial sequencing can provide access to partial protein-coding sequences.
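
    The copy-number estimate described above amounts to comparing the sequencing depth observed over a repeat with the genome-wide depth expected for a single-copy locus. A minimal sketch of that arithmetic (the function name, and the genome-size figure in the usage note, are illustrative assumptions, not taken from the paper):

```python
def estimate_copy_number(matching_bp, repeat_len_bp, survey_bp, genome_bp):
    """Estimate genomic copy number of a repeat from a random sequence survey.

    matching_bp   -- survey bases that align to the repeat consensus
    repeat_len_bp -- length of the repeat consensus
    survey_bp     -- total survey bases sequenced
    genome_bp     -- haploid genome size
    """
    # Depth expected for a single-copy locus (fold coverage of the survey).
    single_copy_depth = survey_bp / genome_bp
    # Depth actually observed over the repeat consensus.
    observed_depth = matching_bp / repeat_len_bp
    return observed_depth / single_copy_depth
```

    With the 78 Mbp survey and an assumed ~1.1 Gbp soybean genome, single-copy depth is about 0.07x, so a 360 bp satellite monomer attracting 25 kbp of survey sequence would be estimated at roughly a thousand copies.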

  7. Discontinuous Galerkin methodology for Large-Eddy Simulations of wind turbine airfoils

    DEFF Research Database (Denmark)

    Frére, A.; Sørensen, Niels N.; Hillewaert, K.

    2016-01-01

    This paper aims at evaluating the potential of the Discontinuous Galerkin (DG) methodology for Large-Eddy Simulation (LES) of wind turbine airfoils. The DG method has shown high accuracy, excellent scalability and capacity to handle unstructured meshes. It is however not used in the wind energy...... sector yet. The present study aims at evaluating this methodology on an application which is relevant for that sector and focuses on blade section aerodynamics characterization. To be pertinent for large wind turbines, the simulations would need to be at low Mach numbers (M ≤ 0.3) where compressible...... at low and high Reynolds numbers and compares the results to state-of-the-art models used in industry, namely the panel method (XFOIL with boundary layer modeling) and Reynolds Averaged Navier-Stokes (RANS). At low Reynolds number (Re = 6 × 104), involving laminar boundary layer separation and transition...

  8. Comparing spatial regression to random forests for large environmental data sets

    Science.gov (United States)

    Environmental data may be “large” due to number of records, number of covariates, or both. Random forests has a reputation for good predictive performance when using many covariates, whereas spatial regression, when using reduced rank methods, has a reputatio...

  9. Evolutionary Pattern of N-Glycosylation Sequon Numbers in Eukaryotic ABC Protein Superfamilies

    Directory of Open Access Journals (Sweden)

    R. Shyama Prasad Rao

    2010-02-01

    Full Text Available Many proteins contain a large number of NXS/T sequences (where X is any amino acid except proline), which are the potential sites of asparagine (N)-linked glycosylation. However, the patterns of occurrence of these N-glycosylation sequons in related proteins or groups of proteins, and their underlying causes, have largely been unexplored. We computed the actual and probabilistic occurrence of NXS/T sequons in ABC protein superfamilies from eight diverse eukaryotic organisms. The ABC proteins contained significantly higher NXS/T sequon numbers compared to the respective genome-wide average, but the sequon density was significantly lower owing to the increase in protein size and decrease in sequon-specific amino acids. However, mammalian ABC proteins have significantly higher sequon density, and both serine- and threonine-containing sequons (NXS and NXT) have been positively selected, contrary to recent findings of an exclusively threonine-specific Darwinian selection of sequons in proteins. The occurrence of sequons was positively correlated with the frequency of sequon-specific amino acids and negatively correlated with proline and the NPS/T sequences. Further, the NPS/T sequences were significantly higher than expected in plant ABC proteins, which have the lowest number of NXS/T sequons. Accordingly, compared to overall proteins, N-glycosylation sequons in ABC protein superfamilies have a distinct pattern of occurrence, and the results are discussed in an evolutionary perspective.
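
    The NXS/T sequons counted in this study are the tripeptides Asn-X-Ser/Thr with X ≠ Pro. A minimal sketch of how such sequons, and the non-glycosylated NPS/T tripeptides, can be counted in a protein sequence (helper names are illustrative, not from the paper):

```python
import re

def count_nxst_sequons(protein_seq):
    """Count NXS/T sequons: Asn, any residue except Pro, then Ser or Thr.

    A zero-width lookahead is used so that overlapping sequons
    (e.g. both sequons in "NNST") are counted.
    """
    return len(re.findall(r"(?=N[^P][ST])", protein_seq.upper()))

def count_npst(protein_seq):
    """Count NPS/T tripeptides (Asn-Pro-Ser/Thr), which block glycosylation."""
    return len(re.findall(r"(?=NP[ST])", protein_seq.upper()))
```

    For example, count_nxst_sequons("NAS") is 1, while count_nxst_sequons("NPT") is 0 because proline in the X position disqualifies the sequon.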

  10. Large deviations

    CERN Document Server

    Varadhan, S R S

    2016-01-01

    The theory of large deviations deals with rates at which probabilities of certain events decay as a natural parameter in the problem varies. This book, which is based on a graduate course on large deviations at the Courant Institute, focuses on three concrete sets of examples: (i) diffusions with small noise and the exit problem, (ii) large time behavior of Markov processes and their connection to the Feynman-Kac formula and the related large deviation behavior of the number of distinct sites visited by a random walk, and (iii) interacting particle systems, their scaling limits, and large deviations from their expected limits. For the most part the examples are worked out in detail, and in the process the subject of large deviations is developed. The book will give the reader a flavor of how large deviation theory can help in problems that are not posed directly in terms of large deviations. The reader is assumed to have some familiarity with probability, Markov processes, and interacting particle systems.

  11. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  12. Comparing the Effectiveness of Self-Paced and Collaborative Frame-of-Reference Training on Rater Accuracy in a Large-Scale Writing Assessment

    Science.gov (United States)

    Raczynski, Kevin R.; Cohen, Allan S.; Engelhard, George, Jr.; Lu, Zhenqiu

    2015-01-01

    There is a large body of research on the effectiveness of rater training methods in the industrial and organizational psychology literature. Less has been reported in the measurement literature on large-scale writing assessments. This study compared the effectiveness of two widely used rater training methods--self-paced and collaborative…

  13. A comparative study of all-vanadium and iron-chromium redox flow batteries for large-scale energy storage

    Science.gov (United States)

    Zeng, Y. K.; Zhao, T. S.; An, L.; Zhou, X. L.; Wei, L.

    2015-12-01

    The promise of redox flow batteries (RFBs) utilizing soluble redox couples, such as all vanadium ions as well as iron and chromium ions, is becoming increasingly recognized for large-scale energy storage of renewables such as wind and solar, owing to their unique advantages including scalability, intrinsic safety, and long cycle life. An ongoing question associated with these two RFBs is determining whether the vanadium redox flow battery (VRFB) or iron-chromium redox flow battery (ICRFB) is more suitable and competitive for large-scale energy storage. To address this concern, a comparative study has been conducted for the two types of battery based on their charge-discharge performance, cycle performance, and capital cost. It is found that: i) the two batteries have similar energy efficiencies at high current densities; ii) the ICRFB exhibits a higher capacity decay rate than does the VRFB; and iii) the ICRFB is much less expensive in capital costs when operated at high power densities or at large capacities.

  14. Comparative analysis of the number of scientific papers entered to INIS: the case Mexico versus Brazil-Argentina

    International Nuclear Information System (INIS)

    Contreras, T.J.; Botello C, R.

    1994-01-01

    A comparative analysis of the scientific papers that the INIS National Center in Mexico has entered into the International Nuclear Information System from 1976 to date is presented. The number of documents is emphasized, as well as the participating institutions and the diversity of subjects. The results show that the input of documents from Mexico is low when compared with two other Latin American countries, Brazil and Argentina, considering the production of technical and scientific information in the field of energy sources. For this reason, and on the basis of the new thematic scope of INIS covering the environmental, economic and social aspects of energy, the aim is to establish a formal agreement with the participating institutions to gather the documentation they generate and remit it to the CIDN for inclusion in INIS. (Author)

  15. Large impacted upper ureteral calculi: A comparative study between retrograde ureterolithotripsy and percutaneous antegrade ureterolithotripsy in the modified lateral position.

    Science.gov (United States)

    Moufid, Kamal; Abbaka, Najib; Touiti, Driss; Adermouch, Latifa; Amine, Mohamed; Lezrek, Mohammed

    2013-07-01

    The treatment for patients with large impacted proximal ureteral stones remains controversial, especially at institutions with limited resources. The aim of this study is to compare and evaluate the outcomes and complications of two main treatment procedures for impacted proximal ureteral calculi: retrograde ureterolithotripsy (URS) and percutaneous antegrade ureterolithotripsy (Perc-URS). Our inclusion criteria were solitary, radiopaque calculi, >15 mm in size, in a functioning renal unit. Only those patients in whom the attempt at passing a guidewire or catheter beyond the calculus failed were included in this study. Between January 2007 and July 2011, a total of 52 patients (13 women and 39 men) with large impacted upper-ureteral calculi >15 mm and meeting the inclusion criteria were selected. Of these, Perc-URS was done in 22 patients (group 1) while retrograde ureteroscopy was performed in 30 patients (group 2). We analyzed operative time, incidence of complications during and after surgery, the number of postoperative recovery days, median total costs per patient per procedure, and the stone-free rate immediately, after 5 days, and after 1 month. Bivariate analysis used the Student t-test and the Mann-Whitney test to compare two means and Chi-square and Fisher's exact tests to compare two percentages. The significance level was set at 0.05. The mean age was 42.3 years (range 22-69). The mean stone sizes were 34 ± 1.2 mm and 29.3 ± 1.8 mm in groups 1 and 2, respectively. In the Perc-URS group, 21 patients (95.45%) had complete calculus clearance through a single tract in one session of percutaneous surgery, whereas in the URS group, only 20 patients (66.7%) had complete stone clearance (P = 0.007). The mean operative time was higher in the Perc-URS group compared to group 2 (66.5 ± 21.7 vs. 52.13 ± 17.3 min, respectively; P = 0.013). Complications encountered in group 1 included transient postoperative fever (2 pts) and simple urine outflow (2

  16. Source-Independent Quantum Random Number Generation

    Directory of Open Access Journals (Sweden)

    Zhu Cao

    2016-02-01

    Full Text Available Quantum random number generators can provide genuine randomness by appealing to the fundamental principles of quantum mechanics. In general, a physical generator contains two parts: a randomness source and its readout. The source is essential to the quality of the resulting random numbers; hence, it needs to be carefully calibrated and modeled to achieve information-theoretical provable randomness. However, in practice, the source is a complicated physical system, such as a light source or an atomic ensemble, and any deviations in the real-life implementation from the theoretical model may affect the randomness of the output. To close this gap, we propose a source-independent scheme for quantum random number generation in which output randomness can be certified, even when the source is uncharacterized and untrusted. In our randomness analysis, we make no assumptions about the dimension of the source. For instance, multiphoton emissions are allowed in optical implementations. Our analysis takes into account the finite-key effect with the composable security definition. In the limit of large data size, the length of the input random seed is exponentially small compared to that of the output random bit. In addition, by modifying a quantum key distribution system, we experimentally demonstrate our scheme and achieve a randomness generation rate of over 5×10^3 bit/s.

  17. A large deviations approach to the transient of the Erlang loss model

    NARCIS (Netherlands)

    Mandjes, M.R.H.; Ridder, Annemarie

    2001-01-01

    This paper deals with the transient behavior of the Erlang loss model. After scaling both arrival rate and number of trunks, an asymptotic analysis of the blocking probability is given. Apart from that, the most likely path to blocking is given. Compared to Shwartz and Weiss [Large Deviations for

  18. Comparative analysis of large biomass & coal co-utilization units

    NARCIS (Netherlands)

    Liszka, M.; Nowak, G.; Ptasinski, K.J.; Favrat, D.; Marechal, F.

    2010-01-01

    The co-utilization of coal and biomass in large power units is considered in many countries (e.g. Poland) as fast and effective way of increasing renewable energy share in the fuel mix. Such a method of biomass use is especially suitable for power systems where solid fuels (hard coal, lignite) are

  19. Higher representations: Confinement and large N

    International Nuclear Information System (INIS)

    Sannino, Francesco

    2005-01-01

    We investigate the confining phase transition as a function of temperature for theories with dynamical fermions in the two index symmetric and antisymmetric representation of the gauge group. By studying the properties of the center of the gauge group we predict for an even number of colors a confining phase transition, if second order, to be in the universality class of Ising in three dimensions. This is due to the fact that the center group symmetry does not break completely for an even number of colors. For an odd number of colors the center group symmetry breaks completely. This pattern remains unaltered at a large number of colors. The confining/deconfining phase transition in these theories at large and finite N does not map onto that of super Yang-Mills theory. We extend the Polyakov loop effective theory to describe the confining phase transition of the theories studied here for a generic number of colors. Our results are not modified when adding matter in the same higher dimensional representations of the gauge group. We comment on the interplay between confinement and chiral symmetry in these theories and suggest that they are ideal laboratories to shed light on this issue also for ordinary QCD. We compare the free energy as a function of temperature for different theories. We find that the conjectured thermal inequality between the infrared and ultraviolet degrees of freedom computed using the free energy does not lead to new constraints on asymptotically free theories with fermions in higher dimensional representations of the gauge group. Since the center of the gauge group is an important quantity for the confinement properties at zero temperature our results are relevant here as well

  20. arrayCGHbase: an analysis platform for comparative genomic hybridization microarrays

    Directory of Open Access Journals (Sweden)

    Moreau Yves

    2005-05-01

    Full Text Available Abstract Background The availability of the human genome sequence as well as the large number of physically accessible oligonucleotides, cDNA, and BAC clones across the entire genome has triggered and accelerated the use of several platforms for analysis of DNA copy number changes, amongst others microarray comparative genomic hybridization (arrayCGH). One of the challenges inherent to this new technology is the management and analysis of large numbers of data points generated in each individual experiment. Results We have developed arrayCGHbase, a comprehensive analysis platform for arrayCGH experiments consisting of a MIAME (Minimal Information About a Microarray Experiment supportive database using MySQL underlying a data mining web tool, to store, analyze, interpret, compare, and visualize arrayCGH results in a uniform and user-friendly format. Following its flexible design, arrayCGHbase is compatible with all existing and forthcoming arrayCGH platforms. Data can be exported in a multitude of formats, including BED files to map copy number information on the genome using the Ensembl or UCSC genome browser. Conclusion ArrayCGHbase is a web based and platform independent arrayCGH data analysis tool, that allows users to access the analysis suite through the internet or a local intranet after installation on a private server. ArrayCGHbase is available at http://medgen.ugent.be/arrayCGHbase/.

  1. Some types of parent number talk count more than others: relations between parents' input and children's cardinal-number knowledge.

    Science.gov (United States)

    Gunderson, Elizabeth A; Levine, Susan C

    2011-09-01

    Before they enter preschool, children vary greatly in their numerical and mathematical knowledge, and this knowledge predicts their achievement throughout elementary school (e.g. Duncan et al., 2007; Ginsburg & Russell, 1981). Therefore, it is critical that we look to the home environment for parental inputs that may lead to these early variations. Recent work has shown that the amount of number talk that parents engage in with their children is robustly related to a critical aspect of mathematical development - cardinal-number knowledge (e.g. knowing that the word 'three' refers to sets of three entities; Levine, Suriyakham, Rowe, Huttenlocher & Gunderson, 2010). The present study characterizes the different types of number talk that parents produce and investigates which types are most predictive of children's later cardinal-number knowledge. We find that parents' number talk involving counting or labeling sets of present, visible objects is related to children's later cardinal-number knowledge, whereas other types of parent number talk are not. In addition, number talk that refers to large sets of present objects (i.e. sets of size 4 to 10 that fall outside children's ability to track individual objects) is more robustly predictive of children's later cardinal-number knowledge than talk about smaller sets. The relation between parents' number talk about large sets of present objects and children's cardinal-number knowledge remains significant even when controlling for factors such as parents' socioeconomic status and other measures of parents' number and non-number talk. © 2011 Blackwell Publishing Ltd.

  2. Numerical simulation of large deformation polycrystalline plasticity

    International Nuclear Information System (INIS)

    Inal, K.; Neale, K.W.; Wu, P.D.; MacEwen, S.R.

    2000-01-01

    A finite element model based on crystal plasticity has been developed to simulate the stress-strain response of sheet metal specimens in uniaxial tension. Each material point in the sheet is considered to be a polycrystalline aggregate of FCC grains. The Taylor theory of crystal plasticity is assumed. The numerical analysis incorporates parallel computing features enabling simulations of realistic models with large numbers of grains. Simulations have been carried out for the AA3004-H19 aluminium alloy and the results are compared with experimental data. (author)

  3. Fourier analysis in combinatorial number theory

    International Nuclear Information System (INIS)

    Shkredov, Il'ya D

    2010-01-01

    In this survey applications of harmonic analysis to combinatorial number theory are considered. Discussion topics include classical problems of additive combinatorics, colouring problems, higher-order Fourier analysis, theorems about sets of large trigonometric sums, results on estimates for trigonometric sums over subgroups, and the connection between combinatorial and analytic number theory. Bibliography: 162 titles.

  4. Fourier analysis in combinatorial number theory

    Energy Technology Data Exchange (ETDEWEB)

    Shkredov, Il'ya D [M. V. Lomonosov Moscow State University, Moscow (Russian Federation)]

    2010-09-16

    In this survey applications of harmonic analysis to combinatorial number theory are considered. Discussion topics include classical problems of additive combinatorics, colouring problems, higher-order Fourier analysis, theorems about sets of large trigonometric sums, results on estimates for trigonometric sums over subgroups, and the connection between combinatorial and analytic number theory. Bibliography: 162 titles.

  5. Improved atom number with a dual color magneto-optical trap

    International Nuclear Information System (INIS)

    Cao Qiang; Luo Xin-Yu; Gao Kui-Yi; Wang Xiao-Rui; Wang Ru-Quan; Chen Dong-Min

    2012-01-01

    We demonstrate a novel dual color magneto-optical trap (MOT), which uses two sets of overlapping laser beams to cool and trap 87Rb atoms. The volume of the cold cloud in the dual color MOT is strongly dependent on the frequency difference of the laser beams and can be significantly larger than that in the normal MOT with single-frequency MOT beams. Our experiment shows that the dual color MOT has the same loading rate as the normal MOT, but a much longer loading time, leading to a threefold increase in the number of trapped atoms. This indicates that the larger number is caused by reduced light-induced loss. The dual color MOT is very useful in experiments where both a high vacuum level and a large atom number are required, such as single-chamber quantum memory and Bose-Einstein condensation (BEC) experiments. Compared to the popular dark spontaneous-force optical trap (dark SPOT) technique, our approach is technically simpler and more suitable for low-power laser systems. (rapid communication)

  6. Large Display Interaction Using Mobile Devices

    OpenAIRE

    Bauer, Jens

    2015-01-01

    Large displays become more and more popular, due to dropping prices. Their size and high resolution leverage collaboration, and they are capable of displaying even large datasets in one view. This becomes even more interesting as the number of big data applications increases. The increased screen size and other properties of large displays pose new challenges to the Human-Computer-Interaction with these screens. This includes issues such as limited scalability to the number of users, diver...

  7. A Theory of Evolving Natural Constants Based on the Unification of General Theory of Relativity and Dirac's Large Number Hypothesis

    International Nuclear Information System (INIS)

    Peng Huanwu

    2005-01-01

    Taking Dirac's large number hypothesis as true, we have shown [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703] the inconsistency of applying Einstein's theory of general relativity with fixed gravitation constant G to cosmology, and a modified theory for varying G is found, which reduces to Einstein's theory outside the gravitating body for phenomena of short duration in small distances, thereby agrees with all the crucial tests formerly supporting Einstein's theory. The modified theory, when applied to the usual homogeneous cosmological model, gives rise to a variable cosmological tensor term determined by the derivatives of G, in place of the cosmological constant term usually introduced ad hoc. Without any free parameter the theoretical Hubble's relation obtained from the modified theory seems not in contradiction to observations, as Dr. Wang's preliminary analysis of the recent data indicates [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703]. As a complement to Commun. Theor. Phys. (Beijing, China) 42 (2004) 703 we shall study in this paper the modification of electromagnetism due to Dirac's large number hypothesis in more detail to show that the approximation of geometric optics still leads to null geodesics for the path of light, and that the general relation between the luminosity distance and the proper geometric distance is still valid in our theory as in Einstein's theory, and give the equations for homogeneous cosmological model involving matter plus electromagnetic radiation. Finally we consider the impact of the modification to quantum mechanics and statistical mechanics, and arrive at a systematic theory of evolving natural constants including Planck's h-bar as well as Boltzmann's k B by finding out their cosmologically combined counterparts with factors of appropriate powers of G that may remain truly constant to cosmologically long time.
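
    For context, the "large number" in Dirac's hypothesis is the coincidence of two dimensionless ratios of order 10^39 (the standard textbook statement, supplied here for the reader, not taken from the paper):

```latex
\frac{e^2}{4\pi\varepsilon_0\, G\, m_p m_e} \;\sim\; 10^{39}
\;\sim\; \frac{t}{\,e^2/(4\pi\varepsilon_0\, m_e c^3)\,}\,,
```

    i.e. the electric-to-gravitational force ratio for a proton-electron pair matches the age of the universe measured in atomic time units. Requiring the coincidence to hold at all epochs forces G ∝ 1/t, the varying G on which the modified theory above is built.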

  8. A revision of the subtract-with-borrow random number generators

    Science.gov (United States)

    Sibidanov, Alexei

    2017-12-01

    The most popular and widely used subtract-with-borrow generator, also known as RANLUX, is reimplemented as a linear congruential generator using large-integer arithmetic with a 576-bit modulus. Modern computers, as well as the specific structure of the modulus inferred from RANLUX, allow for the development of a fast modular multiplication, the core of the procedure, which was previously believed to be slow and too costly in terms of computing resources. Our tests show a significant gain in generation speed, comparable with other fast, high-quality random number generators. An additional feature is fast skipping of generator states, leading to a seeding scheme that guarantees the uniqueness of random number sequences. Licensing provisions: GPLv3. Programming languages: C++, C, Assembler.
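    The subtract-with-borrow recurrence underlying RANLUX can be sketched in a few lines. This is a minimal illustration with the classic RCARRY/RANLUX parameters (base b = 2^24, lags r = 24, s = 10); it deliberately omits the luxury-level decimation that gives RANLUX its statistical quality, and it is not the paper's 576-bit LCG reformulation. The seeding via an auxiliary LCG is an arbitrary choice for this sketch.

```python
class SubtractWithBorrow:
    """Toy subtract-with-borrow generator (RANLUX-family parameters)."""
    BASE = 1 << 24  # b = 2^24
    R, S = 24, 10   # long and short lags

    def __init__(self, seed=1):
        # Fill the lag buffer with a simple auxiliary LCG; any
        # non-degenerate fill would do for this sketch.
        state, self.x = seed, []
        for _ in range(self.R):
            state = (state * 69069 + 1) % (1 << 32)
            self.x.append(state % self.BASE)
        self.carry = 0

    def next(self):
        # x_n = (x_{n-s} - x_{n-r} - c) mod b, with the borrow c in {0, 1}
        t = self.x[-self.S] - self.x[-self.R] - self.carry
        self.carry = 1 if t < 0 else 0
        t %= self.BASE
        self.x.append(t)
        self.x.pop(0)
        return t
```

Each call advances the 24-word lag buffer by one step and returns a 24-bit integer; the paper's contribution is to collapse many such steps into one large-modulus multiplication, which also enables the fast state skipping mentioned above.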

  9. Low-Reynolds number compressible flow around a triangular airfoil

    Science.gov (United States)

    Munday, Phillip; Taira, Kunihiko; Suwa, Tetsuya; Numata, Daiju; Asai, Keisuke

    2013-11-01

    We report on the combined numerical and experimental effort to analyze the nonlinear aerodynamics of a triangular airfoil in low-Reynolds number compressible flow that is representative of wings on future Martian air vehicles. The flow field around this airfoil is examined for a wide range of angles of attack and Mach numbers with three-dimensional direct numerical simulations at Re = 3000. Companion experiments are conducted in a unique Martian wind tunnel that is placed in a vacuum chamber to simulate the Martian atmosphere. Computational findings are compared with pressure-sensitive paint and direct force measurements and are found to be in agreement. The separated flow from the leading edge is found to form a large leading-edge vortex that sits directly above the apex of the airfoil and provides enhanced lift at post-stall angles of attack. For higher subsonic flows, the vortical structures elongate in the streamwise direction, resulting in reduced lift enhancement. We also observe that the onset of spanwise instability at higher angles of attack is delayed at lower Mach numbers. Currently at Mitsubishi Heavy Industries, Ltd., Nagasaki.

  10. Nuclear refugees after large radioactive releases

    International Nuclear Information System (INIS)

    Pascucci-Cahen, Ludivine; Groell, Jérôme

    2016-01-01

    However improbable, large radioactive releases from a nuclear power plant would entail major consequences for the surrounding population. In Fukushima, 80,000 people had to evacuate the most contaminated areas around the NPP for a prolonged period of time. These people have been called “nuclear refugees”. The paper first argues that the number of nuclear refugees is a better measure of the severity of radiological consequences than the number of fatalities, although the latter is widely used to assess other catastrophic events such as earthquakes or tsunamis. It is a valuable partial indicator in the context of comprehensive studies of overall consequences. Section 2 makes a clear distinction between long-term relocation and emergency evacuation and proposes a method to estimate the number of refugees. Section 3 examines the distribution of nuclear refugees with respect to weather and release site. The distribution is asymmetric and fat-tailed: unfavorable weather can lead to the contamination of large areas of land; large cities have in turn a higher probability of being contaminated. - Highlights: • Number of refugees is a good indicator of the severity of radiological consequences. • It is a better measure of the long-term consequences than the number of fatalities. • A representative meteorological sample should be sufficiently large. • The number of refugees highly depends on the release site in a country like France.

  11. Medicine in words and numbers: a cross-sectional survey comparing probability assessment scales

    Directory of Open Access Journals (Sweden)

    Koele Pieter

    2007-06-01

    Background: In the complex domain of medical decision making, reasoning under uncertainty can benefit from supporting tools. Automated decision-support tools often build upon mathematical models, such as Bayesian networks. These networks require probabilities, which often have to be assessed by experts in the domain of application. Probability response scales can be used to support the assessment process. We compare assessments obtained with different types of response scale. Methods: General practitioners (GPs) gave assessments on, and preferences for, three different probability response scales: a numerical scale, a scale with only verbal labels, and a combined verbal-numerical scale we had designed ourselves. Standard analyses of variance were performed. Results: No differences in assessments over the three response scales were found. Preferences for type of scale differed: the less experienced GPs preferred the verbal scale, the most experienced preferred the numerical scale, and the groups in between preferred the combined verbal-numerical scale. Conclusion: We conclude that all three response scales are equally suitable for supporting probability assessment. The combined verbal-numerical scale is a good choice for aiding the process, since it offers numerical labels to those who prefer numbers and verbal labels to those who prefer words, and accommodates both more and less experienced professionals.

  12. Responses of arthropods to large-scale manipulations of dead wood in loblolly pine stands of the Southeastern United States

    Science.gov (United States)

    Michael D. Ulyshen; James L. Hanula

    2009-01-01

    Large-scale experimental manipulations of deadwood are needed to better understand its importance to animal communities in managed forests. In this experiment, we compared the abundance, species richness, diversity, and composition of arthropods in 9.3-ha plots in which either (1) all coarse woody debris was removed, (2) a large number of logs were added, (3) a large...

  13. Responses of Arthropods to large scale manipulations of dead wood in loblolly pine stands of the southeastern United States

    Science.gov (United States)

    Michael Ulyshen; James Hanula

    2009-01-01

    Large-scale experimental manipulations of dead wood are needed to better understand its importance to animal communities in managed forests. In this experiment, we compared the abundance, species richness, diversity, and composition of arthropods in 9.3-ha plots in which either (1) all coarse woody debris was removed, (2) a large number of logs were added, (3) a large...

  14. Large Eddy Simulation study of the development of finite-channel lock-release currents at high Grashof numbers

    Science.gov (United States)

    Ooi, Seng-Keat

    2005-11-01

    Lock-exchange gravity current flows produced by the instantaneous release of a heavy fluid are investigated using well-resolved 3-D large eddy simulations (LES) at Grashof numbers up to 8×10^9. It is found that the 3-D simulations correctly predict a constant front velocity over the initial slumping phase and a front-speed decrease proportional to t^(-1/3) (the time t is measured from the release) over the inviscid phase, in agreement with theory. The evolution of the current in the simulations is found to be similar to that observed experimentally by Hacker et al. (1996). The effect of the dynamic LES model on the solutions is discussed. The energy budget of the current is discussed, and the contribution of the turbulent dissipation to the total dissipation is analyzed. The limitations of less expensive 2-D simulations are discussed, in particular their failure to correctly predict the spatio-temporal distribution of the bed shear stresses, which is important in determining the amount of sediment the gravity current can entrain when it advances over a loose bed.

  15. Attenuation of contaminant plumes in homogeneous aquifers: Sensitivity to source function at moderate to large Peclet numbers

    International Nuclear Information System (INIS)

    Selander, W.N.; Lane, F.E.; Rowat, J.H.

    1995-05-01

    A groundwater mass transfer calculation is an essential part of the performance assessment for radioactive waste disposal facilities. AECL's IRUS (Intrusion Resistant Underground Structure) facility, which is designed for the near-surface disposal of low-level radioactive waste (LLRW), is to be situated in the sandy overburden at AECL's Chalk River Laboratories. Flow in the sandy aquifers at the proposed IRUS site is relatively homogeneous and advection-dominated (large Peclet numbers). Mass transfer along the mean direction of flow from the IRUS site may be described using the one-dimensional advection-dispersion equation, for which a Green's function representation of downstream radionuclide flux is convenient. This report shows that in advection-dominated aquifers, dispersive attenuation of initial contaminant releases depends principally on two time scales: the source duration and the pulse breakthrough time. Numerical investigation shows further that the maximum downstream flux or concentration depends on these time scales in a simple characteristic way that is minimally sensitive to the shape of the initial source pulse. (author). 11 refs., 2 tabs., 3 figs
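    The one-dimensional picture the abstract describes can be sketched numerically. The code below assumes the standard Green's function of the 1-D advection-dispersion equation for a unit instantaneous release of a conservative tracer, and superposes it over a rectangular source pulse; function names and parameter values are illustrative only, not AECL's IRUS assessment model.

```python
import math

def adv_disp_green(x, t, v, D):
    """Green's function of the 1-D advection-dispersion equation for a
    unit instantaneous release at x = 0, t = 0 (velocity v, dispersion D)."""
    if t <= 0:
        return 0.0
    return math.exp(-(x - v * t) ** 2 / (4.0 * D * t)) / math.sqrt(4.0 * math.pi * D * t)

def downstream_concentration(x, t, v, D, duration, dt=0.01):
    """Superpose the Green's function over a constant-rate rectangular
    source of the given duration and unit total mass."""
    n = max(1, int(duration / dt))
    rate = 1.0 / duration
    return sum(adv_disp_green(x, t - k * dt, v, D) * rate * dt for k in range(n))
```

With v = 1 and D = 0.1, the Peclet number at x = 10 is Pe = vx/D = 100, i.e. advection-dominated; lengthening the source pulse attenuates the downstream peak, while the breakthrough time x/v is unchanged, mirroring the two time scales identified in the abstract.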

  16. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.

  17. Asymptotic numbers, asymptotic functions and distributions

    International Nuclear Information System (INIS)

    Todorov, T.D.

    1979-07-01

    The asymptotic functions are a new type of generalized function, but they are not functionals on some space of test functions like the distributions of Schwartz. They are mappings of the set denoted by A into A, where A is the set of asymptotic numbers introduced by Christov. A itself is a totally ordered set of generalized numbers including the system of real numbers R as well as infinitesimals and infinitely large numbers. Any two asymptotic functions can be multiplied. On the other hand, the distributions have realizations as asymptotic functions in a certain sense. (author)

  18. Trading volume and the number of trades : a comparative study using high frequency data

    OpenAIRE

    Izzeldin, Marwan

    2007-01-01

    Trading volume and the number of trades are both used as proxies for market activity, with disagreement as to which is the better proxy. This paper investigates the issue using high-frequency data for Cisco and Intel in 1997. A number of econometric methods are used, including GARCH augmented with lagged trading volume and number of trades, tests based on moment restrictions, regression analysis of volatility on volume and trades, and normality of returns when standardized by...

  19. Detailed Measurements of Rayleigh-Taylor Mixing at Large and Small Atwood Numbers

    International Nuclear Information System (INIS)

    Andrews, Malcolm J.

    2004-01-01

    This project has two major tasks: Task 1. The construction of a new air/helium facility to collect detailed measurements of Rayleigh-Taylor (RT) mixing at high Atwood number, and the distribution of these data to LLNL, LANL, and Alliance members for code validation and design purposes. Task 2. The collection of initial condition data from the new air/helium facility, for use in validation of RT simulation codes at LLNL and LANL, along with studies of multi-layer mixing with the existing water channel facility. Over the last twelve (12) months there has been excellent progress, detailed in this report, with both tasks. As of December 10, 2004, the air/helium facility is complete and extensive testing and validation of diagnostics has been performed. Currently, experiments with air/helium up to Atwood numbers of 0.25 (the maximum is 0.75, but the highest Reynolds numbers are at 0.25) are being performed. The progress matches the project plan, as does the budget, and we expect this to continue for 2005. With interest expressed from LLNL, we have continued with initial condition studies using the water channel. This work has also progressed well, with one of the graduate research assistants (Mr. Nick Mueschke) visiting LLNL the past two summers to work with Dr. O. Schilling. Several journal papers that describe the work are in preparation. Two M.Sc. degrees have been completed (Mr. Nick Mueschke and Mr. Wayne Kraft, 12/1/03); Nick and Wayne are both pursuing Ph.D.s funded by this DOE Alliances project. Presently three (3) Ph.D. graduate research assistants and two (2) undergraduate research assistants are supported on the project. During the year two (2) journal papers and two (2) conference papers have been published, ten (10) presentations made at conferences, and three (3) invited presentations given.

  20. Extremes in Otolaryngology Resident Surgical Case Numbers: An Update.

    Science.gov (United States)

    Baugh, Tiffany P; Franzese, Christine B

    2017-06-01

    Objectives The purpose of this study is to examine the effect of minimum case numbers on otolaryngology resident case log data and understand differences in minimum, mean, and maximum among certain procedures as a follow-up to a prior study. Study Design Cross-sectional survey using a national database. Setting Academic otolaryngology residency programs. Subjects and Methods Review of otolaryngology resident national data reports from the Accreditation Council for Graduate Medical Education (ACGME) resident case log system performed from 2004 to 2015. Minimum, mean, standard deviation, and maximum values for total number of supervisor and resident surgeon cases and for specific surgical procedures were compared. Results The mean total number of resident surgeon cases for residents graduating from 2011 to 2015 ranged from 1833.3 ± 484 in 2011 to 2072.3 ± 548 in 2014. The minimum total number of cases ranged from 826 in 2014 to 1004 in 2015. The maximum total number of cases increased from 3545 in 2011 to 4580 in 2015. Multiple key indicator procedures had less than the required minimum reported in 2015. Conclusion Despite the ACGME instituting required minimum numbers for key indicator procedures, residents have graduated without meeting these minimums. Furthermore, there continues to be large variations in the minimum, mean, and maximum numbers for many procedures. Variation among resident case numbers is likely multifactorial. Ensuring proper instruction on coding and case role as well as emphasizing frequent logging by residents will ensure programs have the most accurate data to evaluate their case volume.

  1. Large scale comparative codon-pair context analysis unveils general rules that fine-tune evolution of mRNA primary structure.

    Directory of Open Access Journals (Sweden)

    Gabriela Moura

    BACKGROUND: Codon usage and codon-pair context are important gene primary structure features that influence mRNA decoding fidelity. In order to identify general rules that shape codon-pair context and minimize mRNA decoding error, we have carried out a large-scale comparative codon-pair context analysis of 119 fully sequenced genomes. METHODOLOGIES/PRINCIPAL FINDINGS: We have developed mathematical and software tools for large-scale comparative codon-pair context analysis. These methodologies unveiled general and species-specific codon-pair context rules that govern the evolution of mRNAs in the three domains of life. We show that evolution of bacterial and archaeal mRNA primary structure is mainly dependent on constraints imposed by the translational machinery, while in eukaryotes DNA methylation and tri-nucleotide repeats impose strong biases on codon-pair context. CONCLUSIONS: The data highlight fundamental differences between prokaryotic and eukaryotic mRNA decoding rules, which are partially independent of codon usage.

  2. Copy-Number Disorders Are a Common Cause of Congenital Kidney Malformations

    OpenAIRE

    Sanna-Cherchi, Simone; Kiryluk, Krzysztof; Burgess, Katelyn E.; Bodria, Monica; Sampson, Matthew G.; Hadley, Dexter; Nees, Shannon N.; Verbitsky, Miguel; Perry, Brittany J.; Sterken, Roel; Lozanovski, Vladimir J.; Materna-Kiryluk, Anna; Barlassina, Cristina; Kini, Akshata; Corbani, Valentina

    2012-01-01

    We examined the burden of large, rare copy-number variants (CNVs) in 192 individuals with renal hypodysplasia (RHD) and replicated findings in 330 RHD cases from two independent cohorts. CNV distribution was significantly skewed toward larger gene-disrupting events in RHD cases compared to 4,733 ethnicity-matched controls (p = 4.8 × 10⁻¹¹). This excess was attributable to known and novel (i.e., not present in any database or in the literature) genomic disorders. Altogether, 55/522 (10.5%) ...

  3. Meta-ethnography 25 years on: challenges and insights for synthesising a large number of qualitative studies.

    Science.gov (United States)

    Toye, Francine; Seers, Kate; Allcock, Nick; Briggs, Michelle; Carr, Eloise; Barker, Karen

    2014-06-21

    Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients' experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team's reflexive statements to illustrate the development of our methods.

  4. Large-scale urban point cloud labeling and reconstruction

    Science.gov (United States)

    Zhang, Liqiang; Li, Zhuqiang; Li, Anjian; Liu, Fangyu

    2018-04-01

    The large number of object categories and many overlapping or closely neighboring objects in large-scale urban scenes pose great challenges in point cloud classification. In this paper, a novel framework is proposed for classification and reconstruction of airborne laser scanning point cloud data. To label point clouds, we present a rectified linear units neural network named ReLu-NN where the rectified linear units (ReLu) instead of the traditional sigmoid are taken as the activation function in order to speed up the convergence. Since the features of the point cloud are sparse, we reduce the number of neurons by the dropout to avoid over-fitting of the training process. The set of feature descriptors for each 3D point is encoded through self-taught learning, and forms a discriminative feature representation which is taken as the input of the ReLu-NN. The segmented building points are consolidated through an edge-aware point set resampling algorithm, and then they are reconstructed into 3D lightweight models using the 2.5D contouring method (Zhou and Neumann, 2010). Compared with deep learning approaches, the ReLu-NN introduced can easily classify unorganized point clouds without rasterizing the data, and it does not need a large number of training samples. Most of the parameters in the network are learned, and thus the intensive parameter tuning cost is significantly reduced. Experimental results on various datasets demonstrate that the proposed framework achieves better performance than other related algorithms in terms of classification accuracy and reconstruction quality.

  5. Analyzing the Large Number of Variables in Biomedical and Satellite Imagery

    CERN Document Server

    Good, Phillip I

    2011-01-01

    This book grew out of an online interactive course offered through statcourse.com, and it soon became apparent to the author that the course was too limited in terms of time and length in light of the broad backgrounds of the enrolled students. The statisticians who took the course needed to be brought up to speed both on the biological context and on the specialized statistical methods needed to handle large arrays. Biologists and physicians, even though fully knowledgeable concerning the procedures used to generate microarrays, EEGs, or MRIs, needed a full introduction to the resampling met

  6. Large-eddy simulation of flow over a grooved cylinder up to transcritical Reynolds numbers

    KAUST Repository

    Cheng, W.

    2017-11-27

    We report wall-resolved large-eddy simulation (LES) of flow over a grooved cylinder up to the transcritical regime. The stretched-vortex subgrid-scale model is embedded in a general fourth-order finite-difference code discretized on a curvilinear mesh. In the present study grooves are equally distributed around the circumference of the cylinder, each of sinusoidal shape and invariant in the spanwise direction. Based on two parameters, the groove height and the Reynolds number (defined with the free-stream velocity, the diameter of the cylinder and the kinematic viscosity), two main sets of simulations are described. The first set varies the groove height at fixed Reynolds number. We study the flow deviation from the smooth-cylinder case, with emphasis on several important statistics such as the length of the mean-flow recirculation bubble, the pressure coefficient, the skin-friction coefficient and the non-dimensional pressure-gradient parameter. It is found that, with increasing groove height at fixed Reynolds number, some properties of the mean flow behave somewhat similarly to changes in the smooth-cylinder flow when the Reynolds number is increased; this includes a shrinking recirculation bubble and a nearly constant minimum pressure coefficient. In contrast, while the non-dimensional pressure-gradient parameter remains nearly constant for the front part of the smooth-cylinder flow, it shows an oscillatory variation for the grooved-cylinder case. The second main set of LES varies the Reynolds number at fixed groove height. It is found that this range spans the subcritical and supercritical regimes and reaches the beginning of the transcritical flow regime. Mean-flow properties are diagnosed and compared with available experimental data, including the drag coefficient. The timewise variation of the lift and drag coefficients is also studied to elucidate the transition among the three regimes. Instantaneous images of the surface skin-friction vector field and of the three-dimensional Q-criterion field are utilized to further understand the dynamics of the near-surface flow.

  7. Law of large numbers for the SIR model with random vertex weights on Erdős-Rényi graph

    Science.gov (United States)

    Xue, Xiaofeng

    2017-11-01

    In this paper we are concerned with the SIR model with random vertex weights on the Erdős-Rényi graph G(n, p). The Erdős-Rényi graph G(n, p) is generated from the complete graph C_n with n vertices by independently deleting each edge with probability (1 - p). We assign i.i.d. copies of a positive random variable ρ to each vertex as the vertex weights. For the SIR model, each vertex is in one of three states: 'susceptible', 'infective' and 'removed'. An infective vertex infects a given susceptible neighbor at a rate proportional to the product of the weights of these two vertices. An infective vertex becomes removed at a constant rate. A removed vertex will never be infected again. We assume that at t = 0 there is no removed vertex and the number of infective vertices follows a Bernoulli distribution B(n, θ). Our main result is a law of large numbers for the model. We give two deterministic functions H_S(ψ_t), H_V(ψ_t) for t ≥ 0 and show that for any t ≥ 0, H_S(ψ_t) is the limit proportion of susceptible vertices and H_V(ψ_t) is the limit of the mean capability of an infective vertex to infect a given susceptible neighbor at moment t as n grows to infinity.
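    The dynamics described above can be sketched with a small Gillespie-style simulation: build G(n, p) by edge deletion from the complete graph, attach i.i.d. vertex weights, and let infections occur along S-I edges at a rate proportional to the product of the endpoint weights. This is only an illustrative sketch of the model, not the paper's limit theorem; all parameter values and the choice of weight distribution are arbitrary.

```python
import random

def simulate_sir_er(n=200, p=0.05, beta=0.2, gamma=1.0, theta=0.1,
                    t_max=10.0, seed=0):
    """Continuous-time SIR on G(n, p) with random vertex weights rho."""
    rng = random.Random(seed)
    # G(n, p): keep each edge of the complete graph with probability p.
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    rho = [rng.uniform(0.5, 1.5) for _ in range(n)]  # i.i.d. vertex weights
    state = ['I' if rng.random() < theta else 'S' for _ in range(n)]
    t = 0.0
    while t < t_max:
        infective = [v for v in range(n) if state[v] == 'I']
        if not infective:
            break
        inf_pairs = [(v, u) for v in infective for u in adj[v] if state[u] == 'S']
        rec_rate = gamma * len(infective)
        inf_rate = sum(beta * rho[v] * rho[u] for v, u in inf_pairs)
        total = rec_rate + inf_rate
        t += rng.expovariate(total)
        if rng.random() * total < rec_rate or not inf_pairs:
            state[rng.choice(infective)] = 'R'   # recovery event
        else:
            # Pick an S-I edge with probability proportional to its rate.
            target = rng.random() * inf_rate
            acc = 0.0
            for v, u in inf_pairs:
                acc += beta * rho[v] * rho[u]
                if acc >= target:
                    state[u] = 'I'               # infection event
                    break
            else:
                state[inf_pairs[-1][1]] = 'I'    # guard against rounding
    return state
```

Running this for increasing n and averaging the susceptible fraction over many seeds is one way to observe empirically the concentration that the law of large numbers formalizes.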

  8. Is 9 louder than 1? Audiovisual cross-modal interactions between number magnitude and judged sound loudness.

    Science.gov (United States)

    Alards-Tomalin, Doug; Walker, Alexander C; Shaw, Joshua D M; Leboe-McGowan, Launa C

    2015-09-01

    The cross-modal impact of number magnitude (i.e. Arabic digits) on perceived sound loudness was examined. Participants compared a target sound's intensity level against a previously heard reference sound (which they judged as quieter or louder). Paired with each target sound was a task irrelevant Arabic digit that varied in magnitude, being either small (1, 2, 3) or large (7, 8, 9). The degree to which the sound and the digit were synchronized was manipulated, with the digit and sound occurring simultaneously in Experiment 1, and the digit preceding the sound in Experiment 2. Firstly, when target sounds and digits occurred simultaneously, sounds paired with large digits were categorized as loud more frequently than sounds paired with small digits. Secondly, when the events were separated, number magnitude ceased to bias sound intensity judgments. In Experiment 3, the events were still separated, however the participants held the number in short-term memory. In this instance the bias returned. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. The index-based subgraph matching algorithm (ISMA): fast subgraph enumeration in large networks using optimized search trees.

    Directory of Open Access Journals (Sweden)

    Sofie Demeyer

    Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are investigated. In order to achieve this, we developed a number of data structures and maximally exploited symmetry characteristics of the subgraph. We compared ISMA to a naive recursive tree-based algorithm and to a number of well-known subgraph matching algorithms. Our algorithm outperforms the other algorithms, especially on large networks and with large query subgraphs. An implementation of ISMA in Java is freely available at http://sourceforge.net/projects/isma/.

  10. The Index-Based Subgraph Matching Algorithm (ISMA): Fast Subgraph Enumeration in Large Networks Using Optimized Search Trees

    Science.gov (United States)

    Demeyer, Sofie; Michoel, Tom; Fostier, Jan; Audenaert, Pieter; Pickavet, Mario; Demeester, Piet

    2013-01-01

    Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are investigated. In order to achieve this, we developed a number of data structures and maximally exploited symmetry characteristics of the subgraph. We compared ISMA to a naive recursive tree-based algorithm and to a number of well-known subgraph matching algorithms. Our algorithm outperforms the other algorithms, especially on large networks and with large query subgraphs. An implementation of ISMA in Java is freely available at http://sourceforge.net/projects/isma/. PMID:23620730
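    The "naive recursive tree-based algorithm" that ISMA is benchmarked against can be sketched as a plain backtracking search: map query nodes one at a time, keeping the mapping injective and checking every already-mapped query neighbor. This is a generic sketch of non-induced subgraph enumeration (not ISMA itself, and without its node-ordering or symmetry optimizations); graph representation and function names are our own.

```python
def subgraph_matches(g_adj, q_adj):
    """Enumerate all injective mappings of an undirected query graph into a
    target graph. Graphs are dicts mapping a vertex to its set of neighbors.
    Returns a list of {query_node: graph_node} dicts (ordered embeddings)."""
    q_nodes = list(q_adj)
    matches = []

    def extend(mapping):
        if len(mapping) == len(q_nodes):
            matches.append(dict(mapping))
            return
        q = q_nodes[len(mapping)]          # next query node, in fixed order
        for g in g_adj:
            if g in mapping.values():      # keep the mapping injective
                continue
            # Every already-mapped query neighbor must map to a graph neighbor.
            if all(qn not in mapping or mapping[qn] in g_adj[g] for qn in q_adj[q]):
                mapping[q] = g
                extend(mapping)
                del mapping[q]

    extend({})
    return matches
```

The fixed traversal order `q_nodes` is exactly what ISMA optimizes: choosing a good order prunes the search tree early, and exploiting query symmetries avoids enumerating equivalent embeddings repeatedly.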

  11. Enhancer Identification through Comparative Genomics

    Energy Technology Data Exchange (ETDEWEB)

    Visel, Axel; Bristow, James; Pennacchio, Len A.

    2006-10-01

    With the availability of genomic sequence from numerous vertebrates, a paradigm shift has occurred in the identification of distant-acting gene regulatory elements. In contrast to traditional gene-centric studies, in which investigators randomly scanned genomic fragments that flank genes of interest in functional assays, the modern approach begins electronically with publicly available comparative sequence datasets that provide investigators with prioritized lists of putative functional sequences based on their evolutionary conservation. However, although a large number of tools and resources are now available, application of comparative genomic approaches remains far from trivial. In particular, it requires users to dynamically consider the species and methods for comparison depending on the specific biological question under investigation. While there is currently no single general rule to this end, it is clear that, when applied appropriately, comparative genomic approaches exponentially increase our power in generating biological hypotheses for subsequent experimental testing.

  12. Individual differences influence two-digit number processing, but not their analog magnitude processing: a large-scale online study.

    Science.gov (United States)

    Huber, Stefan; Nuerk, Hans-Christoph; Reips, Ulf-Dietrich; Soltanlou, Mojtaba

    2017-12-23

    Symbolic magnitude comparison is one of the most well-studied cognitive processes in research on numerical cognition. However, while the cognitive mechanisms of symbolic magnitude processing have been intensively studied, previous studies have paid less attention to individual differences influencing symbolic magnitude comparison. Employing a two-digit number comparison task in an online setting, we replicated previous effects, including the distance effect, the unit-decade compatibility effect, and the effect of cognitive control on the adaptation to filler items, in a large-scale study of 452 adults. Additionally, we observed that the most influential individual differences were participants' first language, time spent playing computer games and gender, followed by reported alcohol consumption, age and mathematical ability. Participants who used a first language with a left-to-right reading/writing direction were faster than those who read and wrote in the right-to-left direction. Reported playing time for computer games was correlated with faster reaction times. Female participants showed slower reaction times and a larger unit-decade compatibility effect than male participants. Participants who reported never consuming alcohol showed overall slower response times than others. Older participants were slower, but more accurate. Finally, higher grades in mathematics were associated with faster reaction times. We conclude that typical experiments on numerical cognition that employ a keyboard as an input device can also be run in an online setting. Moreover, while individual differences have no influence on domain-specific magnitude processing (apart from age, which increases the decade distance effect), they generally influence performance on a two-digit number comparison task.

  13. Fast integration using quasi-random numbers

    International Nuclear Information System (INIS)

    Bossert, J.; Feindt, M.; Kerzel, U.

    2006-01-01

    Quasi-random numbers are specially constructed series of numbers optimised to evenly sample a given s-dimensional volume. Using quasi-random numbers in numerical integration converges faster with a higher accuracy compared to the case of pseudo-random numbers. The basic properties of quasi-random numbers are introduced, various generators are discussed and the achieved gain is illustrated by examples.

  14. Fast integration using quasi-random numbers

    Science.gov (United States)

    Bossert, J.; Feindt, M.; Kerzel, U.

    2006-04-01

    Quasi-random numbers are specially constructed series of numbers optimised to evenly sample a given s-dimensional volume. Using quasi-random numbers in numerical integration converges faster with a higher accuracy compared to the case of pseudo-random numbers. The basic properties of quasi-random numbers are introduced, various generators are discussed and the achieved gain is illustrated by examples.
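    As a rough illustration of the gain described above (my own toy example, not from the paper), the sketch below estimates the integral of x·y over the unit square (exact value 1/4) using a 2-D Halton sequence (bases 2 and 3) versus pseudo-random points:

```python
import random

def halton(i: int, base: int) -> float:
    """i-th element (i >= 1) of the van der Corput sequence in `base`;
    pairing bases 2 and 3 gives a 2-D Halton quasi-random sequence."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def estimate(points):
    """Monte-Carlo estimate of the integral of x*y over the unit square."""
    return sum(x * y for x, y in points) / len(points)

n = 4096
quasi = [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]
rng = random.Random(0)
pseudo = [(rng.random(), rng.random()) for _ in range(n)]

exact = 0.25  # integral of x*y over [0,1]^2
err_quasi = abs(estimate(quasi) - exact)
err_pseudo = abs(estimate(pseudo) - exact)
```

    For smooth integrands the quasi-random error typically shrinks like (log N)²/N rather than the 1/√N of pseudo-random sampling, which is the source of the faster convergence the authors describe.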

  15. 2MASS Constraints on the Local Large-Scale Structure: A Challenge to LCDM?

    OpenAIRE

    Frith, W. J.; Shanks, T.; Outram, P. J.

    2004-01-01

    We investigate the large-scale structure of the local galaxy distribution using the recently completed 2 Micron All Sky Survey (2MASS). First, we determine the K-band number counts over the 4000 sq.deg. APM survey area where evidence for a large-scale 'local hole' has previously been detected and compare them to a homogeneous prediction. Considering a LCDM form for the 2-point angular correlation function, the observed deficiency represents a 5 sigma fluctuation in the galaxy distribution. We...

  16. The use of mass spectrometry for analysing metabolite biomarkers in epidemiology: methodological and statistical considerations for application to large numbers of biological samples.

    Science.gov (United States)

    Lind, Mads V; Savolainen, Otto I; Ross, Alastair B

    2016-08-01

    Data quality is critical for epidemiology, and as scientific understanding expands, the range of data available for epidemiological studies and the types of tools used for measurement have also expanded. It is essential for the epidemiologist to have a grasp of the issues involved with different measurement tools. One tool that is increasingly being used for measuring biomarkers in epidemiological cohorts is mass spectrometry (MS), because of the high specificity and sensitivity of MS-based methods and the expanding range of biomarkers that can be measured. Further, the ability of MS to quantify many biomarkers simultaneously is advantageous compared with single-biomarker methods. However, as with all methods used to measure biomarkers, there are a number of pitfalls to consider which may have an impact on results when used in epidemiology. In this review we discuss the use of MS for biomarker analyses, focusing on metabolites and their application and potential issues related to large-scale epidemiology studies, the use of MS "omics" approaches for biomarker discovery and how MS-based results can be used for increasing biological knowledge gained from epidemiological studies. Better understanding of the possibilities and possible problems related to MS-based measurements will help the epidemiologist in their discussions with analytical chemists and lead to the use of the most appropriate statistical tools for these data.

  17. Large-scale inverse model analyses employing fast randomized data reduction

    Science.gov (United States)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
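    The sketching step can be illustrated on a toy linear inverse problem (my own example; the helper `lstsq_2d` is not from the paper, and the real RGA pairs sketching with the PCGA parameterization rather than a 2-parameter model): a random ±1 sketching matrix S compresses many observation equations y = Gm into far fewer combined equations SGm = Sy, which are then solved at a cost that scales with the sketch size rather than the number of observations:

```python
import random

random.seed(1)

# Toy linear inverse problem: y = G m + noise, with many observations.
m_true = [2.0, -1.0]
n_obs = 10_000
G = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(n_obs)]
y = [g[0] * m_true[0] + g[1] * m_true[1] + random.gauss(0, 0.1) for g in G]

def lstsq_2d(A, b):
    """Solve a 2-parameter least-squares problem via the normal equations."""
    a11 = sum(r[0] * r[0] for r in A)
    a12 = sum(r[0] * r[1] for r in A)
    a22 = sum(r[1] * r[1] for r in A)
    b1 = sum(r[0] * v for r, v in zip(A, b))
    b2 = sum(r[1] * v for r, v in zip(A, b))
    det = a11 * a22 - a12 * a12
    return [(a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det]

# "Sketching": a k x n_obs random +/-1 matrix S compresses the n_obs
# observation equations into k combined equations.
k = 50
S = [[random.choice((-1.0, 1.0)) for _ in range(n_obs)] for _ in range(k)]
SG = [[sum(s[i] * G[i][j] for i in range(n_obs)) for j in range(2)] for s in S]
Sy = [sum(s[i] * y[i] for i in range(n_obs)) for s in S]

m_full = lstsq_2d(G, y)      # classical solve over all observations
m_sketch = lstsq_2d(SG, Sy)  # solve over the much smaller sketched system
```

    With the sketch in place, the downstream solve touches only k equations; the randomness of S preserves (in expectation) the information the least-squares fit needs.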

  18. Timoides agassizii Bigelow, 1904, little-known hydromedusa (Cnidaria), appears briefly in large numbers off Oman, March 2011, with additional notes about species of the genus Timoides.

    Science.gov (United States)

    Purushothaman, Jasmine; Kharusi, Lubna Al; Mills, Claudia E; Ghielani, Hamed; Marzouki, Mohammad Al

    2013-12-11

    A bloom of the hydromedusan jellyfish, Timoides agassizii, occurred in February 2011 off the coast of Sohar, Al Batinah, Sultanate of Oman, in the Gulf of Oman. This species was first observed in 1902 in great numbers off Haddummati Atoll in the Maldive Islands in the Indian Ocean and has rarely been seen since. The species appeared briefly in large numbers off Oman in 2011, and subsequent examination of our 2009 zooplankton samples from Sohar revealed that it was also present in low numbers (two collected) in one sample in 2009; these are the first records in the Indian Ocean north of the Maldives. Medusae collected off Oman were almost identical to those recorded previously from the Maldive Islands, Papua New Guinea, the Marshall Islands, Guam, the South China Sea, and Okinawa. T. agassizii is a species that likely lives for several months. It was present in our plankton samples together with large numbers of the oceanic siphonophore Physalia physalis only during a single month's samples, suggesting that the temporary bloom off Oman was likely due to the arrival of mature, open-ocean medusae into nearshore waters. We see no evidence that T. agassizii has established a new population along Oman, since if it had, it would likely have been present in more than one sample period. We are unable to deduce further details of the life cycle of this species from blooms of many mature individuals nearshore, about a century apart. Examination of a single damaged T. agassizii medusa from Guam calls into question the existence of its congener, T. latistyla, known only from a single specimen.

  19. Application of quasi-random numbers for simulation

    International Nuclear Information System (INIS)

    Kazachenko, O.N.; Takhtamyshev, G.G.

    1985-01-01

    Application of the Monte-Carlo method for multidimensional integration is discussed. The main goal is to check the statement that the application of quasi-random numbers instead of regular pseudo-random numbers provides more rapid convergence. The Sobol, Richtmayer and Halton algorithms for quasi-random sequences are described. Over 50 tests comparing these quasi-random numbers with pseudo-random numbers were performed. In all cases quasi-random numbers clearly demonstrated more rapid convergence than pseudo-random ones. These positive test results make the use of quasi-random sequences in the Monte-Carlo method seem very promising.

  20. Microinstability Studies for the Large Helical Device

    International Nuclear Information System (INIS)

    Rewoldt, G.; Ku, L.-P.; Tang, W.M.; Sugama, H.; Nakajima, N.; Watanabe, K.Y.; Murakami, S.; Yamada, H.; Cooper, W.A.

    2002-01-01

    Fully kinetic assessments of the stability properties of toroidal drift modes have been obtained for cases of the Large Helical Device (LHD). This calculation employs the comprehensive linear microinstability code FULL, as recently extended for nonaxisymmetric systems. The code retains the important effects in the linearized gyrokinetic equation, using the lowest-order "ballooning representation" for high toroidal mode number instabilities in the electrostatic limit. These effects include trapped particles, finite Larmor radius (FLR) effects, transit, bounce, and magnetic drift frequency resonances, etc., for any number of plasma species. Results for toroidal drift waves destabilized by trapped electrons and ion temperature gradients are presented, using numerically calculated three-dimensional MHD equilibria. These are reconstructed from experimental measurements. Quasilinear fluxes of particles and energy for each species are also calculated. Pairs of LHD discharges with different magnetic axis positions and with and without pellet injection are compared.

  1. Unparticle self-interactions at the Large Hadron Collider

    International Nuclear Information System (INIS)

    Bergstroem, Johannes; Ohlsson, Tommy

    2009-01-01

    We investigate the effect of unparticle self-interactions at the Large Hadron Collider (LHC). In particular, we discuss the three-point correlation function, which is determined by conformal symmetry up to a constant, and study its relation to processes with four-particle final states. These processes could be used as a favorable way to look for unparticle physics, and for weak enough couplings to the standard model, even the only way. We find updated upper bounds on the cross sections for unparticle-mediated 4γ final states at the LHC and novel upper bounds for the corresponding 2γ2l and 4l final states. The allowed cross sections are comparatively large for large values of the scaling dimension of the unparticle sector, but decrease with decreasing values of this parameter. In addition, we present relevant distributions for the different final states, which would enable identification of the unparticle scaling dimension if a large number of events with such final states were observed at the LHC.

  2. Comparative Performance in Single-Port Versus Multiport Minimally Invasive Surgery, and Small Versus Large Operative Working Spaces: A Preclinical Randomized Crossover Trial.

    Science.gov (United States)

    Marcus, Hani J; Seneci, Carlo A; Hughes-Hallett, Archie; Cundy, Thomas P; Nandi, Dipankar; Yang, Guang-Zhong; Darzi, Ara

    2016-04-01

    Surgical approaches such as transanal endoscopic microsurgery, which utilize small operative working spaces and are necessarily single-port, are particularly demanding with standard instruments and have not been widely adopted. The aim of this study was to simultaneously compare surgical performance in single-port versus multiport approaches, and in small versus large working spaces. Ten novice, 4 intermediate, and 1 expert surgeons were recruited from a university hospital. A preclinical randomized crossover study design was implemented, comparing performance under the following conditions: (1) multiport approach and large working space, (2) multiport approach and intermediate working space, (3) single-port approach and large working space, (4) single-port approach and intermediate working space, and (5) single-port approach and small working space. In each case, participants performed a peg transfer and a pattern cutting task, and each task repetition was scored. Intermediate and expert surgeons performed significantly better than novices in all conditions. Performance in single-port surgery was significantly worse than in multiport surgery; in multiport surgery, performance tended to be worse in the intermediate than in the large working space. In single-port surgery, there was a converse trend; performances in the intermediate and small working spaces were significantly better than in the large working space. Single-port approaches were significantly more technically challenging than multiport approaches, possibly reflecting loss of instrument triangulation. Surprisingly, in single-port approaches, in which triangulation was no longer a factor, performance in large working spaces was worse than in intermediate and small working spaces. © The Author(s) 2015.

  3. Large-scale Comparative Study of Hi-C-based Chromatin 3D Structure Modeling Methods

    KAUST Repository

    Wang, Cheng

    2018-05-17

    Chromatin is a complex polymer molecule in eukaryotic cells, primarily consisting of DNA and histones. Many works have shown that the 3D folding of chromatin structure plays an important role in DNA expression. The recently proposed Chromosome Conformation Capture technologies, especially the Hi-C assays, provide us an opportunity to study how the 3D structures of the chromatin are organized. Based on the data from Hi-C experiments, many chromatin 3D structure modeling methods have been proposed. However, there is limited ground truth to validate these methods and there are no robust chromatin structure alignment algorithms to evaluate their performance. In our work, we first made a thorough literature review of 25 publicly available population Hi-C-based chromatin 3D structure modeling methods. Furthermore, to evaluate and compare the performance of these methods, we proposed a novel data simulation method, which combines population Hi-C data and single-cell Hi-C data without ad hoc parameters. We also designed global and local alignment algorithms to measure the similarity between the templates and the chromatin structures predicted by different modeling methods. Finally, the results from large-scale comparative tests indicated that our alignment algorithms significantly outperform those in the literature.

  4. Eating in the absence of hunger in adolescents: intake after a large-array meal compared with that after a standardized meal.

    Science.gov (United States)

    Shomaker, Lauren B; Tanofsky-Kraff, Marian; Zocca, Jaclyn M; Courville, Amber; Kozlosky, Merel; Columbo, Kelli M; Wolkoff, Laura E; Brady, Sheila M; Crocker, Melissa K; Ali, Asem H; Yanovski, Susan Z; Yanovski, Jack A

    2010-10-01

    Eating in the absence of hunger (EAH) is typically assessed by measuring youths' intake of palatable snack foods after a standard meal designed to reduce hunger. Because the energy intake required to reach satiety varies among individuals, a standard meal may not ensure the absence of hunger among participants of all weight strata. The objective of this study was to compare adolescents' EAH observed after access to a very large food array with EAH observed after a standardized meal. Seventy-eight adolescents participated in a randomized crossover study during which EAH was measured as intake of palatable snacks after ad libitum access to a very large array of lunch-type foods (>10,000 kcal) and after a lunch meal standardized to provide 50% of the daily estimated energy requirements. The adolescents consumed more energy and reported less hunger after the large-array meal than after the standardized meal (P values < 0.001). They showed less EAH after the large-array meal than after the standardized meal (295 ± 18 compared with 365 ± 20 kcal; P < 0.001), but EAH intakes after the large-array meal and after the standardized meal were positively correlated (P values < 0.001). The body mass index z score and overweight were positively associated with EAH in both paradigms after age, sex, race, pubertal stage, and meal intake were controlled for (P values ≤ 0.05). EAH is observable and positively related to body weight regardless of whether youth eat in the absence of hunger from a very large-array meal or from a standardized meal. This trial was registered at clinicaltrials.gov as NCT00631644.

  5. Comparative performance of modern digital mammography systems in a large breast screening program

    Energy Technology Data Exchange (ETDEWEB)

    Yaffe, Martin J., E-mail: martin.yaffe@sri.utoronto.ca; Bloomquist, Aili K.; Hunter, David M.; Mawdsley, Gordon E. [Physical Sciences Division, Sunnybrook Research Institute, Departments of Medical Biophysics and Medical Imaging, University of Toronto, Ontario M4N 3M5 (Canada); Chiarelli, Anna M. [Prevention and Cancer Control, Cancer Care Ontario, Dalla Lana School of Public Health, University of Toronto, Ontario M4N 3M5, Canada and Ontario Breast Screening Program, Cancer Care Ontario, Toronto, Ontario M5G 1X3 (Canada); Muradali, Derek [Ontario Breast Screening Program, Cancer Care Ontario, Toronto, Ontario M5G 1X3 (Canada); Mainprize, James G. [Physical Sciences Division, Sunnybrook Research Institute, Toronto, Ontario M4N 3M5 (Canada)

    2013-12-15

    Purpose: To compare physical measures pertaining to image quality among digital mammography systems utilized in a large breast screening program. To examine qualitatively differences in these measures and differences in clinical cancer detection rates between CR and DR among sites within that program. Methods: As part of the routine quality assurance program for screening, field measurements are made of several variables considered to correlate with the diagnostic quality of medical images including: modulation transfer function, noise equivalent quanta, d′ (an index of lesion detectability) and air kerma to allow estimation of mean glandular dose. In addition, images of the mammography accreditation phantom are evaluated. Results: It was found that overall there were marked differences between the performance measures of DR and CR mammography systems. In particular, the modulation transfer functions obtained with the DR systems were found to be higher, even for larger detector element sizes. Similarly, the noise equivalent quanta, d′, and the phantom scores were higher, while the failure rates associated with low signal-to-noise ratio and high dose were lower with DR. These results were consistent with previous findings in the authors’ program that the breast cancer detection rates at sites employing CR technology were, on average, 30.6% lower than those that used DR mammography. Conclusions: While the clinical study was not large enough to allow a statistically powered system-by-system assessment of cancer detection accuracy, the physical measures expressing spatial resolution, and signal-to-noise ratio are consistent with the published finding that sites employing CR systems had lower cancer detection rates than those using DR systems for screening mammography.

  6. Large Gain in Air Quality Compared to an Alternative Anthropogenic Emissions Scenario

    Science.gov (United States)

    Daskalakis, Nikos; Tsigaridis, Kostas; Myriokefalitakis, Stelios; Fanourgakis, George S.; Kanakidou, Maria

    2016-01-01

    During the last 30 years, significant effort has been made to improve air quality through legislation for emissions reduction. Global three-dimensional chemistry-transport simulations of atmospheric composition over the past 3 decades have been performed to estimate what the air quality levels would have been under a scenario of stagnation of anthropogenic emissions per capita as in 1980, accounting for the population increase (BA1980) or using the standard practice of neglecting it (AE1980), and how they compare to the historical changes in air quality levels. The simulations are based on assimilated meteorology to account for the year-to-year observed climate variability and on different scenarios of anthropogenic emissions of pollutants. The ACCMIP historical emissions dataset is used as the starting point. Our sensitivity simulations provide clear indications that air quality legislation and technology developments have limited the rapid increase of air pollutants. The achieved reductions in concentrations of nitrogen oxides, carbon monoxide, black carbon, and sulfate aerosols are found to be significant when comparing to both BA1980 and AE1980 simulations that neglect any measures applied for the protection of the environment. We also show the potentially large tropospheric air quality benefit from the development of cleaner technology used by the growing global population. These 30-year hindcast sensitivity simulations demonstrate that the actual benefit in air quality due to air pollution legislation and technological advances is higher than the gain calculated by a simple comparison against a constant anthropogenic emissions simulation, as is usually done. Our results also indicate that over China and India the beneficial technological advances for the air quality may have been masked by the explosive increase in local population and the disproportional increase in energy demand partially due to the globalization of the economy.
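    The distinction between the two counterfactual baselines can be made concrete in a few lines (my own sketch with hypothetical round numbers, not values from the study): BA1980 freezes per-capita emissions at the 1980 level so totals scale with population, while AE1980 freezes total emissions.

```python
def scenario_emissions(e_1980: float, pop_1980: float, pop_t: float,
                       account_population: bool) -> float:
    """Counterfactual emissions at time t with anthropogenic emissions
    frozen at 1980 levels. BA1980 keeps per-capita emissions fixed, so
    totals scale with population; AE1980 keeps total emissions fixed."""
    if account_population:
        return e_1980 * (pop_t / pop_1980)  # BA1980
    return e_1980                           # AE1980

# Hypothetical round numbers: ~4.4 billion people in 1980, ~6.9 billion
# in 2010, and 100 units of some pollutant emitted in 1980.
ba = scenario_emissions(100.0, 4.4, 6.9, True)   # scales to ~156.8 units
ae = scenario_emissions(100.0, 4.4, 6.9, False)  # stays at 100 units
```

    Because the BA1980 baseline grows with population, the gain attributed to legislation and cleaner technology is larger when measured against it than against the constant-emissions AE1980 baseline, which is the paper's central point.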

  7. Large gain in air quality compared to an alternative anthropogenic emissions scenario

    Directory of Open Access Journals (Sweden)

    N. Daskalakis

    2016-08-01

    Full Text Available During the last 30 years, significant effort has been made to improve air quality through legislation for emissions reduction. Global three-dimensional chemistry-transport simulations of atmospheric composition over the past 3 decades have been performed to estimate what the air quality levels would have been under a scenario of stagnation of anthropogenic emissions per capita as in 1980, accounting for the population increase (BA1980) or using the standard practice of neglecting it (AE1980), and how they compare to the historical changes in air quality levels. The simulations are based on assimilated meteorology to account for the year-to-year observed climate variability and on different scenarios of anthropogenic emissions of pollutants. The ACCMIP historical emissions dataset is used as the starting point. Our sensitivity simulations provide clear indications that air quality legislation and technology developments have limited the rapid increase of air pollutants. The achieved reductions in concentrations of nitrogen oxides, carbon monoxide, black carbon, and sulfate aerosols are found to be significant when comparing to both BA1980 and AE1980 simulations that neglect any measures applied for the protection of the environment. We also show the potentially large tropospheric air quality benefit from the development of cleaner technology used by the growing global population. These 30-year hindcast sensitivity simulations demonstrate that the actual benefit in air quality due to air pollution legislation and technological advances is higher than the gain calculated by a simple comparison against a constant anthropogenic emissions simulation, as is usually done. Our results also indicate that over China and India the beneficial technological advances for the air quality may have been masked by the explosive increase in local population and the disproportional increase in energy demand partially due to the globalization of the economy.

  8. Heart failure in numbers: Estimates for the 21st century in Portugal.

    Science.gov (United States)

    Fonseca, Cândida; Brás, Daniel; Araújo, Inês; Ceia, Fátima

    2018-02-01

    Heart failure (HF) is a major public health problem that affects a large number of individuals and is associated with high mortality and morbidity. This study aims to estimate the probable scenario for HF prevalence and its consequences in the short, medium and long term in Portugal. This assessment is based on the EPICA (Epidemiology of Heart Failure and Learning) project, which was designed to estimate the prevalence of chronic heart failure in mainland Portugal in 1998. Estimates of heart failure prevalence were performed for individuals aged over 25 years, distributed by age group and gender, based on data from the 2011 Census by Statistics Portugal. The expected demographic changes, particularly the marked aging of the population, mean that a large number of Portuguese will likely be affected by this syndrome. Assuming that current clinical practices are maintained, the prevalence of heart failure in mainland Portugal will increase by 30% by 2035 and by 33% by 2060, compared to 2011, resulting in 479 921 and 494 191 affected individuals, respectively. In addition to the large number of heart failure patients expected, it is estimated that the hospitalizations and mortality associated with this syndrome will significantly increase its economic impact. Therefore, it is extremely important to raise awareness of this syndrome, as this will favor diagnosis and early referral of patients, facilitating better management of heart failure and helping to decrease the burden it imposes on Portugal. Copyright © 2017 Sociedade Portuguesa de Cardiologia. Published by Elsevier España, S.L.U. All rights reserved.

  9. Natural Alternatives to Natural Number: The Case of Ratio

    Directory of Open Access Journals (Sweden)

    Percival G. Matthews

    2018-06-01

    Full Text Available The overwhelming majority of efforts to cultivate early mathematical thinking rely primarily on counting and associated natural number concepts. Unfortunately, natural numbers and discretized thinking do not align well with a large swath of the mathematical concepts we wish for children to learn. This misalignment presents an important impediment to teaching and learning. We suggest that one way to circumvent these pitfalls is to leverage students’ non-numerical experiences that can provide intuitive access to foundational mathematical concepts. Specifically, we advocate for explicitly leveraging (a) students’ perceptually based intuitions about quantity and (b) students’ reasoning about change and variation, and we address the affordances offered by this approach. We argue that it can support ways of thinking that may at times align better with to-be-learned mathematical ideas, and thus may serve as a productive alternative for particular mathematical concepts when compared to number. We illustrate this argument using the domain of ratio, and we do so from the distinct disciplinary lenses we employ respectively as a cognitive psychologist and as a mathematics education researcher. Finally, we discuss the potential for productive synthesis given the substantial differences in our preferred methods and general epistemologies.

  10. Relationships between number and space processing in adults with and without dyscalculia.

    Science.gov (United States)

    Mussolin, Christophe; Martin, Romain; Schiltz, Christine

    2011-09-01

    A large body of evidence indicates clear relationships between number and space processing in healthy and brain-damaged adults, as well as in children. The present paper addressed this issue regarding atypical math development. Adults with a diagnosis of dyscalculia (DYS) during childhood were compared to adults with average or high abilities in mathematics across two bisection tasks. Participants were presented with Arabic number triplets and had to judge either the number magnitude or the spatial location of the middle number relative to the two outer numbers. For the numerical judgment, adults with DYS were slower than both groups of control peers. They were also more strongly affected by the factors related to number magnitude such as the range of the triplets or the distance between the middle number and the real arithmetical mean. By contrast, adults with DYS were as accurate and fast as adults who never experienced math disability when they had to make a spatial judgment. Moreover, number-space congruency affected performance similarly in the three experimental groups. These findings support the hypothesis of a deficit of number magnitude representation in DYS with a relative preservation of some spatial mechanisms in DYS. Results are discussed in terms of direct and indirect number-space interactions. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. Efficient high speed communications over electrical powerlines for a large number of users

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J.; Tripathi, K.; Latchman, H.A. [Florida Univ., Gainesville, FL (United States). Dept. of Electrical and Computer Engineering

    2007-07-01

    Affordable broadband Internet communication is currently available for residential use via cable modem and other forms of digital subscriber lines (DSL). Powerline communication (PLC) systems were never considered seriously for communications due to their low speed and high development cost. However, due to technological advances PLCs are now spreading to local area networks and broadband over power line systems. This paper presented a newly proposed modification to the standard HomePlug 1.0 MAC protocol to make it a constant contention window-based scheme. The HomePlug 1.0 was developed based on orthogonal frequency division multiplexing (OFDM) and carrier sense multiple access with collision avoidance (CSMA/CA). It is currently the most commonly used technology of power line communications, supporting a transmission rate of up to 14 Mbps on the power line. However, the throughput performance of this original scheme becomes critical when the number of users increases. For that reason, a constant contention window based medium access control protocol algorithm of HomePlug 1.0 was proposed under the assumption that the number of active stations is known. An analytical framework based on Markov Chains was developed in order to model this modified protocol under saturation conditions. Modeling results accurately matched the actual performance of the system. This paper revealed that the performance can be improved significantly if the variables were parameterized in terms of the number of active stations. 15 refs., 1 tab., 6 figs.
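    The throughput collapse with many stations, and the value of sizing the contention window to the (known) number of active stations, can be sketched with a toy slotted-contention Monte-Carlo model (my own simplification; HomePlug 1.0's actual MAC also includes priority resolution slots and deferral counters, which are ignored here):

```python
import random

def success_probability(n_stations: int, window: int,
                        trials: int = 20_000, seed: int = 0) -> float:
    """Monte-Carlo estimate of the chance that a contention round succeeds:
    each saturated station draws a backoff slot uniformly from [0, window);
    the round succeeds iff exactly one station holds the smallest slot."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        slots = [rng.randrange(window) for _ in range(n_stations)]
        wins += slots.count(min(slots)) == 1
    return wins / trials

# With a fixed window, the chance of a collision-free round drops sharply
# as stations are added -- the motivation for parameterizing the window
# in terms of the number of active stations.
p_few = success_probability(5, 32)
p_many = success_probability(50, 32)
```

    Under this toy model roughly nine out of ten rounds succeed with 5 stations, but well under half succeed with 50, illustrating why a window chosen per the station count improves saturation throughput.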

  12. Product-selective blot: a technique for measuring enzyme activities in large numbers of samples and in native electrophoresis gels

    International Nuclear Information System (INIS)

    Thompson, G.A.; Davies, H.M.; McDonald, N.

    1985-01-01

    A method termed product-selective blotting has been developed for screening large numbers of samples for enzyme activity. The technique is particularly well suited to detection of enzymes in native electrophoresis gels. The principle of the method was demonstrated by blotting samples from glutaminase or glutamate synthase reactions into an agarose gel embedded with ion-exchange resin under conditions favoring binding of product (glutamate) over substrates and other substances in the reaction mixture. After washes to remove these unbound substances, the product was measured using either fluorometric staining or radiometric techniques. Glutaminase activity in native electrophoresis gels was visualized by a related procedure in which substrates and products from reactions run in the electrophoresis gel were blotted directly into a resin-containing image gel. Considering the selective-binding materials available for use in the image gel, along with the possible detection systems, this method has potentially broad application

  13. Prandtl-number Effects in High-Rayleigh-number Spherical Convection

    Science.gov (United States)

    Orvedahl, Ryan J.; Calkins, Michael A.; Featherstone, Nicholas A.; Hindman, Bradley W.

    2018-03-01

    Convection is the predominant mechanism by which energy and angular momentum are transported in the outer portion of the Sun. The resulting overturning motions are also the primary energy source for the solar magnetic field. An accurate solar dynamo model therefore requires a complete description of the convective motions, but these motions remain poorly understood. Studying stellar convection numerically remains challenging; it occurs within a parameter regime that is extreme by computational standards. The fluid properties of the convection zone are characterized in part by the Prandtl number Pr = ν/κ, where ν is the kinematic viscosity and κ is the thermal diffusivity; in stars, Pr is extremely low, Pr ≈ 10⁻⁷. The influence of Pr on the convective motions at the heart of the dynamo is not well understood, since most numerical studies are limited to using Pr ≈ 1. We systematically vary Pr and the degree of thermal forcing, characterized through a Rayleigh number, to explore their influence on the convective dynamics. For sufficiently large thermal driving, the simulations reach a so-called convective free-fall state where diffusion no longer plays an important role in the interior dynamics. Simulations with a lower Pr generate faster convective flows and broader ranges of scales for equivalent levels of thermal forcing. Characteristics of the spectral distribution of the velocity remain largely insensitive to changes in Pr. Importantly, we find that Pr plays a key role in determining when the free-fall regime is reached by controlling the thickness of the thermal boundary layer.

  14. Neglect Impairs Explicit Processing of the Mental Number Line

    Science.gov (United States)

    Zorzi, Marco; Bonato, Mario; Treccani, Barbara; Scalambrin, Giovanni; Marenzi, Roberto; Priftis, Konstantinos

    2012-01-01

    Converging evidence suggests that visuospatial attention plays a pivotal role in numerical processing, especially when the task involves the manipulation of numerical magnitudes. Visuospatial neglect impairs contralesional attentional orienting not only in perceptual but also in numerical space. Indeed, patients with left neglect show a bias toward larger numbers when mentally bisecting a numerical interval, as if they were neglecting its leftmost part. In contrast, their performance in parity judgments is unbiased, suggesting a dissociation between explicit and implicit processing of numerical magnitude. Here we further investigate the consequences of these visuospatial attention impairments on numerical processing and their interaction with task demands. Patients with right hemisphere damage, with and without left neglect, were administered both a number comparison and a parity judgment task that had identical stimuli and response requirements. Neglect patients’ performance was normal in the parity task, when processing of numerical magnitude was implicit, whereas they showed characteristic biases in the number comparison task, when access to numerical magnitude was explicit. Compared to patients without neglect, they showed an asymmetric distance effect, with slowing of the number immediately smaller than (i.e., to the left of) the reference and a stronger SNARC effect, particularly for large numbers. The latter might index an exaggerated effect of number-space compatibility after ipsilesional (i.e., rightward) orienting in number space. Thus, the effect of neglect on the explicit processing of numerical magnitude can be understood in terms of both a failure to orient to smaller (i.e., contralesional) magnitudes and a difficulty to disengage from larger (i.e., ipsilesional) magnitudes on the number line, which resembles the disrupted pattern of attention orienting in visual space. PMID:22661935

  15. Numerical analysis of jet impingement heat transfer at high jet Reynolds number and large temperature difference

    DEFF Research Database (Denmark)

    Jensen, Michael Vincent; Walther, Jens Honore

    2013-01-01

    … was investigated at a jet Reynolds number of 1.66 × 10⁵ and a temperature difference between jet inlet and wall of 1600 K. The focus was on the convective heat transfer contribution, as thermal radiation was not included in the investigation. A considerable influence of the turbulence intensity at the jet inlet … to about 100% were observed. Furthermore, the variation in stagnation point heat transfer was examined for jet Reynolds numbers in the range from 1.10 × 10⁵ to 6.64 × 10⁵. Based on the investigations, a correlation is suggested between the stagnation point Nusselt number, the jet Reynolds number …, and the turbulence intensity at the jet inlet for impinging jet flows at high jet Reynolds numbers.

  16. Hydrodynamic interaction on large-Reynolds-number aligned bubbles: Drag effects

    International Nuclear Information System (INIS)

    Ramirez-Munoz, J.; Salinas-Rodriguez, E.; Soria, A.; Gama-Goicochea, A.

    2011-01-01

    Highlights: → The hydrodynamic interaction of a pair of aligned equal-sized bubbles is analyzed. → The leading bubble wake decreases the drag on the trailing bubble. → A new semi-analytical model for the trailing bubble's drag is presented. → The equilibrium distance between bubbles is predicted. - Abstract: The hydrodynamic interaction of two equal-sized spherical gas bubbles rising along a vertical line with a Reynolds number (Re) between 50 and 200 is analyzed. An approach to estimate the trailing bubble drag based on the search for a proper reference fluid velocity is proposed. Our main result is a new, simple semi-analytical model for the trailing bubble drag. Additionally, the equilibrium separation distance between bubbles is predicted. The proposed models agree quantitatively with reported data for 50 ≤ Re ≤ 200, down to small distances between bubbles. The relative average error for the trailing bubble drag, Er, is found to be in the range 1.1 ≤ Er ≤ 1.7, i.e., of the same order as the analytical predictions in the literature.

  18. EUPAN enables pan-genome studies of a large number of eukaryotic genomes.

    Science.gov (United States)

    Hu, Zhiqiang; Sun, Chen; Lu, Kuang-Chen; Chu, Xixia; Zhao, Yue; Lu, Jinyuan; Shi, Jianxin; Wei, Chaochun

    2017-08-01

    Pan-genome analyses are routinely carried out for bacteria to interpret within-species gene presence/absence variations (PAVs). However, pan-genome analyses are rare for eukaryotes due to the larger sizes and higher complexities of their genomes. Here we propose EUPAN, a eukaryotic pan-genome analysis toolkit enabling automatic large-scale eukaryotic pan-genome analyses and detection of gene PAVs at a relatively low sequencing depth. In previous studies, we demonstrated the effectiveness and high accuracy of EUPAN in the pan-genome analysis of 453 rice genomes, in which we also revealed widespread gene PAVs among individual rice genomes. Moreover, EUPAN can be directly applied to current re-sequencing projects that primarily focus on single nucleotide polymorphisms. EUPAN is implemented in Perl, R and C++. It is supported under Linux and runs best on a computer cluster with an LSF or SLURM job scheduling system. EUPAN, together with its standard operating procedure (SOP), is freely available for non-commercial use (CC BY-NC 4.0) at http://cgm.sjtu.edu.cn/eupan/index.html. Supplementary data are available at Bioinformatics online.
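The core bookkeeping behind gene PAV analysis can be sketched in a few lines: given presence/absence calls per genome, the pan-genome is the union of the gene sets and the core genome is their intersection. The toy data below are hypothetical; EUPAN itself derives PAV calls from mapped sequencing reads.

```python
# Minimal sketch of pan-genome PAV bookkeeping: derive the pan-genome
# (union), the core genome (intersection), and a gene-by-genome PAV matrix
# from hypothetical presence/absence calls.

def pav_summary(genomes):
    """genomes: dict mapping genome name -> set of gene IDs present."""
    pan = set().union(*genomes.values())        # pan-genome: all genes seen
    core = set.intersection(*genomes.values())  # core genome: shared by all
    names = sorted(genomes)
    # PAV matrix: gene -> tuple of 0/1 flags, one per genome (sorted order)
    pav = {g: tuple(int(g in genomes[n]) for n in names) for g in sorted(pan)}
    return pan, core, pav

genomes = {
    "rice_A": {"g1", "g2", "g3"},
    "rice_B": {"g1", "g3", "g4"},
    "rice_C": {"g1", "g2", "g3", "g5"},
}
pan, core, pav = pav_summary(genomes)
```

On real data each accession would contribute tens of thousands of genes; the set operations stay the same.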

  19. Large area optical mapping of surface contact angle.

    Science.gov (United States)

    Dutra, Guilherme; Canning, John; Padden, Whayne; Martelli, Cicero; Dligatch, Svetlana

    2017-09-04

    Top-down contact angle measurements have been validated and confirmed to be as good as, if not more reliable than, side-based measurements. A range of samples, including industrially relevant materials for roofing and printing, has been compared. Using the top-down approach, mapping in both 1-D and 2-D has been demonstrated. The method was applied to study the change in contact angle as a function of silver (Ag) nanoparticle size, controlled by thermal evaporation. Large area mapping reveals good uniformity for commercial Aspen paper coated with black laser printer ink. A demonstration of the forensic and chemical analysis potential in 2-D is shown by uncovering the hidden CsF initials made with mineral oil on the coated Aspen paper. The method promises to revolutionize nanoscale characterization and industrial monitoring, as well as chemical analyses, by allowing rapid contact angle measurements over large areas or large numbers of samples in ways and times that have not been possible before.

  20. The Role of Large Enterprises in Museum Digitization

    Directory of Open Access Journals (Sweden)

    Ying Wang

    2014-12-01

    By actively promoting museum digitization, Japan has found an idiosyncratic path toward it. In this mode of collaboration, the government plays a leading role while large enterprises' R&D capabilities and museums' cultural dynamics are both given full play; these powerful enterprises become the solid backing of museum digitization and provide a concrete solution to the financial and technical challenges museums commonly face in the process. In the course of such collaboration, large enterprises cultivate a number of talents who understand both the business world and museum operations. Thanks to their experience in the business world, these people play a more vital role than museum professionals in marketing the potential commercial exploitation of related digital technologies. The benefits large enterprises can gain from this mode of collaboration - realizing social value and enhancing corporate image, for instance - help to motivate their active involvement, thereby forming a positive cycle of sustainable development.

  1. From Calculus to Number Theory

    Indian Academy of Sciences (India)

    A. Raghuram

    2016-11-04

    Nov 4, 2016 ... diverges to infinity. This means that, given any number M, however large, we can add sufficiently many terms of the above series to make the sum larger than M. This was first proved by Nicole Oresme (1323-1382), a brilliant French philosopher of his time.
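The divergence claim is easy to check numerically: for any target M, keep adding terms 1/n until the partial sum exceeds it. A minimal sketch (the count of terms needed grows roughly like e^M, which is why the divergence is so slow):

```python
# Harmonic series divergence, checked by brute force: find the smallest n
# such that 1 + 1/2 + ... + 1/n exceeds a given bound M.

def terms_to_exceed(M):
    total, n = 0.0, 0
    while total <= M:
        n += 1
        total += 1.0 / n
    return n

n5 = terms_to_exceed(5)   # 83 terms already push the sum past 5
```

Reaching M = 20 by the same loop would take roughly 2.7 × 10⁸ terms, consistent with the e^M growth.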

  2. Design and Fabrication of 3D printed Scaffolds with a Mechanical Strength Comparable to Cortical Bone to Repair Large Bone Defects

    OpenAIRE

    Roohani-Esfahani, Seyed-Iman; Newman, Peter; Zreiqat, Hala

    2016-01-01

    A challenge in regenerating large bone defects under load is to create scaffolds with large and interconnected pores while providing a compressive strength comparable to cortical bone (100-150 MPa). Here we design a novel hexagonal architecture for a glass-ceramic scaffold to fabricate an anisotropic, highly porous three dimensional scaffolds with a compressive strength of 110 MPa. Scaffolds with hexagonal design demonstrated a high fatigue resistance (1,000,000 cycles at 1-10 MPa compressive…

  3. Of colored numbers and numbered colors: interactive processes in grapheme-color synesthesia.

    Science.gov (United States)

    Gebuis, Titia; Nijboer, Tanja C W; van der Smagt, Maarten J

    2009-01-01

    Grapheme-color synesthetes experience a specific color when they see a grapheme, but they do not report perceiving a grapheme when a color is presented. In this study, we investigate whether color can still evoke number processes even when a vivid number experience is absent. We used color-number and number-color priming, both revealing faster responses in congruent compared to incongruent conditions. Interestingly, the congruency effect was of similar magnitude for both conditions, and a numerical distance effect was present only in the color-number priming task. In addition, a priming task in which synesthetes had to judge the parity of a colored number revealed faster responses in parity-congruent than in parity-incongruent trials. These combined results demonstrate that synesthesia is indeed bi-directional and of similar strength in both directions. Furthermore, they illustrate the precise nature of these interactions and show that the direction of these interactions is determined by task demands, not by the more vividly experienced aspect of the stimulus.

  4. Inclusive spectra of mesons with large transverse momenta in proton-nuclear collisions at high energies

    International Nuclear Information System (INIS)

    Lykasov, G.I.; Sherkhonov, B.Kh.

    1982-01-01

    Based on the previously proposed quark model of hadron-nucleus processes with large transverse momenta p⊥, the spectra of π± and K± meson production with large p⊥ in proton-nucleus collisions at high energies are calculated. A comparison of their dependence on the atomic number A of the target nucleus with experimental data shows good agreement. Theoretical and experimental ratios of the inclusive spectra of K± and π± mesons are also compared. The results of the calculations give a rather good description of the experimental data on large-p⊥ meson production at high energies.

  5. Influences of mach number and flow incidence on aerodynamic losses of steam turbine blade

    International Nuclear Information System (INIS)

    Yoo, Seok Jae; Ng, Wing Fai

    2000-01-01

    An experiment was conducted to investigate the aerodynamic losses of a high pressure steam turbine nozzle (526A) subjected to a large range of incidence angles (−34° to 26°) and exit Mach numbers (0.6 and 1.15). Measurements included downstream pitot probe traverses, upstream total pressure, and endwall static pressures. Flow visualization techniques such as shadowgraph and color oil flow visualization were performed to complement the measured data. When the exit Mach number for the nozzles increased from 0.9 to 1.1, the total pressure loss coefficient increased by a factor of 7 as compared to the total pressure losses measured at subsonic conditions (M2 < 0.9). For the range of incidence tested, the effect of flow incidence on the total pressure losses is less pronounced. Based on the shadowgraphs taken during the experiment, it is believed that the large increase in losses at transonic conditions is due to strong shock/boundary-layer interaction that may lead to flow separation on the blade suction surface.

  6. Large area synchrotron X-ray fluorescence mapping of biological samples

    International Nuclear Information System (INIS)

    Kempson, I.; Thierry, B.; Smith, E.; Gao, M.; De Jonge, M.

    2014-01-01

    Large area mapping of inorganic material in biological samples has suffered severely from prohibitively long acquisition times. With the advent of new detector technology we can now generate statistically relevant information for studying cell populations, inter-variability and bioinorganic chemistry in large specimens. We have been implementing ultrafast synchrotron-based XRF mapping afforded by the MAIA detector for large area mapping of biological material. For example, a 2.5 million pixel map can be acquired in 3 hours, compared to a typical synchrotron XRF set-up needing over 1 month of uninterrupted beamtime. Of particular focus to us is the fate of metals and nanoparticles in cells, 3D tissue models and animal tissues. The large area scanning has for the first time provided statistically significant information on sufficiently large numbers of cells to provide data on intercellular variability in uptake of nanoparticles. Techniques such as flow cytometry generally require analysis of thousands of cells for statistically meaningful comparison, due to the large degree of variability. Large area XRF now gives comparable information in a quantifiable manner. Furthermore, we can now image localised deposition of nanoparticles in tissues that would be highly improbable to 'find' by typical XRF imaging. In addition, the ultrafast nature also makes it viable to conduct 3D XRF tomography over large dimensions. This technology opens new opportunities in biomonitoring and understanding metal and nanoparticle fate ex vivo. Following from this is extension to molecular imaging through specific antibody-targeted nanoparticles to label specific tissues and monitor cellular processes or biological consequences.

  7. Large scale Direct Numerical Simulation of premixed turbulent jet flames at high Reynolds number

    Science.gov (United States)

    Attili, Antonio; Luca, Stefano; Lo Schiavo, Ermanno; Bisetti, Fabrizio; Creta, Francesco

    2016-11-01

    A set of direct numerical simulations of turbulent premixed jet flames at different Reynolds and Karlovitz numbers is presented. The simulations feature finite rate chemistry with 16 species and 73 reactions and up to 22 billion grid points. The jet consists of a methane/air mixture with equivalence ratio ϕ = 0.7 and temperature varying between 500 and 800 K. The temperature and species concentrations in the coflow correspond to the equilibrium state of the burnt mixture. All the simulations are performed at 4 atm. The flame length, normalized by the jet width, decreases significantly as the Reynolds number increases. This is consistent with an increase of the turbulent flame speed due to the increased integral scale of turbulence. This behavior is typical of flames in the thin-reaction-zone regime, which are affected by turbulent transport in the preheat layer. Fractal dimension and topology of the flame surface, statistics of temperature gradients, and flame structure are investigated and the dependence of these quantities on the Reynolds number is assessed.

  8. Advances in a framework to compare bio-dosimetry methods for triage in large-scale radiation events

    International Nuclear Information System (INIS)

    Flood, Ann Barry; Boyle, Holly K.; Du, Gaixin; Demidenko, Eugene; Williams, Benjamin B.; Swartz, Harold M.; Nicolalde, Roberto J.

    2014-01-01

    Planning and preparation for a large-scale nuclear event would be advanced by assessing the applicability of potentially available bio-dosimetry methods. Using an updated comparative framework, the performance of six bio-dosimetry methods was compared for five different population sizes (100 to 1,000,000) and two rates for initiating processing of the marker (15 or 15,000 people per hour), with four additional time windows. These updated factors are extrinsic to the bio-dosimetry methods themselves but have direct effects on each method's ability to begin processing individuals and on the size of the population that can be accommodated. The results indicate that increased population size, along with severely compromised infrastructure, increases the time needed to triage, which decreases the usefulness of many time-intensive dosimetry methods. This framework and model for evaluating bio-dosimetry provide important information for policy-makers and response planners to facilitate evaluation of each method and should advance coordination of these methods into effective triage plans. (authors)
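The scale sensitivity described above can be made concrete with a deliberately minimal throughput model: time to triage a population is a start-up delay plus population size divided by processing rate. The real framework includes many more method-specific factors; the function below is a hypothetical sketch using only the two processing rates quoted in the record.

```python
# Hypothetical, minimal triage-throughput model: startup delay plus
# population divided by processing rate. Illustrative only; the study's
# framework incorporates additional extrinsic and method-specific factors.

def triage_hours(population, rate_per_hour, startup_hours=0.0):
    return startup_hours + population / rate_per_hour

# The two rates considered in the study, applied to the largest population:
slow = triage_hours(1_000_000, 15)      # ~66,667 h: infeasible for triage
fast = triage_hours(1_000_000, 15_000)  # ~67 h: potentially workable
```

Even this crude model shows why slow-to-process markers lose their usefulness as population size grows.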

  9. Interaction between numbers and size during visual search

    NARCIS (Netherlands)

    Krause, F.; Bekkering, H.; Pratt, J.; Lindemann, O.

    2017-01-01

    The current study investigates an interaction between numbers and physical size (i.e. size congruity) in visual search. In three experiments, participants had to detect a physically large (or small) target item among physically small (or large) distractors in a search task comprising single-digit numbers.

  10. A Study of the Efficiency of Spatial Indexing Methods Applied to Large Astronomical Databases

    Science.gov (United States)

    Donaldson, Tom; Berriman, G. Bruce; Good, John; Shiao, Bernie

    2018-01-01

    Spatial indexing of astronomical databases generally uses quadrature methods, which partition the sky into cells used to create an index (usually a B-tree) written as a database column. We report the results of a study to compare the performance of two common indexing methods, HTM and HEALPix, on Solaris and Windows database servers installed with a PostgreSQL database, and on a Windows server installed with MS SQL Server. The indexing was applied to the 2MASS All-Sky Catalog and to the Hubble Source Catalog. On each server, the study compared indexing performance by submitting 1 million queries at each index level with random sky positions and random cone search radii, computed on a logarithmic scale between 1 arcsec and 1 degree, and measuring the time to complete the query and write the output. These simulated queries, intended to model realistic use patterns, were run in a uniform way on many combinations of indexing method and indexing level. The query times in all simulations are strongly I/O-bound and are linear with the number of records returned for large numbers of sources. There are, however, considerable differences between simulations, which reveal that hardware I/O throughput is a more important factor in managing the performance of a DBMS than the choice of indexing scheme. The choice of index itself is relatively unimportant: for comparable index levels, the performance is consistent within the scatter of the timings. At small index levels (large cells; e.g. level 4; cell size 3.7 deg), there is large scatter in the timings because of wide variations in the number of sources found in the cells. At larger index levels, performance improves and scatter decreases, but the improvement at level 8 (cell size 14 arcmin) and higher is masked to some extent by the timing scatter caused by the range of query sizes. At very high levels (20; 0.0004 arcsec), the granularity of the cells becomes so high that a large number of extraneous empty cells begin to degrade performance.
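The index-then-refine pattern behind such cone searches can be sketched with a toy stand-in for quadrature indexing: bucket sources by sky cell, scan only candidate cells, then apply an exact angular-distance filter. This uses a naive equiangular lat/lon grid rather than real HTM or HEALPix cells (which are hierarchical and, for HEALPix, equal-area), and a dict scan rather than a B-tree; only the structure of the scheme is the point.

```python
import math
from collections import defaultdict

def ang_dist_deg(ra1, dec1, ra2, dec2):
    """Great-circle distance in degrees (spherical law of cosines)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    c = (math.sin(d1) * math.sin(d2)
         + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

class SkyIndex:
    """Toy cell index: NOT real HTM/HEALPix, just the same index-then-refine idea."""

    def __init__(self, level):
        self.cell_deg = 90.0 / (2 ** level)   # cells shrink as level grows
        self.cells = defaultdict(list)        # cell id -> list of sources

    def _cell(self, ra, dec):
        return (int(ra // self.cell_deg), int((dec + 90.0) // self.cell_deg))

    def add(self, ra, dec, source_id):
        self.cells[self._cell(ra, dec)].append((ra, dec, source_id))

    def cone_search(self, ra, dec, radius_deg):
        # Candidate cells: generous padding by the cell diagonal, then an
        # exact per-source distance test (the "refine" step).
        pad = radius_deg + self.cell_deg * math.sqrt(2)
        hits = []
        for (i, j), pts in self.cells.items():
            cra = (i + 0.5) * self.cell_deg
            cdec = (j + 0.5) * self.cell_deg - 90.0
            if ang_dist_deg(ra, dec, cra, cdec) <= pad:
                hits.extend(s for (r, d, s) in pts
                            if ang_dist_deg(ra, dec, r, d) <= radius_deg)
        return hits

idx = SkyIndex(4)                       # level 4: 5.625 deg cells
idx.add(10.0, 10.0, "a")
idx.add(10.2, 10.1, "b")
idx.add(50.0, -30.0, "c")
near = idx.cone_search(10.0, 10.0, 1.0)
```

A database realizes the same idea by storing the cell id as an indexed column, so the "scan candidate cells" step becomes a B-tree range lookup.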

  11. Asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size.

    Science.gov (United States)

    Chen, Hua; Chen, Kun

    2013-07-01

    The distributions of coalescence times and ancestral lineage numbers play an essential role in coalescent modeling and ancestral inference. Both exact distributions of coalescence times and ancestral lineage numbers are expressed as the sum of alternating series, and the terms in the series become numerically intractable for large samples. More computationally attractive are their asymptotic distributions, which were derived in Griffiths (1984) for populations with constant size. In this article, we derive the asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size. For a sample of size n, denote by Tm the mth coalescent time, when m + 1 lineages coalesce into m lineages, and by An(t) the number of ancestral lineages at time t back from the current generation. Similar to the results in Griffiths (1984), the number of ancestral lineages, An(t), and the coalescence times, Tm, are asymptotically normal, with the mean and variance of these distributions depending on the population size function, N(t). At the very early stage of the coalescent, when t → 0, the number of coalesced lineages n − An(t) follows a Poisson distribution, and as m → n, n(n − 1)Tm/2N(0) follows a gamma distribution. We demonstrate the accuracy of the asymptotic approximations by comparing them to both exact distributions and coalescent simulations. Several applications of the theoretical results are also shown: deriving statistics related to the properties of gene genealogies, such as the time to the most recent common ancestor (TMRCA) and the total branch length (TBL) of the genealogy, and deriving the allele frequency spectrum for large genealogies. With the advent of genomic-level sequencing data for large samples, the asymptotic distributions are expected to have wide applications in theoretical and methodological development for population genetic inference.
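The constant-size baseline (the Griffiths 1984 setting the article generalizes) is easy to simulate: with time in units of 2N generations, the waiting time while k lineages remain is exponential with rate k(k−1)/2, so E[TMRCA] = Σ 2/(k(k−1)) = 2(1 − 1/n). A quick Monte Carlo check of that expectation (this is the standard Kingman coalescent, not the paper's varying-size asymptotics):

```python
import random

# Kingman coalescent for a constant-size population, time in units of 2N
# generations: while k lineages remain, the next coalescence is exponential
# with rate k(k-1)/2. Used here only to check E[TMRCA] = 2(1 - 1/n).

def simulate_tmrca(n, rng):
    t = 0.0
    for k in range(n, 1, -1):
        t += rng.expovariate(k * (k - 1) / 2.0)
    return t

rng = random.Random(42)
n, reps = 10, 20000
mean_tmrca = sum(simulate_tmrca(n, rng) for _ in range(reps)) / reps
expected = 2.0 * (1.0 - 1.0 / n)   # = 1.8 for n = 10
```

The k = 2 term dominates both the mean and the variance, which is why TMRCA estimates are noisy even for large samples.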

  12. Email-Based Informed Consent: Innovative Method for Reaching Large Numbers of Subjects for Data Mining Research

    Science.gov (United States)

    Lee, Lesley R.; Mason, Sara S.; Babiak-Vazquez, Adriana; Ray, Stacie L.; Van Baalen, Mary

    2015-01-01

    Since the 2010 NASA authorization to make the Life Sciences Data Archive (LSDA) and Lifetime Surveillance of Astronaut Health (LSAH) data archives more accessible to the research and operational communities, demand for data has greatly increased. Correspondingly, both the number and scope of requests have increased, from 142 requests fulfilled in 2011 to 224 in 2014, with some datasets comprising up to 1 million data points. To meet the demand, the LSAH and LSDA Repositories project was launched, which allows active and retired astronauts to authorize full, partial, or no access to their data for research without individual, study-specific informed consent. A one-on-one personal informed consent briefing is required to fully communicate the implications of the several tiers of consent. Due to the need for personal contact to conduct Repositories consent meetings, the rate of consenting has not kept up with demand for individualized, possibly attributable data. As a result, other methods had to be implemented to allow the release of large datasets, such as release of only de-identified data. However, the compilation of large, de-identified data sets places a significant resource burden on LSAH and LSDA and may result in diminished scientific usefulness of the dataset. As a result, LSAH and LSDA worked with the JSC Institutional Review Board Chair, Astronaut Office physicians, and NASA Office of General Counsel personnel to develop a "Remote Consenting" process for retrospective data mining studies. This is particularly useful since the majority of the astronaut cohort is retired from the agency and living outside the Houston area. Originally planned as a method to send informed consent briefing slides and consent forms only by mail, Remote Consenting has evolved into a means to accept crewmember decisions on individual studies via their method of choice: email or paper copy by mail. To date, 100 emails have been sent to request participation in eight HRP

  13. Large SNP arrays for genotyping in crop plants

    Indian Academy of Sciences (India)

    Genotyping with large numbers of molecular markers is now an indispensable tool within plant genetics and breeding. Especially through the identification of large numbers of single nucleotide polymorphism (SNP) markers using the novel high-throughput sequencing technologies, it is now possible to reliably identify many ...

  14. Effects of Dimple Depth and Reynolds Number on the Flow and Heat Transfer in a Dimpled Channel

    International Nuclear Information System (INIS)

    Ahn, Joon; Lee, Young Ok; Lee, Joon Sik

    2007-01-01

    A Large Eddy Simulation (LES) has been conducted for the flow and heat transfer in a dimpled channel. Two dimple depths of 0.2 and 0.3 times the dimple print diameter (D) have been compared at a bulk Reynolds number of 20,000. Three Reynolds numbers of 5,000, 10,000 and 20,000 have been studied, while the dimple depth is kept at 0.2 D. With the deeper dimple, flow reattachment occurs farther downstream inside the dimple, so that heat transfer is not enhanced as effectively as with shallow dimples. At the low Reynolds number of 5,000, the Nusselt number ratio is as high as at the higher Reynolds numbers, although the value of the heat transfer coefficient decreases because of the weak shear layer vortices.

  15. MPQS with three large primes

    NARCIS (Netherlands)

    Leyland, P.; Lenstra, A.K.; Dodson, B.; Muffett, A.; Wagstaff, S.; Fieker, C.; Kohel, D.R.

    2002-01-01

    We report the factorization of a 135-digit integer by the triple-large-prime variation of the multiple polynomial quadratic sieve. Previous workers [6][10] had suggested that using more than two large primes would be counterproductive, because of the greatly increased number of false reports from
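The payoff from large primes can be sketched with the simplest (single-large-prime) case: a partial relation is smooth except for one leftover large prime, and two partials sharing the same large prime multiply into a usable full relation, since that prime then appears squared. The triple-large-prime variation reported here generalizes this pairing to cycles among relations carrying up to three large primes; the toy count below covers only the pairing case, with made-up relation data.

```python
from collections import defaultdict

# Count full relations recoverable by pairing partial relations that share
# the same large prime (single-large-prime sketch; the triple-large-prime
# variation finds cycles across up to three large primes instead).

def combinable_relations(partials):
    """partials: list of (relation_id, large_prime) pairs."""
    groups = defaultdict(int)
    for _, p in partials:
        groups[p] += 1
    # m partials sharing one large prime yield m - 1 independent combinations
    return sum(m - 1 for m in groups.values() if m > 1)

partials = [(1, 1009), (2, 2003), (3, 1009), (4, 3001), (5, 2003), (6, 2003)]
extra = combinable_relations(partials)   # 1009 pairs once, 2003 twice -> 3
```

By the birthday effect, the number of such coincidences grows quadratically in the number of partials collected, which is why allowing more large primes can outweigh the flood of extra reports.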

  16. Design and Fabrication of 3D printed Scaffolds with a Mechanical Strength Comparable to Cortical Bone to Repair Large Bone Defects

    Science.gov (United States)

    Roohani-Esfahani, Seyed-Iman; Newman, Peter; Zreiqat, Hala

    2016-01-01

    A challenge in regenerating large bone defects under load is to create scaffolds with large and interconnected pores while providing a compressive strength comparable to cortical bone (100-150 MPa). Here we design a novel hexagonal architecture for a glass-ceramic scaffold to fabricate an anisotropic, highly porous three dimensional scaffolds with a compressive strength of 110 MPa. Scaffolds with hexagonal design demonstrated a high fatigue resistance (1,000,000 cycles at 1-10 MPa compressive cyclic load), failure reliability and flexural strength (30 MPa) compared with those for conventional architecture. The obtained strength is 150 times greater than values reported for polymeric and composite scaffolds and 5 times greater than reported values for ceramic and glass scaffolds at similar porosity. These scaffolds open avenues for treatment of load bearing bone defects in orthopaedic, dental and maxillofacial applications.

  17. Evaluation of Large-Scale Public-Sector Reforms: A Comparative Analysis

    Science.gov (United States)

    Breidahl, Karen N.; Gjelstrup, Gunnar; Hansen, Hanne Foss; Hansen, Morten Balle

    2017-01-01

    Research on the evaluation of large-scale public-sector reforms is rare. This article sets out to fill that gap in the evaluation literature and argues that it is of vital importance since the impact of such reforms is considerable and they change the context in which evaluations of other and more delimited policy areas take place. In our…

  18. The necessity of and policy suggestions for implementing a limited number of large scale, fully integrated CCS demonstrations in China

    International Nuclear Information System (INIS)

    Li Zheng; Zhang Dongjie; Ma Linwei; West, Logan; Ni Weidou

    2011-01-01

    CCS is seen as an important and strategic technology option for China to reduce its CO₂ emissions, and has received tremendous attention both around the world and in China. Scholars are divided on the role CCS should play, making the future of CCS in China highly uncertain. This paper presents the overall circumstances for CCS development in China, including the threats and opportunities for large scale deployment of CCS, the initial barriers and advantages that China currently possesses, as well as the current progress of CCS demonstration in China. The paper proposes the implementation of a limited number of larger scale, fully integrated CCS demonstration projects and explains the potential benefits that could be garnered. The problems with China's current CCS demonstration work are analyzed, and some targeted policies are proposed based on those observations. These policy suggestions can effectively solve these problems, help China gain the benefits with CCS demonstration soon, and make great contributions to China's big CO₂ reduction mission. - Highlights: → We analyze the overall circumstances for CCS development in China in detail. → China can garner multiple benefits by conducting several large, integrated CCS demos. → We present the current progress in CCS demonstration in China in detail. → Some problems exist with China's current CCS demonstration work. → Some focused policies are suggested to improve CCS demonstration in China.

  19. Improving the computation efficiency of COBRA-TF for LWR safety analysis of large problems

    International Nuclear Information System (INIS)

    Cuervo, D.; Avramova, M. N.; Ivanov, K. N.

    2004-01-01

    A matrix solver is implemented in COBRA-TF in order to improve the computational efficiency of both numerical solution methods existing in the code, Gauss elimination and the Gauss-Seidel iterative technique. Both methods are used to solve the system of pressure linear equations and rely on the solution of large sparse matrices. The introduced solver accelerates the solution of these matrices in cases with a large number of cells. The execution time is reduced by half compared with the execution time without the matrix solver for cases with large matrices. The achieved improvement and the planned future work in this direction are important for performing efficient LWR safety analyses of large problems. (authors)
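
    The Gauss-Seidel technique mentioned above can be sketched at toy scale. The following is an illustrative implementation only (not COBRA-TF code): it applies Gauss-Seidel iteration to a small tridiagonal system standing in for a pressure matrix; the matrix, sizes and tolerances are all hypothetical.

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=10_000):
    """Iteratively solve A x = b; converges e.g. for diagonally dominant
    or symmetric positive-definite A."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # use already-updated x[:i] and previous-sweep x_old[i+1:]
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# 1-D Poisson-like tridiagonal system, a common stand-in for pressure equations
n = 50
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))
b = np.ones(n)
x = gauss_seidel(A, b)
print(np.allclose(A @ x, b, atol=1e-6))
```

    Production codes apply the same sweep to sparse storage; dense arrays are used here only to keep the sketch short.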

  20. Gravity Cutoff in Theories with Large Discrete Symmetries

    International Nuclear Information System (INIS)

    Dvali, Gia; Redi, Michele; Sibiryakov, Sergey; Vainshtein, Arkady

    2008-01-01

    We set an upper bound on the gravitational cutoff in theories with exact quantum numbers of large N periodicity, such as Z_N discrete symmetries. The bound stems from black hole physics. It is similar to the bound appearing in theories with N particle species, though a priori a large discrete symmetry does not imply a large number of species. Thus, there emerges a potentially wide class of new theories that address the hierarchy problem by lowering the gravitational cutoff due to the existence of large Z_(10^32)-type symmetries

  1. Modeling number of bacteria per food unit in comparison to bacterial concentration in quantitative risk assessment: impact on risk estimates.

    Science.gov (United States)

    Pouillot, Régis; Chen, Yuhuan; Hoelzer, Karin

    2015-02-01

    When developing quantitative risk assessment models, a fundamental consideration for risk assessors is to decide whether to evaluate changes in bacterial levels in terms of concentrations or in terms of bacterial numbers. Although modeling bacteria in terms of integer numbers may be regarded as a more intuitive and rigorous choice, modeling bacterial concentrations is more popular as it is generally less mathematically complex. We tested three different modeling approaches in a simulation study. The first approach considered bacterial concentrations; the second considered the number of bacteria in contaminated units, and the third considered the expected number of bacteria in contaminated units. Simulation results indicate that modeling concentrations tends to overestimate risk compared to modeling the number of bacteria. A sensitivity analysis using a regression tree suggests that processes which include drastic scenarios consisting of combinations of large bacterial inactivation followed by large bacterial growth frequently lead to a >10-fold overestimation of the average risk when modeling concentrations as opposed to bacterial numbers. Alternatively, the approach of modeling the expected number of bacteria in positive units generates results similar to the second method and is easier to use, thus potentially representing a promising compromise. Published by Elsevier Ltd.
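
    The contrast between modeling concentrations and modeling integer counts can be illustrated with a toy Monte Carlo sketch (not the authors' model; all parameters are hypothetical). With a concave dose-response, the concentration-based average risk upper-bounds the count-based one (Jensen's inequality), and the gap widens under drastic inactivation-then-growth scenarios:

```python
import math, random
random.seed(42)

R = 0.01        # hypothetical per-cell dose-response parameter
C0 = 2.0        # mean initial cells per food unit
SURVIVE = 0.01  # survival probability through a drastic inactivation step
GROWTH = 100    # regrowth factor applied to survivors
UNITS = 100_000

def poisson(lam):
    # Knuth's method; adequate for small means
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Approach 1: track a concentration (fractional "expected" doses allowed)
dose_conc = C0 * SURVIVE * GROWTH
risk_conc = 1.0 - math.exp(-R * dose_conc)

# Approach 2: track integer numbers of bacteria per unit
total = 0.0
for _ in range(UNITS):
    n = poisson(C0)                                       # initial count
    n = sum(random.random() < SURVIVE for _ in range(n))  # inactivation
    n *= GROWTH                                           # growth of survivors
    total += 1.0 - math.exp(-R * n)                       # dose-response
risk_num = total / UNITS

# Most units carry zero cells after inactivation, so the count-based
# average risk is lower than the concentration-based one.
print(risk_conc > risk_num)
```
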

  2. Sample-path large deviations in credit risk

    NARCIS (Netherlands)

    Leijdekker, V.J.G.; Mandjes, M.R.H.; Spreij, P.J.C.

    2011-01-01

    The event of large losses plays an important role in credit risk. As these large losses are typically rare, and portfolios usually consist of a large number of positions, large deviation theory is the natural tool to analyze the tail asymptotics of the probabilities involved. We first derive a

  3. Meta-ethnography 25 years on: challenges and insights for synthesising a large number of qualitative studies

    Science.gov (United States)

    2014-01-01

    Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients’ experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team’s reflexive statements to illustrate the development of our methods. PMID:24951054

  4. PACOM: A Versatile Tool for Integrating, Filtering, Visualizing, and Comparing Multiple Large Mass Spectrometry Proteomics Data Sets.

    Science.gov (United States)

    Martínez-Bartolomé, Salvador; Medina-Aunon, J Alberto; López-García, Miguel Ángel; González-Tejedo, Carmen; Prieto, Gorka; Navajas, Rosana; Salazar-Donate, Emilio; Fernández-Costa, Carolina; Yates, John R; Albar, Juan Pablo

    2018-04-06

    Mass-spectrometry-based proteomics has evolved into a high-throughput technology in which numerous large-scale data sets are generated from diverse analytical platforms. Furthermore, several scientific journals and funding agencies have emphasized the storage of proteomics data in public repositories to facilitate its evaluation, inspection, and reanalysis. (1) As a consequence, public proteomics data repositories are growing rapidly. However, tools are needed to integrate multiple proteomics data sets to compare different experimental features or to perform quality control analysis. Here, we present a new Java stand-alone tool, Proteomics Assay COMparator (PACOM), that is able to import, combine, and simultaneously compare numerous proteomics experiments to check the integrity of the proteomic data as well as verify data quality. With PACOM, the user can detect sources of error that may have been introduced in any step of a proteomics workflow and that influence the final results. Data sets can be easily compared and integrated, and data quality and reproducibility can be visually assessed through a rich set of graphical representations of proteomics data features as well as a wide variety of data filters. Its flexibility and easy-to-use interface make PACOM a unique tool for daily use in a proteomics laboratory. PACOM is available at https://github.com/smdb21/pacom.

  5. A Comparative Analysis of Extract, Transformation and Loading (ETL) Process

    Science.gov (United States)

    Runtuwene, J. P. A.; Tangkawarow, I. R. H. T.; Manoppo, C. T. M.; Salaki, R. J.

    2018-02-01

    Data and information are currently growing rapidly in amount and across media. This growth eventually produces very large data sets, better known as Big Data. Business Intelligence (BI) analyzes such large volumes of data and information to extract important information that can support decision-making. In practice, a process for integrating existing data and information into a data warehouse is needed. This data integration process is known as Extract, Transformation and Loading (ETL). Many applications have been developed to carry out the ETL process, but selecting the application that is most efficient in time, cost and effort can be a challenge. Therefore, the objective of this study was to provide a comparative analysis of the ETL process as performed with Microsoft SQL Server Integration Services (SSIS) and with Pentaho Data Integration (PDI).
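
    The three ETL stages the abstract describes can be sketched with the standard library alone (this stands in for neither SSIS nor PDI; the source data, table and column names are made up):

```python
import csv, io, sqlite3

# --- Extract: read raw records (an in-memory CSV standing in for a source system)
raw = io.StringIO("region,amount\nnorth,10\nsouth,5\nnorth,7\n")
rows = list(csv.DictReader(raw))

# --- Transform: cast types and aggregate per region
totals = {}
for r in rows:
    totals[r["region"]] = totals.get(r["region"], 0) + int(r["amount"])

# --- Load: write the aggregate into a warehouse table
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales_by_region (region TEXT PRIMARY KEY, total INTEGER)")
con.executemany("INSERT INTO sales_by_region VALUES (?, ?)", totals.items())
print(sorted(con.execute("SELECT * FROM sales_by_region")))
# → [('north', 17), ('south', 5)]
```
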

  6. Number Line Estimation: The Use of Number Line Magnitude Estimation to Detect the Presence of Math Disability in Postsecondary Students

    Science.gov (United States)

    McDonald, Steven A.

    2010-01-01

    This study arose from an interest in the possible presence of mathematics disabilities among students enrolled in the developmental math program at a large university in the Mid-Atlantic region. Research in mathematics learning disabilities (MLD) has included a focus on the construct of working memory and number sense. A component of number sense…

  7. Statin eligibility and cardiovascular risk burden assessed by coronary artery calcium score: comparing the two guidelines in a large Korean cohort.

    Science.gov (United States)

    Rhee, Eun-Jung; Park, Se Eun; Oh, Hyung Geun; Park, Cheol-Young; Oh, Ki-Won; Park, Sung-Woo; Blankstein, Ron; Plutzky, Jorge; Lee, Won-Young

    2015-05-01

    To investigate statin eligibility and the predictability of cardiovascular risk under the AHA/ACC and ATP III guidelines, comparing those results to concomitant coronary artery calcium scores (CACS) in a large cohort of Korean individuals who met statin-eligibility criteria. Among 19,920 participants in a health screening program, eligibility for statin treatment was assessed by the two guidelines. The presence and extent of coronary artery calcification (CAC) was measured by multi-detector computed tomography and compared among the various groups defined by the two guidelines. Applying the new ACC/AHA guideline to the health screening cohort increased the statin-eligible population from 18.7% (as defined by ATP III) to 21.7%. Statin-eligible subjects as defined only by the ACC/AHA guideline manifested a higher proportion of subjects with CAC compared with those meeting only ATP III criteria, even after adjustment for age and sex (47.1 vs. 33.8%). Subjects eligible under the ACC/AHA guideline showed a higher odds ratio for the presence of CACS>0 compared with those meeting ATP III criteria {3.493 (3.245∼3.759) vs. 2.865 (2.653∼3.094)}, which was attenuated after adjustment for age and sex. In this large Korean cohort, more subjects would have qualified for statin initiation under the new ACC/AHA guideline as compared with the proportion recommended for statin treatment by the ATP III guideline. Among statin-eligible Korean health screening subjects, the new ACC/AHA guideline identified a greater extent of atherosclerosis as assessed by CACS as compared to the ATP III guideline. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. Strategies in filtering in the number field sieve

    NARCIS (Netherlands)

    S.H. Cavallar

    2000-01-01

    textabstractA critical step when factoring large integers by the Number Field Sieve consists of finding dependencies in a huge sparse matrix over the field GF(2), using a Block Lanczos algorithm. Both size and weight (the number of non-zero elements) of the matrix critically affect the running time
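
    The dependency-finding step can be illustrated at toy scale. Real NFS matrices are huge and sparse, which is why Block Lanczos is used; the sketch below instead does plain Gaussian elimination over GF(2), with Python integers as bit-vector rows:

```python
def gf2_dependency(rows):
    """Find indices of rows whose XOR is zero (a linear dependency over
    GF(2)); rows are Python ints used as bit vectors. Returns None if
    the rows are independent."""
    pivots = {}  # pivot bit position -> (reduced row, combination mask)
    for i, r in enumerate(rows):
        comb = 1 << i          # which original rows are mixed into r
        while r:
            p = r.bit_length() - 1
            if p not in pivots:
                pivots[p] = (r, comb)
                break
            pr, pc = pivots[p]
            r ^= pr            # eliminate the leading bit
            comb ^= pc
        else:                  # r reduced to zero: dependency found
            return [j for j in range(len(rows)) if (comb >> j) & 1]
    return None

# Toy relation matrix: 5 rows over 4 columns must be dependent
rows = [0b1011, 0b0110, 0b1101, 0b0011, 0b1000]
dep = gf2_dependency(rows)
acc = 0
for j in dep:
    acc ^= rows[j]
print(dep, acc)  # the XOR over the dependent rows is 0
```

    In the factoring context, each row is a relation and a dependency yields a congruence of squares; both the size and the weight of the matrix drive the cost, which is what the filtering strategies above aim to reduce.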

  9. Comparative genome analysis identifies two large deletions in the genome of highly-passaged attenuated Streptococcus agalactiae strain YM001 compared to the parental pathogenic strain HN016.

    Science.gov (United States)

    Wang, Rui; Li, Liping; Huang, Yan; Luo, Fuguang; Liang, Wanwen; Gan, Xi; Huang, Ting; Lei, Aiying; Chen, Ming; Chen, Lianfu

    2015-11-04

    Streptococcus agalactiae (S. agalactiae), also known as group B Streptococcus (GBS), is an important pathogen for neonatal pneumonia, meningitis, bovine mastitis, and fish meningoencephalitis. The global outbreaks of Streptococcus disease in tilapia cause huge economic losses and threaten human food hygiene safety as well. To investigate the mechanism of S. agalactiae pathogenesis in tilapia and develop an attenuated S. agalactiae vaccine, this study sequenced and comparatively analyzed the whole genomes of the virulent wild-type S. agalactiae strain HN016 and its highly-passaged attenuated strain YM001 derived from tilapia. We performed Illumina sequencing of DNA prepared from strains HN016 and YM001. Sequenced reads were assembled, and nucleotide comparisons, single nucleotide polymorphisms (SNPs), and indels were analyzed between the draft genomes of HN016 and YM001. Clustered regularly interspaced short palindromic repeats (CRISPRs) and prophages were detected and analyzed in different S. agalactiae strains. The genome of S. agalactiae YM001 was 2,047,957 bp with a GC content of 35.61%; it contained 2044 genes and 88 RNAs. Meanwhile, the genome of S. agalactiae HN016 was 2,064,722 bp with a GC content of 35.66%; it had 2063 genes and 101 RNAs. Comparative genome analysis indicated that, compared with HN016, the YM001 genome had two significant large deletions, of 5832 and 11,116 bp respectively, resulting in the deletion of three rRNA and ten tRNA genes, as well as the deletion and functional damage of ten genes related to metabolism, transport, growth, and anti-stress responses. Besides these two large deletions, ten other deletions and 28 single nucleotide variations (SNVs) were also identified, mainly affecting metabolism- and growth-related genes. The genome of attenuated S. agalactiae YM001 showed significant variations, resulting in the deletion of 10 functional genes, compared to the parental pathogenic strain HN016. The deleted and mutated functional genes all

  10. A comparative study of the number and mass of fine particles emitted with diesel fuel and marine gas oil (MGO)

    Science.gov (United States)

    Nabi, Md. Nurun; Brown, Richard J.; Ristovski, Zoran; Hustad, Johan Einar

    2012-09-01

    The current investigation reports on diesel particulate matter emissions, with special interest in fine particles from the combustion of two base fuels. The base fuels selected were diesel fuel and marine gas oil (MGO). The experiments were conducted with a four-stroke, six-cylinder, direct injection diesel engine. The results showed that the fine particle number emissions measured by both SMPS and ELPI were higher with MGO compared to diesel fuel. It was observed that the fine particle number emissions with the two base fuels were quantitatively different but qualitatively similar. The gravimetric (mass basis) measurement also showed higher total particulate matter (TPM) emissions with the MGO. The smoke emissions, which were part of TPM, were also higher for the MGO. No significant changes in the mass flow rate of fuel and the brake-specific fuel consumption (BSFC) were observed between the two base fuels.

  11. Primary care COPD patients compared with large pharmaceutically-sponsored COPD studies: an UNLOCK validation study.

    Directory of Open Access Journals (Sweden)

    Annemarije L Kruis

    Full Text Available BACKGROUND: Guideline recommendations for chronic obstructive pulmonary disease (COPD) are based on the results of large pharmaceutically-sponsored COPD studies (LPCS). There is a paucity of data on disease characteristics at the primary care level, while the majority of COPD patients are treated in primary care. OBJECTIVE: We aimed to evaluate the external validity of six LPCS (ISOLDE, TRISTAN, TORCH, UPLIFT, ECLIPSE, POET-COPD), on which current guidelines are based, in relation to primary care COPD patients, in order to inform future clinical practice guidelines and trials. METHODS: Baseline data of seven primary care databases (n=3508) from Europe were compared to baseline data of the LPCS. In addition, we examined the proportion of primary care patients eligible to participate in the LPCS, based on inclusion criteria. RESULTS: Overall, patients included in the LPCS were younger (mean difference (MD) -2.4; p=0.03), predominantly male (MD 12.4; p=0.1), with worse lung function (FEV1% MD -16.4; p<0.01) and worse quality of life scores (SGRQ MD 15.8; p=0.01). There were large differences in GOLD stage distribution compared to primary care patients. Mean exacerbation rates were higher in LPCS, with an overrepresentation of patients with ≥ 1 and ≥ 2 exacerbations, although results were not statistically significant. Our findings add to the literature, as we revealed hitherto unknown GOLD I exacerbation characteristics, showing 34% of mild patients had ≥ 1 exacerbations per year and 12% had ≥ 2 exacerbations per year. The proportion of primary care patients eligible for inclusion in LPCS ranged from 17% (TRISTAN) to 42% (ECLIPSE, UPLIFT). CONCLUSION: Primary care COPD patients stand out from patients enrolled in LPCS in terms of gender, lung function, quality of life and exacerbations. More research is needed to determine the effect of pharmacological treatment in mild to moderate patients. We encourage future guideline makers to involve primary care

  12. Innate or Acquired? – Disentangling Number Sense and Early Number Competencies

    Directory of Open Access Journals (Sweden)

    Julia Siemann

    2018-04-01

    Full Text Available The clinical profile termed developmental dyscalculia (DD is a fundamental disability affecting children already prior to arithmetic schooling, but the formal diagnosis is often only made during school years. The manifold associated deficits depend on age, education, developmental stage, and task requirements. Despite a large body of studies, the underlying mechanisms remain dubious. Conflicting findings have stimulated opposing theories, each presenting enough empirical support to remain a possible alternative. A so far unresolved question concerns the debate whether a putative innate number sense is required for successful arithmetic achievement as opposed to a pure reliance on domain-general cognitive factors. Here, we outline that the controversy arises due to ambiguous conceptualizations of the number sense. It is common practice to use early number competence as a proxy for innate magnitude processing, even though it requires knowledge of the number system. Therefore, such findings reflect the degree to which quantity is successfully transferred into symbols rather than informing about quantity representation per se. To solve this issue, we propose a three-factor account and incorporate it into the partly overlapping suggestions in the literature regarding the etiology of different DD profiles. The proposed view on DD is especially beneficial because it is applicable to more complex theories identifying a conglomerate of deficits as underlying cause of DD.

  13. Innate or Acquired? – Disentangling Number Sense and Early Number Competencies

    Science.gov (United States)

    Siemann, Julia; Petermann, Franz

    2018-01-01

    The clinical profile termed developmental dyscalculia (DD) is a fundamental disability affecting children already prior to arithmetic schooling, but the formal diagnosis is often only made during school years. The manifold associated deficits depend on age, education, developmental stage, and task requirements. Despite a large body of studies, the underlying mechanisms remain dubious. Conflicting findings have stimulated opposing theories, each presenting enough empirical support to remain a possible alternative. A so far unresolved question concerns the debate whether a putative innate number sense is required for successful arithmetic achievement as opposed to a pure reliance on domain-general cognitive factors. Here, we outline that the controversy arises due to ambiguous conceptualizations of the number sense. It is common practice to use early number competence as a proxy for innate magnitude processing, even though it requires knowledge of the number system. Therefore, such findings reflect the degree to which quantity is successfully transferred into symbols rather than informing about quantity representation per se. To solve this issue, we propose a three-factor account and incorporate it into the partly overlapping suggestions in the literature regarding the etiology of different DD profiles. The proposed view on DD is especially beneficial because it is applicable to more complex theories identifying a conglomerate of deficits as underlying cause of DD. PMID:29725316

  14. Experimental determination of Ramsey numbers.

    Science.gov (United States)

    Bian, Zhengbing; Chudak, Fabian; Macready, William G; Clark, Lane; Gaitan, Frank

    2013-09-27

    Ramsey theory is a highly active research area in mathematics that studies the emergence of order in large disordered structures. Ramsey numbers mark the threshold at which order first appears and are extremely difficult to calculate due to their explosive rate of growth. Recently, an algorithm that can be implemented using adiabatic quantum evolution has been proposed that calculates the two-color Ramsey numbers R(m,n). Here we present results of an experimental implementation of this algorithm and show that it correctly determines the Ramsey numbers R(3,3) and R(m,2) for 4≤m≤8. The R(8,2) computation used 84 qubits of which 28 were computational qubits. This computation is the largest experimental implementation of a scientifically meaningful adiabatic evolution algorithm that has been done to date.
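
    Unlike the adiabatic quantum approach of the paper, the smallest case R(3,3) = 6 can be verified classically by exhausting all 2-colorings (a brute-force sketch):

```python
from itertools import combinations

def has_mono_triangle(n, color):
    """color maps each edge (a, b) of K_n to 0 or 1."""
    return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
               for a, b, c in combinations(range(n), 3))

def every_coloring_has_mono_triangle(n):
    edges = list(combinations(range(n), 2))
    for bits in range(1 << len(edges)):
        color = {e: (bits >> i) & 1 for i, e in enumerate(edges)}
        if not has_mono_triangle(n, color):
            return False  # a triangle-free 2-coloring of K_n exists
    return True

# R(3,3) = 6: K_5 admits a coloring with no monochromatic triangle, K_6 does not
print(every_coloring_has_mono_triangle(5), every_coloring_has_mono_triangle(6))
# → False True
```

    Already for slightly larger parameters this brute force becomes hopeless (the number of colorings grows as 2 to the number of edges), which illustrates the explosive growth the abstract mentions.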

  15. What caused a large number of fatalities in the Tohoku earthquake?

    Science.gov (United States)

    Ando, M.; Ishida, M.; Nishikawa, Y.; Mizuki, C.; Hayashi, Y.

    2012-04-01

    The Mw9.0 earthquake caused 20,000 deaths and missing persons in northeastern Japan. In the 115 years prior to this event, three historical tsunamis struck the region, one of which, a "tsunami earthquake", resulted in a death toll of 22,000. Since then, numerous breakwaters were constructed along the entire northeastern coast, tsunami evacuation drills were carried out, and hazard maps were distributed to local residents in numerous communities. However, despite these constructions and preparedness efforts, the March 11 Tohoku earthquake caused numerous fatalities. The strong shaking lasted three minutes or longer, so all residents recognized that this was the strongest and longest earthquake they had ever experienced. The tsunami inundated an enormous area of about 560 km2 across 35 cities along the coast of northeast Japan. To find out the reasons behind the high number of fatalities due to the March 11 tsunami, we interviewed 150 tsunami survivors at public evacuation shelters in 7 cities, mainly in Iwate prefecture, in mid-April and early June 2011. Interviews lasted about 30 minutes or longer and focused on the survivors' evacuation behaviors and those they had observed. On the basis of the interviews, we found that residents' decisions not to evacuate immediately were partly due to, or influenced by, earthquake science results. Below are some of the factors that affected residents' decisions. 1. Earthquake hazard assessments turned out to be incorrect: expected earthquake magnitudes and resultant hazards in northeastern Japan assessed and publicized by the government were significantly smaller than the actual Tohoku earthquake. 2. Many residents did not receive accurate tsunami warnings: the first tsunami warnings were too small compared with the actual tsunami heights. 3. Previous frequent warnings with overestimated tsunami heights influenced the behavior of the residents. 4. Many local residents above 55 years old experienced

  16. Classical theory of algebraic numbers

    CERN Document Server

    Ribenboim, Paulo

    2001-01-01

    Gauss created the theory of binary quadratic forms in "Disquisitiones Arithmeticae", and Kummer invented ideals and the theory of cyclotomic fields in his attempt to prove Fermat's Last Theorem. These were the starting points for the theory of algebraic numbers, developed in the classical papers of Dedekind, Dirichlet, Eisenstein, Hermite and many others. This theory, enriched with more recent contributions, is of basic importance in the study of diophantine equations and arithmetic algebraic geometry, including methods in cryptography. This book gives a clear and thorough exposition of the classical theory of algebraic numbers, and contains a large number of exercises as well as worked-out numerical examples. The Introduction is a recapitulation of results about principal ideal domains, unique factorization domains and commutative fields. Part One is devoted to residue classes and quadratic residues. In Part Two one finds the study of algebraic integers, ideals, units, class numbers, the theory of decomposition, iner...

  17. CopyNumber450kCancer: baseline correction for accurate copy number calling from the 450k methylation array.

    Science.gov (United States)

    Marzouka, Nour-Al-Dain; Nordlund, Jessica; Bäcklin, Christofer L; Lönnerholm, Gudmar; Syvänen, Ann-Christine; Carlsson Almlöf, Jonas

    2016-04-01

    The Illumina Infinium HumanMethylation450 BeadChip (450k) is widely used for the evaluation of DNA methylation levels in large-scale datasets, particularly in cancer. The 450k design allows copy number variant (CNV) calling using existing bioinformatics tools. However, in cancer samples, numerous large-scale aberrations cause shifts in the probe intensities and thereby may result in erroneous CNV calling. Therefore, a baseline correction process is needed. We suggest the maximum peak of probe segment density to correct the shift in the intensities in cancer samples. CopyNumber450kCancer is implemented as an R package. The package with examples can be downloaded at http://cran.r-project.org. Contact: nour.marzouka@medsci.uu.se. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
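
    The maximum-density-peak idea can be sketched as follows (a simplified illustration, not the package's actual code; the simulated data and tolerances are hypothetical):

```python
import numpy as np
np.random.seed(0)

def baseline_correct(log_ratios, bins=200):
    """Shift copy-number log-ratios so that the highest-density peak
    (assumed to be the copy-number-neutral state) sits at zero."""
    hist, edges = np.histogram(log_ratios, bins=bins)
    i = np.argmax(hist)
    peak = 0.5 * (edges[i] + edges[i + 1])
    return log_ratios - peak, peak

# Simulated segment log-ratios: a neutral majority shifted by +0.3
# (as happens when large aberrations skew the intensities), plus
# minorities of gained and lost segments
neutral = np.random.normal(0.3, 0.02, 800)
gains = np.random.normal(0.8, 0.02, 100)
losses = np.random.normal(-0.4, 0.02, 100)
corrected, shift = baseline_correct(np.concatenate([neutral, gains, losses]))
print(abs(shift - 0.3) < 0.05)
```
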

  18. Production of Numbers about the Future

    DEFF Research Database (Denmark)

    Huikku, Jari; Mouritsen, Jan; Silvola, Hanna

    of prominent Finnish business managers, auditors, analysts, investors, financial supervisory authority, academics and media, the paper extends prior research which has used large data. The paper analyses impairment testing as a process where network of human and non-human actors produce numbers about...

  19. Novel applications of array comparative genomic hybridization in molecular diagnostics.

    Science.gov (United States)

    Cheung, Sau W; Bi, Weimin

    2018-05-31

    In 2004, the implementation of array comparative genomic hybridization (array CGH) into clinical practice marked a new milestone for genetic diagnosis. Array CGH and single-nucleotide polymorphism (SNP) arrays enable genome-wide detection of copy number changes at high resolution, and therefore microarray has been recognized as the first-tier test for patients with intellectual disability or multiple congenital anomalies; it has also been applied prenatally for detection of clinically relevant copy number variations in the fetus. Area covered: In this review, the authors summarize the evolution of array CGH technology from their diagnostic laboratory, highlighting exonic SNP arrays developed in the past decade, which detect small intragenic copy number changes as well as large DNA segments with absence of heterozygosity. The applications of array CGH to human diseases with different modes of inheritance, with an emphasis on autosomal recessive disorders, are discussed. Expert commentary: An exonic array is a powerful and efficient clinical tool for detecting genome-wide small copy number variants in both dominant and recessive disorders. However, whole-genome sequencing may become the single integrated platform for detection of copy number changes, single-nucleotide changes, and balanced chromosomal rearrangements in the near future.

  20. Enhancement of large fluctuations to extinction in adaptive networks

    Science.gov (United States)

    Hindes, Jason; Schwartz, Ira B.; Shaw, Leah B.

    2018-01-01

    During an epidemic, individual nodes in a network may adapt their connections to reduce the chance of infection. A common form of adaptation is avoidance rewiring, where a noninfected node breaks a connection to an infected neighbor and forms a new connection to another noninfected node. Here we explore the effects of such adaptivity on stochastic fluctuations in the susceptible-infected-susceptible model, focusing on the largest fluctuations that result in extinction of infection. Using techniques from large-deviation theory, combined with a measurement of heterogeneity in the susceptible degree distribution at the endemic state, we are able to predict and analyze large fluctuations and extinction in adaptive networks. We find that in the limit of small rewiring there is a sharp exponential reduction in mean extinction times compared to the case of zero adaptation. Furthermore, we find an exponential enhancement in the probability of large fluctuations with increased rewiring rate, even when holding the average number of infected nodes constant.
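
    A minimal discrete-time sketch of SIS dynamics with avoidance rewiring (illustrative only; the paper's analysis uses large-deviation theory, and all parameters here are hypothetical). Note that rewiring changes who is connected while conserving the number of links:

```python
import random
random.seed(7)

N = 100                        # nodes
BETA, MU, W = 0.2, 0.1, 0.3    # infection, recovery, rewiring probabilities

# random graph as a set of sorted edge tuples
edges = set()
while len(edges) < 200:
    a, b = sorted(random.sample(range(N), 2))
    edges.add((a, b))
E0 = len(edges)

state = ["S"] * N
for i in random.sample(range(N), 5):
    state[i] = "I"

for _ in range(50):
    # infection along S-I links, then recovery
    new_inf = set()
    for a, b in edges:
        if (state[a] == "I") != (state[b] == "I") and random.random() < BETA:
            new_inf.add(a if state[a] == "S" else b)
    for i in range(N):
        if state[i] == "I" and random.random() < MU:
            state[i] = "S"
    for i in new_inf:
        state[i] = "I"
    # avoidance rewiring: a susceptible node may break an S-I link
    # and reconnect to another susceptible node
    sus = [i for i in range(N) if state[i] == "S"]
    if len(sus) < 2:
        continue
    for a, b in list(edges):
        if (state[a] == "I") != (state[b] == "I") and random.random() < W:
            s = a if state[a] == "S" else b
            partner = random.choice([i for i in sus if i != s])
            new_edge = tuple(sorted((s, partner)))
            if new_edge not in edges:
                edges.discard((a, b))
                edges.add(new_edge)

print(len(edges) == E0)  # rewiring moves links but conserves their number
```
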

  1. Long-term changes in nutrients and mussel stocks are related to numbers of breeding eiders Somateria mollissima at a large Baltic colony.

    Directory of Open Access Journals (Sweden)

    Karsten Laursen

    Full Text Available BACKGROUND: The Baltic/Wadden Sea eider Somateria mollissima flyway population is decreasing, and this trend is also reflected in the large eider colony at Christiansø, situated in the Baltic Sea. This colony showed a 15-fold increase from 1925 until the mid-1990s, followed by a rapid decline in recent years, although the causes of this trend remain unknown. Most birds from the colony winter in the Wadden Sea, for which environmental data and information on the size of the stock of their main diet, the mussel Mytilus edulis, exist. We hypothesised that changes in nutrients and water temperature in the Wadden Sea had an effect on the ecosystem affecting the size of mussel stocks, the principal food item for eiders, thereby influencing the number of breeding eiders in the Christiansø colony. METHODOLOGY/PRINCIPAL FINDINGS: A positive relationship between the amount of fertilizer used by farmers and the concentration of phosphorus in the Wadden Sea (with a time lag of one year) allowed analysis of the predictions concerning effects of nutrients for the period 1925-2010. (1) Increasing amounts of fertilizer used in agriculture increased the amount of nutrients in the marine environment, thereby increasing the mussel stocks in the Wadden Sea. (2) The number of eiders at Christiansø increased when the amount of fertilizer increased. Finally, (3) the number of eiders in the colony at Christiansø increased with the size of the mussel stocks in the Wadden Sea. CONCLUSIONS/SIGNIFICANCE: The trend in the number of eiders at Christiansø is representative of the entire flyway population, and since nutrient reduction in the marine environment occurs in most parts of Northwest Europe, we hypothesize that this environmental parameter is involved in the overall regulation of the Baltic/Wadden Sea eider population during recent decades.

  2. Exploration of large, rare copy number variants associated with psychiatric and neurodevelopmental disorders in individuals with anorexia nervosa.

    Science.gov (United States)

    Yilmaz, Zeynep; Szatkiewicz, Jin P; Crowley, James J; Ancalade, NaEshia; Brandys, Marek K; van Elburg, Annemarie; de Kovel, Carolien G F; Adan, Roger A H; Hinney, Anke; Hebebrand, Johannes; Gratacos, Monica; Fernandez-Aranda, Fernando; Escaramis, Georgia; Gonzalez, Juan R; Estivill, Xavier; Zeggini, Eleftheria; Sullivan, Patrick F; Bulik, Cynthia M

    2017-08-01

    Anorexia nervosa (AN) is a serious and heritable psychiatric disorder. To date, studies of copy number variants (CNVs) have been limited and inconclusive because of small sample sizes. We conducted a case-only genome-wide CNV survey in 1,983 female AN cases included in the Genetic Consortium for Anorexia Nervosa. Following stringent quality control procedures, we investigated whether pathogenic CNVs in regions previously implicated in psychiatric and neurodevelopmental disorders were present in AN cases. We observed two instances of the well-established pathogenic CNVs in AN cases. In addition, one case had a deletion in the 13q12 region, overlapping with a deletion reported previously in two AN cases. As a secondary aim, we also examined our sample for CNVs over 1 Mbp in size. Out of the 40 instances of such large CNVs that were not previously implicated in AN or neuropsychiatric phenotypes, two contained genes with previous neuropsychiatric associations, and only five had no associated reports in public CNV databases. Although ours is the largest study of its kind in AN, larger datasets are needed to comprehensively assess the role of CNVs in the etiology of AN.

  3. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations

    Science.gov (United States)

    Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto

    2018-04-01

    Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two coaxial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4×10^4, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Ro_t = −0.0909 to Ro_t = 0.3 are simulated. First, the LES of TC is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of c_s = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, “over-damped” LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.

  4. Some relations between entropy and approximation numbers

    Institute of Scientific and Technical Information of China (English)

    郑志明

    1999-01-01

    A general result is obtained which relates the entropy numbers of compact maps on Hilbert space to their approximation numbers. Compared with previous work in this area, it is particularly convenient for dealing with cases where the approximation numbers decay rapidly. An estimate relating the entropy and approximation numbers of noncompact maps is also given.

  5. Laboratory Study of Magnetorotational Instability and Hydrodynamic Stability at Large Reynolds Numbers

    Science.gov (United States)

    Ji, H.; Burin, M.; Schartman, E.; Goodman, J.; Liu, W.

    2006-01-01

    Two plausible mechanisms have been proposed to explain rapid angular momentum transport during accretion processes in astrophysical disks: nonlinear hydrodynamic instabilities and the magnetorotational instability (MRI). A laboratory experiment in a short Taylor-Couette flow geometry has been constructed in Princeton to study both mechanisms, with novel features for better control of the boundary-driven secondary flows (Ekman circulation). Initial results on hydrodynamic stability have shown negligible angular momentum transport in Keplerian-like flows with Reynolds numbers approaching one million, casting strong doubt on the viability of nonlinear hydrodynamic instability as a source for accretion disk turbulence.

  6. On the strong law of large numbers for $\varphi$-subgaussian random variables

    OpenAIRE

    Zajkowski, Krzysztof

    2016-01-01

    For $p\ge 1$ let $\varphi_p(x)=x^2/2$ if $|x|\le 1$ and $\varphi_p(x)=1/p|x|^p-1/p+1/2$ if $|x|>1$. For a random variable $\xi$ let $\tau_{\varphi_p}(\xi)$ denote $\inf\{a\ge 0:\;\forall_{\lambda\in\mathbb{R}}\; \ln\mathbb{E}\exp(\lambda\xi)\le\varphi_p(a\lambda)\}$; $\tau_{\varphi_p}$ is a norm in a space $Sub_{\varphi_p}=\{\xi:\;\tau_{\varphi_p}(\xi)<\infty\}$ ... ($p>1$) there exist positive constants $c$ and $\alpha$ such that for every natural number $n$ the following inequality $\tau_{\varphi_p}(\sum_{i=1...
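
The Orlicz-type function and norm stated in the abstract above can be typeset cleanly as follows (a restatement only, not a completion of the record's truncated theorem):

```latex
\varphi_p(x) =
\begin{cases}
  x^2/2, & |x| \le 1,\\[2pt]
  \tfrac{1}{p}|x|^p - \tfrac{1}{p} + \tfrac{1}{2}, & |x| > 1,
\end{cases}
\qquad
\tau_{\varphi_p}(\xi) = \inf\Bigl\{ a \ge 0 : \forall\,\lambda \in \mathbb{R},\;
  \ln \mathbb{E}\exp(\lambda\xi) \le \varphi_p(a\lambda) \Bigr\}.
```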

  7. Comparative analyses of gene copy number and mRNA expression in GBM tumors and GBM xenografts

    Energy Technology Data Exchange (ETDEWEB)

    Hodgson, J. Graeme; Yeh, Ru-Fang; Ray, Amrita; Wang, Nicholas J.; Smirnov, Ivan; Yu, Mamie; Hariono, Sujatmi; Silber, Joachim; Feiler, Heidi S.; Gray, Joe W.; Spellman, Paul T.; Vandenberg, Scott R.; Berger, Mitchel S.; James, C. David

    2009-04-03

    Development of model systems that recapitulate the molecular heterogeneity observed among glioblastoma multiforme (GBM) tumors will expedite the testing of targeted molecular therapeutic strategies for GBM treatment. In this study, we profiled DNA copy number and mRNA expression in 21 independent GBM tumor lines maintained as subcutaneous xenografts (GBMX), and compared GBMX molecular signatures to those observed in GBM clinical specimens derived from the Cancer Genome Atlas (TCGA). The predominant copy number signature in both tumor groups was defined by chromosome-7 gain/chromosome-10 loss, a poor-prognosis genetic signature. We also observed, at frequencies similar to that detected in TCGA GBM tumors, genomic amplification and overexpression of known GBM oncogenes, such as EGFR, MDM2, CDK6, and MYCN, and novel genes, including NUP107, SLC35E3, MMP1, MMP13, and DDX1. The transcriptional signature of GBMX tumors, which was stable over multiple subcutaneous passages, was defined by overexpression of genes involved in M phase, DNA replication, and chromosome organization (MRC) and was highly similar to the poor-prognosis mitosis and cell-cycle module (MCM) in GBM. Assessment of gene expression in TCGA-derived GBMs revealed overexpression of MRC cancer genes AURKB, BIRC5, CCNB1, CCNB2, CDC2, CDK2, and FOXM1, which form a transcriptional network important for G2/M progression and/or checkpoint activation. Our study supports propagation of GBM tumors as subcutaneous xenografts as a useful approach for sustaining key molecular characteristics of patient tumors, and highlights therapeutic opportunities conferred by this GBMX tumor panel for testing targeted therapeutic strategies for GBM treatment.

  8. THE STELLAR MASS COMPONENTS OF GALAXIES: COMPARING SEMI-ANALYTICAL MODELS WITH OBSERVATION

    International Nuclear Information System (INIS)

    Liu Lei; Yang Xiaohu; Mo, H. J.; Van den Bosch, Frank C.; Springel, Volker

    2010-01-01

    We compare the stellar masses of central and satellite galaxies predicted by three independent semi-analytical models (SAMs) with observational results obtained from a large galaxy group catalog constructed from the Sloan Digital Sky Survey. In particular, we compare the stellar mass functions of centrals and satellites, the relation between total stellar mass and halo mass, and the conditional stellar mass functions, Φ(M * |M h ), which specify the average number of galaxies of stellar mass M * that reside in a halo of mass M h . The SAMs only predict the correct stellar masses of central galaxies within a limited mass range and all models fail to reproduce the sharp decline of stellar mass with decreasing halo mass observed at the low mass end. In addition, all models over-predict the number of satellite galaxies by roughly a factor of 2. The predicted stellar mass in satellite galaxies can be made to match the data by assuming that a significant fraction of satellite galaxies are tidally stripped and disrupted, giving rise to a population of intra-cluster stars (ICS) in their host halos. However, the amount of ICS thus predicted is too large compared to observation. This suggests that current galaxy formation models still have serious problems in modeling star formation in low-mass halos.

  9. Advanced manipulator system for large hot cells

    International Nuclear Information System (INIS)

    Vertut, J.; Moreau, C.; Brossard, J.P.

    1981-01-01

    Large hot cells can be approached as extrapolations of smaller ones, being wider, higher or longer while keeping the same concept of mechanical master-slave manipulators and high-density windows. This concept leads to a large number of working places and corresponding equipment, with a number of penetrations through the biological protection. When the large cell does not need permanent operation of a number of work places, as in particular to serve PIE machines and maintain the facility, use of servo manipulators with a large supporting unit and extensive use of television appears optimal. The advance on MA 23 and supports will be described, including the extra facilities related to manipulator introduction and maintenance. The possibility to combine a powered manipulator and MA 23 (single or pair) on the same boom crane system will be described. An advanced control system to bring the minimal dead time to control support movement, associated with the master-slave arm operation, is under development. The general television system includes overview cameras, associated with the limited number of windows, and manipulator cameras. A special new system will be described which brings automatic control of the manipulator cameras and saves operator load and dead time. Full scale tests with MA 23 and support will be discussed. (author)

  10. Noise radiated by low-Reynolds number flows past a hemisphere at Ma = 0.3

    Science.gov (United States)

    Yao, Hua-Dong; Davidson, Lars; Eriksson, Lars-Erik

    2017-07-01

    Flows past a hemisphere and their noise generation are investigated at Reynolds numbers (Re) of 1000 and 5000. The Mach number is 0.3. The flows are computed using large eddy simulation. The noise is computed using the Ffowcs Williams and Hawkings Formulation 1C (F1C). An integral surface with an open end is defined for the F1C. The end surface is removed to reduce the numerical contamination that is introduced by vortices passing this surface. The contamination cannot be removed completely, however, since a discontinuity of the flow quantities still exists at the open surface boundary. This problem is solved using a surface correction method, in which a buffer zone is set up at the end of the integral surface. The transformation of flow structures with Re is explored. Large coherent structures are observable at low Re, whereas they diminish at high Re, where a large number of small-scale turbulent vortices occur instead. These characteristics of the flows are found to have an important influence on the noise generation, as reflected in the noise spectra. In the flows studied in this work, the fluctuating pressure on the walls is a negligible noise contributor compared with the wake.

  11. Three-Dimensional Interaction of a Large Number of Dense DEP Particles on a Plane Perpendicular to an AC Electrical Field

    Directory of Open Access Journals (Sweden)

    Chuanchuan Xie

    2017-01-01

    Full Text Available The interaction of dielectrophoresis (DEP) particles in an electric field has been observed in many experiments, known as the “particle chains phenomenon”. However, studies in 3D models (spherical particles) are rarely reported due to their complexity and significant computational cost. In this paper, we employed the iterative dipole moment (IDM) method to study the 3D interaction of a large number of dense DEP particles randomly distributed on a plane perpendicular to a uniform alternating current (AC) electric field in a bounded or unbounded space. The numerical results indicated that the particles cannot move out of the initial plane. Similar particles (either all positive or all negative DEP particles) always repelled each other and did not form a chain. Dissimilar particles (a mixture of positive and negative DEP particles) always attracted each other and formed particle chains consisting of alternately arranged positive and negative DEP particles. The particle chains can form a multitude of patterns depending on the initial particle distribution, the electric properties of the particles/fluid, the particle sizes and the number of particles. It is also found that the particle chain patterns can be effectively manipulated by tuning the frequency of the AC field, and that an almost uniform distribution of particles in a bounded plane chip can be achieved when all of the particles are similar, which may have potential applications in particle manipulation in microfluidics.

  12. Dam risk reduction study for a number of large tailings dams in Ontario

    Energy Technology Data Exchange (ETDEWEB)

    Verma, N. [AMEC Earth and Environmental Ltd., Mississauga, ON (Canada); Small, A. [AMEC Earth and Environmental Ltd., Fredericton, NB (Canada); Martin, T. [AMEC Earth and Environmental, Burnaby, BC (Canada); Cacciotti, D. [AMEC Earth and Environmental Ltd., Sudbury, ON (Canada); Ross, T. [Vale Inco Ltd., Sudbury, ON (Canada)

    2009-07-01

    This paper discussed a risk reduction study conducted for 10 large tailings dams located at a central tailings facility in Ontario. Located near large industrial and urban developments, the tailings dams were built using an upstream method of construction that did not involve beach compaction or the provision of under-drainage. The study provided a historical background for the dams and presented results from investigations and instrumentation data. The methods used to develop the dam configurations were discussed, and remedial measures and risk assessment measures used on the dams were reviewed. The aim of the study was to address key sources of risk, which include the presence of high pore pressures and hydraulic gradients; the potential for liquefaction; slope instability; and the potential for overtopping. A borehole investigation was conducted and piezocone probes were used to obtain continuous data and determine soil and groundwater conditions. The study identified that the lower portions of the dam slopes were of concern: erosion gullies could lead to larger scale failures, and elevated pore pressures could lead to the risk of seepage breakouts. It was concluded that remedial measures are now being conducted to ensure slope stability. 6 refs., 1 tab., 6 figs.

  13. Minimum number and best combinations of harvests to evaluate accessions of tomato plants from germplasm banks

    Directory of Open Access Journals (Sweden)

    Flávia Barbosa Abreu

    2006-01-01

    Full Text Available This study presents the minimum number and the best combination of tomato harvests needed to compare tomato accessions from germplasm banks. Number and weight of fruit in tomato plants are important as auxiliary traits in the evaluation of germplasm banks and should be studied simultaneously with other desirable characteristics such as pest and disease resistance, improved flavor and early production. Brazilian tomato breeding programs should consider not only the number of fruit but also fruit size, because Brazilian consumers value fruit that are homogeneous, large and heavy. Our experiment was a randomized block design with three replicates of 32 tomato accessions from the Vegetable Germplasm Bank (Banco de Germoplasma de Hortaliças) at the Federal University of Viçosa, Minas Gerais, Brazil, plus two control cultivars (Debora Plus and Santa Clara). Nine harvests were evaluated for four production-related traits. The results indicate that six successive harvests are sufficient to compare tomato genotypes and germplasm bank accessions. Evaluation of genotypes according to the number of fruit requires analysis from the second to the seventh harvest. Evaluation of fruit weight by genotype requires analysis from the fourth to the ninth harvest. Evaluation of both number and weight of fruit requires analysis from the second to the ninth harvest.

  14. Small on the Left, Large on the Right: Numbers Orient Visual Attention onto Space in Preverbal Infants

    Science.gov (United States)

    Bulf, Hermann; de Hevia, Maria Dolores; Macchi Cassia, Viola

    2016-01-01

    Numbers are represented as ordered magnitudes along a spatially oriented number line. While culture and formal education modulate the direction of this number-space mapping, it is a matter of debate whether its emergence is entirely driven by cultural experience. By registering 8-9-month-old infants' eye movements, this study shows that numerical…

  15. A comparative study of near-wall turbulence in high and low Reynolds number boundary layers

    International Nuclear Information System (INIS)

    Metzger, M.M.; Klewicki, J.C.

    2001-01-01

    The present study explores the effects of Reynolds number, over three orders of magnitude, in the viscous wall region of a turbulent boundary layer. Complementary experiments were conducted both in the boundary layer wind tunnel at the University of Utah and in the atmospheric surface layer which flows over the salt flats of the Great Salt Lake Desert in western Utah. The Reynolds numbers, based on momentum deficit thickness, of the two flows were R_θ = 2×10^3 and R_θ ≈ 5×10^6, respectively. High-resolution velocity measurements were obtained from a five-element vertical rake of hot-wires spanning the buffer region. In both the low and high R_θ flows, the length of the hot-wires measured less than 6 viscous units. To facilitate reliable comparisons, both the laboratory and field experiments employed the same instrumentation and procedures. Data indicate that, even in the immediate vicinity of the surface, strong influences from low-frequency motions at high R_θ produce noticeable Reynolds number differences in the streamwise velocity and velocity gradient statistics. In particular, the peak value in the root mean square streamwise velocity profile, when normalized by viscous scales, was found to exhibit a logarithmic dependence on Reynolds number. The mean streamwise velocity profile, on the other hand, appears to be essentially independent of Reynolds number. Spectra and spatial correlation data suggest that low-frequency motions at high Reynolds number engender intensified local convection velocities which affect the structure of both the velocity and velocity gradient fields. Implications for turbulent production mechanisms and coherent motions in the buffer layer are discussed.

  16. On distributed wavefront reconstruction for large-scale adaptive optics systems.

    Science.gov (United States)

    de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel

    2016-05-01

    The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach with a speedup that scales quadratically with the number of partitions. The D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds CuRe-D for low levels of decomposition, and D-SABRE proved to be more robust to variations in the loop gain.

  17. Maps on large-scale air quality concentrations in the Netherlands

    International Nuclear Information System (INIS)

    Velders, G.J.M.; Aben, J.M.M.; Beck, J.P.; Blom, W.F.; Van Dam, J.D.; Elzenga, H.E.; Geilenkirchen, G.P.; Hoen, A.; Jimmink, B.A.; Matthijsen, J.; Peek, C.J.; Van Velze, K.; Visser, H.; De Vries, W.J.

    2007-01-01

    emission factors and the number of kilometers driven by passenger cars used here result in lower NOx and PM10 emissions. Furthermore, the PM10 emissions associated with the storage and handling of dry bulk material have been adjusted, resulting in a 50% reduction compared with the emissions used last year. Estimates of local NO2 concentrations indicate that, on the basis of standing and proposed policies, the number of locations where the NO2 limit value is exceeded is expected to decrease strongly close to motorways and in cities. The number of locations in cities where the NO2 limit value is exceeded is expected to decrease by more than three-quarters in 2015 compared to 2006. Estimates of local PM10 concentrations show that the number of locations close to motorways and in cities where the limit value for daily averaged PM10 concentrations is exceeded is expected to decrease by about three-quarters in 2010 compared to 2006. Future exceedances of limit values also depend on the effects of local measures. The concentration maps are available online at http://www.mnp.nl/gcn.html

  18. A large-scale survey of genetic copy number variations among Han Chinese residing in Taiwan

    Directory of Open Access Journals (Sweden)

    Wu Jer-Yuarn

    2008-12-01

    Full Text Available Abstract Background Copy number variations (CNVs) have recently been recognized as important structural variations in the human genome. CNVs can affect gene expression and thus may contribute to phenotypic differences. The copy number inferring tool (CNIT) is an effective hidden Markov model-based algorithm for estimating allele-specific copy number and predicting chromosomal alterations from single nucleotide polymorphism microarrays. The CNIT algorithm, which was constructed using data from 270 HapMap multi-ethnic individuals, was applied to identify CNVs from 300 unrelated Han Chinese individuals in Taiwan. Results Using stringent selection criteria, 230 regions with variable copy numbers were identified in the Han Chinese population; 133 (57.83%) had been reported previously, and 64 displayed greater than 1% CNV allele frequency. The average size of the CNV regions was 322 kb (ranging from 1.48 kb to 5.68 Mb) and covered a total of 2.47% of the human genome. A total of 196 of the CNV regions were simple deletions and 27 were simple amplifications. There were 449 genes and 5 microRNAs within these CNV regions; some of these genes are known to be associated with diseases. Conclusion The identified CNVs are characteristic of the Han Chinese population and should be considered when genetic studies are conducted. The CNV distribution in the human genome is still poorly characterized, and there is much diversity among different ethnic populations.

  19. Comparing the happiness effects of real and on-line friends.

    Directory of Open Access Journals (Sweden)

    John F Helliwell

    Full Text Available A recent large Canadian survey permits us to compare face-to-face ('real-life') and on-line social networks as sources of subjective well-being. The sample of 5,000 is drawn randomly from an on-line pool of respondents, a group well placed to have and value on-line friendships. We find three key results. First, the number of real-life friends is positively correlated with subjective well-being (SWB) even after controlling for income, demographic variables and personality differences. Doubling the number of friends in real life has an equivalent effect on well-being as a 50% increase in income. Second, the size of online networks is largely uncorrelated with subjective well-being. Third, we find that real-life friends are much more important for people who are single, divorced, separated or widowed than they are for people who are married or living with a partner. Findings from large international surveys (the European Social Surveys 2002-2008) are used to confirm the importance of real-life social networks to SWB; they also indicate a significantly smaller value of social networks to married or partnered couples.

  20. Non-parametric co-clustering of large scale sparse bipartite networks on the GPU

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Mørup, Morten; Hansen, Lars Kai

    2011-01-01

    of row and column clusters from a hypothesis space of an infinite number of clusters. To reach large scale applications of co-clustering we exploit that parameter inference for co-clustering is well suited for parallel computing. We develop a generic GPU framework for efficient inference on large scale...... sparse bipartite networks and achieve a speedup of two orders of magnitude compared to estimation based on conventional CPUs. In terms of scalability we find for networks with more than 100 million links that reliable inference can be achieved in less than an hour on a single GPU. To efficiently manage...

  1. IEEE Standard for Floating Point Numbers

    Indian Academy of Sciences (India)

    IAS Admin

    Floating point numbers are an important data type in computation which is used ... quite large! Integers are ... exp, the value of the exponent will be taken as (exp – 127). The ... bit which is truncated is 1, add 1 to the least significant bit, else.
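
The biased-exponent convention mentioned in the snippet above (stored field exp, effective exponent exp − 127 for single precision) can be checked directly. This is a minimal illustration of the IEEE 754 binary32 layout, not code from the article:

```python
import struct

def decode_float32(x):
    """Unpack an IEEE 754 single-precision float into (sign, exponent, fraction).

    The stored 8-bit exponent field is biased by 127, so the effective
    exponent is (exp - 127), as the snippet above notes.
    """
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31                 # 1 sign bit
    exp_field = (bits >> 23) & 0xFF   # 8 exponent bits, biased by 127
    fraction = bits & 0x7FFFFF        # 23 fraction bits; leading 1 is implicit
    return sign, exp_field - 127, fraction

# 1.0 is stored with exponent field 127 (effective exponent 0) and zero fraction
print(decode_float32(1.0))   # (0, 0, 0)
print(decode_float32(-2.0))  # (1, 1, 0)
```

Note that this decodes normalized values only; zeros, subnormals, infinities and NaNs use the reserved exponent fields 0 and 255.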

  2. The MIXMAX random number generator

    Science.gov (United States)

    Savvidy, Konstantin G.

    2015-11-01

    In this paper, we study the randomness properties of unimodular matrix random number generators. Under well-known conditions, these discrete-time dynamical systems have the highly desirable K-mixing properties which guarantee high quality random numbers. It is found that some widely used random number generators have poor Kolmogorov entropy and consequently fail in empirical tests of randomness. These tests show that the lowest acceptable value of the Kolmogorov entropy is around 50. Next, we provide a solution to the problem of determining the maximal period of unimodular matrix generators of pseudo-random numbers. We formulate the necessary and sufficient condition to attain the maximum period and present a family of specific generators in the MIXMAX family with superior performance and excellent statistical properties. Finally, we construct three efficient algorithms for operations with the MIXMAX matrix, which is a multi-dimensional generalization of the famous cat-map: the first computes multiplication by the MIXMAX matrix in O(N) operations; the second recursively computes its characteristic polynomial in O(N^2) operations; and the third applies skips of a large number of steps S to the sequence in O(N^2 log(S)) operations.
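
The state update of the matrix generators studied above is a recurrence v ← M v (mod p) with a unimodular matrix M (determinant ±1). The 3×3 matrix and small modulus below are invented stand-ins for illustration; they are NOT the actual MIXMAX matrix or its modulus:

```python
# Toy matrix-congruential generator of the kind the abstract studies.
# The matrix and modulus are made up for the demo; they are not MIXMAX's.
P = 2**13 - 1  # small Mersenne prime for the demo
M = [[1, 1, 1],
     [1, 2, 2],
     [1, 2, 3]]  # determinant 1, hence unimodular

def step(v):
    """One iteration of the recurrence v <- M v (mod P)."""
    return [sum(M[i][j] * v[j] for j in range(3)) % P for i in range(3)]

v = [1, 2, 3]   # arbitrary nonzero seed
print(step(v))  # [6, 11, 14]
```

In a real generator the period and mixing quality depend on the spectrum of M modulo p, which is what the maximal-period condition in the paper addresses.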

  3. Copy number variation analysis of matched ovarian primary tumors and peritoneal metastasis.

    Directory of Open Access Journals (Sweden)

    Joel A Malek

    Full Text Available Ovarian cancer is the most deadly gynecological cancer. The high rate of mortality is due to the large tumor burden with extensive metastatic lesions of the abdominal cavity. Despite initial chemosensitivity and improved surgical procedures, abdominal recurrence remains an issue and results in patients' poor prognosis. Transcriptomic and genetic studies have revealed significant genome pathologies in the primary tumors and yielded important information regarding carcinogenesis. There are, however, few studies on genetic alterations and their consequences in peritoneal metastatic tumors when compared to their matched ovarian primary tumors. We used high-density SNP arrays to investigate copy number variations in matched primary and metastatic ovarian cancer from 9 patients. Here we show that copy number variations acquired by ovarian tumors are significantly different between matched primary and metastatic tumors, likely reflecting different functional requirements. We show that these copy number variations clearly differentially affect specific pathways, including the JAK/STAT and cytokine signaling pathways. While many have shown complex involvement of cytokines in the ovarian cancer environment, we provide evidence that ovarian tumors have specific copy number variation differences in many of these genes.

  4. Improved estimation of the noncentrality parameter distribution from a large number of t-statistics, with applications to false discovery rate estimation in microarray data analysis.

    Science.gov (United States)

    Qu, Long; Nettleton, Dan; Dekkers, Jack C M

    2012-12-01

    Given a large number of t-statistics, we consider the problem of approximating the distribution of noncentrality parameters (NCPs) by a continuous density. This problem is closely related to the control of false discovery rates (FDR) in massive hypothesis testing applications, e.g., microarray gene expression analysis. Our methodology is similar to, but improves upon, the existing approach by Ruppert, Nettleton, and Hwang (2007, Biometrics, 63, 483-495). We provide parametric, nonparametric, and semiparametric estimators for the distribution of NCPs, as well as estimates of the FDR and local FDR. In the parametric situation, we assume that the NCPs follow a distribution that leads to an analytically available marginal distribution for the test statistics. In the nonparametric situation, we use convex combinations of basis density functions to estimate the density of the NCPs. A sequential quadratic programming procedure is developed to maximize the penalized likelihood. The smoothing parameter is selected with the approximate network information criterion. A semiparametric estimator is also developed to combine both parametric and nonparametric fits. Simulations show that, under a variety of situations, our density estimates are closer to the underlying truth and our FDR estimates are improved compared with alternative methods. Data-based simulations and the analyses of two microarray datasets are used to evaluate the performance in realistic situations. © 2012, The International Biometric Society.

  5. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  6. Talking probabilities: communicating probabilistic information with words and numbers

    NARCIS (Netherlands)

    Renooij, S.; Witteman, C.L.M.

    1999-01-01

    The number of knowledge-based systems that build on Bayesian belief networks is increasing. The construction of such a network however requires a large number of probabilities in numerical form. This is often considered a major obstacle, one of the reasons being that experts are reluctant to

  7. Cosmological Parameter Estimation with Large Scale Structure Observations

    CERN Document Server

    Di Dio, Enea; Durrer, Ruth; Lesgourgues, Julien

    2014-01-01

    We estimate the sensitivity of future galaxy surveys to cosmological parameters, using the redshift dependent angular power spectra of galaxy number counts, $C_\ell(z_1,z_2)$, calculated with all relativistic corrections at first order in perturbation theory. We pay special attention to the redshift dependence of the non-linearity scale and present Fisher matrix forecasts for Euclid-like and DES-like galaxy surveys. We compare the standard $P(k)$ analysis with the new $C_\ell(z_1,z_2)$ method. We show that for surveys with photometric redshifts the new analysis performs significantly better than the $P(k)$ analysis. For spectroscopic redshifts, however, the large number of redshift bins that would be needed to fully profit from the redshift information is severely limited by shot noise. We also identify surveys which can measure the lensing contribution and we study the monopole, $C_0(z_1,z_2)$.

  8. Weak coupling large-N transitions at finite baryon density

    NARCIS (Netherlands)

    Hollowood, Timothy J.; Kumar, S. Prem; Myers, Joyce C.

    We study thermodynamics of free SU(N) gauge theory with a large number of colours and flavours on a three-sphere, in the presence of a baryon number chemical potential. Reducing the system to a holomorphic large-N matrix integral, paying specific attention to theories with scalar flavours (squarks),

  9. Microarray MAPH: accurate array-based detection of relative copy number in genomic DNA

    Directory of Open Access Journals (Sweden)

    Chan Alan

    2006-06-01

    Full Text Available Abstract Background Current methods for measurement of copy number do not combine all the desirable qualities of convenience, throughput, economy, accuracy and resolution. In this study, to improve the throughput associated with Multiplex Amplifiable Probe Hybridisation (MAPH), we aimed to develop a modification based on the 3-Dimensional, Flow-Through Microarray Platform from PamGene International. In this new method, electrophoretic analysis of amplified products is replaced with photometric analysis of a probed oligonucleotide array. Copy number analysis of hybridised probes is based on a dual-label approach by comparing the intensity of Cy3-labelled MAPH probes amplified from test samples co-hybridised with similarly amplified Cy5-labelled reference MAPH probes. The key feature of using a hybridisation-based end point with MAPH is that discrimination of amplified probes is based on sequence and not fragment length. Results In this study we showed that microarray MAPH measurement of PMP22 gene dosage correlates well with PMP22 gene dosage determined by capillary MAPH and that copy number was accurately reported in analyses of DNA from 38 individuals, 12 of which were known to have Charcot-Marie-Tooth disease type 1A (CMT1A). Conclusion Measurement of microarray-based endpoints for MAPH appears to be of comparable accuracy to electrophoretic methods, and holds the prospect of fully exploiting the potential multiplicity of MAPH. The technology has the potential to simplify copy number assays for genes with a large number of exons, or of expanded sets of probes from dispersed genomic locations.

  10. Microarray MAPH: accurate array-based detection of relative copy number in genomic DNA.

    Science.gov (United States)

    Gibbons, Brian; Datta, Parikkhit; Wu, Ying; Chan, Alan; Armour, John A L

    2006-06-30

    Current methods for measurement of copy number do not combine all the desirable qualities of convenience, throughput, economy, accuracy and resolution. In this study, to improve the throughput associated with Multiplex Amplifiable Probe Hybridisation (MAPH) we aimed to develop a modification based on the 3-Dimensional, Flow-Through Microarray Platform from PamGene International. In this new method, electrophoretic analysis of amplified products is replaced with photometric analysis of a probed oligonucleotide array. Copy number analysis of hybridised probes is based on a dual-label approach by comparing the intensity of Cy3-labelled MAPH probes amplified from test samples co-hybridised with similarly amplified Cy5-labelled reference MAPH probes. The key feature of using a hybridisation-based end point with MAPH is that discrimination of amplified probes is based on sequence and not fragment length. In this study we showed that microarray MAPH measurement of PMP22 gene dosage correlates well with PMP22 gene dosage determined by capillary MAPH and that copy number was accurately reported in analyses of DNA from 38 individuals, 12 of which were known to have Charcot-Marie-Tooth disease type 1A (CMT1A). Measurement of microarray-based endpoints for MAPH appears to be of comparable accuracy to electrophoretic methods, and holds the prospect of fully exploiting the potential multiplicity of MAPH. The technology has the potential to simplify copy number assays for genes with a large number of exons, or of expanded sets of probes from dispersed genomic locations.
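
    The dual-label readout described above reduces to simple ratio arithmetic. A hedged Python sketch of the idea (probe names, intensities, and the two-copy control normalization are invented for illustration; this is not the authors' pipeline):

```python
# Hedged sketch: calling relative copy number from dual-label intensities.
# Each probe has a Cy3 (test) and Cy5 (reference) intensity; the test/reference
# ratio is normalized by the median ratio of control probes assumed to be
# two-copy, then doubled to yield an integer copy-number call.

def call_copy_numbers(cy3, cy5, control_probes):
    """cy3, cy5: dicts mapping probe name -> intensity.
    control_probes: probes assumed diploid (copy number 2) in the test sample."""
    ratios = {p: cy3[p] / cy5[p] for p in cy3}
    controls = sorted(ratios[p] for p in control_probes)
    mid = len(controls) // 2
    median = (controls[mid] if len(controls) % 2
              else 0.5 * (controls[mid - 1] + controls[mid]))
    # A normalized ratio of 1.0 ~ two copies; round to the nearest integer.
    return {p: round(2 * r / median) for p, r in ratios.items()}

cy3 = {"ctrl1": 100.0, "ctrl2": 110.0, "PMP22": 160.0}
cy5 = {"ctrl1": 100.0, "ctrl2": 105.0, "PMP22": 105.0}
calls = call_copy_numbers(cy3, cy5, ["ctrl1", "ctrl2"])
print(calls["PMP22"])  # 3: consistent with a CMT1A-like duplication
```

    A real analysis would of course model replicate and array-level variation rather than round a single ratio.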

  11. Estimating and comparing microbial diversity in the presence of sequencing errors

    Science.gov (United States)

    Chiu, Chun-Huo

    2016-01-01

    approach aims to compare diversity estimates for equally-large or equally-complete samples; it is based on the seamless rarefaction and extrapolation sampling curves of Hill numbers, specifically for q = 0, 1 and 2. (2) An asymptotic approach refers to the comparison of the estimated asymptotic diversity profiles. That is, this approach compares the estimated profiles for complete samples or samples whose size tends to be sufficiently large. It is based on statistical estimation of the true Hill number of any order q ≥ 0. In the two approaches, replacing the spurious singleton count by our estimated count, we can greatly reduce the positive biases associated with diversity estimates due to spurious singletons and also make fair comparisons across microbial communities, as illustrated in our simulation results and in applying our method to analyze sequencing data from viral metagenomes. PMID:26855872
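
    Both approaches rest on the same family of indices: the Hill number of order q is (Σ pᵢ^q)^(1/(1−q)), where q = 0 gives richness, the q → 1 limit gives the exponential of Shannon entropy, and q = 2 gives the inverse Simpson index. A minimal Python sketch (the abundance counts are invented; the singleton-correction step the abstract describes is not implemented here):

```python
import math

# Hill number of order q for a list of abundance counts.
# q = 0: species richness; q = 1: exp(Shannon entropy); q = 2: inverse Simpson.
def hill_number(abundances, q):
    total = sum(abundances)
    p = [x / total for x in abundances if x > 0]
    if q == 1:
        # Limit of the general formula as q -> 1.
        return math.exp(-sum(pi * math.log(pi) for pi in p))
    return sum(pi ** q for pi in p) ** (1 / (1 - q))

counts = [50, 30, 15, 4, 1]  # hypothetical OTU counts
profile = [hill_number(counts, q) for q in (0, 1, 2)]
print(profile[0])  # 5.0: five taxa present
```

    For a perfectly even community all orders coincide; four equally abundant taxa give a Hill number of 4 for every q, which is why diversity profiles flatten as evenness increases.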

  12. Large-Scale Cooperative Task Distribution on Peer-to-Peer Networks

    Science.gov (United States)

    2012-01-01

    ...disadvantages of ML-Chord are its fixed size (two layers) and limited scalability for large-scale systems. RC-Chord extends ML-Chord ... configurable before runtime. This can be improved by incorporating a distributed learning algorithm to tune the number and range of the DLoE tracking

  13. Two-group interfacial area concentration correlations of two-phase flows in large diameter pipes

    International Nuclear Information System (INIS)

    Shen, Xiuzhong; Hibiki, Takashi

    2015-01-01

    Reliable empirical correlations and models are an important means of predicting the interfacial area concentration (IAC) in two-phase flows. However, up to now, no correlation or model has been available for the prediction of the IAC in two-phase flows in large diameter pipes. This study collected an IAC experimental database of two-phase flows taken under various flow conditions in large diameter pipes and presented a systematic way to predict the IAC for two-phase flows from bubbly and cap-bubbly to churn flow in large diameter pipes by categorizing bubbles into two groups (group 1: spherical and distorted bubbles; group 2: cap bubbles). Correlations were developed to predict the group-1 void fraction from the void fraction of all bubbles. The IAC contribution from group-1 bubbles was modeled by using the dominant parameters of group-1 bubble void fraction and Reynolds number, based on the parameter-dependent analysis of Hibiki and Ishii (2001, 2002) using one-dimensional bubble number density and interfacial area transport equations. A new drift velocity correlation for two-phase flow with large cap bubbles in large diameter pipes was derived in this study. By comparing the newly derived drift velocity correlation with the existing drift velocity correlation of Kataoka and Ishii (1987) for large diameter pipes and using the characteristics of the representative bubbles among the group-2 bubbles, we developed a model of the IAC and bubble size for group-2 cap bubbles. The developed models for estimating the IAC were compared with the entire collected database. A reasonable agreement was obtained, with average relative errors of ±28.1%, ±54.4% and ±29.6% for group-1, group-2 and all bubbles, respectively. (author)

  14. THE DECAY OF A WEAK LARGE-SCALE MAGNETIC FIELD IN TWO-DIMENSIONAL TURBULENCE

    Energy Technology Data Exchange (ETDEWEB)

    Kondić, Todor; Hughes, David W.; Tobias, Steven M., E-mail: t.kondic@leeds.ac.uk [Department of Applied Mathematics, University of Leeds, Leeds LS2 9JT (United Kingdom)

    2016-06-01

    We investigate the decay of a large-scale magnetic field in the context of incompressible, two-dimensional magnetohydrodynamic turbulence. It is well established that a very weak mean field, of strength significantly below the equipartition value, induces a small-scale field strong enough to inhibit the process of turbulent magnetic diffusion. In light of ever-increasing computer power, we revisit this problem to investigate fluid and magnetic Reynolds numbers that were previously inaccessible. Furthermore, by exploiting the relation between the turbulent diffusion of the magnetic potential and that of the magnetic field, we are able to calculate the turbulent magnetic diffusivity extremely accurately through the imposition of a uniform mean magnetic field. We confirm the strong dependence of the turbulent diffusivity on the product of the magnetic Reynolds number and the energy of the large-scale magnetic field. We compare our findings with various theoretical descriptions of this process.

  15. Misspecified poisson regression models for large-scale registry data: inference for 'large n and small p'.

    Science.gov (United States)

    Grøn, Randi; Gerds, Thomas A; Andersen, Per K

    2016-03-30

    Poisson regression is an important tool in register-based epidemiology where it is used to study the association between exposure variables and event rates. In this paper, we will discuss the situation with 'large n and small p', where n is the sample size and p is the number of available covariates. Specifically, we are concerned with modeling options when there are time-varying covariates that can have time-varying effects. One problem is that tests of the proportional hazards assumption, of no interactions between exposure and other observed variables, or of other modeling assumptions have high power due to the large sample size and will often indicate statistical significance even for numerically small deviations that are unimportant for the subject matter. Another problem is that information on important confounders may be unavailable. In practice, this situation may lead to simple working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods are illustrated using data from the Danish national registries investigating the diabetes incidence for individuals treated with antipsychotics compared with the general unexposed population. Copyright © 2015 John Wiley & Sons, Ltd.
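
    The aggregated-data route mentioned above builds on the textbook Poisson comparison of two event rates. A hedged sketch (the event counts and person-years are invented; this is the standard Wald interval, not the paper's sensitivity analysis or semi-parametric bootstrap):

```python
import math

# Aggregated-data Poisson comparison: with d events over T person-years in
# exposed and unexposed groups, the rate ratio and a Wald 95% CI follow from
# the Poisson log-likelihood, with SE(log RR) = sqrt(1/d1 + 1/d0).
def rate_ratio(d_exposed, t_exposed, d_unexposed, t_unexposed):
    rr = (d_exposed / t_exposed) / (d_unexposed / t_unexposed)
    se = math.sqrt(1 / d_exposed + 1 / d_unexposed)  # SE of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

# Invented numbers: 120 events over 10,000 person-years vs. 800 over 100,000.
rr, ci = rate_ratio(120, 10_000, 800, 100_000)
print(round(rr, 2))  # 1.5
```

    With registry-scale n, even such a simple aggregated comparison yields a narrow interval, which is exactly why misspecification rather than sampling error dominates the inference problem the paper discusses.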

  16. TIME DISTRIBUTIONS OF LARGE AND SMALL SUNSPOT GROUPS OVER FOUR SOLAR CYCLES

    International Nuclear Information System (INIS)

    Kilcik, A.; Yurchyshyn, V. B.; Abramenko, V.; Goode, P. R.; Cao, W.; Ozguc, A.; Rozelot, J. P.

    2011-01-01

    Here we analyze solar activity by focusing on time variations of the number of sunspot groups (SGs) as a function of their modified Zurich class. We analyzed data for solar cycles 20-23 by using Rome (cycles 20 and 21) and Learmonth Solar Observatory (cycles 22 and 23) SG numbers. All SGs recorded during these time intervals were separated into two groups. The first group includes small SGs (A, B, C, H, and J classes by Zurich classification), and the second group consists of large SGs (D, E, F, and G classes). We then calculated small and large SG numbers from their daily mean numbers as observed on the solar disk during a given month. We report that the time variations of small and large SG numbers are asymmetric except for solar cycle 22. In general, large SG numbers appear to reach their maximum in the middle of the solar cycle (phases 0.45-0.5), while the international sunspot numbers and the small SG numbers generally peak much earlier (solar cycle phases 0.29-0.35). Moreover, the 10.7 cm solar radio flux, the facular area, and the maximum coronal mass ejection speed show better agreement with the large SG numbers than they do with the small SG numbers. Our results suggest that the large SG numbers are more likely to shed light on solar activity and its geophysical implications. Our findings may also influence our understanding of long-term variations of the total solar irradiance, which is thought to be an important factor in the Sun-Earth climate relationship.

  17. Talking probabilities: communicating probalistic information with words and numbers

    NARCIS (Netherlands)

    Renooij, S.; Witteman, C.L.M.

    1999-01-01

    The number of knowledge-based systems that build on Bayesian belief networks is increasing. The construction of such a network however requires a large number of probabilities in numerical form. This is often considered a major obstacle, one of the reasons being that experts are reluctant to provide

  18. Experimental observation of pulsating instability under acoustic field in downward-propagating flames at large Lewis number

    KAUST Repository

    Yoon, Sung Hwan

    2017-10-12

    According to previous theory, pulsating propagation in a premixed flame only appears when the reduced Lewis number, β(Le − 1), is larger than a critical value (Sivashinsky criterion: 4(1 + √3) ≈ 11), where β represents the Zel'dovich number (for general premixed flames, β ≈ 10), which requires a Lewis number Le > 2.1. However, few experimental observations have been reported because the critical reduced Lewis number for the onset of pulsating instability is beyond what can be reached in experiments. Furthermore, the coupling with the unavoidable hydrodynamic instability limits the observation of pure pulsating instabilities in flames. Here, we describe a novel method to observe the pulsating instability. We utilize a thermoacoustic field caused by interaction between the heat release and acoustic pressure fluctuations of downward-propagating premixed flames in a tube to enhance conductive heat loss at the tube wall and radiative heat loss at the open end of the tube due to extended flame residence time by diminished flame surface area, i.e., a flat flame. The thermoacoustic field allowed pure observation of the pulsating motion since the primary acoustic force suppressed the intrinsic hydrodynamic instability resulting from thermal expansion. By employing this method, we have provided new experimental observations of the pulsating instability for premixed flames. The Lewis number (i.e., Le ≈ 1.86) was less than the critical value suggested previously.

  19. Comparative Study of Surface-lattice-site Resolved Neutralization of Slow Multicharged Ions during Large-angle Quasi-binary Collisions with Au(110): Simulation and Experiment

    International Nuclear Information System (INIS)

    Meyer, F.W.

    2001-01-01

    In this article we extend our earlier studies of the azimuthal dependences of low energy projectiles scattered in large angle quasi-binary collisions from Au(110). Measurements are presented for 20 keV Ar9+ at normal incidence, which are compared with our earlier measurements for this ion at 5 keV and 10° incidence angle. A deconvolution procedure based on MARLOWE simulation results carried out at both energies provides information about the energy dependence of projectile neutralization during interactions just with the atoms along the top ridge of the reconstructed Au(110) surface corrugation, in comparison to, e.g., interactions with atoms lying on the sidewalls. To test the sensitivity of the agreement between the MARLOWE results and the experimental measurements, we show simulation results obtained for a non-reconstructed Au(110) surface with 20 keV Ar projectiles, and for different scattering potentials that are intended to simulate the effects on the scattering trajectory of a projectile inner shell vacancy surviving the binary collision. In addition, simulation results are shown for a number of different total scattering angles, to illustrate their utility in finding optimum values for this parameter prior to the actual measurements.

  20. Small numbers are sensed directly, high numbers constructed from size and density.

    Science.gov (United States)

    Zimmermann, Eckart

    2018-04-01

    Two theories compete to explain how we estimate the numerosity of visual object sets. The first suggests that the apparent numerosity is derived from an analysis of more low-level features like the size and density of the set. The second theory suggests that numbers are sensed directly. Consistent with the latter claim is the existence of neurons in parietal cortex which are specialized for processing the numerosity of elements in the visual scene. However, recent evidence suggests that only low numbers can be sensed directly whereas the perception of high numbers is supported by the analysis of low-level features. Processing of low and high numbers, being located at different levels of the neural hierarchy, should involve different receptive field sizes. Here, I tested this idea with visual adaptation. I measured the spatial spread of number adaptation for low and high numerosities. A focused adaptation spread for high numerosities suggested the involvement of early neural levels where receptive fields are comparably small, and the broad spread for low numerosities was consistent with processing by number neurons, which have larger receptive fields. These results provide evidence for the claim that different mechanisms exist for generating the perception of visual numerosity. Whereas low numbers are sensed directly as a primary visual attribute, the estimation of high numbers likely depends on the area size over which the objects are spread. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Trainable estimators for indirect people counting : a comparative study

    NARCIS (Netherlands)

    Acampora, G.; Loia, V.; Percannella, G.; Vento, M.

    2011-01-01

    Estimating the number of people in a scene is a very relevant issue due to the possibility of using it in a large number of contexts where it is necessary to automatically monitor an area for security/safety reasons, for economic purposes, etc. The large number of people counting approaches

  2. International comparative studies of education and large scale change

    NARCIS (Netherlands)

    Howie, Sarah; Plomp, T.; Bascia, Nina; Cumming, Alister; Datnow, Amanda; Leithwood, Kenneth; Livingstone, David

    2005-01-01

    The development of international comparative studies of educational achievements dates back to the early 1960s and was made possible by developments in sample survey methodology, group testing techniques, test development, and data analysis (Husén & Tuijnman, 1994, p. 6). The studies involve

  3. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that

  4. Podocyte Number in Children and Adults: Associations with Glomerular Size and Numbers of Other Glomerular Resident Cells

    Science.gov (United States)

    Puelles, Victor G.; Douglas-Denton, Rebecca N.; Cullen-McEwen, Luise A.; Li, Jinhua; Hughson, Michael D.; Hoy, Wendy E.; Kerr, Peter G.

    2015-01-01

    Increases in glomerular size occur with normal body growth and in many pathologic conditions. In this study, we determined associations between glomerular size and numbers of glomerular resident cells, with a particular focus on podocytes. Kidneys from 16 male Caucasian-Americans without overt renal disease, including 4 children (≤3 years old) to define baseline values of early life and 12 adults (≥18 years old), were collected at autopsy in Jackson, Mississippi. We used a combination of immunohistochemistry, confocal microscopy, and design-based stereology to estimate individual glomerular volume (IGV) and numbers of podocytes, nonepithelial cells (NECs; tuft cells other than podocytes), and parietal epithelial cells (PECs). Podocyte density was calculated. Data are reported as medians and interquartile ranges (IQRs). Glomeruli from children were small and contained 452 podocytes (IQR=335–502), 389 NECs (IQR=265–498), and 146 PECs (IQR=111–206). Adult glomeruli contained significantly more cells than glomeruli from children, including 558 podocytes (IQR=431–746; P<0.01), 1383 NECs (IQR=998–2042; P<0.001), and 367 PECs (IQR=309–673; P<0.001). However, large adult glomeruli showed markedly lower podocyte density (183 podocytes per 10⁶ µm³) than small glomeruli from adults and children (932 podocytes per 10⁶ µm³; P<0.001). In conclusion, large adult glomeruli contained more podocytes than small glomeruli from children and adults, raising questions about the origin of these podocytes. The increased number of podocytes in large glomeruli does not match the increase in glomerular size observed in adults, resulting in relative podocyte depletion. This may render hypertrophic glomeruli susceptible to pathology. PMID:25568174
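
    The relative-depletion argument is simple arithmetic: density is a cell count divided by tuft volume. A hedged sketch (the counts and IGV values below are invented to mimic the reported pattern, not the study's raw data):

```python
# Podocyte density as cells per 10^6 um^3 of individual glomerular volume (IGV).
# Inputs are hypothetical, chosen to illustrate why a higher absolute podocyte
# count can still mean a much lower density in a hypertrophic glomerulus.
def podocyte_density(n_podocytes, igv_um3):
    return n_podocytes / igv_um3 * 1e6

small = podocyte_density(452, 0.5e6)  # hypothetical small glomerulus
large = podocyte_density(558, 3.0e6)  # hypothetical large adult glomerulus
print(round(small), round(large))  # 904 186: more podocytes, far lower density
```

    The count rises by roughly a quarter while the volume rises six-fold, so density collapses, which is the mismatch the abstract describes.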

  5. Needs assessment and the organization of eldercare provision in the modern welfare state – a comparative perspective

    DEFF Research Database (Denmark)

    Hansen, Morten Balle

    Comparative studies of home care, eldercare and social care generally indicate that a large number of industrialized countries are facing common challenges. These challenges are caused by the demographical developments of an aging population, changed labor market conditions and changed family...

  6. A large scale survey reveals that chromosomal copy-number alterations significantly affect gene modules involved in cancer initiation and progression

    Directory of Open Access Journals (Sweden)

    Cigudosa Juan C

    2011-05-01

    Full Text Available Abstract Background Recent observations point towards the existence of a large number of neighborhoods composed of functionally-related gene modules that lie together in the genome. This local component in the distribution of functionality across chromosomes probably affects the chromosomal architecture itself by limiting the possibilities in which genes can be arranged and distributed across the genome. As a direct consequence of this fact, it is therefore presumable that diseases such as cancer, harboring DNA copy number alterations (CNAs), will have a symptomatology strongly dependent on modules of functionally-related genes rather than on a unique "important" gene. Methods We carried out a systematic analysis of more than 140,000 observations of CNAs in cancers and searched for enrichments in gene functional modules associated with high frequencies of loss or gain. Results The analysis of CNAs in cancers clearly demonstrates the existence of a significant pattern of loss of gene modules functionally related to cancer initiation and progression, along with the amplification of modules of genes related to unspecific defense against xenobiotics (probably chemotherapeutic agents). With the extension of this analysis to an Array-CGH dataset (glioblastomas from The Cancer Genome Atlas) we demonstrate the validity of this approach to investigate the functional impact of CNAs. Conclusions The presented results indicate promising clinical and therapeutic implications. Our findings also directly point to the necessity of adopting a function-centric, rather than gene-centric, view in the understanding of phenotypes or diseases harboring CNAs.

  7. Comparative Dermatoglyphic Study between Autistic Patients and Normal People in Iran

    OpenAIRE

    Mansoureh Kazemi; Mohammad Reza Fayyazi-Bordbar; Nasser Mahdavi-Shahri

    2017-01-01

    Autism is a neurodevelopmental disorder originating in early childhood; nevertheless, it is often not diagnosed until later in life. In addition to heredity, environmental factors are also of great significance in the etiology of the disease. Dermatoglyphic patterns, albeit varied, remain stable for a lifetime and yield a large number of patterns upon examination. Studies have shown a significant association between dermatoglyphics and some diseases, especially genetic ones. We compared fingerprints betwe...

  8. Comparison of a large and small-calibre tube drain for managing spontaneous pneumothoraces.

    Science.gov (United States)

    Benton, Ian J; Benfield, Grant F A

    2009-10-01

    To compare the treatment success of large- and small-bore chest drains for spontaneous pneumothoraces, we reviewed the case-notes of patients admitted to our hospital with a total of 73 pneumothoraces who were treated by trainee doctors of varying experience. Both a large- and a small-bore intercostal tube drain system were in use during the two-year period reviewed. Similar pneumothorax profiles and numbers treated were recorded for both drains, with similar drain times and numbers of successful and failed re-expansions. Successful pneumothorax resolution was the same for both drain types, and the negligible tube drain complications observed with the small-bore drain reflected previously reported experience. The large-bore drain, however, was associated with a high complication rate (32%), with more infectious complications (24%); the small-bore drain was prone to displacement (21%). There was generally no evidence of increased failure or morbidity, reflecting poorer expertise, among the non-specialist trainees managing the pneumothoraces. A practical finding, however, was that in those large pneumothoraces where re-expansion failed, the tip of the drain had not been sited at the apex of the pleural cavity, irrespective of the drain type inserted.

  9. Plume structure in high-Rayleigh-number convection

    Science.gov (United States)

    Puthenveettil, Baburaj A.; Arakeri, Jaywant H.

    2005-10-01

    Near-wall structures in turbulent natural convection at Rayleigh numbers of 10^{10} to 10^{11} at a Schmidt number of 602 are visualized by a new method of driving the convection across a fine membrane using concentration differences of sodium chloride. The visualizations show the near-wall flow to consist of sheet plumes. A wide variety of large-scale flow cells, scaling with the cross-section dimension, are observed. Multiple large-scale flow cells are seen at aspect ratio (AR) = 0.65, while only a single circulation cell is detected at AR = 0.435. The cells (or the mean wind) are driven by plumes coming together to form columns of rising lighter fluid. The wind in turn aligns the sheet plumes along the direction of shear. The mean wind direction is seen to change with time. The near-wall dynamics show plumes initiated at points, which elongate to form sheets and then merge. An increase in Rayleigh number results in a larger number of closely and regularly spaced plumes. The plume spacings show a common log-normal probability distribution function, independent of the Rayleigh number and the aspect ratio. We propose that the near-wall structure is made of laminar natural-convection boundary layers, which become unstable to give rise to sheet plumes, and show that the predictions of a model constructed on this hypothesis match the experiments. Based on these findings, we conclude that in the presence of a mean wind, the local near-wall boundary layers associated with each sheet plume in high-Rayleigh-number turbulent natural convection are likely to be of the laminar mixed convection type.

  10. How few countries will do? Comparative survey analysis from a Bayesian perspective

    Directory of Open Access Journals (Sweden)

    Joop J.C.M. Hox

    2012-07-01

    Full Text Available Meuleman and Billiet (2009) have carried out a simulation study aimed at the question of how many countries are needed for accurate multilevel SEM estimation in comparative studies. The authors concluded that a sample of 50 to 100 countries is needed for accurate estimation. Recently, Bayesian estimation methods have been introduced in structural equation modeling which should work well with much lower sample sizes. The current study reanalyzes the simulation of Meuleman and Billiet using Bayesian estimation to find the lowest number of countries needed when conducting multilevel SEM. The main result of our simulations is that a sample of about 20 countries is sufficient for accurate Bayesian estimation, which makes multilevel SEM practicable for the number of countries commonly available in large-scale comparative surveys.

  11. Comparison of Large Eddy Simulations and κ-ε Modelling of Fluid Velocity and Tracer Concentration in Impinging Jet Mixers

    Directory of Open Access Journals (Sweden)

    Wojtas Krzysztof

    2015-06-01

    Full Text Available Simulations of turbulent mixing in two types of jet mixers were carried out using two CFD models, large eddy simulation and the κ-ε model. The modelling approaches were compared with experimental data obtained by the application of particle image velocimetry and planar laser-induced fluorescence methods. Measured local microstructures of fluid velocity and inert tracer concentration can be used for direct validation of numerical simulations. The presented results show that for the higher tested values of the jet Reynolds number both models are in good agreement with the experiments. Differences between the models were observed for lower Reynolds numbers, when the effects of large-scale inhomogeneity are important.

  12. Large field-of-view transmission line resonator for high field MRI

    DEFF Research Database (Denmark)

    Zhurbenko, Vitaliy; Johannesson, Kristjan Sundgaard; Boer, Vincent

    2016-01-01

    Transmission line resonators are often a preferable choice for coils in high field magnetic resonance imaging (MRI), because they provide a number of advantages over traditional loop coils. The size of such resonators, however, is limited to shorter than half a wavelength due to high standing wave.... The achieved magnetic field distribution is compared to the conventional transmission line resonator. Imaging experiments are performed using a 7 Tesla MRI system. The developed resonator is useful for building coils with a large field-of-view....
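
    The half-wavelength ceiling can be put in numbers. A hedged back-of-envelope sketch (the gyromagnetic ratio is the standard proton constant; the effective relative permittivity is an assumed illustrative parameter, not a value from the abstract):

```python
# Back-of-envelope: maximum resonator length ~ half a wavelength at the
# proton Larmor frequency. The wavelength shrinks by sqrt(eps_r) in the
# coil's dielectric environment (eps_r_effective is an assumed parameter).
C = 299_792_458.0          # speed of light, m/s
GAMMA_MHZ_PER_T = 42.576   # proton gyromagnetic ratio / (2*pi), MHz/T

def half_wavelength_m(b0_tesla, eps_r_effective):
    f_hz = GAMMA_MHZ_PER_T * b0_tesla * 1e6   # Larmor frequency in Hz
    return C / (f_hz * eps_r_effective ** 0.5) / 2

print(round(half_wavelength_m(7.0, 1.0), 3))  # 0.503 (m, free space at 7 T)
```

    At 7 T the free-space half-wavelength is already only about 50 cm, and any dielectric loading shortens it further, which is why large field-of-view designs need tricks beyond a plain transmission line segment.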

  13. Towards Development of Clustering Applications for Large-Scale Comparative Genotyping and Kinship Analysis Using Y-Short Tandem Repeats.

    Science.gov (United States)

    Seman, Ali; Sapawi, Azizian Mohd; Salleh, Mohd Zaki

    2015-06-01

    Y-chromosome short tandem repeats (Y-STRs) are genetic markers with practical applications in human identification. However, where mass identification is required (e.g., in the aftermath of disasters with significant fatalities), the efficiency of the process could be improved with new statistical approaches. Clustering applications are relatively new tools for large-scale comparative genotyping, and the k-Approximate Modal Haplotype (k-AMH), an efficient algorithm for clustering large-scale Y-STR data, represents a promising method for developing these tools. In this study we improved the k-AMH and produced three new algorithms: the Nk-AMH I (including a new initial cluster center selection), the Nk-AMH II (including a new dominant weighting value), and the Nk-AMH III (combining I and II). The Nk-AMH III was the superior algorithm, with mean clustering accuracy that increased in four out of six datasets and remained at 100% in the other two. Additionally, the Nk-AMH III achieved a 2% higher overall mean clustering accuracy score than the k-AMH, as well as optimal accuracy for all datasets (0.84-1.00). With inclusion of the two new methods, the Nk-AMH III produced an optimal solution for clustering Y-STR data; thus, the algorithm has potential for further development towards fully automatic clustering of any large-scale genotypic data.
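
    A common way to compute "clustering accuracy" scores like those quoted above is to take the cluster-to-class assignment that maximizes agreement with known labels. A hedged Python sketch (the labels are invented, and the k-AMH literature may define its accuracy score differently):

```python
from itertools import permutations

# Clustering accuracy via the best one-to-one mapping of cluster IDs to known
# classes. Brute-force over permutations; assumes no more clusters than
# classes, so it is only suitable for small illustrative problems.
def clustering_accuracy(predicted, truth):
    clusters = sorted(set(predicted))
    classes = sorted(set(truth))
    best = 0
    for perm in permutations(classes, len(clusters)):
        mapping = dict(zip(clusters, perm))
        hits = sum(mapping[c] == t for c, t in zip(predicted, truth))
        best = max(best, hits)
    return best / len(truth)

pred = [0, 0, 1, 1, 1, 2]             # hypothetical cluster assignments
true = ["A", "A", "B", "B", "A", "C"]  # hypothetical known haplogroups
print(clustering_accuracy(pred, true))  # 5 of 6 matched, ~0.833
```

    For realistically many clusters one would replace the permutation search with the Hungarian algorithm, but the definition of the score is the same.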

  14. Beyond left and right: Automaticity and flexibility of number-space associations.

    Science.gov (United States)

    Antoine, Sophie; Gevers, Wim

    2016-02-01

    Close links exist between the processing of numbers and the processing of space: relatively small numbers are preferentially associated with a left-sided response while relatively large numbers are associated with a right-sided response (the SNARC effect). Previous work demonstrated that the SNARC effect is triggered in an automatic manner and is highly flexible. Besides the left-right dimension, numbers associate with other spatial response mappings such as close/far responses, where small numbers are associated with a close response and large numbers with a far response. In two experiments we investigate the nature of this association. Associations between magnitude and close/far responses were observed using a magnitude-irrelevant task (Experiment 1: automaticity) and using a variable referent task (Experiment 2: flexibility). While drawing a strong parallel between both response mappings, the present results are also informative with regard to the question about what type of processing mechanism underlies both the SNARC effect and the association between numerical magnitude and close/far response locations.

  15. Mapping Ad Hoc Communications Network of a Large Number Fixed-Wing UAV Swarm

    Science.gov (United States)

    2017-03-01

    shows like "Agents of S.H.I.E.L.D". Inspiration can come from the imaginative minds of people or from the world around us. Swarms have demonstrated a...high degree of success. Bees, ants, termites, and naked mole rats maintain large groups that distribute tasks among individuals in order to achieve...the application layer and not the transport layer. Real-world vehicle-to-vehicle packet delivery rates for the 50-UAV swarm event were described in

  16. Fast IMRT by increasing the beam number and reducing the number of segments

    International Nuclear Information System (INIS)

    Bratengeier, Klaus; Gainey, Mark B; Flentje, Michael

    2011-01-01

    The purpose of this work is to develop a fast, deliverable step-and-shoot IMRT technique. A reduction in the number of segments should theoretically be possible, whilst simultaneously maintaining plan quality, provided that the reduction is accompanied by an increased number of gantry angles. A benefit of this method is that the segment shaping could be performed during gantry motion, thereby reducing the delivery time. The aim was to find classes of such solutions whose plan quality can compete with conventional IMRT. A planning study was performed. Step and shoot IMRT plans were created using direct machine parameter optimization (DMPO) as a reference. DMPO plans were compared to an IMRT variant having only one segment per angle ('2-Step Fast'). 2-Step Fast is based on a geometrical analysis of the topology of the planning target volume (PTV) and the organs at risk (OAR). A prostate/rectum case, spine metastasis/spinal cord, breast/lung and an artificial PTV/OAR combination of the ESTRO-Quasimodo phantom were used for the study. The composite objective value (COV), a quality score, and plan delivery time were compared. The delivery time for the DMPO reference plan and the 2-Step Fast IMRT technique was measured and calculated for two different linacs, a twelve-year-old Siemens Primus™ ('old' linac) and two Elekta Synergy™ 'S' linacs ('new' linacs). 2-Step Fast had comparable or better quality than the reference DMPO plan. The number of segments was smaller than for the reference plan, and the number of gantry angles was between 23 and 34. For the modern linac the delivery time was always smaller than that for the reference plan. The calculated (measured) values showed a mean delivery time reduction of 21% (21%) for the new linac, and of 7% (3%) for the old linac compared to the respective DMPO reference plans. For the old linac, the data handling time per beam was the limiting factor for the treatment time reduction. 2-Step

  17. Fast IMRT by increasing the beam number and reducing the number of segments

    Directory of Open Access Journals (Sweden)

    Bratengeier Klaus

    2011-12-01

Purpose: The purpose of this work is to develop a fast deliverable step-and-shoot IMRT technique. A reduction in the number of segments should theoretically be possible, whilst simultaneously maintaining plan quality, provided that the reduction is accompanied by an increased number of gantry angles. A benefit of this method is that the segment shaping could be performed during gantry motion, thereby reducing the delivery time. The aim was to find classes of such solutions whose plan quality can compete with conventional IMRT. Materials/Methods: A planning study was performed. Step-and-shoot IMRT plans were created using direct machine parameter optimization (DMPO) as a reference. DMPO plans were compared to an IMRT variant having only one segment per angle ("2-Step Fast"). 2-Step Fast is based on a geometrical analysis of the topology of the planning target volume (PTV) and the organs at risk (OAR). A prostate/rectum case, spine metastasis/spinal cord, breast/lung and an artificial PTV/OAR combination of the ESTRO-Quasimodo phantom were used for the study. The composite objective value (COV), a quality score, and plan delivery time were compared. The delivery time for the DMPO reference plan and the 2-Step Fast IMRT technique was measured and calculated for two different linacs, a twelve-year-old Siemens Primus™ ("old" linac) and two Elekta Synergy™ "S" linacs ("new" linacs). Results: 2-Step Fast had comparable or better quality than the reference DMPO plan. The number of segments was smaller than for the reference plan; the number of gantry angles was between 23 and 34. For the modern linac the delivery time was always smaller than that for the reference plan. The calculated (measured) values showed a mean delivery time reduction of 21% (21%) for the new linac, and of 7% (3%) for the old linac compared to the respective DMPO reference plans. For the old linac, the data handling time per beam was the limiting factor for the treatment time

  18. Energy transfers in dynamos with small magnetic Prandtl numbers

    KAUST Repository

    Kumar, Rohit

    2015-06-25

We perform a numerical simulation of a dynamo with magnetic Prandtl number Pm = 0.2 on a 1024³ grid, and compute the energy fluxes and the shell-to-shell energy transfers. These computations indicate that the magnetic energy growth takes place mainly due to energy transfers from the large-scale velocity field to the large-scale magnetic field, and that the magnetic energy flux is forward. The steady-state magnetic energy is much smaller than the kinetic energy, rather than in equipartition; this is because the magnetic Reynolds number is near the dynamo transition regime. We also contrast our results with those for a dynamo with Pm = 20 and a decaying dynamo. © 2015 Taylor & Francis.
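Shell-to-shell transfer diagnostics like those above start from binning Fourier modes of the fields into wavenumber shells. A minimal numpy sketch of a shell energy spectrum (the 32³ random field and shell count are placeholders, not the paper's 1024³ dynamo data):

```python
import numpy as np

def shell_spectrum(u, nbins):
    """Bin the energy of a periodic field into spherical wavenumber
    shells |k| -- the starting point for shell-to-shell transfer studies."""
    n = u.shape[0]
    uk = np.fft.fftn(u) / u.size            # normalized Fourier coefficients
    e_modes = 0.5 * np.abs(uk) ** 2         # energy per Fourier mode
    k = np.fft.fftfreq(n) * n               # integer wavenumbers per axis
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    shells = np.minimum(kmag.astype(int), nbins - 1)
    E = np.zeros(nbins)
    np.add.at(E, shells.ravel(), e_modes.ravel())
    return E

rng = np.random.default_rng(0)
u = rng.standard_normal((32, 32, 32))       # placeholder field
E = shell_spectrum(u, nbins=16)
```

By Parseval's theorem the shell energies sum to the total energy 0.5·⟨u²⟩, a useful sanity check on the binning.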

  19. The future of large old trees in urban landscapes.

    Science.gov (United States)

    Le Roux, Darren S; Ikin, Karen; Lindenmayer, David B; Manning, Adrian D; Gibbons, Philip

    2014-01-01

Large old trees are disproportionate providers of structural elements (e.g. hollows, coarse woody debris), which are crucial habitat resources for many species. The decline of large old trees in modified landscapes is of global conservation concern. Once large old trees are removed, they are difficult to replace in the short term due to typically prolonged time periods needed for trees to mature (i.e. centuries). Few studies have investigated the decline of large old trees in urban landscapes. Using a simulation model, we predicted the future availability of native hollow-bearing trees (a surrogate for large old trees) in an expanding city in southeastern Australia. In urban greenspace, we predicted that the number of hollow-bearing trees is likely to decline by 87% over 300 years under existing management practices. Under a worst case scenario, hollow-bearing trees may be completely lost within 115 years. Conversely, we predicted that the number of hollow-bearing trees will likely remain stable in semi-natural nature reserves. Sensitivity analysis revealed that the number of hollow-bearing trees perpetuated in urban greenspace over the long term is most sensitive to: (1) the maximum standing life of trees; (2) the number of regenerating seedlings ha⁻¹; and (3) the rate of hollow formation. We tested the efficacy of alternative urban management strategies and found that the only way to arrest the decline of large old trees requires a collective management strategy that ensures: (1) trees remain standing for at least 40% longer than currently tolerated lifespans; (2) the number of seedlings established is increased by at least 60%; and (3) the formation of habitat structures provided by large old trees is accelerated by at least 30% (e.g. artificial structures) to compensate for short term deficits in habitat resources. Immediate implementation of these recommendations is needed to avert long term risk to urban biodiversity.
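The abstract does not specify the simulation model; the sketch below is a toy stock-and-flow model with entirely hypothetical rates, meant only to illustrate how standing life, recruitment, and hollow-formation rate jointly set the long-term number of hollow-bearing trees:

```python
def simulate_hollow_trees(years, hollow0, mature0, mortality=0.02,
                          recruitment=10.0, hollow_rate=0.003):
    """Toy annual stock-and-flow model (all rates hypothetical):
    mature trees die or form hollows; hollow-bearing trees die."""
    hollow, mature = float(hollow0), float(mature0)
    history = [hollow]
    for _ in range(years):
        new_hollow = mature * hollow_rate
        mature += recruitment - mature * (mortality + hollow_rate)
        hollow += new_hollow - hollow * mortality
        history.append(hollow)
    return history

traj = simulate_hollow_trees(300, hollow0=100, mature0=500)
```

With these placeholder rates the hollow-bearing stock declines toward an equilibrium set by recruitment/(mortality + hollow_rate) × hollow_rate/mortality, echoing the abstract's point that lifespan, regeneration and hollow formation jointly control the outcome.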

  20. How math anxiety relates to number-space associations

    Directory of Open Access Journals (Sweden)

    Carrie Georges

    2016-09-01

Given the considerable prevalence of math anxiety, it is important to identify the factors contributing to it in order to improve mathematical learning. Research on math anxiety typically focusses on the effects of more complex arithmetic skills. Recent evidence, however, suggests that deficits in basic numerical processing and spatial skills also constitute potential risk factors of math anxiety. Given these observations, we determined whether math anxiety also depends on the quality of spatial-numerical associations. Behavioural evidence for a tight link between numerical and spatial representations is given by the SNARC (spatial-numerical association of response codes) effect, characterized by faster left-/right-sided responses for small/large digits respectively in binary classification tasks. We compared the strength of the SNARC effect between high and low math anxious individuals using the classical parity judgment task in addition to evaluating their spatial skills, arithmetic performance, working memory and inhibitory control. Greater math anxiety was significantly associated with stronger spatio-numerical interactions. This finding adds to the recent evidence supporting a link between math anxiety and basic numerical abilities and strengthens the idea that certain characteristics of low-level number processing such as stronger number-space associations constitute a potential risk factor of math anxiety.

  1. How Math Anxiety Relates to Number-Space Associations.

    Science.gov (United States)

    Georges, Carrie; Hoffmann, Danielle; Schiltz, Christine

    2016-01-01

    Given the considerable prevalence of math anxiety, it is important to identify the factors contributing to it in order to improve mathematical learning. Research on math anxiety typically focusses on the effects of more complex arithmetic skills. Recent evidence, however, suggests that deficits in basic numerical processing and spatial skills also constitute potential risk factors of math anxiety. Given these observations, we determined whether math anxiety also depends on the quality of spatial-numerical associations. Behavioral evidence for a tight link between numerical and spatial representations is given by the SNARC (spatial-numerical association of response codes) effect, characterized by faster left-/right-sided responses for small/large digits respectively in binary classification tasks. We compared the strength of the SNARC effect between high and low math anxious individuals using the classical parity judgment task in addition to evaluating their spatial skills, arithmetic performance, working memory and inhibitory control. Greater math anxiety was significantly associated with stronger spatio-numerical interactions. This finding adds to the recent evidence supporting a link between math anxiety and basic numerical abilities and strengthens the idea that certain characteristics of low-level number processing such as stronger number-space associations constitute a potential risk factor of math anxiety.
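The SNARC effect described above is conventionally quantified by regressing dRT (right-hand minus left-hand response time) on digit magnitude, with a negative slope indicating the left-to-right association. A sketch with fabricated response times:

```python
import numpy as np

def snarc_slope(digits, rt_left, rt_right):
    """Regress dRT = RT(right) - RT(left) on digit magnitude; a negative
    slope is the classic SNARC signature (larger digits -> faster right)."""
    drt = np.asarray(rt_right, float) - np.asarray(rt_left, float)
    slope, _intercept = np.polyfit(digits, drt, 1)
    return slope

digits   = [1, 2, 3, 4, 6, 7, 8, 9]                  # typical parity-task stimuli
rt_left  = [480, 485, 490, 495, 505, 510, 515, 520]  # fabricated ms values
rt_right = [520, 515, 510, 505, 495, 490, 485, 480]
slope = snarc_slope(digits, rt_left, rt_right)       # negative -> SNARC effect
```

Comparing such slopes between high and low math-anxious groups is one way to operationalize "stronger number-space associations".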

  2. Long-period variables in the Large Magellanic Cloud. II. Infrared photometry, spectral classification, AGB evolution, and spatial distribution

    International Nuclear Information System (INIS)

    Hughes, S.M.G.; Wood, P.R.

    1990-01-01

Infrared JHK photometry and visual spectra have been obtained for a large sample of long-period variables (LPVs) in the Large Magellanic Cloud (LMC). Various aspects of the asymptotic giant branch (AGB) evolution of LPVs are discussed using these data. The birth/death rate of LPVs of different ages in the LMC is compared with the birth rates of appropriate samples of planetary nebulas, clump stars, Cepheids, and OH/IR stars. It appears that there are far fewer large-amplitude LPVs per unit galactic stellar mass in the LMC than in the Galaxy. It is suggested that this may be because the evolved intermediate-age AGB stars in the LMC often turn into carbon stars, which tend to have smaller pulsation amplitudes than M stars. There is also a major discrepancy between the number of LPVs in the LMC (and in the Galaxy) and the number predicted by the theories of AGB evolution, pulsation, and mass loss. A distance modulus to the LMC of 18.66 ± 0.05 is derived by comparing the LMC LPVs with P ≈ 200 days with the 47 Tucanae Mira variables in the (K, log P) plane. 64 refs
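The quoted distance modulus converts to a linear distance via μ = 5 log₁₀(d/10 pc); a quick check:

```python
def distance_from_modulus(mu):
    """Invert the distance modulus relation mu = 5 * log10(d / 10 pc)."""
    return 10.0 ** ((mu + 5.0) / 5.0)

d_pc = distance_from_modulus(18.66)   # ~5.4e4 pc, i.e. roughly 54 kpc to the LMC
```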

  3. Number conserving approach in quasiparticle representation

    International Nuclear Information System (INIS)

    Oudih, M.R.; Fellah, M.; Allal, N.H.

    2003-01-01

An exact number conserving approach is formulated in the quasiparticle representation to show the effect of particle-number projection on the ground state and the first 0⁺ excited state. It is applied to the two-level pairing model, which allows an exact solution and a comparison with other approaches. The present method has proved to be an advantageous alternative to the BCS approach and to the usual methods used to restore the particle-number symmetry. (author)

  4. Improving the Comparability and Local Usefulness of International Assessments: A Look Back and a Way Forward

    Science.gov (United States)

    Rutkowski, Leslie; Rutkowski, David

    2018-01-01

    Over time international large-scale assessments have grown in terms of number of studies, cycles, and participating countries, many of which are a heterogeneous mix of economies, languages, cultures, and geography. This heterogeneity has meaningful consequences for comparably measuring both achievement and non-achievement constructs, such as…

  5. Rheosensing by impulsive cells at intermediate Reynolds numbers

    Science.gov (United States)

    Mathijssen, Arnold; Bhamla, Saad; Prakash, Manu

    2017-11-01

For aquatic organisms, mechanical signals are often carried by the surrounding liquid, through viscous and inertial forces. Here we consider a unicellular yet millimetric ciliate, Spirostomum ambiguum, as a model organism to study hydrodynamic sensing. This protist typically swims at moderate Reynolds numbers, reaching Re ~ 100 during impulsive contractions in which its elongated body recoils within milliseconds. First, using high-speed PIV and an electrophysiology setup, we deliver controlled voltage pulses to induce these rapid contractions and visualise the vortex flows generated thereby. By comparing these measurements with CFD simulations, the range of these hydrodynamic "signals" is characterized. Second, we probe the mechano-sensing of the organism with externally applied flows and find a critical shear rate necessary to trigger a contraction. The combination of high-Re flow generation and rheosensing could facilitate intercellular communication over large distances. Please also see our other talk "Collective hydrodynamic communication through ultra-fast contractions".
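The Reynolds numbers involved follow from Re = UL/ν; a back-of-envelope sketch (the swimming and contraction speeds are assumptions for illustration, not measured values from the talk):

```python
def reynolds(speed, length, nu=1.0e-6):
    """Re = U*L/nu, with nu the kinematic viscosity of water (~1e-6 m^2/s)."""
    return speed * length / nu

body = 1.0e-3                       # ~1 mm body length, per the abstract
re_swim = reynolds(1.0e-3, body)    # assumed ~1 mm/s cruising speed
re_kick = reynolds(0.1, body)       # assumed ~0.1 m/s contraction recoil
```

With these assumed speeds the organism sits at Re ~ 1 while swimming but reaches Re ~ 100 during the millisecond-scale contraction, the inertial regime quoted in the abstract.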

  6. Act on Numbers: Numerical Magnitude Influences Selection and Kinematics of Finger Movement

    Directory of Open Access Journals (Sweden)

    Rosa Rugani

    2017-08-01

In the past decade hand kinematics has been reliably adopted for investigating cognitive processes and disentangling debated topics. One of the most controversial issues in the numerical cognition literature regards the origin – cultural vs. genetically driven – of the mental number line (MNL), oriented from left (small numbers) to right (large numbers). To date, the majority of studies have investigated this effect by means of response times, whereas studies considering more culturally unbiased measures such as kinematic parameters are rare. Here, we present a new paradigm that combines a "free response" task with the kinematic analysis of movement. Participants were seated in front of two little soccer goals placed on a table, one on the left and one on the right side. They were presented with left- or right-directed arrows and were instructed to kick a small ball with their right index finger toward the goal indicated by the arrow. In a few test trials participants were also presented with a small (2) or a large (8) number, and they were allowed to choose the kicking direction. Participants performed more left responses with the small number and more right responses with the large number. The whole kicking movement was segmented into two temporal phases in order to make a fine-grained analysis of hand kinematics. The Kick Preparation and Kick Finalization phases were selected on the basis of peak trajectory deviation from the virtual midline between the two goals. Results show an effect of both small and large numbers on action execution timing. Participants were faster to finalize the action when responding to small numbers toward the left and to large numbers toward the right. Here, we provide the first experimental demonstration which highlights how numerical processing affects action execution in a new and not-overlearned context. The employment of this innovative and unbiased paradigm will permit disentangling the role of nature and culture

  7. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

Hydrogen production by water electrolysis represents nearly 4% of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so the installation of large scale hydrogen production plants will be needed. In this context, the development of low cost large scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and center of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared. Then, a survey of the electrolysis modules currently available was made. A review of the large scale electrolysis plants that have been installed in the world was also carried out, and the main projects related to large scale electrolysis were listed. The economics of large scale electrolysers are discussed, and the influence of energy prices on the hydrogen production cost by large scale electrolysis was evaluated. (authors)
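As a rough illustration of how the electricity price dominates electrolytic hydrogen cost, the sketch below uses an assumed specific energy consumption and a flat amortized capital charge; neither figure comes from the ALPHEA study:

```python
def h2_cost_per_kg(elec_price_kwh, kwh_per_kg=53.0, capex_per_kg=1.0):
    """Rough electrolytic hydrogen cost: energy term plus a flat
    amortized capital charge. Both defaults are assumptions
    (~53 kWh/kg is a typical alkaline-electrolyser figure)."""
    return elec_price_kwh * kwh_per_kg + capex_per_kg

# Cost per kg of H2 at three illustrative electricity prices (EUR/kWh)
costs = {p: h2_cost_per_kg(p) for p in (0.03, 0.06, 0.10)}
```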

  8. Polynomial selection in number field sieve for integer factorization

    Directory of Open Access Journals (Sweden)

    Gireesh Pandey

    2016-09-01

The general number field sieve (GNFS) is the fastest algorithm for factoring large composite integers that are the product of two primes. Polynomial selection is an important step of GNFS, and the asymptotic runtime depends on the choice of good polynomial pairs. In this paper, we present a polynomial selection algorithm modelled on size and root properties. The correlations between polynomial coefficients and the number of relations have been explored through experimental findings.
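The paper's algorithm is not reproduced in the abstract; the classical base-m construction, the usual starting point for GNFS polynomial selection, can be sketched as follows:

```python
def base_m_polynomial(n, d):
    """Classical base-m polynomial selection for GNFS: choose
    m ~ n^(1/(d+1)) and expand n in base m, giving a degree-d
    polynomial f with f(m) = n, i.e. f(m) = 0 (mod n)."""
    m = round(n ** (1.0 / (d + 1)))
    coeffs, t = [], n
    for _ in range(d):
        coeffs.append(t % m)     # coeffs[i] multiplies x**i
        t //= m
    coeffs.append(t)             # leading coefficient, roughly n / m^d
    return coeffs, m

n = 2**64 + 1                    # toy composite, not a real RSA modulus
coeffs, m = base_m_polynomial(n, d=4)
```

Real selection then ranks many such candidate pairs by size properties (how small the coefficients are) and root properties (how many roots f has modulo small primes), the two criteria the paper models.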

  9. A robust variant of block Jacobi-Davidson for extracting a large number of eigenpairs: Application to grid-based real-space density functional theory

    Science.gov (United States)

    Lee, M.; Leiter, K.; Eisner, C.; Breuer, A.; Wang, X.

    2017-09-01

In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort against most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev filter subspace (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× reduced matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold) where the wanted spectrum is quite narrow compared to the full spectrum, we observe a 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.
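The authors' block J-D variant is considerably more involved; as a minimal illustration of the blocked-matvec-plus-Rayleigh-Ritz pattern it exploits, here is plain blocked subspace iteration (deliberately simpler than Jacobi-Davidson, and not the paper's code):

```python
import numpy as np

def block_eigs_largest(A, k, iters=1500, seed=0):
    """Blocked subspace iteration with a Rayleigh-Ritz step: each sweep
    is one blocked matvec A @ V followed by re-orthonormalization.
    A simple stand-in for block Jacobi-Davidson, finding the k largest
    eigenpairs of a symmetric positive-definite A."""
    rng = np.random.default_rng(seed)
    V, _ = np.linalg.qr(rng.standard_normal((A.shape[0], k)))
    for _ in range(iters):
        V, _ = np.linalg.qr(A @ V)       # one blocked matvec per sweep
    H = V.T @ A @ V                      # Rayleigh-Ritz projection
    w, S = np.linalg.eigh(H)
    return w, V @ S                      # Ritz values (ascending) and vectors

# Symmetric positive-definite test matrix with a known spectrum.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((60, 60)))
vals = np.linspace(1.0, 100.0, 60)
A = (Q * vals) @ Q.T                     # A = Q diag(vals) Q^T
w, X = block_eigs_largest(A, k=5)
```

Working on a block of k vectors at once turns k separate matvecs into one matrix-matrix product, the same cache-friendly win the paper gets from clustered operations.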

  10. Effective field theories in the large-N limit

    International Nuclear Information System (INIS)

    Weinberg, S.

    1997-01-01

Various effective field theories in four dimensions are shown to have exact nontrivial solutions in the limit as the number N of fields of some type becomes large. These include extended versions of the U(N) Gross-Neveu model, the nonlinear O(N) σ model, and the CP^(N-1) model. Although these models are not renormalizable in the usual sense, the infinite number of coupling types allows a complete cancellation of infinities. These models provide qualitative predictions of the form of scattering amplitudes for arbitrary momenta, but because of the infinite number of free parameters, it is possible to derive quantitative predictions only in the limit of small momenta. For small momenta the large-N limit provides only a modest simplification, removing at most a finite number of diagrams to each order in momenta, except near phase transitions, where it reduces the infinite number of diagrams that contribute for low momenta to a finite number. © 1997 The American Physical Society

  11. How small firms contrast with large firms regarding perceptions, practices, and needs in the U.S

    Science.gov (United States)

    Urs Buehlmann; Matthew Bumgardner; Michael. Sperber

    2013-01-01

    As many larger secondary woodworking firms have moved production offshore and been adversely impacted by the recent housing downturn, smaller firms have become important to driving U.S. hardwood demand. This study compared and contrasted small and large firms on a number of factors to help determine the unique characteristics of small firms and to provide insights into...

  12. Large N lattice QCD and its extended strong-weak connection to the hypersphere

    International Nuclear Information System (INIS)

    Christensen, Alexander S.; Myers, Joyce C.; Pedersen, Peter D.

    2014-01-01

We calculate an effective Polyakov line action of QCD at large N_c and large N_f from a combined lattice strong coupling and hopping expansion working to second order in both, where the order is defined by the number of windings in the Polyakov line. We compare with the action, truncated at the same order, of continuum QCD on S^1 × S^d at weak coupling from one loop perturbation theory, and find that a large N_c correspondence of equations of motion found in http://dx.doi.org/10.1007/JHEP10(2012)067 at leading order can be extended to the next order. Throughout the paper, we review the background necessary for computing higher order corrections to the lattice effective action, in order to make higher order comparisons more straightforward.

  13. Distribution of squares modulo a composite number

    OpenAIRE

    Aryan, Farzad

    2015-01-01

    In this paper we study the distribution of squares modulo a square-free number $q$. We also look at inverse questions for the large sieve in the distribution aspect and we make improvements on existing results on the distribution of $s$-tuples of reduced residues.
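One elementary fact behind such distribution questions: by the Chinese Remainder Theorem, the number of squares modulo a square-free q is multiplicative over its prime factors, which is easy to check directly:

```python
def squares_mod(q):
    """The set of squares modulo q."""
    return {x * x % q for x in range(q)}

# CRT: a residue is a square mod 15 iff it is a square mod 3 and mod 5,
# so the counts multiply: |squares mod 15| = |mod 3| * |mod 5| = 2 * 3.
s15 = sorted(squares_mod(15))   # -> [0, 1, 4, 6, 9, 10]
```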

  14. Comparative analysis on arthroscopic sutures of large and extensive rotator cuff injuries in relation to the degree of osteopenia

    Directory of Open Access Journals (Sweden)

    Alexandre Almeida

    2015-02-01

OBJECTIVE: To analyze the results from arthroscopic suturing of large and extensive rotator cuff injuries, according to the patient's degree of osteopenia. METHOD: 138 patients who underwent arthroscopic suturing of large and extensive rotator cuff injuries between 2003 and 2011 were analyzed. Those operated on from October 2008 onwards formed a prospective cohort, while the remainder formed a retrospective cohort. Also from October 2008 onwards, bone densitometry evaluation was requested at the time of the surgical treatment. For the patients operated on before this date, densitometry examinations performed up to two years before or after the surgical treatment were investigated. The patients were divided into three groups. Those with osteoporosis formed group 1 (n = 16); those with osteopenia, group 2 (n = 33); and normal individuals, group 3 (n = 55). RESULTS: In analyzing the University of California at Los Angeles (UCLA) scores of group 3 and comparing them with group 2, no statistically significant difference was seen (p = 0.070). Analysis of group 3 in comparison with group 1 showed a statistically significant difference (p = 0.027). CONCLUSION: The results from arthroscopic suturing of large and extensive rotator cuff injuries seem to be influenced by the patient's bone mineral density, as assessed using bone densitometry.

  15. Rayleigh- and Prandtl-number dependence of the large-scale flow-structure in weakly-rotating turbulent thermal convection

    Science.gov (United States)

    Weiss, Stephan; Wei, Ping; Ahlers, Guenter

    2015-11-01

Turbulent thermal convection under rotation shows a remarkable variety of different flow states. The Nusselt number (Nu) at slow rotation rates (expressed as the dimensionless inverse Rossby number 1/Ro), for example, is not a monotonic function of 1/Ro. Different 1/Ro-ranges can be observed with different slopes ∂Nu/∂(1/Ro). Some of these ranges are connected by sharp transitions where ∂Nu/∂(1/Ro) changes discontinuously. We investigate different regimes in cylindrical samples of aspect ratio Γ = 1 by measuring temperatures at the sidewall of the sample for various Prandtl numbers in the range 3 …. Supported by the Deutsche Forschungsgemeinschaft.

  16. Analytical and Experimental Investigation of Inlet-engine Matching for Turbojet-powered Aircraft at Mach Numbers up to 2.0

    Science.gov (United States)

    Esenwein, Fred T; Schueller, Carl F

    1952-01-01

An analysis of inlet-turbojet-engine matching for a range of Mach numbers up to 2.0 indicates large performance penalties when fixed-geometry inlets are used. Use of variable-geometry inlets, however, nearly eliminates them. The analysis was confirmed experimentally by investigating, at Mach numbers of 0, 0.63, and 1.5 to 2.0, two single oblique-shock-type inlets of different compression-ramp angles, which simulated a variable-geometry configuration. The experimental investigation indicated that total-pressure recoveries comparable with those attainable with well-designed nose inlets were obtained with the side inlets when all the boundary layer ahead of the inlets was removed. Serious drag penalties resulted at a Mach number of 2.0 from the use of blunt-cowl leading edges. However, sharp-lip inlets produced large losses in thrust for the take-off condition. These thrust penalties, which are associated with the low-speed operation of the sharp-lip inlet designs, can probably be avoided without impairing the supersonic performance of the inlet by the use of auxiliary inlets or blow-in doors.

  17. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction. Whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly in the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities, which lead to the formation of large eddies, is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well-suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively-parallel computational resources are likely to be necessary in order to be able to adequately address the complex coupled phenomena to the level of detail that is necessary.
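The source Froude number mentioned above is commonly written Q* = Q̇/(ρ∞·cp·T∞·√(gD)·D²); a sketch showing how Q* falls as the pool diameter grows (the heat release rate and fluid properties are illustrative, not values from the review):

```python
import math

def fire_froude(Q_kw, D, rho=1.2, cp=1.0, T=293.0, g=9.81):
    """Source Froude (dimensionless heat-release) number
    Q* = Q / (rho * cp * T * sqrt(g*D) * D**2),
    with Q in kW, cp in kJ/(kg K), for ambient air."""
    return Q_kw / (rho * cp * T * math.sqrt(g * D) * D**2)

# Fixed heat release, growing pool diameter: Q* falls steeply into the
# low-Froude-number regime characteristic of large pool fires.
qstars = [fire_froude(2000.0, D) for D in (1.0, 5.0, 20.0)]
```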

  18. Number needed to treat and number needed to harm with paliperidone palmitate relative to long-acting haloperidol, bromperidol, and fluphenazine decanoate for treatment of patients with schizophrenia

    Directory of Open Access Journals (Sweden)

    Srihari Gopal

    2011-03-01

Background: We analyzed data retrieved through a PubMed search of randomized, placebo-controlled trials of first-generation antipsychotic long-acting injectables (haloperidol decanoate, bromperidol decanoate, and fluphenazine decanoate), and a company database of paliperidone palmitate, to compare the benefit-risk ratio in patients with schizophrenia. Methods: From the eight studies that met our selection criteria, two efficacy and six safety parameters were selected for calculation of the number needed to treat (NNT), the number needed to harm (NNH), and the likelihood of being helped or harmed (LHH) using comparisons of active drug relative to placebo. NNTs for prevention of relapse ranged from 2 to 5 for paliperidone palmitate, haloperidol decanoate, and fluphenazine decanoate, indicating a moderate to large effect size. Results: Among the selected maintenance studies, NNH varied considerably, but indicated a lower likelihood of encountering extrapyramidal side effects, such as akathisia, tremor, and tardive dyskinesia, with paliperidone palmitate versus placebo than with first-generation antipsychotic depot agents versus placebo. This was further supported by an overall higher NNH for paliperidone palmitate versus placebo with respect to anticholinergic use and Abnormal Involuntary Movement Scale positive score. LHH for preventing relapse versus use of anticholinergics was 15 for paliperidone palmitate and 3 for fluphenazine decanoate, favoring paliperidone palmitate. Conclusion: Overall, paliperidone palmitate had a similar NNT and a more favorable NNH compared with the first-generation long-acting injectables assessed. Keywords: long-acting injectables, first-generation antipsychotics
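The NNT/NNH/LHH arithmetic used in such benefit-risk comparisons reduces to simple ratios; a sketch with hypothetical event rates (not the paper's data):

```python
def nnt(event_rate_control, event_rate_treated):
    """Number needed to treat = 1 / absolute risk reduction.
    Here the 'event' is relapse, so treatment should lower its rate."""
    return 1.0 / (event_rate_control - event_rate_treated)

def lhh(nnh_value, nnt_value):
    """Likelihood of being helped or harmed = NNH / NNT; values > 1
    mean a patient is more likely to be helped than harmed."""
    return nnh_value / nnt_value

# Hypothetical rates: 50% relapse on placebo vs 25% on depot -> NNT = 4;
# with a hypothetical NNH of 60 for a given side effect, LHH = 15.
example_nnt = nnt(0.50, 0.25)
example_lhh = lhh(60.0, example_nnt)
```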

  19. Study of depression influencing factors with zero-inflated regression models in a large-scale population survey

    OpenAIRE

    Xu, Tao; Zhu, Guangjin; Han, Shaomei

    2017-01-01

Objectives: The number of depression symptoms can be treated as count data in order to obtain complete and accurate analysis findings in studies of depression. This study aims to compare the goodness of fit of four count outcome models on a large survey sample, to identify the optimum model for a risk factor study of the number of depression symptoms. Methods: 15 820 subjects, aged 10 to 80 years old, who were not suffering from serious chronic diseases and had not run a high fever in the past ...

  20. Placement of endosseous implant in infected alveolar socket with large fenestration defect: A comparative case report

    Directory of Open Access Journals (Sweden)

    Balaji Anitha

    2010-01-01

Placement of endosseous implants into infected bone is often deferred or avoided due to fear of failure. However, with the development of guided bone regeneration (GBR), some implantologists have reported successful implant placement in infected sockets, even those with fenestration defects. We had the opportunity to compare the osseointegration of an immediate implant placed in an infected site associated with a large buccal fenestration created by the removal of a root stump with that of a delayed implant placed 5 years after extraction. Both implants were placed in the same patient, in the same dental quadrant, by the same implantologist. GBR was used, with the fenestration defect being filled with demineralized bone graft and covered with a collagen membrane. Both implants were osseointegrated and functional when followed up after 12 months.

  1. Performance of Arch-Style Road Crossing Structures from Relative Movement Rates of Large Mammals

    Directory of Open Access Journals (Sweden)

    A. Z. Andis

    2017-10-01

In recent decades, an increasing number of highway construction and reconstruction projects have included mitigation measures aimed at reducing wildlife-vehicle collisions and maintaining habitat connectivity for wildlife. The most effective and robust measures include wildlife fences combined with wildlife underpasses and overpasses. The 39 wildlife crossing structures included along a 90 km stretch of US Highway 93 on the Flathead Indian Reservation in western Montana represent one of the most extensive of such projects. We measured movements of large mammal species at 15 elliptical arch-style wildlife underpasses and adjacent habitat between April and November 2015. We investigated if the movements of large mammals through the underpasses were similar to large mammal movements in the adjacent habitat. Across all structures, large mammals (all species combined) were more likely to move through the structures than pass at a random location in the surrounding habitat. At the species level, white-tailed deer (Odocoileus virginianus) and mule deer (O. hemionus) used the underpasses significantly more than could be expected based on their movement through the surrounding habitat. However, carnivorous species such as black bear (Ursus americanus) and coyote (Canis latrans) moved through the underpasses in similar numbers compared to the surrounding habitat.

  2. Commercial Internet Adoption in China: Comparing the Experience of Small, Medium and Large Businesses.

    Science.gov (United States)

    Riquelme, Hernan

    2002-01-01

    Describes a study of small, medium, and large enterprises in Shanghai, China that investigated which size companies benefit the most from the Internet. Highlights include leveling the ground for small and medium enterprises (SMEs); increased sales and cost savings for large companies; and competitive advantages. (LRW)

  3. Preterm Cord Blood Contains a Higher Proportion of Immature Hematopoietic Progenitors Compared to Term Samples.

    Science.gov (United States)

    Podestà, Marina; Bruschettini, Matteo; Cossu, Claudia; Sabatini, Federica; Dagnino, Monica; Romantsik, Olga; Spaggiari, Grazia Maria; Ramenghi, Luca Antonio; Frassoni, Francesco

    2015-01-01

    Cord blood contains a high number of hematopoietic cells that disappear after birth. In this paper we studied the functional properties of umbilical cord blood progenitor cells collected from term and preterm neonates to establish whether quantitative and/or qualitative differences exist between the two groups. Our results indicate that the percentage of total CD34+ cells was significantly higher in preterm infants compared to full term: 0.61% (range 0.15-4.8) vs 0.3% (0.032-2.23), p = 0.0001, and in neonates <32 weeks of gestational age (GA) compared to those ≥32 wks GA: 0.95% (range 0.18-4.8) and 0.36% (0.15-3.2) respectively, p = 0.0025. The majority of CD34+ cells co-expressed the CD71 antigen (p<0.05 preterm vs term) and grew large BFU-E in vitro, mostly in the second generation. The subpopulations CD34+CD38- and CD34+CD45- were more represented in preterm samples compared to term; conversely, the Side Population (SP) did not show any difference between the two groups. The absolute number of preterm colonies (CFCs/10microL) was higher compared to term (p = 0.004) and these progenitors were able to grow until the third generation, maintaining a higher proportion of CD34+ cells (p = 0.0017). The number of colonies also correlated inversely with gestational age (Pearson r = -0.3001, p<0.0168). We found no differences in the isolation and expansion capacity of Endothelial Colony Forming Cells (ECFCs) from cord blood of term and preterm neonates: both groups grew large numbers of endothelial cells in vitro until the third generation and showed a transitional phenotype between mesenchymal stem cells and endothelial progenitors (CD73, CD31, CD34 and CD144). The presence, in the cord blood of preterm babies, of a high number of immature hematopoietic progenitors and endothelial/mesenchymal stem cells with high proliferative potential makes this tissue an important source of cells for developing new cell therapies.

  4. Preterm Cord Blood Contains a Higher Proportion of Immature Hematopoietic Progenitors Compared to Term Samples.

    Directory of Open Access Journals (Sweden)

    Marina Podestà

    Full Text Available Cord blood contains a high number of hematopoietic cells that disappear after birth. In this paper we studied the functional properties of umbilical cord blood progenitor cells collected from term and preterm neonates to establish whether quantitative and/or qualitative differences exist between the two groups. Our results indicate that the percentage of total CD34+ cells was significantly higher in preterm infants compared to full term: 0.61% (range 0.15-4.8) vs 0.3% (0.032-2.23), p = 0.0001, and in neonates <32 weeks of gestational age (GA) compared to those ≥32 wks GA: 0.95% (range 0.18-4.8) and 0.36% (0.15-3.2) respectively, p = 0.0025. The majority of CD34+ cells co-expressed the CD71 antigen (p<0.05 preterm vs term) and grew large BFU-E in vitro, mostly in the second generation. The subpopulations CD34+CD38- and CD34+CD45- were more represented in preterm samples compared to term; conversely, the Side Population (SP) did not show any difference between the two groups. The absolute number of preterm colonies (CFCs/10microL) was higher compared to term (p = 0.004) and these progenitors were able to grow until the third generation, maintaining a higher proportion of CD34+ cells (p = 0.0017). The number of colonies also correlated inversely with gestational age (Pearson r = -0.3001, p<0.0168). We found no differences in the isolation and expansion capacity of Endothelial Colony Forming Cells (ECFCs) from cord blood of term and preterm neonates: both groups grew large numbers of endothelial cells in vitro until the third generation and showed a transitional phenotype between mesenchymal stem cells and endothelial progenitors (CD73, CD31, CD34 and CD144). The presence, in the cord blood of preterm babies, of a high number of immature hematopoietic progenitors and endothelial/mesenchymal stem cells with high proliferative potential makes this tissue an important source of cells for developing new cell therapies.

  5. Comparing rapid methods for detecting Listeria in seafood and environmental samples using the most probable number (MPN) technique.

    Science.gov (United States)

    Cruz, Cristina D; Win, Jessicah K; Chantarachoti, Jiraporn; Mutukumira, Anthony N; Fletcher, Graham C

    2012-02-15

    The standard Bacteriological Analytical Manual (BAM) protocol for detecting Listeria in food and on environmental surfaces takes about 96 h. Some studies indicate that rapid methods, which produce results within 48 h, may be as sensitive and accurate as the culture protocol. As they only give presence/absence results, it can be difficult to compare the accuracy of results generated. We used the Most Probable Number (MPN) technique to evaluate the performance and detection limits of six rapid kits for detecting Listeria in seafood and on an environmental surface compared with the standard protocol. Three seafood products and an environmental surface were inoculated with similar known cell concentrations of Listeria and analyzed according to the manufacturers' instructions. The MPN was estimated using the MPN-BAM spreadsheet. For the seafood products no differences were observed among the rapid kits and efficiency was similar to the BAM method. On the environmental surface the BAM protocol had a higher recovery rate (sensitivity) than any of the rapid kits tested. Clearview™, Reveal®, TECRA® and VIDAS® LDUO detected the cells but only at high concentrations (>10(2) CFU/10 cm(2)). Two kits (VIP™ and Petrifilm™) failed to detect 10(4) CFU/10 cm(2). The MPN method was a useful tool for comparing the results generated by these presence/absence test kits. There remains a need to develop a rapid and sensitive method for detecting Listeria in environmental samples that performs as well as the BAM protocol, since none of the rapid tests used in this study achieved a satisfactory result. Copyright © 2011 Elsevier B.V. All rights reserved.
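
The MPN estimate produced by tools like the MPN-BAM spreadsheet mentioned above is, at its core, a maximum-likelihood calculation over a serial-dilution tube scheme. A minimal sketch of that calculation (the tube scheme and counts below are a textbook illustration, not data from this study):

```python
import math

def mpn_estimate(volumes, tubes, positives, lo=1e-9, hi=1e6):
    """Maximum-likelihood MPN (organisms per unit sample) for a
    serial-dilution scheme, found by bisection on the score equation
    of the binomial likelihood."""
    def score(lam):
        s = 0.0
        for v, n, p in zip(volumes, tubes, positives):
            if p:
                s += p * v * math.exp(-lam * v) / (1.0 - math.exp(-lam * v))
            s -= (n - p) * v
        return s  # decreasing in lam; root is the MLE

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Classic 3-tube scheme at 0.1, 0.01 and 0.001 g per tube with a
# 3-1-0 positive pattern; standard MPN tables give a value near 43/g.
print(round(mpn_estimate([0.1, 0.01, 0.001], [3, 3, 3], [3, 1, 0])))
```

The bisection solves the same equation that published MPN tables tabulate, which is why the printed value reproduces the table entry for this dilution pattern.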

  6. Outcome of Large to Massive Rotator Cuff Tears Repaired With and Without Extracellular Matrix Augmentation: A Prospective Comparative Study.

    Science.gov (United States)

    Gilot, Gregory J; Alvarez-Pinzon, Andres M; Barcksdale, Leticia; Westerdahl, David; Krill, Michael; Peck, Evan

    2015-08-01

    To compare the results of arthroscopic repair of large to massive rotator cuff tears (RCTs) with or without augmentation using an extracellular matrix (ECM) graft and to present ECM graft augmentation as a valuable surgical alternative used for biomechanical reinforcement in any RCT repair. We performed a prospective, blinded, single-center, comparative study of patients who underwent arthroscopic repair of a large to massive RCT with or without augmentation with ECM graft. The primary outcome was assessed by the presence or absence of a retear of the previously repaired rotator cuff, as noted on ultrasound examination. The secondary outcomes were patient satisfaction evaluated preoperatively and postoperatively using the 12-item Short Form Health Survey, the American Shoulder and Elbow Surgeons shoulder outcome score, a visual analog scale score, the Western Ontario Rotator Cuff index, and a shoulder activity level survey. We enrolled 35 patients in the study: 20 in the ECM-augmented rotator cuff repair group and 15 in the control group. The follow-up period ranged from 22 to 26 months, with a mean of 24.9 months. There was a significant difference between the groups in terms of the incidence of retears: 26% (4 retears) in the control group and 10% (2 retears) in the ECM graft group (P = .0483). The mean pain level decreased from 6.9 to 4.1 in the control group and from 6.8 to 0.9 in the ECM graft group (P = .024). The American Shoulder and Elbow Surgeons score improved from 62.1 to 72.6 points in the control group and from 63.8 to 88.9 points (P = .02) in the treatment group. The mean Short Form 12 scores improved in the 2 groups, with a statistically significant difference favoring graft augmentation (P = .031), and correspondingly, the Western Ontario Rotator Cuff index scores improved in both arms, favoring the treatment group (P = .0412). The use of ECM for augmentation of arthroscopic repairs of large to massive RCTs reduces the incidence of retears.

  7. A Comparative Assessment of Epidemiologically Different Cutaneous Leishmaniasis Outbreaks in Madrid, Spain and Tolima, Colombia: An Estimation of the Reproduction Number via a Mathematical Model

    Directory of Open Access Journals (Sweden)

    Anuj Mubayi

    2018-04-01

    Full Text Available Leishmaniasis is a neglected tropical disease caused by the Leishmania parasite and transmitted by the Phlebotominae subfamily of sandflies, which infects humans and other mammals. Clinical manifestations of the disease include cutaneous leishmaniasis (CL), mucocutaneous leishmaniasis (MCL) and visceral leishmaniasis (VL), with a majority (more than three-quarters) of worldwide cases being CL. There are a number of risk factors for CL, such as the presence of multiple reservoirs, the movement of individuals, inequality, and social determinants of health. However, studies related to the role of these factors in the dynamics of CL have been limited. In this work, we (i) developed and analyzed a vector-borne epidemic model to study the dynamics of CL in two ecologically distinct CL-affected regions, Madrid, Spain and Tolima, Colombia; (ii) derived three different methods for the estimation of model parameters by reducing the dimension of the systems; (iii) estimated reproduction numbers for the 2010 outbreak in Madrid and the 2016 outbreak in Tolima; and (iv) compared the transmission potential of the two economically different regions and provided different epidemiological metrics that can be derived (and used) for evaluating an outbreak, once R0 is known and additional data are available. On average, Spain has reported only a few hundred CL cases annually, but in the course of the outbreak during 2009–2012, a much higher number of cases than expected was reported, and in the single city of Madrid. Cases in humans were accompanied by a sharp increase in infections among domestic dogs, the natural reservoir of CL. On the other hand, CL has reemerged in Colombia primarily during the last decade, because of the frequent movement of military personnel to domestic regions from forested areas, where they have increased exposure to vectors. In 2016, Tolima saw an unexpectedly high number of cases leading to two successive outbreaks. On comparing, we
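
The abstract does not give the model equations, but the reproduction number of a vector-borne model of this general kind is conventionally built from a Ross-Macdonald-style product of transmission and survival terms. As a sketch only (this is the generic formulation, not the authors' model, and every parameter value below is hypothetical):

```python
import math

def ross_macdonald_r0(m, a, b, c, g, r, tau):
    """Ross-Macdonald-style basic reproduction number for a
    vector-borne infection (illustrative, not the paper's model):
      m   vectors per host
      a   bites per vector per day
      b   vector-to-host infection probability per bite
      c   host-to-vector infection probability per bite
      g   vector per-capita mortality rate
      r   host recovery rate
      tau extrinsic incubation period (days)
    R0 = m a^2 b c exp(-g tau) / (r g)."""
    return (m * a**2 * b * c * math.exp(-g * tau)) / (r * g)

# Purely hypothetical sandfly-like parameter values:
r0 = ross_macdonald_r0(m=2.0, a=0.25, b=0.3, c=0.2, g=0.1, r=0.01, tau=7.0)
print(f"R0 = {r0:.2f}")  # values above 1 indicate outbreak potential
```

Once such an R0 is estimated from outbreak data, the same expression can be inverted to ask, for example, how much vector mortality g would have to rise to push R0 below 1.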

  8. A comparative analysis of biclustering algorithms for gene expression data

    Science.gov (United States)

    Eren, Kemal; Deveci, Mehmet; Küçüktunç, Onur; Çatalyürek, Ümit V.

    2013-01-01

    The need to analyze high-dimensional biological data is driving the development of new data mining methods. Biclustering algorithms have been successfully applied to gene expression data to discover local patterns, in which a subset of genes exhibits similar expression levels over a subset of conditions. However, it is not clear which algorithms are best suited for this task. Many algorithms have been published in the past decade, most of which have been compared only to a small number of algorithms. Surveys and comparisons exist in the literature, but because of the large number and variety of biclustering algorithms, they are quickly outdated. In this article we partially address this problem of evaluating the strengths and weaknesses of existing biclustering methods. We used the BiBench package to compare 12 algorithms, many of which were recently published or have not been extensively studied. The algorithms were tested on a suite of synthetic data sets to measure their performance on data with varying conditions, such as different bicluster models, varying noise, varying numbers of biclusters and overlapping biclusters. The algorithms were also tested on eight large gene expression data sets obtained from the Gene Expression Omnibus. Gene Ontology enrichment analysis was performed on the resulting biclusters, and the best enrichment terms are reported. Our analyses show that the biclustering method and its parameters should be selected based on the desired model, whether that model allows overlapping biclusters, and its robustness to noise. In addition, we observe that the biclustering algorithms capable of finding more than one model are more successful at capturing biologically relevant clusters. PMID:22772837
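
A concrete example of the kind of bicluster model such comparisons score against is the mean squared residue of Cheng and Church: a submatrix whose entries follow an additive row-plus-column pattern has residue zero. A minimal pure-Python sketch (the toy matrix is illustrative only):

```python
def mean_squared_residue(matrix, rows, cols):
    """Mean squared residue H(I, J) of the bicluster given by the row
    index set I (rows) and column index set J (cols):
    H = mean over (i, j) of (a_ij - a_iJ - a_Ij + a_IJ)^2,
    where a_iJ, a_Ij, a_IJ are the row, column and overall means
    of the submatrix."""
    sub = [[matrix[i][j] for j in cols] for i in rows]
    n_r, n_c = len(rows), len(cols)
    row_mean = [sum(r) / n_c for r in sub]
    col_mean = [sum(sub[i][j] for i in range(n_r)) / n_r for j in range(n_c)]
    all_mean = sum(row_mean) / n_r
    return sum(
        (sub[i][j] - row_mean[i] - col_mean[j] + all_mean) ** 2
        for i in range(n_r) for j in range(n_c)
    ) / (n_r * n_c)

# Rows 0-1 x cols 0-1 form a perfect additive bicluster (H = 0);
# including the unrelated third row/column raises the residue.
data = [[1.0, 2.0, 9.0],
        [3.0, 4.0, 0.0],
        [7.0, 1.0, 5.0]]
print(mean_squared_residue(data, [0, 1], [0, 1]))            # 0.0
print(mean_squared_residue(data, [0, 1, 2], [0, 1, 2]) > 0)  # True
```

Algorithms differ in which such model they assume (constant, additive, multiplicative) and in whether biclusters may overlap, which is exactly why the survey recommends matching the method to the desired model.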

  9. Large Eddy Simulation (LES) for IC Engine Flows

    Directory of Open Access Journals (Sweden)

    Kuo Tang-Wei

    2013-10-01

    Full Text Available Numerical computations are carried out using an engineering-level Large Eddy Simulation (LES) model provided by the commercial CFD code CONVERGE. The analytical framework and experimental setup consist of a single-cylinder engine with a Transparent Combustion Chamber (TCC) under motored conditions. A rigorous working procedure for comparing and analyzing the results from simulation and high-speed Particle Image Velocimetry (PIV) experiments is documented in this work. The following aspects of LES are analyzed using this procedure: the number of cycles required for convergence with adequate accuracy; the effect of mesh size, time step, sub-grid-scale (SGS) turbulence models and boundary condition treatments; and the application of the proper orthogonal decomposition (POD) technique.

  10. Visual stimulus parameters seriously compromise the measurement of approximate number system acuity and comparative effects between adults and children

    Directory of Open Access Journals (Sweden)

    Denes eSzucs

    2013-07-01

    Full Text Available It has been suggested that a simple non-symbolic magnitude comparison task is sufficient to measure the acuity of a putative Approximate Number System (ANS). A proposed measure of the ANS, the so-called 'internal Weber fraction' (w), would provide a clear measure of ANS acuity. However, ANS studies have never presented adequate evidence that the visual stimulus parameters did not compromise measurements of w to such an extent that w is actually driven by visual instead of numerical processes. We therefore investigated this question by testing non-symbolic magnitude discrimination in seven-year-old children and adults. We controlled for visual parameters in a more stringent manner than usual. As a consequence of these controls, in some trials visual cues correlated positively with number while in others they correlated negatively with number. This congruency effect strongly correlated with w, which means that congruency effects were probably driving effects in w. Consequently, in both adults and children congruency had a major impact on the fit of the model underlying the computation of w. Furthermore, children showed larger congruency effects than adults. This suggests that ANS tasks are seriously compromised by the visual stimulus parameters, which cannot be controlled. Hence, they are not pure measures of the ANS, and some putative w or ratio effect differences between children and adults in previous ANS studies may be due to the differential influence of the visual stimulus parameters in children and adults. In addition, because the resolution of congruency effects relies on inhibitory (interference suppression) function, some previous ANS findings were probably influenced by the developmental state of inhibitory processes, especially when comparing children with developmental dyscalculia and typically developing children.
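
The w parameter above is fit from a psychophysical model in which discrimination accuracy depends on the ratio of the two numerosities. As a sketch (this Gaussian formulation is the standard one in the ANS literature; the w value below is illustrative, not an estimate from this study), predicted accuracy for comparing n1 vs n2 is Phi(|n2 - n1| / (w * sqrt(n1^2 + n2^2))):

```python
import math

def predicted_accuracy(n1, n2, w):
    """Predicted proportion correct for discriminating numerosities
    n1 vs n2 under the standard Gaussian ANS model with Weber
    fraction w: Phi(|n2 - n1| / (w * sqrt(n1^2 + n2^2)))."""
    z = abs(n2 - n1) / (w * math.hypot(n1, n2))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# With an illustrative w = 0.25, predicted accuracy falls as the
# ratio approaches 1, mirroring the 1:2 > 2:3 > 3:4 pattern that
# ratio-dependent performance studies report.
for n1, n2 in [(4, 8), (8, 12), (9, 12)]:  # ratios 1:2, 2:3, 3:4
    print(n1, n2, round(predicted_accuracy(n1, n2, 0.25), 3))
```

The article's point is that when congruent visual cues inflate or deflate measured accuracy, the w recovered by fitting this model absorbs those visual effects rather than pure numerical acuity.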

  11. Strictly-regular number system and data structures

    DEFF Research Database (Denmark)

    Elmasry, Amr Ahmed Abd Elmoneim; Jensen, Claus; Katajainen, Jyrki

    2010-01-01

    We introduce a new number system that we call the strictly-regular system, which efficiently supports the operations: digit-increment, digit-decrement, cut, concatenate, and add. Compared to other number systems, the strictly-regular system has distinguishable properties. It is superior to the re...

  12. Number of core samples: Mean concentrations and confidence intervals

    International Nuclear Information System (INIS)

    Jensen, L.; Cromar, R.D.; Wilmarth, S.R.; Heasler, P.G.

    1995-01-01

    This document provides estimates of how well the mean concentrations of analytes are known as a function of the number of core samples, composite samples, and replicate analyses. The estimates are based upon core composite data from nine recently sampled single-shell tanks. The results can be used when determining the number of core samples needed to ''characterize'' the waste from similar single-shell tanks. A standard way of expressing uncertainty in the estimate of a mean is with a 95% confidence interval (CI). The authors investigate how the width of a 95% CI on the mean concentration decreases as the number of observations increases. Specifically, the tables and figures show how the relative half-width (RHW) of a 95% CI decreases as the number of core samples increases. The RHW of a CI is a unit-less measure of uncertainty. The general conclusions are as follows: (1) the RHW decreases dramatically as the number of core samples is increased; the decrease is much smaller when the number of composited samples or the number of replicate analyses is increased; (2) if the mean concentration of an analyte needs to be estimated with a small RHW, then a large number of core samples is required. The estimated numbers of core samples given in the tables and figures were determined by specifying different sizes of the RHW. Four nominal sizes were examined: 10%, 25%, 50%, and 100% of the observed mean concentration. For a majority of analytes the number of core samples required to achieve an accuracy within 10% of the mean concentration is extremely large. In many cases, however, two or three core samples are sufficient to achieve a RHW of approximately 50 to 100%. Because many of the analytes in the data have small concentrations, this level of accuracy may be satisfactory for some applications.
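
The RHW statistic used above can be sketched as follows. This is a normal-approximation version (multiplier 1.96 rather than the exact Student-t quantile a full treatment would use), and the mean and standard deviation are illustrative, not values from the report:

```python
import math

def relative_half_width(mean, std_dev, n, z=1.96):
    """Relative half-width (RHW) of an approximate 95% confidence
    interval on a mean estimated from n samples: the half-width
    z * s / sqrt(n), expressed as a fraction of the observed mean."""
    return z * std_dev / math.sqrt(n) / mean

# Illustrative analyte: observed mean 10, standard deviation 8.
for n in (2, 3, 5, 10, 50):
    print(n, round(100 * relative_half_width(10.0, 8.0, n), 1), "%")
```

With these illustrative numbers, two or three cores already bring the RHW into the 90-110% range, while getting below 10% of the mean would take hundreds of cores, consistent with the report's qualitative conclusions.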

  13. Chernobyl today and compared to other disasters

    International Nuclear Information System (INIS)

    Lindner, L.

    2000-01-01

    The disaster in Unit 4 of the nuclear power plant of Chernobyl, now Ukraine, occurred fourteen years ago. Although much has been written about the accident, the public still has no proper yardstick by which to assess realistically the risk involved. This is true not only with respect to nuclear power plants of the type found in Germany and almost anywhere in the western world, but also in relation to non-nuclear disasters, which tend to be accepted by the public much more readily. As far as the number of persons killed or injured is concerned, the scope of the Chernobyl disaster turned out to be smaller than, or at least comparable to, other disasters. This is true even in comparison with other power generation technologies, for instance, accidents in coal mining or dam bursts. Even major railway accidents, airplane crashes, or the large number of people regularly killed in road traffic, are soon forgotten by the media. (orig.) [de

  14. Jet Impingement Heat Transfer at High Reynolds Numbers and Large Density Variations

    DEFF Research Database (Denmark)

    Jensen, Michael Vincent; Walther, Jens Honore

    2010-01-01

    Jet impingement heat transfer from a round gas jet to a flat wall has been investigated numerically in a configuration with H/D=2, where H is the distance from the jet inlet to the wall and D is the jet diameter. The jet Reynolds number was 361000 and the density ratio across the wall boundary layer was 3.3 due to a substantial temperature difference of 1600K between jet and wall. Results are presented which indicate very high heat flux levels, and it is demonstrated that the jet inlet turbulence intensity significantly influences the heat transfer results, especially in the stagnation region. The results also show a noticeable difference in the heat transfer predictions when applying different turbulence models. Furthermore, calculations were performed to study the effect of applying temperature-dependent thermophysical properties versus constant properties and the effect of calculating the gas

  15. Introduction: Scaling and structure in high Reynolds number wall-bounded flows

    International Nuclear Information System (INIS)

    McKeon, B.J.; Sreenivasan, K.R.

    2007-05-01

    The papers discussed in this report deal with the following aspects: fundamental scaling relations for canonical flows and the asymptotic approach to infinite Reynolds numbers; large and very large scales in near-wall turbulence; the influence of roughness and finite Reynolds number effects; comparison between internal and external flows and the universality of the near-wall region; qualitative and quantitative models of the turbulent boundary layer; the neutrally stable atmospheric surface layer as a model for a canonical zero-pressure-gradient boundary layer (author)

  16. Drug allergies documented in electronic health records of a large healthcare system.

    Science.gov (United States)

    Zhou, L; Dhopeshwarkar, N; Blumenthal, K G; Goss, F; Topaz, M; Slight, S P; Bates, D W

    2016-09-01

    The prevalence of drug allergies documented in electronic health records (EHRs) of large patient populations is understudied. We aimed to describe the prevalence of common drug allergies and patient characteristics documented in EHRs of a large healthcare network over the last two decades. Drug allergy data were obtained from EHRs of patients who visited two large tertiary care hospitals in Boston from 1990 to 2013. The prevalence of each drug and drug class was calculated and compared by sex and race/ethnicity. The number of allergies per patient was calculated and the frequency of patients having 1, 2, 3…, or 10+ drug allergies was reported. We also conducted a trend analysis by comparing the proportion of each allergy to the total number of drug allergies over time. Among 1 766 328 patients, 35.5% of patients had at least one reported drug allergy with an average of 1.95 drug allergies per patient. The most commonly reported drug allergies in this population were to penicillins (12.8%), sulfonamide antibiotics (7.4%), opiates (6.8%), and nonsteroidal anti-inflammatory drugs (NSAIDs) (3.5%). The relative proportion of allergies to angiotensin-converting enzyme (ACE) inhibitors and HMG CoA reductase inhibitors (statins) has more than doubled since the early 2000s. Drug allergies were most prevalent among females and white patients except for NSAIDs, ACE inhibitors, and thiazide diuretics, which were more prevalent in black patients. Females and white patients may be more likely to experience a reaction from common medications. An increase in reported allergies to ACE inhibitors and statins is noteworthy. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  17. DISCOVERY OF A LARGE NUMBER OF CANDIDATE PROTOCLUSTERS TRACED BY ∼15 Mpc-SCALE GALAXY OVERDENSITIES IN COSMOS

    International Nuclear Information System (INIS)

    Chiang, Yi-Kuan; Gebhardt, Karl; Overzier, Roderik

    2014-01-01

    To demonstrate the feasibility of studying the epoch of massive galaxy cluster formation in a more systematic manner using current and future galaxy surveys, we report the discovery of a large sample of protocluster candidates in the 1.62 deg^2 COSMOS/UltraVISTA field traced by optical/infrared selected galaxies using photometric redshifts. By comparing properly smoothed three-dimensional galaxy density maps of the observations and a set of matched simulations incorporating the dominant observational effects (galaxy selection and photometric redshift uncertainties), we first confirm that the observed ∼15 comoving Mpc-scale galaxy clustering is consistent with ΛCDM models. Using further the relation between high-z overdensity and the present day cluster mass calibrated in these matched simulations, we found 36 candidate structures at 1.6 < z < 3.1, showing overdensities consistent with the progenitors of M_z=0 ∼ 10^15 M_☉ clusters. Taking into account the significant upward scattering of lower mass structures, the probabilities for the candidates to have at least M_z=0 ∼ 10^14 M_☉ are ∼70%. For each structure, about 15%-40% of photometric galaxy candidates are expected to be true protocluster members that will merge into a cluster-scale halo by z = 0. With solely photometric redshifts, we successfully rediscover two spectroscopically confirmed structures in this field, suggesting that our algorithm is robust. This work generates a large sample of uniformly selected protocluster candidates, providing rich targets for spectroscopic follow-up and subsequent studies of cluster formation. Meanwhile, it demonstrates the potential for probing early cluster formation with upcoming redshift surveys such as the Hobby-Eberly Telescope Dark Energy Experiment and the Subaru Prime Focus Spectrograph survey

  18. The number of expats is rather stable

    DEFF Research Database (Denmark)

    Andersen, Torben

    2008-01-01

    Aggregate data from the Danish economists' and engineers' trade unions show that during the last decade there has been stagnation in the number of expatriates. Taking into consideration that the three trade unions cover the very large majority of Danish knowledge workers occupying foreign jobs...

  19. Radical Software. Number Two. The Electromagnetic Spectrum.

    Science.gov (United States)

    Korot, Beryl, Ed.; Gershuny, Phyllis, Ed.

    1970-01-01

    In an effort to foster the innovative uses of television technology, this tabloid format periodical details social, educational, and artistic experiments with television and lists a large number of experimental videotapes available from various television-centered groups and individuals. The principal areas explored in this issue include cable…

  20. On the binary expansions of algebraic numbers

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.; Crandall, Richard E.; Pomerance, Carl

    2003-07-01

    Employing concepts from additive number theory, together with results on binary evaluations and partial series, we establish bounds on the density of 1's in the binary expansions of real algebraic numbers. A central result is that if a real y has algebraic degree D > 1, then the number #(|y|, N) of 1-bits in the expansion of |y| through bit position N satisfies #(|y|, N) > CN^(1/D) for a positive number C (depending on y) and sufficiently large N. This in itself establishes the transcendency of a class of reals ∑_{n≥0} 1/2^f(n) where the integer-valued function f grows sufficiently fast; say, faster than any fixed power of n. By these methods we re-establish the transcendency of the Kempner-Mahler number ∑_{n≥0} 1/2^(2^n), yet we can also handle numbers with a substantially denser occurrence of 1's. Though the number z = ∑_{n≥0} 1/2^(n^2) has too high a 1's density for application of our central result, we are able to invoke some rather intricate number-theoretical analysis and extended computations to reveal aspects of the binary structure of z^2.
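
The quantity #(|y|, N) above is easy to probe empirically. A small sketch counting the 1-bits of sqrt(2) (algebraic degree D = 2, so the theorem guarantees growth of at least C·N^(1/2)) through bit position N, using exact integer arithmetic:

```python
import math

def ones_in_expansion(N):
    """Number of 1-bits in the binary expansion of sqrt(2) through
    bit position N, computed exactly as the popcount of
    floor(sqrt(2) * 2^N) = isqrt(2 * 4^N)."""
    return bin(math.isqrt(2 << (2 * N))).count("1")

# Empirically the density of 1's hovers near N/2, as expected if
# sqrt(2) behaves like a normal number; this sits far above the
# guaranteed lower bound of order N^(1/2).
for N in (100, 1000):
    print(N, ones_in_expansion(N))
```

Using `math.isqrt` on the scaled integer avoids floating-point error entirely, so the bit counts are exact at any N.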

  1. Interaction of Number Magnitude and Auditory Localization.

    Science.gov (United States)

    Golob, Edward J; Lewald, Jörg; Jungilligens, Johannes; Getzmann, Stephan

    2016-01-01

    The interplay of perception and memory is very evident when we perceive and then recognize familiar stimuli. Conversely, information in long-term memory may also influence how a stimulus is perceived. Prior work on number cognition in the visual modality has shown that in Western number systems long-term memory for the magnitude of smaller numbers can influence performance involving the left side of space, while larger numbers have an influence toward the right. Here, we investigated in the auditory modality whether a related effect may bias the perception of sound location. Subjects (n = 28) used a swivel pointer to localize noise bursts presented from various azimuth positions. The noise bursts were preceded by a spoken number (1-9) or, as a nonsemantic control condition, numbers that were played in reverse. The relative constant error in noise localization (forward minus reversed speech) indicated a systematic shift in localization toward more central locations when the number was smaller and toward more peripheral positions when the preceding number magnitude was larger. These findings do not support the traditional left-right number mapping. Instead, the results may reflect an overlap between codes for number magnitude and codes for sound location as implemented by two channel models of sound localization, or possibly a categorical mapping stage of small versus large magnitudes. © The Author(s) 2015.

  2. Total dominator chromatic number of a graph

    Directory of Open Access Journals (Sweden)

    Adel P. Kazemi

    2015-06-01

    Full Text Available Given a graph $G$, the total dominator coloring problem seeks a proper coloring of $G$ with the additional property that every vertex in the graph is adjacent to all vertices of a color class. We seek to minimize the number of color classes. We initiate the study of this problem on several classes of graphs, and find general bounds and characterizations. We also compare the total dominator chromatic number of a graph with its chromatic number and total domination number.
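
For small graphs the total dominator chromatic number can be computed by brute force directly from the definition. A sketch (graphs given as adjacency lists; exponential in the number of vertices, so toy-sized inputs only):

```python
from itertools import product

def total_dominator_chromatic_number(adj):
    """Smallest k admitting a proper k-coloring of the graph (adjacency
    list `adj`) in which every vertex is adjacent to all vertices of
    some nonempty color class. Brute force over all colorings."""
    n = len(adj)
    for k in range(1, n + 1):
        for coloring in product(range(k), repeat=n):
            # proper: the endpoints of every edge get different colors
            if any(coloring[u] == coloring[v]
                   for u in range(n) for v in adj[u]):
                continue
            classes = [
                {v for v in range(n) if coloring[v] == c} for c in range(k)
            ]
            # total dominator: each vertex sees an entire color class
            if all(
                any(cl and cl <= set(adj[v]) for cl in classes)
                for v in range(n)
            ):
                return k
    return n

# 4-cycle 0-1-2-3-0: the 2-coloring {0,2}/{1,3} works, since each
# vertex's neighborhood is exactly the other color class.
c4 = [[1, 3], [0, 2], [1, 3], [0, 2]]
print(total_dominator_chromatic_number(c4))  # 2
```

On the star K_{1,3} the same search also returns 2: the center must form a singleton class (each leaf's only neighbor), and the leaves together form the class dominated by the center.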

  3. The large-s field-reversed configuration experiment

    International Nuclear Information System (INIS)

    Hoffman, A.L.; Carey, L.N.; Crawford, E.A.; Harding, D.G.; DeHart, T.E.; McDonald, K.F.; McNeil, J.L.; Milroy, R.D.; Slough, J.T.; Maqueda, R.; Wurden, G.A.

    1993-01-01

    The Large-s Experiment (LSX) was built to study the formation and equilibrium properties of field-reversed configurations (FRCs) as the scale size increases. The dynamic, field-reversed theta-pinch method of FRC creation produces axial and azimuthal deformations and makes formation difficult, especially in large devices with large s (number of internal gyroradii) where it is difficult to achieve initial plasma uniformity. However, with the proper technique, these formation distortions can be minimized and are then observed to decay with time. This suggests that the basic stability and robustness of FRCs formed, and in some cases translated, in smaller devices may also characterize larger FRCs. Elaborate formation controls were included on LSX to provide the initial uniformity and symmetry necessary to minimize formation disturbances, and stable FRCs could be formed up to the design goal of s = 8. For s ≤ 4, the formation distortions decayed away completely, resulting in symmetric equilibrium FRCs with record confinement times up to 0.5 ms, agreeing with previous empirical scaling laws (τ∝sR). Above s = 4, reasonably long-lived (up to 0.3 ms) configurations could still be formed, but the initial formation distortions were so large that they never completely decayed away, and the equilibrium confinement was degraded from the empirical expectations. The LSX was only operational for 1 yr, and it is not known whether s = 4 represents a fundamental limit for good confinement in simple (no ion beam stabilization) FRCs or whether it simply reflects a limit of present formation technology. Ideally, s could be increased through flux buildup from neutral beams. Since the addition of kinetic or beam ions will probably be desirable for heating, sustainment, and further stabilization of magnetohydrodynamic modes at reactor-level s values, neutral beam injection is the next logical step in FRC development. 24 refs., 21 figs., 2 tabs

  4. Maps on large-scale air quality concentrations in the Netherlands. Report on 2008

    International Nuclear Information System (INIS)

    Velders, G.J.M.; Aben, J.M.M.; Blom, W.F.; Van Dam, J.D.; Elzenga, H.E.; Geilenkirchen, G.P.; Hammingh, P.; Hoen, A.; Jimmink, B.A.; Koelemeijer, R.B.A.; Matthijsen, J.; Peek, C.J.; Schilderman, C.B.W.; Van der Sluis, O.C.; De Vries, W.J.

    2008-01-01

    Decrease expected in the number of locations exceeding the air quality limit values. In the Netherlands, the number of locations where the European limit values for particulate matter and nitrogen dioxide will be exceeded is expected to decrease by 70-90% in the periods up to 2011 and 2015, respectively. The limit value for particulate matter from 2011 onwards, and for nitrogen dioxide from 2015 onwards, is expected to be exceeded at a small number of locations in the Netherlands, based on standing and proposed Dutch and European policies. These locations are situated mainly in the Randstad, in the vicinity of motorways around the large cities and in the busiest streets within large cities. Whether the limit values will actually be exceeded also depends on local policies and meteorological fluctuations. This estimate is based on large-scale concentration maps (called GCN maps) of air quality components and on additional local contributions. The concentration maps provide the best possible estimate of large-scale air quality. The degree of uncertainty about the local concentrations of particulate matter and nitrogen dioxide is estimated to be approximately 20%. This report presents the methods used to produce the GCN maps and the emissions included in them. It also shows the differences with respect to the maps of 2007. These maps are used by local, provincial and other authorities. MNP emphasises that these uncertainties in the concentrations should be kept in mind when the maps are used for planning, or when concentrations are compared with limit values. This also applies to the selection of local measures to improve air quality. The concentration maps are available online, at http://www.mnp.nl/gcn.html

  5. GRIP LANGLEY AEROSOL RESEARCH GROUP EXPERIMENT (LARGE) V1

    Data.gov (United States)

    National Aeronautics and Space Administration — Langley Aerosol Research Group Experiment (LARGE) measures ultrafine aerosol number density, total and non-volatile aerosol number density, dry aerosol size...

  6. Set size and culture influence children's attention to number.

    Science.gov (United States)

    Cantrell, Lisa; Kuwabara, Megumi; Smith, Linda B

    2015-03-01

    Much research evidences a system in adults and young children for approximately representing quantity. Here we provide evidence that the bias to attend to discrete quantity versus other dimensions may be mediated by set size and culture. Preschool-age English-speaking children in the United States and Japanese-speaking children in Japan were tested in a match-to-sample task where number was pitted against cumulative surface area in both large and small numerical set comparisons. Results showed that children from both cultures were biased to attend to the number of items for small sets. Large set responses also showed a general attention to number when ratio difficulty was easy. However, relative to the responses for small sets, attention to number decreased for both groups; moreover, both U.S. and Japanese children showed a significant bias to attend to total amount for difficult numerical ratio distances, although Japanese children shifted attention to total area at relatively smaller set sizes than U.S. children. These results add to our growing understanding of how quantity is represented and how such representation is influenced by context--both cultural and perceptual. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Number-conserving random phase approximation with analytically integrated matrix elements

    International Nuclear Information System (INIS)

    Kyotoku, M.; Schmid, K.W.; Gruemmer, F.; Faessler, A.

    1990-01-01

    In the present paper a number conserving random phase approximation is derived as a special case of the recently developed random phase approximation in general symmetry projected quasiparticle mean fields. All the occurring integrals induced by the number projection are performed analytically after writing the various overlap and energy matrices in the random phase approximation equation as polynomials in the gauge angle. In the limit of a large number of particles the well-known pairing vibration matrix elements are recovered. We also present a new analytically number projected variational equation for the number conserving pairing problem

  8. DNS/LES Simulations of Separated Flows at High Reynolds Numbers

    Science.gov (United States)

    Balakumar, P.

    2015-01-01

    Direct numerical simulations (DNS) and large-eddy simulations (LES) of flow through a periodic channel with a constriction are performed using the dynamic Smagorinsky model at two Reynolds numbers of 2800 and 10595. The LES equations are solved using higher order compact schemes. DNS are performed for the lower Reynolds number case using a fine grid and the data are used to validate the LES results obtained with a coarse and a medium size grid. LES simulations are also performed for the higher Reynolds number case using a coarse and a medium size grid. The results are compared with an existing reference data set. The DNS and LES results agreed well with the reference data. Reynolds stresses, sub-grid eddy viscosity, and the budgets for the turbulent kinetic energy are also presented. It is found that the turbulent fluctuations in the normal and spanwise directions have the same magnitude. The turbulent kinetic energy budget shows that the production peaks near the separation point region, where the production-to-dissipation ratio is very high, on the order of five. It is also observed that the production is balanced by the advection, diffusion, and dissipation in the shear layer region. The dominant term is the turbulent diffusion, which is about two times the molecular dissipation.

  9. Gaming the Law of Large Numbers

    Science.gov (United States)

    Hoffman, Thomas R.; Snapp, Bart

    2012-01-01

    Many view mathematics as a rich and wonderfully elaborate game. In turn, games can be used to illustrate mathematical ideas. Fibber's Dice, an adaptation of the game Liar's Dice, is a fast-paced game that rewards gutsy moves and favors the underdog. It also brings to life concepts arising in the study of probability. In particular, Fibber's Dice…
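    The law of large numbers that Fibber's Dice brings to life can be illustrated with a short simulation (an illustrative sketch, not part of the cited article): the observed frequency of any die face converges to its theoretical probability of 1/6 as the number of rolls grows.

```python
import random

def proportion_of_ones(n_rolls, seed=0):
    """Roll a fair six-sided die n_rolls times and return the
    observed proportion of ones (theoretical value: 1/6)."""
    rng = random.Random(seed)
    ones = sum(1 for _ in range(n_rolls) if rng.randint(1, 6) == 1)
    return ones / n_rolls

# The estimate wanders for small samples and settles near 1/6
# as the sample grows -- the law of large numbers in action.
for n in (10, 1_000, 100_000):
    print(n, proportion_of_ones(n))
```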

  10. Mating competitiveness of sterile male Anopheles coluzzii in large cages.

    Science.gov (United States)

    Maïga, Hamidou; Damiens, David; Niang, Abdoulaye; Sawadogo, Simon P; Fatherhaman, Omnia; Lees, Rosemary S; Roux, Olivier; Dabiré, Roch K; Ouédraogo, Georges A; Tripet, Fréderic; Diabaté, Abdoulaye; Gilles, Jeremie R L

    2014-11-26

    Understanding the factors that account for male mating competitiveness is critical to the development of the sterile insect technique (SIT). Here, the effects of partial sterilization with 90 Gy of radiation on sexual competitiveness of Anopheles coluzzii allowed to mate in different ratios of sterile to untreated males have been assessed. Moreover, competitiveness was compared between males allowed one versus two days of contact with females. Sterile and untreated males four to six days of age were released in large cages (~1.75 sq m) with females of similar age at the following ratios of sterile males: untreated males: untreated virgin females: 100:100:100, 300:100:100, 500:100:100 (three replicates of each) and left for two days. Competitiveness was determined by assessing the egg hatch rate and the insemination rate, determined by dissecting recaptured females. An additional experiment was conducted with a ratio of 500:100:100 and a mating period of either one or two days. Two controls of 0:100:100 (untreated control) and 100:0:100 (sterile control) were used in each experiment. When males and females consorted for two days, a significant difference in insemination rate was observed between ratio treatments. The competitiveness index (C) of sterile males compared to controls was 0.53. The number of days of exposure to mates significantly increased the insemination rate, as did the increased number of males present in the untreated: sterile male ratio treatments, but the number of days of exposure did not have any effect on the hatch rate. The comparability of the hatch rates between experiments suggests that An. coluzzii mating competitiveness experiments in large cages could be run for one instead of two days, shortening the required length of the experiment.
Sterilized males were half as competitive as untreated males, but an effective release ratio of at least five sterile for one untreated male has the potential to impact the fertility of

  11. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Blejwas, T.E.

    2003-01-01

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost in particular limits its use. As computer models have grown in size (e.g., in the number of degrees of freedom), the advent of computer graphics has made possible very realistic representations of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is the validation of the analytical methods and underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  12. All-optical fast random number generator.

    Science.gov (United States)

    Li, Pu; Wang, Yun-Cai; Zhang, Jian-Zhong

    2010-09-13

    We propose a scheme for an all-optical random number generator (RNG), which consists of an ultra-wide-bandwidth (UWB) chaotic laser, an all-optical sampler and an all-optical comparator. Because it is free of electronic-device bandwidth limitations, it can generate 10 Gbit/s random numbers in our simulation. The high-speed bit sequences can pass standard statistical tests for randomness after an all-optical exclusive-or (XOR) operation.
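    The XOR post-processing step mentioned in the abstract is a standard whitening trick: XOR-ing two independent, slightly biased bit streams sharply reduces the bias. A minimal software sketch of that effect (illustrative only; the paper's operation is all-optical, and the biased sources below are hypothetical):

```python
import random

def xor_whiten(bits_a, bits_b):
    """XOR two independent raw bit streams.  If each stream emits 1
    with probability 0.5 + e, the XOR-ed stream emits 1 with
    probability 0.5 - 2*e*e, i.e. the bias shrinks quadratically."""
    return [a ^ b for a, b in zip(bits_a, bits_b)]

# Two hypothetical biased sources (P(1) = 0.6, so e = 0.1).
rng = random.Random(1)
biased = lambda n: [1 if rng.random() < 0.6 else 0 for _ in range(n)]
a, b = biased(100_000), biased(100_000)
w = xor_whiten(a, b)
print(sum(a) / len(a))  # close to 0.60
print(sum(w) / len(w))  # close to 0.48 = 0.5 - 2 * 0.1**2
```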

  13. Random numbers spring from alpha decay

    International Nuclear Information System (INIS)

    Frigerio, N.A.; Sanathanan, L.P.; Morley, M.; Clark, N.A.; Tyler, S.A.

    1980-05-01

    Congruential random number generators, which are widely used in Monte Carlo simulations, are deficient in that the numbers they generate are concentrated in a relatively small number of hyperplanes. While this deficiency may not be a limitation in small Monte Carlo studies involving a few variables, it introduces a significant bias in large simulations requiring high resolution. This bias was recognized and assessed during preparations for an accident analysis study of nuclear power plants. This report describes a random number device based on the radioactive decay of alpha particles from a 235U source in a high-resolution gas proportional counter. The signals were fed to a 4096-channel analyzer, and for each channel the frequency of signals registered in a 20,000-microsecond interval was recorded. The parity bits of these frequency counts (0 for an even count and 1 for an odd count) were then assembled in sequence to form 31-bit binary random numbers and transcribed to a magnetic tape. This cycle was repeated as many times as necessary to create 3 million random numbers. The frequency distribution of counts from the present device conforms to the Brockwell-Moyal distribution, which takes into account the dead time of the counter (both the dead time and the decay constant of the underlying Poisson process were estimated). Analysis of the count data and tests of randomness on a sample set of the 31-bit binary numbers indicate that this random number device is a highly reliable source of truly random numbers. Its use is, therefore, recommended in Monte Carlo simulations for which the congruential pseudorandom number generators are found to be inadequate. 6 figures, 5 tables
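    The bit-assembly step the report describes (take the parity of each frequency count, then pack successive parity bits into 31-bit words) can be sketched in a few lines; the example counts below are synthetic stand-ins, not values from the report:

```python
def parity_bits_to_numbers(counts, width=31):
    """Take the parity of each frequency count (0 for an even
    count, 1 for an odd count) and pack successive parity bits,
    most significant first, into width-bit integers.  Leftover
    bits that do not fill a full word are discarded."""
    bits = [c & 1 for c in counts]
    numbers = []
    for i in range(0, len(bits) - width + 1, width):
        value = 0
        for b in bits[i:i + width]:
            value = (value << 1) | b
        numbers.append(value)
    return numbers

# Tiny worked example with width=4:
# counts 3, 8, 5, 2 -> parities 1, 0, 1, 0 -> binary 1010 = 10.
print(parity_bits_to_numbers([3, 8, 5, 2], width=4))  # [10]
```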

  14. Damköhler number effects on soot formation and growth in turbulent nonpremixed flames

    KAUST Repository

    Attili, Antonio

    2015-01-01

    The effect of Damköhler number on turbulent nonpremixed sooting flames is investigated via large scale direct numerical simulation in three-dimensional n-heptane/air jet flames at a jet Reynolds number of 15,000 and at three different Damköhler numbers. A reduced chemical mechanism, which includes the soot precursor naphthalene, and a high-order method of moments are employed. At the highest Damköhler number, local extinction is negligible, while flame holes are observed in the two lowest Damköhler number cases. Compared to temperature and other species controlled by fuel oxidation chemistry, naphthalene is found to be affected more significantly by the Damköhler number. Consequently, the overall soot mass fraction decreases by more than one order of magnitude for a fourfold decrease of the Damköhler number. On the contrary, the overall number density of soot particles is approximately the same, but its distribution in mixture fraction space is different in the three cases. The total soot mass growth rate is found to be proportional to the Damköhler number. In the two lowest Damköhler number cases, soot leakage across the flame is observed. Leveraging Lagrangian statistics, it is concluded that soot leakage is due to patches of soot that cross the stoichiometric surface through flame holes. These results show the leading order effects of turbulent mixing in controlling the dynamics of soot in turbulent flames. © 2014 The Combustion Institute. Published by Elsevier Inc. All rights reserved.

  15. The SNARC effect in two dimensions: Evidence for a frontoparallel mental number plane.

    Science.gov (United States)

    Hesse, Philipp Nikolaus; Bremmer, Frank

    2017-01-01

    The existence of an association between numbers and space has been known for a long time. The most prominent demonstration of this relationship is the spatial numerical association of response codes (SNARC) effect, describing the fact that participants' reaction times are shorter with the left hand for small numbers and with the right hand for large numbers when they are asked to judge the parity of a number (Dehaene et al., J. Exp. Psychol., 122, 371-396, 1993). The SNARC effect is commonly seen as support for the concept of a mental number line, i.e. a mentally conceived line where small numbers are represented more on the left and large numbers are represented more on the right. The SNARC effect has been demonstrated for all three cardinal axes, and recently a transverse SNARC plane has been reported (Chen et al., Exp. Brain Res., 233(5), 1519-1528, 2015). Here, by employing saccadic responses induced by auditory or visual stimuli, we measured the SNARC effect within the same subjects along the horizontal meridian (HM) and vertical meridian (VM) and along the two interspersed diagonals. We found a SNARC effect along the HM and VM, which allowed us to predict the occurrence of a SNARC effect along the two diagonals by means of linear regression. Importantly, significant differences in SNARC strength were found between modalities. Our results suggest the existence of a frontoparallel mental number plane, where small numbers are represented left and down, while large numbers are represented right and up. Together with the recently described transverse mental number plane our findings provide further evidence for the existence of a three-dimensional mental number space. Copyright © 2016 Elsevier Ltd. All rights reserved.
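    A common way to quantify a SNARC effect, in the spirit of the regression analysis the abstract mentions, is to regress dRT (right-hand minus left-hand reaction time) on number magnitude; a negative slope is the SNARC signature. A self-contained sketch with hypothetical data (the dRT values below are invented for illustration, not taken from the study):

```python
def snarc_slope(numbers, drt):
    """Ordinary least-squares slope of dRT (right-hand RT minus
    left-hand RT, in ms) regressed on number magnitude.  A negative
    slope is the classic SNARC signature: right-hand responses get
    relatively faster as the numbers grow."""
    n = len(numbers)
    mx = sum(numbers) / n
    my = sum(drt) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(numbers, drt))
    var = sum((x - mx) ** 2 for x in numbers)
    return cov / var

# Hypothetical per-number dRT values showing a SNARC-like pattern.
nums = [1, 2, 3, 4, 6, 7, 8, 9]
drt = [30, 22, 15, 8, -6, -14, -21, -29]
print(snarc_slope(nums, drt))  # negative slope -> SNARC effect
```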

  16. Topological relics of symmetry breaking: winding numbers and scaling tilts from random vortex–antivortex pairs

    International Nuclear Information System (INIS)

    Zurek, W H

    2013-01-01

    I show that random distributions of vortex–antivortex pairs (rather than of individual vortices) lead to scaling of typical winding numbers W trapped inside a loop of circumference C with the square root of that circumference, W ∼ √C, when the expected winding numbers are large, |W| ≫ 1. Such scaling is consistent with the Kibble–Zurek mechanism (KZM), with ⟨W²⟩ inversely proportional to ξ̂, the typical size of the domain that can break symmetry in unison. (The dependence of ξ̂ on quench rate is predicted by KZM from critical exponents of the phase transition.) Thus, according to KZM, the dispersion √⟨W²⟩ scales as √(C/ξ̂) for large W. By contrast, a distribution of individual vortices with randomly assigned topological charges would result in the dispersion scaling with the square root of the area inside C (i.e., √⟨W²⟩ ∼ C). Scaling of the dispersion of W, as well as of the probability of detecting non-zero W, with C and ξ̂ can also be studied for loops so small that non-zero windings are rare. In this case I show that the dispersion varies not as 1/√ξ̂ but as 1/ξ̂, which results in a doubling of the scaling of the dispersion with the quench rate when compared to the large-|W| regime. Moreover, the probability of trapping non-zero W becomes approximately equal to ⟨W²⟩, and scales as 1/ξ̂². This quadruples the exponent in the power-law dependence of the frequency of trapping |W| = 1 on ξ̂ when the probability of |W| > 1 is negligible (as compared with √⟨W²⟩ ≃ √(C/ξ̂), valid for large W). This change of the power-law exponent by a factor of four, from 1/√ξ̂ for the dispersion of large W to 1/ξ̂² for the frequency of non-zero W when |W| > 1 is negligibly rare, is of paramount importance for experimental tests of KZM. (paper)
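    The pair-versus-individual scaling argument lends itself to a quick Monte Carlo check (an illustrative sketch under simplifying assumptions, not the paper's calculation): only pairs that straddle the loop contribute a net winding, so the winding variance grows with the circumference for pairs, but with the enclosed area for independent random charges.

```python
import math
import random

def _inside(x, y, r):
    return 1 if x * x + y * y <= r * r else 0

def winding_dispersion(radius, n_pairs, paired, sep=1.0,
                       box=100.0, trials=200, seed=0):
    """Monte Carlo estimate of sqrt(<W^2>) inside a circular loop,
    where W is the net topological charge enclosed, for random
    vortex-antivortex pairs (paired=True) or for the same number of
    independent vortices with random +/-1 charges (paired=False)."""
    rng = random.Random(seed)
    half = box / 2.0
    total = 0.0
    for _ in range(trials):
        w = 0
        for _ in range(n_pairs):
            x = rng.uniform(-half, half)
            y = rng.uniform(-half, half)
            if paired:
                theta = rng.uniform(0.0, 2.0 * math.pi)
                x2 = x + sep * math.cos(theta)
                y2 = y + sep * math.sin(theta)
                # A pair contributes only if the loop separates it.
                w += _inside(x, y, radius) - _inside(x2, y2, radius)
            else:
                w += rng.choice((-1, 1)) * _inside(x, y, radius)
        total += w * w
    return math.sqrt(total / trials)

# Quadrupling the loop radius should roughly double the dispersion
# for pairs (perimeter scaling, sqrt(4) = 2) but roughly quadruple
# it for independent charges (area scaling).
for paired in (True, False):
    d_small = winding_dispersion(10.0, 1000, paired, trials=100)
    d_large = winding_dispersion(40.0, 1000, paired, trials=100)
    print(paired, round(d_large / d_small, 2))
```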

  17. Seed size-number trade-off in Euterpe edulis in plant communities of the Atlantic Forest

    Directory of Open Access Journals (Sweden)

    Pedro Henrique Santin Brancalion

    2014-06-01

    Full Text Available Investigations of seed size and number differences among plant populations growing in contrasting habitats can provide relevant information about ecological strategies that optimize reproductive effort. This may have important consequences for biodiversity conservation and restoration. We therefore investigated the seed size-number trade-off in Euterpe edulis populations growing in plant communities of the Brazilian Atlantic Forest. Seed dry mass and seed number per bunch were evaluated in 2008 and 2009 in large remnants of the Seasonally Dry Forest, Restinga Forest and Atlantic Rainforest in southeastern Brazil, in 20 individuals per site and year. Seed size and seed number varied among forest types, but a seed size-number trade-off was observed neither within nor among populations. A positive association between seed size and number was found in the Atlantic Rainforest, and a reduced seed crop was not accompanied by heavier seeds in the Restinga Forest. Seed dry mass declined in 2009 in all three forest types. Compared to seed number in 2008, palms of both the Restinga Forest and the Atlantic Rainforest produced higher yields of smaller seeds in 2009 - evidence of a between-years seed size-number trade-off - while the Seasonally Dry Forest population produced a reduced number of smaller seeds. Such a flexible reproductive strategy, involving neutral, positive and negative associations between seed size and number, could enhance the ecological amplitude of this species and its potential to adapt to different environmental conditions.

  18. Measuring happiness in large population

    Science.gov (United States)

    Wenas, Annabelle; Sjahputri, Smita; Takwin, Bagus; Primaldhi, Alfindra; Muhamad, Roby

    2016-01-01

    The ability to know the emotional states of large numbers of people is important, for example, to ensure the effectiveness of public policies. In this study, we propose a measure of happiness that can be used in large populations and that is based on the analysis of Indonesian language lexicons. Here, we incorporate human assessments of Indonesian words, then quantify happiness over a large collection of texts gathered from Twitter conversations. We used two psychological constructs to measure happiness: valence and arousal. We found that Indonesian words have a tendency towards positive emotions. We also identified several happiness patterns across days of the week, hours of the day, and selected conversation topics.
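    The lexicon-based scoring the study describes can be sketched as a word-level average of human-rated valence scores. The mini-lexicon below is hypothetical, merely shaped like the large human-rated word norms such a study relies on:

```python
# Hypothetical valence ratings on a 1-9 scale (9 = most positive);
# real studies use large sets of human-rated word norms.
VALENCE = {
    "senang": 8.2,   # happy
    "bahagia": 8.5,  # joyful
    "sedih": 2.1,    # sad
    "marah": 1.9,    # angry
    "makan": 6.0,    # to eat
}

def text_happiness(text, lexicon=VALENCE):
    """Average valence of the scored words found in a text;
    returns None when no scored word occurs."""
    scores = [lexicon[w] for w in text.lower().split() if w in lexicon]
    return sum(scores) / len(scores) if scores else None

print(text_happiness("Saya senang makan"))  # (8.2 + 6.0) / 2 = 7.1
print(text_happiness("tidak ada skor"))     # None: no scored words
```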

  19. Gauge theory for baryon and lepton numbers with leptoquarks.

    Science.gov (United States)

    Duerr, Michael; Fileviez Pérez, Pavel; Wise, Mark B

    2013-06-07

    Models where the baryon (B) and lepton (L) numbers are local gauge symmetries that are spontaneously broken at a low scale are revisited. We find new extensions of the standard model which predict the existence of fermions that carry both baryon and lepton numbers (i.e., leptoquarks). The local baryonic and leptonic symmetries can be broken at a scale close to the electroweak scale and we do not need to postulate the existence of a large desert to satisfy the experimental constraints on baryon number violating processes like proton decay.

  20. Number Sense on the Number Line

    Science.gov (United States)

    Woods, Dawn Marie; Ketterlin Geller, Leanne; Basaraba, Deni

    2018-01-01

    A strong foundation in early number concepts is critical for students' future success in mathematics. Research suggests that visual representations, like a number line, support students' development of number sense by helping them create a mental representation of the order and magnitude of numbers. In addition, explicitly sequencing instruction…