WorldWideScience

Sample records for comparing large numbers

  1. On the Use of Educational Numbers: Comparative Constructions of Hierarchies by Means of Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Daniel Pettersson

    2016-01-01

later the growing importance of transnational agencies and international, regional and national assessments. How to reference this article: Pettersson, D., Popkewitz, T. S., & Lindblad, S. (2016). On the Use of Educational Numbers: Comparative Constructions of Hierarchies by Means of Large-Scale Assessments. Espacio, Tiempo y Educación, 3(1), 177-202. doi: http://dx.doi.org/10.14516/ete.2016.003.001.10

  2. Large number discrimination by mosquitofish.

    Directory of Open Access Journals (Sweden)

    Christian Agrillo

Full Text Available BACKGROUND: Recent studies have demonstrated that fish display rudimentary numerical abilities similar to those observed in mammals and birds. The mechanisms underlying the discrimination of small quantities (<4) were recently investigated while, to date, no study has examined the discrimination of large numerosities in fish. METHODOLOGY/PRINCIPAL FINDINGS: Subjects were trained to discriminate between two sets of small geometric figures using social reinforcement. In the first experiment mosquitofish were required to discriminate 4 from 8 objects with or without experimental control of the continuous variables that co-vary with number (area, space, density, total luminance). Results showed that fish can use numerical information alone to compare quantities, but that they preferentially use cumulative surface area as a proxy for number when this information is available. A second experiment investigated the influence of the total number of elements on the discrimination of large quantities. Fish proved able to discriminate up to 100 vs. 200 objects, without showing any significant decrease in accuracy compared with the 4 vs. 8 discrimination. The third experiment investigated the influence of the ratio between the numerosities. Performance was found to decrease as the numerical distance decreased. Fish were able to discriminate numbers when ratios were 1:2 or 2:3, but not when the ratio was 3:4. The performance of a sample of undergraduate students, tested non-verbally using the same sets of stimuli, largely overlapped that of fish. CONCLUSIONS/SIGNIFICANCE: Fish are able to use purely numerical information when discriminating between quantities larger than 4 units. As observed in human and non-human primates, the numerical system of fish appears to have virtually no upper limit, while the numerical ratio has a clear effect on performance. These similarities further reinforce the view of a common origin of non-verbal numerical systems in all

  3. Comparative efficacy of tulathromycin versus a combination of florfenicol-oxytetracycline in the treatment of undifferentiated respiratory disease in large numbers of sheep

    Directory of Open Access Journals (Sweden)

    Mohsen Champour

    2015-09-01

Full Text Available The objective of this study was to compare the efficacy of tulathromycin (TUL) with a combination of florfenicol (FFC) and long-acting oxytetracycline (LAOTC) in the treatment of naturally occurring undifferentiated respiratory diseases in large numbers of sheep. In this study, seven natural outbreaks of sheep pneumonia in Garmsar, Iran were considered. From these outbreaks, 400 sheep exhibiting signs of respiratory disease were selected and randomly divided into two equal groups. The first group was treated with a single injection of TUL (dosed at 2.5 mg/kg body weight), and the second group was treated with concurrent injections of FFC (dosed at 40 mg/kg bwt) and LAOTC (dosed at 20 mg/kg bwt). In the first group, 186 (93%) sheep were found to be cured 5 days after the injection, and 14 (7%) sheep needed further treatment, of which 6 (3%) were cured and 8 (4%) died. In the second group, 172 (86%) sheep were cured after the injections, but 28 (14%) sheep needed further treatment, of which 10 (5%) were cured and 18 (9%) died. This study revealed that TUL was more efficacious than the combined treatment with FFC and LAOTC. This field trial is the first report describing the successful treatment of undifferentiated respiratory diseases in large numbers of sheep. Thus, TUL can be used for the treatment of undifferentiated respiratory diseases in sheep. [J Adv Vet Anim Res 2015; 2(3): 279-284]
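The cure counts reported in the abstract (186/200 for TUL vs. 172/200 for FFC+LAOTC) can be compared with a standard two-proportion z-test. The sketch below is illustrative only and not part of the study's own analysis:

```python
from math import sqrt

def two_proportion_z(cured_a, n_a, cured_b, n_b):
    """Two-proportion z-test statistic for comparing two cure rates."""
    p_a, p_b = cured_a / n_a, cured_b / n_b
    # Pooled proportion under the null hypothesis of equal cure rates
    p_pool = (cured_a + cured_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Figures from the abstract: 186/200 cured with TUL, 172/200 with FFC+LAOTC
z = two_proportion_z(186, 200, 172, 200)
print(round(z, 2))  # → 2.28
```

A z-statistic of about 2.28 corresponds to a two-sided p-value just under 0.05, consistent with the abstract's conclusion that TUL was more efficacious.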

  4. Forecasting distribution of numbers of large fires

    Science.gov (United States)

    Haiganoush K. Preisler; Jeff Eidenshink; Stephen Howard; Robert E. Burgan

    2015-01-01

Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however, they indicate neither the chance that a large fire will occur, nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the...

  5. Thermal convection for large Prandtl numbers

    NARCIS (Netherlands)

    Grossmann, Siegfried; Lohse, Detlef

    2001-01-01

    The Rayleigh-Bénard theory by Grossmann and Lohse [J. Fluid Mech. 407, 27 (2000)] is extended towards very large Prandtl numbers Pr. The Nusselt number Nu is found here to be independent of Pr. However, for fixed Rayleigh numbers Ra a maximum in the Nu(Pr) dependence is predicted. We moreover offer

  6. Large number discrimination in newborn fish.

    Directory of Open Access Journals (Sweden)

    Laura Piffer

Full Text Available Quantitative abilities have been reported in a wide range of species, including fish. Recent studies have shown that adult guppies (Poecilia reticulata) can spontaneously select the larger number of conspecifics. In particular, the evidence collected in the literature suggests the existence of two distinct systems of number representation: a precise system up to 4 units, and an approximate system for larger numbers. Spontaneous numerical abilities, however, seem to be limited to 4 units at birth, and it is currently unclear whether or not the large number system is absent during the first days of life. In the present study, we investigated whether newborn guppies can be trained to discriminate between large quantities. Subjects were required to discriminate between groups of dots with a 0.50 ratio (e.g., 7 vs. 14) in order to obtain a food reward. To dissociate the roles of number and continuous quantities that co-vary with numerical information (such as cumulative surface area, space and density), three different experiments were set up: in Exp. 1 number and continuous quantities were simultaneously available; in Exp. 2 we controlled for continuous quantities and only numerical information was available; in Exp. 3 numerical information was made irrelevant and only continuous quantities were available. Subjects successfully solved the tasks in Exp. 1 and 2, providing the first evidence of large number discrimination in newborn fish. No discrimination was found in Exp. 3, meaning that number acuity is better than spatial acuity. A comparison with the onset of numerical abilities observed in shoal-choice tests suggests that training procedures can promote the development of numerical abilities in guppies.

  7. Large numbers hypothesis. II - Electromagnetic radiation

    Science.gov (United States)

    Adams, P. J.

    1983-01-01

This paper develops the theory of electromagnetic radiation in the units covariant formalism incorporating Dirac's large numbers hypothesis (LNH). A direct field-to-particle technique is used to obtain the photon propagation equation which explicitly involves the photon replication rate. This replication rate is fixed uniquely by requiring that the form of a free-photon distribution function be preserved, as required by the 2.7 K cosmic radiation. One finds that with this particular photon replication rate the units covariant formalism developed in Paper I actually predicts that the ratio of photon number to proton number in the universe varies as t^(1/4), precisely in accord with LNH. The cosmological red-shift law is also derived and it is shown to differ considerably from the standard form νR = const.

  8. Forecasting distribution of numbers of large fires

    Science.gov (United States)

    Eidenshink, Jeffery C.; Preisler, Haiganoush K.; Howard, Stephen; Burgan, Robert E.

    2014-01-01

Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however, they indicate neither the chance that a large fire will occur, nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the Monitoring Trends in Burn Severity project, and satellite and surface observations of fuel conditions in the form of the Fire Potential Index, to estimate two aspects of fire danger: 1) the probability that a 1 acre ignition will result in a 100+ acre fire, and 2) the probabilities of having at least 1, 2, 3, or 4 large fires within a Predictive Services Area in the forthcoming week. These statistical processes are the main thrust of the paper and are used to produce two daily national forecasts that are available from the U.S. Geological Survey, Earth Resources Observation and Science Center and via the Wildland Fire Assessment System. A validation study of our forecasts for the 2013 fire season demonstrated good agreement between observed and forecasted values.

  9. Modified large number theory with constant G

    International Nuclear Information System (INIS)

    Recami, E.

    1983-01-01

The inspiring ''numerology'' uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the ''gravitational world'' (cosmos) with the ''strong world'' (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the ''Large Number Theory,'' cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R-bar/r-bar of the cosmos typical length R-bar to the hadron typical length r-bar is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the ''cyclical big-bang'' hypothesis, then R-bar and r-bar can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavsic

  10. Hierarchies in Quantum Gravity: Large Numbers, Small Numbers, and Axions

    Science.gov (United States)

    Stout, John Eldon

Our knowledge of the physical world is mediated by relatively simple, effective descriptions of complex processes. By their very nature, these effective theories obscure any phenomena outside their finite range of validity, discarding information crucial to understanding the full, quantum gravitational theory. However, we may gain enormous insight into the full theory by understanding how effective theories with extreme characteristics--for example, those which realize large-field inflation or have disparate hierarchies of scales--can be naturally realized in consistent theories of quantum gravity. The work in this dissertation focuses on understanding the quantum gravitational constraints on these "extreme" theories in well-controlled corners of string theory. Axion monodromy provides one mechanism for realizing large-field inflation in quantum gravity. These models spontaneously break an axion's discrete shift symmetry and, assuming that the corrections induced by this breaking remain small throughout the excursion, create a long, quasi-flat direction in field space. This weakly-broken shift symmetry has been used to construct a dynamical solution to the Higgs hierarchy problem, dubbed the "relaxion." We study this relaxion mechanism and show that--without major modifications--it cannot be naturally embedded within string theory. In particular, we find corrections to the relaxion potential--due to the ten-dimensional backreaction of monodromy charge--that conflict with naive notions of technical naturalness and render the mechanism ineffective. The super-Planckian field displacements necessary for large-field inflation may also be realized via the collective motion of many aligned axions. However, it is not clear that string theory provides the structures necessary for this to occur.
We search for these structures by explicitly constructing the leading order potential for C4 axions and computing the maximum possible field displacement in all compactifications of

  11. New feature for an old large number

    International Nuclear Information System (INIS)

    Novello, M.; Oliveira, L.R.A.

    1986-01-01

A new context for the appearance of the Eddington number (10^39), which is due to the examination of elastic scattering of scalar particles (πK → πK) non-minimally coupled to gravity, is presented. (author) [pt]

  12. A large number of stepping motor network construction by PLC

    Science.gov (United States)

    Mei, Lin; Zhang, Kai; Hongqiang, Guo

    2017-11-01

In a flexible automatic line, the equipment is complex and the control modes are varied, so coordinating the information exchange and orderly control of a large number of stepper and servo motors becomes a difficult control problem. Based on an existing flexible production line, this paper makes a comparative study of its network strategy. Following this research, an Ethernet + PROFIBUS communication configuration based on PROFINET IO and PROFIBUS is proposed, which effectively improves the efficiency and stability of data exchange among the equipment.

  13. Improving CASINO performance for models with large number of electrons

    International Nuclear Information System (INIS)

    Anton, L.; Alfe, D.; Hood, R.Q.; Tanqueray, D.

    2009-01-01

Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers which are straightforward to use on parallel computers. Nevertheless, some computations have reached the limit of the memory resources for models with more than 1000 electrons because of the need to store a large amount of electronic orbital data. Besides that, for systems with a large number of electrons, it is interesting to study whether the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) distributed orbital data, implemented with MPI or Unix inter-process communication tools; (2) second-level parallelism for configuration computation

  14. Thermocapillary Bubble Migration: Thermal Boundary Layers for Large Marangoni Numbers

    Science.gov (United States)

    Balasubramaniam, R.; Subramanian, R. S.

    1996-01-01

    The migration of an isolated gas bubble in an immiscible liquid possessing a temperature gradient is analyzed in the absence of gravity. The driving force for the bubble motion is the shear stress at the interface which is a consequence of the temperature dependence of the surface tension. The analysis is performed under conditions for which the Marangoni number is large, i.e. energy is transferred predominantly by convection. Velocity fields in the limit of both small and large Reynolds numbers are used. The thermal problem is treated by standard boundary layer theory. The outer temperature field is obtained in the vicinity of the bubble. A similarity solution is obtained for the inner temperature field. For both small and large Reynolds numbers, the asymptotic values of the scaled migration velocity of the bubble in the limit of large Marangoni numbers are calculated. The results show that the migration velocity has the same scaling for both low and large Reynolds numbers, but with a different coefficient. Higher order thermal boundary layers are analyzed for the large Reynolds number flow field and the higher order corrections to the migration velocity are obtained. Results are also presented for the momentum boundary layer and the thermal wake behind the bubble, for large Reynolds number conditions.

  15. On a strong law of large numbers for monotone measures

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mohammadpour, A.; Mesiar, Radko; Ouyang, Y.

    2013-01-01

    Roč. 83, č. 4 (2013), s. 1213-1218 ISSN 0167-7152 R&D Projects: GA ČR GAP402/11/0378 Institutional support: RVO:67985556 Keywords : capacity * Choquet integral * strong law of large numbers Subject RIV: BA - General Mathematics Impact factor: 0.531, year: 2013 http://library.utia.cas.cz/separaty/2013/E/mesiar-on a strong law of large numbers for monotone measures.pdf

  16. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena

  17. The large numbers hypothesis and a relativistic theory of gravitation

    International Nuclear Information System (INIS)

    Lau, Y.K.; Prokhovnik, S.J.

    1986-01-01

A way to reconcile Dirac's large numbers hypothesis and Einstein's theory of gravitation was recently suggested by Lau (1985). It is characterized by the conjecture of a time-dependent cosmological term and gravitational term in Einstein's field equations. Motivated by this conjecture and the large numbers hypothesis, we formulate here a scalar-tensor theory in terms of an action principle. The cosmological term is required to be spatially dependent as well as time dependent in general. The theory developed is applied to a cosmological model compatible with the large numbers hypothesis. The time-dependent form of the cosmological term and the scalar potential are then deduced. A possible explanation of the smallness of the cosmological term is also given, and the possible significance of the scalar field is speculated upon

  18. Fatal crashes involving large numbers of vehicles and weather.

    Science.gov (United States)

    Wang, Ying; Liang, Liming; Evans, Leonard

    2017-12-01

Adverse weather has been recognized as a significant threat to traffic safety. However, relationships between fatal crashes involving large numbers of vehicles and weather are rarely studied, owing to the low occurrence of crashes involving large numbers of vehicles. By using all 1,513,792 fatal crashes in the Fatality Analysis Reporting System (FARS) data, 1975-2014, we successfully described these relationships. We found: (a) fatal crashes involving more than 35 vehicles are most likely to occur in snow or fog; (b) fatal crashes in rain are three times as likely to involve 10 or more vehicles as fatal crashes in good weather; (c) fatal crashes in snow [or fog] are 24 times [35 times] as likely to involve 10 or more vehicles as fatal crashes in good weather. If the example had used 20 vehicles, the risk ratios would be 6 for rain, 158 for snow, and 171 for fog. To reduce the risk of involvement in fatal crashes with large numbers of vehicles, drivers should slow down more than they currently do under adverse weather conditions. Driver deaths per fatal crash increase slowly with increasing numbers of involved vehicles when it is snowing or raining, but more steeply when clear or foggy. We conclude that in order to reduce risk of involvement in crashes involving large numbers of vehicles, drivers must reduce speed in fog, and in snow or rain, reduce speed by even more than they already do. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
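The risk ratios quoted in the abstract compare the share of fatal crashes that involve many vehicles under a given weather condition against the same share in good weather. The sketch below shows the computation on hypothetical counts; the counts are placeholders for illustration, not FARS figures:

```python
def risk_ratio(multi_w, total_w, multi_good, total_good):
    """Ratio of the share of fatal crashes involving 10+ vehicles
    in a given weather condition versus in good weather."""
    return (multi_w / total_w) / (multi_good / total_good)

# Hypothetical counts (illustration only): 30 of 10,000 snow crashes involved
# 10+ vehicles, versus 100 of 1,000,000 good-weather crashes
print(round(risk_ratio(30, 10_000, 100, 1_000_000), 1))  # → 30.0
```

A ratio of 30 would mean multi-vehicle involvement is 30 times as likely in that weather as in good weather, comparable in magnitude to the snow and fog figures the study reports.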

  19. On Independence for Capacities with Law of Large Numbers

    OpenAIRE

    Huang, Weihuan

    2017-01-01

This paper introduces new notions of Fubini independence and Exponential independence of random variables under capacities to fit Ellsberg's model, and establishes the relationships between Fubini independence, Exponential independence, Maccheroni and Marinacci's independence, and Peng's independence. As an application, we give a weak law of large numbers for capacities under Exponential independence.

  20. Teaching Multiplication of Large Positive Whole Numbers Using ...

    African Journals Online (AJOL)

This study investigated the teaching of multiplication of large positive whole numbers using the grating method and the effect of this method on students' performance in junior secondary schools. The study was conducted in Obio Akpor Local Government Area of Rivers state. It was quasi-experimental. Two research ...

  1. Lovelock inflation and the number of large dimensions

    CERN Document Server

    Ferrer, Francesc

    2007-01-01

    We discuss an inflationary scenario based on Lovelock terms. These higher order curvature terms can lead to inflation when there are more than three spatial dimensions. Inflation will end if the extra dimensions are stabilised, so that at most three dimensions are free to expand. This relates graceful exit to the number of large dimensions.

  2. Fluid Mechanics of Aquatic Locomotion at Large Reynolds Numbers

    OpenAIRE

    Govardhan, RN; Arakeri, JH

    2011-01-01

There exists a huge range of fish species, besides other aquatic organisms like squids and salps, that locomote in water at large Reynolds numbers, a regime of flow where inertial forces dominate viscous forces. In the present review, we discuss the fluid mechanics governing the locomotion of such organisms. Most fishes propel themselves by periodic undulatory motions of the body and tail, and the typical classification of their swimming modes is based on the fraction of their body...

  3. Rotating thermal convection at very large Rayleigh numbers

    Science.gov (United States)

    Weiss, Stephan; van Gils, Dennis; Ahlers, Guenter; Bodenschatz, Eberhard

    2016-11-01

The large-scale thermal convection systems in geo- and astrophysics are usually influenced by Coriolis forces caused by the rotation of their celestial bodies. To better understand the influence of rotation on the convective flow field and the heat transport under these conditions, we study Rayleigh-Bénard convection using pressurized sulfur hexafluoride (SF6) at up to 19 bar in a cylinder of diameter D = 1.12 m and height L = 2.24 m. The gas is heated from below and cooled from above, and the convection cell sits on a rotating table inside a large pressure vessel (the "Uboot of Göttingen"). With this setup Rayleigh numbers of up to Ra = 10^15 can be reached, while Ekman numbers as low as Ek = 10^-8 are possible. The Prandtl number in these experiments is kept constant at Pr = 0.8. We report on heat flux measurements (expressed by the Nusselt number Nu) as well as measurements from more than 150 temperature probes inside the flow. We thank the Deutsche Forschungsgemeinschaft (DFG) for financial support through SFB963: "Astrophysical Flow Instabilities and Turbulence". The work of GA was supported in part by the US National Science Foundation through Grant DMR11-58514.
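For reference, the dimensionless parameters quoted above have the standard definitions below (with gravitational acceleration g, thermal expansion coefficient α, applied temperature difference Δ, cell height L, kinematic viscosity ν, thermal diffusivity κ, and rotation rate Ω; note that conventions for the Ekman number vary, and ν/(2ΩL²) is one common choice):

```latex
\mathrm{Ra} = \frac{g\,\alpha\,\Delta\,L^{3}}{\nu\,\kappa},
\qquad
\mathrm{Ek} = \frac{\nu}{2\,\Omega\,L^{2}},
\qquad
\mathrm{Pr} = \frac{\nu}{\kappa}
```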

  4. Lepton number violation in theories with a large number of standard model copies

    International Nuclear Information System (INIS)

    Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich

    2011-01-01

We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model, since, due to the low quantum gravity scale, black holes may induce TeV-scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_(B-L). Then, due to the existence of a specific compensation mechanism between contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed, with rates far beyond experimental reach.

  5. The large lungs of elite swimmers: an increased alveolar number?

    Science.gov (United States)

    Armour, J; Donnelly, P M; Bye, P T

    1993-02-01

    In order to obtain further insight into the mechanisms relating to the large lung volumes of swimmers, tests of mechanical lung function, including lung distensibility (K) and elastic recoil, pulmonary diffusion capacity, and respiratory mouth pressures, together with anthropometric data (height, weight, body surface area, chest width, depth and surface area), were compared in eight elite male swimmers, eight elite male long distance athletes and eight control subjects. The differences in training profiles of each group were also examined. There was no significant difference in height between the subjects, but the swimmers were younger than both the runners and controls, and both the swimmers and controls were heavier than the runners. Of all the training variables, only the mean total distance in kilometers covered per week was significantly greater in the runners. Whether based on: (a) adolescent predicted values; or (b) adult male predicted values, swimmers had significantly increased total lung capacity ((a) 145 +/- 22%, (mean +/- SD) (b) 128 +/- 15%); vital capacity ((a) 146 +/- 24%, (b) 124 +/- 15%); and inspiratory capacity ((a) 155 +/- 33%, (b) 138 +/- 29%), but this was not found in the other two groups. Swimmers also had the largest chest surface area and chest width. Forced expiratory volume in one second (FEV1) was largest in the swimmers ((b) 122 +/- 17%) and FEV1 as a percentage of forced vital capacity (FEV1/FVC)% was similar for the three groups. Pulmonary diffusing capacity (DLCO) was also highest in the swimmers (117 +/- 18%). All of the other indices of lung function, including pulmonary distensibility (K), elastic recoil and diffusion coefficient (KCO), were similar. These findings suggest that swimmers may have achieved greater lung volumes than either runners or control subjects, not because of greater inspiratory muscle strength, or differences in height, fat free mass, alveolar distensibility, age at start of training or sternal length or

  6. [Dual process in large number estimation under uncertainty].

    Science.gov (United States)

    Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento

    2016-08-01

    According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such deliberative process by System 2 on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.

  7. The large number hypothesis and Einstein's theory of gravitation

    International Nuclear Information System (INIS)

    Yun-Kau Lau

    1985-01-01

    In an attempt to reconcile the large number hypothesis (LNH) with Einstein's theory of gravitation, a tentative generalization of Einstein's field equations with time-dependent cosmological and gravitational constants is proposed. A cosmological model consistent with the LNH is deduced. The coupling formula of the cosmological constant with matter is found, and as a consequence, the time-dependent formulae of the cosmological constant and the mean matter density of the Universe at the present epoch are then found. Einstein's theory of gravitation, whether with a zero or nonzero cosmological constant, becomes a limiting case of the new generalized field equations after the early epoch

  8. Combining large number of weak biomarkers based on AUC.

    Science.gov (United States)

    Yan, Li; Tian, Lili; Liu, Song

    2015-12-20

Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.
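The quantity being maximized here, the empirical AUC of a linear combination of markers, equals the fraction of (diseased, healthy) subject pairs that the combined score orders correctly. A minimal sketch (toy data and weights are invented for illustration, and no optimization over weights is performed):

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC: fraction of (positive, negative) pairs correctly ordered,
    counting ties as half."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def combine(weights, markers):
    """Linear combination of one subject's biomarker values."""
    return sum(w * x for w, x in zip(weights, markers))

# Toy data: two weak markers per subject; weights chosen arbitrarily
pos = [(1.0, 0.2), (0.8, 0.9), (1.2, 0.4)]   # diseased subjects
neg = [(0.5, 0.3), (0.9, 0.1)]               # healthy subjects
w = (0.7, 0.3)
print(auc([combine(w, m) for m in pos], [combine(w, m) for m in neg]))  # → 1.0
```

The combination methods compared in the paper differ in how the weight vector is chosen; the AUC evaluation itself is common to all of them.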

  9. Quasi-isodynamic configuration with large number of periods

    International Nuclear Information System (INIS)

    Shafranov, V.D.; Isaev, M.Yu.; Mikhailov, M.I.; Subbotin, A.A.; Cooper, W.A.; Kalyuzhnyj, V.N.; Kasilov, S.V.; Nemov, V.V.; Kernbichler, W.; Nuehrenberg, C.; Nuehrenberg, J.; Zille, R.

    2005-01-01

It has been previously reported that quasi-isodynamic (qi) stellarators with poloidal direction of the contours of B on a magnetic surface can exhibit very good fast-particle collisionless confinement. In addition, approaching the quasi-isodynamicity condition leads to diminished neoclassical transport and small bootstrap current. Calculations of local-mode stability show that there is a tendency toward an increasing beta limit with increasing number of periods. Consideration of quasi-helically symmetric systems has demonstrated that with increasing aspect ratio (and number of periods) the optimized configuration approaches its straight symmetric counterpart, for which the optimal parameters and highest beta values were found by optimization of the boundary magnetic surface cross-section. The qi systems considered here with zero net toroidal current do not have a symmetric analogue in the limit of large aspect ratio and finite rotational transform. Thus, it is not clear whether some invariant structure of the configuration period exists in the limit of negligible toroidal effect, and what the best possible parameters for it are. In the present paper the results of an optimization of a configuration with N = 12 periods are presented. Properties such as fast-particle confinement, effective ripple, the structural factor of the bootstrap current and MHD stability are considered. It is shown that the MHD stability limit here is larger than in configurations with smaller numbers of periods considered earlier. Nevertheless, the toroidal effect in this configuration is still significant, so that a simple increase of the number of periods and proportional growth of the aspect ratio do not conserve the favourable neoclassical transport and ideal local-mode stability properties. (author)

  10. Automatic trajectory measurement of large numbers of crowded objects

    Science.gov (United States)

    Li, Hui; Liu, Ye; Chen, Yan Qiu

    2013-06-01

Complex motion patterns of natural systems, such as fish schools, bird flocks, and cell groups, have attracted great attention from scientists for years. Trajectory measurement of individuals is vital for quantitative and high-throughput study of their collective behaviors. However, such data are rare mainly due to the challenges of detection and tracking of large numbers of objects with similar visual features and frequent occlusions. We present an automatic and effective framework to measure trajectories of large numbers of crowded oval-shaped objects, such as fish and cells. We first use a novel dual ellipse locator to detect the coarse position of each individual and then propose a variance minimization active contour method to obtain the optimal segmentation results. For tracking, the cost matrix of assignment between consecutive frames is learned via a random forest classifier with many spatial, texture, and shape features. The optimal trajectories are found for the whole image sequence by solving two linear assignment problems. We evaluate the proposed method on many challenging data sets.
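The tracking stage described here reduces to linear assignment between detections in consecutive frames. As a minimal sketch (not the authors' implementation; the cost values below are hypothetical), a brute-force solver illustrates what the optimal matching computes:

```python
from itertools import permutations

def solve_assignment(cost):
    """Linear assignment by brute force: pick the row-to-column matching
    that minimizes total cost (feasible only for small n)."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(enumerate(best))

# Hypothetical dissimilarity costs between 3 detections in frames t and t+1
cost = [[0.2, 0.9, 0.8],
        [0.7, 0.1, 0.9],
        [0.8, 0.9, 0.3]]
matching = solve_assignment(cost)
print(matching)  # → [(0, 0), (1, 1), (2, 2)]
```

In practice a polynomial-time solver (e.g. the Hungarian algorithm) replaces the factorial brute force.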

  11. Chaotic scattering: the supersymmetry method for large number of channels

    International Nuclear Information System (INIS)

    Lehmann, N.; Saher, D.; Sokolov, V.V.; Sommers, H.J.

    1995-01-01

We investigate a model of chaotic resonance scattering based on the random matrix approach. The hermitian part of the effective hamiltonian of resonance states is taken from the GOE whereas the amplitudes of coupling to decay channels are considered either random or fixed. A new version of the supersymmetry method is worked out to determine analytically the distribution of poles of the S-matrix in the complex energy plane as well as the mean value and two-point correlation function of its elements when the number of channels scales with the number of resonance states. Analytical formulae are compared with numerical simulations. All results obtained coincide in both models provided that the ratio m of the numbers of channels and resonances is small enough and remain qualitatively similar for larger values of m. The relation between the pole distribution and the fluctuations in scattering is discussed. It is shown in particular that the clouds of poles of the S-matrix in the complex energy plane are separated from the real axis by a finite gap Γ_g which determines the correlation length in the scattering fluctuations and leads to the exponential asymptotics of the decay law of a complicated intermediate state. ((orig.))

  12. Chaotic scattering: the supersymmetry method for large number of channels

    Energy Technology Data Exchange (ETDEWEB)

    Lehmann, N. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Saher, D. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Sokolov, V.V. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Sommers, H.J. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik)

    1995-01-23

We investigate a model of chaotic resonance scattering based on the random matrix approach. The hermitian part of the effective hamiltonian of resonance states is taken from the GOE whereas the amplitudes of coupling to decay channels are considered either random or fixed. A new version of the supersymmetry method is worked out to determine analytically the distribution of poles of the S-matrix in the complex energy plane as well as the mean value and two-point correlation function of its elements when the number of channels scales with the number of resonance states. Analytical formulae are compared with numerical simulations. All results obtained coincide in both models provided that the ratio m of the numbers of channels and resonances is small enough and remain qualitatively similar for larger values of m. The relation between the pole distribution and the fluctuations in scattering is discussed. It is shown in particular that the clouds of poles of the S-matrix in the complex energy plane are separated from the real axis by a finite gap Γ_g which determines the correlation length in the scattering fluctuations and leads to the exponential asymptotics of the decay law of a complicated intermediate state. ((orig.))

  13. The large numbers hypothesis and the Einstein theory of gravitation

    International Nuclear Information System (INIS)

    Dirac, P.A.M.

    1979-01-01

A study of the relations between large dimensionless numbers leads to the belief that G, expressed in atomic units, varies with the epoch while the Einstein theory requires G to be constant. These two requirements can be reconciled by supposing that the Einstein theory applies with a metric that differs from the atomic metric. The theory can be developed with conservation of mass by supposing that the continual increase in the mass of the observable universe arises from a continual slowing down of the velocity of recession of the galaxies. This leads to a model of the Universe that was first proposed by Einstein and de Sitter (the E.S. model). The observations of the microwave radiation fit in with this model. The static Schwarzschild metric has to be modified to fit in with the E.S. model for large r. The modification is worked out, and also the motion of planets with the new metric. It is found that there is a difference between ephemeris time and atomic time, and also that there should be an inward spiralling of the planets, referred to atomic units, superposed on the motion given by ordinary gravitational theory. These are effects that can be checked by observation, but there is no conclusive evidence up to the present. (author)

  14. A Characterization of Hypergraphs with Large Domination Number

    Directory of Open Access Journals (Sweden)

    Henning Michael A.

    2016-05-01

Let H = (V, E) be a hypergraph with vertex set V and edge set E. A dominating set in H is a subset of vertices D ⊆ V such that for every vertex v ∈ V \ D there exists an edge e ∈ E for which v ∈ e and e ∩ D ≠ ∅. The domination number γ(H) is the minimum cardinality of a dominating set in H. It is known [Cs. Bujtás, M.A. Henning and Zs. Tuza, Transversals and domination in uniform hypergraphs, European J. Combin. 33 (2012) 62-71] that for k ≥ 5, if H is a hypergraph of order n and size m with all edges of size at least k and with no isolated vertex, then γ(H) ≤ (n + ⌊(k − 3)/2⌋m)/⌊3(k − 1)/2⌋. In this paper, we apply a recent result of the authors on hypergraphs with large transversal number [M.A. Henning and C. Löwenstein, A characterization of hypergraphs that achieve equality in the Chvátal-McDiarmid Theorem, Discrete Math. 323 (2014) 69-75] to characterize the hypergraphs achieving equality in this bound.
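The definition above can be checked exhaustively on a small instance. A sketch (the hypergraph is a hypothetical toy example, not from the paper):

```python
from itertools import combinations

def domination_number(vertices, edges):
    """gamma(H): smallest |D| such that every vertex v outside D lies in
    some edge e with e ∩ D nonempty (brute force; small instances only)."""
    vertices = list(vertices)
    edges = [set(e) for e in edges]
    for k in range(1, len(vertices) + 1):
        for D in combinations(vertices, k):
            Dset = set(D)
            if all(any(v in e and e & Dset for e in edges)
                   for v in vertices if v not in Dset):
                return k

# Hypothetical 3-uniform hypergraph on 6 vertices
H = [{1, 2, 3}, {3, 4, 5}, {5, 6, 1}]
g = domination_number(range(1, 7), H)
print(g)  # → 2  (no single vertex dominates all others here)
```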

  15. Particle creation and Dirac's large number hypothesis; and Reply

    International Nuclear Information System (INIS)

    Canuto, V.; Adams, P.J.; Hsieh, S.H.; Tsiang, E.; Steigman, G.

    1976-01-01

The claim made by Steigman (Nature; 261:479 (1976)), that the creation of matter as postulated by Dirac (Proc. R. Soc.; A338:439 (1974)) is unnecessary, is here shown to be incorrect. It is stated that Steigman's claim that Dirac's Large Number Hypothesis (LNH) does not require particle creation is wrong because he has assumed that which he was seeking to prove, that is, that rho does not contain matter creation. Steigman's claim that Dirac's LNH leads to nonsensical results in the very early Universe is superficially correct, but this only supports Dirac's contention that the LNH may not be valid in the very early Universe. In a reply Steigman points out that in Dirac's original cosmology R ∝ t^(1/3), and using this model the results and conclusions of the present author's paper do apply, but using a variation chosen by Canuto et al (T ∝ t) Dirac's LNH cannot apply. Additionally it is observed that a cosmological theory which only predicts the present epoch is of questionable value. (U.K.)

  16. A modified large number theory with constant G

    Science.gov (United States)

    Recami, Erasmo

    1983-03-01

The inspiring “numerology” uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the “gravitational world” (cosmos) with the “strong world” (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the “Large Number Theory,” cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R̄/r̄ of the cosmos typical length R̄ to the hadron typical length r̄ is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the “cyclical big-bang” hypothesis, then R̄ and r̄ can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G results to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavšič.

  17. A NICE approach to managing large numbers of desktop PC's

    International Nuclear Information System (INIS)

    Foster, David

    1996-01-01

The problems of managing desktop systems are far from resolved as we deploy increasing numbers of PCs, Macintoshes and UN*X workstations. This paper concentrates on the solution adopted at CERN for the management of the rapidly increasing number of desktop PCs in use in all parts of the laboratory. (author)

  18. The Ramsey numbers of large cycles versus small wheels

    NARCIS (Netherlands)

    Surahmat,; Baskoro, E.T.; Broersma, H.J.

    2004-01-01

    For two given graphs G and H, the Ramsey number R(G;H) is the smallest positive integer N such that for every graph F of order N the following holds: either F contains G as a subgraph or the complement of F contains H as a subgraph. In this paper, we determine the Ramsey number R(Cn;Wm) for m = 4
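The definition above can be verified exhaustively for the smallest classical case, R(K3, K3) = 6 (a textbook fact, not a result of this paper): every 2-coloring of the edges of K6 contains a monochromatic triangle, while K5 admits a coloring with none.

```python
from itertools import combinations, product

def has_mono_triangle(n, color):
    """color maps each edge (i, j), i < j, of K_n to 0 or 1."""
    return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
               for a, b, c in combinations(range(n), 3))

def forces_mono_triangle(n):
    """True iff every 2-coloring of the edges of K_n has a
    monochromatic triangle (exhaustive check, small n only)."""
    edges = list(combinations(range(n), 2))
    return all(has_mono_triangle(n, dict(zip(edges, bits)))
               for bits in product((0, 1), repeat=len(edges)))

print(forces_mono_triangle(5), forces_mono_triangle(6))  # → False True
```

The `False True` pair is exactly the statement R(3, 3) = 6: order 5 is not enough, order 6 is.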

  19. Turbulent flows at very large Reynolds numbers: new lessons learned

    International Nuclear Information System (INIS)

    Barenblatt, G I; Prostokishin, V M; Chorin, A J

    2014-01-01

    The universal (Reynolds-number-independent) von Kármán–Prandtl logarithmic law for the velocity distribution in the basic intermediate region of a turbulent shear flow is generally considered to be one of the fundamental laws of engineering science and is taught universally in fluid mechanics and hydraulics courses. We show here that this law is based on an assumption that cannot be considered to be correct and which does not correspond to experiment. Nor is Landau's derivation of this law quite correct. In this paper, an alternative scaling law explicitly incorporating the influence of the Reynolds number is discussed, as is the corresponding drag law. The study uses the concept of intermediate asymptotics and that of incomplete similarity in the similarity parameter. Yakov Borisovich Zeldovich played an outstanding role in the development of these ideas. This work is a tribute to his glowing memory. (100th anniversary of the birth of ya b zeldovich)

  20. Calculation of large Reynolds number two-dimensional flow using discrete vortices with random walk

    International Nuclear Information System (INIS)

    Milinazzo, F.; Saffman, P.G.

    1977-01-01

The numerical calculation of two-dimensional rotational flow at large Reynolds number is considered. The method of replacing a continuous distribution of vorticity by a finite number, N, of discrete vortices is examined, where the vortices move under their mutually induced velocities plus a random component to simulate effects of viscosity. The accuracy of the method is studied by comparison with the exact solution for the decay of a circular vortex. It is found, and analytical arguments are produced in support, that the quantitative error is significant unless N is large compared with a characteristic Reynolds number. The mutually induced velocities are calculated by both direct summation and by the 'cloud in cell' technique. The latter method is found to produce comparable error and to be much faster.
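The random-walk viscous step can be sketched in isolation (a minimal illustration, not the paper's solver; the mutually induced velocities are omitted). Each vortex takes an independent Gaussian step of variance 2νΔt per component per time step, so the mean squared radius of an initially concentrated vortex grows like 4νt in two dimensions:

```python
import math
import random

def random_walk_spread(n=2000, nu=1e-3, dt=0.01, steps=100, seed=1):
    """Viscous diffusion by random walk: each of n discrete vortices takes
    an independent Gaussian step of std sqrt(2*nu*dt) per component per
    time step. Induced velocities are omitted in this sketch."""
    random.seed(seed)
    sigma = math.sqrt(2 * nu * dt)
    pts = [(0.0, 0.0)] * n
    for _ in range(steps):
        pts = [(x + random.gauss(0.0, sigma), y + random.gauss(0.0, sigma))
               for x, y in pts]
    msr = sum(x * x + y * y for x, y in pts) / n  # mean squared radius
    return msr, 4 * nu * steps * dt               # exact 2-D diffusion value

msr, exact = random_walk_spread()
```

With N = 2000 vortices the sampled spread matches 4νt to within a few percent, consistent with the statistical error being controlled by the number of vortices.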

  1. Gentile statistics with a large maximum occupation number

    International Nuclear Information System (INIS)

    Dai Wusheng; Xie Mi

    2004-01-01

In Gentile statistics the maximum occupation number can take on unrestricted integers, 1 < n < ∞; for finite n > 1 the Bose-Einstein case is not recovered from Gentile statistics as n goes to N. Attention is also concentrated on the contribution of the ground state, which was ignored in related literature. The thermodynamic behavior of a ν-dimensional Gentile ideal gas of particles with dispersion E = p^s/2m, where ν and s are arbitrary, is analyzed in detail. Moreover, we provide an alternative derivation of the partition function for Gentile statistics.
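The single-mode statistics are easy to make concrete: giving occupation k the Boltzmann weight x^k for k = 0..n, the mean occupation reproduces Fermi-Dirac at n = 1 and approaches Bose-Einstein as n grows (a sketch; the value of x is illustrative):

```python
import math

def gentile_occupation(x, n):
    """Mean occupation of a single mode with maximum occupation n,
    where x = exp(-beta*(eps - mu)) and state k has weight x**k."""
    Z = sum(x ** k for k in range(n + 1))          # single-mode partition sum
    return sum(k * x ** k for k in range(n + 1)) / Z

x = math.exp(-1.0)            # illustrative value: beta*(eps - mu) = 1
fermi = 1 / (math.e + 1)      # Fermi-Dirac occupation at this x
bose = 1 / (math.e - 1)       # Bose-Einstein occupation at this x

nd1 = gentile_occupation(x, 1)         # n = 1: exactly Fermi-Dirac
nd_large = gentile_occupation(x, 200)  # large n: numerically Bose-Einstein
```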

  2. The numbers game in wildlife conservation: changeability and framing of large mammal numbers in Zimbabwe

    NARCIS (Netherlands)

    Gandiwa, E.

    2013-01-01

    Wildlife conservation in terrestrial ecosystems requires an understanding of processes influencing population sizes. Top-down and bottom-up processes are important in large herbivore population dynamics, with strength of these processes varying spatially and temporally. However, up until

  3. Boll weevil: experimental sterilization of large numbers by fractionated irradiation

    International Nuclear Information System (INIS)

    Haynes, J.W.; Wright, J.E.; Davich, T.B.; Roberson, J.; Griffin, J.G.; Darden, E.

    1978-01-01

Boll weevils, Anthonomus grandis grandis Boheman, 9 days after egg implantation in the larval diet were transported from the Boll Weevil Research Laboratory, Mississippi State, MS, to the Comparative Animal Research Laboratory, Oak Ridge, TN, and irradiated with 6.9 krad (test 1) or 7.2 krad (test 2) of 60Co gamma rays delivered in 25 equal doses over 100 h. In test 1, from 600 individual pairs of T (treated) males x N (normal) females, only 114 eggs hatched from a sample of 950 eggs, and 47 adults emerged from a sample of 1042 eggs. Also, from 600 pairs of T females x N males, 6 eggs hatched of a sample of 6 eggs and 12 adults emerged from a sample of 20 eggs. In test 2, from 700 individual pairs of T males x N females, 54 eggs hatched from a sample of 1510, and 10 adults emerged from a sample of 1703 eggs. Also, in T females x N males matings, 1 egg hatched of a sample of 3, and no adults emerged from a sample of 4. Transportation and handling in the 2nd test reduced adult emergence an avg of 49%. Thus the 2 replicates in test 2 resulted in 3.4 × 10^5 and 4.3 × 10^5 irradiated weevils emerging/day for 7 days. Bacterial contamination of weevils was low.

  4. Comparing spatial regression to random forests for large ...

    Science.gov (United States)

    Environmental data may be “large” due to number of records, number of covariates, or both. Random forests has a reputation for good predictive performance when using many covariates, whereas spatial regression, when using reduced rank methods, has a reputation for good predictive performance when using many records. In this study, we compare these two techniques using a data set containing the macroinvertebrate multimetric index (MMI) at 1859 stream sites with over 200 landscape covariates. Our primary goal is predicting MMI at over 1.1 million perennial stream reaches across the USA. For spatial regression modeling, we develop two new methods to accommodate large data: (1) a procedure that estimates optimal Box-Cox transformations to linearize covariate relationships; and (2) a computationally efficient covariate selection routine that takes into account spatial autocorrelation. We show that our new methods lead to cross-validated performance similar to random forests, but that there is an advantage for spatial regression when quantifying the uncertainty of the predictions. Simulations are used to clarify advantages for each method. This research investigates different approaches for modeling and mapping national stream condition. We use MMI data from the EPA's National Rivers and Streams Assessment and predictors from StreamCat (Hill et al., 2015). Previous studies have focused on modeling the MMI condition classes (i.e., good, fair, and po
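Estimating an optimal Box-Cox transformation, as mentioned above, is commonly done by maximizing the profile log-likelihood over λ. A stdlib-only sketch under Gaussian-error assumptions (a generic textbook procedure, not the authors' new method):

```python
import math
import random

def boxcox(y, lam):
    """Box-Cox transform of positive data; lam = 0 gives the log."""
    if abs(lam) < 1e-12:
        return [math.log(v) for v in y]
    return [(v ** lam - 1) / lam for v in y]

def profile_loglik(y, lam):
    """Profile log-likelihood of the Box-Cox model with Gaussian errors."""
    n = len(y)
    z = boxcox(y, lam)
    mu = sum(z) / n
    var = sum((v - mu) ** 2 for v in z) / n
    return -n / 2 * math.log(var) + (lam - 1) * sum(math.log(v) for v in y)

def best_lambda(y, grid=None):
    """Grid-search the lambda maximizing the profile log-likelihood."""
    grid = grid if grid is not None else [i / 10 for i in range(-20, 21)]
    return max(grid, key=lambda lam: profile_loglik(y, lam))

# Lognormal data: the true linearizing transform is the log (lambda = 0)
random.seed(42)
y = [math.exp(random.gauss(0.0, 1.0)) for _ in range(300)]
lam_hat = best_lambda(y)
```

For lognormal input the estimated λ lands near zero, recovering the log transform.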

  5. Comparative Analysis of Different Protocols to Manage Large Scale Networks

    OpenAIRE

    Anil Rao Pimplapure; Dr Jayant Dubey; Prashant Sen

    2013-01-01

In recent years, the number, complexity and size of large-scale networks have increased. The best example of a large-scale network is the Internet, and more recently data centers in cloud environments. In this setting, the combination of several management tasks such as traffic monitoring, security and performance optimization is a big task for the network administrator. This research report studies different protocols, i.e. conventional protocols like the Simple Network Management Protocol and newer Gossip bas...

  6. Wall modeled large eddy simulations of complex high Reynolds number flows with synthetic inlet turbulence

    International Nuclear Information System (INIS)

    Patil, Sunil; Tafti, Danesh

    2012-01-01

    Highlights: ► Large eddy simulation. ► Wall layer modeling. ► Synthetic inlet turbulence. ► Swirl flows. - Abstract: Large eddy simulations of complex high Reynolds number flows are carried out with the near wall region being modeled with a zonal two layer model. A novel formulation for solving the turbulent boundary layer equation for the effective tangential velocity in a generalized co-ordinate system is presented and applied in the near wall zonal treatment. This formulation reduces the computational time in the inner layer significantly compared to the conventional two layer formulations present in the literature and is most suitable for complex geometries involving body fitted structured and unstructured meshes. The cost effectiveness and accuracy of the proposed wall model, used with the synthetic eddy method (SEM) to generate inlet turbulence, is investigated in turbulent channel flow, flow over a backward facing step, and confined swirling flows at moderately high Reynolds numbers. Predictions are compared with available DNS, experimental LDV data, as well as wall resolved LES. In all cases, there is at least an order of magnitude reduction in computational cost with no significant loss in prediction accuracy.

  7. Marketing Library and Information Services: Comparing Experiences at Large Institutions.

    Science.gov (United States)

    Noel, Robert; Waugh, Timothy

    This paper explores some of the similarities and differences between publicizing information services within the academic and corporate environments, comparing the marketing experiences of Abbot Laboratories (Illinois) and Indiana University. It shows some innovative online marketing tools, including an animated gif model of a large, integrated…

  8. New approaches to phylogenetic tree search and their application to large numbers of protein alignments.

    Science.gov (United States)

    Whelan, Simon

    2007-10-01

    Phylogenetic tree estimation plays a critical role in a wide variety of molecular studies, including molecular systematics, phylogenetics, and comparative genomics. Finding the optimal tree relating a set of sequences using score-based (optimality criterion) methods, such as maximum likelihood and maximum parsimony, may require all possible trees to be considered, which is not feasible even for modest numbers of sequences. In practice, trees are estimated using heuristics that represent a trade-off between topological accuracy and speed. I present a series of novel algorithms suitable for score-based phylogenetic tree reconstruction that demonstrably improve the accuracy of tree estimates while maintaining high computational speeds. The heuristics function by allowing the efficient exploration of large numbers of trees through novel hill-climbing and resampling strategies. These heuristics, and other computational approximations, are implemented for maximum likelihood estimation of trees in the program Leaphy, and its performance is compared to other popular phylogenetic programs. Trees are estimated from 4059 different protein alignments using a selection of phylogenetic programs and the likelihoods of the tree estimates are compared. Trees estimated using Leaphy are found to have equal to or better likelihoods than trees estimated using other phylogenetic programs in 4004 (98.6%) families and provide a unique best tree that no other program found in 1102 (27.1%) families. The improvement is particularly marked for larger families (80 to 100 sequences), where Leaphy finds a unique best tree in 81.7% of families.
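The infeasibility of considering all possible trees is easy to quantify: the number of unrooted binary tree topologies on n taxa is (2n−5)!!, which already exceeds two million at n = 10 (a standard combinatorial fact, not specific to this paper):

```python
def num_unrooted_trees(n):
    """Number of unrooted binary tree topologies on n taxa: (2n-5)!!"""
    count = 1
    for k in range(3, n + 1):   # adding taxon k multiplies choices by 2k-5
        count *= 2 * k - 5
    return count

for n in (4, 6, 10, 20):
    print(n, num_unrooted_trees(n))
# 10 taxa already give 2,027,025 topologies; at 20 the count passes 10^20
```

This growth is why heuristics like those in Leaphy trade exhaustive optimality for speed.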

  9. Comparing Data Sets: Implicit Summaries of the Statistical Properties of Number Sets

    Science.gov (United States)

    Morris, Bradley J.; Masnick, Amy M.

    2015-01-01

    Comparing datasets, that is, sets of numbers in context, is a critical skill in higher order cognition. Although much is known about how people compare single numbers, little is known about how number sets are represented and compared. We investigated how subjects compared datasets that varied in their statistical properties, including ratio of…

  10. Characterization of General TCP Traffic under a Large Number of Flows Regime

    National Research Council Canada - National Science Library

    Tinnakornsrisuphap, Peerapol; La, Richard J; Makowski, Armand M

    2002-01-01

... Accurate traffic modeling of a large number of short-lived TCP flows is extremely difficult due to the interaction between session, transport, and network layers, and the explosion of the size...

  11. Radioimmunoassay of h-TSH - methodological suggestions for dealing with medium to large numbers of samples

    International Nuclear Information System (INIS)

    Mahlstedt, J.

    1977-01-01

The article deals with practical aspects of establishing a TSH-RIA for patients, with particular regard to predetermined quality criteria. Methodological suggestions are made for medium to large numbers of samples with the aim of reducing monotonous precision working steps by means of simple aids. The quality criteria required are well met, while the test procedure is well adapted to the rhythm of work and may be carried out without loss of precision even with large numbers of samples. (orig.) [de]

  12. Similarities between 2D and 3D convection for large Prandtl number

    Indian Academy of Sciences (India)

    2016-06-18

RBC), we perform a comparative study of the spectra and fluxes of energy and entropy, and the scaling of large-scale quantities for large and infinite Prandtl numbers in two (2D) and three (3D) dimensions. We observe close ...

  13. Very Large Data Volumes Analysis of Collaborative Systems with Finite Number of States

    Science.gov (United States)

    Ivan, Ion; Ciurea, Cristian; Pavel, Sorin

    2010-01-01

    The collaborative system with finite number of states is defined. A very large database is structured. Operations on large databases are identified. Repetitive procedures for collaborative systems operations are derived. The efficiency of such procedures is analyzed. (Contains 6 tables, 5 footnotes and 3 figures.)

  14. Evidence for Knowledge of the Syntax of Large Numbers in Preschoolers

    Science.gov (United States)

    Barrouillet, Pierre; Thevenot, Catherine; Fayol, Michel

    2010-01-01

    The aim of this study was to provide evidence for knowledge of the syntax governing the verbal form of large numbers in preschoolers long before they are able to count up to these numbers. We reasoned that if such knowledge exists, it should facilitate the maintenance in short-term memory of lists of lexical primitives that constitute a number…

  15. Numerical and analytical approaches to an advection-diffusion problem at small Reynolds number and large Péclet number

    Science.gov (United States)

    Fuller, Nathaniel J.; Licata, Nicholas A.

    2018-05-01

    Obtaining a detailed understanding of the physical interactions between a cell and its environment often requires information about the flow of fluid surrounding the cell. Cells must be able to effectively absorb and discard material in order to survive. Strategies for nutrient acquisition and toxin disposal, which have been evolutionarily selected for their efficacy, should reflect knowledge of the physics underlying this mass transport problem. Motivated by these considerations, in this paper we discuss the results from an undergraduate research project on the advection-diffusion equation at small Reynolds number and large Péclet number. In particular, we consider the problem of mass transport for a Stokesian spherical swimmer. We approach the problem numerically and analytically through a rescaling of the concentration boundary layer. A biophysically motivated first-passage problem for the absorption of material by the swimming cell demonstrates quantitative agreement between the numerical and analytical approaches. We conclude by discussing the connections between our results and the design of smart toxin disposal systems.
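The regime in question can be made concrete with the two dimensionless numbers. For a small swimmer, Re = UL/ν is small while Pe = UL/D is large; all numerical values below are illustrative assumptions, not taken from the paper:

```python
def reynolds(U, L, nu):
    """Re = U*L/nu: ratio of inertial to viscous forces."""
    return U * L / nu

def peclet(U, L, D):
    """Pe = U*L/D: ratio of advective to diffusive mass transport."""
    return U * L / D

# Illustrative (assumed) values for a micron-scale swimmer in water
U = 100e-6   # swimming speed, m/s
L = 10e-6    # cell radius, m
nu = 1e-6    # kinematic viscosity of water, m^2/s
D = 1e-12    # diffusivity of a large solute molecule, m^2/s

Re = reynolds(U, L, nu)   # ~1e-3: Stokes (creeping-flow) regime
Pe = peclet(U, L, D)      # ~1e3: thin concentration boundary layer
```

The separation Re ≪ 1 ≪ Pe is exactly the limit the boundary-layer rescaling in the paper exploits.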

  16. A comparison of three approaches to compute the effective Reynolds number of the implicit large-eddy simulations

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Ye [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Thornber, Ben [The Univ. of Sydney, Sydney, NSW (Australia)

    2016-04-12

Here, the implicit large-eddy simulation (ILES) has been utilized as an effective approach for calculating many complex flows at high Reynolds numbers. Richtmyer–Meshkov instability (RMI) induced flow can be viewed as a homogeneous decaying turbulence (HDT) after the passage of the shock. In this article, a critical evaluation of three methods for estimating the effective Reynolds number and the effective kinematic viscosity is undertaken utilizing high-resolution ILES data. Effective Reynolds numbers based on the vorticity and dissipation rate, or the integral and inner-viscous length scales, are found to be the most self-consistent when compared to the expected phenomenology and wind tunnel experiments.

  17. Secret Sharing Schemes with a large number of players from Toric Varieties

    DEFF Research Database (Denmark)

    Hansen, Johan P.

A general theory for constructing linear secret sharing schemes over a finite field $\Fq$ from toric varieties is introduced. The number of players can be as large as $(q-1)^r-1$ for $r\geq 1$. We present general methods for obtaining the reconstruction and privacy thresholds as well as conditions ... for multiplication on the associated secret sharing schemes. In particular we apply the method on certain toric surfaces. The main results are ideal linear secret sharing schemes where the number of players can be as large as $(q-1)^2-1$. We determine bounds for the reconstruction and privacy thresholds...
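The toric construction is beyond a short snippet, but the underlying mechanism of linear secret sharing (shares are evaluations of a codeword; enough shares interpolate the secret) can be illustrated with the classical Shamir threshold scheme over a prime field. The parameters below are toy values, and this is not the toric scheme itself:

```python
import random

P = 257  # toy prime field F_257 (hypothetical parameter)

def share(secret, k, n, seed=0):
    """Shamir (k, n) threshold sharing over F_P: a random polynomial of
    degree k-1 with constant term = secret, evaluated at x = 1..n."""
    random.seed(seed)
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123, k=3, n=6)
```

Any 3 of the 6 shares recover the secret; fewer reveal nothing about it. The toric schemes generalize this one-variable picture to evaluations on higher-dimensional varieties, allowing up to $(q-1)^r-1$ players.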

  18. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number.

    Science.gov (United States)

    Klewicki, J C; Chini, G P; Gibson, J F

    2017-03-13

Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier-Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  19. Obstructions to the realization of distance graphs with large chromatic numbers on spheres of small radii

    Energy Technology Data Exchange (ETDEWEB)

    Kupavskii, A B; Raigorodskii, A M [M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Moscow (Russian Federation)

    2013-10-31

    We investigate in detail some properties of distance graphs constructed on the integer lattice. Such graphs find wide applications in problems of combinatorial geometry, in particular, such graphs were employed to answer Borsuk's question in the negative and to obtain exponential estimates for the chromatic number of the space. This work is devoted to the study of the number of cliques and the chromatic number of such graphs under certain conditions. Constructions of sequences of distance graphs are given, in which the graphs have unit length edges and contain a large number of triangles that lie on a sphere of radius 1/√3 (which is the minimum possible). At the same time, the chromatic numbers of the graphs depend exponentially on their dimension. The results of this work strengthen and generalize some of the results obtained in a series of papers devoted to related issues. Bibliography: 29 titles.

  20. ON AN EXPONENTIAL INEQUALITY AND A STRONG LAW OF LARGE NUMBERS FOR MONOTONE MEASURES

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mesiar, Radko

    2014-01-01

    Roč. 50, č. 5 (2014), s. 804-813 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Choquet expectation * a strong law of large numbers * exponential inequality * monotone probability Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/E/mesiar-0438052.pdf

  1. Strong Laws of Large Numbers for Arrays of Rowwise NA and LNQD Random Variables

    Directory of Open Access Journals (Sweden)

    Jiangfeng Wang

    2011-01-01

Some strong laws of large numbers and strong convergence properties for arrays of rowwise negatively associated and linearly negative quadrant dependent random variables are obtained. The results obtained not only generalize the result of Hu and Taylor to negatively associated and linearly negative quadrant dependent random variables, but also improve it.
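The statement being generalized can be seen empirically in the simplest i.i.d. setting (an illustration of the classical strong law, not of the dependent-array results above):

```python
import random

def mean_deviation(n, seed=0):
    """|sample mean - true mean| for n i.i.d. Uniform(0, 1) draws."""
    random.seed(seed)
    total = sum(random.random() for _ in range(n))
    return abs(total / n - 0.5)

# Deviations shrink as n grows: the sample mean converges to 1/2
devs = [mean_deviation(n) for n in (10**2, 10**4, 10**6)]
print(devs)
```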

  2. The lore of large numbers: some historical background to the anthropic principle

    International Nuclear Information System (INIS)

    Barrow, J.D.

    1981-01-01

    A description is given of how the study of numerological coincidences in physics and cosmology led first to the Large Numbers Hypothesis of Dirac and then to the suggestion of the Anthropic Principle in a variety of forms. The early history of 'coincidences' is discussed together with the work of Weyl, Eddington and Dirac. (author)
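The kind of coincidence that motivated Dirac can be recomputed directly: the dimensionless ratio of the electric to the gravitational force between a proton and an electron is of order 10^39, the same order as the age of the Universe expressed in atomic time units:

```python
import math

# CODATA-style constants in SI units (rounded)
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
m_p = 1.67262192e-27     # proton mass, kg
m_e = 9.1093837015e-31   # electron mass, kg

# Electric-to-gravitational force ratio for a proton-electron pair:
# the separation distance cancels, leaving a pure number ~ 10^39
ratio = e**2 / (4 * math.pi * eps0 * G * m_p * m_e)
print(f"{ratio:.2e}")  # ≈ 2.3e+39
```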

  3. The three-large-primes variant of the number field sieve

    NARCIS (Netherlands)

    S.H. Cavallar

    2002-01-01

The Number Field Sieve (NFS) is the asymptotically fastest known factoring algorithm for large integers. This method was proposed by John Pollard in 1988. Since then several variants have been implemented with the objective of improving the siever, which is the most time-consuming part of the algorithm.

  4. SECRET SHARING SCHEMES WITH STRONG MULTIPLICATION AND A LARGE NUMBER OF PLAYERS FROM TORIC VARIETIES

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    2017-01-01

This article considers Massey's construction of linear secret sharing schemes from toric varieties over a finite field $\\Fq$ with $q$ elements. The number of players can be as large as $(q-1)^r-1$ for $r\\geq 1$. The schemes have strong multiplication; such schemes can be utilized in ...

  5. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number

    Science.gov (United States)

    Klewicki, J. C.; Chini, G. P.; Gibson, J. F.

    2017-01-01

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier–Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167585

  6. Optimal number of coarse-grained sites in different components of large biomolecular complexes.

    Science.gov (United States)

    Sinitskiy, Anton V; Saunders, Marissa G; Voth, Gregory A

    2012-07-26

    The computational study of large biomolecular complexes (molecular machines, cytoskeletal filaments, etc.) is a formidable challenge facing computational biophysics and biology. To achieve biologically relevant length and time scales, coarse-grained (CG) models of such complexes usually must be built and employed. One of the important early stages in this approach is to determine an optimal number of CG sites in different constituents of a complex. This work presents a systematic approach to this problem. First, a universal scaling law is derived and numerically corroborated for the intensity of the intrasite (intradomain) thermal fluctuations as a function of the number of CG sites. Second, this result is used for derivation of the criterion for the optimal number of CG sites in different parts of a large multibiomolecule complex. In the zeroth-order approximation, this approach validates the empirical rule of taking one CG site per fixed number of atoms or residues in each biomolecule, previously widely used for smaller systems (e.g., individual biomolecules). The first-order corrections to this rule are derived and numerically checked by the case studies of the Escherichia coli ribosome and Arp2/3 actin filament junction. In different ribosomal proteins, the optimal number of amino acids per CG site is shown to differ by a factor of 3.5, and an even wider spread may exist in other large biomolecular complexes. Therefore, the method proposed in this paper is valuable for the optimal construction of CG models of such complexes.

  7. Break down of the law of large numbers in Josephson junction series arrays

    International Nuclear Information System (INIS)

    Dominguez, D.; Cerdeira, H.A.

    1995-01-01

    We study underdamped Josephson junction series arrays that are globally coupled through a resistive shunting load and driven by an rf bias current. We find that they can be an experimental realization of many phenomena currently studied in globally coupled logistic maps. We find coherent, ordered, partially ordered and turbulent phases in the IV characteristics of the array. The ordered phase corresponds to giant Shapiro steps. In the turbulent phase there is a saturation of the broad band noise for a large number of junctions. This corresponds to a break down of the law of large numbers as seen in globally coupled maps. Coexisting with this, we find an emergence of novel pseudo-steps in the IV characteristics. This effect can be experimentally distinguished from the true Shapiro steps, which do not have broad band noise emission. (author). 21 refs, 5 figs

  8. Breakdown of the law of large numbers in Josephson junction series arrays

    International Nuclear Information System (INIS)

    Dominguez, D.; Cerdeira, H.A.

    1994-01-01

    We study underdamped Josephson junction series arrays that are globally coupled through a resistive shunting load and driven by an rf bias current. We find that they can be an experimental realization of many phenomena currently studied in globally coupled logistic maps. We find coherent, ordered, partially ordered and turbulent phases in the IV characteristics of the array. The ordered phase corresponds to giant Shapiro steps. In the turbulent phase there is a saturation of the broad band noise for a large number of junctions. This corresponds to a break down of the law of large numbers as seen in the globally coupled maps. Coexisting with this, we find an emergence of novel pseudo-steps in the IV characteristics. This effect can be experimentally distinguished from the Shapiro steps, which do not have broad band noise emission. (author). 21 refs, 5 figs

  9. The holographic dual of a Riemann problem in a large number of dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, Christopher P.; Spillane, Michael [C.N. Yang Institute for Theoretical Physics, Department of Physics and Astronomy,Stony Brook University, Stony Brook, NY 11794 (United States); Yarom, Amos [Department of Physics, Technion,Haifa 32000 (Israel)

    2016-08-22

We study properties of a non-equilibrium steady state generated when two heat baths are initially in contact with one another. The dynamics of the system we study are governed by holographic duality in a large number of dimensions. We discuss the “phase diagram” associated with the steady state, the dual dynamical black hole description of this problem, and its relation to the fluid/gravity correspondence.

  10. Phases of a stack of membranes in a large number of dimensions of configuration space

    Science.gov (United States)

    Borelli, M. E.; Kleinert, H.

    2001-05-01

    The phase diagram of a stack of tensionless membranes with nonlinear curvature energy and vertical harmonic interaction is calculated exactly in a large number of dimensions of configuration space. At low temperatures, the system forms a lamellar phase with spontaneously broken translational symmetry in the vertical direction. At a critical temperature, the stack disorders vertically in a meltinglike transition. The critical temperature is determined as a function of the interlayer separation l.

  11. Early stage animal hoarders: are these owners of large numbers of adequately cared for cats?

    OpenAIRE

    Ramos, D.; da Cruz, N. O.; Ellis, Sarah; Hernandez, J. A. E.; Reche-Junior, A.

    2013-01-01

    Animal hoarding is a spectrum-based condition in which hoarders are often reported to have had normal and appropriate pet-keeping habits in childhood and early adulthood. Historically, research has focused largely on well established clinical animal hoarders with little work targeted towards the onset and development of animal hoarding. This study investigated whether a Brazilian population of owners of what might typically be considered an excessive number (20 or more) of cats were more like...

  12. Loss of locality in gravitational correlators with a large number of insertions

    Science.gov (United States)

    Ghosh, Sudip; Raju, Suvrat

    2017-09-01

We review lessons from the AdS/CFT correspondence that indicate that the emergence of locality in quantum gravity is contingent upon considering observables with a small number of insertions. Correlation functions, where the number of insertions scales with a power of the central charge of the CFT, are sensitive to nonlocal effects in the bulk theory, which arise from a combination of the effects of the bulk Gauss law and a breakdown of perturbation theory. To examine whether a similar effect occurs in flat space, we consider the scattering of massless particles in the bosonic string and the superstring in the limit where the number of external particles, n, becomes very large. We use estimates of the volume of the Weil-Petersson moduli space of punctured Riemann surfaces to argue that string amplitudes grow factorially in this limit. We verify this factorial behavior through an extensive numerical analysis of string amplitudes at large n. Our numerical calculations rely on the observation that, in the large n limit, the string scattering amplitude localizes on the Gross-Mende saddle points, even though individual particle energies are small. This factorial growth implies the breakdown of string perturbation theory for n ~ (M_Pl/E)^(d-2) in d dimensions, where E is the typical individual particle energy. We explore the implications of this breakdown for the black hole information paradox. We show that the loss of locality suggested by this breakdown is precisely sufficient to resolve the cloning and strong subadditivity paradoxes.

  13. A methodology for the synthesis of heat exchanger networks having large numbers of uncertain parameters

    International Nuclear Information System (INIS)

    Novak Pintarič, Zorka; Kravanja, Zdravko

    2015-01-01

    This paper presents a robust computational methodology for the synthesis and design of flexible HEN (Heat Exchanger Networks) having large numbers of uncertain parameters. This methodology combines several heuristic methods which progressively lead to a flexible HEN design at a specific level of confidence. During the first step, a HEN topology is generated under nominal conditions followed by determining those points critical for flexibility. A significantly reduced multi-scenario model for flexible HEN design is formulated at the nominal point with the flexibility constraints at the critical points. The optimal design obtained is tested by stochastic Monte Carlo optimization and the flexibility index through solving one-scenario problems within a loop. This presented methodology is novel regarding the enormous reduction of scenarios in HEN design problems, and computational effort. Despite several simplifications, the capability of designing flexible HENs with large numbers of uncertain parameters, which are typical throughout industry, is not compromised. An illustrative case study is presented for flexible HEN synthesis comprising 42 uncertain parameters. - Highlights: • Methodology for HEN (Heat Exchanger Network) design under uncertainty is presented. • The main benefit is solving HENs having large numbers of uncertain parameters. • Drastically reduced multi-scenario HEN design problem is formulated through several steps. • Flexibility of HEN is guaranteed at a specific level of confidence.

  14. A full picture of large lepton number asymmetries of the Universe

    Energy Technology Data Exchange (ETDEWEB)

    Barenboim, Gabriela [Departament de Física Teòrica and IFIC, Universitat de València-CSIC, C/ Dr. Moliner, 50, Burjassot, E-46100 Spain (Spain); Park, Wan-Il, E-mail: Gabriela.Barenboim@uv.es, E-mail: wipark@jbnu.ac.kr [Department of Science Education (Physics), Chonbuk National University, 567 Baekje-daero, Jeonju, 561-756 (Korea, Republic of)

    2017-04-01

A large lepton number asymmetry of O(0.1-1) in the present Universe might not only be allowed but also necessary for consistency among cosmological data. We show that, if a sizeable lepton number asymmetry were produced before the electroweak phase transition, the requirement of not producing too much baryon number asymmetry through sphaleron processes forces the high-scale lepton number asymmetry to be larger than about 0.3. Therefore a mild entropy release causing O(10-100) suppression of the pre-existing particle density should take place when the background temperature of the Universe is around T = O(10^-2-10^2) GeV for a large but experimentally consistent asymmetry to be present today. We also show that such a mild entropy production can be obtained by late-time decays of the saxion, constraining the parameters of the Peccei-Quinn sector such as the mass and the vacuum expectation value of the saxion field to be m_φ ≳ O(10) TeV and φ_0 ≳ O(10^14) GeV, respectively.

  15. Monitoring a large number of pesticides and transformation products in water samples from Spain and Italy.

    Science.gov (United States)

    Rousis, Nikolaos I; Bade, Richard; Bijlsma, Lubertus; Zuccato, Ettore; Sancho, Juan V; Hernandez, Felix; Castiglioni, Sara

    2017-07-01

    Assessing the presence of pesticides in environmental waters is particularly challenging because of the huge number of substances used which may end up in the environment. Furthermore, the occurrence of pesticide transformation products (TPs) and/or metabolites makes this task even harder. Most studies dealing with the determination of pesticides in water include only a small number of analytes and in many cases no TPs. The present study applied a screening method for the determination of a large number of pesticides and TPs in wastewater (WW) and surface water (SW) from Spain and Italy. Liquid chromatography coupled to high-resolution mass spectrometry (HRMS) was used to screen a database of 450 pesticides and TPs. Detection and identification were based on specific criteria, i.e. mass accuracy, fragmentation, and comparison of retention times when reference standards were available, or a retention time prediction model when standards were not available. Seventeen pesticides and TPs from different classes (fungicides, herbicides and insecticides) were found in WW in Italy and Spain, and twelve in SW. Generally, in both countries more compounds were detected in effluent WW than in influent WW, and in SW than WW. This might be due to the analytical sensitivity in the different matrices, but also to the presence of multiple sources of pollution. HRMS proved a good screening tool to determine a large number of substances in water and identify some priority compounds for further quantitative analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Comparative analysis of the number of sheep in FYR and some European countries

    Directory of Open Access Journals (Sweden)

    Arsić Slavica

    2015-01-01

Full Text Available Sheep farming in Serbia has recorded, year after year, a decline in the number of sheep, as well as in the production of milk and meat. The main objective of this paper is the analysis of the number of sheep in Serbia and the surrounding countries (FYR). Comparing the current state of the total number of sheep (in 2011) with the state in the former Yugoslavia shows that Serbia has 66% fewer sheep than the total recorded in 1967 (the base year). Relative to the last census from 2012, the number of sheep in Serbia increased by 18.4% compared with the previous year (2011). The other former Yugoslav republics (FYR) also show a decrease in the total number of sheep compared to 1967 (the base year): by 76.5% in Bosnia and Herzegovina, 64.3% in Montenegro, 41.3% in Croatia and 63.5% in Macedonia; the exception is Slovenia, whose total number of sheep increased by 83,000 head. The paper also gives an overview of the number of sheep for some European countries and parts of the world, for comparison with the state of sheep farming in the FYR.

  17. Comparative analysis of large biomass & coal co-utilization units

    NARCIS (Netherlands)

    Liszka, M.; Nowak, G.; Ptasinski, K.J.; Favrat, D.; Marechal, F.

    2010-01-01

    The co-utilization of coal and biomass in large power units is considered in many countries (e.g. Poland) as fast and effective way of increasing renewable energy share in the fuel mix. Such a method of biomass use is especially suitable for power systems where solid fuels (hard coal, lignite) are

  18. Recreating Raven's: software for systematically generating large numbers of Raven-like matrix problems with normed properties.

    Science.gov (United States)

    Matzen, Laura E; Benz, Zachary O; Dixon, Kevin R; Posey, Jamie; Kroger, James K; Speed, Ann E

    2010-05-01

    Raven's Progressive Matrices is a widely used test for assessing intelligence and reasoning ability (Raven, Court, & Raven, 1998). Since the test is nonverbal, it can be applied to many different populations and has been used all over the world (Court & Raven, 1995). However, relatively few matrices are in the sets developed by Raven, which limits their use in experiments requiring large numbers of stimuli. For the present study, we analyzed the types of relations that appear in Raven's original Standard Progressive Matrices (SPMs) and created a software tool that can combine the same types of relations according to parameters chosen by the experimenter, to produce very large numbers of matrix problems with specific properties. We then conducted a norming study in which the matrices we generated were compared with the actual SPMs. This study showed that the generated matrices both covered and expanded on the range of problem difficulties provided by the SPMs.

  19. Large-scale Comparative Sentiment Analysis of News Articles

    OpenAIRE

    Wanner, Franz; Rohrdantz, Christian; Mansmann, Florian; Stoffel, Andreas; Oelke, Daniela; Krstajic, Milos; Keim, Daniel; Luo, Dongning; Yang, Jing; Atkinson, Martin

    2009-01-01

    Online media offers great possibilities to retrieve more news items than ever. In contrast to these technical developments, human capabilities to read all these news items have not increased likewise. To bridge this gap, this poster presents a visual analytics tool for conducting semi-automatic sentiment analysis of large news feeds. The tool retrieves and analyzes the news of two categories (Terrorist Attack and Natural Disasters) and news which belong to both categories of the Europe Media ...

  20. Impact factors for Reggeon-gluon transition in N=4 SYM with large number of colours

    Energy Technology Data Exchange (ETDEWEB)

    Fadin, V.S., E-mail: fadin@inp.nsk.su [Budker Institute of Nuclear Physics of SD RAS, 630090 Novosibirsk (Russian Federation); Novosibirsk State University, 630090 Novosibirsk (Russian Federation); Fiore, R., E-mail: roberto.fiore@cs.infn.it [Dipartimento di Fisica, Università della Calabria, and Istituto Nazionale di Fisica Nucleare, Gruppo collegato di Cosenza, Arcavacata di Rende, I-87036 Cosenza (Italy)

    2014-06-27

We calculate impact factors for the Reggeon-gluon transition in supersymmetric Yang–Mills theory with four supercharges at a large number of colours N_c. In the next-to-leading order impact factors are not uniquely defined and must accord with the BFKL kernels and energy scales. We obtain the impact factor corresponding to the kernel and the energy evolution parameter which is invariant under Möbius transformations in momentum space, and show that it is also Möbius invariant up to terms taken into account in the BDS ansatz.

  1. Do neutron stars disprove multiplicative creation in Dirac's large number hypothesis

    International Nuclear Information System (INIS)

    Qadir, A.; Mufti, A.A.

    1980-07-01

Dirac's cosmology, based on his large number hypothesis, took the gravitational coupling to be decreasing with time and matter to be created as the square of time. Since the effects predicted by Dirac's theory are very small, it is difficult to find a "clean" test for it. Here we show that the observed radiation from pulsars is inconsistent with Dirac's multiplicative creation model, in which the matter created is proportional to the density of matter already present. Of course, this discussion makes no comment on the "additive creation" model, or on the revised version of Dirac's theory. (author)

  2. Law of large numbers and central limit theorem for randomly forced PDE's

    CERN Document Server

    Shirikyan, A

    2004-01-01

    We consider a class of dissipative PDE's perturbed by an external random force. Under the condition that the distribution of perturbation is sufficiently non-degenerate, a strong law of large numbers (SLLN) and a central limit theorem (CLT) for solutions are established and the corresponding rates of convergence are estimated. It is also shown that the estimates obtained are close to being optimal. The proofs are based on the property of exponential mixing for the problem in question and some abstract SLLN and CLT for mixing-type Markov processes.

  3. On the Convergence and Law of Large Numbers for the Non-Euclidean Lp-Means

    Directory of Open Access Journals (Sweden)

    George Livadiotis

    2017-05-01

Full Text Available This paper describes and proves two important theorems that compose the Law of Large Numbers for the non-Euclidean Lp-means, known to be true for the Euclidean L2-means: Let the Lp-mean estimator be the specific functional that estimates the Lp-mean of N independent and identically distributed random variables; then, (i) the expectation value of the Lp-mean estimator equals the mean of the distributions of the random variables; and (ii) the limit N → ∞ of the Lp-mean estimator also equals the mean of the distributions.
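The convergence in (ii) is easy to check numerically. The sketch below is an illustration of the general idea, not code from the paper; the `lp_mean` helper is invented for the example. It finds the Lp-mean of a sample, i.e. the minimizer of Σ|x_i − m|^p, by ternary search (the objective is convex for p ≥ 1), and applies it to uniform samples, whose symmetry about 0.5 makes every Lp-mean target the distribution mean:

```python
import random

def lp_mean(xs, p, iters=100):
    """Hypothetical helper: the L_p-mean of a sample, i.e. the value m
    minimizing sum(|x - m|**p), found by ternary search (convex for p >= 1)."""
    lo, hi = min(xs), max(xs)
    cost = lambda m: sum(abs(x - m) ** p for x in xs)
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if cost(m1) < cost(m2):
            hi = m2          # minimum lies left of m2
        else:
            lo = m1          # minimum lies right of m1
    return (lo + hi) / 2

random.seed(0)
# Uniform(0, 1) is symmetric about 0.5, so every L_p-mean targets 0.5.
xs = [random.uniform(0.0, 1.0) for _ in range(4000)]
for p in (1, 2, 4):
    print(p, round(lp_mean(xs, p), 3))
```

For this symmetric sample all three estimates land near 0.5, in line with the theorem; for p = 1 the Lp-mean reduces to the sample median, for p = 2 to the ordinary mean.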

  4. International comparative studies of education and large scale change

    NARCIS (Netherlands)

    Howie, Sarah; Plomp, T.; Bascia, Nina; Cumming, Alister; Datnow, Amanda; Leithwood, Kenneth; Livingstone, David

    2005-01-01

    The development of international comparative studies of educational achievements dates back to the early 1960s and was made possible by developments in sample survey methodology, group testing techniques, test development, and data analysis (Husén & Tuijnman, 1994, p. 6). The studies involve

  5. Superposition of elliptic functions as solutions for a large number of nonlinear equations

    International Nuclear Information System (INIS)

    Khare, Avinash; Saxena, Avadh

    2014-01-01

For a large number of nonlinear equations, both discrete and continuum, we demonstrate a kind of linear superposition. We show that whenever a nonlinear equation admits solutions in terms of both Jacobi elliptic functions cn(x, m) and dn(x, m) with modulus m, then it also admits solutions in terms of their sum as well as difference. We have checked this in the case of several nonlinear equations such as the nonlinear Schrödinger equation, MKdV, a mixed KdV-MKdV system, a mixed quadratic-cubic nonlinear Schrödinger equation, the Ablowitz-Ladik equation, the saturable nonlinear Schrödinger equation, λφ^4, the discrete MKdV as well as for several coupled field equations. Further, for a large number of nonlinear equations, we show that whenever a nonlinear equation admits a periodic solution in terms of dn^2(x, m), it also admits solutions in terms of dn^2(x, m) ± √m cn(x, m) dn(x, m), even though cn(x, m)dn(x, m) is not a solution of these nonlinear equations. Finally, we also obtain superposed solutions of various forms for several coupled nonlinear equations.

  6. Law of Large Numbers: the Theory, Applications and Technology-based Education.

    Science.gov (United States)

    Dinov, Ivo D; Christou, Nicolas; Gould, Robert

    2009-03-01

    Modern approaches for technology-based blended education utilize a variety of recently developed novel pedagogical, computational and network resources. Such attempts employ technology to deliver integrated, dynamically-linked, interactive-content and heterogeneous learning environments, which may improve student comprehension and information retention. In this paper, we describe one such innovative effort of using technological tools to expose students in probability and statistics courses to the theory, practice and usability of the Law of Large Numbers (LLN). We base our approach on integrating pedagogical instruments with the computational libraries developed by the Statistics Online Computational Resource (www.SOCR.ucla.edu). To achieve this merger we designed a new interactive Java applet and a corresponding demonstration activity that illustrate the concept and the applications of the LLN. The LLN applet and activity have common goals - to provide graphical representation of the LLN principle, build lasting student intuition and present the common misconceptions about the law of large numbers. Both the SOCR LLN applet and activity are freely available online to the community to test, validate and extend (Applet: http://socr.ucla.edu/htmls/exp/Coin_Toss_LLN_Experiment.html, and Activity: http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_LLN).
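The applet's coin-toss experiment is straightforward to reproduce offline. The following minimal sketch (our own illustration, not the SOCR applet itself) tracks the running proportion of heads in a simulated sequence of fair-coin tosses; by the LLN the proportion drifts toward 0.5 as the number of tosses grows:

```python
import random

def running_proportion(n_flips, rng):
    """Proportion of heads after each of n_flips fair-coin tosses."""
    heads, props = 0, []
    for i in range(1, n_flips + 1):
        heads += rng.random() < 0.5   # True counts as 1
        props.append(heads / i)
    return props

rng = random.Random(42)
props = running_proportion(50000, rng)
# Early estimates fluctuate widely; late ones hug 0.5 (the LLN at work).
print("after 100 tosses: ", round(props[99], 3))
print("after 50000 tosses:", round(props[-1], 4))
```

Plotting `props` against the toss index reproduces the familiar LLN picture the applet animates: large swings early on, then a curve settling onto 0.5.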

  7. Conformal window in QCD for large numbers of colors and flavors

    International Nuclear Information System (INIS)

    Zhitnitsky, Ariel R.

    2014-01-01

We conjecture that the phase transition in QCD at a large number of colors N ≫ 1 is triggered by a drastic change in the instanton density. As a result, all physical observables also experience a sharp modification in their θ behavior. This conjecture is motivated by the holographic model of QCD, where the confinement-deconfinement phase transition indeed happens precisely at the temperature T = T_c where the θ-dependence of the vacuum energy experiences a sudden change in behavior: from N^2 cos(θ/N) at T < T_c to cos θ exp(−N) at T > T_c. This conjecture is also supported by recent lattice studies. We employ this conjecture to study a possible phase transition as a function of κ ≡ N_f/N from the confinement to the conformal phase in the Veneziano limit N_f ∼ N, when the numbers of flavors and colors are large but the ratio κ is finite. Technically, we consider an operator which gets its expectation value solely from non-perturbative instanton effects. When κ exceeds some critical value κ > κ_c, the integral over the instanton size is dominated by small-size instantons, making the instanton computations reliable, with the expected exp(−N) behavior. However, when κ < κ_c, the integral over the instanton size is dominated by large-size instantons, and the instanton expansion breaks down. This regime with κ < κ_c corresponds to the confinement phase. We also compute the variation of the critical κ_c(T, μ) when the temperature and chemical potential T, μ ≪ Λ_QCD slightly vary. We also discuss the scaling (x_i − x_j)^(−γ_det) in the conformal phase.

  8. Vicious random walkers in the limit of a large number of walkers

    International Nuclear Information System (INIS)

    Forrester, P.J.

    1989-01-01

The vicious random walker problem on a line is studied in the limit of a large number of walkers. The multidimensional integral representing the probability that the p walkers will survive a time t (denoted P_t(p)) is shown to be analogous to the partition function of a particular one-component Coulomb gas. By assuming the existence of the thermodynamic limit for the Coulomb gas, one can deduce asymptotic formulas for P_t(p) in the large-p, large-t limit. A straightforward analysis gives rigorous asymptotic formulas for the probability that after a time t the walkers are in their initial configuration (this event is termed a reunion). Consequently, asymptotic formulas for the conditional probability of a reunion, given that all walkers survive, are derived. Also, an asymptotic formula for the conditional probability density that any walker will arrive at a particular point in time t, given that all p walkers survive, is calculated in the limit t ≫ p.

  9. Automated flow cytometric analysis across large numbers of samples and cell types.

    Science.gov (United States)

    Chen, Xiaoyi; Hasan, Milena; Libri, Valentina; Urrutia, Alejandra; Beitz, Benoît; Rouilly, Vincent; Duffy, Darragh; Patin, Étienne; Chalmond, Bernard; Rogge, Lars; Quintana-Murci, Lluis; Albert, Matthew L; Schwikowski, Benno

    2015-04-01

    Multi-parametric flow cytometry is a key technology for characterization of immune cell phenotypes. However, robust high-dimensional post-analytic strategies for automated data analysis in large numbers of donors are still lacking. Here, we report a computational pipeline, called FlowGM, which minimizes operator input, is insensitive to compensation settings, and can be adapted to different analytic panels. A Gaussian Mixture Model (GMM)-based approach was utilized for initial clustering, with the number of clusters determined using Bayesian Information Criterion. Meta-clustering in a reference donor permitted automated identification of 24 cell types across four panels. Cluster labels were integrated into FCS files, thus permitting comparisons to manual gating. Cell numbers and coefficient of variation (CV) were similar between FlowGM and conventional gating for lymphocyte populations, but notably FlowGM provided improved discrimination of "hard-to-gate" monocyte and dendritic cell (DC) subsets. FlowGM thus provides rapid high-dimensional analysis of cell phenotypes and is amenable to cohort studies. Copyright © 2015. Published by Elsevier Inc.
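The model-selection step described here, a Gaussian mixture fit with the number of components chosen by the Bayesian Information Criterion, can be illustrated in miniature. The sketch below is our own toy 1-D EM fit on synthetic data, not the FlowGM pipeline; `fit_gmm_1d` and `bic` are names invented for the example. For clearly bimodal data, BIC (here (3k − 1)·ln n − 2·ln L, lower is better) prefers two components over one:

```python
import math, random

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit_gmm_1d(xs, k, iters=40):
    """Toy EM fit of a k-component 1-D Gaussian mixture (invented for this
    example). Returns (weights, means, variances, log-likelihood)."""
    n = len(xs)
    lo, hi = min(xs), max(xs)
    mus = [lo + (j + 0.5) * (hi - lo) / k for j in range(k)]   # spread means
    vars_ = [((hi - lo) / k) ** 2] * k
    ws = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            ps = [w * normal_pdf(x, m, v) for w, m, v in zip(ws, mus, vars_)]
            s = sum(ps)
            resp.append([p / s for p in ps])
        # M-step: re-estimate weights, means and variances
        for j in range(k):
            nj = max(sum(r[j] for r in resp), 1e-12)
            ws[j] = nj / n
            mus[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            vars_[j] = sum(r[j] * (x - mus[j]) ** 2
                           for r, x in zip(resp, xs)) / nj + 1e-9
    loglik = sum(math.log(sum(w * normal_pdf(x, m, v)
                              for w, m, v in zip(ws, mus, vars_)))
                 for x in xs)
    return ws, mus, vars_, loglik

def bic(loglik, k, n):
    # 3k - 1 free parameters in 1-D: k means, k variances, k - 1 weights
    return (3 * k - 1) * math.log(n) - 2 * loglik   # lower is better

random.seed(1)
xs = ([random.gauss(0.0, 1.0) for _ in range(200)] +
      [random.gauss(6.0, 1.0) for _ in range(200)])
scores = {k: bic(fit_gmm_1d(xs, k)[3], k, len(xs)) for k in (1, 2)}
best = min(scores, key=scores.get)
print("BIC prefers k =", best)
```

The paper's pipeline operates on high-dimensional flow-cytometry data and adds meta-clustering across donors; this toy shows only the GMM-plus-BIC core of that idea.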

  10. A comment on "bats killed in large numbers at United States wind energy facilities"

    Science.gov (United States)

    Huso, Manuela M.P.; Dalthorp, Dan

    2014-01-01

    Widespread reports of bat fatalities caused by wind turbines have raised concerns about the impacts of wind power development. Reliable estimates of the total number killed and the potential effects on populations are needed, but it is crucial that they be based on sound data. In a recent BioScience article, Hayes (2013) estimated that over 600,000 bats were killed at wind turbines in the United States in 2012. The scientific errors in the analysis are numerous, with the two most serious being that the included sites constituted a convenience sample, not a representative sample, and that the individual site estimates are derived from such different methodologies that they are inherently not comparable. This estimate is almost certainly inaccurate, but whether the actual number is much smaller, much larger, or about the same is uncertain. An accurate estimate of total bat fatality is not currently possible, given the shortcomings of the available data.

  11. Atomic Number Dependence of Hadron Production at Large Transverse Momentum in 300 GeV Proton--Nucleus Collisions

    Science.gov (United States)

    Cronin, J. W.; Frisch, H. J.; Shochet, M. J.; Boymond, J. P.; Mermod, R.; Piroue, P. A.; Sumner, R. L.

    1974-07-15

    In an experiment at the Fermi National Accelerator Laboratory we have compared the production of large transverse momentum hadrons from targets of W, Ti, and Be bombarded by 300 GeV protons. The hadron yields were measured at 90 degrees in the proton-nucleon c.m. system with a magnetic spectrometer equipped with 2 Cerenkov counters and a hadron calorimeter. The production cross-sections have a dependence on the atomic number A that grows with p{sub ⊥}, eventually leveling off proportional to A{sup 1.1}.
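The A-dependence described can be extracted from per-target yields by a straight-line fit in log-log space; a sketch with illustrative numbers (not the experiment's measured yields) generated to follow sigma ~ A^1.1:

```python
import numpy as np

# Atomic mass numbers of the three targets used in the experiment.
A = np.array([9.0, 48.0, 184.0])          # Be, Ti, W
# Hypothetical yields following sigma = c * A**1.1 (illustrative only).
sigma = 3.0 * A**1.1

# Fit sigma = c * A**alpha by linear regression in log-log space.
alpha, log_c = np.polyfit(np.log(A), np.log(sigma), 1)
print(alpha)  # recovers the exponent used to generate the data
```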

  12. Large-eddy simulation of flow over a grooved cylinder up to transcritical Reynolds numbers

    KAUST Repository

    Cheng, W.

    2017-11-27

    We report wall-resolved large-eddy simulation (LES) of flow over a grooved cylinder up to the transcritical regime. The stretched-vortex subgrid-scale model is embedded in a general fourth-order finite-difference code discretization on a curvilinear mesh. In the present study grooves are equally distributed around the circumference of the cylinder, each of sinusoidal shape with fixed height, invariant in the spanwise direction. Based on the two parameters, the groove height and the Reynolds number Re = U∞D/ν, where U∞ is the free-stream velocity, D the diameter of the cylinder and ν the kinematic viscosity, two main sets of simulations are described. The first set varies the groove height at fixed Reynolds number. We study the flow deviation from the smooth-cylinder case, with emphasis on several important statistics such as the length of the mean-flow recirculation bubble, the pressure coefficient, the skin-friction coefficient and the non-dimensional pressure gradient parameter. It is found that, with increasing groove height at fixed Reynolds number, some properties of the mean flow behave somewhat similarly to changes in the smooth-cylinder flow when the Reynolds number is increased. This includes a shrinking recirculation bubble and a nearly constant minimum pressure coefficient. In contrast, while the non-dimensional pressure gradient parameter remains nearly constant for the front part of the smooth-cylinder flow, it shows an oscillatory variation for the grooved-cylinder case. The second main set of LES varies the Reynolds number at fixed groove height. It is found that this range spans the subcritical and supercritical regimes and reaches the beginning of the transcritical flow regime. Mean-flow properties are diagnosed and compared with available experimental data, including the pressure coefficient and the drag coefficient. The timewise variation of the lift and drag coefficients is also studied to elucidate the transition among the three regimes. Instantaneous images of the surface skin-friction vector field and of the three-dimensional Q-criterion field are utilized to further understand the dynamics of the near-surface flow.

  13. Effective atomic numbers of some tissue substitutes by different methods: A comparative study

    Directory of Open Access Journals (Sweden)

    Vishwanath P Singh

    2014-01-01

    Effective atomic numbers of some human organ tissue substitutes such as polyethylene terephthalate, red articulation wax, paraffin 1, paraffin 2, bolus, pitch, polyphenylene sulfide, polysulfone, polyvinylchloride, and modeling clay have been calculated by four different methods: Auto-Zeff, direct, interpolation, and power law. It was found that the effective atomic numbers computed by the Auto-Zeff, direct and interpolation methods were in good agreement in the intermediate energy region (0.1 MeV < E < 5 MeV), where the Compton interaction dominates. A large difference between the effective atomic numbers from the direct method and Auto-Zeff was observed in the photoelectric and pair-production regions. Effective atomic numbers computed by the power law were found to be close to those of the direct method in the photoelectric absorption region. The Auto-Zeff, direct and interpolation methods were found to be in good agreement for computation of effective atomic numbers in the intermediate energy region (100 keV < E < 10 MeV). The direct method was found to be the appropriate method for computation of effective atomic numbers in the photoelectric region (10 keV < E < 100 keV). The tissue equivalence of the tissue substitutes can be represented by any of the computation methods mentioned in the present study. An accurate estimation of Rayleigh scattering is required to eliminate the effect of the molecular, chemical, or crystalline environment of the atom when estimating gamma interaction parameters.
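Of the four methods, the power law is the simplest to state; a sketch of the commonly used form Zeff = (Σᵢ aᵢ Zᵢ^2.94)^(1/2.94), where aᵢ is the fractional electron contribution of element i. Water is used here as an illustrative material (it is not one of the substitutes studied), and the exponent 2.94 is the conventional choice:

```python
# Power-law estimate of the effective atomic number:
#   Zeff = (sum_i a_i * Z_i**m)**(1/m),  m = 2.94,
# where a_i is element i's fractional contribution to the electron count.
composition = {1: 2, 8: 1}        # Z -> atoms per molecule (water, H2O)
electrons = {Z: n * Z for Z, n in composition.items()}
total = sum(electrons.values())
a = {Z: ne / total for Z, ne in electrons.items()}

m = 2.94
z_eff = sum(ai * Z**m for Z, ai in a.items()) ** (1 / m)
print(z_eff)  # roughly 7.4 for water
```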

  14. Properties of sound attenuation around a two-dimensional underwater vehicle with a large cavitation number

    International Nuclear Information System (INIS)

    Ye Peng-Cheng; Pan Guang

    2015-01-01

    Due to the high speed of underwater vehicles, cavitation is generated inevitably, along with sound attenuation when the sound signal traverses the cavity region around the underwater vehicle. Linear wave propagation is studied to obtain the influence of the bubbly liquid on acoustic wave propagation in the cavity region. The sound attenuation coefficient and the sound speed formula of the bubbly liquid are presented. Based on the sound attenuation coefficients for various vapor volume fractions, the attenuation of sound intensity is calculated under large cavitation number conditions. The results show that the sound intensity attenuation is fairly small under the conditions considered. Consequently, the intensity attenuation can be neglected in engineering practice. (paper)
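The intensity loss across an attenuating layer follows the usual exponential law I = I₀ exp(-αd); a toy calculation, with an assumed attenuation coefficient and path length (not values from the paper), illustrates why a small coefficient makes the attenuation negligible:

```python
import math

# Plane-wave intensity attenuation across a bubbly layer: I = I0 * exp(-alpha * d).
# alpha and d are illustrative values, not taken from the paper.
alpha = 0.05   # attenuation coefficient (nepers/m) for a low vapour fraction
d = 2.0        # path length through the cavity region, in metres

loss_db = 10 * math.log10(math.exp(alpha * d))  # attenuation in decibels
print(loss_db)  # well under 1 dB: negligible for engineering purposes
```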

  15. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    Science.gov (United States)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field-programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
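The parallel FPGA implementation is hardware-specific, but the underlying recurrence is simple; a scalar software sketch of an additive lagged Fibonacci generator, x[n] = (x[n-j] + x[n-k]) mod 2^m, with common illustrative lags (5, 17) that are not necessarily those of the paper:

```python
from collections import deque

def alfg(seed_state, j=5, k=17, m=32):
    """Additive lagged Fibonacci generator x[n] = (x[n-j] + x[n-k]) mod 2**m.
    Lags (5, 17) and modulus 2**32 are common illustrative choices."""
    assert len(seed_state) == k and any(x % 2 for x in seed_state)
    state = deque(seed_state, maxlen=k)   # sliding window of the last k values
    mask = (1 << m) - 1
    while True:
        x = (state[-j] + state[-k]) & mask
        state.append(x)                   # oldest entry drops off automatically
        yield x

gen = alfg(list(range(1, 18)))            # trivial seed for illustration
stream = [next(gen) for _ in range(5)]
print(stream)
```

In the FPGA version, independent streams are obtained by giving each parallel instance its own seed table.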

  16. Large-Eddy Simulation of a High Reynolds Number Flow Around a Cylinder Including Aeroacoustic Predictions

    Science.gov (United States)

    Spyropoulos, Evangelos T.; Holmes, Bayard S.

    1997-01-01

    The dynamic subgrid-scale model is employed in large-eddy simulations of flow over a cylinder at a Reynolds number, based on the diameter of the cylinder, of 90,000. The Centric SPECTRUM(trademark) finite element solver is used for the analysis. The far-field sound pressure is calculated from Lighthill-Curle's equation using the computed fluctuating pressure at the surface of the cylinder. The sound pressure level at a location 35 diameters away from the cylinder and at an angle of 90 deg with respect to the wake's downstream axis was found to have a peak value of approximately 110 dB. Slightly smaller peak values were predicted at the 60 deg and 120 deg locations. A grid refinement study suggests that the dynamic model demands mesh refinement beyond that used here.

  17. System for high-voltage control of detectors with a large number of photomultipliers

    International Nuclear Information System (INIS)

    Donskov, S.V.; Kachanov, V.A.; Mikhajlov, Yu.V.

    1985-01-01

    A simple and inexpensive on-line system for high-voltage control, designed for detectors with a large number of photomultipliers, has been developed and manufactured. It was developed for the GAMC-type hodoscopic electromagnetic calorimeters, comprising up to 4 thousand photomultipliers. High-voltage variation is performed by a high-speed potentiometer rotated by a microengine. Block diagrams of the computer-control electronics are presented. The high-voltage control system has been used for five years in IHEP and CERN accelerator experiments. Operating experience has shown that it is quite simple and convenient to use. With about 6 thousand controlled channels in the two experiments, no potentiometer or microengine failures were observed.

  18. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    International Nuclear Information System (INIS)

    Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel; Cuevas, Sergio; Ramos, Eduardo

    2014-01-01

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows one to predict the PDFs of the scalar in agreement with numerical and experimental results. This model also indicates that the PDFs of the scalar are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.

  19. Decision process in MCDM with large number of criteria and heterogeneous risk preferences

    Directory of Open Access Journals (Sweden)

    Jian Liu

    A new decision process is proposed to address the challenge posed by a large number of criteria in a multi-criteria decision making (MCDM) problem together with decision makers who have heterogeneous risk preferences. First, from the perspective of objective data, the effective criteria are extracted based on the similarity relations between criterion values, and the criteria are weighted, respectively. Second, the corresponding types of theoretical models of risk-preference expectations are built, based on the possibility and similarity between criterion values, to solve the problem of different interval numbers with the same expectation. Then, the risk preferences (risk-seeking, risk-neutral and risk-averse) are embedded in the decision process. Later, the optimal decision object is selected according to the risk preferences of decision makers based on the corresponding theoretical model. Finally, a new algorithm for an information aggregation model is proposed based on fairness maximization of decision results for the group decision, considering the coexistence of decision makers with heterogeneous risk preferences. The scientific rationality of this new method is verified through the analysis of a real case. Keywords: Heterogeneous, Risk preferences, Fairness, Decision process, Group decision

  20. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10⁶). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.

  1. A Comparative Analysis of Numbers and Biology Content Domains between Turkey and the USA

    Science.gov (United States)

    Incikabi, Lutfi; Ozgelen, Sinan; Tjoe, Hartono

    2012-01-01

    This study aimed to compare Mathematics and Science programs focusing on TIMSS content domains of Numbers and Biology that produced the largest achievement gap among students from Turkey and the USA. Specifically, it utilized the content analysis method within Turkish and New York State (NYS) frameworks. The procedures of study included matching…

  2. CRISPR transcript processing: a mechanism for generating a large number of small interfering RNAs

    Directory of Open Access Journals (Sweden)

    Djordjevic Marko

    2012-07-01

    Background: CRISPR/Cas (Clustered Regularly Interspaced Short Palindromic Repeats/CRISPR-associated sequences) is a recently discovered prokaryotic defense system against foreign DNA, including viruses and plasmids. A CRISPR cassette is transcribed as a continuous transcript (pre-crRNA), which is processed by Cas proteins into small RNA molecules (crRNAs) that are responsible for defense against invading viruses. Experiments in E. coli report that overexpression of cas genes generates a large number of crRNAs from only a few pre-crRNAs. Results: We here develop a minimal model of CRISPR processing, which we parameterize based on available experimental data. From the model, we show that the system can generate a large amount of crRNA, based on only a small decrease in the amount of pre-crRNA. The relationship between the decrease of pre-crRNA and the increase of crRNA corresponds to strong linear amplification. Interestingly, this strong amplification crucially depends on fast non-specific degradation of pre-crRNA by an unidentified nuclease. We show that overexpression of cas genes above a certain level does not result in a further increase of crRNA, but that this saturation can be relieved if the rate of CRISPR transcription is increased. We furthermore show that a small increase of the CRISPR transcription rate can substantially decrease the extent of cas gene activation necessary to achieve a desired amount of crRNA. Conclusions: The simple mathematical model developed here is able to explain existing experimental observations on CRISPR transcript processing in Escherichia coli. The model shows that a competition between specific pre-crRNA processing and non-specific degradation determines the steady-state levels of crRNA and is responsible for strong linear amplification of crRNAs when cas genes are overexpressed. The model further shows how disappearance of only a few pre-crRNA molecules normally present in the cell can lead to a large (two
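The qualitative behaviour described, a small drop in pre-crRNA yielding a large gain in crRNA, can be reproduced by a minimal two-species steady-state sketch; all rate constants and the crRNA yield per transcript below are illustrative assumptions, not the paper's fitted values:

```python
# Minimal CRISPR processing model (illustrative rates, not fitted values):
#   pre-crRNA:  dP/dt = T - (kp + kd) * P   (transcription, processing, degradation)
#   crRNA:      dC/dt = n * kp * P - lc * C (each processed transcript yields n crRNAs)
T, kp, kd, n, lc = 1.0, 0.5, 2.0, 10.0, 0.1

def steady_state(kp):
    P = T / (kp + kd)          # pre-crRNA level
    C = n * kp * P / lc        # crRNA level
    return P, C

P0, C0 = steady_state(kp)
P1, C1 = steady_state(5 * kp)  # overexpress cas genes -> faster processing
amplification = (C1 - C0) / (P0 - P1)
print(P0 - P1, C1 - C0)        # small drop in pre-crRNA, large gain in crRNA
```

The large amplification factor arises because the stable crRNA pool (slow decay lc) integrates the flux diverted from fast non-specific degradation (kd), mirroring the competition the abstract describes.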

  3. Large boson number IBM calculations and their relationship to the Bohr model

    International Nuclear Information System (INIS)

    Thiamova, G.; Rowe, D.J.

    2009-01-01

    Recently, the SO(5) Clebsch-Gordan (CG) coefficients up to the seniority v_max = 40 were computed in floating point arithmetic (T.A. Welsh, unpublished (2008)) and, in exact arithmetic, as square roots of rational numbers (M.A. Caprio et al., to be published in Comput. Phys. Commun.). It is shown in this paper that extending the QQQ model calculations set up in the work by D.J. Rowe and G. Thiamova (Nucl. Phys. A 760, 59 (2005)) to N = v_max = 40 is sufficient to obtain the IBM results converged to its Bohr contraction limit. This is done by comparing some important matrix elements in both models, by looking at the seniority decomposition of low-lying states and at the behavior of the energy and B(E2) transition strength ratios with increasing seniority. (orig.)

  4. Number of deaths due to lung diseases: How large is the problem?

    International Nuclear Information System (INIS)

    Wagener, D.K.

    1990-01-01

    The importance of lung disease as an indicator of environmentally induced adverse health effects has been recognized by inclusion among the Health Objectives for the Nation. The 1990 Health Objectives for the Nation (US Department of Health and Human Services, 1986) includes an objective that there should be virtually no new cases among newly exposed workers for four preventable occupational lung diseases: asbestosis, byssinosis, silicosis, and coal workers' pneumoconiosis. This brief communication describes two types of cause-of-death statistics, underlying and multiple cause, and demonstrates the differences between the two statistics using lung disease deaths among adult men. The choice of statistic has a large impact on estimated lung disease mortality rates. The choice of statistic may also have a large effect on the estimated mortality rates of other chronic diseases thought to be environmentally mediated. Issues of comorbidity and the way causes of death are reported become important in interpreting these statistics. The choice of which statistic to use when comparing data from a study population with national statistics may greatly affect the interpretation of the study findings.

  5. Formation of free round jets with long laminar regions at large Reynolds numbers

    Science.gov (United States)

    Zayko, Julia; Teplovodskii, Sergey; Chicherina, Anastasia; Vedeneev, Vasily; Reshmin, Alexander

    2018-04-01

    The paper describes a new, simple method for the formation of free round jets with long laminar regions by a jet-forming device of ~1.5 jet diameters in size. Submerged jets of 0.12 m diameter at Reynolds numbers of 2000-12 560 are experimentally studied. It is shown that, for the optimal regime, the laminar region length reaches 5.5 diameters at a Reynolds number of ~10 000, which is not achievable by other methods of laminar jet formation. To explain the existence of the optimal regime, a steady flow calculation in the forming unit and a stability analysis of the outcoming jet velocity profiles are conducted. The shortening of the laminar regions, compared with the optimal regime, is explained by the higher incoming turbulence level at lower velocities and by the increase of perturbation growth rates at larger velocities. The initial laminar regions of free jets can be used for organising air curtains for the protection of objects in medicine and technology, by creating an air field with desired properties that is not mixed with ambient air. Free jets with long laminar regions can also be used for detailed studies of perturbation growth and transition to turbulence in round jets.

  6. Space Situational Awareness of Large Numbers of Payloads From a Single Deployment

    Science.gov (United States)

    Segerman, A.; Byers, J.; Emmert, J.; Nicholas, A.

    2014-09-01

    The nearly simultaneous deployment of a large number of payloads from a single vehicle presents a new challenge for space object catalog maintenance and space situational awareness (SSA). Following two cubesat deployments last November, it took five weeks to catalog the resulting 64 orbits. The upcoming Kicksat mission will present an even greater SSA challenge, with its deployment of 128 chip-sized picosats. Although all of these deployments are in short-lived orbits, future deployments will inevitably occur at higher altitudes, with a longer-term threat of collision with active spacecraft. With such deployments, individual scientific payload operators require rapid, precise knowledge of their satellites' locations. Following the first November launch, the cataloguing did not initially associate a payload with each orbit, leaving this to the satellite operators. For short-duration missions, the time required to identify an experiment's specific orbit may easily be a large fraction of the spacecraft's lifetime. For a Kicksat-type deployment, present tracking cannot collect enough observations to catalog each small object. The current approach is to treat the chip cloud as a single catalog object. However, the cloud dissipates into multiple subclouds and, ultimately, tiny groups of untrackable chips. One response to this challenge may be to mandate installation of a transponder on each spacecraft. Directional transponder transmission detections could be used as angle observations for orbit cataloguing. Of course, such an approach would only be employable with cooperative spacecraft. In other cases, a probabilistic association approach may be useful, with the goal being to establish the probability of an element being at a given point in space. This would permit more reliable assessment of the probability of collision of active spacecraft with any cloud element. This paper surveys the cataloguing challenges presented by large-scale deployments of small spacecraft.

  7. Managerial span of control: a pilot study comparing departmental complexity and number of direct reports.

    Science.gov (United States)

    Merrill, Katreena Collette; Pepper, Ginette; Blegen, Mary

    2013-09-01

    Nurse managers play pivotal roles in hospitals. However, restructuring has resulted in nurse managers having a wider span of control and reduced visibility. The purpose of this pilot study was to compare two methods of measuring span of control: departmental complexity and number of direct reports. Forty-one nurse managers across nine hospitals completed The Ottawa Hospital Clinical Manager Span of Control Tool (TOH-SOC) and a demographic survey. A moderate positive relationship between the number of direct reports and the departmental complexity score was identified (r = .49), suggesting that the number of direct reports reflects only part of a manager's responsibility. Copyright © 2013 Longwoods Publishing.

  8. Droplet Breakup in Asymmetric T-Junctions at Intermediate to Large Capillary Numbers

    Science.gov (United States)

    Sadr, Reza; Cheng, Way Lee

    2017-11-01

    Splitting of a parent droplet into multiple daughter droplets of desired sizes is often required to enhance production and investigational efficiency in microfluidic devices. This can be done in an active or passive mode, depending on whether an external power source is used. In this study, three-dimensional simulations were performed using the Volume-of-Fluid (VOF) method to analyze droplet splitting in asymmetric T-junctions with different outlet lengths. The parent droplet is divided into two uneven portions; the volumetric ratio of the daughter droplets, in theory, depends on the length ratio of the outlet branches. The study identified various breakup modes, such as primary, transition, bubble, and non-breakup, under various flow conditions and T-junction configurations. In addition, an analysis of the primary breakup regime was conducted to study the breakup mechanisms. The results show that the way a droplet splits in an asymmetric T-junction differs from the process in a symmetric T-junction. A model for the asymmetric breakup criterion at intermediate to large Capillary numbers is presented. The proposed model is an expanded version of a theoretically derived model for symmetric droplet breakup under similar flow conditions.

  9. Growth of equilibrium structures built from a large number of distinct component types.

    Science.gov (United States)

    Hedges, Lester O; Mannige, Ranjan V; Whitelam, Stephen

    2014-09-14

    We use simple analytic arguments and lattice-based computer simulations to study the growth of structures made from a large number of distinct component types. Components possess 'designed' interactions, chosen to stabilize an equilibrium target structure in which each component type has a defined spatial position, as well as 'undesigned' interactions that allow components to bind in a compositionally-disordered way. We find that high-fidelity growth of the equilibrium target structure can happen in the presence of substantial attractive undesigned interactions, as long as the energy scale of the set of designed interactions is chosen appropriately. This observation may help explain why equilibrium DNA 'brick' structures self-assemble even if undesigned interactions are not suppressed [Ke et al. Science, 338, 1177, (2012)]. We also find that high-fidelity growth of the target structure is most probable when designed interactions are drawn from a distribution that is as narrow as possible. We use this result to suggest how to choose complementary DNA sequences in order to maximize the fidelity of multicomponent self-assembly mediated by DNA. We also comment on the prospect of growing macroscopic structures in this manner.

  10. Source of vacuum electromagnetic zero-point energy and Dirac's large numbers hypothesis

    International Nuclear Information System (INIS)

    Simaciu, I.; Dumitrescu, G.

    1993-01-01

    The stochastic electrodynamics states that the zero-point fluctuation of the vacuum (ZPF) is an electromagnetic zero-point radiation with spectral density ρ(ω) = ħω³/2π²c³. Protons, free electrons and atoms are sources for this radiation. Each of them absorbs and emits energy by interacting with the ZPF. At equilibrium the ZPF radiation is scattered by dipoles. The spectral density of the radiation scattered by a single dipole is ρ(ω,r) = ρ(ω)σ(ω)/4πr². The spectral density of the dipole radiation of the Universe is ρ = ∫₀^R nρ(ω,r)4πr²dr. If the scattering cross section per atom is of the order of the Thomson cross section σ_T, then ρ = ρ(ω)σ_T Rn. Moreover, if ρ = ρ(ω), then σ_T Rn = 1. With R = GM/c² and σ_T ≅ (e²/m_e c²)² ∝ r_e², the relation σ_T Rn = 1 is equivalent to R/r_e = e²/Gm_p m_e, i.e. the cosmological coincidence discussed in the context of Dirac's large-numbers hypothesis. (Author)

  11. The effective atomic numbers of some biomolecules calculated by two methods: A comparative study

    Energy Technology Data Exchange (ETDEWEB)

    Manohara, S. R.; Hanagodimath, S. M.; Gerward, L. [Department of Physics, Gulbarga University, Gulbarga, Karnataka 585 106 (India); Department of Physics, Technical University of Denmark, Lyngby DK-2800 (Denmark)

    2009-01-15

    The effective atomic numbers Z{sub eff} of some fatty acids and amino acids have been calculated by two numerical methods, a direct method and an interpolation method, in the energy range of 1 keV-20 MeV. The notion of Z{sub eff} is given a new meaning by using a modern database of photon interaction cross sections (WinXCom). The results of the two methods are compared and discussed. It is shown that for all biomolecules the direct method gives larger values of Z{sub eff} than the interpolation method, in particular at low energies (1-100 keV). At medium energies (0.1-5 MeV), Z{sub eff} for both methods is about constant and equal to the mean atomic number of the material. Wherever possible, the calculated values of Z{sub eff} are compared with experimental data.

  13. On the chromatic number of triangle-free graphs of large minimum degree

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2002-01-01

    We prove that, for each fixed real number c > 1/3, the triangle-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdős and Simonovits in 1973, who pointed out that there is no such result for c < 1/3.

  14. Analysis of a large number of clinical studies for breast cancer radiotherapy: estimation of radiobiological parameters for treatment planning

    International Nuclear Information System (INIS)

    Guerrero, M; Li, X Allen

    2003-01-01

    Numerous studies of early-stage breast cancer treated with breast conserving surgery (BCS) and radiotherapy (RT) have been published in recent years. Both external beam radiotherapy (EBRT) and/or brachytherapy (BT) with different fractionation schemes are currently used. The present RT practice is largely based on empirical experience and lacks a reliable modelling tool to compare different RT modalities or to design new treatment strategies. The purpose of this work is to derive a plausible set of radiobiological parameters that can be used for RT treatment planning. The derivation is based on existing clinical data and is consistent with the analysis of a large number of published clinical studies on early-stage breast cancer. A large number of published clinical studies on the treatment of early breast cancer with BCS plus RT (including whole-breast EBRT with or without a boost to the tumour bed, whole-breast EBRT alone, and brachytherapy alone) and RT alone are compiled and analysed. The linear-quadratic (LQ) model is used in the analysis. Three of these clinical studies are selected to derive a plausible set of LQ parameters. The potential doubling time T_pot is set a priori in the derivation according to in vitro measurements from the literature. The impact of considering lower or higher T_pot is investigated. The effects of inhomogeneous dose distributions are considered using clinically representative dose-volume histograms. The derived LQ parameters are used to compare a large number of clinical studies using different regimes (e.g., RT modality and/or different fractionation schemes with different prescribed doses) in order to validate their applicability. The values of the equivalent uniform dose (EUD) and biologically effective dose (BED) are used as a common metric to compare the biological effectiveness of each treatment regime. We have obtained a plausible set of radiobiological parameters for breast cancer. This set of parameters is consistent with in vitro
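The BED metric used above is computed directly from the LQ model as BED = nd(1 + d/(α/β)); a sketch comparing two common schedules, with an assumed α/β of 4 Gy that is illustrative rather than the paper's derived value:

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose (Gy) for the linear-quadratic model."""
    total = n_fractions * dose_per_fraction
    return total * (1 + dose_per_fraction / alpha_beta)

def eqd2(n_fractions, dose_per_fraction, alpha_beta):
    """Equivalent total dose delivered in 2 Gy fractions (a standard derived quantity)."""
    return bed(n_fractions, dose_per_fraction, alpha_beta) / (1 + 2 / alpha_beta)

ab = 4.0  # assumed alpha/beta (Gy) for illustration only
standard = bed(25, 2.0, ab)         # 50 Gy in 25 fractions
hypo = eqd2(16, 42.5 / 16, ab)      # 42.5 Gy in 16 fractions, as EQD2
print(standard, hypo)
```

With these assumptions the two schedules come out biologically similar, which is the kind of cross-regime comparison the EUD/BED metrics enable.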

  15. The Application Law of Large Numbers That Predicts The Amount of Actual Loss in Insurance of Life

    Science.gov (United States)

    Tinungki, Georgina Maria

    2018-03-01

    The law of large numbers is a statistical concept that uses the average number of events or risks in a sample or population to predict something; the larger the population considered, the more accurate the prediction. In the field of insurance, the law of large numbers is used to predict the risk of loss or claims among participants so that the premium can be calculated appropriately. For example, if on average one of every 100 insurance participants files an accident claim, then the premiums of 100 participants should be able to provide the sum assured for at least one accident claim. The more insurance participants are included in the calculation, the more precise the prediction of claims and the calculation of the premium. Life insurance, as a tool for spreading risk, can only work if a life insurance company is able to bear the same risk in large numbers; here applies what is called the law of large numbers. The law of large numbers states that as the amount of exposure to losses increases, the predicted loss will come closer to the actual loss. Its use allows the number of losses to be predicted better.
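The convergence the abstract describes is easy to demonstrate by simulation; a sketch with an assumed true claim probability of 1% (the numbers are illustrative, not actuarial data):

```python
import random

random.seed(1)
p_claim = 0.01  # assumed true per-policy claim probability

def observed_rate(n_policies):
    """Fraction of simulated policies that file a claim."""
    claims = sum(random.random() < p_claim for _ in range(n_policies))
    return claims / n_policies

# The observed claim frequency approaches the true 1% as the pool grows.
for n in (100, 10_000, 1_000_000):
    print(n, observed_rate(n))
```

With 100 policies the observed rate fluctuates widely; with a million it sits very close to 1%, which is why premiums computed over large pools are reliable.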

  16. On the chromatic number of pentagon-free graphs of large minimum degree

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2007-01-01

    We prove that, for each fixed real number c > 0, the pentagon-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdős and Simonovits in 1973. A similar result holds for any other fixed odd cycle, except the tria...

  17. Normal zone detectors for a large number of inductively coupled coils

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this paper uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent

  18. The Love of Large Numbers: A Popularity Bias in Consumer Choice.

    Science.gov (United States)

    Powell, Derek; Yu, Jingqi; DeWolf, Melissa; Holyoak, Keith J

    2017-10-01

    Social learning-the ability to learn from observing the decisions of other people and the outcomes of those decisions-is fundamental to human evolutionary and cultural success. The Internet now provides social evidence on an unprecedented scale. However, properly utilizing this evidence requires a capacity for statistical inference. We examined how people's interpretation of online review scores is influenced by the numbers of reviews-a potential indicator both of an item's popularity and of the precision of the average review score. Our task was designed to pit statistical information against social information. We modeled the behavior of an "intuitive statistician" using empirical prior information from millions of reviews posted on Amazon.com and then compared the model's predictions with the behavior of experimental participants. Under certain conditions, people preferred a product with more reviews to one with fewer reviews even though the statistical model indicated that the latter was likely to be of higher quality than the former. Overall, participants' judgments suggested that they failed to make meaningful statistical inferences.
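The "intuitive statistician" baseline described above amounts to shrinking each observed average toward a marketplace-wide prior, with the amount of shrinkage controlled by the review count. A minimal sketch of that idea (the prior numbers here are invented for illustration, not the paper's empirical Amazon prior):

```python
# Hypothetical prior over product quality: acts like PRIOR_STRENGTH
# "pseudo-reviews" at PRIOR_MEAN stars.
PRIOR_MEAN = 4.2
PRIOR_STRENGTH = 5

def shrunk_estimate(avg_score: float, n_reviews: int) -> float:
    """Posterior-mean style estimate: observed average shrunk toward the prior.

    Few reviews -> heavy shrinkage toward the prior; many reviews -> the
    observed average dominates.
    """
    return (PRIOR_STRENGTH * PRIOR_MEAN + n_reviews * avg_score) / (
        PRIOR_STRENGTH + n_reviews
    )

few = shrunk_estimate(4.8, 5)     # high average, few reviews
many = shrunk_estimate(4.5, 500)  # slightly lower average, many reviews
```

With these (invented) numbers the model still slightly favours the less-reviewed product, which is exactly the kind of case where the study's participants tended to prefer the one with more reviews instead.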

  19. Normal zone detectors for a large number of inductively coupled coils. Revision 1

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this paper uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent. The effect on accuracy of changes in the system parameters is discussed

  20. Normal zone detectors for a large number of inductively coupled coils

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this report uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent. An example of the detector design is given for four coils with realistic parameters. The effect on accuracy of changes in the system parameters is discussed
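The core of the detection scheme in these abstracts is subtracting the predicted inductive voltages from the measured coil voltages, so that only the resistive normal-zone contribution remains. A simplified numerical sketch of that balancing step (toy three-coil values, all hypothetical, not the paper's four-coil design example):

```python
import numpy as np

# Toy model: v = M @ dI/dt + e, where M holds self/mutual inductances and
# e holds any resistive normal-zone voltages.
M = np.array([[1.0, 0.2, 0.1],
              [0.2, 1.2, 0.3],
              [0.1, 0.3, 0.9]])    # inductance matrix (H), hypothetical
didt = np.array([5.0, -2.0, 1.0])  # measured current derivatives (A/s)

e_true = np.array([0.0, 0.4, 0.0])  # a 0.4 V normal zone in coil 2
v_measured = M @ didt + e_true      # what the coil taps would read

# The "bridge" step: balance out the (much larger) inductive voltages,
# leaving the small normal-zone voltages.
e_detected = v_measured - M @ didt
```

In the real detector the subtraction is done in hardware by bridge circuits rather than by recomputing M @ dI/dt, but the arithmetic being cancelled is the same.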

  1. Number of X-ray examinations performed on paediatric and geriatric patients compared with adult patients

    International Nuclear Information System (INIS)

    Aroua, A.; Bochud, F. O.; Valley, J. F.; Vader, J. P.; Verdun, F. R.

    2007-01-01

    The age of the patient is of prime importance when assessing the radiological risk to patients due to medical X-ray exposures and the total detriment to the population due to radiodiagnostics. In order to take into account the age-specific radiosensitivity, three age groups are considered: children, adults and the elderly. In this work, the relative number of examinations carried out on paediatric and geriatric patients is established, compared with adult patients, for radiodiagnostics as a whole, for dental and medical radiology, for 8 radiological modalities as well as for 40 types of X-ray examinations. The relative numbers of X-ray examinations are determined based on the corresponding age distributions of patients and that of the general population. Two broad groups of X-ray examinations may be defined. Group A comprises conventional radiography, fluoroscopy and computed tomography; for this group a paediatric patient undergoes half the number of examinations as that of an adult, and a geriatric patient undergoes 2.5 times more. Group B comprises angiography and interventional procedures; for this group a paediatric patient undergoes one-fourth the number of examinations carried out on an adult, and a geriatric patient undergoes five times more. (authors)

  2. Eosinophils may play regionally disparate roles in influencing IgA(+) plasma cell numbers during large and small intestinal inflammation.

    Science.gov (United States)

    Forman, Ruth; Bramhall, Michael; Logunova, Larisa; Svensson-Frej, Marcus; Cruickshank, Sheena M; Else, Kathryn J

    2016-05-31

    Eosinophils are innate immune cells present in the intestine during steady state conditions. An intestinal eosinophilia is a hallmark of many infections and an accumulation of eosinophils is also observed in the intestine during inflammatory disorders. Classically the function of eosinophils has been associated with tissue destruction, due to the release of cytotoxic granule contents. However, recent evidence has demonstrated that the eosinophil plays a more diverse role in the immune system than previously acknowledged, including shaping adaptive immune responses and providing plasma cell survival factors during the steady state. Importantly, it is known that there are regional differences in the underlying immunology of the small and large intestine, but whether there are differences in the context of the intestinal eosinophil in the steady state or inflammation is not known. Our data demonstrate that there are fewer IgA(+) plasma cells in the small intestine of eosinophil-deficient ΔdblGATA-1 mice compared to eosinophil-sufficient wild-type mice, with the difference becoming significant post-infection with Toxoplasma gondii. Remarkably, and in complete contrast, the absence of eosinophils in the inflamed large intestine does not impact on IgA(+) cell numbers during steady state, and is associated with a significant increase in IgA(+) cells post-infection with Trichuris muris compared to wild-type mice. Thus, the intestinal eosinophil appears to be less important in sustaining the IgA(+) cell pool in the large intestine compared to the small intestine, and in fact, our data suggest eosinophils play an inhibitory role. The dichotomy in the influence of the eosinophil over small and large intestinal IgA(+) cells did not depend on differences in plasma cell growth factors, recruitment potential or proliferation within the different regions of the gastrointestinal tract (GIT). We demonstrate for the first time that there are regional differences in the requirement of

  3. On the Behavior of ECN/RED Gateways Under a Large Number of TCP Flows: Limit Theorems

    National Research Council Canada - National Science Library

    Tinnakornsrisuphap, Peerapol; Makowski, Armand M

    2005-01-01

    .... As the number of competing flows becomes large, the asymptotic queue behavior at the gateway can be described by a simple recursion and the throughput behavior of individual TCP flows becomes asymptotically independent...

  4. Comparative analysis on the selection of number of clusters in community detection

    Science.gov (United States)

    Kawamoto, Tatsuro; Kabashima, Yoshiyuki

    2018-02-01

    We conduct a comparative analysis of various estimates of the number of clusters in community detection. An exhaustive comparison requires testing all possible combinations of frameworks, algorithms, and assessment criteria. In this paper we focus on the framework based on a stochastic block model, and investigate the performance of greedy algorithms, statistical inference, and spectral methods. For the assessment criteria, we consider modularity, the map equation, Bethe free energy, prediction errors, and isolated eigenvalues. From the analysis, the tendencies of the assessment criteria and algorithms to overfit and underfit become apparent. In addition, we propose that the alluvial diagram is a suitable tool to visualize statistical inference results and can be useful to determine the number of clusters.
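Modularity, one of the assessment criteria compared in this study, can be used directly to score candidate cluster counts: compute it for each candidate partition and pick the best. A minimal sketch on a toy graph (Newman modularity written out from its definition; this is an illustration, not the authors' code):

```python
import numpy as np

def modularity(A: np.ndarray, labels: list) -> float:
    """Newman modularity Q of a partition of an undirected graph.

    Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)
    """
    m = A.sum() / 2.0          # number of edges
    k = A.sum(axis=1)          # node degrees
    q = 0.0
    n = len(labels)
    for i in range(n):
        for j in range(n):
            if labels[i] == labels[j]:
                q += A[i, j] - k[i] * k[j] / (2 * m)
    return q / (2 * m)

# Two triangles joined by a single edge: the natural number of clusters is 2.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1

candidates = {1: [0] * 6,
              2: [0, 0, 0, 1, 1, 1],
              3: [0, 0, 1, 1, 2, 2]}
scores = {k: modularity(A, lab) for k, lab in candidates.items()}
best_k = max(scores, key=scores.get)
```

Here modularity correctly peaks at two clusters; the paper's point is that such criteria can also over- or underfit, which is why comparing several of them matters.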

  5. Numerical analysis of jet impingement heat transfer at high jet Reynolds number and large temperature difference

    DEFF Research Database (Denmark)

    Jensen, Michael Vincent; Walther, Jens Honore

    2013-01-01

    was investigated at a jet Reynolds number of 1.66 × 10⁵ and a temperature difference between jet inlet and wall of 1600 K. The focus was on the convective heat transfer contribution as thermal radiation was not included in the investigation. A considerable influence of the turbulence intensity at the jet inlet... to about 100% were observed. Furthermore, the variation in stagnation point heat transfer was examined for jet Reynolds numbers in the range from 1.10 × 10⁵ to 6.64 × 10⁵. Based on the investigations, a correlation is suggested between the stagnation point Nusselt number, the jet Reynolds number, and the turbulence intensity at the jet inlet for impinging jet flows at high jet Reynolds numbers. Copyright © 2013 Taylor and Francis Group, LLC.

  6. Arbitrarily large numbers of kink internal modes in inhomogeneous sine-Gordon equations

    Energy Technology Data Exchange (ETDEWEB)

    González, J.A., E-mail: jalbertgonz@yahoo.es [Department of Physics, Florida International University, Miami, FL 33199 (United States); Department of Natural Sciences, Miami Dade College, 627 SW 27th Ave., Miami, FL 33135 (United States); Bellorín, A., E-mail: alberto.bellorin@ucv.ve [Escuela de Física, Facultad de Ciencias, Universidad Central de Venezuela, Apartado Postal 47586, Caracas 1041-A (Venezuela, Bolivarian Republic of); García-Ñustes, M.A., E-mail: monica.garcia@pucv.cl [Instituto de Física, Pontificia Universidad Católica de Valparaíso, Casilla 4059 (Chile); Guerrero, L.E., E-mail: lguerre@usb.ve [Departamento de Física, Universidad Simón Bolívar, Apartado Postal 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of); Jiménez, S., E-mail: s.jimenez@upm.es [Departamento de Matemática Aplicada a las TT.II., E.T.S.I. Telecomunicación, Universidad Politécnica de Madrid, 28040-Madrid (Spain); Vázquez, L., E-mail: lvazquez@fdi.ucm.es [Departamento de Matemática Aplicada, Facultad de Informática, Universidad Complutense de Madrid, 28040-Madrid (Spain)

    2017-06-28

    We prove analytically the existence of an infinite number of internal (shape) modes of sine-Gordon solitons in the presence of some inhomogeneous long-range forces, provided some conditions are satisfied. - Highlights: • We have found exact kink solutions to the perturbed sine-Gordon equation. • We have been able to study analytically the kink stability problem. • A kink equilibrated by an exponentially-localized perturbation has a finite number of oscillation modes. • A sufficiently broad equilibrating perturbation supports an infinite number of soliton internal modes.

  7. What caused a large number of fatalities in the Tohoku earthquake?

    Science.gov (United States)

    Ando, M.; Ishida, M.; Nishikawa, Y.; Mizuki, C.; Hayashi, Y.

    2012-04-01

    The Mw 9.0 earthquake caused 20,000 deaths and missing persons in northeastern Japan. In the 115 years prior to this event, three historical tsunamis struck the region, one of which, a "tsunami earthquake", resulted in a death toll of 22,000. Since then, numerous breakwaters were constructed along the entire northeastern coast, tsunami evacuation drills were carried out, and hazard maps were distributed to local residents in numerous communities. However, despite these constructions and preparedness efforts, the March 11 Tohoku earthquake caused numerous fatalities. The strong shaking lasted three minutes or longer, so all residents recognized that this was the strongest and longest earthquake they had ever experienced. The tsunami inundated an enormous area of about 560 km² across 35 cities along the coast of northeast Japan. To find out the reasons behind the high number of fatalities due to the March 11 tsunami, we interviewed 150 tsunami survivors at public evacuation shelters in 7 cities, mainly in Iwate prefecture, in mid-April and early June 2011. Interviews lasted about 30 min or longer and focused on the survivors' evacuation behaviors and those they had observed. On the basis of the interviews, we found that residents' decisions not to evacuate immediately were partly due to or influenced by earthquake science results. Below are some of the factors that affected residents' decisions. 1. Earthquake hazard assessments turned out to be incorrect. Expected earthquake magnitudes and resultant hazards in northeastern Japan assessed and publicized by the government were significantly smaller than the actual Tohoku earthquake. 2. Many residents did not receive accurate tsunami warnings. The first tsunami warnings were too small compared with the actual tsunami heights. 3. Previous frequent warnings with overestimated tsunami heights influenced the behavior of the residents. 4. Many local residents above 55 years old experienced

  8. A High-Throughput Computational Framework for Identifying Significant Copy Number Aberrations from Array Comparative Genomic Hybridisation Data

    Directory of Open Access Journals (Sweden)

    Ian Roberts

    2012-01-01

    Reliable identification of copy number aberrations (CNA) from comparative genomic hybridization data would be improved by the availability of a generalised method for processing large datasets. To this end, we developed swatCGH, a data analysis framework and region detection heuristic for computational grids. swatCGH analyses sequentially displaced (sliding) windows of neighbouring probes and applies adaptive thresholds of varying stringency to identify the 10% of each chromosome that contains the most frequently occurring CNAs. We used the method to analyse a published dataset, comparing data preprocessed using four different DNA segmentation algorithms, and two methods for prioritising the detected CNAs. The consolidated list of the most commonly detected aberrations confirmed the value of swatCGH as a simplified high-throughput method for identifying biologically significant CNA regions of interest.

  9. An initial comparative map of copy number variations in the goat (Capra hircus genome

    Directory of Open Access Journals (Sweden)

    Casadio Rita

    2010-11-01

    Background: The goat (Capra hircus) represents one of the most important farm animal species. It is reared in all continents with an estimated world population of about 800 million animals. Despite its importance, studies on the goat genome are still in their infancy compared to those in other farm animal species. Comparative mapping between cattle and goat showed only a few rearrangements, in agreement with the similarity of chromosome banding. We carried out a cross-species cattle-goat array comparative genome hybridization (aCGH) experiment in order to identify copy number variations (CNVs) in the goat genome, analysing animals of different breeds (Saanen, Camosciata delle Alpi, Girgentana, and Murciano-Granadina) using a tiling oligonucleotide array with ~385,000 probes designed on the bovine genome. Results: We identified a total of 161 CNVs (an average of 17.9 CNVs per goat), with the largest number in the Saanen breed and the lowest in the Camosciata delle Alpi goat. By aggregating overlapping CNVs identified in different animals we determined CNV regions (CNVRs): on the whole, we identified 127 CNVRs covering about 11.47 Mb of the virtual goat genome referred to the bovine genome (0.435% of the latter genome). These 127 CNVRs included 86 losses and 41 gains and ranged from about 24 kb to about 1.07 Mb, with a mean and median equal to 90,292 bp and 49,530 bp, respectively. To evaluate whether the identified goat CNVRs overlap with those reported in the cattle genome, we compared our results with those obtained in four independent cattle experiments. Overlapping between goat and cattle CNVRs was highly significant (P ...). Conclusions: We describe a first map of goat CNVRs. This provides information on a comparative basis with the cattle genome by identifying putative recurrent interspecies CNVs between these two ruminant species. Several goat CNVs affect genes with important biological functions. Further studies are needed to evaluate the

  10. Comparative analyses of the neuron numbers and volumes of the amygdaloid complex in old and new world primates.

    Science.gov (United States)

    Carlo, C N; Stefanacci, L; Semendeferi, K; Stevens, C F

    2010-04-15

    The amygdaloid complex (AC), a key component of the limbic system, is a brain region critical for the detection and interpretation of emotionally salient information. Therefore, changes in its structure and function are likely to provide correlates of mood and emotion disorders, diseases that afflict a large portion of the human population. Previous gross comparisons of the AC in control and diseased individuals have, however, mainly failed to discover these expected correlations with diseases. We have characterized AC nuclei in different nonhuman primate species to establish a baseline for more refined comparisons between the normal and the diseased amygdala. AC nuclei volume and neuron number in 19 subdivisions are reported from 13 Old and New World primate brains, spanning five primate species, and compared with corresponding data from humans. Analysis of the four largest AC nuclei revealed that volume and neuron number of one component, the central nucleus, has a negative allometric relationship with total amygdala volume and neuron number, which is in contrast with the isometric relationship found in the other AC nuclei (for both neuron number and volume). Neuron density decreases across all four nuclei according to a single power law with an exponent of about minus one-half. Because we have included quantitative comparisons with great apes and humans, our conclusions apply to human brains, and our scaling laws can potentially be used to study the anatomical correlates of the amygdala in disorders involving pathological emotion processing. (c) 2009 Wiley-Liss, Inc.

  11. Large Eddy Simulation of an SD7003 Airfoil: Effects of Reynolds number and Subgrid-scale modeling

    DEFF Research Database (Denmark)

    Sarlak Chivaee, Hamid

    2017-01-01

    This paper presents results of a series of numerical simulations studying the aerodynamic characteristics of the low Reynolds number Selig-Donovan airfoil, SD7003. The Large Eddy Simulation (LES) technique is used for all computations at chord-based Reynolds numbers of 10,000, 24,000 and 60,000... the Reynolds number, and the effect is visible even at a relatively low chord-Reynolds number of 60,000. Among the tested models, the dynamic Smagorinsky gives the poorest predictions of the flow, with overprediction of lift and a larger separation on the airfoil's suction side. Among various models, the implicit...

  12. A large-scale survey of genetic copy number variations among Han Chinese residing in Taiwan

    Directory of Open Access Journals (Sweden)

    Wu Jer-Yuarn

    2008-12-01

    Background: Copy number variations (CNVs) have recently been recognized as important structural variations in the human genome. CNVs can affect gene expression and thus may contribute to phenotypic differences. The copy number inferring tool (CNIT) is an effective hidden Markov model-based algorithm for estimating allele-specific copy number and predicting chromosomal alterations from single nucleotide polymorphism microarrays. The CNIT algorithm, which was constructed using data from 270 HapMap multi-ethnic individuals, was applied to identify CNVs from 300 unrelated Han Chinese individuals in Taiwan. Results: Using stringent selection criteria, 230 regions with variable copy numbers were identified in the Han Chinese population; 133 (57.83%) had been reported previously, and 64 displayed greater than 1% CNV allele frequency. The average size of the CNV regions was 322 kb (ranging from 1.48 kb to 5.68 Mb) and covered a total of 2.47% of the human genome. A total of 196 of the CNV regions were simple deletions and 27 were simple amplifications. There were 449 genes and 5 microRNAs within these CNV regions; some of these genes are known to be associated with diseases. Conclusion: The identified CNVs are characteristic of the Han Chinese population and should be considered when genetic studies are conducted. The CNV distribution in the human genome is still poorly characterized, and there is much diversity among different ethnic populations.

  13. Q-factorial Gorenstein toric Fano varieties with large Picard number

    DEFF Research Database (Denmark)

    Nill, Benjamin; Øbro, Mikkel

    2010-01-01

    In dimension d, Q-factorial Gorenstein toric Fano varieties with Picard number ρ_X correspond to simplicial reflexive polytopes with ρ_X + d vertices. Casagrande showed that any d-dimensional simplicial reflexive polytope has at most 3d and 3d−1 vertices if d is even and odd, respectively. Moreover, for d even there is up to unimodular equivalence only one such polytope with 3d vertices, corresponding to the product of d/2 copies of a del Pezzo surface of degree six. In this paper we completely classify all d-dimensional simplicial reflexive polytopes having 3d−1 vertices, corresponding to d-dimensional Q-factorial Gorenstein toric Fano varieties with Picard number 2d−1. For d even, there exist three such varieties, with two being singular, while for d > 1 odd there exist precisely two, both being nonsingular toric fiber...

  14. Large scale Direct Numerical Simulation of premixed turbulent jet flames at high Reynolds number

    Science.gov (United States)

    Attili, Antonio; Luca, Stefano; Lo Schiavo, Ermanno; Bisetti, Fabrizio; Creta, Francesco

    2016-11-01

    A set of direct numerical simulations of turbulent premixed jet flames at different Reynolds and Karlovitz numbers is presented. The simulations feature finite rate chemistry with 16 species and 73 reactions and up to 22 billion grid points. The jet consists of a methane/air mixture with equivalence ratio ϕ = 0.7 and temperature varying between 500 and 800 K. The temperature and species concentrations in the coflow correspond to the equilibrium state of the burnt mixture. All the simulations are performed at 4 atm. The flame length, normalized by the jet width, decreases significantly as the Reynolds number increases. This is consistent with an increase of the turbulent flame speed due to the increased integral scale of turbulence. This behavior is typical of flames in the thin-reaction zone regime, which are affected by turbulent transport in the preheat layer. Fractal dimension and topology of the flame surface, statistics of temperature gradients, and flame structure are investigated and the dependence of these quantities on the Reynolds number is assessed.

  15. Efficient high speed communications over electrical powerlines for a large number of users

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J.; Tripathi, K.; Latchman, H.A. [Florida Univ., Gainesville, FL (United States). Dept. of Electrical and Computer Engineering

    2007-07-01

    Affordable broadband Internet communication is currently available for residential use via cable modem and other forms of digital subscriber lines (DSL). Powerline communication (PLC) systems were never considered seriously for communications due to their low speed and high development cost. However, due to technological advances PLCs are now spreading to local area networks and broadband over power line systems. This paper presented a newly proposed modification to the standard HomePlug 1.0 MAC protocol to make it a constant contention window-based scheme. The HomePlug 1.0 was developed based on orthogonal frequency division multiplexing (OFDM) and carrier sense multiple access with collision avoidance (CSMA/CA). It is currently the most commonly used technology of power line communications, supporting a transmission rate of up to 14 Mbps on the power line. However, the throughput performance of this original scheme becomes critical when the number of users increases. For that reason, a constant contention window based medium access control protocol algorithm of HomePlug 1.0 was proposed under the assumption that the number of active stations is known. An analytical framework based on Markov Chains was developed in order to model this modified protocol under saturation conditions. Modeling results accurately matched the actual performance of the system. This paper revealed that the performance can be improved significantly if the variables were parameterized in terms of the number of active stations. 15 refs., 1 tab., 6 figs.
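The benefit of parameterizing the contention window by the number of active stations, as the modified HomePlug MAC above proposes, can be illustrated with a toy slotted-contention simulation: each station draws a random backoff, and a round succeeds only if a unique station holds the minimum. The window sizes below are hypothetical, not the HomePlug 1.0 parameters:

```python
import random

random.seed(1)

def success_rate(n_stations: int, cw: int, trials: int = 20000) -> float:
    """Fraction of contention rounds won by a unique station (no collision)."""
    wins = 0
    for _ in range(trials):
        backoffs = [random.randrange(cw) for _ in range(n_stations)]
        m = min(backoffs)
        if backoffs.count(m) == 1:  # exactly one station at the minimum
            wins += 1
    return wins / trials

fixed = success_rate(50, cw=8)        # small fixed window: heavy collisions
scaled = success_rate(50, cw=4 * 50)  # window scaled with station count
```

With 50 stations, a small fixed window collapses to near-constant collisions, while a window sized in proportion to the number of active stations keeps most rounds collision-free — the effect the proposed constant-contention-window scheme exploits.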

  16. How comparable are size-resolved particle number concentrations from different instruments?

    Science.gov (United States)

    Hornsby, K. E.; Pryor, S. C.

    2012-12-01

    The need for comparability of particle size resolved measurements originates from multiple drivers including: (i) Recent suggestions that air quality standards for particulate matter should migrate from being mass-based to incorporating number concentrations. This move would necessarily be predicated on measurement comparability, which is absolutely critical to compliance determination. (ii) The need to quantify and diagnose causes of variability in nucleation and growth rates in nano-particle experiments conducted in different locations. (iii) Epidemiological research designed to identify key parameters in human health responses to fine particle exposure. Here we present results from a detailed controlled laboratory instrument inter-comparison experiment designed to investigate data comparability in the size range of 2.01-523.3 nm across a range of particle composition, modal diameter and absolute concentration. Particle size distributions were generated using a TSI model 3940 Aerosol Generation System (AGS) diluted using zero air, and sampled using four TSI Scanning Mobility Particle Spectrometer (SMPS) configurations and a TSI model 3091 Fast Mobility Particle Sizer (FMPS). The SMPS configurations used two Electrostatic Classifiers (EC) (model 3080) attached to either a Long DMA (LDMA) (model 3081) or a Nano DMA (NDMA) (model 3085), plumbed to either a TSI model 3025A butanol Condensation Particle Counter (CPC) or a TSI model 3788 water CPC. All four systems were run using both high and low flow conditions, and were operated with both the internal diffusion loss and multiple charge corrections turned on. The particle compositions tested were sodium chloride, ammonium nitrate and olive oil diluted in ethanol. Particles of all three were generated at three peak concentration levels (spanning the range observed at our experimental site), and three modal particle diameters. Experimental conditions were maintained for a period of 20 minutes to ensure experimental

  17. Detailed Measurements of Rayleigh-Taylor Mixing at Large and Small Atwood Numbers

    International Nuclear Information System (INIS)

    Malcolm J. Andrews, Ph.D.

    2004-01-01

    This project has two major tasks: Task 1. The construction of a new air/helium facility to collect detailed measurements of Rayleigh-Taylor (RT) mixing at high Atwood number, and the distribution of these data to LLNL, LANL, and Alliance members for code validation and design purposes. Task 2. The collection of initial condition data from the new Air/Helium facility, for use with validation of RT simulation codes at LLNL and LANL. Also, studies of multi-layer mixing with the existing water channel facility. Over the last twelve (12) months there has been excellent progress, detailed in this report, with both tasks. As of December 10, 2004, the air/helium facility is now complete and extensive testing and validation of diagnostics has been performed. Currently experiments with air/helium up to Atwood numbers of 0.25 (the maximum is 0.75, but the highest Reynolds numbers are at 0.25) are being performed. The progress matches the project plan, as does the budget, and we expect this to continue for 2005. With interest expressed from LLNL we have continued with initial condition studies using the water channel. This work has also progressed well, with one of the graduate Research Assistants (Mr. Nick Mueschke) visiting LLNL the past two summers to work with Dr. O. Schilling. Several journal papers are in preparation that describe the work. Two MSc.'s have been completed (Mr. Nick Mueschke, and Mr. Wayne Kraft, 12/1/03). Nick and Wayne are both pursuing Ph.D.s' funded by this DOE Alliances project. Presently three (3) Ph.D. graduate Research Assistants are supported on the project, and two (2) undergraduate Research Assistants. During the year two (2) journal papers and two (2) conference papers have been published, ten (10) presentations made at conferences, and three (3) invited presentations
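The Atwood numbers quoted above follow directly from the definition A = (ρ₁ − ρ₂)/(ρ₁ + ρ₂); a quick check for the air/helium pair, using assumed textbook densities at roughly room temperature:

```python
rho_air = 1.204  # kg/m^3, air near 20 °C (assumed standard value)
rho_he = 0.166   # kg/m^3, helium near 20 °C (assumed standard value)

# Atwood number for the air/helium pair
A = (rho_air - rho_he) / (rho_air + rho_he)
```

This gives A ≈ 0.76, consistent with the 0.75 maximum quoted for the facility; diluting the helium stream with air lowers the density contrast and hence A, which is how intermediate values such as 0.25 are reached.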

  18. Mapping Ad Hoc Communications Network of a Large Number Fixed-Wing UAV Swarm

    Science.gov (United States)

    2017-03-01

    shows like "Agents of S.H.I.E.L.D". Inspiration can come from the imaginative minds of people or from the world around us. Swarms have demonstrated a...high degree of success. Bees, ants, termites, and naked mole rats maintain large groups that distribute tasks among individuals in order to achieve...the application layer and not the transport layer. Real-world vehicle-to-vehicle packet delivery rates for the 50-UAV swarm event were described in

  19. Analyzing the Large Number of Variables in Biomedical and Satellite Imagery

    CERN Document Server

    Good, Phillip I

    2011-01-01

    This book grew out of an online interactive offered through statcourse.com, and it soon became apparent to the author that the course was too limited in terms of time and length in light of the broad backgrounds of the enrolled students. The statisticians who took the course needed to be brought up to speed both on the biological context as well as on the specialized statistical methods needed to handle large arrays. Biologists and physicians, even though fully knowledgeable concerning the procedures used to generate microarrays, EEGs, or MRIs, needed a full introduction to the resampling methods...

  20. Linear optics and projective measurements alone suffice to create large-photon-number path entanglement

    International Nuclear Information System (INIS)

    Lee, Hwang; Kok, Pieter; Dowling, Jonathan P.; Cerf, Nicolas J.

    2002-01-01

    We propose a method for preparing maximal path entanglement with a definite photon-number N, larger than two, using projective measurements. In contrast with the previously known schemes, our method uses only linear optics. Specifically, we exhibit a way of generating four-photon, path-entangled states of the form |4,0> + |0,4>, using only four beam splitters and two detectors. These states are of major interest as a resource for quantum interferometric sensors as well as for optical quantum lithography and quantum holography.
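
    As a rough illustration of why such states matter for interferometry: a path-entangled |N,0> + |0,N> ("NOON") state acquires a relative phase N times faster than a single photon, so its detection fringes oscillate N times per 2π. A minimal numerical sketch (the function and values are ours, not from the paper):

```python
import numpy as np

def noon_fringe(phi, n):
    """Detection probability for a (|N,0> + |0,N>)/sqrt(2) state after a
    relative phase shift phi in one arm: the phase accumulates as N*phi,
    giving fringes proportional to cos^2(N*phi/2)."""
    return 0.5 * (1 + np.cos(n * phi))

phi = np.linspace(0, 2 * np.pi, 9)
single = noon_fringe(phi, 1)   # one fringe over 2*pi
noon4 = noon_fringe(phi, 4)    # four fringes: the super-resolution of |4,0>+|0,4>
```

    The N-fold fringe compression is what makes such states useful for quantum lithography, where feature size scales with the fringe period.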

  1. Laboratory Study of Magnetorotational Instability and Hydrodynamic Stability at Large Reynolds Numbers

    Science.gov (United States)

    Ji, H.; Burin, M.; Schartman, E.; Goodman, J.; Liu, W.

    2006-01-01

    Two plausible mechanisms have been proposed to explain rapid angular momentum transport during accretion processes in astrophysical disks: nonlinear hydrodynamic instabilities and magnetorotational instability (MRI). A laboratory experiment in a short Taylor-Couette flow geometry has been constructed in Princeton to study both mechanisms, with novel features for better controls of the boundary-driven secondary flows (Ekman circulation). Initial results on hydrodynamic stability have shown negligible angular momentum transport in Keplerian-like flows with Reynolds numbers approaching one million, casting strong doubt on the viability of nonlinear hydrodynamic instability as a source for accretion disk turbulence.

  2. Navigating the complexities of qualitative comparative analysis: case numbers, necessity relations, and model ambiguities.

    Science.gov (United States)

    Thiem, Alrik

    2014-12-01

    In recent years, the method of Qualitative Comparative Analysis (QCA) has been enjoying increasing levels of popularity in evaluation and directly neighboring fields. Its holistic approach to causal data analysis resonates with researchers whose theories posit complex conjunctions of conditions and events. However, due to QCA's relative immaturity, some of its technicalities and objectives have not yet been well understood. In this article, I seek to raise awareness of six pitfalls of employing QCA with regard to the following three central aspects: case numbers, necessity relations, and model ambiguities. Most importantly, I argue that case numbers are irrelevant to the methodological choice of QCA or any of its variants, that necessity is not as simple a concept as it has been suggested by many methodologists, and that doubt must be cast on the determinacy of virtually all results presented in past QCA research. By means of empirical examples from published articles, I explain the background of these pitfalls and introduce appropriate procedures, partly with reference to current software, that help avoid them. QCA carries great potential for scholars in evaluation and directly neighboring areas interested in the analysis of complex dependencies in configurational data. If users beware of the pitfalls introduced in this article, and if they avoid mechanistic adherence to doubtful "standards of good practice" at this stage of development, then research with QCA will gain in quality, as a result of which a more solid foundation for cumulative knowledge generation and well-informed policy decisions will also be created. © The Author(s) 2014.

  3. Dam risk reduction study for a number of large tailings dams in Ontario

    Energy Technology Data Exchange (ETDEWEB)

    Verma, N. [AMEC Earth and Environmental Ltd., Mississauga, ON (Canada); Small, A. [AMEC Earth and Environmental Ltd., Fredericton, NB (Canada); Martin, T. [AMEC Earth and Environmental, Burnaby, BC (Canada); Cacciotti, D. [AMEC Earth and Environmental Ltd., Sudbury, ON (Canada); Ross, T. [Vale Inco Ltd., Sudbury, ON (Canada)

    2009-07-01

    This paper discussed a risk reduction study conducted for 10 large tailings dams located at a central tailings facility in Ontario. Located near large industrial and urban developments, the tailings dams were built using an upstream method of construction that did not involve beach compaction or the provision of under-drainage. The study provided a historical background for the dam and presented results from investigations and instrumentation data. The methods used to develop the dam configurations were discussed, and remedial measures and risk assessment measures used on the dams were reviewed. The aim of the study was to address key sources of risk, which include the presence of high pore pressures and hydraulic gradients; the potential for liquefaction; slope instability; and the potential for overtopping. A borehole investigation was conducted and piezocone probes were used to obtain continuous data and determine soil and groundwater conditions. The study identified that the lower portion of the dam slopes were of concern. Erosion gullies could lead to larger scale failures, and elevated pore pressures could lead to the risk of seepage breakouts. It was concluded that remedial measures are now being conducted to ensure slope stability. 6 refs., 1 tab., 6 figs.

  4. EUPAN enables pan-genome studies of a large number of eukaryotic genomes.

    Science.gov (United States)

    Hu, Zhiqiang; Sun, Chen; Lu, Kuang-Chen; Chu, Xixia; Zhao, Yue; Lu, Jinyuan; Shi, Jianxin; Wei, Chaochun

    2017-08-01

    Pan-genome analyses are routinely carried out for bacteria to interpret the within-species gene presence/absence variations (PAVs). However, pan-genome analyses are rare for eukaryotes due to the large sizes and higher complexities of their genomes. Here we proposed EUPAN, a eukaryotic pan-genome analysis toolkit, enabling automatic large-scale eukaryotic pan-genome analyses and detection of gene PAVs at a relatively low sequencing depth. In previous studies, we demonstrated the effectiveness and high accuracy of EUPAN in the pan-genome analysis of 453 rice genomes, in which we also revealed widespread gene PAVs among individual rice genomes. Moreover, EUPAN can be directly applied to current re-sequencing projects primarily focusing on single nucleotide polymorphisms. EUPAN is implemented in Perl, R and C++. It is supported under Linux and preferred for a computer cluster with an LSF or SLURM job scheduling system. EUPAN together with its standard operating procedure (SOP) is freely available for non-commercial use (CC BY-NC 4.0) at http://cgm.sjtu.edu.cn/eupan/index.html . ccwei@sjtu.edu.cn or jianxin.shi@sjtu.edu.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
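
    EUPAN's pipeline itself is not reproduced here, but the core bookkeeping of a pan-genome analysis, classifying gene families as core (present in all genomes) or variable (the PAVs), can be sketched on a toy presence/absence matrix; the matrix below is invented for illustration (EUPAN derives PAVs from mapped sequencing depth, not from a ready-made table):

```python
import numpy as np

# Toy presence/absence matrix: rows = genomes, columns = gene families.
pav = np.array([
    [1, 1, 1, 0, 1],
    [1, 1, 0, 1, 1],
    [1, 1, 1, 1, 0],
], dtype=bool)

core = pav.all(axis=0)      # gene families present in every genome
pan = pav.any(axis=0)       # present in at least one genome (the pan-genome)
variable = pan & ~core      # the presence/absence variations of interest

print(core.sum(), pan.sum(), variable.sum())  # → 2 5 3
```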

  5. Application of Evolution Strategies to the Design of Tracking Filters with a Large Number of Specifications

    Directory of Open Access Journals (Sweden)

    Jesús García Herrero

    2003-07-01

    This paper describes the application of evolution strategies to the design of interacting multiple model (IMM) tracking filters in order to fulfill a large table of performance specifications. These specifications define the desired filter performance in a thorough set of selected test scenarios, for different figures of merit and input conditions, imposing hundreds of performance goals. The design problem is stated as a numeric search in the filter parameter space to attain all specifications or at least to minimize, in a compromise, the excess over some specifications as much as possible, applying global optimization techniques from the field of evolutionary computation. In addition, a new methodology is proposed to integrate specifications in a fitness function able to effectively guide the search to suitable solutions. The method has been applied to the design of an IMM tracker for a real-world civil air traffic control application: the accomplishment of specifications defined for the future European ARTAS system.

  6. Jet Impingement Heat Transfer at High Reynolds Numbers and Large Density Variations

    DEFF Research Database (Denmark)

    Jensen, Michael Vincent; Walther, Jens Honore

    2010-01-01

    Jet impingement heat transfer from a round gas jet to a flat wall has been investigated numerically in a configuration with H/D=2, where H is the distance from the jet inlet to the wall and D is the jet diameter. The jet Reynolds number was 361000 and the density ratio across the wall boundary layer was 3.3 due to a substantial temperature difference of 1600K between jet and wall. Results are presented which indicate very high heat flux levels and it is demonstrated that the jet inlet turbulence intensity significantly influences the heat transfer results, especially in the stagnation region. The results also show a noticeable difference in the heat transfer predictions when applying different turbulence models. Furthermore, calculations were performed to study the effect of applying temperature dependent thermophysical properties versus constant properties and the effect of calculating the gas...

  7. On the strong law of large numbers for $\\varphi$-subgaussian random variables

    OpenAIRE

    Zajkowski, Krzysztof

    2016-01-01

    For $p\\ge 1$ let $\\varphi_p(x)=x^2/2$ if $|x|\\le 1$ and $\\varphi_p(x)=1/p|x|^p-1/p+1/2$ if $|x|>1$. For a random variable $\\xi$ let $\\tau_{\\varphi_p}(\\xi)$ denote $\\inf\\{a\\ge 0:\\;\\forall_{\\lambda\\in\\mathbb{R}}\\; \\ln\\mathbb{E}\\exp(\\lambda\\xi)\\le\\varphi_p(a\\lambda)\\}$; $\\tau_{\\varphi_p}$ is a norm in a space $Sub_{\\varphi_p}=\\{\\xi:\\;\\tau_{\\varphi_p}(\\xi)1$) there exist positive constants $c$ and $\\alpha$ such that for every natural number $n$ the following inequality $\\tau_{\\varphi_p}(\\sum_{i=1...

  8. Medicine in words and numbers: a cross-sectional survey comparing probability assessment scales

    Directory of Open Access Journals (Sweden)

    Koele Pieter

    2007-06-01

    Background In the complex domain of medical decision making, reasoning under uncertainty can benefit from supporting tools. Automated decision support tools often build upon mathematical models, such as Bayesian networks. These networks require probabilities which often have to be assessed by experts in the domain of application. Probability response scales can be used to support the assessment process. We compare assessments obtained with different types of response scale. Methods General practitioners (GPs) gave assessments on and preferences for three different probability response scales: a numerical scale, a scale with only verbal labels, and a combined verbal-numerical scale we had designed ourselves. Standard analyses of variance were performed. Results No differences in assessments over the three response scales were found. Preferences for type of scale differed: the less experienced GPs preferred the verbal scale, the most experienced preferred the numerical scale, with the groups in between having a preference for the combined verbal-numerical scale. Conclusion We conclude that all three response scales are equally suitable for supporting probability assessment. The combined verbal-numerical scale is a good choice for aiding the process, since it offers numerical labels to those who prefer numbers and verbal labels to those who prefer words, and accommodates both more and less experienced professionals.

  9. Slepian simulation of distributions of plastic displacements of earthquake excited shear frames with a large number of stories

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Ditlevsen, Ove

    2005-01-01

    The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elastically...

  10. Introduction to the spectral distribution method. Application example to the subspaces with a large number of quasi particles

    International Nuclear Information System (INIS)

    Arvieu, R.

    The assumptions and principles of the spectral distribution method are reviewed. The object of the method is to deduce information on the nuclear spectra by constructing a frequency function which has the same first few moments as the exact frequency function, these moments being then exactly calculated. The method is applied to subspaces containing a large number of quasi particles.

  11. Strong Law of Large Numbers for Hidden Markov Chains Indexed by an Infinite Tree with Uniformly Bounded Degrees

    Directory of Open Access Journals (Sweden)

    Huilin Huang

    2014-01-01

    We study strong limit theorems for hidden Markov chain fields indexed by an infinite tree with uniformly bounded degrees. We mainly establish the strong law of large numbers for hidden Markov chain fields indexed by an infinite tree with uniformly bounded degrees and give the strong limit law of the conditional sample entropy rate.
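
    The tree-indexed hidden-Markov setting of the paper is beyond a short example, but the basic phenomenon behind such laws of large numbers, empirical occupation frequencies of a Markov chain converging to its stationary distribution, can be illustrated with an ordinary two-state chain (transition probabilities invented for the demo, not from the paper):

```python
import random

random.seed(0)

# Two-state chain with P = [[0.8, 0.2], [0.3, 0.7]].
# Its stationary distribution solves pi = pi P, giving pi(0) = 0.3/(0.2+0.3) = 0.6.
P = {0: (0.8, 0.2), 1: (0.3, 0.7)}

state, visits0, n = 0, 0, 200_000
for _ in range(n):
    visits0 += (state == 0)
    state = 0 if random.random() < P[state][0] else 1

print(visits0 / n)  # close to the stationary probability 0.6
```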

  12. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases

    NARCIS (Netherlands)

    Heidema, A.G.; Boer, J.M.A.; Nagelkerke, N.; Mariman, E.C.M.; A, van der D.L.; Feskens, E.J.M.

    2006-01-01

    Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods

  13. Hydrodynamic interaction on large-Reynolds-number aligned bubbles: Drag effects

    International Nuclear Information System (INIS)

    Ramirez-Munoz, J.; Salinas-Rodriguez, E.; Soria, A.; Gama-Goicochea, A.

    2011-01-01

    Highlights: → The hydrodynamic interaction of a pair of aligned equal-sized bubbles is analyzed. → The leading bubble wake decreases the drag on the trailing bubble. → A new semi-analytical model for the trailing bubble's drag is presented. → The equilibrium distance between bubbles is predicted. - Abstract: The hydrodynamic interaction of two equal-sized spherical gas bubbles rising along a vertical line with a Reynolds number (Re) between 50 and 200 is analyzed. An approach to estimate the trailing bubble drag based on the search of a proper reference fluid velocity is proposed. Our main result is a new, simple semi-analytical model for the trailing bubble drag. Additionally, the equilibrium separation distance between bubbles is predicted. The proposed models agree quantitatively up to small distances between bubbles, with reported data for 50 ≤ Re ≤ 200. The relative average error for the trailing bubble drag, Er, is found to be in the range 1.1 ≤ Er ≤ 1.7, i.e., it is of the same order as the analytical predictions in the literature.

  14. Hydrodynamic interaction on large-Reynolds-number aligned bubbles: Drag effects

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez-Munoz, J., E-mail: jrm@correo.azc.uam.mx [Departamento de Energia, Universidad Autonoma Metropolitana-Azcapotzalco, Av. San Pablo 180, Col. Reynosa Tamaulipas, 02200 Mexico D.F. (Mexico); Centro de Investigacion en Polimeros, Marcos Achar Lobaton No. 2, Tepexpan, 55885 Acolman, Edo. de Mexico (Mexico); Salinas-Rodriguez, E.; Soria, A. [Departamento de IPH, Universidad Autonoma Metropolitana-Iztapalapa, San Rafael Atlixco 186, Col. Vicentina, Iztapalapa, 09340 Mexico D.F. (Mexico); Gama-Goicochea, A. [Centro de Investigacion en Polimeros, Marcos Achar Lobaton No. 2, Tepexpan, 55885 Acolman, Edo. de Mexico (Mexico)

    2011-07-15

    Highlights: → The hydrodynamic interaction of a pair of aligned equal-sized bubbles is analyzed. → The leading bubble wake decreases the drag on the trailing bubble. → A new semi-analytical model for the trailing bubble's drag is presented. → The equilibrium distance between bubbles is predicted. - Abstract: The hydrodynamic interaction of two equal-sized spherical gas bubbles rising along a vertical line with a Reynolds number (Re) between 50 and 200 is analyzed. An approach to estimate the trailing bubble drag based on the search of a proper reference fluid velocity is proposed. Our main result is a new, simple semi-analytical model for the trailing bubble drag. Additionally, the equilibrium separation distance between bubbles is predicted. The proposed models agree quantitatively up to small distances between bubbles, with reported data for 50 ≤ Re ≤ 200. The relative average error for the trailing bubble drag, Er, is found to be in the range 1.1 ≤ Er ≤ 1.7, i.e., it is of the same order as the analytical predictions in the literature.

  15. KITSCH AND DURABLE DEVELOPMENT OF THE REGIONS THAT HAVE A LARGE NUMBER OF RELIGIOUS SETTLEMENTS

    Directory of Open Access Journals (Sweden)

    ENEA CONSTANTA

    2016-06-01

    We live in a world of contemporary kitsch, a world that merges the authentic and the false, and in which good taste often meets bad taste. The phenomenon is found everywhere: in art, in cheap literature, in media productions, in shows, in street dialogue, in homes, in politics; in other words, in everyday life. Kitsch has entered tourism directly and can be identified in all forms of tourism worldwide, but especially in religious tourism and pilgrimage, which have enjoyed unexpected success in recent years. This paper analyses the evolution of religious tourist traffic and the ability of religious tourism destinations to remain competitive in spite of these problems: to attract visitors and earn their loyalty, to remain culturally distinctive, and to stay in balance with an environment in which the religious sphere has been invaded by kitsch, mixing dangerously and disgracefully with authentic spirituality. How commerce, and especially its kitsch components, affects this environment is examined from the standpoint of the religious tourism offering, based on a survey of the major monastic ensembles of northern Oltenia. The research objectives were, on the one hand, the contributions and effects of large visitor numbers on the regions that hold religious sites, and on the other hand, the extent and effects of the commercial activity, whether authentic or kitsch, carried out in or near the monastic establishments of those regions. The study covered the northern region of Oltenia, where tourism demand is predominantly and almost exclusively oriented toward religious tourism.

  16. Secondary organic aerosol formation from a large number of reactive man-made organic compounds

    Energy Technology Data Exchange (ETDEWEB)

    Derwent, Richard G., E-mail: r.derwent@btopenworld.com [rdscientific, Newbury, Berkshire (United Kingdom); Jenkin, Michael E. [Atmospheric Chemistry Services, Okehampton, Devon (United Kingdom); Utembe, Steven R.; Shallcross, Dudley E. [School of Chemistry, University of Bristol, Bristol (United Kingdom); Murrells, Tim P.; Passant, Neil R. [AEA Environment and Energy, Harwell International Business Centre, Oxon (United Kingdom)

    2010-07-15

    A photochemical trajectory model has been used to examine the relative propensities of a wide variety of volatile organic compounds (VOCs) emitted by human activities to form secondary organic aerosol (SOA) under one set of highly idealised conditions representing northwest Europe. This study applied a detailed speciated VOC emission inventory and the Master Chemical Mechanism version 3.1 (MCM v3.1) gas phase chemistry, coupled with an optimised representation of gas-aerosol absorptive partitioning of 365 oxygenated chemical reaction product species. In all, SOA formation was estimated from the atmospheric oxidation of 113 emitted VOCs. A number of aromatic compounds, together with some alkanes and terpenes, showed significant propensities to form SOA. When these propensities were folded into a detailed speciated emission inventory, 15 organic compounds together accounted for 97% of the SOA formation potential of UK man-made VOC emissions and 30 emission source categories accounted for 87% of this potential. After road transport and the chemical industry, SOA formation was dominated by the solvents sector which accounted for 28% of the SOA formation potential.
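
    "Folding" per-compound SOA propensities into a speciated emission inventory amounts to a weighted ranking: each compound's contribution is its emission times its propensity, and compounds are ranked by their share of the total. A toy sketch with invented compound names and numbers (not the paper's values):

```python
# Hypothetical emission inventory, kt/yr (invented for illustration).
inventory = {
    "toluene": 50.0,
    "m-xylene": 30.0,
    "n-dodecane": 20.0,
    "ethanol": 120.0,
}
# Hypothetical SOA propensity, g SOA per g VOC oxidised (invented).
propensity = {
    "toluene": 0.10,
    "m-xylene": 0.08,
    "n-dodecane": 0.05,
    "ethanol": 0.0,
}

# Contribution of each compound to the total SOA formation potential.
potential = {c: inventory[c] * propensity[c] for c in inventory}
total = sum(potential.values())
ranked = sorted(potential, key=potential.get, reverse=True)
shares = [potential[c] / total for c in ranked]

print(ranked[0], round(shares[0], 2))  # → toluene 0.6
```

    Note how a high-emission compound with zero propensity (ethanol here) drops out entirely, which is why a short list of compounds can dominate the total.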

  17. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight about the performance of the computers involved when used to solve large-scale scientific problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216.

  18. Large number of ultraconserved elements were already present in the jawed vertebrate ancestor.

    KAUST Repository

    Wang, Jianli; Lee, Alison P; Kodzius, Rimantas; Brenner, Sydney; Venkatesh, Byrappa

    2009-01-01

    Stephen (2008) identified 13,736 ultraconserved elements (UCEs) in placental mammals and investigated their evolution in opossum, chicken, frog, and fugu. They found that there was a massive expansion of UCEs during tetrapod evolution and the substitution rate in UCEs showed a significant decline in tetrapods compared with fugu, suggesting they were exapted in tetrapods. They considered it unlikely that these elements are ancient but evolved at a higher rate in teleost fishes. In this study, we investigated the evolution of UCEs in a cartilaginous fish, the elephant shark and show that nearly half the UCEs were present in the jawed vertebrate ancestor. The substitution rate in UCEs is higher in fugu than in elephant shark, and approximately one-third of ancient UCEs have diverged beyond recognition in teleost fishes. These data indicate that UCEs have evolved at a higher rate in teleost fishes, which may have implications for their vast diversity and evolutionary success.

  19. Large number of ultraconserved elements were already present in the jawed vertebrate ancestor.

    KAUST Repository

    Wang, Jianli

    2009-03-01

    Stephen (2008) identified 13,736 ultraconserved elements (UCEs) in placental mammals and investigated their evolution in opossum, chicken, frog, and fugu. They found that there was a massive expansion of UCEs during tetrapod evolution and the substitution rate in UCEs showed a significant decline in tetrapods compared with fugu, suggesting they were exapted in tetrapods. They considered it unlikely that these elements are ancient but evolved at a higher rate in teleost fishes. In this study, we investigated the evolution of UCEs in a cartilaginous fish, the elephant shark and show that nearly half the UCEs were present in the jawed vertebrate ancestor. The substitution rate in UCEs is higher in fugu than in elephant shark, and approximately one-third of ancient UCEs have diverged beyond recognition in teleost fishes. These data indicate that UCEs have evolved at a higher rate in teleost fishes, which may have implications for their vast diversity and evolutionary success.

  20. Beating the numbers through strategic intervention materials (SIMs): Innovative science teaching for large classes

    Science.gov (United States)

    Alboruto, Venus M.

    2017-05-01

    The study aimed to find out the effectiveness of using Strategic Intervention Materials (SIMs) as an innovative teaching practice in managing large Grade Eight Science classes to raise the performance of the students in terms of science process skills development and mastery of science concepts. Utilizing an experimental research design with two purposefully chosen groups of participants, it was found that there was a significant difference in the performance of the experimental and control groups based on actual class observation and written tests on science process skills, with a p-value of 0.0360 in favor of the experimental class. Further, results of written pre-test and post-test on science concepts showed that the experimental group, with a mean of 24.325 (SD = 3.82), performed better than the control group, with a mean of 20.58 (SD = 4.94), with a registered p-value of 0.00039. Therefore, the use of SIMs significantly contributed to the mastery of science concepts and the development of science process skills. Based on the findings, the following recommendations are offered: 1. that grade eight science teachers should use or adopt the SIMs used in this study to improve their students' performance; 2. training-workshops on developing SIMs must be conducted to help teachers develop SIMs to be used in their classes; 3. school administrators must allocate funds for the development and reproduction of SIMs to be used by the students in their school; and 4. every division should have a repository of SIMs for easy access of the teachers in the entire division.
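
    A group comparison like the one reported can be checked approximately from the summary statistics alone. The sketch below computes a Welch two-sample t statistic from the reported means and SDs; the group sizes are NOT given in the abstract, so n = 40 per group is an invented value for illustration only:

```python
import math

# Reported summary statistics; group sizes are hypothetical (n = 40 each).
m_exp, sd_exp, n_exp = 24.325, 3.82, 40
m_ctl, sd_ctl, n_ctl = 20.58, 4.94, 40

# Welch's t statistic: mean difference over its standard error.
se = math.sqrt(sd_exp**2 / n_exp + sd_ctl**2 / n_ctl)
t = (m_exp - m_ctl) / se
print(round(t, 2))  # → 3.79, a large t consistent with a small p-value
```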

  1. Detection of large numbers of novel sequences in the metatranscriptomes of complex marine microbial communities.

    Science.gov (United States)

    Gilbert, Jack A; Field, Dawn; Huang, Ying; Edwards, Rob; Li, Weizhong; Gilna, Paul; Joint, Ian

    2008-08-22

    Sequencing the expressed genetic information of an ecosystem (metatranscriptome) can provide information about the response of organisms to varying environmental conditions. Until recently, metatranscriptomics has been limited to microarray technology and random cloning methodologies. The application of high-throughput sequencing technology is now enabling access to both known and previously unknown transcripts in natural communities. We present a study of a complex marine metatranscriptome obtained from random whole-community mRNA using the GS-FLX Pyrosequencing technology. Eight samples, four DNA and four mRNA, were processed from two time points in a controlled coastal ocean mesocosm study (Bergen, Norway) involving an induced phytoplankton bloom producing a total of 323,161,989 base pairs. Our study confirms the finding of the first published metatranscriptomic studies of marine and soil environments that metatranscriptomics targets highly expressed sequences which are frequently novel. Our alternative methodology increases the range of experimental options available for conducting such studies and is characterized by an exceptional enrichment of mRNA (99.92%) versus ribosomal RNA. Analysis of corresponding metagenomes confirms much higher levels of assembly in the metatranscriptomic samples and a far higher yield of large gene families with >100 members, approximately 91% of which were novel. This study provides further evidence that metatranscriptomic studies of natural microbial communities are not only feasible, but when paired with metagenomic data sets, offer an unprecedented opportunity to explore both structure and function of microbial communities--if we can overcome the challenges of elucidating the functions of so many never-seen-before gene families.

  2. Tracking of large-scale structures in turbulent channel with direct numerical simulation of low Prandtl number passive scalar

    Science.gov (United States)

    Tiselj, Iztok

    2014-12-01

    Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ˜2300 wall units long and ˜750 wall units wide; size taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flows, was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows that the velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) agree within 1%-2%. Similar agreement is observed for Pr = 1 temperature fields and can be observed also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations of standard and large computational domains at Pr = 0.01 show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of the large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play a visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of large scales. These large thermal structures represent some kind of an echo of the large-scale velocity structures: the highest temperature-velocity correlations are not observed between the instantaneous temperatures and

  3. Trading volume and the number of trades : a comparative study using high frequency data

    OpenAIRE

    Izzeldin, Marwan

    2007-01-01

    Trading volume and the number of trades are both used as proxies for market activity, but there is disagreement as to which is the better proxy. This paper investigates this issue using high frequency data for Cisco and Intel in 1997. A number of econometric methods are used, including GARCH augmented with lagged trading volume and number of trades, tests based on moment restrictions, regression analysis of volatility on volume and trades, and normality of returns when standardized by...

  4. Clinical Trials With Large Numbers of Variables: Important Advantages of Canonical Analysis.

    Science.gov (United States)

    Cleophas, Ton J

    2016-01-01

    Canonical analysis assesses the combined effects of a set of predictor variables on a set of outcome variables, but it is little used in clinical trials despite the omnipresence of multiple variables. The aim of this study was to assess the performance of canonical analysis as compared with traditional multivariate methods using multivariate analysis of covariance (MANCOVA). As an example, a simulated data file with 12 gene expression levels and 4 drug efficacy scores was used. The correlation coefficient between the 12 predictor and 4 outcome variables was 0.87 (P = 0.0001), meaning that 76% of the variability in the outcome variables was explained by the 12 covariates. Repeated testing after the removal of 5 unimportant predictor variables and 1 outcome variable produced virtually the same overall result. The MANCOVA identified identical unimportant variables, but it was unable to provide overall statistics. (1) Canonical analysis is remarkable because it can handle many more variables than traditional multivariate methods such as MANCOVA can. (2) At the same time, it accounts for the relative importance of the separate variables, their interactions, and differences in units. (3) Canonical analysis provides overall statistics of the effects of sets of variables, whereas traditional multivariate methods only provide the statistics of the separate variables. (4) Unlike other methods for combining the effects of multiple variables, such as factor analysis/partial least squares, canonical analysis is scientifically entirely rigorous. (5) Limitations include that it is less flexible than factor analysis/partial least squares, because only 2 sets of variables are used and because multiple solutions instead of one are offered. We do hope that this article will stimulate clinical investigators to start using this remarkable method.
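The core computation behind canonical analysis can be sketched in a few lines. This is a minimal SVD-based sketch on simulated data echoing the abstract's setting (12 predictors, 4 outcomes); it is not the article's implementation or data.

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between column sets X and Y (rows = cases).

    Center each block, orthonormalize with QR, then the singular values
    of Qx.T @ Qy are the canonical correlations.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

# Toy stand-in: 12 "gene expression" predictors and 4 "drug efficacy"
# outcomes that share one latent factor (simulated, not the article's data).
rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 1))
X = latent @ rng.normal(size=(1, 12)) + 0.5 * rng.normal(size=(100, 12))
Y = latent @ rng.normal(size=(1, 4)) + 0.5 * rng.normal(size=(100, 4))
rho = canonical_correlations(X, Y)   # rho[0] is the leading correlation
```

The squared leading correlation plays the role of the "76% of the variability explained" figure quoted in the abstract.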

  5. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases.

    Science.gov (United States)

    Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M

    2006-04-21

    Genetic epidemiologists have taken on the challenge of identifying genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for relating large numbers of genetic and environmental predictors to disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis, neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN), and several non-parametric methods, which include the set association approach, the combinatorial partitioning method (CPM), the restricted partitioning method (RPM), the multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. They are therefore less useful than the non-parametric methods for association studies with large numbers of predictor variables. GPNN, on the other hand, may be a useful approach to select and model important predictors, but its ability to select the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and the random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset with an important contribution to disease. The combinatorial methods give more insight into combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses we conclude that to approach genetic association
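The random-forests approach mentioned above, ranking many SNP predictors by importance, can be sketched as follows. All data are simulated (0/1/2 minor-allele counts with one truly associated SNP), and scikit-learn's generic `RandomForestClassifier` stands in for the specialized software discussed in the commentary.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Simulated genotypes: n subjects, p SNPs coded as minor-allele counts.
rng = np.random.default_rng(2)
n, p = 400, 100                       # many predictors, modest sample
geno = rng.integers(0, 3, size=(n, p)).astype(float)
logit = 1.2 * (geno[:, 0] - 1.0)      # only SNP 0 affects disease risk
disease = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit a forest and rank SNPs by impurity-based variable importance.
forest = RandomForestClassifier(n_estimators=300, random_state=0)
forest.fit(geno, disease)
ranking = np.argsort(forest.feature_importances_)[::-1]
```

With a strong simulated effect, the truly associated SNP should top the ranking; this predictor-reduction step is exactly what the commentary credits random forests with handling well.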

  6. Modelling high Reynolds number wall-turbulence interactions in laboratory experiments using large-scale free-stream turbulence.

    Science.gov (United States)

    Dogan, Eda; Hearst, R Jason; Ganapathisubramani, Bharathram

    2017-03-13

    A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to 'simulate' high Reynolds number wall-turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows as it demonstrates that these can be achieved at typical laboratory scales. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  7. Strong Law of Large Numbers for Countable Markov Chains Indexed by an Infinite Tree with Uniformly Bounded Degree

    Directory of Open Access Journals (Sweden)

    Bao Wang

    2014-01-01

    We study the strong law of large numbers for the frequencies of occurrence of states and ordered couples of states for countable Markov chains indexed by an infinite tree with uniformly bounded degree, which extends the corresponding results of countable Markov chains indexed by a Cayley tree and generalizes the relative results of finite Markov chains indexed by a uniformly bounded tree.
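The law-of-large-numbers statement can be illustrated numerically in the simplest setting, a two-state chain indexed by the integers (the paper treats the harder case of chains indexed by an infinite tree). The transition matrix below is an invented example.

```python
import numpy as np

# Two-state Markov chain with stationary distribution pi = (2/3, 1/3),
# since pi P = pi for P below.  Empirical state frequencies along one
# long trajectory should converge to pi.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([2 / 3, 1 / 3])

rng = np.random.default_rng(3)
steps = 200_000
state = 0
visits = np.zeros(2)
for _ in range(steps):
    visits[state] += 1
    state = rng.choice(2, p=P[state])
freq = visits / steps                # empirical state frequencies
```

The tree-indexed result in the paper generalizes exactly this convergence of occurrence frequencies, with the single trajectory replaced by the vertices of a uniformly bounded tree.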

  8. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2015-01-01

    In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas which are required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.

  9. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz

    2015-11-12

    In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas which are required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.
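The kind of outage-versus-antenna-count trade-off studied in the two records above can be sketched by Monte Carlo. This is a hedged illustration of the simplest open-loop case over i.i.d. Rayleigh fading; the paper's analysis also covers spatial correlation and HARQ, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

def outage_probability(nt, nr, snr_db, rate, trials=2000):
    """Estimate P(log2 det(I + (SNR/nt) H H^*) < rate) for an
    nt x nr MIMO link with i.i.d. Rayleigh fading entries."""
    snr = 10 ** (snr_db / 10)
    count = 0
    for _ in range(trials):
        H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
        cap = np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T)).real
        count += cap < rate
    return count / trials

p2 = outage_probability(2, 2, snr_db=10, rate=4.0)
p8 = outage_probability(8, 8, snr_db=10, rate=4.0)   # more antennas -> lower outage
```

Sweeping the antenna count until the estimated outage drops below a target constraint mirrors the "minimum number of antennas" question posed in the abstract.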

  10. Direct and large eddy simulation of turbulent heat transfer at very low Prandtl number: Application to lead–bismuth flows

    International Nuclear Information System (INIS)

    Bricteux, L.; Duponcheel, M.; Winckelmans, G.; Tiselj, I.; Bartosiewicz, Y.

    2012-01-01

    Highlights: ► We perform direct and hybrid large eddy simulations of high Reynolds number and low Prandtl number turbulent wall-bounded flows with heat transfer. ► We use state-of-the-art numerical methods with low energy dissipation and low dispersion. ► We use recent multiscale subgrid-scale models. ► Important results concerning the establishment of near-wall modeling strategies in RANS are provided. ► The turbulent Prandtl number predicted by our simulations differs from that proposed by some correlations in the literature. - Abstract: This paper deals with the issue of modeling convective turbulent heat transfer of a liquid metal with a Prandtl number down to 0.01, which is the order of magnitude of lead–bismuth eutectic in a liquid metal reactor. This work presents a DNS (direct numerical simulation) and an LES (large eddy simulation) of a channel flow at two different Reynolds numbers, and the results are analyzed in the frame of best practice guidelines for RANS (Reynolds-averaged Navier–Stokes) computations used in industrial applications. They primarily show that the turbulent Prandtl number concept should be used with care and that even recently proposed correlations may not be sufficient.
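To make the "turbulent Prandtl number correlation" point concrete, here is one widely quoted correlation, attributed to Kays, Pr_t = 0.85 + 0.7/Pe_t with turbulent Peclet number Pe_t = (nu_t/nu)·Pr. The eddy-viscosity ratio used below is an invented illustration, and the abstract itself cautions that such correlations may be inadequate at very low Pr.

```python
# Kays-type estimate of the turbulent Prandtl number.  The correlation
# form is as commonly quoted in the heat-transfer literature; the
# nu_t/nu value below is an assumed illustration, not a DNS result.

def kays_prandtl_t(pr, nu_t_ratio):
    """Pr_t = 0.85 + 0.7 / Pe_t, with Pe_t = (nu_t/nu) * Pr."""
    pe_t = nu_t_ratio * pr          # turbulent Peclet number
    return 0.85 + 0.7 / pe_t

prt_unity = kays_prandtl_t(pr=1.0, nu_t_ratio=10.0)    # Pr ~ 1: Pr_t near 0.9
prt_lbe = kays_prandtl_t(pr=0.01, nu_t_ratio=10.0)     # lead-bismuth: Pr_t >> 1
```

The steep growth of Pr_t at Pr = 0.01 shows why the constant-Pr_t assumption common in RANS practice breaks down for liquid metals, which is the paper's central warning.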

  11. A comparative study of near-wall turbulence in high and low Reynolds number boundary layers

    International Nuclear Information System (INIS)

    Metzger, M.M.; Klewicki, J.C.

    2001-01-01

    The present study explores the effects of Reynolds number, over three orders of magnitude, in the viscous wall region of a turbulent boundary layer. Complementary experiments were conducted both in the boundary layer wind tunnel at the University of Utah and in the atmospheric surface layer which flows over the salt flats of the Great Salt Lake Desert in western Utah. The Reynolds numbers, based on momentum deficit thickness, of the two flows were Rθ = 2×10³ and Rθ ≅ 5×10⁶, respectively. High-resolution velocity measurements were obtained from a five-element vertical rake of hot-wires spanning the buffer region. In both the low and high Rθ flows, the length of the hot-wires measured less than 6 viscous units. To facilitate reliable comparisons, both the laboratory and field experiments employed the same instrumentation and procedures. Data indicate that, even in the immediate vicinity of the surface, strong influences from low-frequency motions at high Rθ produce noticeable Reynolds number differences in the streamwise velocity and velocity gradient statistics. In particular, the peak value in the root mean square streamwise velocity profile, when normalized by viscous scales, was found to exhibit a logarithmic dependence on Reynolds number. The mean streamwise velocity profile, on the other hand, appears to be essentially independent of Reynolds number. Spectra and spatial correlation data suggest that low-frequency motions at high Reynolds number engender intensified local convection velocities which affect the structure of both the velocity and velocity gradient fields. Implications for turbulent production mechanisms and coherent motions in the buffer layer are discussed

  12. The effective atomic numbers of some biomolecules calculated by two methods: A comparative study

    DEFF Research Database (Denmark)

    Manohara, S.R.; Hanagodimath, S.M.; Gerward, Leif

    2009-01-01

    The effective atomic numbers Z(eff) of some fatty acids and amino acids have been calculated by two numerical methods, a direct method and an interpolation method, in the energy range of 1 keV-20 MeV. The notion of Z(eff) is given a new meaning by using a modern database of photon interaction cro...

  13. Regime shifts in demersal assemblages of the Benguela Current Large Marine Ecosystem: a comparative assessment

    DEFF Research Database (Denmark)

    Kirkman, Stephen P.; Yemane, Dawit; Atkinson, Lara J.

    2015-01-01

    Using long-term survey data, changes in demersal faunal communities in the Benguela Current Large Marine Ecosystem were analysed at community and population levels to provide a comparative overview of the occurrence and timing of regime shifts. For South Africa, the timing of a community-level sh...

  14. Big Data and Total Hip Arthroplasty: How Do Large Databases Compare?

    Science.gov (United States)

    Bedard, Nicholas A; Pugely, Andrew J; McHugh, Michael A; Lux, Nathan R; Bozic, Kevin J; Callaghan, John J

    2018-01-01

    Use of large databases for orthopedic research has become extremely popular in recent years. Each database varies in the methods used to capture data and the population it represents. The purpose of this study was to evaluate how these databases differ in reported demographics, comorbidities, and postoperative complications for primary total hip arthroplasty (THA) patients. Primary THA patients were identified within the National Surgical Quality Improvement Program (NSQIP), the Nationwide Inpatient Sample (NIS), Medicare Standard Analytic Files (MED), and the Humana administrative claims database (HAC). NSQIP definitions for comorbidities and complications were matched to corresponding International Classification of Diseases, 9th Revision/Current Procedural Terminology codes to query the other databases. Demographics, comorbidities, and postoperative complications were compared. The number of patients from each database was 22,644 in HAC, 371,715 in MED, 188,779 in NIS, and 27,818 in NSQIP. Age and gender distributions were clinically similar. Overall, there was variation in the prevalence of comorbidities and the rates of postoperative complications between databases. As an example, NSQIP recorded more than twice as much obesity as NIS, and HAC and MED recorded more than twice as many diabetics as NSQIP. Rates of deep infection and stroke 30 days after THA differed by more than 2-fold across the databases. Among databases commonly used in orthopedic research, there is considerable variation in complication rates following THA depending upon the database used for analysis. It is important to consider these differences when critically evaluating database research. Additionally, with the advent of bundled payments, these differences must be considered in risk adjustment models. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Comparative guide to emerging diagnostic tools for large commercial HVAC systems

    Energy Technology Data Exchange (ETDEWEB)

    Friedman, Hannah; Piette, Mary Ann

    2001-05-01

    This guide compares emerging diagnostic software tools that aid detection and diagnosis of operational problems for large HVAC systems. We have evaluated six tools for use with energy management control system (EMCS) or other monitoring data. The diagnostic tools summarize relevant performance metrics, display plots for manual analysis, and perform automated diagnostic procedures. Our comparative analysis presents nine summary tables with supporting explanatory text and includes sample diagnostic screens for each tool.

  16. Comparative guide to emerging diagnostic tools for large commercial HVAC systems; TOPICAL

    International Nuclear Information System (INIS)

    Friedman, Hannah; Piette, Mary Ann

    2001-01-01

    This guide compares emerging diagnostic software tools that aid detection and diagnosis of operational problems for large HVAC systems. We have evaluated six tools for use with energy management control system (EMCS) or other monitoring data. The diagnostic tools summarize relevant performance metrics, display plots for manual analysis, and perform automated diagnostic procedures. Our comparative analysis presents nine summary tables with supporting explanatory text and includes sample diagnostic screens for each tool.

  17. A Multilayer Secure Biomedical Data Management System for Remotely Managing a Very Large Number of Diverse Personal Healthcare Devices

    Directory of Open Access Journals (Sweden)

    KeeHyun Park

    2015-01-01

    In this paper, a multilayer secure biomedical data management system for managing a very large number of diverse personal health devices is proposed. The system has the following characteristics: the system supports international standard communication protocols to achieve interoperability. The system is integrated in the sense that both a PHD communication system and a remote PHD management system work together as a single system. Finally, the system proposed in this paper provides user/message authentication processes to securely transmit biomedical data measured by PHDs based on the concept of a biomedical signature. Some experiments, including the stress test, have been conducted to show that the system proposed/constructed in this study performs very well even when a very large number of PHDs are used. For a stress test, up to 1,200 threads are made to represent the same number of PHD agents. The loss ratio of the ISO/IEEE 11073 messages in the normal system is as high as 14% when 1,200 PHD agents are connected. On the other hand, no message loss occurs in the multilayered system proposed in this study, which demonstrates the superiority of the multilayered system to the normal system with regard to heavy traffic.

  18. Simulation study comparing the helmet-chin PET with a cylindrical PET of the same number of detectors

    Science.gov (United States)

    Ahmed, Abdella M.; Tashima, Hideaki; Yoshida, Eiji; Nishikido, Fumihiko; Yamaya, Taiga

    2017-06-01

    There is a growing interest in developing brain PET scanners with high sensitivity and high spatial resolution for early diagnosis of neurodegenerative diseases and studies of brain functions. The sensitivity of a PET scanner can be improved by increasing the solid angle. However, conventional PET scanners are designed based on a cylindrical geometry, which may not be the most efficient design for brain imaging in terms of the balance between sensitivity and cost. We proposed a dedicated brain PET scanner based on a hemispheric detector and a chin detector (referred to as the helmet-chin PET), which is designed to maximize the solid angle by increasing the number of lines-of-response in the hemisphere. The parallax error, which PET scanners with a large solid angle tend to have, can be suppressed by the use of depth-of-interaction detectors. In this study, we carry out a realistic evaluation of the helmet-chin PET using Monte Carlo simulation based on the 4-layer GSO detector, which consists of a 16 × 16 × 4 array of crystals with dimensions of 2.8 × 2.8 × 7.5 mm³. The purpose of this simulation is to show the gain in imaging performance of the helmet-chin PET compared with a cylindrical PET using the same number of detectors in each configuration. The sensitivity of the helmet-chin PET evaluated with a cylindrical phantom shows a significant increase, especially at the top of the field-of-view (FOV). The peak NECR of the helmet-chin PET is 1.4 times higher than that of the cylindrical PET. The helmet-chin PET provides relatively low-noise images throughout the FOV compared to the cylindrical PET, which exhibits enhanced noise in the peripheral regions. The results show the helmet-chin PET can significantly improve the sensitivity and reduce the noise in the reconstructed images.
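The solid-angle argument behind the helmet geometry can be sketched with elementary geometry: the fraction of 4π covered by an open cylindrical detector, seen from a point on its axis, falls off toward the end of the FOV, whereas a closed hemisphere keeps at least half the sphere covered near its centre of curvature. The dimensions below are illustrative guesses, not the paper's scanner geometry.

```python
import math

def cylinder_coverage(radius, half_length, offset):
    """Fraction of 4*pi subtended by an open cylinder from an on-axis
    point displaced `offset` from the cylinder centre (offset <= half_length).
    The covered band spans polar angles between the two rims."""
    a = half_length - offset
    b = half_length + offset
    return 0.5 * (a / math.hypot(a, radius) + b / math.hypot(b, radius))

# Illustrative brain-scanner-like dimensions (metres): radius 12.5 cm,
# axial half-length 8 cm.
centre = cylinder_coverage(radius=0.125, half_length=0.08, offset=0.0)
top = cylinder_coverage(radius=0.125, half_length=0.08, offset=0.08)
# A closed hemisphere seen from its centre of curvature covers 0.5 of
# the sphere, and more from points above the centre -- consistent with
# the sensitivity gain reported at the top of the FOV.
```

For these invented dimensions the cylinder covers roughly 0.54 of the sphere at the FOV centre but only about 0.39 at the top edge, which is where the hemispheric cap pays off.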

  19. Effective atomic numbers (Zeff) of calcium phosphate-based biomaterials: a comparative study

    International Nuclear Information System (INIS)

    Fernandes Zenobio, Madelon Aparecida; Gonçalves Zenobio, Elton; Silva, Teógenes Augusto da; Socorro Nogueira, Maria do

    2016-01-01

    This study determined the radiation interaction parameters of four biomaterials used as attenuators by measuring the transmitted X-ray spectra, the mass attenuation coefficient and the effective atomic number with a spectrometric system comprising a CdTe detector. The biomaterial BioOss® presented a smaller mean energy than the other biomaterials. The μ/ρ and Zeff of the biomaterials showed their dependence on photon energy. The data obtained from analytical methods for the X-ray spectra, μ/ρ and Zeff, using biomaterials as attenuators, demonstrated that these materials could be used as substitutes for dentin, enamel and bone. Further, they are determinants for the characterization of the radiation in tissues or equivalent materials. - Highlights: • Measurement of the transmitted X-ray spectra using calcium phosphate-based biomaterials as attenuators. • Determination of the effective atomic number using four dental biomaterials. • Determination of the mass attenuation coefficient (μ/ρ) of the biomaterial samples, calculated with the WinXCOM software. • Determination of the chemical composition of the calcium phosphate biomaterials.
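For orientation, a standard textbook approximation for the effective atomic number in the photoelectric-dominated regime is the Mayneord power law, Zeff = (Σᵢ aᵢ Zᵢ^2.94)^(1/2.94), with aᵢ the fraction of electrons carried by element i. This is a cruder, energy-independent estimate than the spectrometric Zeff of the study; it is shown here for water only as a familiar sanity check.

```python
# Mayneord power-law estimate of the effective atomic number.

def zeff_mayneord(electron_fractions):
    """electron_fractions: iterable of (fraction_of_electrons, Z) pairs."""
    m = 2.94
    return sum(a * z ** m for a, z in electron_fractions) ** (1 / m)

# Water H2O: 10 electrons per molecule, 2 from H (Z=1), 8 from O (Z=8),
# so the electron fractions are 0.2 and 0.8.
zeff_water = zeff_mayneord([(0.2, 1), (0.8, 8)])
```

The result, about 7.4, is the commonly quoted effective atomic number of water; the study's point is that for real biomaterials Zeff varies with photon energy, which this single-number formula cannot capture.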

  20. Slepian simulation of distributions of plastic displacements of earthquake excited shear frames with a large number of stories

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Ditlevsen, Ove

    2005-01-01

    The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elastically within the yield limits and ideally plastically outside these, without accumulating eigenstresses. Within the elastic domain the frame is modeled as a linearly damped oscillator. The white noise excitation acts on the mass of the first floor, making the movement of the elastic bottom floors simulate a ground...

  1. Summary of experience from a large number of construction inspections; Wind power plant projects; Erfarenhetsaaterfoering fraan entreprenadbesiktningar

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Bertil; Holmberg, Rikard

    2010-08-15

    This report presents a summary of experience from a large number of construction inspections of wind power projects. The working method is based on the collection of construction experience in the form of questionnaires. The questionnaires were supplemented by a number of in-depth interviews to understand in more detail what is perceived to be a problem and whether there were suggestions for improvements. The results in this report are based on inspection protocols from 174 wind turbines, which corresponds to about one-third of the power plants built in the time period. In total the questionnaires included 4683 inspection remarks as well as about one hundred free-text comments. 52 of the 174 inspected power stations were rejected, corresponding to 30%. It has not been possible to identify any overrepresented type of remark as a main cause of rejection; rejection is usually based on a total number of remarks that is too large. The average number of remarks for a power plant is 27. Most power stations have between 20 and 35 remarks. The most common remarks concern shortcomings in marking and documentation. These are easily adjusted and may be regarded as less serious. There are, however, a number of remarks which are recurrent and quite serious, mainly regarding gearboxes, education and lightning protection. Usually these are also easily adjusted, but the consequences if not corrected can be very large: either shortened life of expensive components, e.g. oil problems in gearboxes, or increased probability of serious accidents, e.g. maladjusted lightning protection. The report also presents comparisons between power stations with various construction periods, sizes, suppliers, geography and topography. The general conclusion is that the differences are small. The results of the evaluation of the questionnaires correspond well with the results of the in-depth interviews with clients. The problem that clients agreed upon as the greatest is the lack

  2. Comparative study of measured and modelled number concentrations of nanoparticles in an urban street canyon

    DEFF Research Database (Denmark)

    Kumar, Prashant; Garmory, Andrew; Ketzel, Matthias

    2009-01-01

    This study presents a comparison between measured and modelled particle number concentrations (PNCs) in the 10-300 nm size range at different heights in a street canyon. The PNCs were modelled using a simple modelling approach (a modified Box model, including vertical variation), the Operational Street Pollution Model (OSPM) and the Computational Fluid Dynamics (CFD) code FLUENT. All models disregarded any particle dynamics. CFD simulations were carried out in a simplified geometry of the selected street canyon. Four different sizes of emission sources were used in the CFD simulations to assess the effect of source size on mean PNC distributions in the street canyon. The measured PNCs were within a factor of two to three of those from the three models, suggesting that if the model inputs are chosen carefully, even a simplified approach can predict the PNCs as well as more complex models. CFD...
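A minimal steady-state box model of the kind referred to in this record can be sketched as follows. Every number below (line-source strength, canyon dimensions, roof-level exchange velocity) is an invented illustration, not one of the study's inputs, and particle dynamics are disregarded just as in the abstract.

```python
# Box model for a street canyon: a line source of strength E
# (particles per metre of street per second) mixed into a canyon of
# width W and height H, ventilated across the roof at exchange
# velocity u_e.  Mass balance:  dC/dt = E/(W*H) - (u_e/H)*C,
# whose steady state is C = E/(W*u_e) (H cancels).

def box_model_pnc(E, W, H, u_e):
    """Steady-state particle number concentration (particles/m^3)."""
    return E / (W * u_e)

pnc = box_model_pnc(E=5.0e10, W=20.0, H=20.0, u_e=0.5)
pnc_per_cm3 = pnc * 1e-6   # the units usually reported for PNC
```

With these made-up inputs the model gives 5×10³ particles/cm³, a plausible urban order of magnitude, which is about the level of fidelity a box model offers before the vertical-variation and CFD refinements discussed above.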

  3. Comparative analysis of non-destructive methods to control fissile materials in large-size containers

    Directory of Open Access Journals (Sweden)

    Batyaev V.F.

    2017-01-01

    The analysis of various non-destructive methods to control fissile materials (FM) in large-size containers filled with radioactive waste (RAW) has been carried out. The difficulty of applying passive gamma-neutron monitoring of FM in large containers filled with concreted RAW is shown. The selection of an active non-destructive assay technique depends on the container contents; in the case of a concrete or iron matrix with very-low-activity and low-activity RAW, the neutron radiation method appears to be preferable compared with the photonuclear one.

  4. Comparative analysis of non-destructive methods to control fissile materials in large-size containers

    Science.gov (United States)

    Batyaev, V. F.; Sklyarov, S. V.

    2017-09-01

    The analysis of various non-destructive methods to control fissile materials (FM) in large-size containers filled with radioactive waste (RAW) has been carried out. The difficulty of applying passive gamma-neutron monitoring of FM in large containers filled with concreted RAW is shown. The selection of an active non-destructive assay technique depends on the container contents; in the case of a concrete or iron matrix with very-low-activity and low-activity RAW, the neutron radiation method appears to be preferable compared with the photonuclear one. Note to the reader: the pdf file has been changed on September 22, 2017.

  5. SU-E-T-230: Creating a Large Number of Focused Beams with Variable Patient Head Tilt to Improve Dose Fall-Off for Brain Radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Chiu, J; Ma, L [Department of Radiation Oncology, University of California San Francisco School of Medicine, San Francisco, CA (United States)

    2015-06-15

    Purpose: To develop a treatment delivery and planning strategy that increases the number of beams to minimize dose to brain tissue surrounding a target while maximizing dose coverage of the target. Methods: We analyzed 14 different treatment plans via Leksell PFX and 4C. For standardization, single-tumor cases were chosen. Original treatment plans were compared with two optimized plans. The number of beams was increased in the treatment plans by varying tilt angles of the patient head, while maintaining the original isocenter, the beam positions in the x-, y- and z-axes, collimator size, and beam blocking. PFX optimized plans increased beam numbers with three pre-set tilt angles (70, 90 and 110 degrees), and 4C optimized plans increased beam numbers with tilt angles varying arbitrarily in the range of 30 to 150 degrees. Optimized treatment plans were compared dosimetrically with the original treatment plans. Results: Comparing total normal tissue isodose volumes between original and optimized plans, the low-level percentage isodose volumes decreased in all plans. Despite the addition of multiple beams, up to a factor of 25, beam-on times for 1 tilt angle versus 3 or more tilt angles were comparable (<1 min.). In 64% (9/14) of the studied cases, the volume percentage decreased by >5%, with the highest value reaching 19%. The addition of more tilt angles correlates with a greater decrease in the irradiated normal brain volume. Selectivity and coverage for original and optimized plans remained comparable. Conclusion: Adding a large number of additional focused beams with variable patient head tilt improves dose fall-off for brain radiosurgery. The study demonstrates the technical feasibility of adding beams to decrease the irradiated normal tissue volume.

  6. SU-E-T-230: Creating a Large Number of Focused Beams with Variable Patient Head Tilt to Improve Dose Fall-Off for Brain Radiosurgery

    International Nuclear Information System (INIS)

    Chiu, J; Ma, L

    2015-01-01

    Purpose: To develop a treatment delivery and planning strategy that increases the number of beams to minimize dose to brain tissue surrounding a target while maximizing dose coverage of the target. Methods: We analyzed 14 different treatment plans via Leksell PFX and 4C. For standardization, single-tumor cases were chosen. Original treatment plans were compared with two optimized plans. The number of beams was increased in the treatment plans by varying tilt angles of the patient head, while maintaining the original isocenter, the beam positions in the x-, y- and z-axes, collimator size, and beam blocking. PFX optimized plans increased beam numbers with three pre-set tilt angles (70, 90 and 110 degrees), and 4C optimized plans increased beam numbers with tilt angles varying arbitrarily in the range of 30 to 150 degrees. Optimized treatment plans were compared dosimetrically with the original treatment plans. Results: Comparing total normal tissue isodose volumes between original and optimized plans, the low-level percentage isodose volumes decreased in all plans. Despite the addition of multiple beams, up to a factor of 25, beam-on times for 1 tilt angle versus 3 or more tilt angles were comparable (<1 min.). In 64% (9/14) of the studied cases, the volume percentage decreased by >5%, with the highest value reaching 19%. The addition of more tilt angles correlates with a greater decrease in the irradiated normal brain volume. Selectivity and coverage for original and optimized plans remained comparable. Conclusion: Adding a large number of additional focused beams with variable patient head tilt improves dose fall-off for brain radiosurgery. The study demonstrates the technical feasibility of adding beams to decrease the irradiated normal tissue volume.

  7. A Genome-Wide Association Study in Large White and Landrace Pig Populations for Number Piglets Born Alive

    Science.gov (United States)

    Bergfelder-Drüing, Sarah; Grosse-Brinkhaus, Christine; Lind, Bianca; Erbe, Malena; Schellander, Karl; Simianer, Henner; Tholen, Ernst

    2015-01-01

    The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between the Large White and Landrace breeds were detected. PMID:25781935
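The per-marker testing at the heart of a GWAS can be sketched as an allelic chi-square test at each SNP with a Bonferroni-corrected threshold. Everything below is simulated (0/1/2 allele counts with a single truly associated SNP); the study itself used mixed-model GWAS software on real PorcineSNP60 genotypes, which this sketch does not reproduce.

```python
import numpy as np
from scipy.stats import chi2

# Simulated cohort: n animals, p SNPs, binary trait driven by SNP 0.
rng = np.random.default_rng(5)
n, p = 2000, 500
geno = rng.binomial(2, 0.3, size=(n, p)).astype(float)
case = rng.random(n) < 1 / (1 + np.exp(-(0.8 * (geno[:, 0] - 0.6))))

def allelic_chi2(g, y):
    """2x2 allele-count chi-square statistic for one SNP."""
    a_case, a_ctrl = g[y].sum(), g[~y].sum()            # minor alleles
    b_case, b_ctrl = (2 - g[y]).sum(), (2 - g[~y]).sum()  # major alleles
    table = np.array([[a_case, b_case], [a_ctrl, b_ctrl]])
    exp = np.outer(table.sum(1), table.sum(0)) / table.sum()
    return ((table - exp) ** 2 / exp).sum()

stats = np.array([allelic_chi2(geno[:, j], case) for j in range(p)])
threshold = chi2.isf(0.05 / p, df=1)      # Bonferroni-corrected cutoff
hits = np.flatnonzero(stats > threshold)  # "significant markers"
```

The multiple-testing correction is why a GWAS over tens of thousands of markers reports only a handful of significant hits, such as the 17 markers found in the study above.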

  8. A genome-wide association study in large white and landrace pig populations for number piglets born alive.

    Directory of Open Access Journals (Sweden)

    Sarah Bergfelder-Drüing

    Full Text Available The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between the Large White and Landrace breeds were detected.

  9. Comparative Study of Fatigue Damage Models Using Different Number of Classes Combined with the Rainflow Method

    Directory of Open Access Journals (Sweden)

    S. Zengah

    2013-06-01

    Full Text Available Fatigue damage increases with applied load cycles in a cumulative manner. Fatigue damage models play a key role in the life prediction of components and structures subjected to random loading. The aim of this paper is to examine the performance of the “Damaged Stress Model”, previously proposed and validated, against other fatigue models under random loading before and after reconstruction of the load histories. To achieve this objective, several linear and nonlinear models proposed for fatigue life estimation are considered, and a batch of specimens made of 6082-T6 aluminum alloy is subjected to random loading. Damage was accumulated using Miner’s rule, the Damaged Stress Model (DSM), the Henry model and the Unified Theory (UT), and random cycles were counted with a rainflow algorithm. Experimental data on high-cycle fatigue under complex loading histories with different mean and amplitude stress values are analyzed for life calculation, and model predictions are compared.
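    Miner's linear rule referenced above accumulates damage as D = Σ nᵢ/Nᵢ over the rainflow-counted load blocks, with failure predicted once D ≥ 1. A minimal sketch, using a hypothetical Basquin-type S-N curve and hypothetical cycle counts (not the paper's data):

```python
# Minimal sketch of Miner's linear damage rule, D = sum(n_i / N_i).
# The S-N curve constants C and m, and the load blocks, are hypothetical.

def cycles_to_failure(stress_amplitude, C=1e12, m=3.0):
    """Basquin-type S-N curve: N(S) = C * S**(-m)."""
    return C * stress_amplitude ** (-m)

def miner_damage(blocks):
    """blocks: iterable of (stress_amplitude, applied_cycles) pairs,
    e.g. as produced by a rainflow count of a random load history."""
    return sum(n / cycles_to_failure(s) for s, n in blocks)

# Three load blocks (stress amplitude in MPa, applied cycles):
history = [(100.0, 2.0e5), (150.0, 5.0e4), (200.0, 1.0e4)]
D = miner_damage(history)
print(f"accumulated damage D = {D:.3f}")  # failure is predicted when D >= 1
```

The nonlinear models compared in the paper (DSM, Henry, UT) replace the simple sum with sequence-dependent damage increments, which is why reconstruction of the load history can change their predictions.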

  10. Factors associated with self-reported number of teeth in a large national cohort of Thai adults

    Directory of Open Access Journals (Sweden)

    Yiengprugsawan Vasoontara

    2011-11-01

    Full Text Available Abstract Background Oral health in later life results from an individual's lifelong accumulation of experiences at the personal, community and societal levels. There is little information relating oral health outcomes to risk factors in Asian middle-income settings such as Thailand today. Methods Data were derived from a cohort of 87,134 adults enrolled in Sukhothai Thammathirat Open University who completed self-administered questionnaires in 2005. Cohort members were aged between 15 and 87 years and resided throughout Thailand. This is a large study of self-reported number of teeth among Thai adults. Bivariate and multivariate logistic regressions were used to analyse factors associated with self-reported number of teeth. Results After adjusting for covariates, being female (OR = 1.28), older age (OR = 10.6), having low income (OR = 1.45), having lower education (OR = 1.33), and being a lifetime urban resident (OR = 1.37) were statistically significantly associated with self-reported number of teeth. Conclusions This study addresses the gap in knowledge on factors associated with self-reported number of teeth. The promotion of healthy childhoods and adult lifestyles are important public health interventions to increase tooth retention in middle and older age.
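    The odds ratios quoted above come from logistic regression, where OR = exp(β) for a coefficient β on the log-odds scale. A minimal sketch of that relationship; the 30% baseline prevalence below is a hypothetical figure for illustration, not a value from the study:

```python
import math

# The reported odds ratios relate to logistic-regression coefficients via
# OR = exp(beta): e.g. OR = 1.28 corresponds to beta = ln(1.28) ~ 0.247 on
# the log-odds scale. The 30% baseline prevalence below is hypothetical.

def odds_ratio(beta):
    """Convert a logistic-regression coefficient to an odds ratio."""
    return math.exp(beta)

def log_odds(p):
    return math.log(p / (1.0 - p))

baseline = 0.30                            # hypothetical baseline prevalence
lo = log_odds(baseline) + math.log(10.6)   # apply the OR = 10.6 for older age
p_older = 1.0 / (1.0 + math.exp(-lo))      # back to a probability

print(f"beta behind OR 1.28: {math.log(1.28):.3f}")
print(f"prevalence shifted by OR 10.6: {baseline:.2f} -> {p_older:.2f}")
```

This also shows why a large OR such as 10.6 does not translate directly into "10.6 times the risk": the effect on the probability depends on the baseline.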

  11. Copy number alterations in small intestinal neuroendocrine tumors determined by array comparative genomic hybridization

    International Nuclear Information System (INIS)

    Hashemi, Jamileh; Fotouhi, Omid; Sulaiman, Luqman; Kjellman, Magnus; Höög, Anders; Zedenius, Jan; Larsson, Catharina

    2013-01-01

    Small intestinal neuroendocrine tumors (SI-NETs) are typically slow-growing tumors that have often already metastasized at the time of diagnosis. The purpose of the present study was to further refine and define regions of recurrent copy number (CN) alterations (CNA) in SI-NETs. Genome-wide CNAs were determined by applying array CGH (a-CGH) to SI-NETs including 18 primary tumors and 12 metastases. Quantitative PCR analysis (qPCR) was used to confirm CNAs detected by a-CGH as well as to detect CNAs in an extended panel of SI-NETs. Unsupervised hierarchical clustering was used to detect tumor groups with similar patterns of chromosomal alterations based on recurrent regions of CN loss or gain. The log rank test was used to calculate overall survival. Mann–Whitney U test or Fisher’s exact test were used to evaluate associations between tumor groups and recurrent CNAs or clinical parameters. The most frequent abnormality was loss of chromosome 18, observed in 70% of the cases. CN losses were also frequently found for chromosomes 11 (23%), 16 (20%), and 9 (20%), with regions of recurrent CN loss identified in 11q23.1-qter, 16q12.2-qter, 9pter-p13.2 and 9p13.1-11.2. Gains were most frequently detected in chromosomes 14 (43%), 20 (37%), 4 (27%), and 5 (23%), with recurrent regions of CN gain located at 14q11.2, 14q32.2-32.31, 20pter-p11.21, 20q11.1-11.21, 20q12-qter, 4 and 5. qPCR analysis confirmed most CNAs detected by a-CGH as well as revealed CNAs in an extended panel of SI-NETs. Unsupervised hierarchical clustering of recurrent regions of CNAs revealed two separate tumor groups and 5 chromosomal clusters. Loss of chromosomes 18, 16 and 11 and gain of chromosome 20 were found in both tumor groups. Tumor group II was enriched for alterations in chromosome cluster-d, including gain of chromosomes 4, 5, 7, 14 and gain of 20 in chromosome cluster-b. Gain in 20pter-p11.21 was associated with short survival. Statistically significant differences were observed between primary

  12. Influence of Extrinsic Information Scaling Coefficient on Double-Iterative Decoding Algorithm for Space-Time Turbo Codes with Large Number of Antennas

    Directory of Open Access Journals (Sweden)

    TRIFINA, L.

    2011-02-01

    Full Text Available This paper analyzes the influence of the extrinsic information scaling coefficient on a double-iterative decoding algorithm for space-time turbo codes with a large number of antennas. The max-log-APP algorithm is used, scaling both the extrinsic information in the turbo decoder and that used at the input of the interference-canceling block. Scaling coefficients of 0.7 or 0.75 lead to a 0.5 dB coding gain compared to the no-scaling case, for one or more interference-cancellation iterations.
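    Context for why such scaling helps: the max-log-APP algorithm replaces the Jacobian logarithm log(eᵃ + eᵇ) with max(a, b), dropping the correction term log(1 + e^−|a−b|), which biases the extrinsic LLR magnitudes; multiplying them by a coefficient below 1 partially compensates. A minimal sketch with hypothetical metric values, not the paper's simulation setup:

```python
import math

# Why extrinsic scaling helps a max-log-APP decoder: max-log replaces the
# Jacobian logarithm log(e^a + e^b) with max(a, b), dropping the correction
# term log(1 + e^-|a-b|). The resulting extrinsic LLRs are biased, and
# multiplying them by ~0.7-0.75 partially compensates. Values are hypothetical.

def jacobian_log(a, b):
    """Exact log-sum: log(e^a + e^b)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """max-log approximation of the same quantity."""
    return max(a, b)

a, b = 1.2, 0.8                  # hypothetical branch metrics
exact, approx = jacobian_log(a, b), max_log(a, b)
print(f"exact={exact:.4f}  max-log={approx:.4f}")

scale, extrinsic = 0.75, 2.4     # scaling coefficient from the paper's range
print(f"scaled extrinsic LLR: {scale * extrinsic:.2f}")
```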

  13. Small genomes and large seeds: chromosome numbers, genome size and seed mass in diploid Aesculus species (Sapindaceae).

    Science.gov (United States)

    Krahulcová, Anna; Trávnícek, Pavel; Krahulec, František; Rejmánek, Marcel

    2017-04-01

    Aesculus L. (horse chestnut, buckeye) is a genus of 12-19 extant woody species native to the temperate Northern Hemisphere. This genus is known for unusually large seeds among angiosperms. While chromosome counts are available for many Aesculus species, only one has had its genome size measured. The aim of this study is to provide more genome size data and analyse the relationship between genome size and seed mass in this genus. Chromosome numbers in root tip cuttings were confirmed for four species and reported for the first time for three additional species. Flow cytometric measurements of 2C nuclear DNA values were conducted on eight species, and mean seed mass values were estimated for the same taxa. The same chromosome number, 2n = 40, was determined in all investigated taxa. Original measurements of 2C values for seven Aesculus species (eight taxa), added to the single reliable datum for A. hippocastanum, confirmed the notion that the genome size in this genus with relatively large seeds is surprisingly low, ranging from 2C = 0·955 pg in A. parviflora to 2C = 1·275 pg in A. glabra var. glabra. The chromosome number 2n = 40 seems to be conclusively the universal number for non-hybrid species in this genus. Aesculus genome sizes are relatively small, not only within its own family, Sapindaceae, but also among woody angiosperms. The genome sizes seem to be distinct and non-overlapping among the four major Aesculus clades. These results provide extra support for the most recent reconstruction of Aesculus phylogeny. The correlation between the 2C values and seed masses in the examined Aesculus species is slightly negative and not significant. However, when the four major clades are treated separately, there is a consistent positive association between larger genome size and larger seed mass within individual lineages. © The Author 2017. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved.

  14. A framework for comparative evaluation of dosimetric methods to triage a large population following a radiological event

    International Nuclear Information System (INIS)

    Flood, Ann Barry; Nicolalde, Roberto J.; Demidenko, Eugene; Williams, Benjamin B.; Shapiro, Alla; Wiley, Albert L.; Swartz, Harold M.

    2011-01-01

    Background: To prepare for a possible major radiation disaster involving large numbers of potentially exposed people, it is important to be able to rapidly and accurately triage people for treatment or not, factoring in the likely conditions and available resources. To date, planners have had to create guidelines for triage based on methods for estimating dose that are clinically available and which use evidence extrapolated from unrelated conditions. Current guidelines consequently focus on measuring clinical symptoms (e.g., time-to-vomiting), which may not be subject to the same verification of standard methods and validation processes required for governmental approval processes of new and modified procedures. Biodosimeters under development have not yet been formally approved for this use. Neither set of methods has been tested in settings involving large-scale populations at risk for exposure. Objective: To propose a framework for comparative evaluation of methods for such triage and to evaluate biodosimetric methods that are currently recommended and new methods as they are developed. Methods: We adapt the NIH model of scientific evaluations and sciences needed for effective translational research to apply to biodosimetry for triaging very large populations following a radiation event. We detail criteria for translating basic science about dosimetry into effective multi-stage triage of large populations and illustrate it by analyzing 3 current guidelines and 3 advanced methods for biodosimetry. Conclusions: This framework for evaluating dosimetry in large populations is a useful technique to compare the strengths and weaknesses of different dosimetry methods. It can help policy-makers and planners not only to compare the methods' strengths and weaknesses for their intended use but also to develop an integrated approach to maximize their effectiveness. It also reveals weaknesses in methods that would benefit from further research and evaluation.

  15. A framework for comparative evaluation of dosimetric methods to triage a large population following a radiological event

    Energy Technology Data Exchange (ETDEWEB)

    Flood, Ann Barry, E-mail: Ann.B.Flood@Dartmouth.Edu [Dartmouth Physically Based Biodosimetry Center for Medical Countermeasures Against Radiation (Dart-Dose CMCR), Dartmouth Medical School, Hanover, NH 03768 (United States); Nicolalde, Roberto J., E-mail: Roberto.J.Nicolalde@Dartmouth.Edu [Dartmouth Physically Based Biodosimetry Center for Medical Countermeasures Against Radiation (Dart-Dose CMCR), Dartmouth Medical School, Hanover, NH 03768 (United States); Demidenko, Eugene, E-mail: Eugene.Demidenko@Dartmouth.Edu [Dartmouth Physically Based Biodosimetry Center for Medical Countermeasures Against Radiation (Dart-Dose CMCR), Dartmouth Medical School, Hanover, NH 03768 (United States); Williams, Benjamin B., E-mail: Benjamin.B.Williams@Dartmouth.Edu [Dartmouth Physically Based Biodosimetry Center for Medical Countermeasures Against Radiation (Dart-Dose CMCR), Dartmouth Medical School, Hanover, NH 03768 (United States); Shapiro, Alla, E-mail: Alla.Shapiro@fda.hhs.gov [Food and Drug Administration (FDA), Rockville, MD (United States); Wiley, Albert L., E-mail: Albert.Wiley@orise.orau.gov [Oak Ridge Institute for Science and Education (ORISE), Oak Ridge, TN (United States); Swartz, Harold M., E-mail: Harold.M.Swartz@Dartmouth.Edu [Dartmouth Physically Based Biodosimetry Center for Medical Countermeasures Against Radiation (Dart-Dose CMCR), Dartmouth Medical School, Hanover, NH 03768 (United States)

    2011-09-15

    Background: To prepare for a possible major radiation disaster involving large numbers of potentially exposed people, it is important to be able to rapidly and accurately triage people for treatment or not, factoring in the likely conditions and available resources. To date, planners have had to create guidelines for triage based on methods for estimating dose that are clinically available and which use evidence extrapolated from unrelated conditions. Current guidelines consequently focus on measuring clinical symptoms (e.g., time-to-vomiting), which may not be subject to the same verification of standard methods and validation processes required for governmental approval processes of new and modified procedures. Biodosimeters under development have not yet been formally approved for this use. Neither set of methods has been tested in settings involving large-scale populations at risk for exposure. Objective: To propose a framework for comparative evaluation of methods for such triage and to evaluate biodosimetric methods that are currently recommended and new methods as they are developed. Methods: We adapt the NIH model of scientific evaluations and sciences needed for effective translational research to apply to biodosimetry for triaging very large populations following a radiation event. We detail criteria for translating basic science about dosimetry into effective multi-stage triage of large populations and illustrate it by analyzing 3 current guidelines and 3 advanced methods for biodosimetry. Conclusions: This framework for evaluating dosimetry in large populations is a useful technique to compare the strengths and weaknesses of different dosimetry methods. It can help policy-makers and planners not only to compare the methods' strengths and weaknesses for their intended use but also to develop an integrated approach to maximize their effectiveness. It also reveals weaknesses in methods that would benefit from further research and evaluation.

  16. Evaluation of two sweeping methods for estimating the number of immature Aedes aegypti (Diptera: Culicidae) in large containers

    Directory of Open Access Journals (Sweden)

    Margareth Regina Dibo

    2013-07-01

    Full Text Available Introduction Here, we evaluated sweeping methods used to estimate the number of immature Aedes aegypti in large containers. Methods III/IV instars and pupae at a 9:1 ratio were placed in three types of containers, each with three different water levels. Two sweeping methods were tested: water-surface sweeping and five-sweep netting. The data were analyzed using linear regression. Results The five-sweep netting technique was more suitable for drums and water tanks, while the water-surface sweeping method provided the best results for swimming pools. Conclusions Both sweeping methods are useful tools in epidemiological surveillance programs for the control of Aedes aegypti.

  17. Production of large number of water-cooled excitation coils with improved techniques for multipole magnets of INDUS -2

    International Nuclear Information System (INIS)

    Karmarkar, M.G.; Sreeramulu, K.; Kulshreshta, P.K.

    2003-01-01

    Accelerator multipole magnets are characterized by high field gradients and are powered through relatively high-current excitation coils. Due to space limitations in the magnet core/poles, a compact coil geometry is also necessary. The coils are made of several insulated turns of hollow copper conductor. The high current densities in these coils require cooling with low-conductivity water. Additionally, during operation they are subjected to thermal fatigue stresses. A large number of coils (Qty: 650) with different geometries was required for all multipole magnets, such as quadrupoles (QP) and sextupoles (SP). Improved techniques for winding, insulation and epoxy consolidation were developed in-house at M D Lab, and all coils have been successfully made. The improved technology, the production techniques adopted for the magnet coils and their inspection are briefly discussed in this paper. (author)

  18. Product-selective blot: a technique for measuring enzyme activities in large numbers of samples and in native electrophoresis gels

    International Nuclear Information System (INIS)

    Thompson, G.A.; Davies, H.M.; McDonald, N.

    1985-01-01

    A method termed product-selective blotting has been developed for screening large numbers of samples for enzyme activity. The technique is particularly well suited to detection of enzymes in native electrophoresis gels. The principle of the method was demonstrated by blotting samples from glutaminase or glutamate synthase reactions into an agarose gel embedded with ion-exchange resin under conditions favoring binding of product (glutamate) over substrates and other substances in the reaction mixture. After washes to remove these unbound substances, the product was measured using either fluorometric staining or radiometric techniques. Glutaminase activity in native electrophoresis gels was visualized by a related procedure in which substrates and products from reactions run in the electrophoresis gel were blotted directly into a resin-containing image gel. Considering the selective-binding materials available for use in the image gel, along with the possible detection systems, this method has potentially broad application

  19. Large numbers of cold positronium atoms created in laser-selected Rydberg states using resonant charge exchange

    CERN Document Server

    McConnell, R; Kolthammer, WS; Richerme, P; Müllers, A; Walz, J; Grzonka, D; Zielinski, M; Fitzakerley, D; George, MC; Hessels, EA; Storry, CH; Weel, M

    2016-01-01

    Lasers are used to control the production of highly excited positronium atoms (Ps*). The laser light excites Cs atoms to Rydberg states that have a large cross section for resonant charge-exchange collisions with cold trapped positrons. For each trial with 30 million trapped positrons, more than 700 000 of the created Ps* have trajectories near the axis of the apparatus, and are detected using Stark ionization. This number of Ps* is 500 times higher than realized in an earlier proof-of-principle demonstration (2004 Phys. Lett. B 597 257). A second charge exchange of these near-axis Ps* with trapped antiprotons could be used to produce cold antihydrogen, and this antihydrogen production is expected to be increased by a similar factor.

  20. Comparative Assessments of the Seasonality in "The Total Number of Overnight Stays" in Romania, Bulgaria and the European Union

    Directory of Open Access Journals (Sweden)

    Jugănaru Ion Dănuț

    2017-01-01

    For the quantitative research carried out in this study, we processed a database consisting of the monthly values of “the total number of overnight stays” indicator, recorded between January 2005 and December 2016, using the moving average method, the seasonality coefficient and EViews 5. The results led to the formulation of comparative assessments regarding the seasonality in the tourism activities from Romania and Bulgaria and their situation compared to the average of the seasonality recorded in the EU.
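    A common definition of the monthly seasonality coefficient used in such analyses is the month's mean across the years divided by the grand mean, with values above 1 marking high-season months. A minimal sketch on a synthetic stand-in series (not the study's overnight-stay data):

```python
from statistics import fmean

# Seasonality coefficient sketch: mean of a month across years divided by
# the grand mean of the whole series. A coefficient > 1 marks a high-season
# month. The series below is synthetic, standing in for the real indicator.

monthly = {  # month -> "overnight stays" over 3 hypothetical years
    "Jan": [40, 42, 44],
    "Jul": [160, 170, 180],
    "Oct": [80, 84, 88],
}
grand_mean = fmean(v for vals in monthly.values() for v in vals)
coef = {m: fmean(vals) / grand_mean for m, vals in monthly.items()}
for m, c in coef.items():
    print(f"{m}: {c:.2f}")   # Jul well above 1 -> strong summer seasonality
```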

  1. A Theory of Evolving Natural Constants Based on the Unification of General Theory of Relativity and Dirac's Large Number Hypothesis

    International Nuclear Information System (INIS)

    Peng Huanwu

    2005-01-01

    Taking Dirac's large number hypothesis as true, we have shown [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703] the inconsistency of applying Einstein's theory of general relativity with fixed gravitation constant G to cosmology, and a modified theory for varying G is found, which reduces to Einstein's theory outside the gravitating body for phenomena of short duration in small distances, thereby agrees with all the crucial tests formerly supporting Einstein's theory. The modified theory, when applied to the usual homogeneous cosmological model, gives rise to a variable cosmological tensor term determined by the derivatives of G, in place of the cosmological constant term usually introduced ad hoc. Without any free parameter the theoretical Hubble's relation obtained from the modified theory seems not in contradiction to observations, as Dr. Wang's preliminary analysis of the recent data indicates [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703]. As a complement to Commun. Theor. Phys. (Beijing, China) 42 (2004) 703 we shall study in this paper the modification of electromagnetism due to Dirac's large number hypothesis in more detail to show that the approximation of geometric optics still leads to null geodesics for the path of light, and that the general relation between the luminosity distance and the proper geometric distance is still valid in our theory as in Einstein's theory, and give the equations for homogeneous cosmological model involving matter plus electromagnetic radiation. Finally we consider the impact of the modification to quantum mechanics and statistical mechanics, and arrive at a systematic theory of evolving natural constants including Planck's ħ as well as Boltzmann's k_B by finding out their cosmologically combined counterparts with factors of appropriate powers of G that may remain truly constant to cosmologically long time.
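    For reference, two of the classic order-of-magnitude coincidences that motivate Dirac's large number hypothesis (standard textbook values, not taken from this paper):

```latex
% Electric-to-gravitational force ratio for an electron-proton pair:
\frac{e^{2}}{4\pi\varepsilon_{0}\,G\,m_{p}m_{e}} \;\sim\; 10^{39}
% Age of the universe in units of an atomic light-crossing time,
% with r_e the classical electron radius:
\frac{t_{0}}{r_{e}/c} \;\sim\; 10^{39},
\qquad r_{e} = \frac{e^{2}}{4\pi\varepsilon_{0}\,m_{e}c^{2}}
% Dirac's hypothesis: the two ratios remain equal as t_0 grows,
% which forces G \propto 1/t -- the varying G taken up in the abstract.
```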

  2. A very large number of GABAergic neurons are activated in the tuberal hypothalamus during paradoxical (REM) sleep hypersomnia.

    Directory of Open Access Journals (Sweden)

    Emilie Sapin

    Full Text Available We recently discovered, using Fos immunostaining, that the tuberal and mammillary hypothalamus contain a massive population of neurons specifically activated during paradoxical sleep (PS) hypersomnia. We further showed that some of the activated neurons of the tuberal hypothalamus express the melanin-concentrating hormone (MCH) neuropeptide and that icv injection of MCH induces a strong increase in PS quantity. However, the chemical nature of the majority of the neurons activated during PS had not been characterized. To determine whether these neurons are GABAergic, we combined in situ hybridization of GAD(67) mRNA with immunohistochemical detection of Fos in control, PS-deprived and PS-hypersomniac rats. We found that 74% of the very large population of Fos-labeled neurons located in the tuberal hypothalamus after PS hypersomnia were GAD-positive. We further demonstrated, combining MCH immunohistochemistry and GAD(67) in situ hybridization, that 85% of the MCH neurons were also GAD-positive. Finally, based on the number of Fos-ir/GAD(+), Fos-ir/MCH(+), and GAD(+)/MCH(+) double-labeled neurons counted from three sets of double-staining, we uncovered that around 80% of the large number of Fos-ir/GAD(+) neurons located in the tuberal hypothalamus after PS hypersomnia do not contain MCH. Based on these and previous results, we propose that the non-MCH Fos/GABAergic neuronal population could be involved in PS induction and maintenance, while the Fos/MCH/GABAergic neurons could be involved in the homeostatic regulation of PS. Further investigations will be needed to corroborate this original hypothesis.

  3. A comparative analysis of the statistical properties of large mobile phone calling networks.

    Science.gov (United States)

    Li, Ming-Xia; Jiang, Zhi-Qiang; Xie, Wen-Jie; Miccichè, Salvatore; Tumminello, Michele; Zhou, Wei-Xing; Mantegna, Rosario N

    2014-05-30

    Mobile phone calling is one of the most widely used communication methods in modern society. The records of calls among mobile phone users provide us with a valuable proxy for the understanding of human communication patterns embedded in social networks. Mobile phone users call each other, forming a directed calling network. If only reciprocal calls are considered, we obtain an undirected mutual calling network. The preferential communication behavior between two connected users can be statistically tested, and it results in two Bonferroni networks with statistically validated edges. We perform a comparative analysis of the statistical properties of these four networks, which are constructed from the calling records of more than nine million individuals in Shanghai over a period of 110 days. We find that these networks share many common structural properties and also exhibit idiosyncratic features when compared with previously studied large mobile calling networks. The empirical findings provide an intriguing picture of a representative large social network that might shed new light on the modelling of large social networks.
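    The two basic constructions described above, the directed calling network and the undirected mutual (reciprocal-call) network, can be sketched as follows with hypothetical call records:

```python
# Sketch of the two constructions: a directed calling network from raw call
# records, and the undirected mutual network keeping only reciprocated pairs.
# The call records below are hypothetical.

calls = [("a", "b"), ("b", "a"), ("a", "c"),
         ("c", "d"), ("d", "c"), ("b", "c")]   # (caller, callee) pairs

directed = set(calls)                                   # directed calling network
mutual = {frozenset(e) for e in directed if (e[1], e[0]) in directed}

print(f"directed edges: {len(directed)}")               # 6
print(f"mutual (reciprocal) edges: {len(mutual)}")      # {a,b} and {c,d} -> 2
```

The Bonferroni networks mentioned in the abstract are then obtained by keeping only edges whose call counts pass a multiple-hypothesis statistical test, a filtering step not shown here.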

  4. Investigating the Variability in Cumulus Cloud Number as a Function of Subdomain Size and Organization using large-domain LES

    Science.gov (United States)

    Neggers, R.

    2017-12-01

    Recent advances in supercomputing have introduced a "grey zone" in the representation of cumulus convection in general circulation models, in which this process is partially resolved. Cumulus parameterizations need to be made scale-aware and scale-adaptive to be able to conceptually and practically deal with this situation. A potential way forward is offered by schemes formulated in terms of discretized Cloud Size Densities, or CSDs. Advantages include i) the introduction of scale-awareness at the foundation of the scheme, and ii) the possibility of size-filtering of parameterized convective transport and clouds. The CSD is a new variable that requires closure; this concerns its shape and its range, but also the variability in cloud number that can appear due to i) subsampling effects and ii) organization in a cloud field. The goal of this study is to gain insight by means of sub-domain analyses of various large-domain LES realizations of cumulus cloud populations. For a series of three-dimensional snapshots, each with a different degree of organization, the cloud size distribution is calculated in all subdomains, for a range of subdomain sizes. The standard deviation of the number of clouds of a certain size is found to decrease with the subdomain size, following a power-law scaling corresponding to an inverse-linear dependence. Cloud number variability also increases with cloud size; this reflects that subsampling affects the largest clouds first, due to their typically larger neighbor spacing. Rewriting this dependence in terms of two dimensionless groups, by dividing by cloud number and cloud size respectively, yields a data collapse. Organization in the cloud field is found to act on top of this primary dependence, by enhancing the cloud number variability at the smaller sizes. This behavior reflects that small clouds start to "live" on top of larger structures such as cold pools, which favor or inhibit their formation.
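    The subsampling effect described above can be illustrated with a toy Monte-Carlo: for randomly (Poisson-)placed clouds, the relative spread of per-subdomain counts scales as std/mean = 1/sqrt(λL²) ∝ 1/L, i.e. inverse-linearly with the subdomain edge length; organization would add variability on top of this. Domain size, cloud number and the L values are all illustrative, not taken from the LES runs:

```python
import random
import statistics

# Toy Monte-Carlo of the subsampling effect: for randomly placed clouds the
# relative spread of per-subdomain counts, std/mean ~ 1/(L*sqrt(density)),
# falls off inverse-linearly with subdomain edge length L. All parameters
# here are illustrative.

random.seed(1)
DOMAIN, N_CLOUDS = 1024.0, 4000
clouds = [(random.uniform(0, DOMAIN), random.uniform(0, DOMAIN))
          for _ in range(N_CLOUDS)]

def relative_std(L):
    """std/mean of cloud counts over the (DOMAIN/L)^2 subdomains of edge L."""
    k = int(DOMAIN // L)
    counts = [[0] * k for _ in range(k)]
    for x, y in clouds:
        counts[min(int(x // L), k - 1)][min(int(y // L), k - 1)] += 1
    flat = [c for row in counts for c in row]
    return statistics.pstdev(flat) / statistics.fmean(flat)

for L in (64.0, 128.0, 256.0):
    print(f"L = {L:5.0f}   std/mean = {relative_std(L):.3f}")  # roughly halves as L doubles
```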

  5. The necessity of and policy suggestions for implementing a limited number of large scale, fully integrated CCS demonstrations in China

    International Nuclear Information System (INIS)

    Li Zheng; Zhang Dongjie; Ma Linwei; West, Logan; Ni Weidou

    2011-01-01

    CCS is seen as an important and strategic technology option for China to reduce its CO2 emissions, and has received tremendous attention both around the world and in China. Scholars are divided on the role CCS should play, making the future of CCS in China highly uncertain. This paper presents the overall circumstances for CCS development in China, including the threats and opportunities for large-scale deployment of CCS, the initial barriers and advantages that China currently possesses, as well as the current progress of CCS demonstration in China. The paper proposes the implementation of a limited number of large-scale, fully integrated CCS demonstration projects and explains the potential benefits that could be garnered. The problems with China's current CCS demonstration work are analyzed, and some targeted policies are proposed based on those observations. These policy suggestions can effectively solve these problems, help China realize the benefits of CCS demonstration sooner, and make great contributions to China's CO2 reduction mission. - Highlights: → We analyze the overall circumstances for CCS development in China in detail. → China can garner multiple benefits by conducting several large, integrated CCS demos. → We present the current progress in CCS demonstration in China in detail. → Some problems exist with China's current CCS demonstration work. → Some focused policies are suggested to improve CCS demonstration in China.

  6. Fluctuations of nuclear cross sections in the region of strong overlapping resonances and at large number of open channels

    International Nuclear Information System (INIS)

    Kun, S.Yu.

    1985-01-01

    On the basis of the symmetrized Simonius representation of the S matrix, the statistical properties of its fluctuating component in the presence of direct reactions are investigated. The case is considered where the resonance levels are strongly overlapping and there are many open channels, assuming that the compound-nucleus cross sections which couple different channels are equal. It is shown that, using the averaged unitarity condition on the real energy axis, one can eliminate both resonance-resonance and channel-channel correlations from the partial transition amplitudes. As a result, we derive the basic points of the Ericson fluctuation theory of nuclear cross sections, independently of the relation between the resonance overlap and the number of open channels, and the validity of the Hauser-Feshbach model is established. If the number of open channels is large, the time of uniform population of compound-nucleus configurations, for an open excited nuclear system, is much smaller than the Poincaré time. The lifetime of the compound nucleus is discussed.

  7. Law of large numbers for the SIR model with random vertex weights on Erdős-Rényi graph

    Science.gov (United States)

    Xue, Xiaofeng

    2017-11-01

    In this paper we are concerned with the SIR model with random vertex weights on the Erdős-Rényi graph G(n, p). The Erdős-Rényi graph G(n, p) is generated from the complete graph Cn with n vertices by independently deleting each edge with probability (1 - p). We assign i.i.d. copies of a positive random variable ρ to each vertex as the vertex weights. For the SIR model, each vertex is in one of the three states 'susceptible', 'infective' and 'removed'. An infective vertex infects a given susceptible neighbor at a rate proportional to the product of the weights of these two vertices. An infective vertex becomes removed at a constant rate. A removed vertex will never be infected again. We assume that at t = 0 there is no removed vertex and the number of infective vertices follows a Bernoulli distribution B(n, θ). Our main result is a law of large numbers for the model. We give two deterministic functions HS(ψt), HV(ψt) for t ≥ 0 and show that for any t ≥ 0, HS(ψt) is the limit proportion of susceptible vertices and HV(ψt) is the limit of the mean capability of an infective vertex to infect a given susceptible neighbor at moment t as n grows to infinity.
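    A discrete-time sketch (an approximation, not the paper's exact continuous-time dynamics) of the model described above: edges of G(n, p) are kept by independent coin flips, i.i.d. vertex weights are drawn, and per-step infection probabilities are proportional to the product of the endpoint weights. All parameters (n, p, β, γ, dt, θ) are illustrative:

```python
import random

# Discrete-time sketch of the SIR model with random vertex weights on an
# Erdos-Renyi graph G(n, p): infective u infects a susceptible neighbor v
# with per-step probability ~ beta*rho_u*rho_v*dt, and becomes removed with
# probability gamma*dt. This is an Euler-style approximation of the
# continuous-time model; all parameters are illustrative.

random.seed(7)
n, p = 400, 0.02
beta, gamma, dt, steps = 0.8, 0.5, 0.1, 200

# G(n, p): keep each edge of the complete graph with probability p.
adj = [[] for _ in range(n)]
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            adj[u].append(v)
            adj[v].append(u)

rho = [random.uniform(0.5, 1.5) for _ in range(n)]        # i.i.d. vertex weights
state = ["I" if random.random() < 0.05 else "S"           # theta = 0.05 initially
         for _ in range(n)]

for _ in range(steps):
    infections, removals = [], []
    for u in range(n):
        if state[u] != "I":
            continue
        for v in adj[u]:
            if state[v] == "S" and random.random() < beta * rho[u] * rho[v] * dt:
                infections.append(v)
        if random.random() < gamma * dt:
            removals.append(u)
    for v in infections:
        state[v] = "I"
    for u in removals:
        state[u] = "R"

print("final susceptible fraction:", state.count("S") / n)
```

The law of large numbers in the paper says that, as n grows, the susceptible fraction printed at each time converges to the deterministic function HS(ψt); re-running this sketch with larger n shows the run-to-run spread shrinking accordingly.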

  8. Explaining the large numbers by a hierarchy of ''universes'': a unified theory of strong and gravitational interactions

    International Nuclear Information System (INIS)

    Caldirola, P.; Recami, E.

    1978-01-01

    By assuming covariance of physical laws under (discrete) dilatations, strong and gravitational interactions have been described in a unified way. In terms of the (additional, discrete) ''dilatational'' degree of freedom, our cosmos as well as hadrons can be considered as different states of the same system, or rather as similar systems. Moreover, a discrete hierarchy can be defined of ''universes'' which are governed by force fields with strengths inversely proportional to the ''universe'' radii. Inside each ''universe'' an equivalence principle holds, so that its characteristic field can be geometrized there. It is thus easy to derive a whole ''numerology'', i.e. relations among numbers analogous to the so-called Weyl-Eddington-Dirac ''large numbers''. For instance, the ''Planck mass'' happens to be nothing but the (average) magnitude of the strong charge of the hadron quarks. However, our ''numerology'' connects the (gravitational) macrocosmos with the (strong) microcosmos, rather than with the electromagnetic one (as, e.g., in Dirac's version). Einstein-type scaled equations (with ''cosmological'' term) are suggested for the hadron interior, which - incidentally - yield a (classical) quark confinement in a very natural way and are compatible with ''asymptotic freedom''. Finally, within a ''bi-scale'' theory, further equations are proposed that provide a priori a classical field theory of strong interactions (between different hadrons). The relevant sections are 5.2, 7 and 8. (author)

  9. Modification of the large-scale features of high Reynolds number wall turbulence by passive surface obtrusions

    Energy Technology Data Exchange (ETDEWEB)

    Monty, J.P.; Lien, K.; Chong, M.S. [University of Melbourne, Department of Mechanical Engineering, Parkville, VIC (Australia); Allen, J.J. [New Mexico State University, Department of Mechanical Engineering, Las Cruces, NM (United States)

    2011-12-15

    A high Reynolds number boundary-layer wind-tunnel facility at New Mexico State University was fitted with a regularly distributed braille surface. The surface was such that braille dots were closely packed in the streamwise direction and sparsely spaced in the spanwise direction. This novel surface had an unexpected influence on the flow: the energy of the very large-scale features of wall turbulence (approximately six times the boundary-layer thickness in length) became significantly attenuated, even into the logarithmic region. To the authors' knowledge, this is the first experimental study to report a modification of 'superstructures' in a rough-wall turbulent boundary layer. The result raises the possibility that flow control through very small, passive surface roughness may be possible at high Reynolds numbers, without the prohibitive drag penalty anticipated heretofore. Evidence was also found for the uninhibited existence of the near-wall cycle, well known to smooth-wall-turbulence researchers, in the spanwise space between roughness elements. (orig.)

  10. Development and application of an optogenetic platform for controlling and imaging a large number of individual neurons

    Science.gov (United States)

    Mohammed, Ali Ibrahim Ali

    The understanding and treatment of brain disorders, as well as the development of intelligent machines, is hampered by the lack of knowledge of how the brain fundamentally functions. Over the past century, we have learned much about how individual neurons and neural networks behave; however, new tools are critically needed to interrogate how neural networks give rise to complex brain processes and disease conditions. Recent innovations in molecular techniques, such as optogenetics, have given neuroscientists unprecedented precision to excite, inhibit and record defined neurons. The impressive sensitivity of currently available optogenetic sensors and actuators now makes it possible to analyze a large number of individual neurons in the brains of behaving animals. To promote the use of these optogenetic tools, this thesis integrates cutting-edge optogenetic molecular sensors, which are ultrasensitive for imaging neuronal activity, with a custom wide-field optical microscope to analyze a large number of individual neurons in living brains. Wide-field microscopy provides a large field of view and a spatial resolution approaching the Abbe diffraction limit of the fluorescence microscope. To demonstrate the advantages of this optical platform, we imaged a deep brain structure, the hippocampus, and tracked hundreds of neurons over time while a mouse performed a memory task, to investigate how those individual neurons related to behavior. In addition, we tested our optical platform by investigating transient neural-network changes upon mechanical perturbation related to blast injuries. In this experiment, all blast-exposed mice showed a consistent change in the neural network: a small portion of neurons showed a sustained calcium increase for an extended period of time, whereas the majority lost their activity. Finally, using an optogenetic silencer to control selective motor cortex neurons, we examined their contributions to the network pathology of basal ganglia related to

  11. Comparative study of 6 MV and 15 MV treatment plans for large chest wall irradiation

    International Nuclear Information System (INIS)

    Prasana Sarathy, N.; Kothanda Raman, S.; Sen, Dibyendu; Pal, Bipasha

    2007-01-01

    Conventionally, opposed tangential fields are used for chest wall irradiation. If the chest wall is treated on a linac, 4 or 6 MV photons will be the energy of choice. It is a well-established rule that for chest wall separations up to 22 cm one can use mid-energies with an acceptable volume of hot spots. For larger patient separations (22 cm and above), mid-energy beams produce hot spots over large volumes. The purpose of this work is to compare plans made with 6 and 15 MV photons for patients with large chest wall separations. The obvious disadvantage of using high-energy photons for chest wall irradiation is inadequate dose to the skin, but this can be compensated by using a bolus of suitable thickness.

  12. Comparative analysis of machine learning methods in ligand-based virtual screening of large compound libraries.

    Science.gov (United States)

    Ma, Xiao H; Jia, Jia; Zhu, Feng; Xue, Ying; Li, Ze R; Chen, Yu Z

    2009-05-01

    Machine learning methods have been explored as ligand-based virtual screening tools for facilitating drug lead discovery. These methods predict compounds with specific pharmacodynamic, pharmacokinetic or toxicological properties based on their structure-derived structural and physicochemical properties. Increasing attention has been directed at these methods because of their capability to predict compounds of diverse structures and complex structure-activity relationships without requiring knowledge of the target 3D structure. This article reviews current progress in using machine learning methods for virtual screening of pharmacodynamically active compounds from large compound libraries, and analyzes and compares the reported performance of machine learning tools with that of structure-based and other ligand-based (such as pharmacophore and clustering) virtual screening methods. The feasibility of improving the performance of machine learning methods in screening large libraries is discussed.

  13. Comparing large covariance matrices under weak conditions on the dependence structure and its application to gene clustering.

    Science.gov (United States)

    Chang, Jinyuan; Zhou, Wen; Zhou, Wen-Xin; Wang, Lan

    2017-03-01

    Comparing large covariance matrices has important applications in modern genomics, where scientists are often interested in understanding whether relationships (e.g., dependencies or co-regulations) among a large number of genes vary between different biological states. We propose a computationally fast procedure for testing the equality of two large covariance matrices when the dimensions of the covariance matrices are much larger than the sample sizes. A distinguishing feature of the new procedure is that it imposes no structural assumptions on the unknown covariance matrices. Hence, the test is robust with respect to various complex dependence structures that frequently arise in genomics. We prove that the proposed procedure is asymptotically valid under weak moment conditions. As an interesting application, we derive a new gene clustering algorithm which shares the same nice property of avoiding restrictive structural assumptions for high-dimensional genomics data. Using an asthma gene expression dataset, we illustrate how the new test helps compare the covariance matrices of the genes across different gene sets/pathways between the disease group and the control group, and how the gene clustering algorithm provides new insights on the way gene clustering patterns differ between the two groups. The proposed methods have been implemented in an R-package HDtest and are available on CRAN. © 2016, The International Biometric Society.
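As a rough illustration of the general idea of comparing two sample covariance matrices without structural assumptions, a generic permutation test on the maximum entrywise difference can be sketched as follows. This is emphatically not the Chang et al. test statistic nor the HDtest implementation; function names and the choice of statistic are illustrative assumptions.

```python
import numpy as np

def max_cov_diff(x, y):
    """Maximum absolute entrywise difference between the two sample covariances."""
    return np.max(np.abs(np.cov(x, rowvar=False) - np.cov(y, rowvar=False)))

def permutation_cov_test(x, y, n_perm=200, seed=0):
    """Generic permutation test of H0: Sigma_X = Sigma_Y (rows = observations).

    Under H0 the group labels are exchangeable, so we compare the observed
    statistic against its distribution over random relabelings of the pooled
    sample.  A sketch only; it does not reproduce the paper's asymptotics.
    """
    rng = np.random.default_rng(seed)
    observed = max_cov_diff(x, y)
    pooled = np.vstack([x, y])
    n_x = x.shape[0]
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(pooled.shape[0])
        if max_cov_diff(pooled[idx[:n_x]], pooled[idx[n_x:]]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # permutation p-value
```

A small p-value suggests the two covariance structures differ; the procedure imposes no model on either matrix, mirroring the assumption-free spirit of the paper.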

  14. Comparing direct and iterative equation solvers in a large structural analysis software system

    Science.gov (United States)

    Poole, E. L.

    1991-01-01

    Two direct Choleski equation solvers and two iterative preconditioned conjugate gradient (PCG) equation solvers used in a large structural analysis software system are described. The two direct solvers are implementations of the Choleski method for variable-band matrix storage and sparse matrix storage. The two iterative PCG solvers include the Jacobi conjugate gradient method and an incomplete Choleski conjugate gradient method. The performance of the direct and iterative solvers is compared by solving several representative structural analysis problems. Some key factors affecting the performance of the iterative solvers relative to the direct solvers are identified.
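The two solver families compared in the paper can be contrasted on a toy problem. The sketch below, assuming a small dense symmetric positive-definite matrix standing in for a structural stiffness matrix, pairs a Choleski factorization with a Jacobi-preconditioned conjugate gradient solve via SciPy; it is not the code of the structural analysis system itself.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import cg, LinearOperator

# Build a small SPD test matrix K and load vector f (placeholders for the
# stiffness matrix and load vector of a structural analysis problem).
rng = np.random.default_rng(1)
n = 200
a = rng.standard_normal((n, n))
K = a @ a.T + n * np.eye(n)          # symmetric positive definite
f = rng.standard_normal(n)

# Direct solve: Choleski factorization K = L L^T, then two triangular solves.
u_direct = cho_solve(cho_factor(K), f)

# Iterative solve: conjugate gradients with a Jacobi (diagonal) preconditioner,
# the simplest of the PCG variants mentioned in the abstract.
d = np.diag(K)
M = LinearOperator((n, n), matvec=lambda v: v / d)
u_pcg, info = cg(K, f, M=M)          # info == 0 signals convergence

print(np.max(np.abs(u_direct - u_pcg)))
```

For well-conditioned systems like this one the two answers agree closely; the performance trade-off studied in the paper appears only at realistic problem sizes, where factorization cost and fill-in compete with iteration count.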

  15. A Comparative Study of Four Methods for the Detection of Nematode Eggs and Large Protozoan Cysts in Mandrill Faecal Material.

    Science.gov (United States)

    Pouillevet, Hanae; Dibakou, Serge-Ely; Ngoubangoye, Barthélémy; Poirotte, Clémence; Charpentier, Marie J E

    2017-01-01

    Coproscopical methods such as sedimentation and flotation techniques are widely used in the field for studying simian gastrointestinal parasites. Four parasites of known zoonotic potential were studied in a free-ranging, non-provisioned population of mandrills (Mandrillus sphinx): 2 nematodes (the Necator americanus/Oesophagostomum sp. complex and Strongyloides sp.) and 2 protozoan species (Balantidium coli and Entamoeba coli). Different coproscopical techniques are available, but they are rarely compared to evaluate their efficiency in retrieving parasites. In this study, 4 different field-friendly methods were compared. A sedimentation method and 3 different McMaster methods (using sugar, salt, and zinc sulphate solutions) were performed on 47 faecal samples collected from different individuals of both sexes and all ages. First, we show that McMaster flotation methods are appropriate for detecting and thus quantifying large protozoan cysts. Second, zinc sulphate McMaster flotation allows the retrieval of a higher number of parasite taxa than the other 3 methods. This method further shows the highest probability of detecting each of the studied parasite taxa. Altogether, our results show that zinc sulphate McMaster flotation appears to be the best technique to use when studying nematodes and large protozoa. © 2017 S. Karger AG, Basel.

  16. Dogs Have the Most Neurons, Though Not the Largest Brain: Trade-Off between Body Mass and Number of Neurons in the Cerebral Cortex of Large Carnivoran Species

    Directory of Open Access Journals (Sweden)

    Débora Jardim-Messeder

    2017-12-01

    Carnivorans are a diverse group of mammals that includes carnivorous, omnivorous and herbivorous, domesticated and wild species, with a large range of brain sizes. Carnivory is one of several factors expected to be cognitively demanding for carnivorans, due to a requirement to outsmart larger prey. On the other hand, large carnivoran species have high hunting costs and unreliable feeding patterns, which, given the high metabolic cost of brain neurons, might put them at risk of metabolic constraints on how many brain neurons they can afford, especially in the cerebral cortex. For a given cortical size, do carnivoran species have more cortical neurons than the herbivorous species they prey upon? We find they do not; carnivorans (cat, mongoose, dog, hyena, lion) share with non-primates, including artiodactyls (the typical prey of large carnivorans), roughly the same relationship between cortical mass and number of neurons, which suggests that carnivorans are subject to the same evolutionary scaling rules as other non-primate clades. However, there are a few important exceptions. Carnivorans stand out in that the usual relationship between larger body, larger cortical mass and larger number of cortical neurons only applies to small and medium-sized species, and not beyond dogs: we find that the golden retriever dog has more cortical neurons than the striped hyena, African lion and even brown bear, even though the latter species have up to three times larger cortices than dogs. Remarkably, the brown bear cerebral cortex, the largest examined, only has as many neurons as the ten times smaller cat cerebral cortex, although it does have the expected ten times as many non-neuronal cells in the cerebral cortex compared to the cat. We also find that raccoons have dog-like numbers of neurons in their cat-sized brain, which makes them comparable to primates in neuronal density. Comparison of domestic and wild species suggests that the neuronal

  17. Method for the radioimmunoassay of large numbers of samples using quantitative autoradiography of multiple-well plates

    International Nuclear Information System (INIS)

    Luner, S.J.

    1978-01-01

    A double-antibody assay for thyroxine using ¹²⁵I as the label was carried out on 10-μl samples in Microtiter V-plates. After an additional centrifugation to compact the precipitates, the plates were placed in contact with x-ray film overnight and the spots were scanned. In the 20 to 160 ng/ml range, the average coefficient of variation for thyroxine concentration determined from film-spot optical density was 11 percent, compared to 4.8 percent obtained using a standard gamma counter. Eliminating the need for each sample to spend on the order of 1 min in a crystal well detector makes the method convenient for large-scale applications involving more than 3000 samples per day.

  18. Individual differences influence two-digit number processing, but not their analog magnitude processing: a large-scale online study.

    Science.gov (United States)

    Huber, Stefan; Nuerk, Hans-Christoph; Reips, Ulf-Dietrich; Soltanlou, Mojtaba

    2017-12-23

    Symbolic magnitude comparison is one of the most well-studied cognitive processes in research on numerical cognition. However, while the cognitive mechanisms of symbolic magnitude processing have been intensively studied, previous studies have paid less attention to individual differences influencing symbolic magnitude comparison. Employing a two-digit number comparison task in an online setting, we replicated previous effects, including the distance effect, the unit-decade compatibility effect, and the effect of cognitive control on the adaptation to filler items, in a large-scale study in 452 adults. Additionally, we observed that the most influential individual differences were participants' first language, time spent playing computer games and gender, followed by reported alcohol consumption, age and mathematical ability. Participants who used a first language with a left-to-right reading/writing direction were faster than those who read and wrote in the right-to-left direction. Reported playing time for computer games was correlated with faster reaction times. Female participants showed slower reaction times and a larger unit-decade compatibility effect than male participants. Participants who reported never consuming alcohol showed overall slower response times than others. Older participants were slower, but more accurate. Finally, higher grades in mathematics were associated with faster reaction times. We conclude that typical experiments on numerical cognition that employ a keyboard as an input device can also be run in an online setting. Moreover, while individual differences have no influence on domain-specific magnitude processing-apart from age, which increases the decade distance effect-they generally influence performance on a two-digit number comparison task.

  19. Computational domain length and Reynolds number effects on large-scale coherent motions in turbulent pipe flow

    Science.gov (United States)

    Feldmann, Daniel; Bauer, Christian; Wagner, Claus

    2018-03-01

    We present results from direct numerical simulations (DNS) of turbulent pipe flow at shear Reynolds numbers up to Reτ = 1500 using computational domains with lengths up to ?. The objectives are to analyse the effect of the finite length of the periodic pipe domain on large flow structures as a function of Reτ, and to assess the minimum domain length required for the relevant turbulent scales to be captured as well as the minimum Reτ required for very large-scale motions (VLSM) to be analysed. Analysis of one-point statistics revealed that the mean velocity profile is invariant for ?. The wall-normal location at which deviations occur in shorter domains changes strongly with increasing Reτ, from the near-wall region to the outer layer, where VLSM are believed to live. The root-mean-square velocity profiles exhibit domain-length dependencies for pipes shorter than 14R or 7R, depending on Reτ. For all Reτ, the higher-order statistical moments show only weak dependencies, and only for the shortest domain considered here. However, the analysis of one- and two-dimensional pre-multiplied energy spectra revealed that even for larger ?, not all physically relevant scales are fully captured, even though the aforementioned statistics are in good agreement with the literature. We found ? to be sufficiently large to capture VLSM-relevant turbulent scales in the considered range of Reτ, based on our definition of an integral energy threshold of 10%. The requirement to capture at least 1/10 of the global maximum energy level is justified by a 14% increase of the streamwise turbulence intensity in the outer region between Reτ = 720 and 1500, which can be related to VLSM-relevant length scales. Based on this scaling anomaly, we found Reτ ⪆ 1500 to be a necessary minimum requirement to investigate VLSM-related effects in pipe flow, even though the streamwise energy spectra do not yet indicate sufficient scale separation between the most energetic and the very long motions.

  20. Catering for large numbers of tourists: the McDonaldization of casual dining in Kruger National Park

    Directory of Open Access Journals (Sweden)

    Ferreira Sanette L.A.

    2016-09-01

    Since 2002 Kruger National Park (KNP) has been subject to a commercialisation strategy. Regarding income generation, SANParks sees KNP as the goose that lays the golden eggs. As part of SANParks' commercialisation strategy, and in response to the need to provide services that are efficient, predictable and calculable for a large number of tourists, SANParks has allowed well-known branded restaurants to be established in certain rest camps in KNP. This innovation has raised a range of concerns and opinions among the public. This paper investigates the what and the where of casual dining experiences in KNP; describes how the catering services have evolved over the last 70 years; and evaluates current visitor perceptions of the introduction of franchised restaurants in the park. The main research instrument was a questionnaire survey. Survey findings confirmed that restaurant managers, park managers and visitors recognise franchised restaurants as positive contributors to the unique KNP experience. Park managers appraised the franchised restaurants as mechanisms for funding conservation.

  1. Meta-ethnography 25 years on: challenges and insights for synthesising a large number of qualitative studies

    Science.gov (United States)

    2014-01-01

    Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients’ experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team’s reflexive statements to illustrate the development of our methods. PMID:24951054

  2. Meta-ethnography 25 years on: challenges and insights for synthesising a large number of qualitative studies.

    Science.gov (United States)

    Toye, Francine; Seers, Kate; Allcock, Nick; Briggs, Michelle; Carr, Eloise; Barker, Karen

    2014-06-21

    Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients' experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team's reflexive statements to illustrate the development of our methods.

  3. Attenuation of contaminant plumes in homogeneous aquifers: Sensitivity to source function at moderate to large peclet numbers

    International Nuclear Information System (INIS)

    Selander, W.N.; Lane, F.E.; Rowat, J.H.

    1995-05-01

    A groundwater mass transfer calculation is an essential part of the performance assessment for radioactive waste disposal facilities. AECL's IRUS (Intrusion Resistant Underground Structure) facility, which is designed for the near-surface disposal of low-level radioactive waste (LLRW), is to be situated in the sandy overburden at AECL's Chalk River Laboratories. Flow in the sandy aquifers at the proposed IRUS site is relatively homogeneous and advection-dominated (large Peclet numbers). Mass transfer along the mean direction of flow from the IRUS site may be described using the one-dimensional advection-dispersion equation, for which a Green's function representation of downstream radionuclide flux is convenient. This report shows that in advection-dominated aquifers, dispersive attenuation of initial contaminant releases depends principally on two time scales: the source duration and the pulse breakthrough time. Numerical investigation shows further that the maximum downstream flux or concentration depends on these time scales in a simple characteristic way that is minimally sensitive to the shape of the initial source pulse. (author). 11 refs., 2 tabs., 3 figs
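The Green's-function representation mentioned above can be illustrated with a minimal numerical sketch. The code uses the standard point-source solution of the 1-D advection-dispersion equation and superposes it over a rectangular source pulse; the function names, the unit release rate, and the parameter values are illustrative assumptions, not the report's code.

```python
import math

def green(x, t, v, d):
    """Point-source Green's function of the 1-D advection-dispersion equation
    c_t + v c_x = d c_xx, for a unit mass released at x = 0, t = 0."""
    return math.exp(-(x - v * t) ** 2 / (4.0 * d * t)) / math.sqrt(4.0 * math.pi * d * t)

def downstream_concentration(x, t, v, d, source_duration, dt=0.01):
    """Superpose the Green's function over a rectangular source pulse of the
    given duration (unit release rate), i.e. approximate the convolution
    integral c(x, t) = ∫ G(x, t - tau) q(tau) dtau with a midpoint sum."""
    total, tau = 0.0, dt / 2.0
    while tau < min(source_duration, t):
        total += green(x, t - tau, v, d) * dt
        tau += dt
    return total
```

For an advection-dominated aquifer (large Peclet number Pe = v·x/d), varying `source_duration` against the breakthrough time x/v reproduces the report's observation that peak attenuation is governed mainly by these two time scales rather than by the pulse shape.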

  4. Large Eddy Simulation study of the development of finite-channel lock-release currents at high Grashof numbers

    Science.gov (United States)

    Ooi, Seng-Keat

    2005-11-01

    Lock-exchange gravity current flows produced by the instantaneous release of a heavy fluid are investigated using well-resolved 3-D large eddy simulations (LES) at Grashof numbers up to 8*10^9. The 3-D simulations are found to correctly predict a constant front velocity over the initial slumping phase and a front-speed decrease proportional to t^(-1/3) (the time t is measured from the release) over the inviscid phase, in agreement with theory. The evolution of the current in the simulations is similar to that observed experimentally by Hacker et al. (1996). The effect of the dynamic LES model on the solutions is discussed. The energy budget of the current is discussed and the contribution of the turbulent dissipation to the total dissipation is analyzed. The limitations of less expensive 2-D simulations are discussed, in particular their failure to correctly predict the spatio-temporal distribution of the bed shear stresses, which is important in determining the amount of sediment the gravity current can entrain when it advances over a loose bed.

  5. Exploration of large, rare copy number variants associated with psychiatric and neurodevelopmental disorders in individuals with anorexia nervosa.

    Science.gov (United States)

    Yilmaz, Zeynep; Szatkiewicz, Jin P; Crowley, James J; Ancalade, NaEshia; Brandys, Marek K; van Elburg, Annemarie; de Kovel, Carolien G F; Adan, Roger A H; Hinney, Anke; Hebebrand, Johannes; Gratacos, Monica; Fernandez-Aranda, Fernando; Escaramis, Georgia; Gonzalez, Juan R; Estivill, Xavier; Zeggini, Eleftheria; Sullivan, Patrick F; Bulik, Cynthia M

    2017-08-01

    Anorexia nervosa (AN) is a serious and heritable psychiatric disorder. To date, studies of copy number variants (CNVs) have been limited and inconclusive because of small sample sizes. We conducted a case-only genome-wide CNV survey in 1983 female AN cases included in the Genetic Consortium for Anorexia Nervosa. Following stringent quality control procedures, we investigated whether pathogenic CNVs in regions previously implicated in psychiatric and neurodevelopmental disorders were present in AN cases. We observed two instances of the well-established pathogenic CNVs in AN cases. In addition, one case had a deletion in the 13q12 region, overlapping with a deletion reported previously in two AN cases. As a secondary aim, we also examined our sample for CNVs over 1 Mbp in size. Out of the 40 instances of such large CNVs that were not implicated previously for AN or neuropsychiatric phenotypes, two of them contained genes with previous neuropsychiatric associations, and only five of them had no associated reports in public CNV databases. Although ours is the largest study of its kind in AN, larger datasets are needed to comprehensively assess the role of CNVs in the etiology of AN.

  6. Aerodynamic Effects of High Turbulence Intensity on a Variable-Speed Power-Turbine Blade With Large Incidence and Reynolds Number Variations

    Science.gov (United States)

    Flegel, Ashlie B.; Giel, Paul W.; Welch, Gerard E.

    2014-01-01

    The effects of high inlet turbulence intensity on the aerodynamic performance of a variable-speed power-turbine blade are examined over large incidence and Reynolds number ranges. These results are compared to previous measurements made in a low-turbulence environment. Both the high- and low-turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet-turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The current study extends this to LPT-relevant turbulence levels while perhaps sacrificing transitional flow effects. Assessing the effects of turbulence at these large incidence and Reynolds number variations complements the existing database. Downstream total pressure and exit angle data were acquired for 10 incidence angles ranging from +15.8° to −51.0°. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12×10^5 to 2.12×10^6 and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hotwire located 0.415 axial chord upstream of the blade row. The inlet turbulence levels ranged from 8 to 15 percent for the current study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated as needed for computational inlet boundary conditions. Downstream flow-field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7 percent axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction surface separation that was evident at many of the low-Tu conditions has been eliminated. At

  7. DISCOVERY OF A LARGE NUMBER OF CANDIDATE PROTOCLUSTERS TRACED BY ∼15 Mpc-SCALE GALAXY OVERDENSITIES IN COSMOS

    International Nuclear Information System (INIS)

    Chiang, Yi-Kuan; Gebhardt, Karl; Overzier, Roderik

    2014-01-01

    To demonstrate the feasibility of studying the epoch of massive galaxy cluster formation in a more systematic manner using current and future galaxy surveys, we report the discovery of a large sample of protocluster candidates in the 1.62 deg² COSMOS/UltraVISTA field traced by optical/infrared selected galaxies using photometric redshifts. By comparing properly smoothed three-dimensional galaxy density maps of the observations and a set of matched simulations incorporating the dominant observational effects (galaxy selection and photometric redshift uncertainties), we first confirm that the observed ∼15 comoving Mpc-scale galaxy clustering is consistent with ΛCDM models. Using further the relation between high-z overdensity and the present-day cluster mass calibrated in these matched simulations, we found 36 candidate structures at 1.6 < z < 3.1, showing overdensities consistent with the progenitors of M_z=0 ∼ 10^15 M_☉ clusters. Taking into account the significant upward scattering of lower-mass structures, the probabilities for the candidates to have at least M_z=0 ∼ 10^14 M_☉ are ∼70%. For each structure, about 15%-40% of photometric galaxy candidates are expected to be true protocluster members that will merge into a cluster-scale halo by z = 0. With solely photometric redshifts, we successfully rediscover two spectroscopically confirmed structures in this field, suggesting that our algorithm is robust. This work generates a large sample of uniformly selected protocluster candidates, providing rich targets for spectroscopic follow-up and subsequent studies of cluster formation. Meanwhile, it demonstrates the potential for probing early cluster formation with upcoming redshift surveys such as the Hobby-Eberly Telescope Dark Energy Experiment and the Subaru Prime Focus Spectrograph survey

  8. Placement of endosseous implant in infected alveolar socket with large fenestration defect: A comparative case report

    Directory of Open Access Journals (Sweden)

    Balaji Anitha

    2010-01-01

    Placement of endosseous implants into infected bone is often deferred or avoided for fear of failure. However, with the development of guided bone regeneration (GBR), some implantologists have reported successful implant placement in infected sockets, even those with fenestration defects. We had the opportunity to compare the osseointegration of an immediate implant placed in an infected site associated with a large buccal fenestration created by the removal of a root stump with that of a delayed implant placed 5 years after extraction. Both implants were placed in the same patient, in the same dental quadrant, by the same implantologist. GBR was used, with the fenestration defect being filled with demineralized bone graft and covered with a collagen membrane. Both implants were osseointegrated and functional when followed up after 12 months.

  9. Use of Two-Body Correlated Basis Functions with van der Waals Interaction to Study the Shape-Independent Approximation for a Large Number of Trapped Interacting Bosons

    Science.gov (United States)

    Lekala, M. L.; Chakrabarti, B.; Das, T. K.; Rampho, G. J.; Sofianos, S. A.; Adam, R. M.; Haldar, S. K.

    2017-05-01

    We study the ground-state and the low-lying excitations of a trapped Bose gas in an isotropic harmonic potential for very small (˜ 3) to very large (˜ 10^7) particle numbers. We use two-body correlated basis functions and the shape-dependent van der Waals interaction in our many-body calculations. We present an exhaustive study of the effect of inter-atomic correlations and of the accuracy of the mean-field equations over a wide range of particle numbers. We calculate the ground-state energy and the one-body density for different values of the van der Waals parameter C6. We compare our results with those of the modified Gross-Pitaevskii equation, the correlated Hartree hypernetted-chain equations (which also utilize two-body correlated basis functions), and diffusion Monte Carlo for hard-sphere interactions. We observe the effect of the attractive tail of the van der Waals potential on the calculated one-body density, compared with the purely repulsive zero-range potential used in the Gross-Pitaevskii equation, and discuss finite-size effects. We also present the low-lying collective excitations, which are well described by a hydrodynamic model in the large-particle-number limit.

  10. Challenges and opportunities in coding the commons: problems, procedures, and potential solutions in large-N comparative case studies

    Directory of Open Access Journals (Sweden)

    Elicia Ratajczyk

    2016-09-01

    On-going efforts to understand the dynamics of coupled social-ecological (or, more broadly, coupled infrastructure) systems and common pool resources have led to the generation of numerous datasets based on a large number of case studies. These data have facilitated the identification of important factors and fundamental principles which increase our understanding of such complex systems. However, the data at our disposal are often not easily comparable, have limited scope and scale, and are based on disparate underlying frameworks, inhibiting synthesis, meta-analysis, and the validation of findings. Research efforts are further hampered when case inclusion criteria, variable definitions, coding schema, and inter-coder reliability testing are not made explicit in the presentation of research and shared among the research community. This paper first outlines challenges experienced by researchers engaged in a large-scale coding project; then highlights valuable lessons learned; and finally discusses opportunities for further research on comparative case study analysis focusing on social-ecological systems and common pool resources.

  11. Primary care COPD patients compared with large pharmaceutically-sponsored COPD studies: an UNLOCK validation study.

    Directory of Open Access Journals (Sweden)

    Annemarije L Kruis

    BACKGROUND: Guideline recommendations for chronic obstructive pulmonary disease (COPD) are based on the results of large pharmaceutically-sponsored COPD studies (LPCS). There is a paucity of data on disease characteristics at the primary care level, while the majority of COPD patients are treated in primary care. OBJECTIVE: We aimed to evaluate the external validity of six LPCS (ISOLDE, TRISTAN, TORCH, UPLIFT, ECLIPSE, POET-COPD) on which current guidelines are based, in relation to primary care COPD patients, in order to inform future clinical practice guidelines and trials. METHODS: Baseline data of seven primary care databases (n=3508) from Europe were compared to baseline data of the LPCS. In addition, we examined the proportion of primary care patients eligible to participate in the LPCS, based on inclusion criteria. RESULTS: Overall, patients included in the LPCS were younger (mean difference (MD) -2.4; p=0.03), predominantly male (MD 12.4; p=0.1), with worse lung function (FEV1% MD -16.4; p<0.01) and worse quality of life scores (SGRQ MD 15.8; p=0.01). There were large differences in GOLD stage distribution compared to primary care patients. Mean exacerbation rates were higher in LPCS, with an overrepresentation of patients with ≥ 1 and ≥ 2 exacerbations, although results were not statistically significant. Our findings add to the literature, as we revealed hitherto unknown GOLD I exacerbation characteristics, showing 34% of mild patients had ≥ 1 exacerbations per year and 12% had ≥ 2 exacerbations per year. The proportion of primary care patients eligible for inclusion in LPCS ranged from 17% (TRISTAN) to 42% (ECLIPSE, UPLIFT). CONCLUSION: Primary care COPD patients stand out from patients enrolled in LPCS in terms of gender, lung function, quality of life and exacerbations. More research is needed to determine the effect of pharmacological treatment in mild to moderate patients. We encourage future guideline makers to involve primary care

  12. Email-Based Informed Consent: Innovative Method for Reaching Large Numbers of Subjects for Data Mining Research

    Science.gov (United States)

    Lee, Lesley R.; Mason, Sara S.; Babiak-Vazquez, Adriana; Ray, Stacie L.; Van Baalen, Mary

    2015-01-01

    Since the 2010 NASA authorization to make the Life Sciences Data Archive (LSDA) and Lifetime Surveillance of Astronaut Health (LSAH) data archives more accessible by the research and operational communities, demand for data has greatly increased. Correspondingly, both the number and scope of requests have increased, from 142 requests fulfilled in 2011 to 224 in 2014, and with some datasets comprising up to 1 million data points. To meet the demand, the LSAH and LSDA Repositories project was launched, which allows active and retired astronauts to authorize full, partial, or no access to their data for research without individual, study-specific informed consent. A one-on-one personal informed consent briefing is required to fully communicate the implications of the several tiers of consent. Due to the need for personal contact to conduct Repositories consent meetings, the rate of consenting has not kept up with demand for individualized, possibly attributable data. As a result, other methods had to be implemented to allow the release of large datasets, such as release of only de-identified data. However, the compilation of large, de-identified data sets places a significant resource burden on LSAH and LSDA and may result in diminished scientific usefulness of the dataset. As a result, LSAH and LSDA worked with the JSC Institutional Review Board Chair, Astronaut Office physicians, and NASA Office of General Counsel personnel to develop a "Remote Consenting" process for retrospective data mining studies. This is particularly useful since the majority of the astronaut cohort is retired from the agency and living outside the Houston area. Originally planned as a method to send informed consent briefing slides and consent forms only by mail, Remote Consenting has evolved into a means to accept crewmember decisions on individual studies via their method of choice: email or paper copy by mail. To date, 100 emails have been sent to request participation in eight HRP

  13. Small on the Left, Large on the Right: Numbers Orient Visual Attention onto Space in Preverbal Infants

    Science.gov (United States)

    Bulf, Hermann; de Hevia, Maria Dolores; Macchi Cassia, Viola

    2016-01-01

    Numbers are represented as ordered magnitudes along a spatially oriented number line. While culture and formal education modulate the direction of this number-space mapping, it is a matter of debate whether its emergence is entirely driven by cultural experience. By registering 8-9-month-old infants' eye movements, this study shows that numerical…

  14. Large-scale Comparative Study of Hi-C-based Chromatin 3D Structure Modeling Methods

    KAUST Repository

    Wang, Cheng

    2018-05-17

    Chromatin is a complex polymer molecule in eukaryotic cells, primarily consisting of DNA and histones. Many studies have shown that the 3D folding of chromatin structure plays an important role in DNA expression. The recently proposed Chromosome Conformation Capture technologies, especially the Hi-C assays, provide an opportunity to study how the 3D structures of the chromatin are organized. Based on the data from Hi-C experiments, many chromatin 3D structure modeling methods have been proposed. However, there is limited ground truth to validate these methods and no robust chromatin structure alignment algorithms to evaluate their performance. In our work, we first made a thorough literature review of 25 publicly available population Hi-C-based chromatin 3D structure modeling methods. Furthermore, to evaluate and compare the performance of these methods, we proposed a novel data simulation method, which combined the population Hi-C data and single-cell Hi-C data without ad hoc parameters. Also, we designed global and local alignment algorithms to measure the similarity between the templates and the chromatin structures predicted by different modeling methods. Finally, the results from large-scale comparative tests indicated that our alignment algorithms significantly outperform the algorithms in the literature.

  15. A dynamic response model for pressure sensors in continuum and high Knudsen number flows with large temperature gradients

    Science.gov (United States)

    Whitmore, Stephen A.; Petersen, Brian J.; Scott, David D.

    1996-01-01

    This paper develops a dynamic model for pressure sensors in continuum and rarefied flows with longitudinal temperature gradients. The model was developed from the unsteady Navier-Stokes momentum, energy, and continuity equations and was linearized using small perturbations. The energy equation was decoupled from momentum and continuity assuming a polytropic flow process. Rarefied flow conditions were accounted for using a slip flow boundary condition at the tubing wall. The equations were radially averaged and solved assuming gas properties remain constant along a small tubing element. This fundamental solution was used as a building block for arbitrary geometries where fluid properties may also vary longitudinally in the tube. The problem was solved recursively starting at the transducer and working upstream in the tube. Dynamic frequency response tests were performed for continuum flow conditions in the presence of temperature gradients. These tests validated the recursive formulation of the model. Model steady-state behavior was analyzed using the final value theorem. Tests were performed for rarefied flow conditions and compared to the model steady-state response to evaluate the regime of applicability. Model comparisons were excellent for Knudsen numbers up to 0.6. Beyond this point, molecular effects caused model analyses to become inaccurate.

  16. A comparative study of the number and mass of fine particles emitted with diesel fuel and marine gas oil (MGO)

    Science.gov (United States)

    Nabi, Md. Nurun; Brown, Richard J.; Ristovski, Zoran; Hustad, Johan Einar

    2012-09-01

    The current investigation reports on diesel particulate matter emissions, with special interest in fine particles from the combustion of two base fuels. The base fuels selected were diesel fuel and marine gas oil (MGO). The experiments were conducted with a four-stroke, six-cylinder, direct injection diesel engine. The results showed that the fine particle number emissions measured by both SMPS and ELPI were higher with MGO compared to diesel fuel. It was observed that the fine particle number emissions with the two base fuels were quantitatively different but qualitatively similar. The gravimetric (mass basis) measurement also showed higher total particulate matter (TPM) emissions with the MGO. The smoke emissions, which were part of TPM, were also higher for the MGO. No significant changes in the mass flow rate of fuel and the brake-specific fuel consumption (BSFC) were observed between the two base fuels.

  17. Comparative performance of modern digital mammography systems in a large breast screening program

    Energy Technology Data Exchange (ETDEWEB)

    Yaffe, Martin J., E-mail: martin.yaffe@sri.utoronto.ca; Bloomquist, Aili K.; Hunter, David M.; Mawdsley, Gordon E. [Physical Sciences Division, Sunnybrook Research Institute, Departments of Medical Biophysics and Medical Imaging, University of Toronto, Ontario M4N 3M5 (Canada); Chiarelli, Anna M. [Prevention and Cancer Control, Cancer Care Ontario, Dalla Lana School of Public Health, University of Toronto, Ontario M4N 3M5, Canada and Ontario Breast Screening Program, Cancer Care Ontario, Toronto, Ontario M5G 1X3 (Canada); Muradali, Derek [Ontario Breast Screening Program, Cancer Care Ontario, Toronto, Ontario M5G 1X3 (Canada); Mainprize, James G. [Physical Sciences Division, Sunnybrook Research Institute, Toronto, Ontario M4N 3M5 (Canada)

    2013-12-15

    Purpose: To compare physical measures pertaining to image quality among digital mammography systems utilized in a large breast screening program, and to examine qualitative differences in these measures and in clinical cancer detection rates between CR and DR among sites within that program. Methods: As part of the routine quality assurance program for screening, field measurements are made of several variables considered to correlate with the diagnostic quality of medical images, including: modulation transfer function, noise equivalent quanta, d′ (an index of lesion detectability) and air kerma to allow estimation of mean glandular dose. In addition, images of the mammography accreditation phantom are evaluated. Results: It was found that overall there were marked differences between the performance measures of DR and CR mammography systems. In particular, the modulation transfer functions obtained with the DR systems were found to be higher, even for larger detector element sizes. Similarly, the noise equivalent quanta, d′, and the phantom scores were higher, while the failure rates associated with low signal-to-noise ratio and high dose were lower with DR. These results were consistent with previous findings in the authors’ program that the breast cancer detection rates at sites employing CR technology were, on average, 30.6% lower than those that used DR mammography. Conclusions: While the clinical study was not large enough to allow a statistically powered system-by-system assessment of cancer detection accuracy, the physical measures expressing spatial resolution and signal-to-noise ratio are consistent with the published finding that sites employing CR systems had lower cancer detection rates than those using DR systems for screening mammography.

  18. Comparative analysis of the number of scientific papers entered to INIS: the case Mexico versus Brazil-Argentina

    International Nuclear Information System (INIS)

    Contreras, T.J.; Botello C, R.

    1994-01-01

    A comparative analysis is presented of the scientific papers that the INIS National Center in Mexico has entered into the International Nuclear Information System from 1976 to date. Emphasis is placed on the number of documents as well as on the participating institutions and the diversity of subjects. The results show that Mexico's input of documents is low compared with that of two other Latin American countries, Brazil and Argentina, considering the production of technical and scientific information in the field of energy sources. For this reason, and on the basis of the new thematic scope of INIS covering the environmental, economic and social aspects of energy, the aim is to establish a formal engagement with the participating institutions to gather the documentation they generate and remit it to CIDN for inclusion in INIS. (Author)

  19. Large Gain in Air Quality Compared to an Alternative Anthropogenic Emissions Scenario

    Science.gov (United States)

    Daskalakis, Nikos; Tsigaridis, Kostas; Myriokefalitakis, Stelios; Fanourgakis, George S.; Kanakidou, Maria

    2016-01-01

    During the last 30 years, significant effort has been made to improve air quality through legislation for emissions reduction. Global three-dimensional chemistry-transport simulations of atmospheric composition over the past 3 decades have been performed to estimate what the air quality levels would have been under a scenario of stagnation of anthropogenic emissions per capita as in 1980, accounting for the population increase (BA1980) or using the standard practice of neglecting it (AE1980), and how they compare to the historical changes in air quality levels. The simulations are based on assimilated meteorology to account for the year-to-year observed climate variability and on different scenarios of anthropogenic emissions of pollutants. The ACCMIP historical emissions dataset is used as the starting point. Our sensitivity simulations provide clear indications that air quality legislation and technology developments have limited the rapid increase of air pollutants. The achieved reductions in concentrations of nitrogen oxides, carbon monoxide, black carbon, and sulfate aerosols are found to be significant when comparing to both BA1980 and AE1980 simulations that neglect any measures applied for the protection of the environment. We also show the potentially large tropospheric air quality benefit from the development of cleaner technology used by the growing global population. These 30-year hindcast sensitivity simulations demonstrate that the actual benefit in air quality due to air pollution legislation and technological advances is higher than the gain calculated by a simple comparison against a constant anthropogenic emissions simulation, as is usually done. Our results also indicate that over China and India the beneficial technological advances for the air quality may have been masked by the explosive increase in local population and the disproportional increase in energy demand partially due to the globalization of the economy.

  20. Large gain in air quality compared to an alternative anthropogenic emissions scenario

    Directory of Open Access Journals (Sweden)

    N. Daskalakis

    2016-08-01

    During the last 30 years, significant effort has been made to improve air quality through legislation for emissions reduction. Global three-dimensional chemistry-transport simulations of atmospheric composition over the past 3 decades have been performed to estimate what the air quality levels would have been under a scenario of stagnation of anthropogenic emissions per capita as in 1980, accounting for the population increase (BA1980) or using the standard practice of neglecting it (AE1980), and how they compare to the historical changes in air quality levels. The simulations are based on assimilated meteorology to account for the year-to-year observed climate variability and on different scenarios of anthropogenic emissions of pollutants. The ACCMIP historical emissions dataset is used as the starting point. Our sensitivity simulations provide clear indications that air quality legislation and technology developments have limited the rapid increase of air pollutants. The achieved reductions in concentrations of nitrogen oxides, carbon monoxide, black carbon, and sulfate aerosols are found to be significant when comparing to both BA1980 and AE1980 simulations that neglect any measures applied for the protection of the environment. We also show the potentially large tropospheric air quality benefit from the development of cleaner technology used by the growing global population. These 30-year hindcast sensitivity simulations demonstrate that the actual benefit in air quality due to air pollution legislation and technological advances is higher than the gain calculated by a simple comparison against a constant anthropogenic emissions simulation, as is usually done. Our results also indicate that over China and India the beneficial technological advances for the air quality may have been masked by the explosive increase in local population and the disproportional increase in energy demand partially due to the globalization of the economy.

  1. Retrieval of very large numbers of items in the Web of Science: an exercise to develop accurate search strategies

    NARCIS (Netherlands)

    Arencibia-Jorge, R.; Leydesdorff, L.; Chinchilla-Rodríguez, Z.; Rousseau, R.; Paris, S.W.

    2009-01-01

    The Web of Science interface counts at most 100,000 retrieved items from a single query. If the query results in a dataset containing more than 100,000 items, the number of retrieved items is indicated as >100,000. The problem studied here is how to find the exact number of items in a query that
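
    The counting problem posed here can be sketched as a divide-and-conquer over a disjoint partitioning field such as publication year: split any subquery whose reported count hits the cap, and sum the exact subcounts. The `count_hits` function below is a toy stand-in of my own (not a real Web of Science API) that mimics an interface capping reported counts at 100,000:

```python
# Sketch under the assumption that counts can be requested per year range.
CAP = 100_000

def count_hits(year_lo, year_hi):
    # Toy stand-in for a capped search interface: each year 2000-2009
    # contributes a known number of records, and totals above CAP are
    # reported simply as CAP ("more than 100,000").
    per_year = {y: 30_000 + 1_000 * (y - 2000) for y in range(2000, 2010)}
    total = sum(per_year.get(y, 0) for y in range(year_lo, year_hi + 1))
    return min(total, CAP)

def exact_count(year_lo, year_hi):
    n = count_hits(year_lo, year_hi)
    if n < CAP or year_lo == year_hi:
        # A capped single year would need a finer splitting field in practice.
        return n
    mid = (year_lo + year_hi) // 2
    return exact_count(year_lo, mid) + exact_count(mid + 1, year_hi)

print(exact_count(2000, 2009))
```

    Summing the halves is exact only because the year ranges are disjoint and exhaustive, which is why mutually exclusive facets are the natural splitting field for such search strategies.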

  2. CD3+/CD16+CD56+ cell numbers in peripheral blood are correlated with higher tumor burden in patients with diffuse large B-cell lymphoma

    Directory of Open Access Journals (Sweden)

    Anna Twardosz

    2011-04-01

    Diffuse large B-cell lymphoma is the commonest histological type of malignant lymphoma, and remains incurable in many cases. Developing more efficient immunotherapy strategies will require better understanding of the disorders of immune responses in cancer patients. NKT (natural killer-like T) cells were originally described as a unique population of T cells with the co-expression of NK cell markers. Apart from their role in protecting against microbial pathogens and controlling autoimmune diseases, NKT cells have recently been revealed as one of the key players in the immune responses against tumors. The objective of this study was to evaluate the frequency of CD3+/CD16+CD56+ cells in the peripheral blood of 28 diffuse large B-cell lymphoma (DLBCL) patients in correlation with clinical and laboratory parameters. Median percentages of CD3+/CD16+CD56+ cells were significantly lower in patients with DLBCL compared to healthy donors (7.37% vs. 9.01%, p = 0.01; 4.60% vs. 5.81%, p = 0.03), although there were no differences in absolute counts. The frequency and the absolute numbers of CD3+/CD16+CD56+ cells were lower in advanced clinical stages than in earlier ones. The median percentage of CD3+/CD16+CD56+ cells in patients in Ann Arbor stages 1–2 was 5.55% vs. 3.15% in stages 3–4 (p = 0.02), with median absolute counts respectively 0.26 G/L vs. 0.41 G/L (p = 0.02). The percentage and absolute numbers of CD3+/CD16+CD56+ cells were significantly higher in DLBCL patients without B-symptoms compared to the patients with B-symptoms (5.51% vs. 2.46%, p = 0.04; 0.21 G/L vs. 0.44 G/L, p = 0.04). The percentage of CD3+/CD16+CD56+ cells correlated adversely with serum lactate dehydrogenase (R = –0.445; p < 0.05), which might influence NKT count. These figures suggest a relationship between higher tumor burden, more aggressive disease, and decreased NKT numbers. But it remains to be explained whether low NKT cell counts in the peripheral blood of patients with DLBCL are the result

  3. Visual stimulus parameters seriously compromise the measurement of approximate number system acuity and comparative effects between adults and children

    Directory of Open Access Journals (Sweden)

    Denes Szucs

    2013-07-01

    It has been suggested that a simple non-symbolic magnitude comparison task is sufficient to measure the acuity of a putative Approximate Number System (ANS). A proposed measure of the ANS, the so-called 'internal Weber fraction' (w), would provide a clear measure of ANS acuity. However, ANS studies have never presented adequate evidence that the visual stimulus parameters did not compromise measurements of w to such an extent that w is actually driven by visual instead of numerical processes. We therefore investigated this question by testing non-symbolic magnitude discrimination in seven-year-old children and adults. We controlled for visual parameters in a more stringent manner than usual. As a consequence of these controls, in some trials visual cues correlated positively with number while in others they correlated negatively with number. This congruency effect strongly correlated with w, which means that congruency effects were probably driving effects in w. Consequently, in both adults and children congruency had a major impact on the fit of the model underlying the computation of w. Furthermore, children showed larger congruency effects than adults. This suggests that ANS tasks are seriously compromised by the visual stimulus parameters, which cannot be controlled. Hence, they are not pure measures of the ANS, and some putative w or ratio effect differences between children and adults in previous ANS studies may be due to the differential influence of the visual stimulus parameters in children and adults. In addition, because the resolution of congruency effects relies on inhibitory (interference suppression) function, some previous ANS findings were probably influenced by the developmental state of inhibitory processes, especially when comparing children with developmental dyscalculia and typically developing children.
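
    The abstract does not specify the model behind w, but a commonly assumed ANS psychophysics formulation (an assumption of mine, following the linear-ratio Gaussian model often used in this literature: each numerosity n is represented with standard deviation w·n) predicts comparison accuracy from the two numerosities and w, and can be fit in a few lines:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def p_correct(pair, w):
    # Gaussian ANS model: accuracy depends on |n2 - n1| scaled by
    # w * sqrt(n1^2 + n2^2), the SD of the difference of the two representations.
    n1, n2 = pair
    return 1 - 0.5 * erfc(np.abs(n2 - n1) / (np.sqrt(2) * w * np.sqrt(n1**2 + n2**2)))

# Hypothetical numerosity pairs; accuracies generated from a known w = 0.25
# so the recovered estimate can be checked against ground truth.
n1 = np.array([8.0, 10.0, 12.0, 9.0])
n2 = np.array([16.0, 15.0, 16.0, 12.0])
acc = p_correct((n1, n2), 0.25)

w_hat, _ = curve_fit(p_correct, (n1, n2), acc, p0=[0.3], bounds=(0.01, 2.0))
print(round(float(w_hat[0]), 3))
```

    The study's point is precisely that such a fit is only as good as the stimulus controls: if congruency effects leak into accuracy, the recovered w reflects visual as well as numerical processing.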

  4. Commercial Internet Adoption in China: Comparing the Experience of Small, Medium and Large Businesses.

    Science.gov (United States)

    Riquelme, Hernan

    2002-01-01

    Describes a study of small, medium, and large enterprises in Shanghai, China that investigated which size companies benefit the most from the Internet. Highlights include leveling the ground for small and medium enterprises (SMEs); increased sales and cost savings for large companies; and competitive advantages. (LRW)

  5. Comparing spatial regression to random forests for large environmental data sets

    Science.gov (United States)

    Environmental data may be “large” due to number of records, number of covariates, or both. Random forests has a reputation for good predictive performance when using many covariates, whereas spatial regression, when using reduced rank methods, has a reputatio...
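
    A minimal sketch of such a comparison on simulated data (the synthetic dataset and the plain linear baseline are my own simplifications; the study's spatial regression uses reduced rank methods, for which an ordinary linear model is only a crude stand-in):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "environmental" records: two spatial coordinates, one covariate,
# and a response with a strongly nonlinear spatial trend.
n = 2000
xy = rng.uniform(0, 10, size=(n, 2))
cov = rng.normal(size=n)
y = 3.0 * np.sin(xy[:, 0]) + 0.5 * xy[:, 1] + cov + rng.normal(0, 0.1, n)
X = np.column_stack([xy, cov])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
lin = LinearRegression().fit(X_tr, y_tr)  # crude non-spatial baseline

r2_rf = r2_score(y_te, rf.predict(X_te))
r2_lin = r2_score(y_te, lin.predict(X_te))
print(round(r2_rf, 2), round(r2_lin, 2))
```

    On this toy problem the forest captures the nonlinear spatial trend that the linear baseline misses; a proper spatial regression would instead model that trend through spatially correlated random effects.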

  6. Experimental observation of pulsating instability under acoustic field in downward-propagating flames at large Lewis number

    KAUST Repository

    Yoon, Sung Hwan

    2017-10-12

    According to previous theory, pulsating propagation in a premixed flame only appears when the reduced Lewis number, β(Le−1), is larger than a critical value (Sivashinsky criterion: 4(1 + √3) ≈ 11), where β represents the Zel'dovich number (for general premixed flames, β ≈ 10), which requires Lewis number Le > 2.1. However, few experimental observations have been reported because the critical reduced Lewis number for the onset of pulsating instability is beyond what can be reached in experiments. Furthermore, the coupling with the unavoidable hydrodynamic instability limits the observation of pure pulsating instabilities in flames. Here, we describe a novel method to observe the pulsating instability. We utilize a thermoacoustic field caused by interaction between heat release and acoustic pressure fluctuations of the downward-propagating premixed flames in a tube to enhance conductive heat loss at the tube wall and radiative heat loss at the open end of the tube due to extended flame residence time by diminished flame surface area, i.e., flat flame. The thermoacoustic field allowed pure observation of the pulsating motion since the primary acoustic force suppressed the intrinsic hydrodynamic instability resulting from thermal expansion. By employing this method, we have provided new experimental observations of the pulsating instability for premixed flames. The Lewis number (i.e., Le ≈ 1.86) was less than the critical value suggested previously.
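
    Plugging β ≈ 10 into the criterion shows where the quoted Le > 2.1 comes from, and why the Le ≈ 1.86 flames studied here sit below it:

```python
import math

beta = 10.0                           # Zel'dovich number for general premixed flames
threshold = 4 * (1 + math.sqrt(3))    # Sivashinsky criterion on beta * (Le - 1)
critical_Le = 1 + threshold / beta    # Le above which pulsation sets in

print(round(threshold, 2), round(critical_Le, 2))
```

    With the thermoacoustic suppression of hydrodynamic instability, the paper reports pulsation well below this classical critical value.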

  7. Does Decision Quality (Always) Increase with the Size of Information Samples? Some Vicissitudes in Applying the Law of Large Numbers

    Science.gov (United States)

    Fiedler, Klaus; Kareev, Yaakov

    2006-01-01

    Adaptive decision making requires that contingencies between decision options and their relative assets be assessed accurately and quickly. The present research addresses the challenging notion that contingencies may be more visible from small than from large samples of observations. An algorithmic account for such a seemingly paradoxical effect…
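
    The seemingly paradoxical visibility of contingencies in small samples is easy to reproduce in a toy simulation (the two-option setup and the numbers below are my own illustration, not the authors' task):

```python
import random
random.seed(0)

# True contingency: P(success | option A) = 0.6 vs P(success | option B) = 0.4,
# so the true difference in success rates is 0.2.
def observed_diff(n):
    # Observed difference in success rates from n draws per option.
    a = sum(random.random() < 0.6 for _ in range(n)) / n
    b = sum(random.random() < 0.4 for _ in range(n)) / n
    return a - b

def frac_strong(n, threshold=0.6, trials=20_000):
    # How often does a sample of size n show a contingency at least three
    # times the true one, i.e. one that looks "clearly visible"?
    return sum(observed_diff(n) >= threshold for _ in range(trials)) / trials

small, large = frac_strong(5), frac_strong(100)
print(small, large)
```

    Small samples exaggerate the contingency far more often, which is the double edge the paper explores: contingencies are easier to "see" in small samples precisely because sampling error inflates them.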

  8. The Limits and Possibilities of International Large-Scale Assessments. Education Policy Brief. Volume 9, Number 2, Spring 2011

    Science.gov (United States)

    Rutkowski, David J.; Prusinski, Ellen L.

    2011-01-01

    The staff of the Center for Evaluation & Education Policy (CEEP) at Indiana University is often asked about how international large-scale assessments influence U.S. educational policy. This policy brief is designed to provide answers to some of the most frequently asked questions encountered by CEEP researchers concerning the three most popular…

  9. Exploration of large, rare copy number variants associated with psychiatric and neurodevelopmental disorders in individuals with anorexia nervosa

    NARCIS (Netherlands)

    Yilmaz, Zeynep; Szatkiewicz, Jin P; Crowley, James J; Ancalade, NaEshia; Brandys, Marek K; van Elburg, Annemarie; de Kovel, Carolien G F; Adan, Roger A H; Hinney, Anke; Hebebrand, Johannes; Gratacos, Monica; Fernandez-Aranda, Fernando; Escaramis, Georgia; Gonzalez, Juan R; Estivill, Xavier; Zeggini, Eleftheria; Sullivan, Patrick F; Bulik, Cynthia M; Genetic Consortium for Anorexia Nervosa, Wellcome Trust Case Control Consortium 3

    Anorexia nervosa (AN) is a serious and heritable psychiatric disorder. To date, studies of copy number variants (CNVs) have been limited and inconclusive because of small sample sizes. We conducted a case-only genome-wide CNV survey in 1983 female AN cases included in the Genetic Consortium for

  10. Investigation into impacts of large numbers of visitors on the collection environment at Our Lord in the Attic

    NARCIS (Netherlands)

    Maekawa, S.; Ankersmit, Bart; Neuhaus, E.; Schellen, H.L.; Beltran, V.; Boersma, F.; Padfield, T.; Borchersen, K.

    2007-01-01

    Our Lord in the Attic is a historic house museum located in the historic center of Amsterdam, The Netherlands. It is a typical 17th century Dutch canal house, with a hidden Church in the attic. The Church was used regularly until 1887 when the house became a museum. The annual total number of

  11. A Few Large Roads or Many Small Ones? How to Accommodate Growth in Vehicle Numbers to Minimise Impacts on Wildlife

    Science.gov (United States)

    Rhodes, Jonathan R.; Lunney, Daniel; Callaghan, John; McAlpine, Clive A.

    2014-01-01

    Roads and vehicular traffic are among the most pervasive of threats to biodiversity because they fragment habitat, increase mortality and open up new areas for the exploitation of natural resources. However, the number of vehicles on roads is increasing rapidly and this is likely to continue into the future, putting increased pressure on wildlife populations. Consequently, a major challenge is the planning of road networks to accommodate increased numbers of vehicles, while minimising impacts on wildlife. Nonetheless, we currently have few principles for guiding decisions on road network planning to reduce impacts on wildlife in real landscapes. We addressed this issue by developing an approach for quantifying the impact on wildlife mortality of two alternative mechanisms for accommodating growth in vehicle numbers: (1) increasing the number of roads, and (2) increasing traffic volumes on existing roads. We applied this approach to a koala (Phascolarctos cinereus) population in eastern Australia and quantified the relative impact of each strategy on mortality. We show that, in most cases, accommodating growth in traffic through increases in volumes on existing roads has a lower impact than building new roads. An exception is where the existing road network has very low road density, but very high traffic volumes on each road. These findings have important implications for how we design road networks to reduce their impacts on biodiversity. PMID:24646891

  12. A few large roads or many small ones? How to accommodate growth in vehicle numbers to minimise impacts on wildlife.

    Directory of Open Access Journals (Sweden)

    Jonathan R Rhodes

    Full Text Available Roads and vehicular traffic are among the most pervasive of threats to biodiversity because they fragment habitat, increase mortality and open up new areas for the exploitation of natural resources. However, the number of vehicles on roads is increasing rapidly and this is likely to continue into the future, putting increased pressure on wildlife populations. Consequently, a major challenge is the planning of road networks to accommodate increased numbers of vehicles, while minimising impacts on wildlife. Nonetheless, we currently have few principles for guiding decisions on road network planning to reduce impacts on wildlife in real landscapes. We addressed this issue by developing an approach for quantifying the impact on wildlife mortality of two alternative mechanisms for accommodating growth in vehicle numbers: (1) increasing the number of roads, and (2) increasing traffic volumes on existing roads. We applied this approach to a koala (Phascolarctos cinereus) population in eastern Australia and quantified the relative impact of each strategy on mortality. We show that, in most cases, accommodating growth in traffic through increases in volumes on existing roads has a lower impact than building new roads. An exception is where the existing road network has very low road density, but very high traffic volumes on each road. These findings have important implications for how we design road networks to reduce their impacts on biodiversity.

  14. Small genomes and large seeds: chromosome numbers, genome size and seed mass in diploid Aesculus species (Sapindaceae)

    Czech Academy of Sciences Publication Activity Database

    Krahulcová, Anna; Trávníček, Pavel; Krahulec, František; Rejmánek, M.

    2017-01-01

    Roč. 119, č. 6 (2017), s. 957-964 ISSN 0305-7364 Institutional support: RVO:67985939 Keywords : Aesculus * chromosome number * genome size * phylogeny * seed mass Subject RIV: EF - Botanics OBOR OECD: Plant sciences, botany Impact factor: 4.041, year: 2016

  15. Comparing rapid methods for detecting Listeria in seafood and environmental samples using the most probable number (MPN) technique.

    Science.gov (United States)

    Cruz, Cristina D; Win, Jessicah K; Chantarachoti, Jiraporn; Mutukumira, Anthony N; Fletcher, Graham C

    2012-02-15

    The standard Bacteriological Analytical Manual (BAM) protocol for detecting Listeria in food and on environmental surfaces takes about 96 h. Some studies indicate that rapid methods, which produce results within 48 h, may be as sensitive and accurate as the culture protocol. As they only give presence/absence results, it can be difficult to compare the accuracy of results generated. We used the Most Probable Number (MPN) technique to evaluate the performance and detection limits of six rapid kits for detecting Listeria in seafood and on an environmental surface compared with the standard protocol. Three seafood products and an environmental surface were inoculated with similar known cell concentrations of Listeria and analyzed according to the manufacturers' instructions. The MPN was estimated using the MPN-BAM spreadsheet. For the seafood products no differences were observed among the rapid kits and efficiency was similar to the BAM method. On the environmental surface the BAM protocol had a higher recovery rate (sensitivity) than any of the rapid kits tested. Clearview™, Reveal®, TECRA® and VIDAS® LDUO detected the cells but only at high concentrations (>10² CFU/10 cm²). Two kits (VIP™ and Petrifilm™) failed to detect 10⁴ CFU/10 cm². The MPN method was a useful tool for comparing the results generated by these presence/absence test kits. There remains a need to develop a rapid and sensitive method for detecting Listeria in environmental samples that performs as well as the BAM protocol, since none of the rapid tests used in this study achieved a satisfactory result. Copyright © 2011 Elsevier B.V. All rights reserved.
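    The MPN referenced above is a maximum-likelihood estimate computed from a dilution series; the MPN-BAM spreadsheet automates this. As a rough illustration only (this is a generic sketch assuming a Poisson inoculation model, not the authors' spreadsheet; the dilution volumes and grid bounds are illustrative):

    ```python
    import math

    def mpn_estimate(volumes, n_tubes, n_positive):
        """Maximum-likelihood Most Probable Number (organisms per unit volume).

        volumes[i]    -- sample volume inoculated per tube at dilution i
        n_tubes[i]    -- number of tubes at dilution i
        n_positive[i] -- tubes at dilution i that showed growth
        """
        def log_likelihood(m):
            ll = 0.0
            for v, n, p in zip(volumes, n_tubes, n_positive):
                p_pos = 1.0 - math.exp(-m * v)  # P(tube positive | density m)
                if p > 0:
                    ll += p * math.log(p_pos)
                ll -= (n - p) * m * v           # log P(all n-p tubes negative)
            return ll

        # simple grid search over density on a log scale (1e-3 .. ~1e5)
        candidates = (10 ** (e / 100.0) for e in range(-300, 500))
        return max(candidates, key=log_likelihood)

    # classic 3-tube series: 10, 1 and 0.1 mL portions with 3/1/0 positives;
    # the tabulated MPN for this pattern is about 0.43 organisms/mL (43 per 100 mL)
    est = mpn_estimate([10, 1, 0.1], [3, 3, 3], [3, 1, 0])
    ```

    The grid search is deliberately crude; the likelihood is smooth and unimodal here, so a finer grid or a proper optimizer would refine the estimate.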

  16. Comparative analyses of gene copy number and mRNA expression in GBM tumors and GBM xenografts

    Energy Technology Data Exchange (ETDEWEB)

    Hodgson, J. Graeme; Yeh, Ru-Fang; Ray, Amrita; Wang, Nicholas J.; Smirnov, Ivan; Yu, Mamie; Hariono, Sujatmi; Silber, Joachim; Feiler, Heidi S.; Gray, Joe W.; Spellman, Paul T.; Vandenberg, Scott R.; Berger, Mitchel S.; James, C. David

    2009-04-03

    Development of model systems that recapitulate the molecular heterogeneity observed among glioblastoma multiforme (GBM) tumors will expedite the testing of targeted molecular therapeutic strategies for GBM treatment. In this study, we profiled DNA copy number and mRNA expression in 21 independent GBM tumor lines maintained as subcutaneous xenografts (GBMX), and compared GBMX molecular signatures to those observed in GBM clinical specimens derived from the Cancer Genome Atlas (TCGA). The predominant copy number signature in both tumor groups was defined by chromosome-7 gain/chromosome-10 loss, a poor-prognosis genetic signature. We also observed, at frequencies similar to that detected in TCGA GBM tumors, genomic amplification and overexpression of known GBM oncogenes, such as EGFR, MDM2, CDK6, and MYCN, and novel genes, including NUP107, SLC35E3, MMP1, MMP13, and DDX1. The transcriptional signature of GBMX tumors, which was stable over multiple subcutaneous passages, was defined by overexpression of genes involved in M phase, DNA replication, and chromosome organization (MRC) and was highly similar to the poor-prognosis mitosis and cell-cycle module (MCM) in GBM. Assessment of gene expression in TCGA-derived GBMs revealed overexpression of MRC cancer genes AURKB, BIRC5, CCNB1, CCNB2, CDC2, CDK2, and FOXM1, which form a transcriptional network important for G2/M progression and/or checkpoint activation. Our study supports propagation of GBM tumors as subcutaneous xenografts as a useful approach for sustaining key molecular characteristics of patient tumors, and highlights therapeutic opportunities conferred by this GBMX tumor panel for testing targeted therapeutic strategies for GBM treatment.

  17. Precise large deviations of aggregate claims in a size-dependent renewal risk model with stopping time claim-number process

    Directory of Open Access Journals (Sweden)

    Shuo Zhang

    2017-04-01

    Full Text Available In this paper, we consider a size-dependent renewal risk model with stopping time claim-number process. In this model, we do not make any assumption on the dependence structure of claim sizes and inter-arrival times. We study large deviations of the aggregate amount of claims. For the subexponential heavy-tailed case, we obtain a precise large-deviation formula; our method substantially relies on a martingale for the structure of our models.

  18. Hungarian Marfan family with large FBN1 deletion calls attention to copy number variation detection in the current NGS era

    Science.gov (United States)

    Ágg, Bence; Meienberg, Janine; Kopps, Anna M.; Fattorini, Nathalie; Stengl, Roland; Daradics, Noémi; Pólos, Miklós; Bors, András; Radovits, Tamás; Merkely, Béla; De Backer, Julie; Szabolcs, Zoltán; Mátyás, Gábor

    2018-01-01

    Copy number variations (CNVs) comprise about 10% of reported disease-causing mutations in Mendelian disorders. Nevertheless, pathogenic CNVs may have been under-detected due to the lack or insufficient use of appropriate detection methods. In this report, on the example of the diagnostic odyssey of a patient with Marfan syndrome (MFS) harboring a hitherto unreported 32-kb FBN1 deletion, we highlight the need for and the feasibility of testing for CNVs (>1 kb) in Mendelian disorders in the current next-generation sequencing (NGS) era. PMID:29850152

  19. Numerical simulation of nonstationary dissipative structures in 3D double-diffusive convection at large Rayleigh numbers

    Science.gov (United States)

    Kozitskiy, Sergey

    2018-05-01

    Numerical simulation of nonstationary dissipative structures in 3D double-diffusive convection has been performed by using the previously derived system of complex Ginzburg-Landau type amplitude equations, valid in a neighborhood of Hopf bifurcation points. Simulation has shown that the state of spatiotemporal chaos develops in the system. It has the form of nonstationary structures that depend on the parameters of the system. The shape of structures does not depend on the initial conditions, and a limited number of spectral components participate in their formation.

  20. Evaluation of Large-Scale Public-Sector Reforms: A Comparative Analysis

    Science.gov (United States)

    Breidahl, Karen N.; Gjelstrup, Gunnar; Hansen, Hanne Foss; Hansen, Morten Balle

    2017-01-01

    Research on the evaluation of large-scale public-sector reforms is rare. This article sets out to fill that gap in the evaluation literature and argues that it is of vital importance since the impact of such reforms is considerable and they change the context in which evaluations of other and more delimited policy areas take place. In our…

  1. Simulation of droplet impact onto a deep pool for large Froude numbers in different open-source codes

    Science.gov (United States)

    Korchagova, V. N.; Kraposhin, M. V.; Marchevsky, I. K.; Smirnova, E. V.

    2017-11-01

    A droplet impact on a deep pool can induce macro-scale or micro-scale effects like a crown splash, a high-speed jet, formation of secondary droplets or thin liquid films, etc. These effects depend on the diameter and velocity of the droplet, liquid properties, effects of external forces and other factors that can be captured by a set of dimensionless criteria. In the present research, we considered a droplet and a pool consisting of the same viscous incompressible liquid. We took surface tension into account but neglected gravity forces. We used two open-source codes (OpenFOAM and Gerris) for our computations. We review the possibility of using these codes for simulation of processes in free-surface flows that may take place after a droplet impact on the pool. Both codes simulated several modes of droplet impact. We estimated the effect of liquid properties with respect to the Reynolds number and Weber number. Numerical simulation enabled us to find boundaries between different modes of droplet impact on a deep pool and to plot corresponding mode maps. The ratio of liquid density to that of the surrounding gas induces several changes in mode maps. Increasing this density ratio suppresses the crown splash.
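    The mode maps discussed above are organized by dimensionless groups: for a droplet of density ρ, diameter D and impact speed U, the Reynolds number compares inertia with viscosity and the Weber number compares inertia with surface tension. A quick sketch of the definitions (the water-droplet values below are illustrative, not taken from the paper):

    ```python
    def reynolds(rho, U, D, mu):
        """Re = rho*U*D/mu -- ratio of inertial to viscous forces."""
        return rho * U * D / mu

    def weber(rho, U, D, sigma):
        """We = rho*U**2*D/sigma -- ratio of inertial to surface-tension forces."""
        return rho * U ** 2 * D / sigma

    # a 2 mm water droplet impacting at 3 m/s
    Re = reynolds(1000.0, 3.0, 2e-3, 1.0e-3)   # -> 6000.0
    We = weber(1000.0, 3.0, 2e-3, 0.072)       # -> 250.0
    ```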

  2. TO BE OR NOT TO BE: AN INFORMATIVE NON-SYMBOLIC NUMERICAL MAGNITUDE PROCESSING STUDY ABOUT SMALL VERSUS LARGE NUMBERS IN INFANTS

    Directory of Open Access Journals (Sweden)

    Annelies CEULEMANS

    2014-03-01

    Full Text Available Many studies tested the association between numerical magnitude processing and mathematical achievement with conflicting findings reported for individuals with mathematical learning disorders. Some of the inconsistencies might be explained by the number of non-symbolic stimuli or dot collections used in studies. It has been hypothesized that there is an object-file system for ‘small’ and an analogue magnitude system for ‘large’ numbers. This two-system account has been supported by the set size limit of the object-file system (three items). A boundary was defined, accordingly, categorizing numbers below four as ‘small’ and from four and above as ‘large’. However, data on ‘small’ number processing and on the ‘boundary’ between small and large numbers are missing. In this contribution we provide data from infants discriminating between the number sets 4 vs. 8 and 1 vs. 4, both containing the number four combined with a large and a small number respectively. Participants were 25 and 26 full term 9-month-olds for 4 vs. 8 and 1 vs. 4 respectively. The stimuli (dots) were controlled for continuous variables. Eye-tracking was combined with the habituation paradigm. The results showed that the infants were successful in discriminating 1 from 4, but failed to discriminate 4 from 8 dots. This finding supports the assumption of the number four as a ‘small’ number and enlarges the object-file system’s limit. This study might help to explain inconsistencies in studies. Moreover, the information may be useful in answering parents’ questions about challenges that vulnerable children with number processing problems, such as children with mathematical learning disorders, might encounter. In addition, the study might give some information on the stimuli that can be used to effectively foster children’s magnitude processing skills.

  3. How to implement a quantum algorithm on a large number of qubits by controlling one central qubit

    Science.gov (United States)

    Zagoskin, Alexander; Ashhab, Sahel; Johansson, J. R.; Nori, Franco

    2010-03-01

    It is desirable to minimize the number of control parameters needed to perform a quantum algorithm. We show that, under certain conditions, an entire quantum algorithm can be efficiently implemented by controlling a single central qubit in a quantum computer. We also show that the different system parameters do not need to be designed accurately during fabrication. They can be determined through the response of the central qubit to external driving. Our proposal is well suited for hybrid architectures that combine microscopic and macroscopic qubits. More details can be found in: A.M. Zagoskin, S. Ashhab, J.R. Johansson, F. Nori, Quantum two-level systems in Josephson junctions as naturally formed qubits, Phys. Rev. Lett. 97, 077001 (2006); and S. Ashhab, J.R. Johansson, F. Nori, Rabi oscillations in a qubit coupled to a quantum two-level system, New J. Phys. 8, 103 (2006).

  4. Instability and associated roll structure of Marangoni convection in high Prandtl number liquid bridge with large aspect ratio

    Science.gov (United States)

    Yano, T.; Nishino, K.; Kawamura, H.; Ueno, I.; Matsumoto, S.

    2015-02-01

    This paper reports the experimental results on the instability and associated roll structures (RSs) of Marangoni convection in liquid bridges formed under the microgravity environment on the International Space Station. The geometry of interest is high aspect ratio (AR = height/diameter ≥ 1.0) liquid bridges of high Prandtl number fluids (Pr = 67 and 207) suspended between coaxial disks heated differentially. The unsteady flow field and associated RSs were revealed with the three-dimensional particle tracking velocimetry. It is found that the flow field after the onset of instability exhibits oscillations with azimuthal mode number m = 1 and associated RSs traveling in the axial direction. The RSs travel in the same direction as the surface flow (co-flow direction) for 1.00 ≤ AR ≤ 1.25 while they travel in the opposite direction (counter-flow direction) for AR ≥ 1.50, thus showing the change of traveling directions with AR. This traveling direction for AR ≥ 1.50 is reversed to the co-flow direction when the temperature difference between the disks is increased to the condition far beyond the critical one. This change of traveling directions is accompanied by the increase of the oscillation frequency. The characteristics of the RSs for AR ≥ 1.50, such as the azimuthal mode of oscillation, the dimensionless oscillation frequency, and the traveling direction, are in reasonable agreement with those of the previous sounding rocket experiment for AR = 2.50 and those of the linear stability analysis of an infinite liquid bridge.

  5. Large impacted upper ureteral calculi: A comparative study between retrograde ureterolithotripsy and percutaneous antegrade ureterolithotripsy in the modified lateral position.

    Science.gov (United States)

    Moufid, Kamal; Abbaka, Najib; Touiti, Driss; Adermouch, Latifa; Amine, Mohamed; Lezrek, Mohammed

    2013-07-01

    The treatment for patients with large impacted proximal ureteral stones remains controversial, especially at institutions with limited resources. The aim of this study is to compare and evaluate the outcomes and complications of two main treatment procedures for impacted proximal ureteral calculi: retrograde ureterolithotripsy (URS) and percutaneous antegrade ureterolithotripsy (Perc-URS). Our inclusion criteria were solitary, radiopaque calculi, >15 mm in size, in a functioning renal unit. Only those patients in whom the attempt at passing a guidewire or catheter beyond the calculus failed were included in this study. Between January 2007 and July 2011, a total of 52 patients (13 women and 39 men) with large impacted upper-ureteral calculi >15 mm and meeting the inclusion criteria were selected. Of these, Perc-URS was done in 22 patients (group 1) while retrograde ureteroscopy was performed in 30 patients (group 2). We analyzed operative time, incidence of complications during and after surgery, the number of postoperative recovery days, median total costs associated per patient per procedure, and the stone-free rate immediately after 5 days and after 1 month. Bivariate analysis used the Student t-test and the Mann-Whitney test to compare two means and Chi-square and Fisher's exact tests to compare two percentages. The significance level was set at 0.05. The mean age was 42.3 years (range 22-69). The mean stone sizes were 34 ± 1.2 and 29.3 ± 1.8 mm in groups 1 and 2, respectively. In the Perc-URS group, 21 patients (95.45%) had complete calculus clearance through a single tract in one session of percutaneous surgery, whereas in the URS group, only 20 patients (66.7%) had complete stone clearance (P = 0.007). The mean operative time was higher in the Perc-URS group compared to group 2 (66.5 ± 21.7 vs. 52.13 ± 17.3 min, respectively; P = 0.013). Complications encountered in group 1 included transient postoperative fever (2 pts) and simple urine outflow (2

  6. Comparative Visual Analysis of Large Customer Feedback Based on Self-Organizing Sentiment Maps

    OpenAIRE

    Janetzko, Halldór; Jäckle, Dominik; Schreck, Tobias

    2013-01-01

    Textual customer feedback data, e.g., received by surveys or incoming customer email notifications, can be a rich source of information with many applications in Customer Relationship Management (CRM). Nevertheless, to date this valuable source of information is often neglected in practice, as service managers would have to read manually through potentially large amounts of feedback text documents to extract actionable information. As in many cases, a purely manual approach is not feasible, w...

  7. The use of mass spectrometry for analysing metabolite biomarkers in epidemiology: methodological and statistical considerations for application to large numbers of biological samples.

    Science.gov (United States)

    Lind, Mads V; Savolainen, Otto I; Ross, Alastair B

    2016-08-01

    Data quality is critical for epidemiology, and as scientific understanding expands, the range of data available for epidemiological studies and the types of tools used for measurement have also expanded. It is essential for the epidemiologist to have a grasp of the issues involved with different measurement tools. One tool that is increasingly being used for measuring biomarkers in epidemiological cohorts is mass spectrometry (MS), because of the high specificity and sensitivity of MS-based methods and the expanding range of biomarkers that can be measured. Further, the ability of MS to quantify many biomarkers simultaneously is advantageous compared with single-biomarker methods. However, as with all methods used to measure biomarkers, there are a number of pitfalls to consider which may have an impact on results when used in epidemiology. In this review we discuss the use of MS for biomarker analyses, focusing on metabolites and their application and potential issues related to large-scale epidemiology studies, the use of MS "omics" approaches for biomarker discovery and how MS-based results can be used for increasing biological knowledge gained from epidemiological studies. Better understanding of the possibilities and possible problems related to MS-based measurements will help the epidemiologist in their discussions with analytical chemists and lead to the use of the most appropriate statistical tools for these data.

  8. Improved estimation of the noncentrality parameter distribution from a large number of t-statistics, with applications to false discovery rate estimation in microarray data analysis.

    Science.gov (United States)

    Qu, Long; Nettleton, Dan; Dekkers, Jack C M

    2012-12-01

    Given a large number of t-statistics, we consider the problem of approximating the distribution of noncentrality parameters (NCPs) by a continuous density. This problem is closely related to the control of false discovery rates (FDR) in massive hypothesis testing applications, e.g., microarray gene expression analysis. Our methodology is similar to, but improves upon, the existing approach by Ruppert, Nettleton, and Hwang (2007, Biometrics, 63, 483-495). We provide parametric, nonparametric, and semiparametric estimators for the distribution of NCPs, as well as estimates of the FDR and local FDR. In the parametric situation, we assume that the NCPs follow a distribution that leads to an analytically available marginal distribution for the test statistics. In the nonparametric situation, we use convex combinations of basis density functions to estimate the density of the NCPs. A sequential quadratic programming procedure is developed to maximize the penalized likelihood. The smoothing parameter is selected with the approximate network information criterion. A semiparametric estimator is also developed to combine both parametric and nonparametric fits. Simulations show that, under a variety of situations, our density estimates are closer to the underlying truth and our FDR estimates are improved compared with alternative methods. Data-based simulations and the analyses of two microarray datasets are used to evaluate the performance in realistic situations. © 2012, The International Biometric Society.
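    The FDR estimators discussed above build on standard multiple-testing corrections. As a point of reference only (this is the classic Benjamini-Hochberg step-up adjustment, not the authors' NCP-based estimator), adjusted p-values can be computed as:

    ```python
    def bh_adjust(pvals):
        """Benjamini-Hochberg adjusted p-values (step-up FDR control).

        Each adjusted value is the running minimum, taken from the largest
        p-value downward, of p * m / rank, so monotonicity is preserved.
        """
        m = len(pvals)
        order = sorted(range(m), key=lambda i: pvals[i])
        adjusted = [0.0] * m
        running_min = 1.0
        for rank in range(m, 0, -1):        # walk from largest p to smallest
            i = order[rank - 1]
            running_min = min(running_min, pvals[i] * m / rank)
            adjusted[i] = running_min
        return adjusted

    # e.g. bh_adjust([0.01, 0.04, 0.03, 0.002])
    # gives approximately [0.02, 0.04, 0.04, 0.008]
    ```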

  9. Comparing Internet Probing Methodologies Through an Analysis of Large Dynamic Graphs

    Science.gov (United States)

    2014-06-01

    Several probing methodologies are compared, including DIMES, IPlane, Ark IPv4 All Prefix /24 and, recently, the NPS probing methodology. The NPS probing methodology is different from the others because, for each trace, a history of the forward interface-level path and the time to send and acknowledge are available to analyze. However, traceroute may not return

  10. A LARGE NUMBER OF z > 6 GALAXIES AROUND A QSO AT z = 6.43: EVIDENCE FOR A PROTOCLUSTER?

    International Nuclear Information System (INIS)

    Utsumi, Yousuke; Kashikawa, Nobunari; Miyazaki, Satoshi; Komiyama, Yutaka; Goto, Tomotsugu; Furusawa, Hisanori; Overzier, Roderik

    2010-01-01

    QSOs have been thought to be important for tracing highly biased regions in the early universe, from which the present-day massive galaxies and galaxy clusters formed. While overdensities of star-forming galaxies have been found around QSOs at 2 < z < 5, the environment of QSOs at z > 6 is less clear. Previous studies with the Hubble Space Telescope (HST) have reported the detection of small excesses of faint dropout galaxies in some QSO fields, but these surveys probed a relatively small region surrounding the QSOs. To overcome this problem, we have observed the most distant QSO at z = 6.4 using the large field of view of the Suprime-Cam (34' x 27'). Newly installed red-sensitive fully depleted CCDs allowed us to select Lyman break galaxies (LBGs) at z ∼ 6.4 more efficiently. We found seven LBGs in the QSO field, whereas only one exists in a comparison field. The significance of this apparent excess is difficult to quantify without spectroscopic confirmation and additional control fields. The Poisson probability to find seven objects when one expects four is ∼10%, while the probability to find seven objects in one field and only one in the other is less than 0.4%, suggesting that the QSO field is significantly overdense relative to the control field. These conclusions are supported by a comparison with a cosmological smoothed particle hydrodynamics simulation which includes the higher order clustering of galaxies. We find some evidence that the LBGs are distributed in a ring-like shape centered on the QSO with a radius of ∼3 Mpc. There are no candidate LBGs within 2 Mpc from the QSO, i.e., galaxies are clustered around the QSO but appear to avoid the very center. These results suggest that the QSO is embedded in an overdense region when defined on a sufficiently large scale (i.e., larger than an HST/ACS pointing). This suggests that the QSO was indeed born in a massive halo. The central deficit of galaxies may indicate that (1) the strong UV radiation from the QSO suppressed galaxy formation in
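    The ∼10% figure quoted above is a Poisson tail probability and is straightforward to check; a minimal sketch (the expectation of four objects is taken from the abstract):

    ```python
    import math

    def poisson_sf(k, lam):
        """P(X >= k) for X ~ Poisson(lam): one minus the CDF up to k-1."""
        cdf = sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k))
        return 1.0 - cdf

    # chance of finding >= 7 galaxies when 4 are expected
    p = poisson_sf(7, 4.0)   # ~0.11, consistent with the quoted ~10%
    ```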

  11. Effect of the Hartmann number on phase separation controlled by magnetic field for binary mixture system with large component ratio

    Science.gov (United States)

    Heping, Wang; Xiaoguang, Li; Duyang, Zang; Rui, Hu; Xingguo, Geng

    2017-11-01

    This paper presents an exploration of phase separation in a magnetic field using a lattice Boltzmann method (LBM) coupled with magnetohydrodynamics (MHD). The left vertical wall was kept at a constant magnetic field. Simulations were conducted with a strong magnetic field to enhance phase separation and increase the size of the separated phases. The focus was on the effect of magnetic intensity, characterized by the Hartmann number (Ha), on the phase separation properties. The numerical investigation was carried out for different governing parameters, namely Ha and the component ratio of the mixed liquid. The morphological evolutions of phase separation in different magnetic fields were demonstrated. The patterns showed that slanted elliptical phases were created by increasing Ha, due to the formation and growth of the magnetic torque and force. The growth kinetics of magnetic phase separation were characterized by the spherically averaged structure factor and the ratio of the separated phases to the total system. The results indicate that an increase in Ha can increase the average size of the separated phases and accelerate the spinodal decomposition and domain growth stages. Especially for larger component ratios of the mixed phases, the degree of separation was also significantly improved by increasing the magnetic intensity. These numerical results provide guidance for setting the optimum conditions for phase separation induced by a magnetic field.
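    The Hartmann number used above measures the ratio of electromagnetic (Lorentz) to viscous forces. A small sketch of the standard definition (the fluid properties below are illustrative liquid-metal-like values, not those of the paper's binary mixture):

    ```python
    import math

    def hartmann(B, L, sigma_e, mu):
        """Ha = B * L * sqrt(sigma_e / mu).

        B       -- magnetic flux density (T)
        L       -- characteristic length (m)
        sigma_e -- electrical conductivity (S/m)
        mu      -- dynamic viscosity (Pa*s)
        """
        return B * L * math.sqrt(sigma_e / mu)

    # e.g. a conducting liquid layer: B = 1 T, L = 1 cm,
    # sigma_e = 1e6 S/m, mu = 1e-3 Pa*s
    Ha = hartmann(1.0, 0.01, 1.0e6, 1.0e-3)   # ~316
    ```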

  12. CoVennTree: A new method for the comparative analysis of large datasets

    Directory of Open Access Journals (Sweden)

    Steffen C. Lott

    2015-02-01

    Full Text Available The visualization of massive datasets, such as those resulting from comparative metatranscriptome analyses or the analysis of microbial population structures using ribosomal RNA sequences, is a challenging task. We developed a new method called CoVennTree (Comparative weighted Venn Tree) that simultaneously compares up to three multifarious datasets by aggregating and propagating information from the bottom to the top level and produces a graphical output in Cytoscape. With the introduction of weighted Venn structures, the contents and relationships of various datasets can be correlated and simultaneously aggregated without losing information. We demonstrate the suitability of this approach using a dataset of 16S rDNA sequences obtained from microbial populations at three different depths of the Gulf of Aqaba in the Red Sea. CoVennTree has been integrated into the Galaxy ToolShed and can be directly downloaded and integrated into the user instance.

  13. Technology interactions among low-carbon energy technologies: What can we learn from a large number of scenarios?

    International Nuclear Information System (INIS)

    McJeon, Haewon C.; Clarke, Leon; Kyle, Page; Wise, Marshall; Hackbarth, Andrew; Bryant, Benjamin P.; Lempert, Robert J.

    2011-01-01

    Advanced low-carbon energy technologies can substantially reduce the cost of stabilizing atmospheric carbon dioxide concentrations. Understanding the interactions between these technologies and their impact on the costs of stabilization can help inform energy policy decisions. Many previous studies have addressed this challenge by exploring a small number of representative scenarios that represent particular combinations of future technology developments. This paper uses a combinatorial approach in which scenarios are created for all combinations of the technology development assumptions that underlie a smaller, representative set of scenarios. We estimate stabilization costs for 768 runs of the Global Change Assessment Model (GCAM), based on 384 different combinations of assumptions about the future performance of technologies and two stabilization goals. Graphical depiction of the distribution of stabilization costs provides first-order insights about the full data set and individual technologies. We apply a formal scenario discovery method to obtain more nuanced insights about the combinations of technology assumptions most strongly associated with high-cost outcomes. Many of the fundamental insights from traditional representative scenario analysis still hold under this comprehensive combinatorial analysis. For example, the importance of carbon capture and storage (CCS) and the substitution effect among supply technologies are consistently demonstrated. The results also provide more clarity regarding insights not easily demonstrated through representative scenario analysis. For example, they show more clearly how certain supply technologies can provide a hedge against high stabilization costs, and that aggregate end-use efficiency improvements deliver relatively consistent stabilization cost reductions. 
Furthermore, the results indicate that a lack of CCS options combined with lower technological advances in the buildings sector or the transportation sector is…

  14. The Use of Illustrations in Large-Scale Science Assessment: A Comparative Study

    Science.gov (United States)

    Wang, Chao

    2012-01-01

    This dissertation addresses the complexity of test illustrations design across cultures. More specifically, it examines how the characteristics of illustrations used in science test items vary across content areas, assessment programs, and cultural origins. It compares a total of 416 Grade 8 illustrated items from the areas of earth science, life…

  15. PISA - An Example of the Use and Misuse of Large-Scale Comparative Tests

    DEFF Research Database (Denmark)

    Dolin, Jens

    2007-01-01

    The article will analyse PISA - particularly the part dealing with science - as an example of a major comparative evaluation. PISA will first be described and then analysed on the basis of test theory, which will address some detailed technical aspects of the test as well as the broader issue...

  16. Comparing the hierarchy of author given tags and repository given tags in a large document archive

    Science.gov (United States)

    Tibély, Gergely; Pollner, Péter; Palla, Gergely

    2016-10-01

Folksonomies - large databases arising from collaborative tagging of items by independent users - are becoming an increasingly important way of categorizing information. In these systems users can tag items with free words, resulting in a tripartite item-tag-user network. Although there are no prescribed relations between tags, the way users think about the different categories presumably has some built-in hierarchy, in which more specialized concepts are descendants of more general categories. Several applications would benefit from knowledge of this hierarchy. Here we apply a recent method to check the differences and similarities of hierarchies resulting from tags given by independent individuals and from tags given by a centrally managed repository system. The results from our method showed substantial differences between the lower parts of the hierarchies and, in contrast, a relatively high similarity at the top of the hierarchies.

  17. Comparative genomic hybridizations reveal absence of large Streptomyces coelicolor genomic islands in Streptomyces lividans

    OpenAIRE

    Jayapal, Karthik P; Lian, Wei; Glod, Frank; Sherman, David H; Hu, Wei-Shou

    2007-01-01

    Abstract Background The genomes of Streptomyces coelicolor and Streptomyces lividans bear a considerable degree of synteny. While S. coelicolor is the model streptomycete for studying antibiotic synthesis and differentiation, S. lividans is almost exclusively considered as the preferred host, among actinomycetes, for cloning and expression of exogenous DNA. We used whole genome microarrays as a comparative genomics tool for identifying the subtle differences between these two chromosomes. Res...

  18. SVA retrotransposon insertion-associated deletion represents a novel mutational mechanism underlying large genomic copy number changes with non-recurrent breakpoints

    Science.gov (United States)

    2014-01-01

    Background Genomic disorders are caused by copy number changes that may exhibit recurrent breakpoints processed by nonallelic homologous recombination. However, region-specific disease-associated copy number changes have also been observed which exhibit non-recurrent breakpoints. The mechanisms underlying these non-recurrent copy number changes have not yet been fully elucidated. Results We analyze large NF1 deletions with non-recurrent breakpoints as a model to investigate the full spectrum of causative mechanisms, and observe that they are mediated by various DNA double strand break repair mechanisms, as well as aberrant replication. Further, two of the 17 NF1 deletions with non-recurrent breakpoints, identified in unrelated patients, occur in association with the concomitant insertion of SINE/variable number of tandem repeats/Alu (SVA) retrotransposons at the deletion breakpoints. The respective breakpoints are refractory to analysis by standard breakpoint-spanning PCRs and are only identified by means of optimized PCR protocols designed to amplify across GC-rich sequences. The SVA elements are integrated within SUZ12P intron 8 in both patients, and were mediated by target-primed reverse transcription of SVA mRNA intermediates derived from retrotranspositionally active source elements. Both SVA insertions occurred during early postzygotic development and are uniquely associated with large deletions of 1 Mb and 867 kb, respectively, at the insertion sites. Conclusions Since active SVA elements are abundant in the human genome and the retrotranspositional activity of many SVA source elements is high, SVA insertion-associated large genomic deletions encompassing many hundreds of kilobases could constitute a novel and as yet under-appreciated mechanism underlying large-scale copy number changes in the human genome. PMID:24958239

  19. A Comparative Study on Controllers for Improving Transient Stability of DFIG Wind Turbines During Large Disturbances

    Directory of Open Access Journals (Sweden)

    Minh Quan Duong

    2018-02-01

Full Text Available Under power system short-circuits, the Doubly-Fed Induction Generator (DFIG) Wind Turbines (WT) are required to be equipped with crowbar protections to preserve the lifetime of power electronics devices. When the crowbar is switched on, the rotor windings are short-circuited. In this case, the DFIG behaves like a squirrel-cage induction generator (SCIG) and can absorb reactive power, which can affect the power system. A DFIG-based fault-ride-through (FRT) scheme with crowbar, rotor-side and grid-side converters has recently been proposed for improving the transient stability: in particular, a hybrid cascade Fuzzy-PI-based controlling technique has been demonstrated to be able to control the Insulated Gate Bipolar Transistor (IGBT)-based frequency converter in order to enhance the transient stability. The performance of this hybrid control scheme is analyzed here and compared to other techniques, under a three-phase fault condition on a single machine connected to the grid. In particular, the transient operation of the system is investigated by comparing the performance of the hybrid system with conventional proportional-integral and fuzzy logic controllers, respectively. The system validation is carried out in Simulink, confirming the effectiveness of the coordinated advanced fuzzy logic control.
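
    The abstract benchmarks a hybrid Fuzzy-PI scheme against a conventional proportional-integral controller. As background for that comparison, a minimal discrete PI loop of the kind used in converter control can be sketched as follows; the gains, plant model and setpoint are illustrative assumptions, not values from the paper:

```python
# Minimal discrete PI controller sketch; gains and plant are illustrative,
# not taken from the paper's DFIG converter model.
class PIController:
    def __init__(self, kp, ki, dt, out_min=-2.0, out_max=2.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        out = self.kp * error + self.ki * self.integral
        # Simple anti-windup: on saturation, undo the integration step
        # and clamp the output to the actuator limits.
        if not (self.out_min <= out <= self.out_max):
            self.integral -= error * self.dt
            out = max(self.out_min, min(self.out_max, out))
        return out

# Drive a first-order lag plant (time constant 1 s) toward a setpoint of 1.0.
pi = PIController(kp=2.0, ki=5.0, dt=0.01)
y = 0.0
for _ in range(1000):
    u = pi.step(1.0, y)
    y += (u - y) * 0.01
print(round(y, 3))
```

    The fuzzy element of the hybrid scheme described in the paper effectively adapts such gains online during the fault transient, which fixed-gain PI control cannot do.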

  20. Comparing large lecture mechanics curricula using the Force Concept Inventory: A five thousand student study

    Science.gov (United States)

    Caballero, Marcos D.; Greco, Edwin F.; Murray, Eric R.; Bujak, Keith R.; Jackson Marr, M.; Catrambone, Richard; Kohlmyer, Matthew A.; Schatz, Michael F.

    2012-07-01

    The performance of over 5000 students in introductory calculus-based mechanics courses at the Georgia Institute of Technology was assessed using the Force Concept Inventory (FCI). Results from two different curricula were compared: a traditional mechanics curriculum and the Matter & Interactions (M&I) curriculum. Both were taught with similar interactive pedagogy. Post-instruction FCI averages were significantly higher for the traditional curriculum than for the M&I curriculum; the differences between curricula persist after accounting for factors such as pre-instruction FCI scores, grade point averages, and SAT scores. FCI performance on categories of items organized by concepts was also compared; traditional averages were significantly higher in each concept. We examined differences in student preparation between the curricula and found that the relative fraction of homework and lecture topics devoted to FCI force and motion concepts correlated with the observed performance differences. Concept inventories, as instruments for evaluating curricular reforms, are generally limited to the particular choice of content and goals of the instrument. Moreover, concept inventories fail to measure what are perhaps the most interesting aspects of reform: the non-overlapping content and goals that are not present in courses without reform.

  1. A comparative modeling study of a dual tracer experiment in a large lysimeter under atmospheric conditions

    Science.gov (United States)

    Stumpp, C.; Nützmann, G.; Maciejewski, S.; Maloszewski, P.

    2009-09-01

In this paper, five model approaches with different physical and mathematical concepts, varying in their model complexity and requirements, were applied to identify the transport processes in the unsaturated zone. The applicability of these model approaches was compared and evaluated by investigating two tracer breakthrough curves (bromide, deuterium) in a cropped, free-draining lysimeter experiment under natural atmospheric boundary conditions. The data set consisted of time series of water balance, depth-resolved water contents, pressure heads and resident concentrations measured during 800 days. The tracer transport parameters were determined using a simple stochastic model (stream tube model), three lumped parameter models (constant water content model, multi-flow dispersion model, variable flow dispersion model) and a transient model approach. All of them were able to fit the tracer breakthrough curves. The identified transport parameters of each model approach were compared. Despite the differing physical and mathematical concepts, the resulting parameters (mean water contents, mean water flux, dispersivities) of the five model approaches were all in the same range. The results indicate that the flow processes are also describable assuming steady state conditions. Homogeneous matrix flow is dominant, and a small pore volume with enhanced flow velocities near saturation was identified with the variably saturated flow and transport approach. The multi-flow dispersion model also identified preferential flow and additionally suggested a third, less mobile flow component. Due to the high fitting accuracy and parameter similarity, all model approaches yielded reliable results.
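
    The lumped dispersion models above are fitted to tracer breakthrough curves. For orientation, the textbook one-dimensional advection-dispersion solution for a step tracer input (the Ogata-Banks approximation, not any of the study's five models) can be sketched; the velocity, dispersion coefficient and observation depth below are illustrative assumptions:

```python
import math

# Approximate 1D advection-dispersion breakthrough curve for a step input:
#   C/C0 = 0.5 * erfc((x - v*t) / (2*sqrt(D*t)))
# Textbook (Ogata-Banks) form, used only to illustrate the kind of lumped
# dispersion model fitted in the study; v, D, x are illustrative values.
def breakthrough(x, t, v, D):
    if t <= 0:
        return 0.0
    return 0.5 * math.erfc((x - v * t) / (2.0 * math.sqrt(D * t)))

x = 2.0      # observation depth, m
v = 0.01     # mean pore-water velocity, m/day
D = 0.002    # dispersion coefficient, m^2/day
# At the mean arrival time t = x/v the relative concentration is 0.5
# by construction.
print(round(breakthrough(x, x / v, v, D), 3))  # → 0.5
```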

  2. Comparative funding consequences of large versus small gas-fired power generation units

    International Nuclear Information System (INIS)

    Johnson, N.G.

    1995-01-01

    Gas producers are increasingly looking to privately-owned gas-fired power generation as a major growth market to support the development of new fields being discovered across Australia. Gas-fired generating technology is more environmentally friendly than coal-fired power stations, has lower unit capital costs and has higher efficiency levels. With the recent downward trends in gas prices for power generation (especially in Western Australia) it is likely that gas will indeed be the consistently preferred fuel for generation in Australia. Gas producers should be sensitive to the different financial and risk characteristics of the potential market represented by large versus small gas-fired private power stations. These differences are exaggerated by the much sharper focus given by the private sector to quantify risk and to its allocation to the parties best able to manage it. The significant commercial differences between classes of generation projects result in gas producers themselves being exposed to diverging risk profiles through their gas supply contracts with generating companies. Selling gas to larger generation units results in gas suppliers accepting proportionately (i.e. not just prorata to the larger installed capacity) higher levels of financial risk. Risk arises from the higher probability of a project not being completed, from the increased size of penalty payments associated with non-delivery of gas and from the rising level of competition between gas suppliers. Gas producers must fully understand the economics and risks of their potential electricity customers and full financial analysis will materially help the gas supplier in subsequent commercial gas contract negotiations. (author). 1 photo

  3. Large-scale proteome comparative analysis of developing rhizomes of the ancient vascular plant Equisetum hyemale.

    Directory of Open Access Journals (Sweden)

    Tiago Santana Balbuena

    2012-06-01

Full Text Available Equisetum hyemale is a widespread vascular plant species, whose reproduction is mainly dependent on the growth and development of the rhizomes. Due to its key evolutionary position, the identification of factors that could be involved in the existence of the rhizomatous trait may contribute to a better understanding of the role of this underground organ for the successful propagation of this and other plant species. In the present work, we characterized the proteome of E. hyemale rhizomes using a GeLC-MS spectral-counting proteomics strategy. A total of 1,911 and 1,860 non-redundant proteins were identified in the rhizome apical tip and elongation zone, respectively. Rhizome-characteristic proteins were determined by comparisons of the developing rhizome tissues to developing roots. A total of 87 proteins were found to be up-regulated in both E. hyemale rhizome tissues in relation to developing roots. Hierarchical clustering indicated a vast dynamic range in the expression of the 87 characteristic proteins and revealed, based on the expression profile, the existence of 9 major protein groups. Gene ontology analyses suggested an over-representation of the terms involved in macromolecular and protein biosynthetic processes, gene expression and nucleotide and protein binding functions. Spatial differences analysis between the rhizome apical tip and the elongation zone revealed that only eight proteins were up-regulated in the apical tip including RNA-binding proteins and an acyl carrier protein, as well as a KH-domain protein and a T-complex subunit; while only seven proteins were up-regulated in the elongation zone including phosphomannomutase, galactomannan galactosyltransferase, endoglucanase 10 and 25 and mannose-1-phosphate guanyltransferase subunits alpha and beta. This is the first large-scale characterization of the proteome of a plant rhizome.
Implications of the findings were discussed in relation to other underground organs and related…

  4. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Science.gov (United States)

    Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire

    2016-01-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...

  5. Escape from washing out of baryon number in a two-zero-texture general Zee model compatible with the large mixing angle MSW solution

    International Nuclear Information System (INIS)

    Hasegawa, K.; Lim, C.S.; Ogure, K.

    2003-01-01

We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario.

  6. Escape from washing out of baryon number in a two-zero-texture general Zee model compatible with the large mixing angle MSW solution

    Science.gov (United States)

    Hasegawa, K.; Lim, C. S.; Ogure, K.

    2003-09-01

    We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario.

  7. Escape from washing out of baryon number in a two-zero-texture general Zee model compatible with the large mixing angle MSW solution

    OpenAIRE

    Hasegawa, K.; Lim, C. S.; Ogure, K.

    2003-01-01

    We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario.

  8. Comparing Children's Performance on and Preference for a Number-Line Estimation Task: Tablet versus Paper and Pencil

    Science.gov (United States)

    Piatt, Carley; Coret, Marian; Choi, Michael; Volden, Joanne; Bisanz, Jeffrey

    2016-01-01

    Tablet computers (tablets) are positioned to be powerful, innovative, effective, and motivating research and assessment tools. We addressed two questions critical for evaluating the appropriateness of using tablets to study number-line estimation, a skill associated with math achievement and argued to be central to numerical cognition. First, is…

  9. Innovations in Tertiary Education Financing: A Comparative Evaluation of Allocation Mechanisms. Education Working Paper Series. Number 4

    Science.gov (United States)

    Salmi, Jamil; Hauptman, Arthur M.

    2006-01-01

    In recent decades, a growing number of countries have sought innovative solutions to the substantial challenges they face in financing tertiary education. One of the principal challenges is that the demand for education beyond the secondary level in most countries around the world is growing far faster than the ability or willingness of…

  10. Comparative analyses of microbial structures and gene copy numbers in the anaerobic digestion of various types of sewage sludge.

    Science.gov (United States)

    Hidaka, Taira; Tsushima, Ikuo; Tsumori, Jun

    2018-04-01

Anaerobic co-digestion of various sewage sludges is a promising approach for greater recovery of energy, but the process is more complicated than mono-digestion of sewage sludge. The applicability of microbial structure analyses and gene quantification to understand microbial conditions was evaluated. The results show that information from gene analyses is useful for managing anaerobic co-digestion and detecting damaged microbes, in addition to conventional parameters like total solids, pH and biogas production. Total bacterial 16S rRNA gene copy numbers are the most useful tool for evaluating unstable anaerobic digestion of sewage sludge, rather than mcrA, total archaeal 16S rRNA gene copy numbers, or high-throughput sequencing. First-order decay rates of gene copy numbers during pH failure were higher than typical decay rates of microbes in stable operation. The sequencing analyses, including multidimensional scaling, showed very different microbial structure shifts, but the results were not consistent.
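
    The first-order decay rates mentioned above come from fitting N(t) = N0·exp(−k·t) to gene copy numbers measured over time. A minimal sketch of such a fit by log-linear least squares follows; the copy-number series is synthetic, for illustration only, not the study's qPCR data:

```python
import math

# First-order decay: N(t) = N0 * exp(-k * t). Estimate k from a log-linear
# least-squares fit, as one might for qPCR gene copy numbers over time.
# The data below are synthetic, for illustration only.
days = [0, 2, 4, 6, 8]
copies = [1.0e9, 4.5e8, 2.1e8, 9.8e7, 4.4e7]

logs = [math.log(c) for c in copies]
n = len(days)
mean_t = sum(days) / n
mean_y = sum(logs) / n
# Ordinary least-squares slope of ln(N) versus t.
slope = sum((t - mean_t) * (y - mean_y) for t, y in zip(days, logs)) / \
        sum((t - mean_t) ** 2 for t in days)
k = -slope  # per-day first-order decay rate
print(round(k, 3))
```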

  11. Comparing centralised and decentralised anaerobic digestion of stillage from a large-scale bioethanol plant to animal feed production.

    Science.gov (United States)

    Drosg, B; Wirthensohn, T; Konrad, G; Hornbachner, D; Resch, C; Wäger, F; Loderer, C; Waltenberger, R; Kirchmayr, R; Braun, R

    2008-01-01

A comparison of stillage treatment options for large-scale bioethanol plants was based on the data of an existing plant producing approximately 200,000 t/yr of bioethanol and 1,400,000 t/yr of stillage. Animal feed production, the state-of-the-art technology at the plant, was compared to anaerobic digestion. The latter was simulated in two different scenarios: digestion in small-scale biogas plants in the surrounding area versus digestion in a large-scale biogas plant at the bioethanol production site. Emphasis was placed on a holistic simulation balancing chemical parameters and calculating logistic algorithms to compare the efficiency of the stillage treatment solutions. For central anaerobic digestion, different digestate handling solutions were considered because of the large amount of digestate. For land application, a minimum of 36,000 ha of available agricultural area and 600,000 m³ of storage volume would be needed. Secondly, membrane purification of the digestate was investigated, consisting of a decanter, microfiltration, and reverse osmosis. As a third option, aerobic wastewater treatment of the digestate was discussed. The final outcome was an economic evaluation of the three mentioned stillage treatment options, as a guide to stillage management for operators of large-scale bioethanol plants.

  12. How much can the number of jabiru stork (Ciconiidae) nests vary due to change of flood extension in a large Neotropical floodplain?

    Directory of Open Access Journals (Sweden)

    Guilherme Mourão

    2010-10-01

Full Text Available The jabiru stork, Jabiru mycteria (Lichtenstein, 1819), a large, long-legged wading bird occurring in lowland wetlands from southern Mexico to northern Argentina, is considered endangered in a large portion of its distribution range. We conducted aerial surveys to estimate the number of jabiru active nests in the Brazilian Pantanal (140,000 km²) in September of 1991-1993, 1998, 2000-2002, and 2004. Corrected densities of active nests were regressed against the annual hydrologic index (AHI), an index of flood extension in the Pantanal based on the water level of the Paraguay River. Annual nest density was a non-linear function of the AHI, modeled by the equation 6.5 · 10⁻⁸ · AHI^1.99 (corrected r² = 0.72, n = 7). We applied this model to the AHI between 1900 and 2004. The results indicate that the number of jabiru nests may have varied from about 220 in 1971 to more than 23,000 in the nesting season of 1921, and the estimates for our study period (1991 to 2004) averaged about 12,400 nests. Our model indicates that the inter-annual variations in flooding extent can determine dramatic changes in the number of active jabiru nests. Since the jabiru stork responds negatively to drier conditions in the Pantanal, direct human-induced changes in the hydrological patterns, as well as the effects of global climate change, may strongly jeopardize the population in the region.
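
    The fitted power law can be evaluated directly; because the exponent 1.99 is nearly 2, doubling the flood index roughly quadruples the predicted number of nests. A sketch with illustrative AHI values (the historical AHI series is not reproduced here):

```python
# Power-law model from the abstract: nests ≈ 6.5e-8 * AHI**1.99.
# The AHI inputs below are illustrative, not the historical series.
def predicted_nests(ahi):
    return 6.5e-8 * ahi ** 1.99

# Doubling the flood index roughly quadruples the predicted nest count,
# since the exponent is close to 2.
low, high = 1.0e6, 2.0e6
ratio = predicted_nests(high) / predicted_nests(low)
print(round(ratio, 2))  # → 3.97
```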

  13. Clinical significance of rare copy number variations in epilepsy: a case-control survey using microarray-based comparative genomic hybridization.

    Science.gov (United States)

    Striano, Pasquale; Coppola, Antonietta; Paravidino, Roberta; Malacarne, Michela; Gimelli, Stefania; Robbiano, Angela; Traverso, Monica; Pezzella, Marianna; Belcastro, Vincenzo; Bianchi, Amedeo; Elia, Maurizio; Falace, Antonio; Gazzerro, Elisabetta; Ferlazzo, Edoardo; Freri, Elena; Galasso, Roberta; Gobbi, Giuseppe; Molinatto, Cristina; Cavani, Simona; Zuffardi, Orsetta; Striano, Salvatore; Ferrero, Giovanni Battista; Silengo, Margherita; Cavaliere, Maria Luigia; Benelli, Matteo; Magi, Alberto; Piccione, Maria; Dagna Bricarelli, Franca; Coviello, Domenico A; Fichera, Marco; Minetti, Carlo; Zara, Federico

    2012-03-01

Objective: To perform an extensive search for genomic rearrangements by microarray-based comparative genomic hybridization in patients with epilepsy. Design: Prospective cohort study. Setting: Epilepsy centers in Italy. Participants: Two hundred seventy-nine patients with unexplained epilepsy, 265 individuals with nonsyndromic mental retardation but no epilepsy, and 246 healthy control subjects were screened by microarray-based comparative genomic hybridization. Main outcome measures: Identification of copy number variations (CNVs) and gene enrichment. Results: Rare CNVs occurred in 26 patients (9.3%) and 16 healthy control subjects (6.5%) (P = .26). The CNVs identified in patients were larger (P = .03) and showed higher gene content (P = .02) than those in control subjects. The CNVs larger than 1 megabase (P = .002) and including more than 10 genes (P = .005) occurred more frequently in patients than in control subjects. Nine patients (34.6%) among those harboring rare CNVs showed rearrangements associated with emerging microdeletion or microduplication syndromes. Mental retardation and neuropsychiatric features were associated with rare CNVs (P = .004), whereas epilepsy type was not. The CNV rate in patients with epilepsy and mental retardation or neuropsychiatric features is not different from that observed in patients with mental retardation only. Moreover, significant enrichment of genes involved in ion transport was observed within CNVs identified in patients with epilepsy. Conclusions: Patients with epilepsy show a significantly increased burden of large, rare, gene-rich CNVs, particularly when associated with mental retardation and neuropsychiatric features. The limited overlap between CNVs observed in the epilepsy group and those observed in the group with mental retardation only as well as the involvement of specific (ion channel) genes indicate a specific association between the identified CNVs and epilepsy. Screening for CNVs should be performed for diagnostic purposes preferentially in patients with epilepsy and mental retardation or…

  14. Multiple-relaxation-time lattice Boltzmann model for incompressible miscible flow with large viscosity ratio and high Péclet number

    Science.gov (United States)

    Meng, Xuhui; Guo, Zhaoli

    2015-10-01

    A lattice Boltzmann model with a multiple-relaxation-time (MRT) collision operator is proposed for incompressible miscible flow with a large viscosity ratio as well as a high Péclet number in this paper. The equilibria in the present model are motivated by the lattice kinetic scheme previously developed by Inamuro et al. [Philos. Trans. R. Soc. London, Ser. A 360, 477 (2002), 10.1098/rsta.2001.0942]. The fluid viscosity and diffusion coefficient depend on both the corresponding relaxation times and additional adjustable parameters in this model. As a result, the corresponding relaxation times can be adjusted in proper ranges to enhance the performance of the model. Numerical validations of the Poiseuille flow and a diffusion-reaction problem demonstrate that the proposed model has second-order accuracy in space. Thereafter, the model is used to simulate flow through a porous medium, and the results show that the proposed model has the advantage to obtain a viscosity-independent permeability, which makes it a robust method for simulating flow in porous media. Finally, a set of simulations are conducted on the viscous miscible displacement between two parallel plates. The results reveal that the present model can be used to simulate, to a high level of accuracy, flows with large viscosity ratios and/or high Péclet numbers. Moreover, the present model is shown to provide superior stability in the limit of high kinematic viscosity. In summary, the numerical results indicate that the present lattice Boltzmann model is an ideal numerical tool for simulating flow with a large viscosity ratio and/or a high Péclet number.
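
    In lattice Boltzmann models, the transport coefficients follow from the relaxation times: in lattice units the kinematic viscosity is ν = c_s²(τ_ν − 1/2), and the diffusivity obeys the analogous relation, with c_s² = 1/3 for standard lattices. Large viscosity ratios and high Péclet numbers therefore push relaxation times toward the stability limit of 1/2, which is the regime the proposed MRT model targets. A sketch of these standard relations (the parameter values are illustrative, not the paper's):

```python
# Standard lattice-Boltzmann relations in lattice units (dx = dt = 1):
#   kinematic viscosity  nu = cs2 * (tau_nu - 0.5)
#   diffusivity          D  = cs2 * (tau_d  - 0.5)
# with cs2 = 1/3. Parameter values below are illustrative only.
CS2 = 1.0 / 3.0

def viscosity(tau_nu):
    return CS2 * (tau_nu - 0.5)

def diffusivity(tau_d):
    return CS2 * (tau_d - 0.5)

def peclet(u, length, tau_d):
    # Grid Peclet number Pe = u * L / D for a flow speed u over length L.
    return u * length / diffusivity(tau_d)

nu = viscosity(0.8)    # tau_nu = 0.8 gives nu = 0.1
# A relaxation time close to 0.5 gives a small diffusivity, hence a
# high Peclet number -- the numerically difficult regime.
print(round(nu, 3), round(peclet(0.1, 100, 0.53), 1))
```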

  15. Enhancement of phase space density by increasing trap anisotropy in a magneto-optical trap with a large number of atoms

    International Nuclear Information System (INIS)

    Vengalattore, M.; Conroy, R.S.; Prentiss, M.G.

    2004-01-01

The phase space density of dense, cylindrical clouds of atoms in a 2D magneto-optic trap is investigated. For a large number of trapped atoms (>10⁸), the density of a spherical cloud is limited by photon reabsorption. However, as the atom cloud is deformed to reduce the radial optical density, the temperature of the atoms decreases due to the suppression of multiple scattering, leading to an increase in the phase space density. A phase space density of 2×10⁻⁴ has been achieved in a magneto-optic trap containing 2×10⁸ atoms.
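
    Phase space density here is the dimensionless quantity ρ = n·λ_dB³, where λ_dB = h/√(2π·m·k_B·T) is the thermal de Broglie wavelength, so cooling at fixed density raises ρ as T^(−3/2). A sketch for Rb-87 with illustrative trap values (the density and temperatures are assumptions, not the paper's measurements):

```python
import math

# Phase space density rho = n * lambda_dB**3, with the thermal de Broglie
# wavelength lambda_dB = h / sqrt(2*pi*m*kB*T). The density and temperatures
# below are typical magneto-optical-trap values chosen for illustration.
H = 6.626e-34            # Planck constant, J*s
KB = 1.381e-23           # Boltzmann constant, J/K
M_RB87 = 87 * 1.661e-27  # Rb-87 mass, kg

def phase_space_density(n, T):
    lam = H / math.sqrt(2 * math.pi * M_RB87 * KB * T)
    return n * lam ** 3

# Cooling at fixed density raises the phase space density as T**(-3/2):
# a fourfold temperature drop gives an eightfold gain.
rho_hot = phase_space_density(1e17, 200e-6)   # n in m^-3, T = 200 uK
rho_cold = phase_space_density(1e17, 50e-6)   # T = 50 uK
print(round(rho_cold / rho_hot, 3))  # → 8.0
```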

  16. Timoides agassizii Bigelow, 1904, little-known hydromedusa (Cnidaria), appears briefly in large numbers off Oman, March 2011, with additional notes about species of the genus Timoides.

    Science.gov (United States)

    Purushothaman, Jasmine; Kharusi, Lubna Al; Mills, Claudia E; Ghielani, Hamed; Marzouki, Mohammad Al

    2013-12-11

A bloom of the hydromedusan jellyfish, Timoides agassizii, occurred in February 2011 off the coast of Sohar, Al Batinah, Sultanate of Oman, in the Gulf of Oman. This species was first observed in 1902 in great numbers off Haddummati Atoll in the Maldive Islands in the Indian Ocean and has rarely been seen since. The species appeared briefly in large numbers off Oman in 2011, and subsequent observation of our 2009 samples of zooplankton from Sohar revealed that it was also present in low numbers (two collected) in one sample in 2009; these are the first records in the Indian Ocean north of the Maldives. Medusae collected off Oman were almost identical to those recorded previously from the Maldive Islands, Papua New Guinea, the Marshall Islands, Guam, the South China Sea, and Okinawa. T. agassizii is a species that likely lives for several months. It was present in our plankton samples together with large numbers of the oceanic siphonophore Physalia physalis only during a single month's samples, suggesting that the temporary bloom off Oman was likely due to the arrival of mature, open-ocean medusae into nearshore waters. We see no evidence that T. agassizii has established a new population along Oman, since if so, it would likely have been present in more than one sample period. We are unable to deduce further details of the life cycle of this species from blooms of many mature individuals nearshore, about a century apart. Examination of a single damaged T. agassizii medusa from Guam calls into question the existence of its congener, T. latistyla, known only from a single specimen.

  17. Large-Scale Network Analysis of Whole-Brain Resting-State Functional Connectivity in Spinal Cord Injury: A Comparative Study.

    Science.gov (United States)

    Kaushal, Mayank; Oni-Orisan, Akinwunmi; Chen, Gang; Li, Wenjun; Leschke, Jack; Ward, Doug; Kalinosky, Benjamin; Budde, Matthew; Schmit, Brian; Li, Shi-Jiang; Muqeet, Vaishnavi; Kurpad, Shekar

    2017-09-01

    Network analysis based on graph theory depicts the brain as a complex network that allows inspection of overall brain connectivity pattern and calculation of quantifiable network metrics. To date, large-scale network analysis has not been applied to resting-state functional networks in complete spinal cord injury (SCI) patients. To characterize modular reorganization of whole brain into constituent nodes and compare network metrics between SCI and control subjects, fifteen subjects with chronic complete cervical SCI and 15 neurologically intact controls were scanned. The data were preprocessed followed by parcellation of the brain into 116 regions of interest (ROI). Correlation analysis was performed between every ROI pair to construct connectivity matrices and ROIs were categorized into distinct modules. Subsequently, local efficiency (LE) and global efficiency (GE) network metrics were calculated at incremental cost thresholds. The application of a modularity algorithm organized the whole-brain resting-state functional network of the SCI and the control subjects into nine and seven modules, respectively. The individual modules differed across groups in terms of the number and the composition of constituent nodes. LE demonstrated statistically significant decrease at multiple cost levels in SCI subjects. GE did not differ significantly between the two groups. The demonstration of modular architecture in both groups highlights the applicability of large-scale network analysis in studying complex brain networks. Comparing modules across groups revealed differences in number and membership of constituent nodes, indicating modular reorganization due to neural plasticity.
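
    The global efficiency metric compared in the study is the average inverse shortest-path length over all node pairs. A from-scratch sketch on a toy graph follows (the brain parcellation and connectivity matrices themselves are not reproduced here):

```python
from collections import deque

# Global efficiency of a graph: the mean of 1/d(i, j) over all ordered node
# pairs, where d is the shortest-path length. One of the network metrics
# compared in the study, computed here on a toy graph for illustration.
def bfs_distances(adj, source):
    # Breadth-first search gives shortest-path lengths in an unweighted graph.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def global_efficiency(adj):
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for s in nodes:
        dist = bfs_distances(adj, s)
        total += sum(1.0 / d for t, d in dist.items() if t != s)
    return total / (n * (n - 1))

# Toy 4-node ring: every pair is at distance 1 or 2, so the efficiency
# is (8*1 + 4*0.5) / 12 = 5/6.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(global_efficiency(ring))
```

    Local efficiency, the metric that differed significantly between the SCI and control groups, applies the same computation to the subgraph induced by each node's neighbors and averages the result over nodes.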

  18. A comparative economic assessment of hydrogen production from large central versus smaller distributed plant in a carbon constrained world

    International Nuclear Information System (INIS)

    Nguyen, Y.V.; Ngo, Y.A.; Tinkler, M.J.; Cowan, N.

    2003-01-01

    This paper compares the economics of producing hydrogen at large central plants versus smaller distributed plants at user sites. The economics of two types of central plant, each producing 100 million standard cubic feet per day of hydrogen, based on electrolysis and natural gas steam reforming technologies, will be discussed. The additional cost of controlling CO2 emissions from the natural gas steam reforming plant will be included in the analysis in order to reflect a future carbon-constrained world. The cost of delivering hydrogen from the large central plant to user sites in a large metropolitan area will be highlighted, and the delivered cost will be compared to the cost from on-site distributed generation plants. Five types of distributed generation plant, based on proton exchange membrane, alkaline electrolysis, and advanced steam reforming, will be analysed and discussed. Two criteria were used to rank the various hydrogen production options: the cost of production and the price of hydrogen needed to achieve an acceptable return on investment. (author)

  19. Comparative genome analysis identifies two large deletions in the genome of highly-passaged attenuated Streptococcus agalactiae strain YM001 compared to the parental pathogenic strain HN016.

    Science.gov (United States)

    Wang, Rui; Li, Liping; Huang, Yan; Luo, Fuguang; Liang, Wanwen; Gan, Xi; Huang, Ting; Lei, Aiying; Chen, Ming; Chen, Lianfu

    2015-11-04

    Streptococcus agalactiae (S. agalactiae), also known as group B Streptococcus (GBS), is an important pathogen causing neonatal pneumonia and meningitis, bovine mastitis, and fish meningoencephalitis. The global outbreaks of Streptococcus disease in tilapia cause huge economic losses and threaten human food hygiene safety as well. To investigate the mechanism of S. agalactiae pathogenesis in tilapia and to develop an attenuated S. agalactiae vaccine, this study sequenced and comparatively analyzed the whole genomes of the virulent wild-type S. agalactiae strain HN016 and its highly-passaged attenuated strain YM001, derived from tilapia. We performed Illumina sequencing of DNA prepared from strains HN016 and YM001. Sequenced reads were assembled, and nucleotide comparisons, single nucleotide polymorphisms (SNPs), and indels were analyzed between the draft genomes of HN016 and YM001. Clustered regularly interspaced short palindromic repeats (CRISPRs) and prophages were detected and analyzed in different S. agalactiae strains. The genome of S. agalactiae YM001 was 2,047,957 bp with a GC content of 35.61%; it contained 2044 genes and 88 RNAs. Meanwhile, the genome of S. agalactiae HN016 was 2,064,722 bp with a GC content of 35.66%; it had 2063 genes and 101 RNAs. Comparative genome analysis indicated that, compared with HN016, the YM001 genome had two significant large deletions, of 5832 and 11,116 bp respectively, resulting in the deletion of three rRNA and ten tRNA genes, as well as the deletion and functional damage of ten genes related to metabolism, transport, growth, anti-stress responses, etc. Besides these two large deletions, ten other deletions and 28 single nucleotide variations (SNVs) were also identified, mainly affecting metabolism- and growth-related genes. The genome of the attenuated S. agalactiae YM001 showed significant variations, resulting in the deletion of 10 functional genes, compared to the parental pathogenic strain HN016. The deleted and mutated functional genes all

  20. Comparative genomics analysis of rice and pineapple contributes to understand the chromosome number reduction and genomic changes in grasses

    Directory of Open Access Journals (Sweden)

    Jinpeng Wang

    2016-10-01

    Full Text Available Rice is one of the most researched model plants, and has a genome structure most resembling that of the grass common ancestor after the grass-common tetraploidization ~100 million years ago. There has been a standing controversy over whether there were 5 or 7 basic chromosomes before the tetraploidization, which could not be well resolved for lack of a sequenced and assembled outgroup plant with a conservative genome structure. Recently, the availability of the pineapple genome, which has not been subjected to the grass-common tetraploidization, provides a precious opportunity to solve the above controversy and to research the genome changes of rice and other grasses. Here, we performed a comparative genomics analysis of pineapple and rice, and found solid evidence that the grass common ancestor had 2n = 2x = 14 basic chromosomes before the tetraploidization, duplicated to 2n = 4x = 28 after the event. Moreover, we propose that the large number of genes missing from duplicated regions in rice should be explained by an allotetraploidy produced by prominently divergent parental lines, rather than by gene losses after their divergence. This means that genome fractionation might have occurred before the formation of the allotetraploid grass ancestor.

  1. Global repeat discovery and estimation of genomic copy number in a large, complex genome using a high-throughput 454 sequence survey

    Directory of Open Access Journals (Sweden)

    Varala Kranthi

    2007-05-01

    Full Text Available Abstract Background Extensive computational and database tools are available to mine genomic and genetic databases for model organisms, but little genomic data is available for many species of ecological or agricultural significance, especially those with large genomes. Genome surveys using conventional sequencing techniques are powerful, particularly for detecting sequences present in many copies per genome. However, these methods are time-consuming and have potential drawbacks. High-throughput 454 sequencing provides an alternative method by which much information can be gained quickly and cheaply from high-coverage surveys of genomic DNA. Results We sequenced 78 million base pairs of randomly sheared soybean DNA which passed our quality criteria. Computational analysis of the survey sequences provided global information on the abundant repetitive sequences in soybean. The sequence was used to determine the copy number across regions of large genomic clones or contigs and to discover higher-order structures within satellite repeats. We have created an annotated, online database of sequences present in multiple copies in the soybean genome. The low bias of pyrosequencing against repeat sequences is demonstrated by the overall composition of the survey data, which matches well with past estimates of repetitive DNA content obtained by DNA re-association kinetics (Cot analysis). Conclusion This approach provides a potential aid to conventional or shotgun genome assembly by allowing rapid assessment of copy number in any clone or clone-end sequence. In addition, we show that partial sequencing can provide access to partial protein-coding sequences.
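The copy-number assessment described in this record reduces, at its simplest, to a depth ratio: the observed survey-read depth over a sequence, divided by the depth expected for a single-copy locus. A minimal sketch follows; the genome size and depths in the comment are illustrative assumptions, not figures from the study.

```python
def single_copy_depth(total_survey_bases, genome_size_bp):
    """Expected coverage depth of a single-copy locus in a random shotgun survey."""
    return total_survey_bases / genome_size_bp

def estimated_copies(observed_depth, total_survey_bases, genome_size_bp):
    """Copies per haploid genome = observed depth / single-copy depth."""
    return observed_depth / single_copy_depth(total_survey_bases, genome_size_bp)

# Illustrative only: 78 Mb of survey reads over an assumed ~1.1 Gb genome gives
# ~0.07x single-copy depth, so a repeat covered at 7.1x is roughly 100 copies.
```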

  2. Evaluation of list-mode ordered subset expectation maximization image reconstruction for pixelated solid-state Compton gamma camera with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM), and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For a Compton camera, especially one with a large number of readout channels, image reconstruction presents a big challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources, and with a distance between the field of view and the first detector plane equal to 100 mm, which corresponds to a realistic nuclear medicine environment.
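The list-mode EM update named in this record can be sketched in a few lines with a dense toy system matrix. This is a hedged, generic sketch: with one subset it reduces to list-mode MLEM, and a real LM-OSEM implementation (including the VIP detector model and subset-specific sensitivities) is far more involved.

```python
import numpy as np

def lm_osem(events, sens, n_subsets=1, n_iter=10):
    """List-mode OSEM sketch. `events[e, j]` is the probability that detected
    event e originated in voxel j; `sens[j]` is the per-voxel sensitivity.
    With n_subsets=1 this is plain list-mode MLEM; a production OSEM would
    also use subset-specific sensitivity images."""
    x = np.ones(events.shape[1])
    subsets = np.array_split(np.arange(events.shape[0]), n_subsets)
    for _ in range(n_iter):
        for sub in subsets:
            A = events[sub]
            fwd = A @ x                      # expected intensity along each event's response
            x = x / sens * (A.T @ (1.0 / fwd))
    return x
```

With events that unambiguously identify their source voxel, the estimate converges to the per-voxel event counts divided by sensitivity, as expected for an EM fixed point.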

  3. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System: Outage-Limited Scenario

    KAUST Repository

    Makki, Behrooz

    2016-03-22

    This paper investigates the performance of point-to-point multiple-input multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. Our results are obtained for different fading conditions, and the effect of power amplifier efficiency and feedback error probability on the performance of the MIMO-HARQ systems is analyzed. Then, we use some recent results on the achievable rates of finite-block-length codes to analyze the effect of codeword length on the system performance. Moreover, we derive closed-form expressions for the asymptotic performance of the MIMO-HARQ systems when the number of antennas increases. Our analytical and numerical results show that different outage requirements can be satisfied with relatively few transmit/receive antennas.
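The "minimum antennas for an outage target" question can be explored numerically. The sketch below is a hedged Monte Carlo stand-in, not the paper's closed-form analysis: it assumes an open-loop Rayleigh channel without HARQ, equal transmit power per antenna, and (in `min_antennas`) equal transmit and receive antenna counts; the SNR and rate values are illustrative.

```python
import numpy as np

def outage_prob(nt, nr, snr, rate, n_trials=2000, rng=None):
    """Empirical P(log2 det(I + (snr/nt) H H^H) < rate) over i.i.d. Rayleigh fading."""
    rng = np.random.default_rng(rng)
    fails = 0
    for _ in range(n_trials):
        H = (rng.standard_normal((nr, nt))
             + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        cap = np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T).real)
        if cap < rate:
            fails += 1
    return fails / n_trials

def min_antennas(snr, rate, target, max_n=64):
    """Smallest n (assuming nt = nr = n) whose empirical outage is below `target`."""
    for n in range(1, max_n + 1):
        if outage_prob(n, n, snr, rate, rng=0) <= target:
            return n
    return None
```

The qualitative behavior matches the record's conclusion: outage falls quickly with the antenna count, so modest arrays already meet loose outage targets.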

  4. A comparative study of scale-adaptive and large-eddy simulations of highly swirling turbulent flow through an abrupt expansion

    International Nuclear Information System (INIS)

    Javadi, Ardalan; Nilsson, Håkan

    2014-01-01

    The strongly swirling turbulent flow through an abrupt expansion is investigated using highly resolved large-eddy simulation (LES) and scale-adaptive simulation (SAS) to shed more light on the stagnation region and the helical vortex breakdown. The vortex breakdown in an abrupt expansion resembles the so-called vortex rope occurring in hydropower draft tubes. It is known that the large-scale helical vortex structures can be captured by regular RANS turbulence models; however, the spurious suppression of the small-scale structures should be avoided by using less diffusive methods. The present work compares LES and SAS results with the experimental measurements of Dellenback et al. (1988). The computations are conducted using a general non-orthogonal finite-volume method with fully collocated storage, available in the OpenFOAM-2.1.x CFD code. The dynamics of the flow is studied at two Reynolds numbers, Re = 6.0×10^4 and Re = 10^5, at the almost constant high swirl numbers of Sr = 1.16 and Sr = 1.23, respectively. The time-averaged velocity and pressure fields and the root mean square of the velocity fluctuations are captured and investigated qualitatively. The flow with the lower Reynolds number gives a much weaker outburst, although the frequency of the structures seems to be constant for the plateau swirl number.

  5. COMPAR

    International Nuclear Information System (INIS)

    Kuefner, K.

    1976-01-01

    COMPAR works on FORTRAN arrays with four indices: A = A(i,j,k,l) where, for each fixed k0, l0, only the 'plane' [A(i,j,k0,l0), i = 1..i_max, j = 1..j_max] is held in fast memory. Given two arrays A, B of this type, COMPAR has the capability to 1) re-norm A and B in different ways; 2) calculate the deviations epsilon defined as epsilon(i,j,k,l) := [A(i,j,k,l) - B(i,j,k,l)] / GEW(i,j,k,l), where GEW(i,j,k,l) may be chosen in three different ways; 3) calculate the mean, standard deviation and maximum of the array epsilon (by several intermediate stages); 4) determine traverses in the array epsilon; 5) plot these traverses on a printer; 6) simplify plots of these traverses with the PLOTEASY system by creating input data blocks for that system. The main application of COMPAR is given (so far) by the comparison of two- and three-dimensional multigroup neutron flux fields. (orig.)
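Capabilities 2 and 3 above can be sketched in NumPy, processing one (k0, l0) plane at a time to mirror COMPAR's fast-memory layout. The unit-weight default is a stand-in for the three GEW choices, which the record does not spell out; this is an illustrative sketch, not the FORTRAN code.

```python
import numpy as np

def deviation_stats(A, B, weight=None):
    """epsilon = (A - B) / GEW, computed plane-by-plane over the last two
    axes; returns (mean, std, max_abs) of epsilon over the whole array."""
    imax, jmax, kmax, lmax = A.shape
    acc_sum = acc_sq = 0.0
    max_abs = 0.0
    n = A.size
    for k in range(kmax):
        for l in range(lmax):
            gew = 1.0 if weight is None else weight[:, :, k, l]
            eps = (A[:, :, k, l] - B[:, :, k, l]) / gew   # one 'plane' in fast memory
            acc_sum += eps.sum()
            acc_sq += (eps ** 2).sum()
            max_abs = max(max_abs, np.abs(eps).max())
    mean = acc_sum / n
    std = np.sqrt(max(acc_sq / n - mean ** 2, 0.0))       # guard tiny negative rounding
    return mean, std, max_abs
```

Traverses (capability 4) would then just be 1-D slices of the same epsilon field along a chosen index.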

  6. Long-term changes in nutrients and mussel stocks are related to numbers of breeding eiders Somateria mollissima at a large Baltic colony.

    Directory of Open Access Journals (Sweden)

    Karsten Laursen

    Full Text Available BACKGROUND: The Baltic/Wadden Sea eider Somateria mollissima flyway population is decreasing, and this trend is also reflected in the large eider colony at Christiansø, situated in the Baltic Sea. This colony showed a 15-fold increase from 1925 until the mid-1990s, followed by a rapid decline in recent years, although the causes of this trend remain unknown. Most birds from the colony winter in the Wadden Sea, for which environmental data and information on the stock size of their main food, the mussel Mytilus edulis, exist. We hypothesised that changes in nutrients and water temperature in the Wadden Sea affected the ecosystem and hence the size of the mussel stocks, the principal food item for eiders, thereby influencing the number of breeding eiders in the Christiansø colony. METHODOLOGY/PRINCIPAL FINDINGS: A positive relationship between the amount of fertilizer used by farmers and the concentration of phosphorus in the Wadden Sea (with a time lag of one year) allowed analysis of the predictions concerning the effects of nutrients for the period 1925-2010. (1) Increasing amounts of fertilizer used in agriculture increased the amount of nutrients in the marine environment, thereby increasing the mussel stocks in the Wadden Sea. (2) The number of eiders at Christiansø increased when the amount of fertilizer increased. Finally, (3) the number of eiders in the colony at Christiansø increased with the size of the mussel stocks in the Wadden Sea. CONCLUSIONS/SIGNIFICANCE: The trend in the number of eiders at Christiansø is representative of the entire flyway population, and since nutrient reduction in the marine environment occurs in most parts of Northwest Europe, we hypothesize that this environmental candidate parameter is involved in the overall regulation of the Baltic/Wadden Sea eider population during recent decades.

  7. A comparative study of all-vanadium and iron-chromium redox flow batteries for large-scale energy storage

    Science.gov (United States)

    Zeng, Y. K.; Zhao, T. S.; An, L.; Zhou, X. L.; Wei, L.

    2015-12-01

    The promise of redox flow batteries (RFBs) utilizing soluble redox couples, such as all-vanadium as well as iron and chromium ions, is becoming increasingly recognized for large-scale storage of renewables such as wind and solar, owing to their unique advantages including scalability, intrinsic safety, and long cycle life. An ongoing question associated with these two RFBs is whether the vanadium redox flow battery (VRFB) or the iron-chromium redox flow battery (ICRFB) is more suitable and competitive for large-scale energy storage. To address this concern, a comparative study has been conducted for the two types of battery based on their charge-discharge performance, cycle performance, and capital cost. It is found that: i) the two batteries have similar energy efficiencies at high current densities; ii) the ICRFB exhibits a higher capacity decay rate than the VRFB; and iii) the ICRFB has much lower capital costs when operated at high power densities or at large capacities.

  8. Comparative analysis on arthroscopic sutures of large and extensive rotator cuff injuries in relation to the degree of osteopenia

    Directory of Open Access Journals (Sweden)

    Alexandre Almeida

    2015-02-01

    Full Text Available OBJECTIVE: To analyze the results from arthroscopic suturing of large and extensive rotator cuff injuries according to the patient's degree of osteopenia. METHOD: 138 patients who underwent arthroscopic suturing of large and extensive rotator cuff injuries between 2003 and 2011 were analyzed. Those operated on from October 2008 onwards formed a prospective cohort, while the remainder formed a retrospective cohort. Also from October 2008 onwards, bone densitometry evaluation was requested at the time of the surgical treatment. For the patients operated on before this date, densitometry examinations performed up to two years before or after the surgical treatment were investigated. The patients were divided into three groups. Those with osteoporosis formed group 1 (n = 16); those with osteopenia, group 2 (n = 33); and normal individuals, group 3 (n = 55). RESULTS: In analyzing the University of California at Los Angeles (UCLA) scores of group 3 and comparing them with group 2, no statistically significant difference was seen (p = 0.070). Analysis of group 3 in comparison with group 1 showed a statistically significant difference (p = 0.027). CONCLUSION: The results from arthroscopic suturing of large and extensive rotator cuff injuries seem to be influenced by the patient's bone mineral density, as assessed using bone densitometry.

  9. Towards Development of Clustering Applications for Large-Scale Comparative Genotyping and Kinship Analysis Using Y-Short Tandem Repeats.

    Science.gov (United States)

    Seman, Ali; Sapawi, Azizian Mohd; Salleh, Mohd Zaki

    2015-06-01

    Y-chromosome short tandem repeats (Y-STRs) are genetic markers with practical applications in human identification. However, where mass identification is required (e.g., in the aftermath of disasters with significant fatalities), the efficiency of the process could be improved with new statistical approaches. Clustering applications are relatively new tools for large-scale comparative genotyping, and the k-Approximate Modal Haplotype (k-AMH), an efficient algorithm for clustering large-scale Y-STR data, represents a promising method for developing these tools. In this study we improved the k-AMH and produced three new algorithms: the Nk-AMH I (including a new initial cluster center selection), the Nk-AMH II (including a new dominant weighting value), and the Nk-AMH III (combining I and II). The Nk-AMH III was the superior algorithm, with mean clustering accuracy that increased in four out of six datasets and remained at 100% in the other two. Additionally, the Nk-AMH III achieved a 2% higher overall mean clustering accuracy score than the k-AMH, as well as optimal accuracy for all datasets (0.84-1.00). With inclusion of the two new methods, the Nk-AMH III produced an optimal solution for clustering Y-STR data; thus, the algorithm has potential for further development towards fully automatic clustering of any large-scale genotypic data.
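The record does not give the k-AMH update rules, so as a hedged illustration of the general idea, here is a plain k-modes-style clustering over integer Y-STR profiles: modal haplotypes serve as cluster centers and Hamming distance drives assignment. The seeding, dominant weighting, and accuracy scoring of the actual Nk-AMH variants differ; the profiles in the usage example are invented.

```python
from collections import Counter

def mode_haplotype(cluster):
    """Locus-wise modal allele across a cluster of equal-length haplotypes."""
    return tuple(Counter(h[i] for h in cluster).most_common(1)[0][0]
                 for i in range(len(cluster[0])))

def hamming(a, b):
    """Number of loci at which two haplotypes differ."""
    return sum(x != y for x, y in zip(a, b))

def kmodes(haps, k, n_iter=20):
    """Plain k-modes over integer STR profiles (an illustrative stand-in for k-AMH)."""
    # Deterministic, spread-out seeding for reproducibility.
    centers = [haps[(i * len(haps)) // k] for i in range(k)]
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for h in haps:
            j = min(range(k), key=lambda c: hamming(h, centers[c]))
            clusters[j].append(h)
        centers = [mode_haplotype(cl) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    labels = [min(range(k), key=lambda c: hamming(h, centers[c])) for h in haps]
    return labels, centers
```

Clustering accuracy of the kind reported (0.84-1.00) would then be computed by matching the recovered labels against known haplogroup assignments.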

  10. PACOM: A Versatile Tool for Integrating, Filtering, Visualizing, and Comparing Multiple Large Mass Spectrometry Proteomics Data Sets.

    Science.gov (United States)

    Martínez-Bartolomé, Salvador; Medina-Aunon, J Alberto; López-García, Miguel Ángel; González-Tejedo, Carmen; Prieto, Gorka; Navajas, Rosana; Salazar-Donate, Emilio; Fernández-Costa, Carolina; Yates, John R; Albar, Juan Pablo

    2018-04-06

    Mass-spectrometry-based proteomics has evolved into a high-throughput technology in which numerous large-scale data sets are generated from diverse analytical platforms. Furthermore, several scientific journals and funding agencies have emphasized the storage of proteomics data in public repositories to facilitate its evaluation, inspection, and reanalysis. (1) As a consequence, public proteomics data repositories are growing rapidly. However, tools are needed to integrate multiple proteomics data sets to compare different experimental features or to perform quality control analysis. Here, we present a new Java stand-alone tool, Proteomics Assay COMparator (PACOM), that is able to import, combine, and simultaneously compare numerous proteomics experiments to check the integrity of the proteomic data as well as verify data quality. With PACOM, the user can detect sources of error that may have been introduced in any step of a proteomics workflow and that influence the final results. Data sets can be easily compared and integrated, and data quality and reproducibility can be visually assessed through a rich set of graphical representations of proteomics data features as well as a wide variety of data filters. Its flexibility and easy-to-use interface make PACOM a unique tool for daily use in a proteomics laboratory. PACOM is available at https://github.com/smdb21/pacom.

  11. Aerodynamic Effects of Turbulence Intensity on a Variable-Speed Power-Turbine Blade with Large Incidence and Reynolds Number Variations

    Science.gov (United States)

    Flegel, Ashlie Brynn; Giel, Paul W.; Welch, Gerard E.

    2014-01-01

    The effects of inlet turbulence intensity on the aerodynamic performance of a variable-speed power-turbine blade are examined over large incidence and Reynolds number ranges. Both high- and low-turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The high-turbulence study extends this to LPT-relevant turbulence levels while perhaps sacrificing transitional flow effects. Downstream total pressure and exit angle data were acquired for ten incidence angles ranging from +15.8° to −51.0°. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12×10^5 to 2.12×10^6 and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hotwire located 0.415 axial chord upstream of the blade row. The inlet turbulence levels ranged from 0.25-0.4% for the low-Tu tests and 8-15% for the high-Tu study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated, as needed for computational inlet boundary conditions. Downstream flow field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7% axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction-surface separation that was evident at many of the low-Tu conditions has been eliminated. At the extreme positive and negative incidence angles, the data show substantial differences in the exit flow field. These differences are attributable both to the higher inlet Tu directly and to the thinner inlet endwall

  12. Asymptotic numbers: Pt.1

    International Nuclear Information System (INIS)

    Todorov, T.D.

    1980-01-01

    The set of asymptotic numbers A is introduced as a system of generalized numbers including the system of real numbers R, as well as infinitely small (infinitesimal) and infinitely large numbers. The detailed algebraic properties of A, which are unusual as compared with the known algebraic structures, are studied. It is proved that the set of asymptotic numbers A cannot be isomorphically embedded as a subspace in any group, ring or field, but some particular subsets of asymptotic numbers are shown to be groups, rings, and fields. The algebraic operations, their additive and multiplicative forms, and the algebraic properties are constructed in an appropriate way. It is shown that the asymptotic numbers give rise to a new type of generalized functions, quite analogous to the distributions of Schwartz, allowing, however, the operation of multiplication. A possible application of these functions to quantum theory is discussed.

  13. A large scale survey reveals that chromosomal copy-number alterations significantly affect gene modules involved in cancer initiation and progression

    Directory of Open Access Journals (Sweden)

    Cigudosa Juan C

    2011-05-01

    Full Text Available Abstract Background Recent observations point towards the existence of a large number of neighborhoods composed of functionally-related gene modules that lie together in the genome. This local component in the distribution of functionality across chromosomes probably affects the chromosomal architecture itself by limiting the possibilities in which genes can be arranged and distributed across the genome. As a direct consequence, it is presumable that diseases such as cancer, harboring DNA copy number alterations (CNAs), will have a symptomatology strongly dependent on modules of functionally-related genes rather than on a single "important" gene. Methods We carried out a systematic analysis of more than 140,000 observations of CNAs in cancers and searched for enrichment of gene functional modules associated with high frequencies of losses or gains. Results The analysis of CNAs in cancers clearly demonstrates the existence of a significant pattern of loss of gene modules functionally related to cancer initiation and progression, along with the amplification of modules of genes related to unspecific defense against xenobiotics (probably chemotherapeutic agents). With the extension of this analysis to an array-CGH dataset (glioblastomas from The Cancer Genome Atlas), we demonstrate the validity of this approach for investigating the functional impact of CNAs. Conclusions The presented results have promising clinical and therapeutic implications. Our findings also point directly to the necessity of adopting a function-centric, rather than gene-centric, view in the understanding of phenotypes or diseases harboring CNAs.
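The module-enrichment search described in this record is, at its core, an over-representation test. A minimal stdlib sketch using the hypergeometric upper tail follows; the gene counts in the test are invented for illustration and the study's actual statistics may differ.

```python
from math import comb

def hypergeom_enrichment_p(total_genes, module_size, selected, overlap):
    """Upper-tail hypergeometric P(X >= overlap): the chance of drawing at
    least `overlap` module genes when `selected` genes are sampled without
    replacement from `total_genes` (e.g., genes in frequently lost regions)."""
    p = 0.0
    upper = min(module_size, selected)
    for x in range(overlap, upper + 1):
        p += (comb(module_size, x)
              * comb(total_genes - module_size, selected - x)
              / comb(total_genes, selected))
    return p
```

A small p-value for a module flags it as over-represented among, say, recurrently deleted genes, which is the function-centric reading the authors advocate.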

  14. Three-Dimensional Interaction of a Large Number of Dense DEP Particles on a Plane Perpendicular to an AC Electrical Field

    Directory of Open Access Journals (Sweden)

    Chuanchuan Xie

    2017-01-01

    Full Text Available The interaction of dielectrophoresis (DEP) particles in an electric field has been observed in many experiments, and is known as the "particle chains phenomenon". However, studies in 3D models (spherical particles) are rarely reported due to their complexity and significant computational cost. In this paper, we employed the iterative dipole moment (IDM) method to study the 3D interaction of a large number of dense DEP particles randomly distributed on a plane perpendicular to a uniform alternating current (AC) electric field in a bounded or unbounded space. The numerical results indicated that the particles cannot move out of the initial plane. Similar particles (either all positive or all negative DEP particles) always repelled each other and did not form chains. Dissimilar particles (a mixture of positive and negative DEP particles) always attracted each other and formed particle chains consisting of alternately arranged positive and negative DEP particles. The particle chain patterns can vary widely depending on the initial particle distribution, the electric properties of the particles/fluid, the particle sizes, and the number of particles. It is also found that the particle chain patterns can be effectively manipulated by tuning the frequency of the AC field, and an almost uniform distribution of particles in a bounded plane chip can be achieved when all of the particles are similar, which may have potential applications in particle manipulation in microfluidics.

  15. Outcome of Large to Massive Rotator Cuff Tears Repaired With and Without Extracellular Matrix Augmentation: A Prospective Comparative Study.

    Science.gov (United States)

    Gilot, Gregory J; Alvarez-Pinzon, Andres M; Barcksdale, Leticia; Westerdahl, David; Krill, Michael; Peck, Evan

    2015-08-01

    To compare the results of arthroscopic repair of large to massive rotator cuff tears (RCTs) with or without augmentation using an extracellular matrix (ECM) graft, and to present ECM graft augmentation as a valuable surgical alternative used for biomechanical reinforcement in any RCT repair. We performed a prospective, blinded, single-center, comparative study of patients who underwent arthroscopic repair of a large to massive RCT with or without augmentation with an ECM graft. The primary outcome was assessed by the presence or absence of a retear of the previously repaired rotator cuff, as noted on ultrasound examination. The secondary outcomes were patient satisfaction evaluated preoperatively and postoperatively using the 12-item Short Form Health Survey, the American Shoulder and Elbow Surgeons shoulder outcome score, a visual analog scale score, the Western Ontario Rotator Cuff index, and a shoulder activity level survey. We enrolled 35 patients in the study: 20 in the ECM-augmented rotator cuff repair group and 15 in the control group. The follow-up period ranged from 22 to 26 months, with a mean of 24.9 months. There was a significant difference between the groups in the incidence of retears: 26% (4 retears) in the control group versus 10% (2 retears) in the ECM graft group (P = .0483). The mean pain level decreased from 6.9 to 4.1 in the control group and from 6.8 to 0.9 in the ECM graft group (P = .024). The American Shoulder and Elbow Surgeons score improved from 62.1 to 72.6 points in the control group and from 63.8 to 88.9 points (P = .02) in the treatment group. The mean Short Form 12 scores improved in the 2 groups, with a statistically significant difference favoring graft augmentation (P = .031), and correspondingly, the Western Ontario Rotator Cuff index scores improved in both arms, favoring the treatment group (P = .0412). The use of ECM for augmentation of arthroscopic repairs of large to massive RCTs reduces the incidence of retears.

  16. A robust variant of block Jacobi-Davidson for extracting a large number of eigenpairs: Application to grid-based real-space density functional theory

    Science.gov (United States)

    Lee, M.; Leiter, K.; Eisner, C.; Breuer, A.; Wang, X.

    2017-09-01

    In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases, due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in computational effort relative to most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev filter subspace (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems, with up to 10× fewer matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold), where the wanted spectrum is quite narrow compared to the full spectrum, we observe a 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.
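One structural idea from this record, orthogonalizing a whole block against locked eigenvectors once per outer sweep and then solving a small projected problem, can be sketched with NumPy. This is a generic block orthogonalization plus Rayleigh-Ritz fragment under the assumption of a dense symmetric matrix, not the authors' full J-D algorithm with correction-equation solves.

```python
import numpy as np

def ortho_block(V, locked=None):
    """Orthogonalize the columns of block V against the (assumed orthonormal)
    locked eigenvectors, then among themselves - done once per outer sweep
    rather than inside the inner solver loop."""
    if locked is not None and locked.size:
        V = V - locked @ (locked.T @ V)      # project out the locked subspace
    Q, _ = np.linalg.qr(V)                   # orthonormalize the remainder
    return Q

def rayleigh_ritz(A, V, nev):
    """Project symmetric A onto span(V); return the `nev` smallest Ritz pairs."""
    H = V.T @ A @ V
    w, S = np.linalg.eigh(H)                 # eigh returns ascending eigenvalues
    return w[:nev], V @ S[:, :nev]
```

Keeping the working subspace to the current block plus the newest corrections, as the abstract describes, keeps `H` small and makes the dominant cost the blocked matrix-vector products `A @ V`.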

  17. Changes in the number of nesting pairs and breeding success of the White Stork Ciconia ciconia in a large city and a neighbouring rural area in South-West Poland

    Directory of Open Access Journals (Sweden)

    Kopij Grzegorz

    2017-12-01

    Full Text Available During the years 1994–2009, the number of White Stork pairs breeding in the city of Wrocław (293 km2) fluctuated between 5 pairs in 1999 and 19 pairs in 2004. Most nests were clumped in two sites in the Odra river valley. Two nests were located only ca. 1 km from the city hall. The fluctuations in numbers can be linked to the availability of feeding grounds and to weather. In years when grass was mowed in the Odra valley, the number of White Storks was higher than in years when the grass was left unattended. Overall, the mean number of fledglings per successful pair during the years 1995–2009 was slightly higher in the rural than in the urban area. Contrary to expectation, the mean number of fledglings per successful pair was highest in the year of highest population density. In two rural counties adjacent to Wrocław, the number of breeding pairs was similar to that in the city in 1994/95 (15 vs. 13 pairs). However, in 2004 the number of breeding pairs in the city almost doubled compared to that in the neighboring counties (10 vs. 19 pairs). After a sharp decline between 2004 and 2008, populations in both areas were similar in 2009 (5 vs. 4 pairs), but much lower than in 1994–1995. Wrocław is probably the only large city (>100,000 people) in Poland where the White Stork has developed a sizeable, although fluctuating, breeding population. One of the most powerful roles city-nesting White Storks may play is engaging citizens directly with nature, thereby facilitating environmental education and awareness.

  18. Advances in a framework to compare bio-dosimetry methods for triage in large-scale radiation events

    International Nuclear Information System (INIS)

    Flood, Ann Barry; Boyle, Holly K.; Du, Gaixin; Demidenko, Eugene; Williams, Benjamin B.; Swartz, Harold M.; Nicolalde, Roberto J.

    2014-01-01

    Planning and preparation for a large-scale nuclear event would be advanced by assessing the applicability of potentially available bio-dosimetry methods. Using an updated comparative framework, the performance of six bio-dosimetry methods was compared for five different population sizes (100-1 000 000) and two rates for initiating processing of the marker (15 or 15 000 people per hour), with four additional time windows. These updated factors are extrinsic to the bio-dosimetry methods themselves but have direct effects on each method's ability to begin processing individuals and the size of the population that can be accommodated. The results indicate that increased population size, along with severely compromised infrastructure, increases the time needed to triage, which decreases the usefulness of many time-intensive dosimetry methods. This framework and model for evaluating bio-dosimetry provide important information for policy-makers and response planners to facilitate evaluation of each method and should advance coordination of these methods into effective triage plans. (authors)
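The extrinsic factors the framework varies (population size and processing initiation rate) translate into triage time by simple arithmetic. The sketch below is our own back-of-envelope illustration, not the authors' model; the function name and the optional start delay are hypothetical:

```python
# Lower-bound triage time for a population N processed at rate r (people/hour),
# ignoring per-method dose-assessment limits and infrastructure effects.
def hours_to_triage(population: int, rate_per_hour: float, start_delay_h: float = 0.0) -> float:
    return start_delay_h + population / rate_per_hour

# The two initiation rates compared in the study differ by three orders of
# magnitude, which dominates feasibility at large population sizes:
slow = hours_to_triage(1_000_000, 15)        # ~66,667 h, i.e. several years
fast = hours_to_triage(1_000_000, 15_000)    # ~67 h, i.e. a few days
assert abs(slow / fast - 1000) < 1e-9
```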

  19. Quiescent Galaxies in the 3D-HST Survey: Spectroscopic Confirmation of a Large Number of Galaxies with Relatively Old Stellar Populations at z ~ 2

    Science.gov (United States)

    Whitaker, Katherine E.; van Dokkum, Pieter G.; Brammer, Gabriel; Momcheva, Ivelina G.; Skelton, Rosalind; Franx, Marijn; Kriek, Mariska; Labbé, Ivo; Fumagalli, Mattia; Lundgren, Britt F.; Nelson, Erica J.; Patel, Shannon G.; Rix, Hans-Walter

    2013-06-01

    Quiescent galaxies at z ~ 2 have been identified in large numbers based on rest-frame colors, but only a small number of these galaxies have been spectroscopically confirmed to have rest-frame optical spectra showing either strong Balmer or metal absorption lines. Here, we median stack the rest-frame optical spectra for 171 photometrically quiescent galaxies at 1.4 < z < 2.2 from the 3D-HST grism survey. In addition to Hβ (λ4861 Å), we unambiguously identify metal absorption lines in the stacked spectrum, including the G band (λ4304 Å), Mg I (λ5175 Å), and Na I (λ5894 Å). This finding demonstrates that galaxies with relatively old stellar populations already existed when the universe was ~3 Gyr old, and that rest-frame color selection techniques can efficiently select them. We find an average age of 1.3^{+0.1}_{-0.3} Gyr when fitting a simple stellar population to the entire stack. We confirm our previous result from medium-band photometry that the stellar age varies with the colors of quiescent galaxies: the reddest 80% of galaxies are dominated by metal lines and have a relatively old mean age of 1.6^{+0.5}_{-0.4} Gyr, whereas the bluest (and brightest) galaxies have strong Balmer lines and a spectroscopic age of 0.9^{+0.2}_{-0.1} Gyr. Although the spectrum is dominated by an evolved stellar population, we also find [O III] and Hβ emission. Interestingly, this emission is more centrally concentrated than the continuum, with L([O III]) = (1.7 ± 0.3) × 10^40 erg s^-1, indicating residual central star formation or nuclear activity.

  20. Comparing Outcomes of Coronary Artery Bypass Grafting Among Large Teaching and Urban Hospitals in China and the United States.

    Science.gov (United States)

    Zheng, Zhe; Zhang, Heng; Yuan, Xin; Rao, Chenfei; Zhao, Yan; Wang, Yun; Normand, Sharon-Lise; Krumholz, Harlan M; Hu, Shengshou

    2017-06-01

    Coronary artery disease is prevalent in China, with concomitant increases in the volume of coronary artery bypass grafting (CABG). The present study aims to compare CABG-related outcomes between China and the United States among large teaching and urban hospitals. Observational analysis of patients aged ≥18 years, discharged from acute-care, large teaching and urban hospitals in China and the United States after hospitalization for an isolated CABG surgery. Data were obtained from the Chinese Cardiac Surgery Registry in China and the National Inpatient Sample in the United States. Analysis was stratified by 2 periods: 2007, 2008, and 2010; and 2011 to 2013. The primary outcome was in-hospital mortality, and the secondary outcome was length of stay. The sample included 51 408 patients: 32 040 from 77 hospitals in the China-CABG group and 19 368 from 303 hospitals in the US-CABG group. In the 2007, 2008, and 2010 period, for both all ages and age ≥65 years, the China-CABG group had higher mortality than the US-CABG group (1.91% versus 1.58%, P = 0.059; and 3.12% versus 2.20%, P = 0.004) and significantly higher age-, sex-, and comorbidity-adjusted odds of death (odds ratio, 1.58; 95% confidence interval, 1.22-2.04; and odds ratio, 1.73; 95% confidence interval, 1.24-2.40). There were no significant mortality differences in the 2011 to 2013 period. For preoperative, postoperative, and total hospital stay, respectively, the median (interquartile range) length of stay across the entire study period between the China-CABG and US-CABG groups was 9 (8) versus 1 (3), 9 (6) versus 6 (3), and 20 (12) versus 7 (5) days (all P < 0.001). In recent years, in-hospital mortality after CABG was comparable between China and the United States. The longer length of stay in China may represent an opportunity for improvement. © 2017 The Authors.

  1. The application of the central limit theorem and the law of large numbers to facial soft tissue depths: T-Table robustness and trends since 2008.

    Science.gov (United States)

    Stephan, Carl N

    2014-03-01

    By pooling independent study means (x¯), the T-Tables use the central limit theorem and law of large numbers to average out study-specific sampling bias and instrument errors and, in turn, triangulate upon human population means (μ). Since their first publication in 2008, new data from >2660 adults have been collected (c.30% of the original sample) making a review of the T-Table's robustness timely. Updated grand means show that the new data have negligible impact on the previously published statistics: maximum change = 1.7 mm at gonion; and ≤1 mm at 93% of all landmarks measured. This confirms the utility of the 2008 T-Table as a proxy to soft tissue depth population means and, together with updated sample sizes (8851 individuals at pogonion), earmarks the 2013 T-Table as the premier mean facial soft tissue depth standard for craniofacial identification casework. The utility of the T-Table, in comparison with shorths and 75-shormaxes, is also discussed. © 2013 American Academy of Forensic Sciences.
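The pooling step behind the T-Tables is simple to state: a sample-size-weighted mean of independent study means converges on the population mean as data accumulate, averaging out study-specific bias. A minimal sketch with hypothetical soft-tissue depth values (illustrative numbers, not the published data):

```python
# Weighted grand mean of independent study means: by the law of large numbers,
# this converges on the population mean as samples accumulate.
def grand_mean(study_means, study_sizes):
    total_n = sum(study_sizes)
    return sum(m * n for m, n in zip(study_means, study_sizes)) / total_n

# Hypothetical depths (mm) at one landmark reported by three studies:
means, sizes = [5.2, 5.6, 5.4], [120, 300, 80]
pooled = grand_mean(means, sizes)   # (624 + 1680 + 432) / 500 = 5.472
assert abs(pooled - 5.472) < 1e-9
```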

  2. Number and cost of claims linked to minor cervical trauma in Europe: results from the comparative study by CEA, AREDOC and CEREDOC.

    Science.gov (United States)

    Chappuis, Guy; Soltermann, Bruno

    2008-10-01

    Comparative epidemiological study of minor cervical spine trauma (frequently referred to as whiplash injury) based on data from the Comité Européen des Assurances (CEA) gathered in ten European countries. To determine the incidence and expenditure (e.g., for assessment, treatment or claims) for minor cervical spine injury in the participating countries. Controversy still surrounds the basis on which symptoms following minor cervical spine trauma may develop. In particular, there is considerable disagreement with regard to a possible contribution of psychosocial factors in determining outcome. The role of compensation is also a source of constant debate. The method followed here is the comparison of data from different areas of interest (e.g., incidence of minor cervical spine trauma, percentage of minor cervical spine trauma in relation to the incidence of bodily trauma, costs for assessment or claims) from ten European countries. Considerable differences exist regarding the incidence of minor cervical spine trauma and related costs in the participating countries. France and Finland have the lowest and Great Britain the highest incidence of minor cervical spine trauma. The number of claims following minor cervical spine trauma in Switzerland is around the European average; however, Switzerland has the highest expenditure per claim, at an average cost of 35,000.00 euros compared to the European average of 9,000.00 euros. Furthermore, the mandatory accident insurance statistics in Switzerland show very large differences between German-speaking and French- or Italian-speaking parts of the country. In the latter, the costs for minor cervical spine trauma more than doubled in the period from 1990 to 2002, whereas in the German-speaking part they rose by a factor of five. All the countries participating in the study have a high standard of medical care. The differences in claims frequency and costs must therefore reflect a social phenomenon based on the

  3. A Comparative Assessment of Epidemiologically Different Cutaneous Leishmaniasis Outbreaks in Madrid, Spain and Tolima, Colombia: An Estimation of the Reproduction Number via a Mathematical Model

    Directory of Open Access Journals (Sweden)

    Anuj Mubayi

    2018-04-01

    Full Text Available Leishmaniasis is a neglected tropical disease caused by the Leishmania parasite and transmitted by the Phlebotominae subfamily of sandflies, which infects humans and other mammals. Clinical manifestations of the disease include cutaneous leishmaniasis (CL), mucocutaneous leishmaniasis (MCL) and visceral leishmaniasis (VL), with the majority (more than three-quarters) of worldwide cases being CL. There are a number of risk factors for CL, such as the presence of multiple reservoirs, the movement of individuals, inequality, and social determinants of health. However, studies related to the role of these factors in the dynamics of CL have been limited. In this work, we (i) develop and analyze a vector-borne epidemic model to study the dynamics of CL in two ecologically distinct CL-affected regions, Madrid, Spain and Tolima, Colombia; (ii) derive three different methods for the estimation of model parameters by reducing the dimension of the systems; (iii) estimate reproduction numbers for the 2010 outbreak in Madrid and the 2016 outbreak in Tolima; and (iv) compare the transmission potential of the two economically different regions and provide different epidemiological metrics that can be derived (and used) for evaluating an outbreak, once R0 is known and additional data are available. On average, Spain has reported only a few hundred CL cases annually, but in the course of the outbreak during 2009–2012, a much higher number of cases than expected was reported, and in the single city of Madrid at that. Cases in humans were accompanied by a sharp increase in infections among domestic dogs, the natural reservoir of CL. On the other hand, CL has reemerged in Colombia primarily during the last decade, because of the frequent movement of military personnel to domestic regions from forested areas, where they have increased exposure to vectors. In 2016, Tolima saw an unexpectedly high number of cases leading to two successive outbreaks. On comparing, we
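For a vector-borne disease like CL, the reproduction number is commonly estimated from a Ross-Macdonald-type model. The sketch below shows that standard form with hypothetical parameter values; it is not the paper's fitted model or its estimates:

```python
import math

# Ross-Macdonald-style R0 for a vector-borne disease: the squared threshold
# couples the host-to-vector and vector-to-host transmission cycles.
# R0 = sqrt( m * a^2 * b * c / (r * mu) ), all parameter names hypothetical:
#   a  = vector biting rate, b/c = transmission probabilities per bite,
#   m  = vectors per host, r = host recovery rate, mu = vector death rate.
def r0_vector_borne(bite_rate, p_hv, p_vh, vectors_per_host,
                    human_recovery, vector_death):
    return math.sqrt(
        vectors_per_host * bite_rate**2 * p_hv * p_vh
        / (human_recovery * vector_death)
    )

R0 = r0_vector_borne(bite_rate=0.3, p_hv=0.1, p_vh=0.2,
                     vectors_per_host=10, human_recovery=0.02, vector_death=0.1)
assert R0 > 1  # outbreak threshold: case counts grow when R0 exceeds 1
```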

  4. Accuracy of Skin Cancer Diagnosis by Physician Assistants Compared With Dermatologists in a Large Health Care System.

    Science.gov (United States)

    Anderson, Alyce M; Matsumoto, Martha; Saul, Melissa I; Secrest, Aaron M; Ferris, Laura K

    2018-05-01

    Physician assistants (PAs) are increasingly used in dermatology practices to diagnose skin cancers, although, to date, their diagnostic accuracy compared with board-certified dermatologists has not been well studied. To compare diagnostic accuracy for skin cancer of PAs with that of dermatologists. Medical record review of 33 647 skin cancer screening examinations in 20 270 unique patients who underwent screening at University of Pittsburgh Medical Center-affiliated dermatology offices from January 1, 2011, to December 31, 2015. International Classification of Diseases, Ninth Revision code V76.43 and International Classification of Diseases and Related Health Problems, Tenth Revision code Z12.83 were used to identify pathology reports from skin cancer screening examinations by dermatologists and PAs. Examination performed by a PA or dermatologist. Number needed to biopsy (NNB) to diagnose skin cancer (nonmelanoma, invasive melanoma, or in situ melanoma). Of 20 270 unique patients, 12 722 (62.8%) were female, mean (SD) age at the first visit was 52.7 (17.4) years, and 19 515 patients (96.3%) self-reported their race/ethnicity as non-Hispanic white. To diagnose 1 case of skin cancer, the NNB was 3.9 for PAs and 3.3 for dermatologists (P < .001). Per diagnosed melanoma, the NNB was 39.4 for PAs and 25.4 for dermatologists (P = .007). Patients screened by a PA were significantly less likely than those screened by a dermatologist to be diagnosed with melanoma in situ (1.1% vs 1.8% of visits, P = .02), but differences were not significant for invasive melanoma (0.7% vs 0.8% of visits, P = .83) or nonmelanoma skin cancer (6.1% vs 6.1% of visits, P = .98). Compared with dermatologists, PAs performed more skin biopsies per case of skin cancer diagnosed and diagnosed fewer melanomas in situ, suggesting that the diagnostic accuracy of PAs may be lower than that of dermatologists. Although the availability of PAs may help increase access to care and reduce
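The study's headline metric, number needed to biopsy (NNB), is simply the biopsies performed per skin cancer diagnosed. A sketch reproducing the reported ratios (the underlying counts here are illustrative back-calculations, not the study's raw data):

```python
# Number needed to biopsy: biopsies performed per skin cancer diagnosed.
def nnb(biopsies: int, cancers_diagnosed: int) -> float:
    return biopsies / cancers_diagnosed

# Hypothetical counts chosen to reproduce the reported ratios:
assert round(nnb(390, 100), 1) == 3.9   # PAs: 3.9 biopsies per cancer found
assert round(nnb(330, 100), 1) == 3.3   # dermatologists: 3.3
```

A lower NNB means fewer benign lesions biopsied per cancer found, which is why the paper reads the PA/dermatologist gap as a difference in diagnostic accuracy.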

  5. Comparative Genomics of Chrysochromulina Ericina Virus and Other Microalga-Infecting Large DNA Viruses Highlights Their Intricate Evolutionary Relationship with the Established Mimiviridae Family.

    Science.gov (United States)

    Gallot-Lavallée, Lucie; Blanc, Guillaume; Claverie, Jean-Michel

    2017-07-15

    Chrysochromulina ericina virus CeV-01B (CeV) was isolated from Norwegian coastal waters in 1998. Its icosahedral particle is 160 nm in diameter and encloses a 474-kb double-stranded DNA (dsDNA) genome. This virus, although infecting a microalga (the haptophyceae Haptolina ericina , formerly Chrysochromulina ericina ), is phylogenetically related to members of the Mimiviridae family, initially established with the acanthamoeba-infecting mimivirus and megavirus as prototypes. This family was later split into two genera ( Mimivirus and Cafeteriavirus ) following the characterization of a virus infecting the heterotrophic stramenopile Cafeteria roenbergensis (CroV). CeV, as well as two of its close relatives, which infect the unicellular photosynthetic eukaryotes Phaeocystis globosa (Phaeocystis globosa virus [PgV]) and Aureococcus anophagefferens (Aureococcus anophagefferens virus [AaV]), are currently unclassified by the International Committee on Viral Taxonomy (ICTV). The detailed comparative analysis of the CeV genome presented here confirms the phylogenetic affinity of this emerging group of microalga-infecting viruses with the Mimiviridae but argues in favor of their classification inside a distinct clade within the family. Although CeV, PgV, and AaV share more common features among them than with the larger Mimiviridae , they also exhibit a large complement of unique genes, attesting to their complex evolutionary history. We identified several gene fusion events and cases of convergent evolution involving independent lateral gene acquisitions. Finally, CeV possesses an unusual number of inteins, some of which are closely related despite being inserted in nonhomologous genes. This appears to contradict the paradigm of allele-specific inteins and suggests that the Mimiviridae are especially efficient in spreading inteins while enlarging their repertoire of homing genes. IMPORTANCE Although it infects the microalga Chrysochromulina ericina , CeV is more closely

  6. Identification of rare recurrent copy number variants in high-risk autism families and their prevalence in a large ASD population.

    Directory of Open Access Journals (Sweden)

    Nori Matsunami

    Full Text Available Structural variation is thought to play a major etiological role in the development of autism spectrum disorders (ASDs), and numerous studies documenting the relevance of copy number variants (CNVs) in ASD have been published since 2006. To determine if large ASD families harbor high-impact CNVs that may have broader impact in the general ASD population, we used the Affymetrix genome-wide human SNP array 6.0 to identify 153 putative autism-specific CNVs present in 55 individuals with ASD from 9 multiplex ASD pedigrees. To evaluate the actual prevalence of these CNVs, as well as 185 CNVs reportedly associated with ASD from published studies, many of which are insufficiently powered, we designed a custom Illumina array and used it to interrogate these CNVs in 3,000 ASD cases and 6,000 controls. Additional single nucleotide variants (SNVs) on the array identified 25 CNVs that we did not detect in our family studies at the standard SNP array resolution. After molecular validation, our results demonstrated that 15 CNVs identified in high-risk ASD families were also found in two or more ASD cases with odds ratios greater than 2.0, strengthening their support as ASD risk variants. In addition, of the 25 CNVs identified using SNV probes on our custom array, 9 also had odds ratios greater than 2.0, suggesting that these CNVs are also ASD risk variants. Eighteen of the validated CNVs have not been reported previously in individuals with ASD and three have only been observed once. Finally, we confirmed the association of 31 of the 185 published ASD-associated CNVs in our dataset with odds ratios greater than 2.0, suggesting they may be of clinical relevance in the evaluation of children with ASDs. Taken together, these data provide strong support for the existence and application of high-impact CNVs in the clinical genetic evaluation of children with ASD.
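The screening criterion in this record is a case-control odds ratio above 2.0. A minimal sketch of that computation, using the study's 3,000-case / 6,000-control design but with hypothetical carrier counts (not the paper's data):

```python
# Case-control odds ratio for a CNV from a 2x2 table:
#   OR = (a/b) / (c/d) = a*d / (b*c),
# where a of n_cases carry the CNV and c of n_controls carry it.
def odds_ratio(case_carriers, n_cases, control_carriers, n_controls):
    a, b = case_carriers, n_cases - case_carriers
    c, d = control_carriers, n_controls - control_carriers
    return (a * d) / (b * c)

# Hypothetical CNV seen in 12 of 3,000 cases but only 6 of 6,000 controls:
OR = odds_ratio(12, 3000, 6, 6000)
assert OR > 2.0  # the study's threshold for calling a CNV an ASD risk variant
```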

  7. Pyrosequencing-based comparative genome analysis of the nosocomial pathogen Enterococcus faecium and identification of a large transferable pathogenicity island

    Directory of Open Access Journals (Sweden)

    Bonten Marc JM

    2010-04-01

    Full Text Available Abstract Background The Gram-positive bacterium Enterococcus faecium is an important cause of nosocomial infections in immunocompromised patients. Results We present a pyrosequencing-based comparative genome analysis of seven E. faecium strains that were isolated from various sources. In the genomes of clinical isolates several antibiotic resistance genes were identified, including the vanA transposon that confers resistance to vancomycin in two strains. A functional comparison between E. faecium and the related opportunistic pathogen E. faecalis, based on differences in the presence of protein families, revealed divergence in plant carbohydrate metabolic pathways and oxidative stress defense mechanisms. The E. faecium pan-genome was estimated to be essentially unlimited in size, indicating that E. faecium can efficiently acquire and incorporate exogenous DNA in its gene pool. One of the most prominent sources of genomic diversity consists of bacteriophages that have integrated in the genome. The CRISPR-Cas system, which contributes to immunity against bacteriophage infection in prokaryotes, is not present in the sequenced strains. Three sequenced isolates carry the esp gene, which is involved in urinary tract infections and biofilm formation. The esp gene is located on a large pathogenicity island (PAI), which is between 64 and 104 kb in size. Conjugation experiments showed that the entire esp PAI can be transferred horizontally and inserts in a site-specific manner. Conclusions Genes involved in environmental persistence, colonization and virulence can easily be acquired by E. faecium. This will make the development of successful treatment strategies targeted against this organism a challenge for years to come.

  8. Comparative study of large scale simulation of underground explosions inalluvium and in fractured granite using stochastic characterization

    Science.gov (United States)

    Vorobiev, O.; Ezzedine, S. M.; Antoun, T.; Glenn, L.

    2014-12-01

    This work describes a methodology used for large scale modeling of wave propagation from underground explosions conducted at the Nevada Test Site (NTS) in two different geological settings: fractured granitic rock mass and alluvium deposits. We show that the discrete nature of rock masses as well as the spatial variability of the fabric of alluvium is very important to understand ground motions induced by underground explosions. In order to build a credible conceptual model of the subsurface, we integrated the geological, geomechanical and geophysical characterizations conducted during recent tests at the NTS as well as historical data from the characterization during the underground nuclear tests conducted at the NTS. Because detailed site characterization is limited, expensive and, in some instances, impossible, we have numerically investigated the effects of the characterization gaps on the overall response of the system. We performed several computational studies to identify the key important geologic features specific to fractured media, mainly the joints, and those specific to alluvium porous media, mainly the spatial variability of geological alluvium facies characterized by their variances and their integral scales. We have also explored features common to both geological environments, such as saturation and topography, and assessed which characteristics affect the ground motion the most in the near-field and in the far-field. Stochastic representations of these features based on the field characterizations have been implemented in the Geodyn and GeodynL hydrocodes. Both codes were used to guide site characterization efforts in order to provide the essential data to the modeling community. We validate our computational results by comparing the measured and computed ground motion at various ranges. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  9. The large-scale blast score ratio (LS-BSR) pipeline: a method to rapidly compare genetic content between bacterial genomes

    Directory of Open Access Journals (Sweden)

    Jason W. Sahl

    2014-04-01

    Full Text Available Background. As whole genome sequence data from bacterial isolates become cheaper to generate, computational methods are needed to correlate sequence data with biological observations. Here we present the large-scale BLAST score ratio (LS-BSR) pipeline, which rapidly compares the genetic content of hundreds to thousands of bacterial genomes and returns a matrix that describes the relatedness of all coding sequences (CDSs) in all genomes surveyed. This matrix can be easily parsed in order to identify genetic relationships between bacterial genomes. Although pipelines have been published that group peptides by sequence similarity, no other software performs the rapid, large-scale, full-genome comparative analyses carried out by LS-BSR. Results. To demonstrate the utility of the method, the LS-BSR pipeline was tested on 96 Escherichia coli and Shigella genomes; the pipeline ran in 163 min using 16 processors, a greater than 7-fold speedup compared to using a single processor. The BSR values for each CDS, which indicate a relative level of relatedness, were then mapped to each genome on an independent core genome single nucleotide polymorphism (SNP) based phylogeny. Comparisons were then used to identify clade-specific CDS markers and validate the LS-BSR pipeline based on molecular markers that delineate between classical E. coli pathogenic variant (pathovar) designations. Scalability tests demonstrated that the LS-BSR pipeline can process 1,000 E. coli genomes in 27–57 h, depending upon the alignment method, using 16 processors. Conclusions. LS-BSR is an open-source, parallel implementation of the BSR algorithm, enabling rapid comparison of the genetic content of large numbers of genomes. The results of the pipeline can be used to identify specific markers between user-defined phylogenetic groups, and to identify the loss and/or acquisition of genetic information between bacterial isolates.
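Once bit scores are in hand, the BLAST score ratio at the heart of the pipeline is a one-line computation: the bit score of a CDS aligned against a target genome, divided by the bit score of the CDS aligned against itself. A sketch (the function and parameter names are ours, not the pipeline's API):

```python
# BLAST score ratio (BSR): normalizes a query-vs-target bit score by the
# query's self-alignment bit score, giving a value in [0, 1] per CDS/genome.
def blast_score_ratio(query_vs_target_bits: float, query_vs_self_bits: float) -> float:
    return query_vs_target_bits / query_vs_self_bits

# A BSR near 1.0 indicates a near-identical homolog in the target genome;
# a BSR near 0 indicates the CDS is effectively absent.
assert blast_score_ratio(450.0, 500.0) == 0.9
assert blast_score_ratio(0.0, 500.0) == 0.0
```

Stacking these values for every CDS against every genome yields the relatedness matrix the abstract describes.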
Taxa-specific genetic markers can then be translated

  10. Managing Large Multidimensional Array Hydrologic Datasets : A Case Study Comparing NetCDF and SciDB

    NARCIS (Netherlands)

    Liu, H.; van Oosterom, P.J.M.; Hu, C.; Wang, Wen

    2016-01-01

    Management of large hydrologic datasets including storage, structuring, indexing and query is one of the crucial challenges in the era of big data. This research originates from a specific data query problem: time series extraction at specific locations takes a long time when a large

  11. Location and characterization of the warfarin binding site of human serum albumin: a comparative study of two large fragments

    NARCIS (Netherlands)

    Bos, O.J.M.; Remijn, J.P.M.; Fischer, M.J.E.; Wilting, J.; Janssen, L.H.M.

    1988-01-01

    The warfarin binding behaviour of a large tryptic fragment (residues 198–585 which comprise domains two and three) and of a large peptic fragment (residues 1–387 which comprise domains one and two) of human serum albumin has been studied by circular dichroism and equilibrium dialysis in order to

  12. On the calculation of line strengths, oscillator strengths and lifetimes for very large principal quantum numbers in hydrogenic atoms and ions by the McLean–Watson formula

    International Nuclear Information System (INIS)

    Hey, J D

    2014-01-01

    As a sequel to an earlier study (Hey 2009 J. Phys. B: At. Mol. Opt. Phys. 42 125701), we consider further the application of the line strength formula derived by Watson (2006 J. Phys. B: At. Mol. Opt. Phys. 39 L291) to transitions arising from states of very high principal quantum number in hydrogenic atoms and ions (Rydberg–Rydberg transitions, n > 1000). It is shown how apparent difficulties associated with the use of recurrence relations, derived (Hey 2006 J. Phys. B: At. Mol. Opt. Phys. 39 2641) by the ladder operator technique of Infeld and Hull (1951 Rev. Mod. Phys. 23 21), may be eliminated by a very simple numerical device, whereby this method may readily be applied up to n ≈ 10 000. Beyond this range, programming of the method may entail greater care and complexity. The use of the numerically efficient McLean–Watson formula for such cases is again illustrated by the determination of radiative lifetimes and comparison of present results with those from an asymptotic formula. The question of the influence on the results of the omission or inclusion of fine structure is considered by comparison with calculations based on the standard Condon–Shortley line strength formula. Interest in this work on the radial matrix elements for large n and n′ is related to measurements of radio recombination lines from tenuous space plasmas, e.g. Stepkin et al (2007 Mon. Not. R. Astron. Soc. 374 852) and Bell et al (2011 Astrophys. Space Sci. 333 377), to the calculation of electron impact broadening parameters for such spectra (Watson 2006 J. Phys. B: At. Mol. Opt. Phys. 39 1889) and comparison with other theoretical methods (Peach 2014 Adv. Space Res. in press), to the modelling of physical processes in H II regions (Roshi et al 2012 Astrophys. J. 749 49), and to the evaluation of bound–bound transitions from states of high n during primordial cosmological recombination (Grin and Hirata 2010 Phys. Rev. D 81 083005; Ali-Haïmoud and Hirata 2010 Phys. Rev. D 82 063521)

  13. Comparing the Effectiveness of Self-Paced and Collaborative Frame-of-Reference Training on Rater Accuracy in a Large-Scale Writing Assessment

    Science.gov (United States)

    Raczynski, Kevin R.; Cohen, Allan S.; Engelhard, George, Jr.; Lu, Zhenqiu

    2015-01-01

    There is a large body of research on the effectiveness of rater training methods in the industrial and organizational psychology literature. Less has been reported in the measurement literature on large-scale writing assessments. This study compared the effectiveness of two widely used rater training methods--self-paced and collaborative…

  14. SVA retrotransposon insertion-associated deletion represents a novel mutational mechanism underlying large genomic copy number changes with non-recurrent breakpoints

    NARCIS (Netherlands)

    J. Vogt (Julia); K. Bengesser (Kathrin); K.B.M. Claes (Kathleen B.M.); K. Wimmer (Katharina); V.-F. Mautner (Victor-Felix); R. van Minkelen (Rick); E. Legius (Eric); H. Brems (Hilde); M. Upadhyaya (Meena); J. Högel (Josef); C. Lazaro (Conxi); T. Rosenbaum (Thorsten); S. Bammert (Simone); L. Messiaen (Ludwine); D.N. Cooper (David); H. Kehrer-Sawatzki (Hildegard)

    2014-01-01

    Background: Genomic disorders are caused by copy number changes that may exhibit recurrent breakpoints processed by nonallelic homologous recombination. However, region-specific disease-associated copy number changes have also been observed which exhibit non-recurrent breakpoints. The

  15. Rayleigh- and Prandtl-number dependence of the large-scale flow-structure in weakly-rotating turbulent thermal convection

    Science.gov (United States)

    Weiss, Stephan; Wei, Ping; Ahlers, Guenter

    2015-11-01

    Turbulent thermal convection under rotation shows a remarkable variety of different flow states. The Nusselt number (Nu) at slow rotation rates (expressed as the dimensionless inverse Rossby number 1/Ro), for example, is not a monotonic function of 1/Ro. Different 1/Ro-ranges can be observed with different slopes ∂Nu / ∂ (1/Ro). Some of these ranges are connected by sharp transitions where ∂Nu / ∂ (1/Ro) changes discontinuously. We investigate different regimes in cylindrical samples of aspect ratio Γ = 1 by measuring temperatures at the sidewall of the sample for various Prandtl numbers above 3. This work was supported by the Deutsche Forschungsgemeinschaft.

  16. Comparative analysis of the number of neurofilaments in rat sciatic nerve undergoing neuropraxia treated by low-level laser and therapeutic ultrasound

    International Nuclear Information System (INIS)

    Matamala, F; Cornejo, R; Paredes, M; Farfan, E; Garrido, O. S; Alves, N

    2014-01-01

    Therapy by low-level laser (LLL) or ultrasound (US) is commonly used as treatment after nerve crush. The aim of this study was to determine the effectiveness of such treatments in repairing the neuronal cytoskeleton by evaluating the variation in the number of neurofilaments. An experimental design was performed involving 30 rats divided into 6 groups: 1 - healthy control; 2 - injured control; 3 - irradiated by LLL 2 J/cm2; 4 - irradiated by LLL 10 J/cm2; 5 - irradiated by US 0.5 W/cm2; and 6 - irradiated by US 1 W/cm2. With the exception of group 1, all specimens were anesthetized and underwent right sciatic nerve compression using 40 N pressure for 45 seconds. Twenty-four hours after compression, irradiation by LLL and US was started according to protocol. We found that the increase in the number of neurofilaments was related to the applied dose of LLL and US. The average number of neurofilaments per 0.25 mm2 obtained in each group was: 1 - 128; 2 - 100; 3 - 156; 4 - 140; 5 - 100; 6 - 148. We concluded that the application of LLL and therapeutic US increases the number of neurofilaments in rat sciatic nerve undergoing neuropraxia, with LLL being more effective than US. Furthermore, we concluded that the effectiveness of therapies to induce regeneration of the injured nerve is related to the type of protocol used, demonstrating the need to establish an adequate radiation dose in order to obtain the best therapeutic response and thus achieve successful treatment.

  17. Comparative study on the influence of depth, number and arrangement of dimples on the flow and heat transfer characteristics at turbulent flow regimes

    Science.gov (United States)

    Nazari, Saeed; Zamani, Mahdi; Moshizi, Sajad A.

    2018-03-01

    The ensuing study is dedicated to a series of numerical investigations concerning the effects of various geometric parameters of dimpled plates on the flow structure and heat transfer performance in a rectangular duct, compared to a smooth plate. These parameters are the arrangement, number and depth of the dimples. Two widely used patterns, staggered and square, in addition to a triangular arrangement, and three dimple depths (Δ = δ/d = 0.25, 0.375 and 0.5) have been chosen for this study. All studies have been conducted at three different Reynolds numbers, Re = 25,000, 50,000 and 100,000. In order to capture the flow structures in the vicinity of the dimples and the contributing phenomena related to boundary layer interactions, fully structured grids with y+ < 1 have been generated for all the cases. The realizable k-ɛ two-layer model was selected as the turbulence model. The obtained results show that a higher effective area for heat transfer, together with the many turbulent vortices generated at the downwind rims of the dimples that mix the hot fluid near the surface with the passing cold fluid, are the causes of the improved average Nusselt number of the dimpled surface in comparison to the smooth plate. However, greater pressure loss due to higher friction drag and recirculation zones inside the dimples exists as a drawback of this system. Moreover, for all arrangements, increasing the dimple depth ratio Δ has a negative impact on heat transfer augmentation and also worsens the pressure loss, leading to the conclusion that Δ = 0.25 is the best option for the dimple depth.

  18. Comparative study between RDC number 20 from ANVISA of 2006 and standard CNEN NN 6.10 of 2014 on radiotherapy services

    International Nuclear Information System (INIS)

    Silva, D.R.; Geraldo, J.M.; Batista, A.S.M.

    2017-01-01

    Introduction: The internal procedures of a radiotherapy service are performed based on the resolutions and standards of the regulatory bodies for both health and radiation use. In the health area, control is carried out by the National Health Surveillance Agency (ANVISA), through Resolution of the Collegiate Board of Directors (RDC) number 20 of 2006. On the other hand, because it is a service that uses high-energy ionizing radiation, it must comply with the rules of the National Nuclear Energy Commission (CNEN), specifically CNEN NN 6.10 of 2014. It is therefore necessary to integrate the recommendations contained in the ANVISA and CNEN determinations, requiring an effort of interpretation and transposition into the internal procedures of each institution. Methods: The objective of this study was to compare, discuss and interpret RDC number 20 and CNEN NN 6.10 with respect to how the two contribute to the applicability of radioprotection and radiation therapy rules. Results: Tables are presented of the items contained, or not, in the two documents, and the contributions and focus of each are evaluated. Conclusion: It is noted that each text reflects the interests of its legislator and supervisory body: CNEN in the control of the use of radioactive sources and emitting equipment, and ANVISA in the control of agents harmful to health.

  19. Comparative Study of Surface-lattice-site Resolved Neutralization of Slow Multicharged Ions during Large-angle Quasi-binary Collisions with Au(110): Simulation and Experiment

    International Nuclear Information System (INIS)

    Meyer, F.W.

    2001-01-01

    In this article we extend our earlier studies of the azimuthal dependences of low energy projectiles scattered in large angle quasi-binary collisions from Au(110). Measurements are presented for 20 keV Ar9+ at normal incidence, which are compared with our earlier measurements for this ion at 5 keV and 10° incidence angle. A deconvolution procedure based on MARLOWE simulation results carried out at both energies provides information about the energy dependence of projectile neutralization during interactions just with the atoms along the top ridge of the reconstructed Au(110) surface corrugation, in comparison to, e.g., interactions with atoms lying on the sidewalls. To test the sensitivity of the agreement between the MARLOWE results and the experimental measurements, we show simulation results obtained for a non-reconstructed Au(110) surface with 20 keV Ar projectiles, and for different scattering potentials that are intended to simulate the effects on the scattering trajectory of a projectile inner-shell vacancy surviving the binary collision. In addition, simulation results are shown for a number of different total scattering angles, to illustrate their utility in finding optimum values for this parameter prior to the actual measurements.

  20. What determines area burned in large landscapes? Insights from a decade of comparative landscape-fire modelling

    Science.gov (United States)

    Geoffrey J. Cary; Robert E. Keane; Mike D. Flannigan; Ian D. Davies; Russ A. Parsons

    2015-01-01

    Understanding what determines area burned in large landscapes is critical for informing wildland fire management in fire-prone environments and for representing fire activity in Dynamic Global Vegetation Models. For the past ten years, a group of landscape-fire modellers have been exploring the relative influence of key determinants of area burned in temperate and...

  1. What causes differences between national estimates of forest management carbon emissions and removals compared to estimates of large - scale models?

    NARCIS (Netherlands)

    Groen, T.A.; Verkerk, P.J.; Böttcher, H.; Grassi, G.; Cienciala, E.; Black, K.G.; Fortin, M.; Köthke, M.; Lehtonen, A.; Nabuurs, G.J; Petrova, L.; Blujdea, V.

    2013-01-01

    Under the United Nations Framework Convention for Climate Change all Parties have to report on carbon emissions and removals from the forestry sector. Each Party can use its own approach and country specific data for this. Independently, large-scale models exist (e.g. EFISCEN and G4M as used in this

  2. Assessment of delta ferrite in multipass TIG welds of 40 mm thick SS 316L: A comparative study of ferrite number (FN) prediction and measurements

    Science.gov (United States)

    Buddu, Ramesh Kumar; Raole, P. M.; Sarkar, B.

    2017-04-01

    Austenitic stainless steels are widely used in the fabrication of major fusion reactor systems such as the vacuum vessel, divertor, cryostat and other structural components. Multipass welding is used in the development of thick plates for structural component fabrication. Owing to the repeated weld thermal cycles, the microstructure is adversely altered by the presence of complex phases such as austenite, ferrite and delta ferrite, which subsequently influence mechanical properties of the joints such as tensile strength and impact toughness. The present paper reports a detailed analysis of the delta ferrite phase in the welded region of 40 mm thick SS 316L plates welded by a specially designed multipass narrow-groove TIG welding process under three different heat input conditions. Delta ferrite microstructures of both the acicular and the vermicular type are observed. The chemical composition of the weld samples was used to predict the Ferrite Number (FN), the standard representation of delta ferrite in welds, with the Schaeffler, WRC-1992 and DeLong techniques by calculating the Creq and Nieq ratios, and the predictions were compared with experimental FN data from Feritescope measurements. The low heat input condition (1.67 kJ/mm) produced the highest FN (7.28), the medium heat input (1.72 kJ/mm) gave FN 7.04, and the high heat input (1.87 kJ/mm) showed a decreasing trend with FN 6.68; the measured FN data are compared with the prediction methods.
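The Creq and Nieq ratios mentioned in the abstract have fixed definitions on the WRC-1992 diagram. A minimal sketch, assuming compositions in wt%; the example composition is a nominal SS 316L, not the paper's measured one, and the Ferrite Number itself must still be read from the diagram:

```python
def wrc1992_equivalents(comp):
    """Chromium and nickel equivalents as defined for the WRC-1992 diagram:
        Creq = Cr + Mo + 0.7*Nb
        Nieq = Ni + 35*C + 20*N + 0.25*Cu
    comp: dict mapping element symbol -> wt%; missing elements count as 0.
    The Ferrite Number is then read off (or interpolated from) the diagram
    at the (Creq, Nieq) point, which is not reproduced here.
    """
    get = comp.get
    cr_eq = get("Cr", 0.0) + get("Mo", 0.0) + 0.7 * get("Nb", 0.0)
    ni_eq = (get("Ni", 0.0) + 35.0 * get("C", 0.0)
             + 20.0 * get("N", 0.0) + 0.25 * get("Cu", 0.0))
    return cr_eq, ni_eq

# Nominal SS 316L weld-metal composition in wt% (illustrative, not from the paper)
cr_eq, ni_eq = wrc1992_equivalents(
    {"Cr": 17.0, "Mo": 2.1, "Ni": 11.0, "C": 0.02, "N": 0.05, "Cu": 0.1}
)
```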

  3. Physiotherapists use a small number of behaviour change techniques when promoting physical activity: A systematic review comparing experimental and observational studies.

    Science.gov (United States)

    Kunstler, Breanne E; Cook, Jill L; Freene, Nicole; Finch, Caroline F; Kemp, Joanne L; O'Halloran, Paul D; Gaida, James E

    2018-06-01

    Physiotherapists promote physical activity as part of their practice. This study reviewed the behaviour change techniques (BCTs) physiotherapists use when promoting physical activity in experimental and observational studies. Systematic review of experimental and observational studies. Twelve databases were searched using terms related to physiotherapy and physical activity. We included experimental studies evaluating the efficacy of physiotherapist-led physical activity interventions delivered to adults in clinic-based private practice and outpatient settings to individuals with, or at risk of, non-communicable diseases. Observational studies reporting the techniques physiotherapists use when promoting physical activity were also included. The behaviour change techniques used in all studies were identified using the Behaviour Change Technique Taxonomy. The behaviour change techniques appearing in efficacious and inefficacious experimental interventions were compared using a narrative approach. Twelve studies (nine experimental and three observational) were retained from the initial search yield of 4141. Risk of bias ranged from low to high. Physiotherapists used seven behaviour change techniques in the observational studies, compared to 30 behaviour change techniques in the experimental studies. Social support (unspecified) was the most frequently identified behaviour change technique across both settings. Efficacious experimental interventions used more behaviour change techniques (n=29) and functioned in more ways (n=6) than did inefficacious experimental interventions (behaviour change techniques=10 and functions=1). Physiotherapists use a small number of behaviour change techniques. Fewer behaviour change techniques were identified in observational studies than in experimental studies, suggesting physiotherapists use fewer BCTs clinically than experimentally. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  4. Development of special machines for production of large number of superconducting coils for the spool correctors for the main dipole of LHC

    International Nuclear Information System (INIS)

    Puntambekar, A.M.; Karmarkar, M.G.

    2003-01-01

    Superconducting (Sc) spool correctors of different types, namely sextupole (MCS), decapole (MCD) and octupole (MCO), are incorporated in each of the main dipoles of the Large Hadron Collider (LHC). In all, 2464 MCS and 1232 MCDO magnets are required to equip all 1232 dipoles of the LHC. The coils, wound from thin rectangular-section Sc wires, are the heart of the magnet assembly, and its performance in field quality and cold quench training largely depends on the precise and robust construction of these coils. Under the DAE-CERN collaboration, CAT was entrusted with the responsibility of making these magnets for the LHC. Starting with the development of manual fixtures and prototyping using soldering, more advanced special automatic coil winding and ultrasonic welding (USW) systems for production of large numbers of coils and magnets were built at CAT. The paper briefly describes the various developments in this area. (author)

  5. Experimental observations of electron-backscatter effects from high-atomic-number anodes in large-aspect-ratio, electron-beam diodes

    Energy Technology Data Exchange (ETDEWEB)

    Cooperstein, G; Mosher, D; Stephanakis, S J; Weber, B V; Young, F C [Naval Research Laboratory, Washington, DC (United States); Swanekamp, S B [JAYCOR, Vienna, VA (United States)

    1997-12-31

    Backscattered electrons from anodes with high-atomic-number substrates cause early-time anode-plasma formation from the surface layer, leading to faster, more intense electron beam pinching and lower diode impedance. A simple derivation of Child-Langmuir current from a thin hollow cathode shows the same dependence on the diode aspect ratio as critical current. Using this fact, it is shown that the diode voltage and current follow relativistic Child-Langmuir theory until the anode plasma is formed, and then follow critical current after the beam pinches. With thin hollow cathodes, electron beam pinching can be suppressed at low voltages (< 800 kV) even for high currents and high-atomic-number anodes. Electron beam pinching can also be suppressed at high voltages for low-atomic-number anodes as long as the electron current densities remain below the plasma turn-on threshold. (author). 8 figs., 2 refs.

  6. Comparative Research of Extra-large-span Cable-stayed Bridge with Steel Truss Girder and Steel Box Girder

    Directory of Open Access Journals (Sweden)

    Tan Manjiang

    2015-01-01

    To research the structural performance of extra-large-span cable-stayed bridges under different section forms, with the engineering background of an 800 m main-span cable-stayed bridge with steel truss girder, a cable-stayed bridge with steel box girder is designed according to the current bridge regulations. The two bridges are designed at the ultimate limit state of carrying capacity, so that the maximum stress and minimum stress of the stress envelope diagram are substantially the same. A comprehensive comparison of the two bridge types is given in terms of static behaviour, natural vibration frequency, stability, economic performance and so on. The analysis results provide a reference for future large-span cable-stayed bridges in choosing between the steel truss girder and the steel box girder.

  7. Large-scale studies of the HphI insulin gene variable-number-of-tandem-repeats polymorphism in relation to Type 2 diabetes mellitus and insulin release

    DEFF Research Database (Denmark)

    Hansen, S K; Gjesing, A P; Rasmussen, S K

    2004-01-01

    The class III allele of the variable-number-of-tandem-repeats polymorphism located 5' of the insulin gene (INS-VNTR) has been associated with Type 2 diabetes and altered birthweight. It has also been suggested, although inconsistently, that the class III allele plays a role in glucose-induced insulin release…

  8. MANUFACTURING AND CONTINUOUS IMPROVEMENT PERFORMANCE LEVEL IN PLANTS OF MEXICO; A COMPARATIVE ANALYSIS AMONG LARGE AND MEDIUM SIZE PLANTS

    OpenAIRE

    Carlos Monge; Jesús Cruz

    2015-01-01

    A random and statistically significant sample of 40 medium (12) and large (28) manufacturing plants of Apodaca, Mexico were surveyed using a structured and validated questionnaire to investigate their level of implementation of lean manufacturing, sustainable manufacturing, continuous improvement, operational efficiency and environmental responsibility. Performance in the mentioned philosophies was found to be low in both categories of plants, howe...

  9. Patterns of variations in large pelagic fish: A comparative approach between the Indian and the Atlantic Oceans

    Science.gov (United States)

    Corbineau, A.; Rouyer, T.; Fromentin, J.-M.; Cazelles, B.; Fonteneau, A.; Ménard, F.

    2010-07-01

    Catch data of large pelagic fish such as tuna, swordfish and billfish are highly variable ranging from short to long term. Based on fisheries data, these time series are noisy and reflect mixed information on exploitation (targeting, strategy, fishing power), population dynamics (recruitment, growth, mortality, migration, etc.), and environmental forcing (local conditions or dominant climate patterns). In this work, we investigated patterns of variation of large pelagic fish (i.e. yellowfin tuna, bigeye tuna, swordfish and blue marlin) in Japanese longliners catch data from 1960 to 2004. We performed wavelet analyses on the yearly time series of each fish species in each biogeographic province of the tropical Indian and Atlantic Oceans. In addition, we carried out cross-wavelet analyses between these biological time series and a large-scale climatic index, i.e. the Southern Oscillation Index (SOI). Results showed that the biogeographic province was the most important factor structuring the patterns of variability of Japanese catch time series. Relationships between the SOI and the fish catches in the Indian and Atlantic Oceans also pointed out the role of climatic variability for structuring patterns of variation of catch time series. This work finally confirmed that Japanese longline CPUE data poorly reflect the underlying population dynamics of tunas.

  10. Planning Alternative Organizational Frameworks For a Large Scale Educational Telecommunications System Served by Fixed/Broadcast Satellites. Memorandum Number 73/3.

    Science.gov (United States)

    Walkmeyer, John

    Considerations relating to the design of organizational structures for development and control of large scale educational telecommunications systems using satellites are explored. The first part of the document deals with four issues of system-wide concern. The first is user accessibility to the system, including proximity to entry points, ability…

  11. The number of extranodal sites assessed by PET/CT scan is a powerful predictor of CNS relapse for patients with diffuse large B-cell lymphoma

    DEFF Research Database (Denmark)

    El-Galaly, Tarec Christoffer; Villa, Diego; Michaelsen, Thomas Yssing

    2017-01-01

    Purpose Development of secondary central nervous system involvement (SCNS) in patients with diffuse large B-cell lymphoma is associated with poor outcomes. The CNS International Prognostic Index (CNS-IPI) has been proposed for identifying patients at greatest risk, but the optimal model is unknow...

  12. Handling large numbers of observation units in three-way methods for the analysis of qualitative and quantitative two-way data

    NARCIS (Netherlands)

    Kiers, Henk A.L.; Marchetti, G.M.

    1994-01-01

    Recently, a number of methods have been proposed for the exploratory analysis of mixtures of qualitative and quantitative variables. In these methods for each variable an object by object similarity matrix is constructed, and these are consequently analyzed by means of three-way methods like

  13. A large increase of sour taste receptor cells in Skn-1-deficient mice does not alter the number of their sour taste signal-transmitting gustatory neurons.

    Science.gov (United States)

    Maeda, Naohiro; Narukawa, Masataka; Ishimaru, Yoshiro; Yamamoto, Kurumi; Misaka, Takumi; Abe, Keiko

    2017-05-01

    The connections between taste receptor cells (TRCs) and innervating gustatory neurons are formed in a mutually dependent manner during development. To investigate whether a change in the ratio of cell types that compose taste buds influences the number of innervating gustatory neurons, we analyzed the proportion of gustatory neurons that transmit sour taste signals in adult Skn-1a-/- mice, in which the number of sour TRCs is greatly increased. We generated polycystic kidney disease 1 like 3-wheat germ agglutinin (pkd1l3-WGA)/Skn-1a+/+ and pkd1l3-WGA/Skn-1a-/- mice by crossing Skn-1a-/- mice and pkd1l3-WGA transgenic mice, in which neural pathways of sour taste signals can be visualized. The number of WGA-positive cells in the circumvallate papillae is 3-fold higher in taste buds of pkd1l3-WGA/Skn-1a-/- mice relative to pkd1l3-WGA/Skn-1a+/+ mice. Intriguingly, the ratio of WGA-positive neurons to P2X2-expressing gustatory neurons in nodose/petrosal ganglia was similar between pkd1l3-WGA/Skn-1a+/+ and pkd1l3-WGA/Skn-1a-/- mice. In conclusion, an alteration in the ratio of cell types that compose taste buds does not influence the number of gustatory neurons that transmit sour taste signals. Copyright © 2017. Published by Elsevier B.V.

  14. Mass attenuation coefficient (μ/ρ), effective atomic number (Zeff) and measurement of x-ray energy spectra using based calcium phosphate biomaterials: a comparative study

    International Nuclear Information System (INIS)

    Fernandes Z, M. A.; Da Silva, T. A.; Nogueira, M. S.; Goncalves Z, E.

    2015-10-01

    In dentistry, alveolar bone regeneration procedures using calcium phosphate-based biomaterials have been shown to be effective. However, there are no reports in the literature of studies on the interaction of low-energy radiation with these biomaterials used as attenuators, so no comparison between theoretical and experimental values has been possible. The objective of this study was to determine the radiation interaction parameters of four dental biomaterials (BioOss, Cerasorb M Dental, Straumann Boneceramic and Osteogen) for diagnostic radiology qualities. As materials and methods, the composition of the biomaterials was determined by analytical techniques. Samples with 0.181 cm to 0.297 cm thickness were used experimentally as attenuators for the measurement of the transmitted X-ray spectra on X-ray equipment in the 50 to 90 kV range, with a spectrometric system comprising a CdTe detector. After this procedure, the mass attenuation coefficient and the effective atomic number were determined and compared for all the specimens analyzed, using the WinXCOM program in the range of 10 to 200 keV. In all cases examined, the energy spectrum of X-rays transmitted through BioOss had a mean energy slightly smaller than that of the other biomaterials at similar thickness. The μ/ρ and Zeff of the biomaterials showed their dependence on photon energy and on the atomic numbers of the elements of the material analyzed. It is concluded, according to the methodology employed in this study, that the measurements of the X-ray spectrum, μ/ρ and Zeff using biomaterials as attenuators confirmed that the thickness, density and composition of the samples and the incident photon energy are the factors that determine the characteristics of the radiation in a tissue or equivalent material. (Author)
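Effective atomic number calculations of the kind reported here are often done with a power-law (Mayneord-type) formula over the electron fractions of the constituent elements. The helper below is a hypothetical illustration of that formula, not the WinXCOM procedure the study actually used; the water sanity check uses standard composition data.

```python
def z_eff(elements):
    """Effective atomic number via a Mayneord-type power law,
        Zeff = (sum_i f_i * Z_i**2.94) ** (1/2.94),
    where f_i is the fraction of all electrons contributed by element i.

    elements: list of (weight_fraction, Z, A) tuples.
    """
    # relative number of electrons contributed per gram by each element
    electrons = [(w * Z / A, Z) for (w, Z, A) in elements]
    total = sum(n for n, _ in electrons)
    return sum((n / total) * Z ** 2.94 for n, Z in electrons) ** (1.0 / 2.94)

# Sanity check on water (11.19 wt% H, 88.81 wt% O); literature value is ~7.4
zeff_water = z_eff([(0.1119, 1, 1.008), (0.8881, 8, 15.999)])
```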

  15. A comparative study of the impact of ERP systems implementation on large companies in Slovakia and Slovenia

    DEFF Research Database (Denmark)

    Sudzina, Frantisek; Pucihar, Andreja; Lenart, Gregor

    2011-01-01

    a significant difference in the impact of ERP systems implementation on overall IS/IT costs can be found, as well as on the proportion of the IT/IS costs attributed to IT and other departments, on efficiency as profitability, on effectiveness as productivity, and on the availability of IS/IT services. The research is based on data from large Slovak and Slovenian companies. The models control for the extent and successfulness of the ERP system implementation and for the IT focus of the company.

  16. Design and Fabrication of 3D printed Scaffolds with a Mechanical Strength Comparable to Cortical Bone to Repair Large Bone Defects

    OpenAIRE

    Roohani-Esfahani, Seyed-Iman; Newman, Peter; Zreiqat, Hala

    2016-01-01

    A challenge in regenerating large bone defects under load is to create scaffolds with large and interconnected pores while providing a compressive strength comparable to cortical bone (100-150 MPa). Here we design a novel hexagonal architecture for a glass-ceramic scaffold to fabricate an anisotropic, highly porous three-dimensional scaffold with a compressive strength of 110 MPa. Scaffolds with the hexagonal design demonstrated a high fatigue resistance (1,000,000 cycles at 1-10 MPa compressive...

  17. Forty-Five-Year Mortality Rate as a Function of the Number and Type of Psychiatric Diagnoses Found in a Large Danish Birth Cohort

    DEFF Research Database (Denmark)

    Madarasz, Wendy; Manzardo, Ann; Mortensen, Erik Lykke

    2012-01-01

    Objective: Psychiatric comorbidities are common among psychiatric patients and typically associated with poorer clinical prognoses. Subjects of a large Danish birth cohort were used to study the relation between mortality and co-occurring psychiatric diagnoses. Method: We searched the Danish Central Psychiatric Research Registry for 8109 birth cohort members aged 45 years. Lifetime psychiatric diagnoses (International Classification of Diseases, Revision 10, group F codes, Mental and Behavioural Disorders, and one Z code) for identified subjects were organized into 14 mutually exclusive…

  18. Load Frequency Control by use of a Number of Both Heat Pump Water Heaters and Electric Vehicles in Power System with a Large Integration of Renewable Energy Sources

    Science.gov (United States)

    Masuta, Taisuke; Shimizu, Koichiro; Yokoyama, Akihiko

    In Japan, from the viewpoints of global warming countermeasures and energy security, it is expected to establish a smart grid as a power system into which a large amount of generation from renewable energy sources such as wind power generation and photovoltaic generation can be installed. Measures for power system stability and reliability are necessary because a large integration of these renewable energy sources causes some problems in power systems, e.g. frequency fluctuation and distribution voltage rise, and the Battery Energy Storage System (BESS) is one of the effective solutions to these problems. Due to the high cost of the BESS, our research group has studied the application of controllable loads such as the Heat Pump Water Heater (HPWH) and the Electric Vehicle (EV) to power system control in order to reduce the required capacity of the BESS. This paper proposes a new coordinated Load Frequency Control (LFC) method for the conventional power plants, the BESS, the HPWHs, and the EVs. The performance of the proposed LFC method is evaluated by numerical simulations conducted on a power system model with a large integration of wind power generation and photovoltaic generation.

  19. Comparative evaluation of direct plating and most probable number for enumeration of low levels of Listeria monocytogenes in naturally contaminated ice cream products.

    Science.gov (United States)

    Chen, Yi; Pouillot, Régis; S Burall, Laurel; Strain, Errol A; Van Doren, Jane M; De Jesus, Antonio J; Laasri, Anna; Wang, Hua; Ali, Laila; Tatavarthy, Aparna; Zhang, Guodong; Hu, Lijun; Day, James; Sheth, Ishani; Kang, Jihun; Sahu, Surasri; Srinivasan, Devayani; Brown, Eric W; Parish, Mickey; Zink, Donald L; Datta, Atin R; Hammack, Thomas S; Macarisin, Dumitru

    2017-01-16

    A precise and accurate method for the enumeration of low levels of Listeria monocytogenes in foods is critical to a variety of studies. In this study, a paired comparison of most probable number (MPN) and direct plating enumeration of L. monocytogenes was conducted on a total of 1730 outbreak-associated ice cream samples that were naturally contaminated with low levels of L. monocytogenes. MPN was performed on all 1730 samples. Direct plating was performed on all samples using the RAPID'L.mono (RLM) agar (1600 samples) and agar Listeria Ottaviani and Agosti (ALOA; 130 samples). A probabilistic analysis with a Bayesian inference model was used to compare paired direct plating and MPN estimates of L. monocytogenes in ice cream samples because assumptions implicit in ordinary least squares (OLS) linear regression analyses were not met for such a comparison. The probabilistic analysis revealed good agreement between the MPN and direct plating estimates, and this agreement showed that the MPN schemes and direct plating schemes using ALOA or RLM evaluated in the present study were suitable for enumerating low levels of L. monocytogenes in these ice cream samples. The statistical analysis further revealed that OLS linear regression analyses of direct plating and MPN data did introduce bias that incorrectly characterized systematic differences between estimates from the two methods. Published by Elsevier B.V.
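An MPN estimate of the kind compared above is the maximum-likelihood solution of a dilution-series score equation. The following is a generic bisection sketch under the standard Poisson-dilution model, not the specific enumeration scheme used in the study; the example tube pattern and its table value are standard textbook figures.

```python
import math

def mpn_estimate(dilutions, lo=1e-6, hi=100.0, iters=200):
    """Maximum-likelihood Most Probable Number (organisms per unit volume).

    dilutions: list of (volume_per_tube, n_tubes, n_positive) triples.
    The score (derivative of the log-likelihood) is monotone decreasing in
    the concentration, so simple bisection suffices to find its root.
    """
    def score(lam):
        s = 0.0
        for v, n, g in dilutions:
            p = math.exp(-lam * v)           # P(a tube of volume v is negative)
            if g > 0:
                s += g * v * p / (1.0 - p)   # positive tubes push the estimate up
            s -= (n - g) * v                 # negative tubes push it down
        return s

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Classic 5-tube series at 10, 1 and 0.1 mL with a 5-2-0 positive pattern;
# standard MPN tables list about 0.49 organisms/mL for this pattern.
mpn = mpn_estimate([(10.0, 5, 5), (1.0, 5, 2), (0.1, 5, 0)])
```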

  20. Quench protection test results and comparative simulations on the first 10 meter prototype dipoles for the Large Hadron Collider

    International Nuclear Information System (INIS)

    Rodriguez-Mateos, F.; Gerin, G.; Marquis, A.

    1996-01-01

    The first 10 meter long dipole prototypes made by European Industry within the framework of the R and D program for the Large Hadron Collider (LHC) have been tested at CERN. As a part of the test program, a series of quench protection tests have been carried out in order to qualify the basic protection scheme foreseen for the LHC dipoles (quench heaters and cold diodes). Results are presented on the quench heater performance, and on the maximum temperatures and voltages observed during quenches under the so-called machine conditions. Moreover, an update of the quench simulation package specially developed at CERN (QUABER 2) has been recently made. Details on this new version of QUABER are given. Simulation runs have been made specifically to validate the model with the results from the measurements on quench protection mentioned above

  1. Modelling of natural convection flows with large temperature differences: a benchmark problem for low Mach number solvers. Part. 1 reference solutions

    International Nuclear Information System (INIS)

    Le Quere, P.; Weisman, C.; Paillere, H.; Vierendeels, J.; Dick, E.; Becker, R.; Braack, M.; Locke, J.

    2005-01-01

    Heat transfer by natural convection and conduction in enclosures occurs in numerous practical situations, including the cooling of nuclear reactors. For large temperature differences, the flow becomes compressible, with a strong coupling between the continuity, momentum and energy equations through the equation of state, and its properties (viscosity, heat conductivity) also vary with the temperature, making the Boussinesq flow approximation inappropriate and inaccurate. There are very few reference solutions in the literature on non-Boussinesq natural convection flows. We propose here a test case problem which extends the well-known De Vahl Davis differentially heated square cavity problem to the case of large temperature differences for which the Boussinesq approximation is no longer valid. The paper is split into two parts: in this first part, we propose as yet unpublished reference solutions for cases characterized by a non-dimensional temperature difference of 0.6, Ra = 10^6 (constant property and variable property cases) and Ra = 10^7 (variable property case). These reference solutions were produced after a first international workshop organized by CEA and LIMSI in January 2000, in which the above authors volunteered to produce accurate numerical solutions from which the present reference solutions could be established. (authors)

  2. Prediction of the number of 14 MeV neutrons elastically scattered from a large sample of aluminium using the Monte Carlo simulation method

    International Nuclear Information System (INIS)

    Husin Wagiran; Wan Mohd Nasir Wan Kadir

    1997-01-01

    In neutron scattering processes, the effect of multiple scattering is to cause an effective increase in the measured cross-sections, due to the increased probability of neutron scattering interactions in the sample. Analysis of how the effective cross-section varies with thickness is very complicated, owing to complicated sample geometries and the variation of the scattering cross-section with energy. The Monte Carlo method is one possible method for treating multiple scattering processes in an extended sample. In this method many approximations have to be made, and accurate microscopic cross-section data are needed at various angles. In the present work, a Monte Carlo simulation programme suitable for a small computer was developed. The programme was capable of predicting the number of neutrons scattered from aluminium samples of various thicknesses at all angles between 0° and 360° in 10° increments. To keep the programme simple enough to run on a microcomputer in reasonable time, the calculations were done in a two-dimensional coordinate system. The number of neutrons predicted by this model shows good agreement with previous experimental results.
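
The two-dimensional Monte Carlo treatment described above can be sketched in a few lines. The cross sections, slab thickness, and isotropic 2D scattering law below are assumptions chosen for illustration, not the paper's aluminium data:

```python
import math, random

random.seed(1)

# hypothetical macroscopic cross sections for the slab material (1/cm);
# illustrative values, not the paper's aluminium data
SIGMA_S, SIGMA_T = 0.08, 0.10      # scattering and total
THICKNESS = 5.0                    # slab thickness in cm

def track_neutron():
    """Follow one neutron through the slab in 2D.

    Returns the exit direction in degrees, or None if the neutron is absorbed.
    """
    x, theta = 0.0, 0.0                                     # enter normal to the slab face
    while True:
        path = -math.log(1.0 - random.random()) / SIGMA_T   # sample free flight length
        x += path * math.cos(theta)
        if x < 0.0 or x > THICKNESS:
            return math.degrees(theta) % 360.0              # escaped: record direction
        if random.random() > SIGMA_S / SIGMA_T:
            return None                                     # absorbed
        theta = random.uniform(0.0, 2.0 * math.pi)          # isotropic scatter in 2D

angles = [a for a in (track_neutron() for _ in range(20000)) if a is not None]
# tally exits into 10-degree bins over the 0-360 degree range, as in the paper
bins = [0] * 36
for a in angles:
    bins[int(a // 10) % 36] += 1
print(f"escaped: {len(angles)} of 20000, forward (0-10 deg) fraction: {bins[0] / len(angles):.3f}")
```

Repeating the tally for several slab thicknesses reproduces the effect the abstract describes: thicker samples scatter more neutrons multiple times, redistributing counts away from the forward bins.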

  3. Implementation of genomic recursions in single-step genomic best linear unbiased predictor for US Holsteins with a large number of genotyped animals.

    Science.gov (United States)

    Masuda, Y; Misztal, I; Tsuruta, S; Legarra, A; Aguilar, I; Lourenco, D A L; Fragomeni, B O; Lawlor, T J

    2016-03-01

    The objectives of this study were to develop and evaluate an efficient implementation of the computation of the inverse of the genomic relationship matrix with a recursion algorithm, the algorithm for proven and young (APY), in single-step genomic BLUP. We validated genomic predictions for young bulls with more than 500,000 genotyped animals for final score in US Holsteins. Phenotypic data included 11,626,576 final scores on 7,093,380 US Holstein cows, and genotypes were available for 569,404 animals. Daughter deviations for young bulls with no classified daughters in 2009, but at least 30 classified daughters in 2014, were computed using all the phenotypic data. Genomic predictions for the same bulls were calculated with single-step genomic BLUP using phenotypes up to 2009. We calculated the inverse of the genomic relationship matrix, G_APY^(-1), based on a direct inversion of the genomic relationship matrix for a small subset of genotyped animals (core animals), and extended that information to noncore animals by recursion. We tested several sets of core animals, including 9,406 bulls with at least 1 classified daughter; 9,406 bulls and 1,052 classified dams of bulls; 9,406 bulls and 7,422 classified cows; and random samples of 5,000 to 30,000 animals. Validation reliability was assessed by the coefficient of determination from regression of daughter deviation on genomic predictions for the predicted young bulls. The reliabilities were 0.39 with 5,000 randomly chosen core animals, 0.45 with the 9,406 bulls and 7,422 cows as core animals, and 0.44 with the remaining sets. With phenotypes truncated in 2009 and the preconditioned conjugate gradient used to solve the mixed model equations, the number of rounds to convergence was 1,343 for core animals defined by bulls; 2,066 for bulls and cows; and at most 1,629 for 10,000 random animals. With complete phenotype data, the number of rounds decreased to 858, 1,299, and at most 1,092, respectively. Setting up G_APY^(-1
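
The APY construction described above (direct inversion only for the small core block, recursion for noncore animals) can be sketched with dense linear algebra on a toy matrix. The dimensions and relationship values below are synthetic, purely to verify the block structure of the inverse; real implementations never form the dense noncore block:

```python
import numpy as np

rng = np.random.default_rng(0)
n_core, n_non = 5, 8                              # toy sizes; the study uses ~10,000 core animals

# synthetic genomic relationships (illustrative, not real genotype data)
L = rng.normal(size=(n_core, n_core))
Gcc = L @ L.T + n_core * np.eye(n_core)           # core block, positive definite
Gcn = rng.normal(size=(n_core, n_non))            # core-by-noncore relationships
P = Gcn.T @ np.linalg.inv(Gcc)                    # recursion coefficients (noncore on core)
m = rng.uniform(1.0, 2.0, size=n_non)             # diagonal residual variances for noncore

# relationship matrix implied by the APY recursion
G_apy = np.block([[Gcc, Gcn],
                  [Gcn.T, P @ Gcn + np.diag(m)]])

# APY inverse: only the small core block is inverted directly;
# the noncore part enters through the diagonal m and the recursion coefficients
Gcc_inv = np.linalg.inv(Gcc)
Minv = np.diag(1.0 / m)
G_apy_inv = np.block([[Gcc_inv + P.T @ Minv @ P, -P.T @ Minv],
                      [-Minv @ P, Minv]])

resid = np.abs(G_apy_inv @ G_apy - np.eye(n_core + n_non)).max()
print(f"max |G_APY^(-1) @ G_APY - I| = {resid:.2e}")
```

Because the noncore-by-noncore part of the inverse is diagonal, memory and computation scale with the number of core animals rather than with the full genotyped population, which is what makes 569,404 genotypes tractable.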

  4. Comparative study of Monte Carlo particle transport code PHITS and nuclear data processing code NJOY for PKA energy spectra and heating number under neutron irradiation

    International Nuclear Information System (INIS)

    Iwamoto, Y.; Ogawa, T.

    2016-01-01

    The modelling of damage in materials irradiated by neutrons is needed for understanding the mechanism of radiation damage in fission and fusion reactor facilities. Molecular dynamics simulations of damage cascades with full atomic interactions require information about the energy distribution of the primary knock-on atoms (PKAs). The most common way to calculate PKA energy spectra under low-energy neutron irradiation is to use the nuclear data processing code NJOY2012. It calculates group-to-group recoil cross-section matrices from nuclear data libraries in ENDF format, which contain energy and angular recoil distributions for many reactions. After the NJOY2012 step, SPKA6C is employed to produce PKA energy spectra by combining the recoil cross-section matrices with an incident neutron energy spectrum. However, an intercomparison of different processes and nuclear data libraries has not yet been carried out. In particular, the higher energies (~5 MeV) of the incident neutrons, compared to fission, open many reaction channels, which produce a complex distribution of PKAs in energy and type. Recently, we have developed the event generator mode (EGM) in the Particle and Heavy Ion Transport code System PHITS for neutron-induced reactions in the energy region below 20 MeV. The main feature of EGM is to produce PKAs while conserving energy and momentum in each reaction. It is used for event-by-event analysis in application fields such as soft-error analysis in semiconductors, microdosimetry in the human body, and estimation of displacements per atom (DPA) in metals. The purpose of this work is to quantify the differences in PKA spectra and in the heating number (related to kerma) between the PHITS-EGM and NJOY2012+SPKA6C calculation methods, using the libraries TENDL-2015, ENDF/B-VII.1 and JENDL-4.0, for fusion-relevant materials.
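
The DPA values mentioned above are conventionally derived from PKA damage energy with the Norgett-Robinson-Torrens (NRT) formula. A minimal sketch follows; the 40 eV threshold displacement energy is a typical value for metals, not a figure from this paper:

```python
def nrt_displacements(t_dam_ev, e_d_ev=40.0):
    """NRT estimate of stable displacements produced by one PKA.

    t_dam_ev -- damage energy of the PKA in eV
    e_d_ev   -- threshold displacement energy in eV (material dependent)
    """
    if t_dam_ev < e_d_ev:
        return 0.0                               # below threshold: no displacement
    if t_dam_ev < 2.0 * e_d_ev / 0.8:
        return 1.0                               # single Frenkel-pair regime
    return 0.8 * t_dam_ev / (2.0 * e_d_ev)       # cascade regime

for t in (10.0, 60.0, 1.0e4):
    print(t, nrt_displacements(t))
```

Summing this quantity over a PKA energy spectrum (weighted by the spectrum itself) gives the DPA rate, which is why differences in the PKA spectra between codes and libraries propagate directly into DPA estimates.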

  5. A modification to linearized theory for prediction of pressure loadings on lifting surfaces at high supersonic Mach numbers and large angles of attack

    Science.gov (United States)

    Carlson, H. W.

    1979-01-01

    A new linearized-theory pressure-coefficient formulation was studied. The new formulation is intended to provide more accurate estimates of detailed pressure loadings for improved stability analysis and for analysis of critical structural design conditions. The approach is based on the use of oblique-shock and Prandtl-Meyer expansion relationships for accurate representation of the variation of pressures with surface slopes in two-dimensional flow and linearized-theory perturbation velocities for evaluation of local three-dimensional aerodynamic interference effects. The applicability and limitations of the modification to linearized theory are illustrated through comparisons with experimental pressure distributions for delta wings covering a Mach number range from 1.45 to 4.60 and angles of attack from 0 to 25 degrees.

  6. Comparative Performance in Single-Port Versus Multiport Minimally Invasive Surgery, and Small Versus Large Operative Working Spaces: A Preclinical Randomized Crossover Trial.

    Science.gov (United States)

    Marcus, Hani J; Seneci, Carlo A; Hughes-Hallett, Archie; Cundy, Thomas P; Nandi, Dipankar; Yang, Guang-Zhong; Darzi, Ara

    2016-04-01

    Surgical approaches such as transanal endoscopic microsurgery, which utilize small operative working spaces and are necessarily single-port, are particularly demanding with standard instruments and have not been widely adopted. The aim of this study was to compare surgical performance simultaneously in single-port versus multiport approaches, and in small versus large working spaces. Ten novice, 4 intermediate, and 1 expert surgeons were recruited from a university hospital. A preclinical randomized crossover study design was implemented, comparing performance under the following conditions: (1) multiport approach and large working space, (2) multiport approach and intermediate working space, (3) single-port approach and large working space, (4) single-port approach and intermediate working space, and (5) single-port approach and small working space. In each case, participants performed peg transfer and pattern cutting tasks, and each task repetition was scored. Intermediate and expert surgeons performed significantly better than novices in all conditions. Performance in single-port surgery was significantly worse than in multiport surgery, as was performance in the intermediate versus the large working space. In single-port surgery, there was a converse trend; performances in the intermediate and small working spaces were significantly better than in the large working space. Single-port approaches were significantly more technically challenging than multiport approaches, possibly reflecting loss of instrument triangulation. Surprisingly, in single-port approaches, in which triangulation was no longer a factor, performance in large working spaces was worse than in intermediate and small working spaces. © The Author(s) 2015.

  7. High Frequency Design Considerations for the Large Detector Number and Small Form Factor Dual Electron Spectrometer of the Fast Plasma Investigation on NASA's Magnetospheric Multiscale Mission

    Science.gov (United States)

    Kujawski, Joseph T.; Gliese, Ulrik B.; Cao, N. T.; Zeuch, M. A.; White, D.; Chornay, D. J.; Lobell, J. V.; Avanov, L. A.; Barrie, A. C.; Mariano, A. J.; et al.

    2015-01-01

    Each half of the Dual Electron Spectrometer (DES) of the Fast Plasma Investigation (FPI) on NASA's Magnetospheric MultiScale (MMS) mission utilizes a microchannel plate chevron stack feeding 16 separate detection channels, each with a dedicated anode and amplifier/discriminator chip. The desire to detect events on a single channel with a temporal spacing of 100 ns and a fixed dead-time drove our decision to use an amplifier/discriminator with a very fast (GHz-class) front end. Since the inherent frequency response of each pulse in the output of the DES microchannel plate system also has frequency components above 1 GHz, this produced a number of design constraints not normally expected in electronic systems operating at peak speeds of 10 MHz. Additional constraints were imposed by the geometry of the instrument, which requires all 16 channels, along with each anode and amplifier/discriminator, to be packaged in a relatively small space. We developed an electrical model of board-level interactions between the detector channels, allowing us to design a board topology with the best detection sensitivity and lowest channel-to-channel crosstalk. The amplifier/discriminator output was designed to prevent the outputs of one channel from producing triggers on the inputs of other channels. A number of radio frequency design techniques were then applied to prevent signals from other subsystems (e.g. the high voltage power supply, the command and data handling board, and the ultraviolet stimulation for the MCP) from generating false events. These techniques enabled us to operate the board at its highest sensitivity when operated in isolation, and at very high sensitivity when placed into the overall system.

  8. Large-Scale Urban Projects, Production of Space and Neo-liberal Hegemony: A Comparative Study of Izmir

    Directory of Open Access Journals (Sweden)

    Mehmet PENPECİOĞLU

    2013-04-01

    Full Text Available With the rise of neo-liberalism, large-scale urban projects (LDPs) have become a powerful mechanism of urban policy. Creating spaces of neo-liberal urbanization such as central business districts, tourism centers, gated residences and shopping malls, LDPs play a role not only in the reproduction of capital accumulation relations but also in shifting urban political priorities towards the construction of neo-liberal hegemony. The construction of neo-liberal hegemony, and the role played by LDPs in this process, cannot be investigated through the analysis of capital accumulation alone. Such an investigation must also examine the role of state and civil society actors in LDPs and their collaborative and conflictual relationships, and reveal their functions in hegemony. In the case of Izmir’s two LDPs, namely the New City Center (NCC) and Inciraltı Tourism Center (ITC) projects, this study analyzes the relationship between the production of space and neo-liberal hegemony. In the NCC project, local governments, investors, local capital organizations and professional chambers collaborated and disseminated a hegemonic discourse, which provided social support for the project. Through these relationships and discourses, the NCC project became a hegemonic project for producing space and constructed neo-liberal hegemony over urban political priorities. In contrast, the ITC project saw no collaboration between state and organized civil society actors. The social opposition against the ITC project, initiated by professional chambers, brought legal action against the ITC development plans in order to prevent their implementation. As a result, the ITC project did not acquire the consent of organized social groups and failed to become a hegemonic project for producing space.

  9. Large scale spatial risk and comparative prevalence of Borrelia miyamotoi and Borrelia burgdorferi sensu lato in Ixodes pacificus.

    Directory of Open Access Journals (Sweden)

    Kerry Padgett

    Full Text Available Borrelia miyamotoi is a newly described emerging pathogen transmitted to people by Ixodes species ticks and found in temperate regions of North America, Europe, and Asia. There is limited understanding of large-scale entomological risk patterns of B. miyamotoi and of Borrelia burgdorferi sensu stricto (ss), the agent of Lyme disease, in western North America. In this study, B. miyamotoi, a relapsing fever spirochete, was detected in adult (n=70) and nymphal (n=36) Ixodes pacificus ticks collected from 24 of 48 California counties that were surveyed over a 13 year period. Statewide prevalence of B. burgdorferi sensu lato (sl), which includes B. burgdorferi ss, and of B. miyamotoi was similar in adult I. pacificus (0.6% and 0.8%, respectively). In contrast, the prevalence of B. burgdorferi sl was almost 2.5 times higher than that of B. miyamotoi in nymphal I. pacificus (3.2% versus 1.4%). These results suggest a similar risk of exposure to B. burgdorferi sl and B. miyamotoi from adult I. pacificus tick bites in California, but a higher risk of contracting B. burgdorferi sl than B. miyamotoi from nymphal tick bites. While the regional risk of exposure to these two spirochetes varies, the highest risk for both species is found in north and central coastal California and the Sierra Nevada foothill region, and the lowest risk is in southern California; nevertheless, tick-bite avoidance measures should be implemented in all regions of California. This is the first study to comprehensively evaluate entomologic risk for B. miyamotoi and B. burgdorferi for both adult and nymphal I. pacificus, an important human-biting tick in western North America.

  10. Large Diversity of Porcine Yersinia enterocolitica 4/O:3 in Eight European Countries Assessed by Multiple-Locus Variable-Number Tandem-Repeat Analysis.

    Science.gov (United States)

    Alakurtti, Sini; Keto-Timonen, Riikka; Virtanen, Sonja; Martínez, Pilar Ortiz; Laukkanen-Ninios, Riikka; Korkeala, Hannu

    2016-06-01

    A total of 253 multiple-locus variable-number tandem-repeat analysis (MLVA) types among 634 isolates were discovered while studying the genetic diversity of porcine Yersinia enterocolitica 4/O:3 isolates from eight European countries. Six variable-number tandem-repeat (VNTR) loci, V2A, V4, V5, V6, V7, and V9, were used to study the isolates from 82 farms in Belgium (n = 93, 7 farms), England (n = 41, 8 farms), Estonia (n = 106, 12 farms), Finland (n = 70, 13 farms), Italy (n = 111, 20 farms), Latvia (n = 66, 3 farms), Russia (n = 60, 10 farms), and Spain (n = 87, 9 farms). Cluster analysis revealed mainly country-specific clusters; only one MLVA type, consisting of two isolates, was found in two countries, Russia and Italy. Farm-specific clusters were also discovered, but the same MLVA types could be found on different farms. Analysis of multiple isolates originating either from the same tonsils (n = 4) or from the same farm, but 6 months apart, revealed both identical and different MLVA types. MLVA showed very good discriminatory ability, with a Simpson's discriminatory index (DI) of 0.989. The DIs for VNTR loci V2A, V4, V5, V6, V7, and V9 were 0.916, 0.791, 0.901, 0.877, 0.912, and 0.785, respectively, when all isolates were studied together, but variation was evident between isolates originating from different countries. Locus V4 in the Spanish isolates and locus V9 in the Latvian isolates did not differentiate at all (DI 0.000), and locus V9 in the English isolates showed very low discriminatory power (DI 0.049). The porcine Y. enterocolitica 4/O:3 isolates were diverse, but the variation in DI demonstrates that the well-discriminating loci V2A, V5, V6, and V7 should be included in the MLVA protocol when maximal discriminatory power is needed.
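
The Simpson-based discriminatory index used above is conventionally computed with the Hunter-Gaston formulation. A minimal sketch on a toy set of typing results (the isolate labels are invented for illustration):

```python
from collections import Counter

def discriminatory_index(types):
    """Hunter-Gaston (Simpson-based) discriminatory index of a typing method.

    Probability that two isolates drawn at random (without replacement)
    belong to different types.
    """
    n = len(types)
    counts = Counter(types).values()
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# toy example: 6 isolates resolved into 4 types
di = discriminatory_index(["A", "A", "B", "C", "D", "D"])
print(f"DI = {di:.3f}")
```

A DI of 0.000, as seen for locus V4 in the Spanish isolates, means every isolate shares one type; values near 1 mean almost every pair is distinguished.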

  11. Comparative genomics of 12 strains of Erwinia amylovora identifies a pan-genome with a large conserved core.

    Directory of Open Access Journals (Sweden)

    Rachel A Mann

    Full Text Available The plant pathogen Erwinia amylovora can be divided into two host-specific groupings: strains infecting a broad range of hosts within the Rosaceae subfamily Spiraeoideae (e.g., Malus, Pyrus, Crataegus, Sorbus) and strains infecting Rubus (raspberries and blackberries). Comparative genomic analysis of 12 strains representing distinct populations (e.g., geographic, temporal, host origin) of E. amylovora was used to describe the pan-genome of this major pathogen. The pan-genome contains 5751 coding sequences and is highly conserved relative to other phytopathogenic bacteria, comprising on average 89% conserved core genes. The chromosomes of Spiraeoideae-infecting strains were highly homogeneous, while greater genetic diversity was observed between Spiraeoideae- and Rubus-infecting strains (and among individual Rubus-infecting strains), the majority of which was attributed to variable genomic islands. Based on genomic distance scores and phylogenetic analysis, the Rubus-infecting strain ATCC BAA-2158 was genetically more closely related to the Spiraeoideae-infecting strains of E. amylovora than to the other Rubus-infecting strains. Analysis of the accessory genomes of Spiraeoideae- and Rubus-infecting strains identified putative host-specific determinants, including variation in the effector protein HopX1(Ea) and a putative secondary metabolite pathway present only in Rubus-infecting strains.

  12. SCIENTIFIC AND EDUCATIONAL GEOPORTAL AS INSTRUMENT OF INTEGRATION OF RESULTS OF SCIENTIFIC RESEARCHES OF THE REPUBLIC OF BASHKORTOSTAN BY THE LARGE NUMBER OF USERS

    Directory of Open Access Journals (Sweden)

    Olga I. Hristodulo

    2015-01-01

    Full Text Available The article addresses the urgency of establishing a scientific and educational geoportal as a single data center for the Republic of Bashkortostan, providing quick access to a distributed network of geospatial data and geoservices for all responsible and interested parties. We consider the main tasks, functions and architecture of a scientific and educational geoportal for different types of users. We also carried out a comparative analysis of the basic technologies for the development of mapping services and information systems, which represent the major structural elements of geoportals. As an example, we consider the information retrieval problems of the scientific and educational geoportal for the Republic of Bashkortostan.

  13. The One-carbon Carrier Methylofuran from Methylobacterium extorquens AM1 Contains a Large Number of α- and γ-Linked Glutamic Acid Residues

    Science.gov (United States)

    Hemmann, Jethro L.; Saurel, Olivier; Ochsner, Andrea M.; Stodden, Barbara K.; Kiefer, Patrick; Milon, Alain; Vorholt, Julia A.

    2016-01-01

    Methylobacterium extorquens AM1 uses dedicated cofactors for one-carbon unit conversion. Based on the sequence identities of enzymes and activity determinations, a methanofuran analog was proposed to be involved in formaldehyde oxidation in Alphaproteobacteria. Here, we report the structure of the cofactor, which we termed methylofuran. Using an in vitro enzyme assay and LC-MS, methylofuran was identified in cell extracts and further purified. From the exact mass and MS-MS fragmentation pattern, the structure of the cofactor was determined to consist of a polyglutamic acid side chain linked to a core structure similar to the one present in archaeal methanofuran variants. NMR analyses showed that the core structure contains a furan ring. However, instead of the tyramine moiety that is present in methanofuran cofactors, a tyrosine residue is present in methylofuran, which was further confirmed by MS through the incorporation of a 13C-labeled precursor. Methylofuran was present as a mixture of different species with varying numbers of glutamic acid residues in the side chain ranging from 12 to 24. Notably, the glutamic acid residues were not solely γ-linked, as is the case for all known methanofurans, but were identified by NMR as a mixture of α- and γ-linked amino acids. Considering the unusual peptide chain, the elucidation of the structure presented here sets the basis for further research on this cofactor, which is probably the largest cofactor known so far. PMID:26895963

  14. The One-carbon Carrier Methylofuran from Methylobacterium extorquens AM1 Contains a Large Number of α- and γ-Linked Glutamic Acid Residues.

    Science.gov (United States)

    Hemmann, Jethro L; Saurel, Olivier; Ochsner, Andrea M; Stodden, Barbara K; Kiefer, Patrick; Milon, Alain; Vorholt, Julia A

    2016-04-22

    Methylobacterium extorquens AM1 uses dedicated cofactors for one-carbon unit conversion. Based on the sequence identities of enzymes and activity determinations, a methanofuran analog was proposed to be involved in formaldehyde oxidation in Alphaproteobacteria. Here, we report the structure of the cofactor, which we termed methylofuran. Using an in vitro enzyme assay and LC-MS, methylofuran was identified in cell extracts and further purified. From the exact mass and MS-MS fragmentation pattern, the structure of the cofactor was determined to consist of a polyglutamic acid side chain linked to a core structure similar to the one present in archaeal methanofuran variants. NMR analyses showed that the core structure contains a furan ring. However, instead of the tyramine moiety that is present in methanofuran cofactors, a tyrosine residue is present in methylofuran, which was further confirmed by MS through the incorporation of a (13)C-labeled precursor. Methylofuran was present as a mixture of different species with varying numbers of glutamic acid residues in the side chain ranging from 12 to 24. Notably, the glutamic acid residues were not solely γ-linked, as is the case for all known methanofurans, but were identified by NMR as a mixture of α- and γ-linked amino acids. Considering the unusual peptide chain, the elucidation of the structure presented here sets the basis for further research on this cofactor, which is probably the largest cofactor known so far. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.

  15. Distributed Kalman filtering compared to Fourier domain preconditioned conjugate gradient for laser guide star tomography on extremely large telescopes.

    Science.gov (United States)

    Gilles, Luc; Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Ellerbroek, Brent

    2013-05-01

    This paper discusses the performance and cost of two computationally efficient Fourier-based tomographic wavefront reconstruction algorithms for wide-field laser guide star (LGS) adaptive optics (AO). The first algorithm is the iterative Fourier domain preconditioned conjugate gradient (FDPCG) algorithm developed by Yang et al. [Appl. Opt. 45, 5281 (2006)], combined with pseudo-open-loop control (POLC). FDPCG's computational cost is proportional to N log(N), where N denotes the dimensionality of the tomography problem. The second algorithm is the distributed Kalman filter (DKF) developed by Massioni et al. [J. Opt. Soc. Am. A 28, 2298 (2011)], which is a noniterative spatially invariant controller. When implemented in the Fourier domain, DKF's cost is also proportional to N log(N). Both algorithms are capable of estimating spatial frequency components of the residual phase beyond the wavefront sensor (WFS) cutoff frequency thanks to regularization, thereby reducing WFS spatial aliasing at the expense of more computations. We present performance and cost analyses for the LGS multiconjugate AO system under design for the Thirty Meter Telescope, as well as DKF's sensitivity to uncertainties in wind profile prior information. We found that, provided the wind profile is known to better than 10% wind speed accuracy and 20 deg wind direction accuracy, DKF, despite its spatial invariance assumptions, delivers a significantly reduced wavefront error compared to the static FDPCG minimum variance estimator combined with POLC. Due to its nonsequential nature and high degree of parallelism, DKF is particularly well suited for real-time implementation on inexpensive off-the-shelf graphics processing units.

  16. Suspect screening of large numbers of emerging contaminants in environmental waters using artificial neural networks for chromatographic retention time prediction and high resolution mass spectrometry data analysis.

    Science.gov (United States)

    Bade, Richard; Bijlsma, Lubertus; Miller, Thomas H; Barron, Leon P; Sancho, Juan Vicente; Hernández, Felix

    2015-12-15

    The recent development of broad-scope high resolution mass spectrometry (HRMS) screening methods has resulted in a much improved capability for new compound identification in environmental samples. However, positive identifications at the ng/L concentration level rely on analytical reference standards for chromatographic retention time (tR) and mass spectral comparisons. Chromatographic tR prediction can play a role in increasing confidence in suspect screening efforts for new compounds in the environment, especially when standards are not available, but reliable methods are lacking. The current work focuses on the development of artificial neural networks (ANNs) for tR prediction in gradient reversed-phase liquid chromatography, applied along with HRMS data to suspect screening of wastewater and environmental surface water samples. Based on a tR dataset of >500 compounds, an optimized 4-layer back-propagation multi-layer perceptron model enabled predictions for 85% of all compounds to within 2 min of their measured tR for the training (n=344) and verification (n=100) datasets. To evaluate the ANN's ability to generalize to new data, the model was further tested using 100 randomly selected compounds and showed 95% prediction accuracy within the 2 min elution interval. Given increasing concern over the presence of drug metabolites and other transformation products (TPs) in the aquatic environment, the model was applied along with HRMS data for the preliminary identification of pharmaceutically related compounds in real samples. Examples of compounds where reference standards were subsequently acquired and later confirmed are also presented. To our knowledge, this work presents for the first time the successful application of an accurate retention time predictor and HRMS data-mining using the largest number of compounds to preliminarily identify new or emerging contaminants in wastewater and surface waters. Copyright © 2015 Elsevier B.V. All rights reserved.
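
The back-propagation multi-layer perceptron regression at the core of the method above can be sketched with a tiny single-hidden-layer network. Everything here is synthetic: the "descriptors" and "retention times" are random stand-ins, not the paper's >500-compound dataset or its optimized 4-layer architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic stand-in data: 200 compounds x 6 molecular descriptors (hypothetical)
X = rng.normal(size=(200, 6))
true_w = rng.normal(size=6)
t_r = X @ true_w + 0.1 * rng.normal(size=200)   # "measured" retention times

# one hidden layer, trained by back-propagation on squared error
W1 = rng.normal(scale=0.5, size=(6, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=16);      b2 = 0.0
lr = 0.01
losses = []
for _ in range(500):
    h = np.tanh(X @ W1 + b1)                    # hidden activations
    pred = h @ W2 + b2                          # predicted retention times
    err = pred - t_r
    losses.append(np.mean(err ** 2))
    # gradients (constant factor absorbed into the learning rate)
    gW2 = h.T @ err / len(X); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)       # back-propagate through tanh
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print(f"MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In the screening workflow, a predicted tR window like this is then cross-checked against the HRMS accurate-mass match to raise or lower confidence in a suspect identification.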

  17. Extensive unusual lesions on a large number of immersed human victims found to be from cookiecutter sharks (Isistius spp.): an examination of the Yemenia plane crash.

    Science.gov (United States)

    Ribéreau-Gayon, Agathe; Rando, Carolyn; Schuliar, Yves; Chapenoire, Stéphane; Crema, Enrico R; Claes, Julien; Seret, Bernard; Maleret, Vincent; Morgan, Ruth M

    2017-03-01

    Accurate determination of the origin and timing of trauma is key in medicolegal investigations when the cause and manner of death are unknown. However, distinguishing between criminal and accidental perimortem trauma and postmortem modifications can be challenging when facing unidentified trauma. Postmortem examination of the immersed victims of the Yemenia airplane crash (Comoros, 2009) demonstrated the challenges of diagnosing the extensive unusual circular lesions found on the corpses. The objective of this study was to identify the origin and timing of occurrence (peri- or postmortem) of the lesions. A retrospective multidisciplinary study using autopsy reports (n = 113) and postmortem digital photos (n = 3,579) was conducted. Of the 113 victims recovered from the crash, 62 (54.9%) presented unusual lesions (n = 560), with a median number of 7 (IQR 3-13) and a maximum of 27 per corpse. The majority of lesions were elliptic (58%) and had an area smaller than 10 cm² (82.1%). Some lesions (6.8%) also showed clear tooth notches on their edges. These findings identified most of the lesions as consistent with postmortem bite marks from cookiecutter sharks (Isistius spp.). This suggests that cookiecutter sharks were important agents in the degradation of the corpses and thus introduced potential cognitive bias into the determination of the cause and manner of death. A novel set of evidence-based identification criteria for cookiecutter bite marks on human bodies is developed to facilitate more accurate medicolegal diagnosis of cookiecutter bites.

  18. Assessment of delta ferrite in multipass TIG welds of 40 mm thick SS 316L plates: a comparative study of ferrite number (FN) prediction and experimental measurements

    International Nuclear Information System (INIS)

    Buddu, Ramesh Kumar; Shaikh, Shamsuddin; Raole, Prakash M.; Sarkar, Biswanath

    2015-01-01

    Austenitic stainless steels are widely used in the fabrication of major fusion reactor systems such as the vacuum vessel, divertor, cryostat and other major structural components. AISI SS316L of various thicknesses is used for its superior mechanical properties, corrosion resistance, fatigue behaviour and stability at high operating temperatures. The components are fabricated using welding techniques such as TIG welding with suitable filler material. For the vacuum vessel, multipass welding is unavoidable because of the high plate thicknesses involved (as in the ITER and DEMO reactors). In general, austenitic welds contain a fraction of delta ferrite phase in multipass welds, whose quantity depends on the weld thermal cycles (heat input and cooling rates associated with the process conditions) and the chemical composition of the welds. Owing to the repeated weld thermal passes, the microstructure is adversely altered by the presence of complex phases such as austenite, ferrite and delta ferrite, which subsequently influence the mechanical properties of the joints, such as tensile strength and impact toughness. Control of the delta ferrite is necessary to retain compatible final joint properties, and hence its evaluation is vital before fabrication. The present paper reports a detailed analysis of the delta ferrite phase in the welded region and heat affected zones of 40 mm thick SS316L plates welded by a specially designed multipass narrow-groove TIG welding process under three different heat input conditions (1.67 kJ/mm, 1.78 kJ/mm, 1.87 kJ/mm). Correlation of the delta ferrite microstructure by optical microscopy and high resolution SEM has been carried out, and different types of acicular and vermicular delta ferrite structures are observed. This is further correlated with non-destructive magnetic measurements using a Ferritescope. The measured ferrite number (FN) is correlated with the delta ferrite phase formed. The chemical composition of weld samples is

  19. Analysis of the Latitudinal Variability of Tropospheric Ozone in the Arctic Using the Large Number of Aircraft and Ozonesonde Observations in Early Summer 2008

    Science.gov (United States)

    Ancellet, Gerard; Daskalakis, Nikos; Raut, Jean Christophe; Quennehen, Boris; Ravetta, Francois; Hair, Jonathan; Tarasick, David; Schlager, Hans; Weinheimer, Andrew J.; Thompson, Anne M.; hide

    2016-01-01

    The goals of the paper are to: (1) present tropospheric ozone (O3) climatologies in summer 2008 based on a large amount of measurements during the International Polar Year, when the Polar Study using Aircraft, Remote Sensing, Surface Measurements, and Models of Climate Chemistry, Aerosols, and Transport (POLARCAT) campaigns were conducted, and (2) investigate the processes that determine O3 concentrations in two different regions (Canada and Greenland) that were thoroughly studied using measurements from 3 aircraft and 7 ozonesonde stations. This paper provides an integrated analysis of these observations, and the latitudinal and vertical variability of tropospheric ozone north of 55°N during this period is discussed using a regional model (WRF-Chem). Ozone, CO and potential vorticity (PV) distributions are extracted from the simulation at the measurement locations. The model is able to reproduce the O3 latitudinal and vertical variability, but a negative O3 bias of 6-15 ppbv is found in the free troposphere above 4 km, especially over Canada. Ozone average concentrations are of the order of 65 ppbv at altitudes above 4 km both over Canada and Greenland, while they are less than 50 ppbv in the lower troposphere. The relative influence of stratosphere-troposphere exchange (STE) and of ozone production related to local biomass burning (BB) emissions is discussed using differences between average values of O3, CO and PV for Southern and Northern Canada or Greenland and two vertical ranges in the troposphere: 0-4 km and 4-8 km. For Canada, the model CO distribution and the weak correlation (less than 30%) of O3 and PV suggest that STE is not the major contribution to average tropospheric ozone at latitudes less than 70°N, because local BB emissions were significant during the 2008 summer period. Conversely, over Greenland, significant STE is found according to the better O3 versus PV

  20. Evaluation of PCR procedures for detecting and quantifying Leishmania donovani DNA in large numbers of dried human blood samples from a visceral leishmaniasis focus in Northern Ethiopia.

    Science.gov (United States)

    Abbasi, Ibrahim; Aramin, Samar; Hailu, Asrat; Shiferaw, Welelta; Kassahun, Aysheshm; Belay, Shewaye; Jaffe, Charles; Warburg, Alon

    2013-03-27

    Visceral Leishmaniasis (VL) is a disseminated protozoan infection caused by Leishmania donovani parasites which affects almost half a million persons annually. Most of these are from the Indian sub-continent, East Africa and Brazil. Our study was designed to elucidate the role of symptomatic and asymptomatic Leishmania donovani infected persons in the epidemiology of VL in Northern Ethiopia. The efficacy of quantitative real-time kinetoplast DNA/PCR (qRT-kDNA PCR) for detecting Leishmania donovani in dried-blood samples was assessed in volunteers living in an endemic focus. Of 4,757 samples, 680 (14.3%) were found positive for Leishmania k-DNA, but most of those (69%) had less than 10 parasites/ml of blood. Samples were re-tested using identical protocols, and only 59.3% of the samples with 10 parasites/ml or less were qRT-kDNA PCR positive the second time. Furthermore, 10.8% of the PCR negative samples were positive in the second test. Most samples with higher parasitemias remained positive upon re-examination (55/59 = 93%). We also compared three different methods for DNA preparation. Phenol-chloroform was more efficient than sodium hydroxide or potassium acetate. DNA sequencing of ITS1 PCR products showed that 20/22 samples were Leishmania donovani while two had ITS1 sequences homologous to Leishmania major. Although qRT-kDNA PCR is a highly sensitive test, the dependability of low positives remains questionable. It is crucial to correlate PCR parasitemia with infectivity to sand flies. While optimal sensitivity is achieved by targeting k-DNA, it is important to validate the causative species of VL by DNA sequencing.

  1. Efficacy, Reliability, and Safety of Completely Autologous Fibrin Glue in Neurosurgical Procedures: Single-Center Retrospective Large-Number Case Study.

    Science.gov (United States)

    Nakayama, Noriyuki; Yano, Hirohito; Egashira, Yusuke; Enomoto, Yukiko; Ohe, Naoyuki; Kanemura, Nobuhiro; Kitagawa, Junichi; Iwama, Toru

    2018-01-01

    Commercially available fibrin glue (Com-FG), which is used commonly worldwide, is produced with pooled human plasma from multiple donors. However, it contains added bovine aprotinin, which carries risks of infection, allogenic immunity, and allergic reactions. We evaluated the efficacy, reliability, and safety of completely autologous fibrin glue (CAFG). From August 2014 to February 2016, prospective data were collected and analyzed from 153 patients. CAFG was prepared with the CryoSeal System using autologous blood and was applied during neurosurgical procedures. Using CAFG-soaked oxidized regenerated cellulose and/or polyglycolic acid sheets, we performed pinpoint hemostasis, transposed the offending vessels in microvascular decompression, and covered the dural incision to prevent cerebrospinal fluid leakage. The CryoSeal System generated a mean of 4.51 mL (range, 3.0-8.4 mL) of CAFG from 400 mL of autologous blood. Com-FG products were not used in our procedures. Only 6 patients required an additional allogeneic blood transfusion. The hemostatic effective rate was 96.1% (147 of 153 patients). Only 1 patient, who received transsphenoidal surgery for a pituitary adenoma, presented with the complication of delayed postoperative cerebrospinal fluid leakage (0.65%). No patient developed allergic reactions or systemic complications associated with the use of CAFG. CAFG effectively provides hemostatic, adhesive, and safety performance. The timing and three-dimensional shape of CAFG-soaked oxidized regenerated cellulose and/or polyglycolic acid sheet solidification can be controlled with slow fibrin formation. The cost to prepare CAFG is similar to that of Com-FG products, and it can therefore be easily used at most institutions. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Eating in the absence of hunger in adolescents: intake after a large-array meal compared with that after a standardized meal.

    Science.gov (United States)

    Shomaker, Lauren B; Tanofsky-Kraff, Marian; Zocca, Jaclyn M; Courville, Amber; Kozlosky, Merel; Columbo, Kelli M; Wolkoff, Laura E; Brady, Sheila M; Crocker, Melissa K; Ali, Asem H; Yanovski, Susan Z; Yanovski, Jack A

    2010-10-01

    Eating in the absence of hunger (EAH) is typically assessed by measuring youths' intake of palatable snack foods after a standard meal designed to reduce hunger. Because the energy intake required to reach satiety varies among individuals, a standard meal may not ensure the absence of hunger among participants of all weight strata. The objective of this study was to compare adolescents' EAH observed after access to a very large food array with EAH observed after a standardized meal. Seventy-eight adolescents participated in a randomized crossover study during which EAH was measured as intake of palatable snacks after ad libitum access to a very large array of lunch-type foods (>10,000 kcal) and after a lunch meal standardized to provide 50% of the daily estimated energy requirements. The adolescents consumed more energy and reported less hunger after the large-array meal than after the standardized meal (P values < 0.001). They displayed less EAH after the large-array meal than after the standardized meal (295 ± 18 compared with 365 ± 20 kcal; P < 0.001), but EAH intakes after the large-array meal and after the standardized meal were positively correlated (P values < 0.001). The body mass index z score and overweight were positively associated with EAH in both paradigms after age, sex, race, pubertal stage, and meal intake were controlled for (P values ≤ 0.05). EAH is observable and positively related to body weight regardless of whether youth eat in the absence of hunger from a very large-array meal or from a standardized meal. This trial was registered at clinicaltrials.gov as NCT00631644.

  3. Design and Fabrication of 3D printed Scaffolds with a Mechanical Strength Comparable to Cortical Bone to Repair Large Bone Defects

    Science.gov (United States)

    Roohani-Esfahani, Seyed-Iman; Newman, Peter; Zreiqat, Hala

    2016-01-01

    A challenge in regenerating large bone defects under load is to create scaffolds with large and interconnected pores while providing a compressive strength comparable to cortical bone (100-150 MPa). Here we design a novel hexagonal architecture for a glass-ceramic scaffold and fabricate anisotropic, highly porous three-dimensional scaffolds with a compressive strength of 110 MPa. Scaffolds with the hexagonal design demonstrated high fatigue resistance (1,000,000 cycles at 1-10 MPa compressive cyclic load), failure reliability and flexural strength (30 MPa) compared with those of conventional architecture. The obtained strength is 150 times greater than values reported for polymeric and composite scaffolds and 5 times greater than reported values for ceramic and glass scaffolds at similar porosity. These scaffolds open avenues for treatment of load bearing bone defects in orthopaedic, dental and maxillofacial applications.

  4. Cycle killer... qu'est-ce que c'est? On the comparative approximability of hybridization number and directed feedback vertex set

    NARCIS (Netherlands)

    Kelk, S.M.; Iersel, van L.J.J.; Lekic, N.; Linz, S.; Scornavacca, C.; Stougie, L.

    2011-01-01

    We show that the problem of computing the hybridization number of two rooted binary phylogenetic trees on the same set of taxa X has a constant factor polynomial-time approximation if and only if the problem of computing a minimum-size feedback vertex set in a directed graph (DFVS) has a constant factor polynomial-time approximation.

  5. Screening for copy-number alterations and loss of heterozygosity in chronic lymphocytic leukemia--a comparative study of four differently designed, high resolution microarray platforms

    DEFF Research Database (Denmark)

    Gunnarsson, R.; Staaf, J.; Jansson, M.

    2008-01-01

    Screening for gene copy-number alterations (CNAs) has improved by applying genome-wide microarrays, where SNP arrays also allow analysis of loss of heterozygosity (LOH). We here analyzed 10 chronic lymphocytic leukemia (CLL) samples using four different high-resolution platforms: BAC arrays (32K)...

  6. Comparative Analysis of the Number and Structure of the Complexes of Microscopic Fungi in Tundra and Taiga Soils in the North of the Kola Peninsula

    Science.gov (United States)

    Korneikova, M. V.

    2018-01-01

    The number, biomass, length of fungal mycelium, and species diversity of microscopic fungi have been studied in soils of the tundra and taiga zones in the northern part of the Kola Peninsula: Al-Fe-humus podzols (Albic Podzols), podburs (Entic Podzols), dry peaty soils (Folic Histosols), low-moor peat soils (Sapric Histosols), and soils of frost bare spots (Cryosols). The number of cultivated microscopic fungi in tundra soils varied from 8 to 328 thousand CFU/g, their biomass averaged 1.81 ± 0.19 mg/g, and the length of fungal mycelium averaged 245 ± 25 m/g. The number of micromycetes in taiga soils varied from 80 to 350 thousand CFU/g, the number of fungal propagules in some years reached 600 thousand CFU/g; the fungal biomass varied from 0.23 to 6.2 mg/g, and the length of fungal mycelium varied from 32 to 3900 m/g. Overall, 36 species of fungi belonging to 16 genera, 13 families, and 8 orders were isolated from tundra soils. The species diversity of microscopic fungi in taiga soils was significantly higher: 87 species belonging to 31 genera, 21 families, and 11 orders. Fungi from the Penicillium genus predominated in both natural zones and constituted 38-50% of the total number of isolated species. The soils of tundra and taiga zones were characterized by their own complexes of micromycetes; the similarity of their species composition was about 40%. In soils of the tundra zone, Mortierella longicollis, Penicillium melinii, P. raistrickii, and P. simplicissimum predominated; dominant fungal species in soils of the taiga zone were represented by M. longicollis, P. decumbens, P. implicatum, and Umbelopsis isabellina.

  7. Those fascinating numbers

    CERN Document Server

    Koninck, Jean-Marie De

    2009-01-01

    Who would have thought that listing the positive integers along with their most remarkable properties could end up being such an engaging and stimulating adventure? The author uses this approach to explore elementary and advanced topics in classical number theory. A large variety of numbers are contemplated: Fermat numbers, Mersenne primes, powerful numbers, sublime numbers, Wieferich primes, insolite numbers, Sastry numbers, voracious numbers, to name only a few. The author also presents short proofs of miscellaneous results and constantly challenges the reader with a variety of old and new n

  8. Gaming the Law of Large Numbers

    Science.gov (United States)

    Hoffman, Thomas R.; Snapp, Bart

    2012-01-01

    Many view mathematics as a rich and wonderfully elaborate game. In turn, games can be used to illustrate mathematical ideas. Fibber's Dice, an adaptation of the game Liar's Dice, is a fast-paced game that rewards gutsy moves and favors the underdog. It also brings to life concepts arising in the study of probability. In particular, Fibber's Dice…
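
    The probability concept the game brings to life is the law of large numbers. A minimal simulation (illustrative only, not the game's actual rules) shows the sample mean of fair die rolls converging to the expected value 3.5:

    ```python
    import random

    def running_mean_of_rolls(n_rolls, seed=0):
        """Simulate fair six-sided die rolls and return the sample mean."""
        rng = random.Random(seed)  # fixed seed for reproducibility
        total = 0
        for _ in range(n_rolls):
            total += rng.randint(1, 6)
        return total / n_rolls

    # The sample mean approaches the expected value 3.5 as n grows.
    for n in (10, 1000, 100000):
        print(n, round(running_mean_of_rolls(n), 3))
    ```

    With more rolls, the deviation of the sample mean from 3.5 shrinks, which is exactly the behavior the game exploits.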

  9. Earthquake number forecasts testing

    Science.gov (United States)

    Kagan, Yan Y.

    2017-10-01

    We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are especially required to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for the catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be shown to be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study upper statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values for the skewness and the kurtosis increase for the smaller magnitude threshold and increase with even greater intensity for small temporal subdivision of catalogues. 
The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness
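
    The overdispersion argument above can be made concrete. A sketch (with illustrative parameter values, not fitted to the catalogues) compares the moments of a Poisson distribution and a negative binomial distribution with the same mean; the NBD's second parameter produces variance exceeding the mean:

    ```python
    import math

    def poisson_moments(lam):
        """Mean, variance, skewness of a Poisson(lam) count distribution."""
        return lam, lam, 1 / math.sqrt(lam)

    def negbin_moments(r, p):
        """Mean, variance, skewness of a negative binomial NB(r, p),
        parameterized as the number of failures before r successes."""
        mean = r * (1 - p) / p
        var = r * (1 - p) / p**2
        skew = (2 - p) / math.sqrt(r * (1 - p))
        return mean, var, skew

    # Match the means; the NBD is overdispersed (variance > mean).
    lam = 10.0
    r, p = 5.0, 1 / 3  # hypothetical clustering parameters giving mean 10, variance 30
    print(poisson_moments(lam))
    print(negbin_moments(r, p))
    ```

    For equal means the NBD also has larger skewness, matching the empirical pattern the abstract reports for catalogues with low magnitude thresholds.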

  10. Statin eligibility and cardiovascular risk burden assessed by coronary artery calcium score: comparing the two guidelines in a large Korean cohort.

    Science.gov (United States)

    Rhee, Eun-Jung; Park, Se Eun; Oh, Hyung Geun; Park, Cheol-Young; Oh, Ki-Won; Park, Sung-Woo; Blankstein, Ron; Plutzky, Jorge; Lee, Won-Young

    2015-05-01

    To investigate statin eligibility and the predictability of cardiovascular disease under the AHA/ACC and ATP III guidelines, comparing those results to concomitant coronary artery calcium scores (CACS) in a large cohort of Korean individuals who met statin-eligibility criteria. Among 19,920 participants in a health screening program, eligibility for statin treatment was assessed by the two guidelines. The presence and extent of coronary artery calcification (CAC) was measured by multi-detector computed tomography and compared among the various groups defined by the two guidelines. Applying the new ACC/AHA guideline to the health screening cohort increased the statin-eligible population from 18.7% (as defined by ATP III) to 21.7%. Statin-eligible subjects as defined only by the ACC/AHA guideline manifested a higher proportion of subjects with CAC compared with those meeting only ATP III criteria, even after adjustment for age and sex (47.1 vs. 33.8%, p < 0.001). Subjects eligible under the ACC/AHA guideline showed a higher odds ratio for the presence of CACS > 0 compared with those meeting ATP III criteria {3.493 (3.245-3.759) vs. 2.865 (2.653-3.094)}, which was attenuated after adjustment for age and sex. In this large Korean cohort, more subjects would have qualified for statin initiation under the new ACC/AHA guideline as compared with the proportion recommended for statin treatment by the ATP III guideline. Among statin-eligible Korean health screening subjects, the new ACC/AHA guideline identified a greater extent of atherosclerosis as assessed by CACS as compared to the ATP III guideline assessment. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
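
    The odds ratios reported above follow from standard 2×2-table arithmetic. A sketch with hypothetical counts (not the study's data):

    ```python
    def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
        """Odds ratio from a 2x2 table: (a/b) / (c/d)."""
        return (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)

    # Hypothetical counts for illustration: cases = subjects with CAC > 0,
    # "exposed" = statin-eligible under a given guideline.
    print(odds_ratio(120, 80, 60, 140))  # → 3.5
    ```

    Adjustment for age and sex, as in the study, would replace this crude ratio with one estimated from a regression model.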

  11. Hupa Numbers.

    Science.gov (United States)

    Bennett, Ruth, Ed.; And Others

    An introduction to the Hupa number system is provided in this workbook, one in a series of numerous materials developed to promote the use of the Hupa language. The book is written in English with Hupa terms used only for the names of numbers. The opening pages present the numbers from 1-10, giving the numeral, the Hupa word, the English word, and…

  12. Triangular Numbers

    Indian Academy of Sciences (India)

    Admin

    Keywords: triangular number, figurate number, rangoli, Brahmagupta–Pell equation, Jacobi triple product identity. Figure 1: The first four triangular numbers. Anuradha S Garge completed her PhD from Pune University in 2008 under the supervision of Prof. S A Katre. Her research interests include K-theory and number theory.

  13. Proth Numbers

    Directory of Open Access Journals (Sweden)

    Schwarzweller Christoph

    2015-02-01

    In this article we introduce Proth numbers and prove two theorems on such numbers being prime [3]. We also give revised versions of Pocklington’s theorem and of the Legendre symbol. Finally, we prove Pepin’s theorem and that the fifth Fermat number is not prime.
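
    A Proth number has the form k·2^m + 1 with k odd and k < 2^m, and Proth's theorem states that such an N is prime if and only if some a satisfies a^((N−1)/2) ≡ −1 (mod N). A sketch (witness range chosen for illustration):

    ```python
    def is_proth_number(n):
        """A Proth number is k * 2**m + 1 with k odd and k < 2**m."""
        if n < 3 or n % 2 == 0:
            return False
        k, m = n - 1, 0
        while k % 2 == 0:
            k //= 2
            m += 1
        return k < 2**m

    def proth_test(n, witnesses=range(2, 100)):
        """Proth's theorem: a Proth number n is prime iff some witness a
        satisfies a**((n-1)//2) == n - 1 (mod n)."""
        assert is_proth_number(n)
        return any(pow(a, (n - 1) // 2, n) == n - 1 for a in witnesses)

    print(proth_test(13))          # 3 * 2**2 + 1, prime → True
    print(proth_test(25))          # 3 * 2**3 + 1, composite → False
    print(proth_test(2**32 + 1))   # fifth Fermat number, composite → False
    ```

    The last line echoes the article's result on the fifth Fermat number: since 2^32 + 1 is a Proth number, failing the test for every witness is consistent with its compositeness.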

  14. A comparative study on total reflection X-ray fluorescence determination of low atomic number elements in air, helium and vacuum atmospheres using different excitation sources

    Science.gov (United States)

    Misra, N. L.; Kanrar, Buddhadev; Aggarwal, S. K.; Wobrauschek, Peter; Rauwolf, M.; Streli, Christina

    2014-09-01

    A comparison of trace element determinations of the low atomic number (Z) elements Na, Mg, Al, P, K and Ca in air, helium and vacuum atmospheres using W Lβ1, Mo Kα and Cr Kα excitations has been made. For Mo Kα and W Lβ1 excitations a Si(Li) detector with a beryllium window was used and measurements were performed in air and helium atmospheres. For Cr Kα excitation, a Si(Li) detector with an ultra-thin polymer window (UTW) was used and measurements were made in vacuum and air atmospheres. The sensitivities of the elemental X-ray lines were determined using TXRF spectra of standard solutions, processed with the IAEA QXAS program. The elemental concentrations in other solutions were determined using their TXRF spectra and the pre-determined sensitivity values. The study suggests that, with the above experimental setup, Mo Kα excitation is not suited for trace determination of low atomic number elements. With W Lβ1 excitation and a helium atmosphere, the spectrometer can be used for the determination of elements with Z = 15 (P) and above with fairly good detection limits, whereas Cr Kα excitation with an ultra-thin polymer window and a vacuum atmosphere is good for elements having Z = 11 (Na) and above. The detection limits using this setup vary from 7048 pg for Na to 83 pg for Ti.

  15. Sagan numbers

    OpenAIRE

    Mendonça, J. Ricardo G.

    2012-01-01

    We define a new class of numbers based on the first occurrence of certain patterns of zeros and ones in the expansion of irrational numbers in a given base, and call them Sagan numbers, since they were first mentioned, in a special case, by the North American astronomer Carl E. Sagan in his science-fiction novel "Contact." Sagan numbers hold connections with a wealth of mathematical ideas. We describe some properties of the newly defined numbers and indicate directions for further amusement.
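
    The defining operation, finding the first occurrence of a pattern in the expansion of an irrational number, can be sketched in code. Here base 10 and the pattern "14" in sqrt(2) are illustrative choices, computed with Python's decimal module:

    ```python
    from decimal import Decimal, getcontext

    def sqrt2_digits(n):
        """First n fractional digits of sqrt(2), via the decimal module."""
        getcontext().prec = n + 10  # guard digits against rounding
        s = str(Decimal(2).sqrt())  # "1.41421356..."
        return s.split(".")[1][:n]

    def first_occurrence(pattern, digits):
        """0-based index of the first occurrence of `pattern` in the
        string of fractional digits, or -1 if absent."""
        return digits.find(pattern)

    digits = sqrt2_digits(1000)
    print(first_occurrence("14", digits))  # → 1 (sqrt(2) = 1.41421356...)
    ```

    The same search applied to binary expansions of patterns of zeros and ones matches the abstract's setting.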

  16. Comparative effects of split Orimulsion®, fuel oil No. 6 and Lago Medio crude oil over functional variables of an estuarine community

    International Nuclear Information System (INIS)

    Quilici, A.; Rodriguez-Grau, J.; Vasquez, P.; Infante, C.; Schiazza, J. La; Briceno, H.; Pereira, N.

    1995-01-01

    This study evaluates the effects, on comparative bases, of three different kinds of hydrocarbon products on physiological and dynamic parameters of a mangrove community (located in eastern Venezuela), by experimentally monitoring changes in those parameters using Laguncularia racemosa as a bioindicator mangrove species. Orimulsion® has been introduced recently in the world market; it is, in essence, an emulsion produced from bitumen mixed with water and stabilized with a surfactant. Results indicate that the litter disappearance rate, a parameter directly related to nutrient cycling and dynamics, is not altered by split Orimulsion® when compared to controls, while the other hydrocarbon products tested seem to exert an impact on this parameter. In addition, Orimulsion® is less retained in mangrove sediments compared with the other two products, as indicated by the remaining percent mean of total hydrocarbon soil content sampled on the same consecutive dates. Nonetheless, the photosynthetic capacity of leaves as well as soil respiratory rates, monitored at treatment and control plots, were not affected by any of the products tested. In conclusion, L. racemosa, in terms of its photosynthetic capacity, is not directly affected by any of the hydrocarbon products tested, nor does there seem to be an effect on soil respiration; however, Orimulsion®, in contrast to the other products, does not seem to cause an impact with regard to litter dynamics

  17. Eulerian numbers

    CERN Document Server

    Petersen, T Kyle

    2015-01-01

    This text presents the Eulerian numbers in the context of modern enumerative, algebraic, and geometric combinatorics. The book first studies Eulerian numbers from a purely combinatorial point of view, then embarks on a tour of how these numbers arise in the study of hyperplane arrangements, polytopes, and simplicial complexes. Some topics include a thorough discussion of gamma-nonnegativity and real-rootedness for Eulerian polynomials, as well as the weak order and the shard intersection order of the symmetric group. The book also includes a parallel story of Catalan combinatorics, wherein the Eulerian numbers are replaced with Narayana numbers. Again there is a progression from combinatorics to geometry, including discussion of the associahedron and the lattice of noncrossing partitions. The final chapters discuss how both the Eulerian and Narayana numbers have analogues in any finite Coxeter group, with many of the same enumerative and geometric properties. There are four supplemental chapters throughout, ...
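
    The Eulerian numbers discussed in the book satisfy the recurrence A(n, k) = (k+1)·A(n−1, k) + (n−k)·A(n−1, k−1), and each row sums to n!. A short sketch:

    ```python
    from math import factorial

    def eulerian_row(n):
        """Eulerian numbers A(n, k) for k = 0..n-1, via the recurrence
        A(n, k) = (k + 1) * A(n-1, k) + (n - k) * A(n-1, k-1)."""
        row = [1]  # row for n = 1
        for m in range(2, n + 1):
            row = [(k + 1) * (row[k] if k < len(row) else 0)
                   + (m - k) * (row[k - 1] if k >= 1 else 0)
                   for k in range(m)]
        return row

    print(eulerian_row(4))                        # → [1, 11, 11, 1]
    assert sum(eulerian_row(5)) == factorial(5)   # each row sums to n!
    ```

    A(n, k) counts permutations of {1, ..., n} with exactly k descents, which is why the row sum is the total number of permutations, n!.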

  18. Comparative Analyses between Skeletal Muscle miRNAomes from Large White and Min Pigs Revealed MicroRNAs Associated with Postnatal Muscle Hypertrophy.

    Science.gov (United States)

    Sheng, Xihui; Wang, Ligang; Ni, Hemin; Wang, Lixian; Qi, Xiaolong; Xing, Shuhan; Guo, Yong

    2016-01-01

    The molecular mechanism regulated by microRNAs (miRNAs) that underlies postnatal hypertrophy of skeletal muscle is complex and remains unclear. Here, the miRNAomes of longissimus dorsi muscle collected at five postnatal stages (60, 120, 150, 180, and 210 days after birth) from Large White (commercial breed) and Min pigs (indigenous breed of China) were analyzed by Illumina sequencing. We identified 734 miRNAs comprising 308 annotated miRNAs and 426 novel miRNAs, of which 307 could be considered pig-specific. Comparative analysis between two breeds suggested that 60 and 120 days after birth were important stages for skeletal muscle hypertrophy and intramuscular fat accumulation. A total of 263 miRNAs were significantly differentially expressed between two breeds at one or more developmental stages. In addition, the differentially expressed miRNAs between every two adjacent developmental stages in each breed were determined. Notably, ssc-miR-204 was significantly more highly expressed in Min pig skeletal muscle at all postnatal stages compared with its expression in Large White pig skeletal muscle. Based on gene ontology and KEGG pathway analyses of its predicted target genes, we concluded that ssc-miR-204 may exert an impact on postnatal hypertrophy of skeletal muscle by regulating myoblast proliferation. The results of this study will help in elucidating the mechanism underlying postnatal hypertrophy of skeletal muscle modulated by miRNAs, which could provide valuable information for improvement of pork quality and human myopathy.

  19. Large scale comparative codon-pair context analysis unveils general rules that fine-tune evolution of mRNA primary structure.

    Directory of Open Access Journals (Sweden)

    Gabriela Moura

    BACKGROUND: Codon usage and codon-pair context are important gene primary structure features that influence mRNA decoding fidelity. In order to identify general rules that shape codon-pair context and minimize mRNA decoding error, we have carried out a large scale comparative codon-pair context analysis of 119 fully sequenced genomes. METHODOLOGIES/PRINCIPAL FINDINGS: We have developed mathematical and software tools for large scale comparative codon-pair context analysis. These methodologies unveiled general and species-specific codon-pair context rules that govern evolution of mRNAs in the 3 domains of life. We show that evolution of bacterial and archaeal mRNA primary structure is mainly dependent on constraints imposed by the translational machinery, while in eukaryotes DNA methylation and tri-nucleotide repeats impose strong biases on codon-pair context. CONCLUSIONS: The data highlight fundamental differences between prokaryotic and eukaryotic mRNA decoding rules, which are partially independent of codon usage.

  20. Comparing deep neural network and other machine learning algorithms for stroke prediction in a large-scale population-based electronic medical claims database.

    Science.gov (United States)

    Chen-Ying Hung; Wei-Chen Chen; Po-Tsun Lai; Ching-Heng Lin; Chi-Chun Lee

    2017-07-01

    Electronic medical claims (EMCs) can be used to accurately predict the occurrence of a variety of diseases, which can contribute to precise medical interventions. While there is a growing interest in the application of machine learning (ML) techniques to address clinical problems, the use of deep learning in healthcare has only recently gained attention. Deep learning, such as the deep neural network (DNN), has achieved impressive results in the areas of speech recognition, computer vision, and natural language processing in recent years. However, deep learning is often difficult to comprehend due to the complexities in its framework. Furthermore, this method had not yet been demonstrated to achieve better performance than other conventional ML algorithms in disease prediction tasks using EMCs. In this study, we utilize a large population-based EMC database of around 800,000 patients to compare DNN with three other ML approaches for predicting 5-year stroke occurrence. The results show that DNN and gradient boosting decision tree (GBDT) can achieve similarly high prediction accuracies that are better than those of the logistic regression (LR) and support vector machine (SVM) approaches. Meanwhile, DNN achieves optimal results using smaller amounts of patient data than the GBDT method.
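
    Comparing classifiers as in this study requires scoring them with a common metric on the same labels. One standard choice is ROC AUC, which can be computed from the rank-sum (Mann-Whitney) identity; the models and scores below are hypothetical, not the study's:

    ```python
    def roc_auc(labels, scores):
        """Area under the ROC curve via the rank-sum identity:
        AUC = P(score of a random positive > score of a random negative),
        counting ties as one half."""
        pos = [s for y, s in zip(labels, scores) if y == 1]
        neg = [s for y, s in zip(labels, scores) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # Hypothetical predicted risks from two models on the same six patients:
    labels  = [1, 1, 1, 0, 0, 0]
    model_a = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
    model_b = [0.6, 0.9, 0.7, 0.8, 0.2, 0.1]
    print(roc_auc(labels, model_a), roc_auc(labels, model_b))
    ```

    The model with higher AUC ranks true cases above non-cases more often, which is the sense in which one approach "performs better" than another here.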

  1. Transfinite Numbers

    Indian Academy of Sciences (India)

    Transfinite Numbers. What is Infinity? S M Srivastava. In a series of revolutionary articles written during the last quarter of the nineteenth century, the great German mathematician Georg Cantor removed the age-old mistrust of infinity and created an exceptionally beautiful and useful theory of transfinite numbers. This is.

  2. Room for wind. An investigation into the possibilities for the erection of large numbers of wind turbines. Ruimte voor wind. Een studie naar de plaatsingsmogelijkheden van grote aantallen windturbines

    Energy Technology Data Exchange (ETDEWEB)

    Arkesteijn, L; Van Huis, G; Reckman, E

    1987-01-01

    The Dutch government aims to realize a wind power capacity in The Netherlands of 1000 MW in the year 2000. Environmental impacts of the erection of a large number of 200 kW and 1 MW wind turbines are studied. Four siting models have been developed, paying attention to environmental and economic aspects, the possibilities to feed the electric power into the national grid, and the availability and reliability of sufficient wind. Noise pollution and danger to birds are to be avoided. The choice between the construction of wind parks, where a number of wind turbines is concentrated in a small area, and a more dispersed arrangement is somewhat difficult if all relevant factors are taken into consideration. Without government intervention the target of 1000 MW in the year 2000 will probably not be attained. It is therefore desirable to pursue an active energy policy in favor of wind energy, for which many options are possible.

  3. Comparing the life cycle costs of using harvest residue as feedstock for small- and large-scale bioenergy systems (part II)

    International Nuclear Information System (INIS)

    Cleary, Julian; Wolf, Derek P.; Caspersen, John P.

    2015-01-01

    In part II of our two-part study, we estimate the nominal electricity generation and GHG (greenhouse gas) mitigation costs of using harvest residue from a hardwood forest in Ontario, Canada to fuel (1) a small-scale (250 kWe) combined heat and power wood chip gasification unit and (2) a large-scale (211 MWe) coal-fired generating station retrofitted to combust wood pellets. Under favorable operational and regulatory conditions, generation costs are similar: 14.1 and 14.9 cents per kWh (c/kWh) for the small- and large-scale facilities, respectively. However, GHG mitigation costs are considerably higher for the large-scale system: $159/tonne of CO2 eq., compared to $111 for the small-scale counterpart. Generation costs increase substantially under existing conditions, reaching: (1) 25.5 c/kWh for the small-scale system, due to a regulation mandating the continual presence of an operating engineer; and (2) 22.5 c/kWh for the large-scale system due to insufficient biomass supply, which reduces plant capacity factor from 34% to 8%. Limited inflation adjustment (50%) of feed-in tariff rates boosts these costs by 7% to 11%. Results indicate that policy generalizations based on scale require careful consideration of the range of operational/regulatory conditions in the jurisdiction of interest. Further, if GHG mitigation is prioritized, small-scale systems may be more cost-effective. - Highlights: • Generation costs for two forest bioenergy systems of different scales are estimated. • Nominal electricity costs are 14.1–28.3 cents/kWh for the small-scale plant. • Nominal electricity costs are 14.9–24.2 cents/kWh for the large-scale plant. • GHG mitigation costs from displacing coal and LPG are $111-$281/tonne of CO2 eq. • High sensitivity to cap. factor (large-scale) and labor requirements (small-scale)
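
    A GHG mitigation cost of the kind quoted above is, in outline, the extra generation cost divided by the emissions avoided per kWh. A sketch of that arithmetic with hypothetical inputs (not the paper's underlying data):

    ```python
    def ghg_mitigation_cost(bio_cost_per_kwh, base_cost_per_kwh,
                            base_emissions_kg_per_kwh, bio_emissions_kg_per_kwh):
        """Cost of avoided emissions in $/tonne CO2 eq.: the extra generation
        cost divided by the emissions avoided, both per kWh."""
        extra_cost = bio_cost_per_kwh - base_cost_per_kwh                # $/kWh
        avoided = base_emissions_kg_per_kwh - bio_emissions_kg_per_kwh   # kg/kWh
        return extra_cost / avoided * 1000                               # kg → tonne

    # Hypothetical illustrative inputs: bioenergy at 14.1 c/kWh vs a fossil
    # baseline at 5.0 c/kWh, avoiding 0.9 kg CO2 eq. per kWh.
    print(round(ghg_mitigation_cost(0.141, 0.050, 0.95, 0.05), 1))  # → 101.1
    ```

    This also shows why the large-scale system's low capacity factor hurts twice: it raises the cost per kWh while leaving the avoided emissions per kWh unchanged, inflating the ratio.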

  4. Single-port (OctoPort) assisted extracorporeal ovarian cystectomy for the treatment of large ovarian cysts: compare to conventional laparoscopy and laparotomy.

    Science.gov (United States)

    Chong, Gun Oh; Hong, Dae Gy; Lee, Yoon Soon

    2015-01-01

    To evaluate single-port assisted extracorporeal cystectomy for treatment of large ovarian cysts and to compare its surgical outcomes, complications, and cystic content spillage rates with those of conventional laparoscopy and laparotomy. Retrospective study (Canadian Task Force classification II-2). University teaching hospital. Twenty-five patients who underwent single-port assisted extracorporeal cystectomy (group 1), 33 patients who underwent conventional laparoscopy (group 2), and 25 patients who underwent laparotomy (group 3). Surgical outcomes, complications, and spillage rates in group 1 were compared with those in groups 2 and 3. Patient characteristics and tumor histologic findings were similar in the 3 groups. The mean (SD) largest diameter of ovarian cysts was 11.4 (4.2) cm in group 1, 9.7 (2.3) cm in group 2, and 12.0 (3.4) cm in group 3. Operative time in groups 1 and 2 was similar at 69.3 (26.3) minutes vs 73.1 (36.3) minutes (p = .66); however, operative time in group 1 was shorter than in group 3, at 69.3 (26.3) minutes vs 87.5 (26.6) minutes (p = .02). Blood loss in group 1 was significantly lower than in groups 2 and 3, at 16.0 (19.4) mL vs 36.1 (20.7) mL (p < .001) and 16.0 (19.4) mL vs 42.2 (39.7) mL (p = .005). The spillage rate in group 1 was markedly lower than in group 2, at 8.0% vs 69.7% (p < .001). Single-port assisted extracorporeal cystectomy offers an alternative to conventional laparoscopy and laparotomy for management of large ovarian cysts, with comparable surgical outcomes. Furthermore, the cyst content spillage rate in single-port assisted extracorporeal cystectomy was remarkably lower than that in conventional laparoscopy. Copyright © 2015. Published by Elsevier Inc.

  5. Chocolate Numbers

    OpenAIRE

    Ji, Caleb; Khovanova, Tanya; Park, Robin; Song, Angela

    2015-01-01

    In this paper, we consider a game played on a rectangular $m \\times n$ gridded chocolate bar. Each move, a player breaks the bar along a grid line. Each move after that consists of taking any piece of chocolate and breaking it again along existing grid lines, until just $mn$ individual squares remain. This paper enumerates the number of ways to break an $m \\times n$ bar, which we call chocolate numbers, and introduces four new sequences related to these numbers. Using various techniques, we p...
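
    The recurrence behind such counts is not spelled out in this abstract. As a hypothetical illustration (not necessarily the authors' approach), ordered break sequences can be counted by conditioning on the first cut and interleaving the remaining breaks of the two resulting pieces with a binomial coefficient:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def chocolate(m: int, n: int) -> int:
    """Count ordered sequences of breaks reducing an m x n bar to unit squares.

    An m x n piece needs m*n - 1 breaks in total. After the first cut, the
    breaks of the two sub-pieces may be interleaved in any order, which gives
    the binomial factor below.
    """
    if m * n == 1:
        return 1
    total = 0
    # vertical first cut: m x n -> m x k and m x (n - k)
    for k in range(1, n):
        total += comb(m * n - 2, m * k - 1) * chocolate(m, k) * chocolate(m, n - k)
    # horizontal first cut: m x n -> j x n and (m - j) x n
    for j in range(1, m):
        total += comb(m * n - 2, j * n - 1) * chocolate(j, n) * chocolate(m - j, n)
    return total

print(chocolate(1, 3))  # 2
print(chocolate(2, 2))  # 4
```

For a 1 x n bar this recurrence reduces to (n-1)!, since the n-1 cuts can be made in any order.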

  6. Number theory

    CERN Document Server

    Andrews, George E

    1994-01-01

    Although mathematics majors are usually conversant with number theory by the time they have completed a course in abstract algebra, other undergraduates, especially those in education and the liberal arts, often need a more basic introduction to the topic. In this book the author solves the problem of maintaining the interest of students at both levels by offering a combinatorial approach to elementary number theory. In studying number theory from such a perspective, mathematics majors are spared repetition and provided with new insights, while other students benefit from the consequent simpl

  7. Nice numbers

    CERN Document Server

    Barnes, John

    2016-01-01

    In this intriguing book, John Barnes takes us on a journey through aspects of numbers much as he took us on a geometrical journey in Gems of Geometry. Similarly originating from a series of lectures for adult students at Reading and Oxford University, this book touches a variety of amusing and fascinating topics regarding numbers and their uses both ancient and modern. The author intrigues and challenges his audience with both fundamental number topics such as prime numbers and cryptography, and themes of daily needs and pleasures such as counting one's assets, keeping track of time, and enjoying music. Puzzles and exercises at the end of each lecture offer additional inspiration, and numerous illustrations accompany the reader. Furthermore, a number of appendices provide in-depth insights into diverse topics such as Pascal’s triangle, the Rubik cube, Mersenne’s curious keyboards, and many others. A theme running throughout is the question of what is one's favourite number. Written in an engaging and witty sty...

  8. Comparing effects of land reclamation techniques on water pollution and fishery loss for a large-scale offshore airport island in Jinzhou Bay, Bohai Sea, China.

    Science.gov (United States)

    Yan, Hua-Kun; Wang, Nuo; Yu, Tiao-Lan; Fu, Qiang; Liang, Chen

    2013-06-15

    Plans are being made to construct Dalian Offshore Airport in Jinzhou Bay with a reclamation area of 21 km². The large-scale reclamation can be expected to have negative effects on the marine environment, and these effects vary depending on the reclamation techniques used. Water quality mathematical models were developed and biology resource investigations were conducted to compare effects of an underwater explosion sediment removal and rock dumping technique and a silt dredging and rock dumping technique on water pollution and fishery loss. The findings show that creation of the artificial island with the underwater explosion sediment removal technique would greatly impact the marine environment. However, the impact for the silt dredging technique would be less. The conclusions from this study provide an important foundation for the planning of Dalian Offshore Airport and can be used as a reference for similar coastal reclamation and marine environment protection. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  9. Practice and effectiveness of web-based problem-based learning approach in a large class-size system: A comparative study.

    Science.gov (United States)

    Ding, Yongxia; Zhang, Peili

    2018-06-12

    Problem-based learning (PBL) is an effective and highly efficient teaching approach that is extensively applied in education systems across a variety of countries. This study aimed to investigate the effectiveness of web-based PBL teaching pedagogies in large classes. The cluster sampling method was used to separate two college-level nursing student classes (graduating class of 2013) into two groups. The experimental group (n = 162) was taught using a web-based PBL teaching approach, while the control group (n = 166) was taught using conventional teaching methods. We subsequently assessed the satisfaction of the experimental group in relation to the web-based PBL teaching mode. This assessment was performed following comparison of teaching activity outcomes pertaining to exams and self-learning capacity between the two groups. When compared with the control group, the examination scores and self-learning capabilities were significantly higher in the experimental group (P < .05), and the experimental group reported satisfaction with the web-based PBL teaching approach. In a large class-size teaching environment, the web-based PBL teaching approach appears to be more optimal than traditional teaching methods. These results demonstrate the effectiveness of web-based teaching technologies in problem-based learning. Copyright © 2018. Published by Elsevier Ltd.

  10. Retrospective comparative ten-year study of cumulative survival rates of remaining teeth in large edentulism treated with implant-supported fixed partial dentures or removable partial dentures.

    Science.gov (United States)

    Yamazaki, Seiya; Arakawa, Hikaru; Maekawa, Kenji; Hara, Emilio Satoshi; Noda, Kinji; Minakuchi, Hajime; Sonoyama, Wataru; Matsuka, Yoshizo; Kuboki, Takuo

    2013-07-01

    This study aimed to compare the survival rates of remaining teeth between implant-supported fixed dentures (IFDs) and removable partial dentures (RPDs) in patients with large edentulous cases. The second goal was to assess the risk factors for remaining tooth loss. The study subjects were selected among those who received prosthodontic treatment at Okayama University Dental Hospital for an edentulous space spanning at least four continuous missing teeth. Twenty-one patients were included in the IFD group and 82 patients were included in the RPD group. Survival rates of remaining teeth were calculated in three subcategories: (1) whole remaining teeth, (2) teeth adjacent to the intended edentulous space, and (3) teeth opposing the intended edentulous space. The ten-year cumulative survival rate of the whole remaining teeth was significantly higher in the IFD group (40.0%) than in the RPD group (24.4%). On the other hand, there was no significant difference between the two groups in the survival rate of teeth adjacent or opposing to the intended edentulous space. A Cox proportional hazard analysis revealed that RPD restoration and gender (male) were significant risk factors for remaining tooth loss (whole remaining teeth). These results suggest that IFD treatment can reduce the incidence of remaining tooth loss in large edentulous cases. Copyright © 2013 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  11. Neutrino number of the universe

    International Nuclear Information System (INIS)

    Kolb, E.W.

    1981-01-01

    The influence of grand unified theories on the lepton number of the universe is reviewed. A scenario is presented for the generation of a large (>> 1) lepton number and a small (<< 1) baryon number. 15 references

  12. Randomized Trial Comparing R-CHOP Versus High-Dose Sequential Chemotherapy in High-Risk Patients With Diffuse Large B-Cell Lymphomas.

    Science.gov (United States)

    Cortelazzo, Sergio; Tarella, Corrado; Gianni, Alessandro Massimo; Ladetto, Marco; Barbui, Anna Maria; Rossi, Andrea; Gritti, Giuseppe; Corradini, Paolo; Di Nicola, Massimo; Patti, Caterina; Mulé, Antonino; Zanni, Manuela; Zoli, Valerio; Billio, Atto; Piccin, Andrea; Negri, Giovanni; Castellino, Claudia; Di Raimondo, Francesco; Ferreri, Andrés J M; Benedetti, Fabio; La Nasa, Giorgio; Gini, Guido; Trentin, Livio; Frezzato, Maurizio; Flenghi, Leonardo; Falorio, Simona; Chilosi, Marco; Bruna, Riccardo; Tabanelli, Valentina; Pileri, Stefano; Masciulli, Arianna; Delaini, Federica; Boschini, Cristina; Rambaldi, Alessandro

    2016-11-20

    Purpose The benefit of high-dose chemotherapy with autologous stem-cell transplantation (ASCT) as first-line treatment in patients with diffuse large B-cell lymphomas is still a matter of debate. To address this point, we designed a randomized phase III trial to compare rituximab plus cyclophosphamide, doxorubicin, vincristine, and prednisone (R-CHOP)-14 (eight cycles) with rituximab plus high-dose sequential chemotherapy (R-HDS) with ASCT. Patients and Methods From June 2005 to June 2011, 246 high-risk patients with a high-intermediate (56%) or high (44%) International Prognostic Index score were randomly assigned to the R-CHOP or R-HDS arm, and 235 were analyzed by intent to treat. The primary efficacy end point of the study was 3-year event-free survival, and results were analyzed on an intent-to-treat basis. Results Clinical response (complete response, 78% v 76%; partial response, 5% v 9%) and failures (no response, 15% v 11%; early treatment-related mortality, 2% v 3%) were similar after R-CHOP versus R-HDS, respectively. After a median follow-up of 5 years, the 3-year event-free survival was 62% versus 65% (P = .83). At 3 years, compared with the R-CHOP arm, the R-HDS arm had better disease-free survival (79% v 91%, respectively; P = .034), but this advantage subsequently vanished because of late-occurring treatment-related deaths. No difference was detected in terms of progression-free survival (65% v 75%, respectively; P = .12) or overall survival (74% v 77%, respectively; P = .64). Significantly higher hematologic toxicity (P < .001) and more infectious complications (P < .001) were observed in the R-HDS arm. Conclusion In this study, front-line intensive R-HDS chemotherapy with ASCT did not improve the outcome of high-risk patients with diffuse large B-cell lymphomas.

  13. Primary health care utilization by immigrants as compared to the native population: a multilevel analysis of a large clinical database in Catalonia.

    Science.gov (United States)

    Muñoz, Miguel-Angel; Pastor, Esther; Pujol, Joan; Del Val, José Luis; Cordomí, Silvia; Hermosilla, Eduardo

    2012-06-01

    Immigration is a relevant public health issue and there is a great deal of controversy surrounding its impact on health services utilization. To determine differences between immigrants and non-immigrants in the utilization of primary health care services in Catalonia, Spain. Population based, cross-sectional, multicentre study. We used the information from 16 primary health care centres in an area near Barcelona, Spain. We conducted a multilevel analysis for the year 2008 to compare primary health care services utilization between all immigrants aged 15 or more and a sample of non-immigrants, paired by age and sex. Overall, immigrants living in Spain used health services more than non-immigrants (Incidence Risk Ratio (IRR) 1.16, 95% Confidence Interval (CI): 1.15-1.16, and IRR 1.26, 95% CI: 1.25-1.28, for consultations with GPs and referrals to specialized care, respectively). People coming from the Maghreb and the rest of Africa requested the most consultations involving a GP and nurses (IRR 1.34, 95% CI: 1.33-1.36 and IRR 1.06, 95% CI: 1.03-1.44, respectively). They were more frequently referred to specialized care (IRR 1.44, 95% CI: 1.41-1.46) when compared to Spaniards. Immigrants from Asia had the lowest numbers of consultations with a GP and referrals (IRR 0.76, 95% CI: 0.66-0.88 and IRR 0.76, 95% CI: 0.61-0.95, respectively). On average, immigrants living in Catalonia used the health services more than non-immigrants. Immigrants from the Maghreb and other African countries showed the highest, and those from Asia the lowest, number of consultations and referrals to specialized care.

  14. Number names and number understanding

    DEFF Research Database (Denmark)

    Ejersbo, Lisser Rye; Misfeldt, Morten

    2014-01-01

    This paper concerns the results from the first year of a three-year research project involving the relationship between Danish number names and their corresponding digits in the canonical base 10 system. The project aims to develop a system to help the students’ understanding of the base 10 system... the Danish number names are more complicated than in other languages. Keywords: a research project in grades 0 and 1 in a Danish school, base-10 system, two-digit number names, semiotic, cognitive perspectives.

  15. Funny Numbers

    Directory of Open Access Journals (Sweden)

    Theodore M. Porter

    2012-12-01

    The struggle over cure rate measures in nineteenth-century asylums provides an exemplary instance of how, when used for official assessments of institutions, these numbers become sites of contestation. The evasion of goals and corruption of measures tends to make these numbers “funny” in the sense of becoming dishonest, while the mismatch between boring, technical appearances and cunning backstage manipulations supplies dark humor. The dangers are evident in recent efforts to decentralize the functions of governments and corporations using incentives based on quantified targets.

  16. Transcendental numbers

    CERN Document Server

    Murty, M Ram

    2014-01-01

    This book provides an introduction to the topic of transcendental numbers for upper-level undergraduate and graduate students. The text is constructed to support a full course on the subject, including descriptions of both relevant theorems and their applications. While the first part of the book focuses on introducing key concepts, the second part presents more complex material, including applications of Baker’s theorem, Schanuel’s conjecture, and Schneider’s theorem. These later chapters may be of interest to researchers interested in examining the relationship between transcendence and L-functions. Readers of this text should possess basic knowledge of complex analysis and elementary algebraic number theory.

  17. Templates, Numbers & Watercolors.

    Science.gov (United States)

    Clemesha, David J.

    1990-01-01

    Describes how a second-grade class used large templates to draw and paint five-digit numbers. The lesson integrated artistic knowledge and vocabulary with their mathematics lesson in place value. Students learned how draftspeople use templates, and they studied number paintings by Charles Demuth and Jasper Johns. (KM)

  18. Transfinite Numbers

    Indian Academy of Sciences (India)

    Cantor saw that this is a characteristic difference between finite and infinite sets and created an immensely useful branch of mathematics based on this idea, one which had a great impact on the whole of mathematics. For example, the question of what is a number (finite or infinite) is almost a philosophical one. However, Cantor's work turned it ...

  19. Annotation of two large contiguous regions from the Haemonchus contortus genome using RNA-seq and comparative analysis with Caenorhabditis elegans.

    Directory of Open Access Journals (Sweden)

    Roz Laing

    The genomes of numerous parasitic nematodes are currently being sequenced, but their complexity and size, together with high levels of intra-specific sequence variation and a lack of reference genomes, make their assembly and annotation a challenging task. Haemonchus contortus is an economically significant parasite of livestock that is widely used for basic research as well as for vaccine development and drug discovery. It is one of many medically and economically important parasites within the strongylid nematode group. This group of parasites has the closest phylogenetic relationship with the model organism Caenorhabditis elegans, making comparative analysis a potentially powerful tool for genome annotation and functional studies. To investigate this hypothesis, we sequenced two contiguous fragments from the H. contortus genome and undertook detailed annotation and comparative analysis with C. elegans. The adult H. contortus transcriptome was sequenced using an Illumina platform and RNA-seq was used to annotate a 409 kb overlapping BAC tiling path relating to the X chromosome and a 181 kb BAC insert relating to chromosome I. In total, 40 genes and 12 putative transposable elements were identified. 97.5% of the annotated genes had detectable homologues in C. elegans, of which 60% had putative orthologues, significantly higher than previous analyses based on EST analysis. Gene density appears to be less in H. contortus than in C. elegans, with annotated H. contortus genes being an average of two-to-three times larger than their putative C. elegans orthologues due to a greater intron number and size. Synteny appears high but gene order is generally poorly conserved, although areas of conserved microsynteny are apparent. C. elegans operons appear to be partially conserved in H. contortus. Our findings suggest that a combination of RNA-seq and comparative analysis with C. elegans is a powerful approach for the annotation and analysis of strongylid

  20. p-adic numbers

    OpenAIRE

    Grešak, Rozalija

    2015-01-01

    The field of real numbers is usually constructed using Dedekind cuts. In this thesis we focus instead on the construction of the field of real numbers as the metric completion of the rational numbers via Cauchy sequences. In a similar manner we construct the field of p-adic numbers and describe some of their basic and topological properties. We continue with a construction of the complex p-adic numbers and compare them with the ordinary complex numbers. We conclude the thesis by giving a motivation for the int...

  1. Comparing vector-based and Bayesian memory models using large-scale datasets: User-generated hashtag and tag prediction on Twitter and Stack Overflow.

    Science.gov (United States)

    Stanley, Clayton; Byrne, Michael D

    2016-12-01

    The growth of social media and user-created content on online sites provides unique opportunities to study models of human declarative memory. By framing the task of choosing a hashtag for a tweet and tagging a post on Stack Overflow as a declarative memory retrieval problem, 2 cognitively plausible declarative memory models were applied to millions of posts and tweets and evaluated on how accurately they predict a user's chosen tags. An ACT-R based Bayesian model and a random permutation vector-based model were tested on the large data sets. The results show that past user behavior of tag use is a strong predictor of future behavior. Furthermore, past behavior was successfully incorporated into the random permutation model that previously used only context. Also, ACT-R's attentional weight term was linked to an entropy-weighting natural language processing method used to attenuate high-frequency words (e.g., articles and prepositions). Word order was not found to be a strong predictor of tag use, and the random permutation model performed comparably to the Bayesian model without including word order. This shows that the strength of the random permutation model is not in the ability to represent word order, but rather in the way in which context information is successfully compressed. The results of the large-scale exploration show how the architecture of the 2 memory models can be modified to significantly improve accuracy, and may suggest task-independent general modifications that can help improve model fit to human data in a much wider range of domains. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
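
    The entropy-weighting method is mentioned above only by name. A minimal sketch of the standard log-entropy scheme from the information-retrieval literature (an assumption; the authors' exact formulation may differ) shows how high-frequency, evenly spread words such as articles and prepositions receive near-zero weight:

```python
import math
from collections import Counter

def entropy_weights(docs):
    """Log-entropy global weights: terms spread evenly across documents
    (e.g. 'the') get weight near 0; rare, concentrated terms get weight
    near 1. Weight = 1 + (sum_d p_d * log p_d) / log N, p_d = tf_d / gf."""
    n_docs = len(docs)
    tf = [Counter(d) for d in docs]  # term counts per document
    gf = Counter()                   # global term counts
    for counts in tf:
        gf.update(counts)
    weights = {}
    for term, g in gf.items():
        h = 0.0
        for counts in tf:
            if term in counts:
                p = counts[term] / g
                h += p * math.log(p)
        weights[term] = 1.0 + h / math.log(n_docs)
    return weights

docs = [
    "the cat sat on the mat".split(),
    "the dog chased the cat".split(),
    "the mat was red".split(),
]
w = entropy_weights(docs)
print(round(w["the"], 3), round(w["dog"], 3))
```

A word appearing once in a single document keeps full weight 1.0, while "the", present in every document, is attenuated toward 0 — the behavior the abstract attributes to the attentional-weight term.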

  2. Hepatitis C virus diversification in Argentina: comparative analysis between the large city of Buenos Aires and the small rural town of O'Brien.

    Science.gov (United States)

    Golemba, Marcelo D; Culasso, Andrés C A; Villamil, Federico G; Bare, Patricia; Gadano, Adrián; Ridruejo, Ezequiel; Martinez, Alfredo; Di Lello, Federico A; Campos, Rodolfo H

    2013-01-01

    The estimated prevalence of HCV infection in Argentina is around 2%. However, higher rates of infection have been described in population studies of small urban and rural communities. The aim of this work was to compare the origin and diversification of HCV-1b in samples from two different epidemiological scenarios: Buenos Aires, a large cosmopolitan city, and O'Brien, a small rural town with a high prevalence of HCV infection. The E1/E2 and NS5B regions of the viral genome from 83 patients infected with HCV-1b were sequenced. Phylogenetic analysis and Bayesian Coalescent methods were used to study the origin and diversification of HCV-1b in both patient populations. Samples from Buenos Aires showed a polyphyletic behavior with a tMRCA around 1887-1900 and a time of spread of infection approximately 60 years ago. In contrast, samples from O'Brien showed a monophyletic behavior with a tMRCA around 1950-1960 and a time of spread of infection more recent than in Buenos Aires, around 20-30 years ago. Phylogenetic and coalescence analysis revealed a different behavior in the epidemiological histories of Buenos Aires and O'Brien. HCV infection in Buenos Aires shows a polyphyletic behavior and an exponential growth in two phases, whereas that in O'Brien shows a monophyletic cluster and an exponential growth in one single step with a more recent tMRCA. The polyphyletic origin and the probability of encountering susceptible individuals in a large cosmopolitan city like Buenos Aires are in agreement with a longer period of expansion. In contrast, in less populated areas such as O'Brien, the chances of HCV transmission are strongly restricted. Furthermore, the monophyletic character and the most recent time of emergence suggest that different HCV-1b ancestors (variants) that were in expansion in Buenos Aires had the opportunity to colonize and expand in O'Brien.

  3. INTERNAL LIMITING MEMBRANE PEELING VERSUS INVERTED FLAP TECHNIQUE FOR TREATMENT OF FULL-THICKNESS MACULAR HOLES: A COMPARATIVE STUDY IN A LARGE SERIES OF PATIENTS.

    Science.gov (United States)

    Rizzo, Stanislao; Tartaro, Ruggero; Barca, Francesco; Caporossi, Tomaso; Bacherini, Daniela; Giansanti, Fabrizio

    2017-12-08

    The inverted flap (IF) technique has recently been introduced in macular hole (MH) surgery. The IF technique has shown an increase of the success rate in the case of large MHs and in MHs associated with high myopia. This study reports the anatomical and functional results in a large series of patients affected by MH treated using pars plana vitrectomy and gas tamponade combined with internal limiting membrane (ILM) peeling or IF. This is a retrospective, consecutive, nonrandomized comparative study of patients affected by idiopathic or myopic MH treated using small-gauge pars plana vitrectomy (25- or 23-gauge) between January 2011 and May 2016. The patients were divided into two groups according to the ILM removal technique (complete removal vs. IF). A subgroup analysis was performed according to the MH diameter (MH <400 µm vs. ≥400 µm). One group of patients underwent pars plana vitrectomy and ILM peeling, and 320 patients underwent pars plana vitrectomy and IF. Overall, 84.94% of the patients had complete anatomical success characterized by MH closure after the operation. In particular, among the patients who underwent only ILM peeling the closure rate was 78.75%; among the patients who underwent the IF technique, it was 91.93% (P = 0.001); among the patients affected by full-thickness MH ≥400 µm, success was achieved in 95.6% of the cases in the IF group and in 78.6% in the ILM peeling group (P = 0.001); and among the patients with an axial length ≥26 mm, success was achieved in 88.4% of the cases in the IF group and in 38.9% in the ILM peeling group (P = 0.001). Average preoperative best-corrected visual acuity was 0.77 (SD = 0.32) logarithm of the minimum angle of resolution (20/118 Snellen) in the peeling group and 0.74 (SD = 0.33) logarithm of the minimum angle of resolution (20/110 Snellen) in the IF group (P = 0.31). Mean postoperative best-corrected visual acuity was 0.52 (SD = 0.42) logarithm of the minimum angle of resolution (20/66 Snellen) in the peeling group and 0.43 (SD = 0.31) logarithm of the minimum angle of resolution (20

  4. Genomic profiling using array comparative genomic hybridization define distinct subtypes of diffuse large b-cell lymphoma: a review of the literature

    Directory of Open Access Journals (Sweden)

    Tirado Carlos A

    2012-09-01

    Diffuse large B-cell lymphoma (DLBCL) is the most common type of non-Hodgkin lymphoma, comprising greater than 30% of adult non-Hodgkin lymphomas. DLBCL represents a diverse set of lymphomas, defined as diffuse proliferation of large B lymphoid cells. Numerous cytogenetic studies including karyotypes and fluorescent in situ hybridization (FISH), as well as morphological, biological, clinical, microarray and sequencing technologies, have attempted to categorize DLBCL into morphological variants, molecular and immunophenotypic subgroups, as well as distinct disease entities. Despite such efforts, most lymphoma remains undistinguishable and falls into DLBCL, not otherwise specified (DLBCL-NOS). The advent of microarray-based studies (chromosome, RNA, gene expression, etc.) has provided a plethora of high-resolution data that could potentially facilitate the finer classification of DLBCL. This review covers the microarray data currently published for DLBCL. We will focus on these types of data: (1) array-based CGH; (2) classical CGH; and (3) gene expression profiling studies. The aims of this review were three-fold: (1) to catalog chromosome loci that are present in at least 20% of distinct DLBCL subtypes; a detailed list of gains and losses for different subtypes was generated in table form to illustrate specific chromosome loci affected in selected subtypes; (2) to determine common and distinct copy number alterations among the different subtypes; based on this information, characteristic and similar chromosome loci for the different subtypes were depicted in two separate chromosome ideograms; and (3) to list re-classified subtypes and those that remained indistinguishable after review of the microarray data. To the best of our knowledge, this is the first effort to compile and review the available literature on microarray analysis data and their practical utility in classifying DLBCL subtypes. Although conventional cytogenetic methods such

  5. Large-scale identification and comparative analysis of miRNA expression profile in the respiratory tree of the sea cucumber Apostichopus japonicus during aestivation.

    Science.gov (United States)

    Chen, Muyan; Storey, Kenneth B

    2014-02-01

    The sea cucumber Apostichopus japonicus withstands high water temperatures in the summer by suppressing its metabolic rate and entering a state of aestivation. We hypothesized that changes in the expression of miRNAs could provide important post-transcriptional regulation of gene expression during hypometabolism via control over mRNA translation. The present study analyzed profiles of miRNA expression in the sea cucumber respiratory tree using Solexa deep sequencing technology. We identified 279 sea cucumber miRNAs, including 15 novel miRNAs specific to sea cucumber. Animals sampled during deep aestivation (DA; after at least 15 days of continuous torpor) were compared with animals from a non-aestivation (NA) state (animals that had passed through aestivation and returned to an active state). We identified 30 differentially expressed miRNAs (RPM (reads per million) > 10, |FC| (fold change) ≥ 1, FDR (false discovery rate) < 0.01) during aestivation, which were validated by two other miRNA profiling methods: miRNA microarray and real-time PCR. Among the most prominent miRNA species, miR-124, miR-124-3p, miR-79, miR-9 and miR-2010 were significantly over-expressed during deep aestivation compared with non-aestivation animals, suggesting that these miRNAs may play important roles in metabolic rate suppression during aestivation. High-throughput sequencing data and microarray data have been submitted to the GEO database with accession number: 16902695. Copyright © 2014 Elsevier B.V. All rights reserved.
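
    As an illustration of the selection thresholds quoted above (RPM > 10, |FC| ≥ 1, FDR < 0.01), the following sketch applies them to hypothetical read-count rows. The values are invented, and |FC| is read here as |log2 fold change| (at least a 2-fold difference between states), which is an assumption about the abstract's notation:

```python
from math import log2

# Hypothetical miRNA profiles: reads per million (RPM) in the deep-aestivation
# (DA) and non-aestivation (NA) libraries, plus an FDR from a differential-
# expression test. All numbers are illustrative, not from the study.
profiles = [
    {"mirna": "miR-124", "rpm_da": 5200.0, "rpm_na": 1100.0, "fdr": 1e-6},
    {"mirna": "miR-79",  "rpm_da": 880.0,  "rpm_na": 300.0,  "fdr": 4e-4},
    {"mirna": "miR-9",   "rpm_da": 410.0,  "rpm_na": 95.0,   "fdr": 2e-5},
    {"mirna": "miR-x",   "rpm_da": 6.0,    "rpm_na": 5.5,    "fdr": 0.6},
]

def differentially_expressed(rows, min_rpm=10.0, min_abs_log2_fc=1.0, max_fdr=0.01):
    """Keep miRNAs expressed above min_rpm in either state whose expression
    differs at least 2-fold (|log2 FC| >= 1) at the given FDR threshold."""
    hits = []
    for r in rows:
        expressed = r["rpm_da"] > min_rpm or r["rpm_na"] > min_rpm
        fc = log2(r["rpm_da"] / r["rpm_na"])
        if expressed and abs(fc) >= min_abs_log2_fc and r["fdr"] < max_fdr:
            hits.append(r["mirna"])
    return hits

print(differentially_expressed(profiles))  # ['miR-124', 'miR-79', 'miR-9']
```

The lowly expressed miR-x row fails the RPM filter even though its fold change is irrelevant, mirroring how an expression floor precedes the fold-change test.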

  6. A comparative study of outlier detection for large-scale traffic data by one-class SVM and kernel density estimation

    Science.gov (United States)

    Ngan, Henry Y. T.; Yung, Nelson H. C.; Yeh, Anthony G. O.

    2015-02-01

    This paper presents a comparative study of outlier detection (OD) for large-scale traffic data. Traffic data nowadays are massive in scale and collected every second throughout any modern city. In this research, traffic flow dynamics were collected from one of the busiest 4-armed junctions in Hong Kong over a 31-day sampling period (764,027 vehicles in total). The traffic flow dynamic is expressed in a high-dimension spatial-temporal (ST) signal format (i.e. 80 cycles) which has a high degree of similarity within the same signal and across different signals in one direction. A total of 19 traffic directions were identified in this junction and 874 ST signals were collected over the 31-day period. To reduce their dimension, the ST signals first undergo a principal component analysis (PCA) and are represented as (x,y)-coordinates. These PCA (x,y)-coordinates are then assumed to be Gaussian distributed. With this assumption, the data points are evaluated by (a) a correlation study with three variant coefficients, (b) a one-class support vector machine (SVM) and (c) kernel density estimation (KDE). The correlation study could not give any explicit OD result, while the one-class SVM and KDE achieve average detection success rates (DSRs) of 59.61% and 95.20%, respectively.
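
    The PCA / one-class SVM / KDE pipeline described above can be sketched with scikit-learn on synthetic stand-in signals. The junction data, parameter values, and outlier fraction below are assumptions for illustration, not the authors' settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KernelDensity
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Synthetic stand-in for the 874 spatial-temporal signals (80 dimensions each):
# a dense cluster of normal traffic cycles plus 10 injected outliers.
signals = rng.normal(0.0, 1.0, size=(874, 80))
signals[:10] += 8.0  # shift the first 10 signals far from the bulk

# Dimension reduction: each 80-dim ST signal becomes a PCA (x, y)-coordinate.
xy = PCA(n_components=2).fit_transform(signals)

# (b) One-class SVM: learns a boundary around the dense region; -1 = outlier.
svm_flags = OneClassSVM(nu=0.05, gamma="scale").fit(xy).predict(xy)

# (c) Kernel density estimation: flag the lowest-density 5% as outliers.
log_dens = KernelDensity(bandwidth=1.0).fit(xy).score_samples(xy)
kde_flags = log_dens < np.quantile(log_dens, 0.05)

print(int((svm_flags == -1).sum()), int(kde_flags.sum()))
```

Both detectors score every point against the dense Gaussian-like cluster in PCA space; the KDE route ranks points by estimated density, which matches its higher DSR in the paper's setting where outliers fall in sparse regions.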

  7. Comparative mapping of Phytophthora resistance loci in pepper germplasm: evidence for conserved resistance loci across Solanaceae and for a large genetic diversity.

    Science.gov (United States)

    Thabuis, A; Palloix, A; Pflieger, S; Daubèze, A-M; Caranta, C; Lefebvre, V

    2003-05-01

    Phytophthora capsici Leonian, known as the causal agent of the stem, collar and root rot, is one of the most serious problems limiting the pepper crop in many areas in the world. Genetic resistance to the parasite displays complex inheritance. Quantitative trait locus (QTL) analysis was performed in three intraspecific pepper populations, each involving an unrelated resistant accession. Resistance was evaluated by artificial inoculations of roots and stems, allowing the measurement of four components involved in different steps of the plant-pathogen interaction. The three genetic maps were aligned using common markers, which enabled the detection of QTLs involved in each resistance component and the comparison of resistance factors existing among the three resistant accessions. The major resistance factor was found to be common to the three populations. Another resistance factor was found conserved between two populations, the others being specific to a single cross. This comparison across intraspecific germplasm revealed a large variability for quantitative resistance loci to P. capsici. It also provided insights both into the allelic relationships between QTLs across pepper germplasm and for the comparative mapping of resistance factors across the Solanaceae.

  8. A comparative study of large-scale atmospheric circulation in the context of a future scenario (RCP4.5) and past warmth (mid-Pliocene)

    Directory of Open Access Journals (Sweden)

    Y. Sun

    2013-07-01

    Full Text Available The mid-Pliocene warm period (~ 3.3–3.0 Ma) is often considered the last sustained warm period with a geographic configuration close enough to the present one, associated with an atmospheric CO2 concentration (405 ± 50 ppm) higher than the modern level. For this reason, this period is often considered a potential analogue for future climate warming, with the important advantage that many marine and continental data are available for the mid-Pliocene. To investigate this issue, we selected the RCP4.5 scenario, one of the currently available future projections, to compare the pattern of tropical atmospheric response with the past warm mid-Pliocene climate. We use three Atmosphere-Ocean General Circulation Model (AOGCM) simulations (RCP4.5 scenario, mid-Pliocene and present-day) carried out with the IPSL-CM5A model and investigate atmospheric tropical dynamics through the Hadley and Walker cell responses to warmer conditions, considering that the analysis can provide some assessment of how these circulations will change in the future. Our results show that there is a damping of the Hadley cell intensity in the northern tropics and an increase in both subtropics. Moreover, the northern and southern Hadley cells expand poleward. The response of the Hadley cells is stronger for the RCP4.5 scenario than for the mid-Pliocene, in good agreement with the fact that the atmospheric CO2 concentration is higher in the future scenario than in the mid-Pliocene (543 versus 405 ppm). Concerning the response of the Walker cell, we show that despite very large similarities, there are also some differences. Common to both scenarios is a weakening of the ascending branch, leading to a suppression of the precipitation over the western tropical Pacific. The response of the Walker cell is stronger in the RCP4.5 scenario than in the mid-Pliocene but also shows some major differences, such as an eastward shift of its rising branch in the future

  9. Hepatitis C virus diversification in Argentina: comparative analysis between the large city of Buenos Aires and the small rural town of O'Brien.

    Directory of Open Access Journals (Sweden)

    Marcelo D Golemba

    Full Text Available BACKGROUND: The estimated prevalence of HCV infection in Argentina is around 2%. However, higher rates of infection have been described in population studies of small urban and rural communities. The aim of this work was to compare the origin and diversification of HCV-1b in samples from two different epidemiological scenarios: Buenos Aires, a large cosmopolitan city, and O'Brien, a small rural town with a high prevalence of HCV infection. PATIENTS AND METHODS: The E1/E2 and NS5B regions of the viral genome from 83 patients infected with HCV-1b were sequenced. Phylogenetic analysis and Bayesian coalescent methods were used to study the origin and diversification of HCV-1b in both patient populations. RESULTS: Samples from Buenos Aires showed a polyphyletic behavior with a tMRCA around 1887-1900 and a time of spread of infection approximately 60 years ago. In contrast, samples from O'Brien showed a monophyletic behavior with a tMRCA around 1950-1960 and a time of spread of infection more recent than in Buenos Aires, around 20-30 years ago. CONCLUSION: Phylogenetic and coalescence analysis revealed a different behavior in the epidemiological histories of Buenos Aires and O'Brien. HCV infection in Buenos Aires shows a polyphyletic behavior and an exponential growth in two phases, whereas that in O'Brien shows a monophyletic cluster and an exponential growth in one single step with a more recent tMRCA. The polyphyletic origin and the probability of encountering susceptible individuals in a large cosmopolitan city like Buenos Aires are in agreement with a longer period of expansion. In contrast, in less populated areas such as O'Brien, the chances of HCV transmission are strongly restricted. Furthermore, the monophyletic character and the most recent time of emergence suggest that different HCV-1b ancestors (variants that were in expansion in Buenos Aires) had the opportunity to colonize and expand in O'Brien.

  10. Large deviations

    CERN Document Server

    Varadhan, S R S

    2016-01-01

    The theory of large deviations deals with rates at which probabilities of certain events decay as a natural parameter in the problem varies. This book, which is based on a graduate course on large deviations at the Courant Institute, focuses on three concrete sets of examples: (i) diffusions with small noise and the exit problem, (ii) large time behavior of Markov processes and their connection to the Feynman-Kac formula and the related large deviation behavior of the number of distinct sites visited by a random walk, and (iii) interacting particle systems, their scaling limits, and large deviations from their expected limits. For the most part the examples are worked out in detail, and in the process the subject of large deviations is developed. The book will give the reader a flavor of how large deviation theory can help in problems that are not posed directly in terms of large deviations. The reader is assumed to have some familiarity with probability, Markov processes, and interacting particle systems.
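
    As a pointer to what "rates at which probabilities decay" means formally, Cramér's theorem for i.i.d. sample means is the canonical first example (a standard result stated here for illustration, not quoted from the book):

```latex
% Cramér's theorem (illustrative statement): for i.i.d. random variables
% X_1, X_2, \dots with sample mean \bar{X}_n and logarithmic moment
% generating function \Lambda(\lambda) = \log \mathbb{E}\, e^{\lambda X_1},
% upward deviations of the mean decay exponentially in n:
\[
  \lim_{n \to \infty} \frac{1}{n} \log
  \mathbb{P}\bigl(\bar{X}_n \ge a\bigr) = -I(a),
  \qquad a > \mathbb{E}[X_1],
\]
\[
  I(a) = \sup_{\lambda \in \mathbb{R}} \bigl(\lambda a - \Lambda(\lambda)\bigr).
\]
```

    The rate function I is the Legendre transform of Λ, which is the connection to the Feynman-Kac formula mentioned in the abstract.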

  11. Genomic profiling of plasmablastic lymphoma using array comparative genomic hybridization (aCGH): revealing significant overlapping genomic lesions with diffuse large B-cell lymphoma

    Directory of Open Access Journals (Sweden)

    Lu Xin-Yan

    2009-11-01

    Full Text Available Abstract Background Plasmablastic lymphoma (PL) is a subtype of diffuse large B-cell lymphoma (DLBCL). Studies have suggested that tumors with PL morphology represent a group of neoplasms with clinicopathologic characteristics corresponding to different entities, including extramedullary plasmablastic tumors associated with plasma cell myeloma (PCM). The goal of the current study was to evaluate the genetic similarities and differences among PL, DLBCL (AIDS-related and non-AIDS-related) and PCM using array-based comparative genomic hybridization. Results Examination of genomic data in PL revealed that the most frequent segmental gains (> 40%) include: 1p36.11-1p36.33, 1p34.1-1p36.13, 1q21.1-1q23.1, 7q11.2-7q11.23, 11q12-11q13.2 and 22q12.2-22q13.3. This correlated with segmental gains occurring at high frequency in DLBCL (AIDS-related and non-AIDS-related) cases. There were some segmental gains and losses that occurred in PL but not in the other types of lymphoma, suggesting that these foci may contain genes responsible for the differentiation of this lymphoma. Additionally, some segmental gains and losses occurred only in PL and AIDS-associated DLBCL, suggesting that these foci may be associated with HIV infection. Furthermore, some segmental gains and losses occurred only in PL and PCM, suggesting that these lesions may be related to plasmacytic differentiation. Conclusion To the best of our knowledge, the current study represents the first genomic exploration of PL. The genomic aberration pattern of PL appears to be more similar to that of DLBCL (AIDS-related or non-AIDS-related) than to PCM. Our findings suggest that PL may remain best classified as a subtype of DLBCL, at least at the genome level.

  12. The 2-Year Cosmetic Outcome of a Randomized Trial Comparing Prone and Supine Whole-Breast Irradiation in Large-Breasted Women

    Energy Technology Data Exchange (ETDEWEB)

    Veldeman, Liv, E-mail: liv.veldeman@uzgent.be [Department of Radiation Oncology, University Hospital Ghent, Ghent (Belgium); Department of Radiotherapy and Experimental Cancer Research, Ghent University, Ghent (Belgium); Schiettecatte, Kimberly; De Sutter, Charlotte; Monten, Christel; Greveling, Annick van [Department of Radiation Oncology, University Hospital Ghent, Ghent (Belgium); Berkovic, Patrick [Department of Radiation Oncology, University Hospital Ghent, Ghent (Belgium); Department of Radiation Oncology, Centre Hospitalier Universitaire de Liège, Liège (Belgium); Mulliez, Thomas [Department of Radiation Oncology, University Hospital Ghent, Ghent (Belgium); De Neve, Wilfried [Department of Radiation Oncology, University Hospital Ghent, Ghent (Belgium); Department of Radiotherapy and Experimental Cancer Research, Ghent University, Ghent (Belgium)

    2016-07-15

    Purpose: To report the 2-year cosmetic outcome of a randomized trial comparing prone and supine whole-breast irradiation in large-breasted patients. Methods and Materials: One hundred patients with a (European) cup size ≥C were included. Before and 2 years after radiation therapy, clinical endpoints were scored and digital photographs were taken with the arms alongside the body and with the arms elevated 180°. Three observers rated the photographs using the 4-point Harvard cosmesis scale. Cosmesis was also evaluated with the commercially available Breast Cancer Conservative Treatment.cosmetic results (BCCT.core) software. Results: Two-year follow-up data and photographs were available for 94 patients (47 supine treated and 47 prone treated). Patient and treatment characteristics were not significantly different between the 2 cohorts. A worsening of color change occurred more frequently in the supine than in the prone cohort (19/46 vs 10/46 patients, respectively, P=.04). Five patients in the prone group (11%) and 12 patients in the supine group (26%) presented with a worse scoring of edema at 2-year follow-up (P=.06). For retraction and fibrosis, no significant differences were found between the 2 cohorts, although scores were generally worse in the supine cohort. The cosmetic scoring by 3 observers did not reveal differences between the prone and supine groups. On the photographs with the hands up, 7 patients in the supine group versus none in the prone group had a worsening of cosmesis of 2 categories using the BCCT.core software (P=.02). Conclusion: With a limited follow-up of 2 years, better cosmetic outcome was observed in prone-treated than in supine-treated patients.

  13. Comparative Effectiveness of Chemotherapy Regimens in Prolonging Survival for Two Large Population-Based Cohorts of Elderly Adults with Breast and Colon Cancer in 1992-2009.

    Science.gov (United States)

    Du, Xianglin L; Zhang, Yefei; Parikh, Rohan C; Lairson, David R; Cai, Yi

    2015-08-01

    To compare the effectiveness of chemotherapy in prolonging survival according to age in breast and colon cancer. Retrospective cohort study with a matched cohort analysis based on the conditional probability of receiving chemotherapy. The 16 Surveillance, Epidemiology, and End Results (SEER) areas from the SEER-Medicare linked database. Women diagnosed with Stage I to IIIa hormone receptor-negative breast cancer (n = 14,440) and men and women with Stage III colon cancer (n = 26,893), aged 65 and older, diagnosed in 1992 to 2009. The main exposure was the receipt of chemotherapy, and the main outcome was mortality. In women with breast cancer aged 65 to 69, the risk of all-cause mortality was statistically significantly lower in those who received chemotherapy than in those who did not in the entire cohort (hazard ratio (HR) = 0.70, 95% confidence interval (CI) = 0.57-0.88) and in a propensity-matched cohort (HR = 0.82, 95% CI = 0.70-0.96) after adjusting for measured confounders. These patterns were similar in participants aged 70 to 74 and 75 to 79, but in women aged 80 to 84 and 85 to 89, risk of all-cause mortality was no longer significantly lower in those receiving chemotherapy in the entire and matched cohorts, except that, in a small number of women who received doxorubicin (Adriamycin) and cyclophosphamide (Cytoxan), risk of mortality was significantly lower for those aged 80 to 84. Chemotherapy appeared to be effective at all ages from 65 through 89 in participants with Stage III colon cancer. For example, in those aged 85 to 89, chemotherapy was significantly associated with lower risk of mortality in the entire cohort (HR = 0.79, 95% CI = 0.67-0.92) and the matched cohort (HR = 0.79, 95% CI = 0.66-0.95). The effectiveness of chemotherapy decreased with age in participants with breast cancer, in whom chemotherapy appears to be effective until age 79, except for the doxorubicin-cyclophosphamide combination, which was effective in participants aged 80 to 84.
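
    The "matched cohort analysis based on the conditional probability of receiving chemotherapy" is propensity-score matching. Below is a minimal greedy 1:1 nearest-neighbor matcher with a caliper, operating on hypothetical precomputed scores; the study's actual matching algorithm is not specified in this abstract.

```python
# Greedy 1:1 nearest-neighbor matching on propensity scores (the estimated
# probability of receiving chemotherapy), with a caliper. Scores are made up.
def match_cohorts(treated, control, caliper=0.05):
    """Return (treated_id, control_id) pairs matched on propensity score."""
    available = dict(control)                 # id -> score, still unmatched
    pairs = []
    # match the hardest-to-match (highest-score) treated subjects first
    for t_id, t_score in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]               # match without replacement
    return pairs

treated = {"T1": 0.80, "T2": 0.42, "T3": 0.10}
control = {"C1": 0.78, "C2": 0.45, "C3": 0.30, "C4": 0.12}
print(match_cohorts(treated, control))
```

    Survival in the matched pairs would then be compared, e.g. with a Cox model, to estimate the hazard ratios reported above.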

  14. Intuitive numbers guide decisions

    Directory of Open Access Journals (Sweden)

    Ellen Peters

    2008-12-01

    Full Text Available Measuring reaction times to number comparisons is thought to reveal a processing stage in elementary numerical cognition linked to internal, imprecise representations of number magnitudes. These intuitive representations of the mental number line have been demonstrated across species and human development but have been little explored in decision making. This paper develops and tests hypotheses about the influence of such evolutionarily ancient, intuitive numbers on human decisions. We demonstrate that individuals with more precise mental-number-line representations are higher in numeracy (number skills), consistent with previous research with children. Individuals with more precise representations (compared to those with less precise representations) also were more likely to choose larger, later amounts over smaller, immediate amounts, particularly with a larger proportional difference between the two monetary outcomes. In addition, they were more likely to choose an option with a larger proportional but smaller absolute difference, compared to those with less precise representations. These results are consistent with intuitive number representations underlying: (a) perceived differences between numbers, (b) the extent to which proportional differences are weighed in decisions, and, ultimately, (c) the valuation of decision options. Human decision processes involving numbers important to health and financial matters may be rooted in elementary, biological processes shared with other species.
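
    The ratio dependence of these imprecise representations (also seen in the mosquitofish record above: 1:2 and 2:3 discriminable, 3:4 not) is commonly modeled by giving each numerosity a noisy representation with scalar variability. A sketch with an illustrative Weber fraction w = 0.25, not estimated from any of these data:

```python
# Standard approximate-number-system model: for numerosities n1 < n2, the
# probability of correctly judging n2 larger is
#   Phi((n2 - n1) / (w * sqrt(n1^2 + n2^2))),
# where Phi is the standard normal CDF and w is the Weber fraction.
import math

def p_correct(n1, n2, w=0.25):
    z = abs(n2 - n1) / (w * math.hypot(n1, n2))
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))   # standard normal CDF

for n1, n2 in [(4, 8), (100, 200), (8, 12), (9, 12)]:   # 1:2, 1:2, 2:3, 3:4
    print(f"{n1} vs {n2}: P(correct) = {p_correct(n1, n2):.2f}")
```

    The model predicts identical accuracy for 4 vs 8 and 100 vs 200 (same ratio, no upper limit) and declining accuracy as the ratio approaches 1, mirroring the qualitative pattern reported for both fish and humans.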

  15. Radiation shielding and effective atomic number studies in different types of shielding concretes, lead base and non-lead base glass systems for total electron interaction: A comparative study

    International Nuclear Information System (INIS)

    Kurudirek, Murat

    2014-01-01

    Highlights: • Radiation shielding calculations for concretes and glass systems. • Assigning effective atomic number for the given materials for total electron interaction. • Glass systems generally have better shielding ability than concretes. - Abstract: Concrete has been widely used as a radiation shielding material due to its extremely low cost. On the other hand, glass systems, which make everything inside visible to observers, are considered as promising shielding materials as well. In the present work, the effective atomic numbers, Z_eff, of some concretes and glass systems (industrial waste containing glass, Pb base glass and non-Pb base glass) have been calculated for total electron interaction in the energy region of 10 keV–1 GeV. Also, the continuous slowing down approximation (CSDA) ranges for the given materials have been calculated in the wide energy region to show the shielding effectiveness of the given materials. The glass systems are not only compared to different types of concretes but also compared to the lead base glass systems in terms of shielding. Moreover, the obtained results for total electron interaction have been compared to the results for total photon interaction wherever possible. In general, it has been observed that the glass systems have superior properties than most of the concretes over the high-energy region with respect to the electron interaction. Also, glass systems without lead show better electron stopping than lead base glasses at some energy regions as well. Along with the photon attenuation capability, it is seen that Fly Ash base glass systems have not only greater electron stopping capability but also have greater photon attenuation especially in the high energy region when compared with standard shielding concretes.

  16. Radiation shielding and effective atomic number studies in different types of shielding concretes, lead base and non-lead base glass systems for total electron interaction: A comparative study

    Energy Technology Data Exchange (ETDEWEB)

    Kurudirek, Murat, E-mail: mkurudirek@gmail.com

    2014-12-15

    Highlights: • Radiation shielding calculations for concretes and glass systems. • Assigning effective atomic number for the given materials for total electron interaction. • Glass systems generally have better shielding ability than concretes. - Abstract: Concrete has been widely used as a radiation shielding material due to its extremely low cost. On the other hand, glass systems, which make everything inside visible to observers, are considered as promising shielding materials as well. In the present work, the effective atomic numbers, Z_eff, of some concretes and glass systems (industrial waste containing glass, Pb base glass and non-Pb base glass) have been calculated for total electron interaction in the energy region of 10 keV–1 GeV. Also, the continuous slowing down approximation (CSDA) ranges for the given materials have been calculated in the wide energy region to show the shielding effectiveness of the given materials. The glass systems are not only compared to different types of concretes but also compared to the lead base glass systems in terms of shielding. Moreover, the obtained results for total electron interaction have been compared to the results for total photon interaction wherever possible. In general, it has been observed that the glass systems have superior properties than most of the concretes over the high-energy region with respect to the electron interaction. Also, glass systems without lead show better electron stopping than lead base glasses at some energy regions as well. Along with the photon attenuation capability, it is seen that Fly Ash base glass systems have not only greater electron stopping capability but also have greater photon attenuation especially in the high energy region when compared with standard shielding concretes.

  17. Comparative study of surface-lattice-site resolved neutralization of slow multicharged ions during large-angle quasi-binary collisions with Au(1 1 0): Simulation and experiment

    International Nuclear Information System (INIS)

    Meyer, F.W.; Morozov, V.A.

    2002-01-01

    In this article we extend our earlier studies of the azimuthal dependences of low energy projectiles scattered in large angle quasi-binary collisions (BCs) from Au(1 1 0). Measurements are presented for 20 keV Ar⁹⁺ at normal incidence, which are compared with our earlier measurements for this ion at 5 keV and 10° incidence angle. A deconvolution procedure based on MARLOWE simulation results carried out at both energies provides information about the energy dependence of projectile neutralization during interactions just with the atoms along the top ridge of the reconstructed Au(1 1 0) surface corrugation, in comparison to, e.g. interactions with atoms lying on the sidewalls. To test the sensitivity of the agreement between the MARLOWE results and the experimental measurements, we show simulation results obtained for a non-reconstructed Au(1 1 0) surface with 20 keV Ar projectiles, and for different scattering potentials that are intended to simulate the effects on the scattering trajectory of a projectile inner shell vacancy surviving the BC. In addition, simulation results are shown for a number of different total scattering angles, to illustrate their utility in finding optimum values for this parameter prior to the actual measurements.

  18. Less initial rejoining of X-ray-induced DNA double-strand breaks in cells of a small cell (U-1285) compared to a large cell (U-1810) lung carcinoma cell line

    International Nuclear Information System (INIS)

    Cedervall, B.; Sirzea, F.; Brodin, O.; Lewensohn, R.

    1994-01-01

    Cells of a small cell lung carcinoma cell line, U-1285, and an undifferentiated large cell lung carcinoma cell line, U-1810, differ in radiosensitivity in parallel to the clinical radiosensitivity of the kinds of tumors from which they are derived. The surviving fraction at 2 Gy (SF2) was 0.25 for U-1285 cells and 0.88 for U-1810 cells. We investigated the induction of DNA double-strand breaks (DSBs) by X rays and DSB rejoining in these cell lines. To estimate the number of DSBs we used a model adapted for pulsed-field gel electrophoresis (PFGE). The induction levels were of the same magnitude. These levels of induction do not correlate with radiosensitivity as measured by cell survival assays. Rejoining of DSBs after doses in the range of 0–50 Gy was followed at 0, 15, 30, 60 and 120 min. We found a difference in the velocity of repair during the first hour after irradiation which parallels the differences in radiosensitivity. Thus U-1810 cells exhibit a fast component of repair, with about half of the DSBs being rejoined during the first 15 min, whereas U-1285 cells lack such a fast component, with only about 5% of the DSBs being rejoined after the same time. In addition there was a numerical, albeit not statistically significant, difference at 120 min, with more residual DSBs in the U-1285 cells compared to the U-1810 cells. 36 refs., 5 figs
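
    Fast and slow rejoining components of this kind are conventionally described by a biexponential decay. The sketch below chooses half-lives and component fractions to mimic the qualitative numbers quoted above (about half of DSBs rejoined in 15 min for U-1810, about 5% for U-1285); these parameters are illustrative, not fitted to the actual PFGE data.

```python
# Biexponential model of DSB rejoining: a fast and a slow repair component,
# each decaying with its own half-life. Parameters are illustrative.
import math

def fraction_remaining(t_min, f_fast, t_half_fast, t_half_slow):
    """Fraction of DSBs still unrejoined at time t (minutes)."""
    fast = f_fast * math.exp(-math.log(2) * t_min / t_half_fast)
    slow = (1 - f_fast) * math.exp(-math.log(2) * t_min / t_half_slow)
    return fast + slow

for t in (0, 15, 30, 60, 120):
    u1810 = fraction_remaining(t, f_fast=0.55, t_half_fast=5, t_half_slow=240)
    u1285 = fraction_remaining(t, f_fast=0.05, t_half_fast=5, t_half_slow=240)
    print(f"t={t:3d} min: U-1810 {u1810:.2f}, U-1285 {u1285:.2f}")
```

    With these parameters the model reproduces the described contrast: roughly 50% of breaks remain in the "U-1810-like" curve at 15 min, versus more than 90% in the "U-1285-like" curve.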

  19. The distribution and function of serotonin in the large milkweed bug, Oncopeltus fasciatus: a comparative study with the blood-feeding bug, Rhodnius prolixus.

    Science.gov (United States)

    Miggiani, L; Orchard, I; TeBrugge, V

    1999-11-01

    The blood-feeding hemipteran, Rhodnius prolixus, ingests a large blood meal at the end of each larval stage. To accommodate and process this meal, its cuticle undergoes plasticisation, and its gut and Malpighian tubules respectively absorb and secrete a large volume of water and salts for rapid diuresis. Serotonin has been found to be integral to the feeding process in this animal, along with a diuretic peptide(s). The large milkweed bug, Oncopeltus fasciatus, tends to feed in a more continuous and abstemious manner, and therefore may have different physiological requirements than the blood feeder. Unlike R. prolixus, O. fasciatus is lacking serotonin-like immunoreactive dorsal unpaired median neurons in the mesothoracic ganglionic mass, and lacks serotonin-like immunoreactive neurohaemal areas and processes on the abdominal nerves, integument, salivary glands, and anterior junction of the foregut and crop. The salivary glands and crop do, however, respond to serotonin with increased levels of cAMP, while the integument and Malpighian tubules do not. In addition, O. fasciatus Malpighian tubules respond to both O. fasciatus and R. prolixus partially purified CNS extracts, which are likely to contain any native diuretic peptides. Thus, while serotonin and diuretic peptides may be involved in tubule control in R. prolixus, the latter may be of greater importance in O. fasciatus.

  20. Investigation of the effective atomic numbers of dosimetric materials for electrons, protons and alpha particles using a direct method in the energy region 10 keV-1 GeV: a comparative study.

    Science.gov (United States)

    Kurudirek, Murat; Aksakal, Oğuz; Akkuş, Tuba

    2015-11-01

    A direct method has been used for the first time to compute effective atomic numbers (Z_eff) of water, air, human tissues, and some organic and inorganic compounds, for total electron, proton and alpha particle interaction in the energy region 10 keV-1 GeV. The obtained values for Z_eff were then compared to those obtained using an interpolation procedure. In general, good agreement has been observed for electrons, and the difference (%) in Z_eff between the results of the direct and the interpolation method was found to be small in the energy range from 10 keV to 1 MeV. More specifically, the results of the two methods were found to agree well in this energy region with respect to the total electron interaction. On the other hand, values for Z_eff calculated using both methods for protons and alpha particles generally agree with each other in the high-energy region above 10 MeV.
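
    A common "direct method" in this literature computes Z_eff as the ratio of an effective atomic cross section to an effective electronic cross section, weighting each element's mass interaction coefficient (for charged particles, the mass stopping power) by its fraction by number of atoms. A minimal sketch under that assumption; the stopping-power values below are placeholders, not NIST data, and this abstract does not spell out the authors' exact formula.

```python
# Direct-method effective atomic number: ratio of atomic to electronic
# cross sections built from per-element mass stopping powers.
def z_eff_direct(composition, mass_stopping):
    """composition: element -> (fraction by number of atoms, Z, A)."""
    atomic = sum(f * A * mass_stopping[el]
                 for el, (f, Z, A) in composition.items())
    electronic = sum(f * (A / Z) * mass_stopping[el]
                     for el, (f, Z, A) in composition.items())
    return atomic / electronic

# Sanity check: for a pure element the formula returns its atomic number.
carbon = {"C": (1.0, 6, 12.011)}
print(z_eff_direct(carbon, {"C": 1.75}))

# Water (H2O): 2/3 H and 1/3 O by number; stopping powers are illustrative.
water = {"H": (2/3, 1, 1.008), "O": (1/3, 8, 15.999)}
print(round(z_eff_direct(water, {"H": 4.0, "O": 1.8}), 2))
```

    Because the weighting depends on the energy-dependent stopping powers, Z_eff computed this way varies with particle type and energy, which is why the abstract compares electrons, protons and alpha particles region by region.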

  1. Fresh Frozen Plasma Resuscitation Provides Neuroprotection Compared to Normal Saline in a Large Animal Model of Traumatic Brain Injury and Polytrauma

    DEFF Research Database (Denmark)

    Imam, Ayesha; Jin, Guang; Sillesen, Martin

    2015-01-01

    Abstract We have previously shown that early treatment with fresh frozen plasma (FFP) is neuroprotective in a swine model of hemorrhagic shock (HS) and traumatic brain injury (TBI). However, it remains unknown whether this strategy would be beneficial in a more clinical polytrauma model. Yorkshire...... as well as cerebral perfusion pressures. Levels of cerebral eNOS were higher in the FFP-treated group (852.9 vs. 816.4 ng/mL; p=0.03), but no differences in brain levels of ET-1 were observed. Early administration of FFP is neuroprotective in a complex, large animal model of polytrauma, hemorrhage...

  2. Assessing the Impact of Forest Change and Climate Variability on Dry Season Runoff by an Improved Single Watershed Approach: A Comparative Study in Two Large Watersheds, China

    Directory of Open Access Journals (Sweden)

    Yiping Hou

    2018-01-01

    Full Text Available Extensive studies on hydrological responses to forest change have been published for centuries, yet partitioning the hydrological effects of forest change, climate variability and other factors in a large watershed remains a challenge. In this study, we developed a single watershed approach combining the modified double mass curve (MDMC) and the time series multivariate autoregressive integrated moving average model (ARIMAX) to separate the impact of forest change, climate variability and other factors on dry season runoff variation in two large watersheds in China. The Zagunao watershed was examined for the deforestation effect, while the Meijiang watershed was examined to study the hydrological impact of reforestation. The key findings are: (1) both deforestation and reforestation led to significant reductions in dry season runoff, while climate variability yielded positive effects in the studied watersheds; (2) the hydrological response to forest change varied over time due to changes in soil infiltration and evapotranspiration after vegetation regeneration; (3) changes of subalpine natural forests produced a greater impact on dry season runoff than alteration of planted forests. These findings are beneficial to water resource and forest management under climate change and highlight better planning of forest operations and management that incorporates the trade-off between carbon and water in different forests.
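
    The core double-mass-curve logic can be illustrated with a toy example: regress cumulative runoff on cumulative effective precipitation over the pre-disturbance years, project that baseline forward, and attribute the post-disturbance departure to forest change. All numbers below are synthetic, and the published MDMC/ARIMAX method includes climate corrections this sketch omits.

```python
# Toy double-mass-curve attribution with synthetic annual data.
def cumulative(xs):
    out, s = [], 0.0
    for x in xs:
        s += x
        out.append(s)
    return out

# 10 years of effective precipitation (mm) and dry season runoff (mm);
# in this synthetic record, forest change after year 5 cuts runoff ~20 mm/yr.
precip = [500, 520, 480, 510, 490, 505, 495, 515, 500, 510]
runoff = [200, 208, 192, 204, 196, 182, 178, 186, 180, 184]
cp, cr = cumulative(precip), cumulative(runoff)

slope = cr[4] / cp[4]                  # pre-disturbance ratio (years 1-5)
predicted = [slope * c for c in cp]    # projection of the baseline line
effect = [r - p for r, p in zip(cr, predicted)]
print(f"cumulative forest-change effect after year 10: {effect[-1]:.1f} mm")
```

    The slope break in the cumulative plot is the MDMC signal; the ARIMAX step in the paper then separates the climate-driven share of the residual.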

  3. At-Risk Screened Children with Celiac Disease are Comparable in Disease Severity and Dietary Adherence to Those Found because of Clinical Suspicion: A Large Cohort Study.

    Science.gov (United States)

    Kivelä, Laura; Kaukinen, Katri; Huhtala, Heini; Lähdeaho, Marja-Leena; Mäki, Markku; Kurppa, Kalle

    2017-04-01

    To assess whether children at risk for celiac disease should be screened systematically by comparing their baseline and follow-up characteristics to patients detected because of clinical suspicion. Five hundred four children with celiac disease were divided into screen-detected (n = 145) and clinically detected (n = 359) cohorts. The groups were compared for clinical, serologic, and histologic characteristics and laboratory values. Follow-up data regarding adherence and response to gluten-free diet were compared. Subgroup analyses were made between asymptomatic and symptomatic screen-detected patients. Of the screen-detected patients, 51.8% had symptoms at diagnosis, although these were milder than in clinically detected children; thus, a substantial share of screened children with celiac disease had symptoms that had gone unrecognized before diagnosis. The severity of histologic damage, antibody levels, dietary adherence, and response to treatment in screen-detected cases are comparable with those detected on a clinical basis. The results support active screening for celiac disease among at-risk children. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Recurrent respiratory papillomatosis: a longitudinal study comparing severity associated with human papilloma viral types 6 and 11 and other risk factors in a large pediatric population.

    Science.gov (United States)

    Wiatrak, Brian J; Wiatrak, Deborah W; Broker, Thomas R; Lewis, Linda

    2004-11-01

    A database was developed for prospective, longitudinal study of recurrent respiratory papillomatosis (RRP) in a large population of pediatric patients. Data recorded for each patient included epidemiological factors, human papilloma virus (HPV) type, clinical course, staged severity of disease at each surgical intervention, and frequency of surgical intervention. The study hypothesizes that patients with HPV type 11 (HPV-11) and patients younger than 3 years of age at diagnosis are at risk for more aggressive and extensive disease. The 10-year prospective epidemiological study used disease staging for each patient with an original scoring system. Severity scores were updated at each surgical procedure. Parents of children with RRP referred to the authors' hospital completed a detailed epidemiological questionnaire at the initial visit or at the first return visit after the study began. At the first endoscopic debridement after study enrollment, tissue was obtained and submitted for HPV typing using polymerase chain reaction techniques and in situ hybridization. Staging of disease severity was performed in real time at each endoscopic procedure using an RRP scoring system developed by one of the authors (B.J.W.). The frequency of endoscopic operative debridement was recorded for each patient. Information in the database was analyzed to identify statistically significant relationships between extent of disease and/or HPV type, patient age at diagnosis, and selected epidemiological factors. The study may represent the first longitudinal prospective analysis of a large pediatric RRP population. Fifty-eight of the 73 patients in the study underwent HPV typing. Patients infected with HPV-11 were significantly more likely to have higher severity scores, require more frequent surgical intervention, and require adjuvant therapy to control disease progression. In addition, patients with HPV-11 RRP were significantly more likely to develop tracheal disease, to require

  5. Sequential alternating deferiprone and deferoxamine treatment compared to deferiprone monotherapy: main findings and clinical follow-up of a large multicenter randomized clinical trial in β-thalassemia major patients

    DEFF Research Database (Denmark)

    Pantalone, Gaetano Restivo; Maggio, Aurelio; Vitrano, Angela

    2011-01-01

    In β-thalassemia major (β-TM) patients, iron chelation therapy is mandatory to reduce iron overload secondary to transfusions. Recommended first line treatment is deferoxamine (DFO) from the age of 2 and second line treatment after the age of 6 is deferiprone (L1). A multicenter randomized open...... thalassemia patients were randomized and underwent intention-to-treat analysis. Statistically, a decrease of serum ferritin level was significantly higher in alternating sequential L1-DFO patients compared with L1 alone patients (p = 0.005). Kaplan-Meier survival analysis for the two chelation treatments did...

  6. Nitrogen-detected TROSY yields comparable sensitivity to proton-detected TROSY for non-deuterated, large proteins under physiological salt conditions

    Energy Technology Data Exchange (ETDEWEB)

    Takeuchi, Koh [National Institute for Advanced Industrial Science and Technology, Molecular Profiling Research Center for Drug Discovery (Japan); Arthanari, Haribabu [Harvard Medical School, Department of Biochemistry and Molecular Pharmacology (United States); Imai, Misaki [Japan Biological Informatics Consortium, Research and Development Department (Japan); Wagner, Gerhard, E-mail: gerhard-wagner@hms.harvard.edu [Harvard Medical School, Department of Biochemistry and Molecular Pharmacology (United States); Shimada, Ichio, E-mail: shimada@iw-nmr.f.u-tokyo.ac.jp [National Institute for Advanced Industrial Science and Technology, Molecular Profiling Research Center for Drug Discovery (Japan)

    2016-02-15

    Direct detection of the TROSY component of proton-attached ¹⁵N nuclei (¹⁵N-detected TROSY) yields high quality spectra with high field magnets, by taking advantage of the slow ¹⁵N transverse relaxation. The slow transverse relaxation and narrow line width of the ¹⁵N-detected TROSY resonances are expected to compensate for the inherently low ¹⁵N sensitivity. However, the sensitivity of ¹⁵N-detected TROSY in a previous report was one order of magnitude lower than in the conventional ¹H-detected version. This could be due to the fact that the previous experiments were performed at low salt (0–50 mM), which is advantageous for ¹H-detected experiments. Here, we show that the sensitivity gap between ¹⁵N and ¹H becomes marginal for a non-deuterated, large protein (τc = 35 ns) at a physiological salt concentration (200 mM). This effect is due to the high salt tolerance of the ¹⁵N-detected TROSY. Together with the previously reported benefits of the ¹⁵N-detected TROSY, our results provide further support for the significance of this experiment for structural studies of macromolecules when using high field magnets near and above 1 GHz.

  7. Comparative study of large samples (2'' × 2'') plastic scintillators and EJ309 liquid with pulse shape discrimination (PSD) capabilities

    International Nuclear Information System (INIS)

    Iwanowska-Hanke, J; Moszynski, M; Swiderski, L; Sibczynski, P; Szczesniak, T; Krakowski, T; Schotanus, P

    2014-01-01

    In this paper we report on the scintillation properties and pulse shape discrimination (PSD) performance of new plastic scintillators. Samples with dimensions of 2 inches × 2 inches were tested: EJ299-34, EJ299-34G, EJ299-33 and EJ299-33G. They are the first commercially available plastics with neutron/gamma discrimination properties. The paper covers measurements of emission spectra and photoelectron yield, analysis of the light pulse shapes originating from gamma-ray and fast-neutron events, and neutron/gamma discrimination. The tested plastics are characterized by a photoelectron yield of approximately 1600-2500 phe/MeV, depending on the sample. The highest value, measured for EJ299-34, is similar to the number of photoelectrons measured for EJ309 (2600 phe/MeV). The figure of merit (FOM) calculated for narrow energy cuts, indicating the PSD performance, showed that the PSD capabilities of the plastics are significantly lower than those of EJ309. These scintillators are still under development in order to optimize the composition and manufacturing procedures. At this time the results obtained with the new plastics suggest their possible use as an alternative to liquid scintillators, especially considering their non-flammability and non-toxicity
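The figure of merit mentioned above has a standard definition that can be sketched in a few lines; the peak positions and widths below are illustrative placeholders, not measured values from this work:

```python
# Hedged sketch: the PSD figure of merit in its usual form,
#   FOM = S / (FWHM_gamma + FWHM_neutron),
# where S is the separation between the gamma and neutron peaks of the
# discrimination-parameter distribution. Peak positions and widths below are
# illustrative, not the values measured in this study.
def figure_of_merit(peak_gamma, peak_neutron, fwhm_gamma, fwhm_neutron):
    return abs(peak_neutron - peak_gamma) / (fwhm_gamma + fwhm_neutron)

print(round(figure_of_merit(0.20, 0.35, 0.04, 0.05), 2))  # 1.67
```

A FOM above roughly 1.5 is often taken to indicate clean neutron/gamma separation at the chosen energy cut.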

  8. Mass attenuation coefficient (μ/ρ), effective atomic number (Zeff) and measurement of x-ray energy spectra using calcium phosphate-based biomaterials: a comparative study

    Energy Technology Data Exchange (ETDEWEB)

    Fernandes Z, M. A.; Da Silva, T. A.; Nogueira, M. S. [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Pte. Antonio Carlos 6627, Belo Horizonte 31270-901, Minas Gerais (Brazil); Goncalves Z, E., E-mail: madelon@cdtn.br [Pontifice Catholic University of Minas Gerais, Av. Dom Jose Gaspar 500, Belo Horizonte 30535-901, Minas Gerais (Brazil)

    2015-10-15

    In dentistry, alveolar bone regeneration procedures using calcium phosphate-based biomaterials have been shown to be effective. However, there are no reports in the literature of studies on the interaction of low-energy radiation with these biomaterials used as attenuators, so no comparison between theoretical and experimental values has been possible. The objective of this study was to determine the radiation interaction parameters of four dental biomaterials (BioOss, Cerasorb M Dental, Straumann Boneceramic and Osteogen) for diagnostic radiology qualities. As materials and methods, the composition of the biomaterials was determined by analytical techniques. Samples with thicknesses of 0.181 cm to 0.297 cm were used experimentally as attenuators for the measurement of the transmitted X-ray spectra on X-ray equipment in the 50 to 90 kV range, with a spectrometric system comprising a CdTe detector. After this procedure, the mass attenuation coefficient and the effective atomic number were determined and compared among all the specimens analyzed, using the program WinXCOM in the range of 10 to 200 keV. In all samples examined, the energy spectrum of X-rays transmitted through BioOss had a mean energy slightly smaller than that of the other biomaterials for comparable thickness. The μ/ρ and Zeff of the biomaterials showed their dependence on photon energy and on the atomic number of the elements of the material analyzed. It is concluded, according to the methodology employed in this study, that the measurements of the X-ray spectrum, μ/ρ and Zeff using the biomaterials as attenuators confirmed that the thickness, density and composition of the samples, and the incident photon energy, are factors that determine the characteristics of radiation in a tissue or equivalent material. (Author)
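The transmission relation underlying a μ/ρ determination of this kind can be sketched briefly; the intensities, density and thickness below are illustrative numbers, not values from the study:

```python
import math

# Hedged sketch: the Beer-Lambert relation I = I0 * exp(-(mu/rho) * rho * t),
# inverted to estimate the mass attenuation coefficient from a transmission
# measurement. The intensities, density and thickness are illustrative only.
def mass_attenuation_coefficient(i0, i, density_g_cm3, thickness_cm):
    """Return mu/rho in cm^2/g from incident (i0) and transmitted (i) intensities."""
    return math.log(i0 / i) / (density_g_cm3 * thickness_cm)

mu_rho = mass_attenuation_coefficient(i0=10000, i=4000, density_g_cm3=1.2, thickness_cm=0.25)
print(round(mu_rho, 3))  # 3.054 cm^2/g
```

In practice this is evaluated per energy bin of the measured spectrum, which is why the abstract reports μ/ρ as a function of photon energy.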

  9. Setting a national minimum standard for health benefits: how do state benefit mandates compare with benefits in large-group plans?

    Science.gov (United States)

    Frey, Allison; Mika, Stephanie; Nuzum, Rachel; Schoen, Cathy

    2009-06-01

    Many proposed health insurance reforms would establish a federal minimum benefit standard: a baseline set of benefits to ensure that people have adequate coverage and financial protection when they purchase insurance. Currently, benefit mandates are set at the state level; these vary greatly across states and generally target specific areas rather than set an overall standard for what qualifies as health insurance. This issue brief considers what a broad federal minimum standard might look like by comparing existing state benefit mandates with the services and providers covered under the Federal Employees Health Benefits Program (FEHBP) Blue Cross and Blue Shield standard benefit package, an example of minimum creditable coverage that reflects current standard practice among employer-sponsored health plans. With few exceptions, benefits in the FEHBP standard option either meet or exceed those that state mandates require, indicating that a broad-based national benefit standard would include most existing state benefit mandates.

  10. A comparative study teaching chemistry using the 5E learning cycle and traditional teaching with a large English language population in a middle-school setting

    Science.gov (United States)

    McWright, Cynthia Nicole

    For decades science educators and educational institutions have been concerned with the status of science content being taught in K-12 schools and the delivery of that content. Thus, educational reformers in the United States continue to strive to solve the problem of how best to teach science for optimal success in learning. The constructivist movement has been at the forefront of this effort. With mandatory testing nationwide and an increase in science, technology, engineering, and mathematics (STEM) jobs with too small a workforce to fill them, the question of what to teach and how to teach science remains a concern among educators and all stakeholders. The purpose of this research was to determine whether students' chemistry knowledge and interest can be increased by using the 5E learning cycle in a middle school with a high population of English language learners. The participants were eighth-grade middle school students in a large metropolitan area. Students participated in a month-long chemistry unit. The study was a quantitative, quasi-experimental design with a control group using a traditional lecture-style teaching strategy and an experimental group using the 5E learning cycle. Students completed pre- and post-surveys of their attitudes toward science, a pretest and posttest for each mini-unit taught, and daily exit tickets using the Expert Science Teaching Educational Evaluation Model (ESTEEM) instrument to measure daily student outcomes in main idea, student inquiry, and relevancy. Analysis of the data showed no statistically significant difference between the two groups overall, and all students experienced a gain in content knowledge. All students demonstrated a statistically significant difference in their interest in science class, in activities in science class, and outside of school. Data also showed that scores for writing the main idea and writing inquiry questions about the content increased over time.

  11. Comparative data on SD-OCT for the retinal nerve fiber layer and retinal macular thickness in a large cohort with Marfan syndrome.

    Science.gov (United States)

    Xu, WanWan; Kurup, Sudhi P; Fawzi, Amani A; Durbin, Mary K; Maumenee, Irene H; Mets, Marilyn B

    2017-01-01

    To report the distribution of macular and optic nerve topography in the eyes of individuals with Marfan syndrome aged 8-56 years using spectral domain optical coherence tomography (SD-OCT). Thirty-three patients with Marfan syndrome underwent a full eye examination including slit-lamp biomicroscopy, indirect ophthalmoscopy, and axial length measurement, as well as SD-OCT measurements of the retinal nerve fiber layer (RNFL) and macular thickness. For patients between the ages of 8 and 12 years, the average RNFL thickness is 98 ± 9 μm, the vertical cup-to-disc (C:D) ratio is 0.50 ± 0.10, the central subfield thickness (CST) is 274 ± 38 μm, and the macular volume is 10.3 ± 0.6 mm³. For patients between the ages of 13 and 17 years, the average RNFL is 86 ± 16 μm, the vertical C:D ratio is 0.35 ± 0.20, the CST is 259 ± 15 μm, and the macular volume is 10.1 ± 0.5 mm³. For patients 18 years or older, the average RNFL is 89 ± 12 μm, the vertical C:D ratio is 0.46 ± 0.18, the CST is 262 ± 20 μm, and the macular volume is 10.2 ± 0.4 mm³. When the average RNFL data are compared to a normative, age-adjusted database, 6 of 33 (18%) were thinner than the 5% limit. This study reports the distribution of SD-OCT data for patients with Marfan syndrome. Compared to a normative database, 18% of eyes with Marfan syndrome had RNFL thickness below the 5th percentile.
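The "thinner than the 5% limit" comparison amounts to flagging each measurement against the 5th percentile of an age-adjusted normative distribution. A minimal sketch, with placeholder normative values (95 ± 10 μm) that are not from the study or any real normative database:

```python
from statistics import NormalDist

# Hedged sketch: flagging RNFL values below the 5th percentile of a normative
# distribution. The normative mean/SD here (95 ± 10 um) are illustrative
# placeholders, not values from the study or any real normative database.
def below_5th_percentile(value_um, norm_mean=95.0, norm_sd=10.0):
    cutoff = NormalDist(norm_mean, norm_sd).inv_cdf(0.05)  # ~ mean - 1.645 SD
    return value_um < cutoff

measurements = [98, 72, 89, 75, 101, 86]
print([m for m in measurements if below_5th_percentile(m)])  # [72, 75]
```

Commercial SD-OCT software performs the same kind of check, but against empirically measured, age-stratified percentile bands rather than a normal approximation.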

  12. APCI as an innovative ionization mode compared with EI and CI for the analysis of a large range of organophosphate esters using GC-MS/MS.

    Science.gov (United States)

    Halloum, Wafaa; Cariou, Ronan; Dervilly-Pinel, Gaud; Jaber, Farouk; Le Bizec, Bruno

    2017-01-01

    Organophosphate esters (OPEs) are chemical compounds incorporated into materials as flame-retardant and/or plasticizing agents. In this work, 13 non-halogenated and 5 halogenated OPEs were studied. Their mass spectra were interpreted and compared in terms of fragmentation patterns and dominant ions across various ionization techniques [electron ionization (EI) and chemical ionization (CI) under vacuum, and corona discharge atmospheric pressure chemical ionization (APCI)] on gas chromatography coupled to mass spectrometry (GC-MS). The novelty of this paper lies in the investigation of the APCI technique for the analysis of OPEs via a favored protonation mechanism, where the mass spectra were mostly dominated by the quasi-molecular ion [M + H]+. The EI mass spectra were dominated by ions such as [H4PO4]+, [M-R]+, [M-Cl]+, and [M-Br]+, and for some non-halogenated aryl OPEs, [M]+● was also observed. The CI mass spectra in positive mode were dominated by [M + H]+ and sometimes by [M-R]+, while in negative mode, [M-R]-, and more particularly [X]- and [X2]-●, were mainly observed for the halogenated OPEs. Both EI and APCI techniques showed promising results for further development of an instrumental method operating in selected reaction monitoring mode. Instrumental detection limits using APCI mode were 2.5 to 25 times lower than using EI mode for the non-brominated OPEs, and 50-100 times lower for the two brominated OPEs. The method was applied to fish samples; the transitions monitored using APCI mode showed higher specificity but lower stability compared with EI mode, with sensitivity in terms of signal-to-noise ratio varying from one compound to another. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Comparative Long-term Study of a Large Series of Patients with Invasive Ductal Carcinoma and Invasive Lobular Carcinoma. Loco-Regional Recurrence, Metastasis, and Survival.

    Science.gov (United States)

    García-Fernández, Antonio; Lain, Josep María; Chabrera, Carol; García Font, Marc; Fraile, Manel; Barco, Israel; Torras, Merçe; Reñe, Asumpta; González, Sonia; González, Clarissa; Piqueras, Mercedes; Veloso, Enrique; Cirera, Lluís; Pessarrodona, Antoni; Giménez, Nuria

    2015-01-01

    Our aim was to compare histologic and immunohistochemical features, surgical treatment and clinical course, including disease recurrence, distant metastases, and mortality, between patients with invasive ductal carcinoma (IDC) or invasive lobular carcinoma (ILC). We included 1,745 patients operated on for 1,789 breast tumors: 1,600 patients with 1,639 IDC tumors and 145 patients with 150 ILC tumors. The median follow-up was 76 months. ILC was significantly more likely to be associated with a favorable phenotype. Prevalence of contralateral breast cancer was slightly higher for ILC patients than for IDC patients (4.0% versus 3.2%; p = n.s.). ILC was more likely to be multifocal, estrogen receptor positive, Human Epidermal Growth Factor Receptor-2 (HER2) negative, and of lower proliferative index compared to IDC. With conservative surgery, ILC patients more frequently required re-excision and/or mastectomy. Stage IIB and stage III disease were significantly more frequent in ILC patients than in IDC patients (37.4% versus 25.3%, p = 0.006). Positive nodes were significantly more frequent in ILC patients (44.6% versus 37.0%, p = 0.04). After adjustment for tumor size and nodal status, frequencies of recurrence/metastasis, disease-free survival and specific survival were similar among patients with IDC and patients with ILC. In conclusion, women with ILC do not have worse clinical outcomes than their counterparts with IDC. Management decisions should be based on individual patient and tumor biologic characteristics rather than on lobular versus ductal histology. © 2015 Wiley Periodicals, Inc.

  14. Superior Effects of Antiretroviral Treatment among Men Who have Sex with Men Compared to Other HIV At-Risk Populations in a Large Cohort Study in Hunan, China

    Directory of Open Access Journals (Sweden)

    Shu Su

    2016-03-01

    This study assesses the association between CD4 level at initiation of antiretroviral treatment (ART) and subsequent treatment outcomes and mortality among people infected with HIV via various routes in Hunan province, China. Over a period of 10 years, a total of 7333 HIV-positive patients, including 553 (7.5%) MSM, 5484 (74.8%) heterosexuals, 1164 (15.9%) injection drug users (IDU) and 132 (1.8%) former plasma donors (FPD), were recruited. MSM demonstrated a substantially higher initial CD4 cell level (242, IQR 167–298) than the other populations (heterosexuals: 144, IQR 40–242; IDU: 134, IQR 38–224; FPD: 86, IQR 36–181). During subsequent long-term follow-up, the median CD4 level in all participants increased significantly from 151 cells/mm³ (IQR 43–246) to 265 cells/mm³ (IQR 162–380), whereas the CD4 level in MSM remained at a high level between 242 and 361 cells/mm³. Consistently, both cumulative immunological and virological failure rates (10.4% and 26.4% at 48 months, respectively) were the lowest in MSM compared with the other population groups. Survival analysis indicated that an initial CD4 count ≤200 cells/mm³ (AHR = 3.14; CI, 2.43–4.06) significantly contributed to HIV-related mortality during treatment. Timely diagnosis and treatment of HIV patients are vital for improving CD4 levels and health outcomes.
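The survival analysis mentioned above rests on the Kaplan-Meier product-limit estimator, which can be illustrated with a minimal sketch; the follow-up times below are invented for illustration and are not the study's data:

```python
# Hedged sketch (invented follow-up data, not the study's): a minimal
# Kaplan-Meier estimator of the kind used in the survival analysis above.
# Assumes distinct event times; ties would need the usual grouped handling.
def kaplan_meier(times, events):
    """times: follow-up in months; events: 1 = death observed, 0 = censored.
    Returns (time, survival probability) pairs at each observed event time."""
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t, d in sorted(zip(times, events)):
        if d:
            surv *= (at_risk - 1) / at_risk
            curve.append((t, round(surv, 3)))
        at_risk -= 1
    return curve

print(kaplan_meier([6, 13, 21, 30, 31, 37], [1, 0, 1, 1, 0, 0]))
# [(6, 0.833), (21, 0.625), (30, 0.417)]
```

The adjusted hazard ratio (AHR) quoted in the abstract would come from a Cox proportional-hazards model fitted on top of such survival data, not from the estimator itself.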

  15. Water-perfused manometry vs three-dimensional high-resolution manometry: a comparative study on a large patient population with anorectal disorders.

    Science.gov (United States)

    Vitton, V; Ben Hadj Amor, W; Baumstarck, K; Grimaud, J-C; Bouvier, M

    2013-12-01

    Our aim was to compare for the first time measurements obtained with water-perfused catheter anorectal manometry and three-dimensional (3D) high-resolution manometry in patients with anorectal disorders. Consecutive patients referred to our centre for anorectal manometry (ARM) were recruited to undergo the two procedures successively. Conventional manometry was carried out using a water-perfused catheter (WPAM) and high-resolution manometry was achieved with a 3D probe (3DHRAM). For each procedure, parameters recorded included the following: anal canal length, resting pressure, squeeze pressure and rectal sensitivity. Two hundred and one patients were included in this study. The mean values for resting and squeeze pressures were correlated and found to be significantly higher when measured with 3DHRAM than with WPAM. However, the length of the anal canal was not significantly different when measured by the two techniques without correlation between the two mean values obtained. The presence of the rectoanal inhibitory reflex was systematically assessed by both WPAM and 3DHRAM and anismus was also systematically diagnosed by both WPAM and 3DHRAM. The pressure values obtained with 3DHRAM are correlated with those measured with conventional manometry but are systematically higher. 3DHRAM has the advantage of providing a pressure recording over the entire length and circumference of the anal canal, allowing a more useful physiological assessment of anorectal function. Colorectal Disease © 2013 The Association of Coloproctology of Great Britain and Ireland.

  16. Estimating cyclopoid copepod species richness and geographical distribution (Crustacea) across a large hydrographical basin: comparing between samples from the water column (plankton) and macrophyte stands

    Directory of Open Access Journals (Sweden)

    Gilmar Perbiche-Neves

    2014-06-01

    Species richness and geographical distribution of Cyclopoida freshwater copepods were analyzed along the "La Plata" River basin. Ninety-six samples were taken from 24 sampling sites, twelve sites for zooplankton in open waters and twelve sites for zooplankton within macrophyte stands, including reservoirs and lotic stretches. There were, on average, three species per sample in the plankton compared to five per sample in macrophytes. Six species were exclusive to the plankton, 10 to macrophyte stands, and 17 were common to both. Only one species was found in similar proportions in plankton and macrophytes, while five species were widely found in plankton, and thirteen in macrophytes. The distinction between species from open water zooplankton and macrophytes was supported by nonmetric multidimensional analysis. There was no distinct pattern of endemicity within the basin, and double sampling contributes to this result. This lack of sub-regional faunal differentiation is in accordance with other studies that have shown that cyclopoids generally have wide geographical distribution in the Neotropics and that some species there are cosmopolitan. This contrasts with other freshwater copepods such as Calanoida and some Harpacticoida. We conclude that sampling plankton and macrophytes together provided a more accurate estimate of the richness and geographical distribution of these organisms than sampling in either one of those zones alone.

  17. Comparative study between Federer and Gomez method for number of replication in complete randomized design using simulation: study of Areca Palm (Areca catechu) as organic waste for producing handicraft paper

    Science.gov (United States)

    Ihwah, A.; Deoranto, P.; Wijana, S.; Dewi, I. A.

    2018-03-01

    The economically valuable part of the Areca palm (Areca catechu) is the seed. It is commercially available in dried, cured and fresh forms, while the fibre is usually thrown away. Cellulose fibers from agricultural waste can be utilized as raw material for handicraft paper. Laboratory research showed that Areca palm fibre contains 70.2% cellulose, 10.92% water, and 6.02% ash. This indicates that Areca palm fibre has strong potential to be processed into handicraft paper. Handicraft paper is made of wastepaper or plants which contain cellulose to produce rough-textured paper. In order to obtain preferred sensory qualities of handicraft paper, such as color, fiber appearance and texture, as well as good physical qualities such as tensile strength, tear resistance and grammage, the addition of wastepaper to provide secondary fibre, and sometimes adhesive, is needed in making handicraft paper. Handicraft paper making is one alternative for treating this solid waste and reducing the use of wood fiber as a paper raw material. The aim of this study is to compare two well-known methods for calculating the number of replications, the Federer method and the Gomez method. This study is preliminary research, carried out before the main experiments, in order to determine the best treatment for producing handicraft paper. The Gomez method calculates fewer replications than the Federer method. Based on the data simulation, the error generated using 3 replicates was 0.0876, while that using 2 replicates was 0.1032.
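The Federer side of the comparison is commonly stated as a rule of thumb: in a completely randomized design with t treatments and r replications, the error degrees of freedom (t − 1)(r − 1) should be at least 15. A minimal sketch of that rule (the Gomez calculation is not reproduced here, and this code is not from the paper):

```python
# Hedged sketch (standard rule of thumb, not code from the paper): Federer's
# criterion for a completely randomized design asks for error degrees of
# freedom (t - 1)(r - 1) of at least 15, with t treatments and r replications.
def federer_min_replications(t, min_error_df=15):
    """Smallest r with (t - 1) * (r - 1) >= min_error_df."""
    if t < 2:
        raise ValueError("need at least two treatments")
    r = 2
    while (t - 1) * (r - 1) < min_error_df:
        r += 1
    return r

print(federer_min_replications(6))  # 4 replications, since 5 * 3 = 15
```

With fewer treatments the required r grows quickly (e.g. 4 treatments need 6 replications), which is consistent with the paper's observation that the two methods can disagree on the replication count.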

  18. 3rd year final contractor report for: U.S. Department of Energy Stewardship Science Academic Alliances Program Project Title: Detailed Measurements of Rayleigh-Taylor Mixing at Large and Small Atwood Numbers

    International Nuclear Information System (INIS)

    Malcolm J. Andrews

    2006-01-01

    This project had two major tasks: Task 1, the construction of a new air/helium facility to collect detailed measurements of Rayleigh-Taylor (RT) mixing at high Atwood number, and the distribution of these data to LLNL, LANL, and Alliance members for code validation and design purposes; and Task 2, the collection of initial condition data from the new air/helium facility, for use in validation of RT simulation codes at LLNL and LANL. This report describes work done in the last twelve (12) months of the project, and also contains a summary of the complete work done over the three (3) year life of the project. As of April 1, 2006, the air/helium facility (Task 1) is complete and extensive testing and validation of diagnostics has been performed. The initial condition studies (Task 2) are also complete. Detailed experiments with air/helium at Atwood numbers up to 0.1 have been completed, as have experiments at Atwood numbers of 0.25. Within the last three (3) months we have been able to successfully run the facility at Atwood numbers of 0.5. The progress matches the project plan, as does the budget. We have finished the initial condition studies using the water channel, and this work has been accepted for publication in the Journal of Fluid Mechanics (the top fluid mechanics journal). Mr. Nick Mueschke and Mr. Wayne Kraft are continuing their studies toward PhDs in the same field, and will also continue their collaboration visits to LANL and LLNL. Over its three (3) year life the project has supported two (2) Ph.D.'s and three (3) M.S.'s, and produced nine (9) international journal publications, twenty-four (24) conference publications, and numerous other reports. The highlight of the project has been our close collaboration with LLNL (Dr. Oleg Schilling) and LANL (Drs. Dimonte, Ristorcelli, Gore, and Harlow)
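The Atwood number quoted throughout is the dimensionless density contrast A = (ρ_heavy − ρ_light) / (ρ_heavy + ρ_light). A quick sketch, using approximate room-temperature gas densities rather than facility measurements:

```python
# Hedged sketch: the Atwood number A = (rho_heavy - rho_light) / (rho_heavy + rho_light)
# characterizes the density contrast driving Rayleigh-Taylor mixing. The gas
# densities below are approximate sea-level values, not facility measurements.
def atwood(rho_heavy, rho_light):
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

# Air against pure helium (approximate kg/m^3 at room temperature, 1 atm):
print(round(atwood(1.204, 0.166), 2))  # 0.76
```

Pure helium against air gives A near 0.76, so the intermediate Atwood numbers of 0.1-0.5 reported above correspond to helium diluted with air.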

  19. CD4+ T follicular helper and IgA+ B cell numbers in gut biopsies from HIV-infected subjects on antiretroviral therapy are comparable to HIV-uninfected individuals

    Directory of Open Access Journals (Sweden)

    John Zaunders

    2016-10-01

    Background: Disruption of gastrointestinal tract epithelial and immune barriers contributes to microbial translocation, systemic inflammation and progression of HIV-1 infection. Antiretroviral therapy (ART) may lead to reconstitution of CD4+ T cells in gastrointestinal-associated lymphoid tissue (GALT), but its impact on humoral immunity within GALT is unclear. Therefore we studied CD4+ subsets, including T follicular helper cells (Tfh), as well as resident B cells that have switched to IgA production, in gut biopsies from HIV+ subjects on suppressive ART, compared to HIV-negative controls. Methods: 23 HIV+ subjects on ART and 22 HIV-negative controls (HNC) undergoing colonoscopy were recruited to the study. Single cell suspensions were prepared from biopsies from left colon (LC), right colon (RC) and terminal ileum (TI). T and B lymphocyte subsets, as well as EpCAM+ epithelial cells, were accurately enumerated by flow cytometry, using counting beads. Results: No significant differences in the number of recovered epithelial cells were observed between the two subject groups. However, the median TI CD4+ T cell count/10⁶ epithelial cells was 2.4-fold lower in HIV+ subjects versus HNC (19,679 vs 47,504 cells; p=0.02). Similarly, median LC CD4+ T cell counts were reduced in HIV+ subjects (8,358 vs 18,577; p=0.03), but were not reduced in RC. Importantly, we found no significant differences in Tfh or IgA+ B cell counts at either site between HIV+ subjects and HNC. Further analysis showed no difference in CD4+, Tfh or IgA+ B cell counts between subjects who commenced ART in primary compared to chronic HIV-1 infection. Despite the decrease in total CD4 T cells, we could not identify a selective decrease of other key subsets of CD4+ T cells, including: CCR5+ cells; CD127+ long-term memory cells; CD103+ tissue resident cells; or CD161+ cells (surrogate marker for Th17), but there was a slight increase in the proportion of T regulatory cells. 
Conclusion: While there
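The counting-bead enumeration named in the Methods works by spiking a known number of beads into each sample; absolute counts then follow from the ratio of cell events to bead events. A minimal sketch with invented event counts, not the study's data:

```python
# Hedged sketch (illustrative numbers, not the study's): absolute cell
# counting with beads. With a known number of counting beads added per sample,
#   absolute cells = (cell events / bead events) * beads added.
def absolute_count(cell_events, bead_events, beads_added):
    return cell_events / bead_events * beads_added

# 1,250 CD4+ T cell events against 5,000 bead events, 50,000 beads added:
print(absolute_count(1250, 5000, 50000))  # 12500.0 cells
```

Normalizing such counts per 10⁶ EpCAM+ epithelial cells, as in the Results, then controls for variation in biopsy size between subjects.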

  20. Comparing the mannitol-egg yolk-polymyxin agar plating method with the three-tube most-probable-number method for enumeration of Bacillus cereus spores in raw and high-temperature, short-time pasteurized milk.

    Science.gov (United States)

    Harper, Nigel M; Getty, Kelly J K; Schmidt, Karen A; Nutsch, Abbey L; Linton, Richard H

    2011-03-01

    The U.S. Food and Drug Administration's Bacteriological Analytical Manual recommends two enumeration methods for Bacillus cereus: (i) a standard plate count method with mannitol-egg yolk-polymyxin (MYP) agar and (ii) a most-probable-number (MPN) method with tryptic soy broth (TSB) supplemented with 0.1% polymyxin sulfate. This study compared the effectiveness of the MYP and MPN methods for detecting and enumerating B. cereus in raw and high-temperature, short-time pasteurized skim (0.5%), 2%, and whole (3.5%) bovine milk stored at 4°C for 96 h. Each milk sample was inoculated with B. cereus EZ-Spores and sampled at 0, 48, and 96 h after inoculation. There were no differences (P > 0.05) in B. cereus populations among sampling times for all milk types, so data were pooled to obtain overall mean values for each treatment. The overall mean B. cereus population across pooled sampling times for the MPN method (2.59 log CFU/ml) was greater (P < 0.05) than that for the MYP plate count method. B. cereus populations in milk samples ranged from 2.36 to 3.46 and 2.66 to 3.58 log CFU/ml for inoculated milk treatments for the MYP plate count and MPN methods, respectively, which is below the level necessary for toxin production. The MPN method recovered more B. cereus, which makes it useful for validation research. However, the MYP plate count method for enumeration of B. cereus also had advantages, including its ease of use and faster time to results (2 versus 5 days for the MPN method).
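For intuition about how a three-tube MPN estimate is formed, Thomas's classical approximation can be sketched; note that the BAM procedure itself reads the MPN from lookup tables, and the tube volumes below are invented for illustration:

```python
import math

# Hedged sketch: Thomas's classical approximation for a most-probable-number
# estimate, MPN/100 ml = 100 * P / sqrt(V_neg * V_total), where P is the count
# of positive tubes, V_neg the sample volume (ml) in negative tubes, and
# V_total the sample volume in all tubes. The BAM method uses MPN tables; this
# formula is only a quick approximation, and the volumes below are invented.
def thomas_mpn(positives, vol_negative_ml, vol_total_ml):
    return 100.0 * positives / math.sqrt(vol_negative_ml * vol_total_ml)

# Three tubes each at 10, 1 and 0.1 ml with 3/3, 1/3, 0/3 positive:
# negative tubes hold 2 * 1 + 3 * 0.1 = 2.3 ml; all tubes hold 33.3 ml.
print(round(thomas_mpn(4, 2.3, 33.3)))  # 46 MPN per 100 ml
```

The five-day turnaround quoted for the MPN method comes from the incubation and confirmation steps, not from this arithmetic.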

  1. Comparative Study of IS6110 Restriction Fragment Length Polymorphism and Variable-Number Tandem-Repeat Typing of Mycobacterium tuberculosis Isolates in the Netherlands, Based on a 5-Year Nationwide Survey

    Science.gov (United States)

    de Beer, Jessica L.; van Ingen, Jakko; de Vries, Gerard; Erkens, Connie; Sebek, Maruschka; Mulder, Arnout; Sloot, Rosa; van den Brandt, Anne-Marie; Enaimi, Mimount; Kremer, Kristin; Supply, Philip

    2013-01-01

    In order to switch from IS6110 and polymorphic GC-rich repetitive sequence (PGRS) restriction fragment length polymorphism (RFLP) typing to 24-locus variable-number tandem-repeat (VNTR) typing of Mycobacterium tuberculosis complex isolates in the national tuberculosis control program in The Netherlands, a detailed evaluation of discriminatory power and agreement with the findings of cluster investigations was performed on 3,975 tuberculosis cases during the period of 2004 to 2008. The level of discrimination of the two typing methods did not differ substantially: RFLP typing yielded 2,733 distinct patterns compared to 2,607 in VNTR typing. The global concordance, defined as isolates labeled unique or identically distributed in clusters by both methods, amounted to 78.5% (n = 3,123). Of the remaining 855 cases, 12% (n = 479) were clustered only by VNTR typing, 7.7% (n = 305) only by RFLP typing, and 1.8% (n = 71) revealed different cluster compositions in the two approaches. A cluster investigation was performed for 87% (n = 1,462) of the cases clustered by RFLP typing. For the 740 cases with confirmed or presumed epidemiological links, 92% were concordant with VNTR typing. In contrast, only 64% of the 722 cases without an epidemiological link but clustered by RFLP typing were also clustered by VNTR typing. We conclude that VNTR typing has a discriminatory power equal to that of IS6110 RFLP typing but is in better agreement with the findings of cluster investigations based on RFLP clustering. Both aspects make VNTR typing a suitable method for tuberculosis surveillance systems. PMID:23363841

  2. Investigating the Randomness of Numbers

    Science.gov (United States)

    Pendleton, Kenn L.

    2009-01-01

    The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…

  3. Number Sense on the Number Line

    Science.gov (United States)

    Woods, Dawn Marie; Ketterlin Geller, Leanne; Basaraba, Deni

    2018-01-01

    A strong foundation in early number concepts is critical for students' future success in mathematics. Research suggests that visual representations, like a number line, support students' development of number sense by helping them create a mental representation of the order and magnitude of numbers. In addition, explicitly sequencing instruction…

  4. The Super Patalan Numbers

    OpenAIRE

    Richardson, Thomas M.

    2014-01-01

    We introduce the super Patalan numbers, a generalization of the super Catalan numbers in the sense of Gessel, and prove a number of properties analogous to those of the super Catalan numbers. The super Patalan numbers generalize the super Catalan numbers similarly to how the Patalan numbers generalize the Catalan numbers.

  5. [Intel random number generator-based true random number generator].

    Science.gov (United States)

    Huang, Feng; Shen, Hong

    2004-09-01

    To establish a true random number generator on the basis of certain Intel chips, the random numbers were acquired by programming using Microsoft Visual C++ 6.0, via register reading from the random number generator (RNG) unit of an Intel 815 chipset-based computer with the Intel Security Driver (ISD). We tested the generator with 500 random numbers using the NIST FIPS 140-1 and χ² R-squared tests; the results showed that the random numbers it generated satisfied the requirements of independence and uniform distribution. We also statistically compared the random numbers generated by the Intel RNG-based true random number generator with those from a random number table, using the same amount of 7,500 random numbers in the same value domain, which showed that the SD, SE and CV of the Intel RNG-based random number generator were less than those of the random number table. The result of a u test of the two CVs revealed no significant difference between the two methods. The Intel RNG-based random number generator can produce high-quality random numbers with good independence and uniform distribution, and solves some problems with random number tables in the acquisition of random numbers.
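
A minimal sketch of the kind of χ² uniformity check the record describes (the bin count, sample sizes, and the 5% critical value for 9 degrees of freedom are illustrative assumptions on my part, not details from the paper):

```python
import random
from collections import Counter

def chi_square_uniformity(samples, bins=10):
    """Chi-square statistic for uniformity of samples drawn from [0, 1)."""
    counts = Counter(min(int(x * bins), bins - 1) for x in samples)
    expected = len(samples) / bins
    return sum((counts.get(b, 0) - expected) ** 2 / expected for b in range(bins))

# A perfectly uniform sequence scores 0; for 10 bins (9 degrees of freedom)
# the 5% critical value is about 16.9, so a good generator should usually
# stay below it.
print(chi_square_uniformity([i / 10 + 0.05 for i in range(10)] * 750))  # → 0.0

random.seed(42)
print(chi_square_uniformity([random.random() for _ in range(7500)]))
```

A full randomness evaluation, as in NIST FIPS 140-1, combines several such tests (monobit, poker, runs), of which this is only the simplest flavor.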

  6. [The significance of a large number of health insurance funds and fusions for health services research with statutory health insurance data in Germany - experiences of the lidA study].

    Science.gov (United States)

    March, S; Powietzka, J; Stallmann, C; Swart, E

    2015-02-01

    Since 1970 the health insurance system in Germany has shrunk by more than 90%, to 132 statutory health insurance funds (SHI) at present. For studies using data from different SHI, this development means a reduction of contacts and a higher workload when requesting data. The latter is due to the fact that fusions bind resources in the health insurance funds. In order to avoid selection in studies among the insured, all SHI must be contacted. Additionally, 15 controlling institutions on the state and national level have to agree, as determined in § 75 of Book X of the German Social Code. The lidA study, a German cohort study on work, age and health, intends to link primary and secondary data from all SHI of those insured who have given their agreement for participation. Since the beginning of the study in 2009 the number of SHI has been reduced by 70. Of the 6,585 interviews in 2011, approximately half of the interviewees agreed in written form that their individual health insurance data could be linked. This portion of the insured is dispersed among 95 SHI. At this point, 11 contracts with SHI have been realised (covering approximately 50% of the insured) and 8 data controlling authorities have been contacted. The problems involved in the fusion of SHI and their meaning for research are explained in this article. The fusion of SHI makes sense for the long term. It will lead to a reduction of contacts and contracts that researchers have to establish in order to analyse the data. Therefore, this article also discusses the alternative of creating a meta-data set of all the data from the different SHI combined. © Georg Thieme Verlag KG Stuttgart · New York.

  7. Signals of lepton number violation

    CERN Document Server

    Panella, O; Srivastava, Y N

    1999-01-01

    The production of like-sign dileptons (LSD) in the high-energy lepton-number-violating (ΔL = +2) reaction pp → 2 jets + l⁺l⁺ (l = e, μ, τ), of interest for the experiments to be performed at the forthcoming Large Hadron Collider (LHC), is reported, taking up a composite-model scenario in which the exchanged virtual composite neutrino is assumed to be a Majorana particle. Numerical estimates of the corresponding signal cross-section, implementing kinematical cuts needed to suppress the standard model background, are presented which show that in some regions of the parameter space the total number of LSD events is well above the background. Assuming non-observation of the LSD signal, it is found that the LHC would exclude a composite Majorana neutrino up to 700 GeV (if one requires 10 events for discovery). The sensitivity of LHC experiments to the parameter space is then compared to that of the next generation of neutrinoless double beta decay (ββ0ν) experiment, GENIUS, and i...

  8. Thanks to CERN's team of surveyors, the Organization's stand at the Night of Science attracted a large number of visitors : the technology and tools used by the surveyors, such as the Terrameter shown here, attracted many visitors to the CERN stand

    CERN Multimedia

    2004-01-01

    Thanks to CERN's team of surveyors, the Organization's stand at the Night of Science attracted a large number of visitors : the technology and tools used by the surveyors, such as the Terrameter shown here, attracted many visitors to the CERN stand

  9. Volumetric modulated arc therapy versus step-and-shoot intensity modulated radiation therapy in the treatment of large nerve perineural spread to the skull base: a comparative dosimetric planning study

    Energy Technology Data Exchange (ETDEWEB)

    Gorayski, Peter; Fitzgerald, Rhys; Barry, Tamara [Department of Radiation Oncology, Princess Alexandra Hospital, Woolloongabba, Queensland (Australia); Burmeister, Elizabeth [Nursing Practice Development Unit, Princess Alexandra Hospital and Research Centre for Clinical and Community Practice Innovation, Griffith University, Brisbane, Queensland (Australia); Foote, Matthew [Department of Radiation Oncology, Princess Alexandra Hospital, Woolloongabba, Queensland (Australia); Diamantina Institute, University of Queensland, Brisbane, Queensland (Australia)

    2014-06-15

    Cutaneous squamous cell carcinoma with large nerve perineural (LNPN) infiltration of the base of skull is a radiotherapeutic challenge, given the proximity of complex target volumes to nearby organs at risk (OAR). A comparative planning study was undertaken to evaluate dosimetric differences between volumetric modulated arc therapy (VMAT) and intensity modulated radiation therapy (IMRT) in the treatment of LNPN. Five consecutive patients previously treated with IMRT for LNPN were selected. VMAT plans were generated for each case using the same planning target volumes (PTV), dose prescriptions and OAR constraints as IMRT. Comparative parameters used to assess target volume coverage, conformity and homogeneity included V95 of the PTV (volume encompassed by the 95% isodose), conformity index (CI) and homogeneity index (HI). In addition, OAR maximum point doses, V20, V30, non-target tissue (NTT) maximum point doses, NTT volume above reference dose, and monitor units (MU) were compared. The IMRT and VMAT plans generated were comparable for CI (P = 0.12) and HI (P = 0.89). VMAT plans achieved better V95 (P < 0.001) and reduced V20 and V30 by 652 cubic centimetres (cc) (28.5%) and 425.7 cc (29.1%), respectively. VMAT increased MU delivered by 18% without a corresponding increase in NTT dose. Compared with IMRT plans for LNPN, VMAT achieved comparable HI and CI.

  10. Comparative Study of IS6110 Restriction Fragment Length Polymorphism and Variable-Number Tandem-Repeat Typing of Mycobacterium tuberculosis Isolates in the Netherlands, Based on a 5-Year Nationwide Survey

    NARCIS (Netherlands)

    Beer, J.L. de; Ingen, J. van; Vries, G. de; Erkens, C.; Sebek, M.; Mulder, A.; Sloot, R.; Brandt, A.M. van den; Enaimi, M.; Kremer, K.; Supply, P.; Soolingen, D. van

    2013-01-01

    In order to switch from IS6110 and polymorphic GC-rich repetitive sequence (PGRS) restriction fragment length polymorphism (RFLP) to 24-locus variable-number tandem-repeat (VNTR) typing of Mycobacterium tuberculosis complex isolates in the national tuberculosis control program in The Netherlands, a

  12. Number-unconstrained quantum sensing

    Science.gov (United States)

    Mitchell, Morgan W.

    2017-12-01

    Quantum sensing is commonly described as a constrained optimization problem: maximize the information gained about an unknown quantity using a limited number of particles. Important sensors including gravitational wave interferometers and some atomic sensors do not appear to fit this description, because there is no external constraint on particle number. Here, we develop the theory of particle-number-unconstrained quantum sensing, and describe how optimal particle numbers emerge from the competition of particle-environment and particle-particle interactions. We apply the theory to optical probing of an atomic medium modeled as a resonant, saturable absorber, and observe the emergence of well-defined finite optima without external constraints. The results contradict some expectations from number-constrained quantum sensing and show that probing with squeezed beams can give a large sensitivity advantage over classical strategies when each is optimized for particle number.

  13. Measurement of lithium ion transference numbers of electrolytes for lithium-ion batteries. A comparative study with five various methods.; Messung von Lithium-Ionen Ueberfuehrungszahlen an Elektrolyten fuer Lithium-Ionen Batterien. Eine vergleichende Studie mit fuenf verschiedenen Methoden

    Energy Technology Data Exchange (ETDEWEB)

    Zugmann, Sandra

    2011-03-30

    Transference numbers are decisive transport properties for characterizing electrolytes. They state the fraction of charge transport carried by a certain species and are defined by the ratio of the current I_i transported by the ionic species i to the total current I. They are very important for lithium-ion batteries, because they give information about the real lithium transport and the efficiency of the battery. If the transference number is too small, for example, the lithium cannot be "delivered" fast enough in the discharge process. This can lead to precipitation of the salt at the anode and to depletion of the electrolyte at the cathode. Currently only a few adequate measurement methods for non-aqueous lithium electrolytes exist. The aim of this work was the installation of measurement devices and the comparison of different methods for determining transference numbers of electrolytes in lithium-ion batteries. The advantages and disadvantages of every method were analyzed and transference numbers of new electrolytes were measured. In this work a detailed comparison of different methods with electrochemical and spectroscopic factors is presented for the first time. Galvanostatic polarization, potentiostatic polarization, the emf method, determination by NMR and determination by conductivity measurements were tested for their practical application and used for different lithium salts in several solvents. The results show clearly that the assumptions made for each method strongly affect the measured transference number. The values obtained can differ depending on the method used, and the concentration dependence can even show contrary tendencies between electrochemical and spectroscopic methods. The influence of ion pairs is the determining factor in the measurements. For a full characterization of electrolytes a complete set of transport parameters is necessary, including diffusion coefficients, conductivity, transference number and ideally…
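
The defining ratio in the abstract, t_i = I_i / I, is simple enough to state directly; the sketch below is illustrative only, and the partial-current values are invented for the example, not measurements from the thesis:

```python
def transference_number(partial_currents, species):
    """t_i = I_i / I: the fraction of total current carried by species i."""
    total = sum(partial_currents.values())
    return partial_currents[species] / total

# Hypothetical partial currents (mA) for a binary lithium electrolyte:
currents = {"Li+": 0.36, "PF6-": 0.64}
print(transference_number(currents, "Li+"))  # ≈ 0.36
```

A cation transference number well below 0.5, as in this made-up example, is typical of liquid lithium electrolytes and illustrates the abstract's point: much of the current is carried by the anion rather than by lithium.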

  15. Large transverse momentum phenomena

    International Nuclear Information System (INIS)

    Brodsky, S.J.

    1977-09-01

    It is particularly significant that the quantum numbers of the leading particles are strongly correlated with the quantum numbers of the incident hadrons, indicating that the valence quarks themselves are transferred to large p_t. The crucial question is how they get there. Various hadron reactions are discussed, covering the structure of exclusive reactions, inclusive reactions, normalization of inclusive cross sections, charge correlations, and jet production at large transverse momentum. 46 references.

  16. Number words and number symbols a cultural history of numbers

    CERN Document Server

    Menninger, Karl

    1992-01-01

    Classic study discusses number sequence and language and explores written numerals and computations in many cultures. "The historian of mathematics will find much to interest him here both in the contents and viewpoint, while the casual reader is likely to be intrigued by the author's superior narrative ability."

  17. Malaria diagnosis from pooled blood samples: comparative analysis of real-time PCR, nested PCR and immunoassay as a platform for the molecular and serological diagnosis of malaria on a large-scale

    Directory of Open Access Journals (Sweden)

    Giselle FMC Lima

    2011-09-01

    Full Text Available Malaria diagnosis has traditionally been made using thick blood smears, but more sensitive and faster techniques are required to process large numbers of samples in clinical and epidemiological studies and in blood donor screening. Here, we evaluated molecular and serological tools to build a screening platform for pooled samples aimed at reducing both the time and the cost of these diagnoses. Positive and negative samples were analysed in individual and pooled experiments using real-time polymerase chain reaction (PCR), nested PCR and an immunochromatographic test. For the individual tests, 46/49 samples were positive by real-time PCR, 46/49 were positive by nested PCR and 32/46 were positive by the immunochromatographic test. For the assays performed using pooled samples, 13/15 samples were positive by real-time PCR and nested PCR and 11/15 were positive by the immunochromatographic test. These molecular methods demonstrated sensitivity and specificity for both the individual and pooled samples. Due to the advantages of real-time PCR, such as fast processing and a closed system, this method should be indicated as the first choice for use in large-scale diagnosis, with nested PCR used for species differentiation. However, additional field isolates should be tested to confirm the results achieved using cultured parasites, and the serological test should only be adopted as a complementary method for malaria diagnosis.

  18. Visuospatial Priming of the Mental Number Line

    Science.gov (United States)

    Stoianov, Ivilin; Kramer, Peter; Umilta, Carlo; Zorzi, Marco

    2008-01-01

    It has been argued that numbers are spatially organized along a "mental number line" that facilitates left-hand responses to small numbers, and right-hand responses to large numbers. We hypothesized that whenever the representations of visual and numerical space are concurrently activated, interactions can occur between them, before response…

  19. teaching multiplication of large positive whole numbers using ...

    African Journals Online (AJOL)

    KEY WORDS: Grating Method, History of Mathematics, Long Multiplication. ... The Wolfram mathworld (n.d.) opined that the ... A further simple random sampling was carried out to select an intact class of 40 students from each of the sampled ...

  20. Files synchronization from a large number of insertions and deletions

    Science.gov (United States)

    Ellappan, Vijayan; Kumari, Savera

    2017-11-01

    Synchronization between different versions of files is becoming a major issue that many applications face. To make these applications more efficient, an economical algorithm is developed from the previously used "File Loading Algorithm". We extend this algorithm in three ways: first, by dealing with non-binary files; second, by generating a backup for uploaded files; and lastly, by synchronizing each file across insertions and deletions. A user can reconstruct a file from the former file while minimizing the error, and the system also provides interactive communication without disturbance. The drawback of the previous system is overcome by using synchronization, in which multiple copies of each file/record are created, stored in a backup database, and efficiently restored in case of any unwanted deletion or loss of data. That is, we introduce a protocol that user B may use to reconstruct file X from file Y with suitably low probability of error. Synchronization algorithms find numerous areas of use, including data storage, file sharing, source code control systems, and cloud applications. For example, cloud storage services such as Dropbox synchronize between local copies and cloud backups each time users make changes to local versions. Similarly, synchronization tools are necessary in mobile devices. Specialized synchronization algorithms are used for video and sound editing. Synchronization tools are also capable of performing data duplication.
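
As an illustration of reconstructing one file version from another plus a delta of insertions and deletions (this uses Python's standard difflib as a stand-in, not the authors' protocol):

```python
import difflib

old = ["line one", "line two", "line three"]
new = ["line one", "line 2", "line three", "line four"]

# ndiff produces a line-level delta encoding the insertions and deletions
# needed to turn `old` into `new`.
delta = list(difflib.ndiff(old, new))

# Either side can be recovered from the delta alone: restore(delta, 1)
# yields the first sequence, restore(delta, 2) the second.
restored_old = list(difflib.restore(delta, 1))
restored_new = list(difflib.restore(delta, 2))
print(restored_old == old and restored_new == new)  # → True
```

Real synchronization protocols aim to do this without shipping the full delta, but the reconstruct-from-delta step is the same in spirit.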

  1. Large numbers hypothesis. IV - The cosmological constant and quantum physics

    Science.gov (United States)

    Adams, P. J.

    1983-01-01

    In standard physics quantum field theory is based on a flat vacuum space-time. This quantum field theory predicts a nonzero cosmological constant. Hence the gravitational field equations do not admit a flat vacuum space-time. This dilemma is resolved using the units covariant gravitational field equations. This paper shows that the field equations admit a flat vacuum space-time with nonzero cosmological constant if and only if the canonical LNH is valid. This allows an interpretation of the LNH phenomena in terms of a time-dependent vacuum state. If this is correct then the cosmological constant must be positive.

  2. The large Reynolds number - Asymptotic theory of turbulent boundary layers.

    Science.gov (United States)

    Mellor, G. L.

    1972-01-01

    A self-consistent, asymptotic expansion of the one-point, mean turbulent equations of motion is obtained. Results such as the velocity defect law and the law of the wall evolve in a relatively rigorous manner, and a systematic ordering of the mean velocity boundary layer equations and their interaction with the main stream flow are obtained. The analysis is extended to the turbulent energy equation and to a treatment of the small scale equilibrium range of Kolmogoroff; in velocity correlation space the two-thirds power law is obtained. Thus, the two well-known 'laws' of turbulent flow are imbedded in an analysis which provides a great deal of other information.

  3. Rabi-vibronic resonance with large number of vibrational quanta

    OpenAIRE

    Glenn, R.; Raikh, M. E.

    2011-01-01

    We study theoretically the Rabi oscillations of a resonantly driven two-level system linearly coupled to a harmonic oscillator (vibrational mode) with frequency ω_0. We show that for weak coupling, ω_p ≪ ω_0, where ω_p is the polaronic shift, Rabi oscillations are strongly modified in the vicinity of the Rabi-vibronic resonance Ω_R = ω_0, where Ω_R is the Rabi frequency. The width of the resonance is (Ω_R − ω_0) ~ ω_p^{2/3} ω_0^{1/3} ...

  4. Our prescription drugs kill us in large numbers

    DEFF Research Database (Denmark)

    Gøtzsche, Peter C

    2014-01-01

    Our prescription drugs are the third leading cause of death after heart disease and cancer in the United States and Europe. Around half of those who die have taken their drugs correctly; the other half die because of errors, such as too high a dose or use of a drug despite contraindications. Our...

  5. Diamond Fuzzy Number

    Directory of Open Access Journals (Sweden)

    T. Pathinathan

    2015-01-01

    Full Text Available In this paper we define diamond fuzzy number with the help of triangular fuzzy number. We include basic arithmetic operations like addition, subtraction of diamond fuzzy numbers with examples. We define diamond fuzzy matrix with some matrix properties. We have defined Nested diamond fuzzy number and Linked diamond fuzzy number. We have further classified Right Linked Diamond Fuzzy number and Left Linked Diamond Fuzzy number. Finally we have verified the arithmetic operations for the above mentioned types of Diamond Fuzzy Numbers.

  6. From Calculus to Number Theory

    Indian Academy of Sciences (India)

    A. Raghuram

    2016-11-04

    Nov 4, 2016 ... diverges to infinity. This means that, given any number M, however large, we can add sufficiently many terms in the above series to make the sum larger than M. This was first proved by Nicole Oresme (1323-1382), a brilliant French philosopher of his times.

  7. Building Numbers from Primes

    Science.gov (United States)

    Burkhart, Jerry

    2009-01-01

    Prime numbers are often described as the "building blocks" of natural numbers. This article shows how the author and his students took this idea literally by using prime factorizations to build numbers with blocks. In this activity, students explore many concepts of number theory, including the relationship between greatest common factors and…
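
The "building blocks" idea is easy to make concrete with a minimal trial-division factorizer (my own sketch, not code from the article):

```python
def prime_factors(n):
    """Return the prime factorization of n as a list of primes with multiplicity."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:   # peel off each factor of d
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(360))  # → [2, 2, 2, 3, 3, 5]
```

The blocks shared by two factorizations give the greatest common factor: 360 = 2·2·2·3·3·5 and 84 = 2·2·3·7 share the blocks [2, 2, 3], so gcd(360, 84) = 12.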

  8. Introduction to number theory

    CERN Document Server

    Vazzana, Anthony; Garth, David

    2007-01-01

    One of the oldest branches of mathematics, number theory is a vast field devoted to studying the properties of whole numbers. Offering a flexible format for a one- or two-semester course, Introduction to Number Theory uses worked examples, numerous exercises, and two popular software packages to describe a diverse array of number theory topics.

  9. Transitional boundary layer in low-Prandtl-number convection at high Rayleigh number

    Science.gov (United States)

    Schumacher, Joerg; Bandaru, Vinodh; Pandey, Ambrish; Scheel, Janet

    2016-11-01

    The boundary layer structure of the velocity and temperature fields in turbulent Rayleigh-Bénard flows in closed cylindrical cells of unit aspect ratio is revisited from a transitional and turbulent viscous boundary layer perspective. When the Rayleigh number is large enough, the boundary layer dynamics at the bottom and top plates can be separated into an impact region of downwelling plumes, an ejection region of upwelling plumes and an interior region (away from side walls) that is dominated by a shear flow of varying orientation. This interior plate region is compared here to classical wall-bounded shear flows. The working fluid is liquid mercury or liquid gallium at a Prandtl number of Pr = 0.021 for a range of Rayleigh numbers of 3 × 10⁵… Supported by the Deutsche Forschungsgemeinschaft.

  10. The natural number bias and its role in rational number understanding in children with dyscalculia. Delay or deficit?

    Science.gov (United States)

    Van Hoof, Jo; Verschaffel, Lieven; Ghesquière, Pol; Van Dooren, Wim

    2017-12-01

    Previous research indicated that in several cases learners' errors on rational number tasks can be attributed to learners' tendency to (wrongly) apply natural number properties. There exists a large body of literature both on learners' struggle with understanding the rational number system and on the role of the natural number bias in this struggle. However, little is known about this phenomenon in learners with dyscalculia. We investigated the rational number understanding of learners with dyscalculia and compared it with the rational number understanding of learners without dyscalculia. Three groups of learners were included: sixth graders with dyscalculia, a chronological age match group, and an ability match group. The results showed that the rational number understanding of learners with dyscalculia is significantly lower than that of typically developing peers, but not significantly different from younger learners, even after statistically controlling for mathematics achievement. Next to a delay in their mathematics achievement, learners with dyscalculia seem to have an extra delay in their rational number understanding, compared with peers. This is especially the case in those rational number tasks where one has to inhibit natural number knowledge to come to the right answer. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Long-term results of interventional treatment of large unresectable hepatocellular carcinoma (HCC): significant survival benefit from combined transcatheter arterial chemoembolization (TACE) and percutaneous ethanol injection (PEI) compared to TACE monotherapy

    International Nuclear Information System (INIS)

    Lubienski, A.; Bitsch, R.G.; Grenacher, L.; Kauffmann, G.W.; Schemmer, P.; Duex, M.

    2004-01-01

    Purpose: A retrospective analysis of long-term efficacy of combined transcatheter arterial chemoembolization (TACE) and percutaneous ethanol injection (PEI) and TACE monotherapy was conducted in patients with large, non-resectable hepatocellular carcinoma (HCC). Methods and Materials: Fifty patients with large, unresectable HCC lesions underwent selective TACE. Liver cirrhosis was present in 42 patients, due to alcohol abuse (n = 22) and viral infection (n = 17). In three patients, the underlying cause for liver cirrhosis remained unclear. Child A cirrhosis was found in 22 and Child B cirrhosis in 20 patients. Repeated and combined TACE and PEI were performed in 22 patients and repeated TACE monotherapy was performed in 28 patients. Survival and complication rates were determined and compared. Results: The 6-, 12-, 24- and 36-month survival rates were 61%, 21%, 4%, and 4% for TACE monotherapy and 77%, 55%, 39% and 22% for combined TACE and PEI (Kaplan-Meier method). The kind of treatment significantly affected the survival rate (p=0.002 log-rank test). Severe side effects were present in two patients of the monotherapy group and in three patients of the combination therapy group. (orig.)

  12. On the number of special numbers

    Indian Academy of Sciences (India)

    without loss of any generality to be the first k primes), then the equation a + b = c has .... This is an elementary exercise in partial summation (see [12]). Thus ... This is easily done by inserting a stronger form of the prime number theorem into the.

  13. Algorithm for counting large directed loops

    Energy Technology Data Exchange (ETDEWEB)

    Bianconi, Ginestra [Abdus Salam International Center for Theoretical Physics, Strada Costiera 11, 34014 Trieste (Italy); Gulbahce, Natali [Theoretical Division and Center for Nonlinear Studies, Los Alamos National Laboratory, NM 87545 (United States)

    2008-06-06

    We derive a Belief-Propagation algorithm for counting large loops in a directed network. We evaluate the distribution of the number of small loops in a directed random network with given degree sequence. We apply the algorithm to a few characteristic directed networks of various network sizes and loop structures and compare the algorithm with exhaustive counting results when possible. The algorithm is adequate in estimating loop counts for large directed networks and can be used to compare the loop structure of directed networks and their randomized counterparts.
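
The Belief-Propagation algorithm itself is not reproduced in the record; as a hedged point of comparison, small directed loops can be counted exactly on small graphs via powers of the adjacency matrix, since in a simple digraph without self-loops every directed triangle contributes exactly three length-3 closed walks to trace(A³):

```python
def matmul(a, b):
    """Multiply two square matrices given as lists of lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def count_directed_triangles(adj):
    """Directed 3-cycles in a simple digraph without self-loops: trace(A^3) / 3."""
    a3 = matmul(matmul(adj, adj), adj)
    return sum(a3[i][i] for i in range(len(adj))) // 3

# A 4-node digraph whose only directed 3-cycle is 0 -> 1 -> 2 -> 0:
adj = [[0, 1, 0, 0],
       [0, 0, 1, 0],
       [1, 0, 0, 1],
       [0, 0, 0, 0]]
print(count_directed_triangles(adj))  # → 1
```

This exhaustive approach costs a matrix product per loop length and is only feasible for small graphs and short loops, which is precisely the regime where message-passing estimates like Belief Propagation become attractive for large networks.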

  14. Classical theory of algebraic numbers

    CERN Document Server

    Ribenboim, Paulo

    2001-01-01

    Gauss created the theory of binary quadratic forms in "Disquisitiones Arithmeticae", and Kummer invented ideals and the theory of cyclotomic fields in his attempt to prove Fermat's Last Theorem. These were the starting points for the theory of algebraic numbers, developed in the classical papers of Dedekind, Dirichlet, Eisenstein, Hermite and many others. This theory, enriched with more recent contributions, is of basic importance in the study of diophantine equations and arithmetic algebraic geometry, including methods in cryptography. This book has a clear and thorough exposition of the classical theory of algebraic numbers, and contains a large number of exercises as well as worked-out numerical examples. The Introduction is a recapitulation of results about principal ideal domains, unique factorization domains and commutative fields. Part One is devoted to residue classes and quadratic residues. In Part Two one finds the study of algebraic integers, ideals, units, class numbers, the theory of decomposition, iner...

  15. It's a Girl! Random Numbers, Simulations, and the Law of Large Numbers

    Science.gov (United States)

    Goodwin, Chris; Ortiz, Enrique

    2015-01-01

    Modeling using mathematics and making inferences about mathematical situations are becoming more prevalent in most fields of study. Descriptive statistics cannot be used to generalize about a population or make predictions of what can occur. Instead, inference must be used. Simulation and sampling are essential in building a foundation for…
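The simulation-based inference the abstract describes can be sketched with a coin-flip model of births; the 0.5 girl probability and sample sizes are illustrative assumptions, not figures from the article.

```python
import random

def simulate_girl_proportion(n_births, p_girl=0.5, seed=42):
    """Simulate n_births independent births and return the observed
    proportion of girls -- the running estimate that the law of
    large numbers says converges to p_girl."""
    rng = random.Random(seed)
    girls = sum(rng.random() < p_girl for _ in range(n_births))
    return girls / n_births

# The empirical proportion settles toward 0.5 as the sample grows.
for n in (10, 1000, 100000):
    print(n, simulate_girl_proportion(n))
```

Small samples can stray far from 0.5, which is exactly why descriptive statistics alone cannot support generalization: the convergence guarantee is asymptotic.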

  16. Reappraisal of Epstein-Barr virus (EBV) in diffuse large B-cell lymphoma (DLBCL): comparative analysis between EBV-positive and EBV-negative DLBCL with EBV-positive bystander cells.

    Science.gov (United States)

    Ohashi, Akiko; Kato, Seiichi; Okamoto, Akinao; Inaguma, Yoko; Satou, Akira; Tsuzuki, Toyonori; Emi, Nobuhiko; Okamoto, Masataka; Nakamura, Shigeo

    2017-07-01

    Epstein-Barr virus (EBV)-positive diffuse large B-cell lymphoma (DLBCL) not otherwise specified is defined as monoclonal EBV+ B-cell proliferation affecting patients without any known immunosuppression. Non-neoplastic EBV+ cells proliferating in or adjacent to EBV- DLBCL were reported recently, but their clinical significance is unclear. Thus, the aim of this study was to investigate the prognostic impact of EBV+ cells in DLBCL. We compared the clinicopathological characteristics of 30 EBV+ DLBCL patients and 29 and 604 EBV- DLBCL patients with and without EBV+ bystander cells (median age of onset 71, 67 and 62 years, respectively). Both EBV+ DLBCL patients and EBV- DLBCL patients with EBV+ bystander cells tended to have high and high-intermediate International Prognostic Index scores (60% and 59%, respectively), as compared with only 46% of EBV- DLBCL patients without EBV+ bystander cells. EBV- DLBCL patients with EBV+ bystander cells showed a significantly higher incidence of lung involvement than those without EBV+ bystander cells (10% versus 2%, P bystander cells had a poorer prognosis than patients without any detectable EBV+ cells [median overall survival (OS) of 100 months and 40 months versus not reached, P bystander cells treated with rituximab showed overlapping survival curves (OS, P = 0.77; progression-free survival, P = 1.0). EBV- DLBCL with bystander EBV+ cells has similar clinical characteristics to EBV+ DLBCL. DLBCL with EBV+ bystander cells may be related to both age-related and microenvironment-related immunological deterioration. © 2017 John Wiley & Sons Ltd.

  17. On the number of special numbers

    Indian Academy of Sciences (India)

    We now apply the theory of the Thue equation to obtain an effective bound on m. Indeed, by Lemma 3.2, we can write m² = ba³ and m² − 4 = cd³ with b, c cubefree. By the above, both b and c are bounded, since they are cubefree and all their prime factors are less than e^{63727}. We now have a finite number of Thue equations.
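The decomposition m² = ba³ with b cubefree used above can be computed explicitly; this is an illustrative trial-division sketch for small inputs, not part of the paper's argument.

```python
def cubefree_decomposition(n):
    """Write n = b * a**3 with b cubefree, as in m^2 = b*a^3 above.

    Trial division up to the cube root suffices: any prime left over
    afterwards can appear with exponent at most 2, so it belongs to
    the cubefree part b.
    """
    a, b = 1, 1
    d = 2
    while d * d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        a *= d ** (e // 3)   # cube part
        b *= d ** (e % 3)    # cubefree remainder
        d += 1
    return b * n, a

print(cubefree_decomposition(216))  # 216 = 6^3       -> (1, 6)
print(cubefree_decomposition(250))  # 250 = 2 * 5^3   -> (2, 5)
```

Bounding the prime factors of b and c then leaves only finitely many pairs (b, c), hence finitely many Thue equations to solve.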

  18. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4% of the world's hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so large scale hydrogen production plants will need to be installed. In this context, the development of low cost large scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and center of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large scale electrolysers for future hydrogen production. The different electrolysis technologies were compared, and a state of the art of the currently available electrolysis modules was compiled. A review of the large scale electrolysis plants installed around the world was also carried out, and the main projects related to large scale electrolysis were listed. The economics of large scale electrolysers are discussed, and the influence of energy prices on the hydrogen production cost by large scale electrolysis was evaluated. (authors)
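The sensitivity of hydrogen cost to electricity price can be sketched with a back-of-envelope calculation; the ~50 kWh per kg of H2 specific consumption and the price points below are typical illustrative assumptions, not figures from the ALPHEA study.

```python
def hydrogen_energy_cost(electricity_price_per_kwh, kwh_per_kg=50.0):
    """Electricity component of the hydrogen production cost, in
    currency units per kg of H2. kwh_per_kg ~ 50 is an assumed
    typical specific consumption for water electrolysis."""
    return electricity_price_per_kwh * kwh_per_kg

# Illustrative electricity prices in EUR/kWh.
for price in (0.03, 0.06, 0.10):
    print(f"{price:.2f} EUR/kWh -> {hydrogen_energy_cost(price):.2f} EUR/kg H2")
```

Because electricity dominates the variable cost, the computed cost scales linearly with the power price, which is why access to cheap 'clean power' is the deciding factor for large scale electrolysis.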

  19. Robotic Assisted Simple Prostatectomy versus Holmium Laser Enucleation of the Prostate for Lower Urinary Tract Symptoms in Patients with Large Volume Prostate: A Comparative Analysis from a High Volume Center.

    Science.gov (United States)

    Umari, Paolo; Fossati, Nicola; Gandaglia, Giorgio; Pokorny, Morgan; De Groote, Ruben; Geurts, Nicolas; Goossens, Marijn; Schatterman, Peter; De Naeyer, Geert; Mottrie, Alexandre

    2017-04-01

    We report a comparative analysis of robotic assisted simple prostatectomy vs holmium laser enucleation of the prostate in patients who had benign prostatic hyperplasia with a large volume prostate (greater than 100 ml). A total of 81 patients underwent robotic assisted simple prostatectomy and 45 underwent holmium laser enucleation of the prostate in a 7-year period. Patients were preoperatively assessed with transrectal ultrasound and uroflowmetry. Functional parameters were assessed postoperatively during followup. Perioperative outcomes included operative time, postoperative hemoglobin, catheterization time and hospitalization. Complications were reported according to the Clavien-Dindo classification. Compared to the holmium laser enucleation group, patients treated with prostatectomy were significantly younger (median age 69 vs 74 years, p = 0.032) and less healthy (Charlson comorbidity index 2 or greater in 62% vs 29%, p = 0.0003), and had a lower rate of suprapubic catheterization (23% vs 42%, p = 0.028) and a higher preoperative I-PSS (International Prostate Symptom Score) (25 vs 21, p = 0.049). Both groups showed an improvement in the maximum flow rate (15 vs 11 ml per second, p = 0.7), and a significant reduction in post-void residual urine (-73 vs -100 ml, p = 0.4) and I-PSS (-20 vs -18, p = 0.8). Median operative time (105 vs 105 minutes, p = 0.9) and postoperative hemoglobin (13.2 vs 13.8 gm/dl, p = 0.08) were similar for robotic assisted prostatectomy and holmium laser enucleation, respectively. Median catheterization time (3 vs 2 days, p = 0.005) and median hospitalization (4 vs 2 days, p = 0.0001) were slightly shorter in the holmium laser group. Complication rates were similar with no Clavien grade greater than 3 in either group. Our results from a single center suggest comparable outcomes for robotic assisted simple prostatectomy and holmium laser enucleation of the prostate in patients with a large volume prostate. These findings require…

  20. Number projection method

    International Nuclear Information System (INIS)

    Kaneko, K.

    1987-01-01

    A relationship between the number projection and the shell model methods is investigated in the case of a single-j shell. We find a one-to-one correspondence between the number-projected and the shell model states.
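The particle-number projection referred to here is conventionally written as an integral over a gauge angle; the standard textbook form (not taken from this specific paper) is:

```latex
\hat{P}_N \;=\; \frac{1}{2\pi} \int_0^{2\pi} d\varphi \, e^{\,i\varphi(\hat{N} - N)} ,
```

where \(\hat{N}\) is the number operator and \(N\) the desired particle number. Acting on a BCS-type state, which superposes components of different particle number, \(\hat{P}_N\) extracts the component with exactly \(N\) particles, which can then be compared state by state with the shell model basis.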