Large number discrimination by mosquitofish.
Directory of Open Access Journals (Sweden)
Christian Agrillo
Full Text Available BACKGROUND: Recent studies have demonstrated that fish display rudimentary numerical abilities similar to those observed in mammals and birds. The mechanisms underlying the discrimination of small quantities (<4) were recently investigated while, to date, no study has examined the discrimination of large numerosities in fish. METHODOLOGY/PRINCIPAL FINDINGS: Subjects were trained to discriminate between two sets of small geometric figures using social reinforcement. In the first experiment mosquitofish were required to discriminate 4 from 8 objects with or without experimental control of the continuous variables that co-vary with number (area, space, density, total luminance). Results showed that fish can use numerical information alone to compare quantities but that they preferentially use cumulative surface area as a proxy for number when this information is available. A second experiment investigated the influence of the total number of elements on the discrimination of large quantities. Fish proved able to discriminate up to 100 vs. 200 objects, without showing any significant decrease in accuracy compared with the 4 vs. 8 discrimination. The third experiment investigated the influence of the ratio between the numerosities. Performance was found to decrease as the ratio approached unity: fish were able to discriminate numbers when the ratios were 1:2 or 2:3 but not when the ratio was 3:4. The performance of a sample of undergraduate students, tested non-verbally using the same sets of stimuli, largely overlapped that of fish. CONCLUSIONS/SIGNIFICANCE: Fish are able to use pure numerical information when discriminating between quantities larger than 4 units. As observed in human and non-human primates, the numerical system of fish appears to have virtually no upper limit, while the numerical ratio has a clear effect on performance. These similarities further reinforce the view of a common origin of non-verbal numerical systems in all vertebrates.
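The ratio effect reported in the third experiment can be made concrete with a short sketch (not from the paper; the particular numerosities below are hypothetical examples of each tested ratio): discriminability tracks the ratio of the smaller to the larger quantity, and performance collapses as that ratio approaches 1.

```python
def numerical_ratio(small, large):
    """Weber-style measure: ratio of the smaller to the larger numerosity."""
    return small / large

# Hypothetical stimulus pairs realizing the three tested ratios.
contrasts = {"1:2": (8, 16), "2:3": (8, 12), "3:4": (9, 12)}
for label, (a, b) in contrasts.items():
    # Fish succeeded at 1:2 (0.50) and 2:3 (~0.67) but failed at 3:4 (0.75).
    print(label, round(numerical_ratio(a, b), 2))
```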
Large number discrimination in newborn fish.
Directory of Open Access Journals (Sweden)
Laura Piffer
Full Text Available Quantitative abilities have been reported in a wide range of species, including fish. Recent studies have shown that adult guppies (Poecilia reticulata) can spontaneously select the larger number of conspecifics. In particular, the evidence collected in the literature suggests the existence of two distinct systems of number representation: a precise system up to 4 units, and an approximate system for larger numbers. Spontaneous numerical abilities, however, seem to be limited to 4 units at birth, and it is currently unclear whether or not the large number system is absent during the first days of life. In the present study, we investigated whether newborn guppies can be trained to discriminate between large quantities. Subjects were required to discriminate between groups of dots with a 0.50 ratio (e.g., 7 vs. 14) in order to obtain a food reward. To dissociate the roles of number and of the continuous quantities that co-vary with numerical information (such as cumulative surface area, space and density), three different experiments were set up: in Exp. 1 number and continuous quantities were simultaneously available; in Exp. 2 we controlled for continuous quantities and only numerical information was available; in Exp. 3 numerical information was made irrelevant and only continuous quantities were available. Subjects successfully solved the tasks in Exp. 1 and 2, providing the first evidence of large number discrimination in newborn fish. No discrimination was found in Exp. 3, meaning that number acuity is better than spatial acuity. A comparison with the onset of numerical abilities observed in shoal-choice tests suggests that training procedures can promote the development of numerical abilities in guppies.
Forecasting distribution of numbers of large fires
Haiganoush K. Preisler; Jeff Eidenshink; Stephen Howard; Robert E. Burgan
2015-01-01
Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however, they indicate neither the chance that a large fire will occur, nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the...
Thermal convection for large Prandtl numbers
Grossmann, Siegfried; Lohse, Detlef
2001-01-01
The Rayleigh-Bénard theory by Grossmann and Lohse [J. Fluid Mech. 407, 27 (2000)] is extended towards very large Prandtl numbers Pr. The Nusselt number Nu is found here to be independent of Pr. However, for fixed Rayleigh numbers Ra a maximum in the Nu(Pr) dependence is predicted. We moreover offer
Hierarchies in Quantum Gravity: Large Numbers, Small Numbers, and Axions
Stout, John Eldon
Our knowledge of the physical world is mediated by relatively simple, effective descriptions of complex processes. By their very nature, these effective theories obscure any phenomena outside their finite range of validity, discarding information crucial to understanding the full, quantum gravitational theory. However, we may gain enormous insight into the full theory by understanding how effective theories with extreme characteristics--for example, those which realize large-field inflation or have disparate hierarchies of scales--can be naturally realized in consistent theories of quantum gravity. The work in this dissertation focuses on understanding the quantum gravitational constraints on these "extreme" theories in well-controlled corners of string theory. Axion monodromy provides one mechanism for realizing large-field inflation in quantum gravity. These models spontaneously break an axion's discrete shift symmetry and, assuming that the corrections induced by this breaking remain small throughout the excursion, create a long, quasi-flat direction in field space. This weakly-broken shift symmetry has been used to construct a dynamical solution to the Higgs hierarchy problem, dubbed the "relaxion." We study this relaxion mechanism and show that--without major modifications--it can not be naturally embedded within string theory. In particular, we find corrections to the relaxion potential--due to the ten-dimensional backreaction of monodromy charge--that conflict with naive notions of technical naturalness and render the mechanism ineffective. The super-Planckian field displacements necessary for large-field inflation may also be realized via the collective motion of many aligned axions. However, it is not clear that string theory provides the structures necessary for this to occur. We search for these structures by explicitly constructing the leading order potential for C4 axions and computing the maximum possible field displacement in all compactifications of
Large numbers hypothesis. II - Electromagnetic radiation
Adams, P. J.
1983-01-01
This paper develops the theory of electromagnetic radiation in the units-covariant formalism incorporating Dirac's large numbers hypothesis (LNH). A direct field-to-particle technique is used to obtain the photon propagation equation, which explicitly involves the photon replication rate. This replication rate is fixed uniquely by requiring that the form of a free-photon distribution function be preserved, as required by the 2.7 K cosmic radiation. One finds that with this particular photon replication rate the units-covariant formalism developed in Paper I actually predicts that the ratio of photon number to proton number in the universe varies as t^(1/4), precisely in accord with the LNH. The cosmological redshift law is also derived and is shown to differ considerably from the standard form νR = const.
Forecasting distribution of numbers of large fires
Eidenshink, Jeffery C.; Preisler, Haiganoush K.; Howard, Stephen; Burgan, Robert E.
2014-01-01
Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however, they indicate neither the chance that a large fire will occur, nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the Monitoring Trends in Burn Severity project, and satellite and surface observations of fuel conditions in the form of the Fire Potential Index, to estimate two aspects of fire danger: 1) the probability that a 1 acre ignition will result in a 100+ acre fire, and 2) the probabilities of having at least 1, 2, 3, or 4 large fires within a Predictive Services Area in the forthcoming week. These statistical processes are the main thrust of the paper and are used to produce two daily national forecasts that are available from the U.S. Geological Survey, Earth Resources Observation and Science Center and via the Wildland Fire Assessment System. A validation study of our forecasts for the 2013 fire season demonstrated good agreement between observed and forecasted values.
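The second forecast quantity lends itself to a minimal illustration. Assuming, for the sketch only, that the weekly count of large fires in an area is binomial (n expected ignitions, each growing into a 100+ acre fire with probability p; the paper's actual statistical model is more elaborate), the probabilities of at least 1, 2, 3, or 4 large fires follow directly:

```python
from math import comb

def prob_at_least_k(n, p, k):
    """P(at least k successes) for a binomial(n, p) count of large fires."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Hypothetical inputs: 20 one-acre ignitions expected this week, each with
# a 5% chance of growing past 100 acres.
for k in (1, 2, 3, 4):
    print(k, round(prob_at_least_k(20, 0.05, k), 3))
```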
Modified large number theory with constant G
International Nuclear Information System (INIS)
Recami, E.
1983-01-01
The inspiring ''numerology'' uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the ''gravitational world'' (cosmos) with the ''strong world'' (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the ''Large Number Theory,'' cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R-bar/r-bar of the cosmos typical length R-bar to the hadron typical length r-bar is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the ''cyclical big-bang'' hypothesis, then R-bar and r-bar can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavsic.
New feature for an old large number
International Nuclear Information System (INIS)
Novello, M.; Oliveira, L.R.A.
1986-01-01
A new context for the appearance of the Eddington number (10^39), which arises in the examination of the elastic scattering of scalar particles (πK → πK) non-minimally coupled to gravity, is presented. (author) [pt
48 CFR 1604.7201 - FEHB Program Large Provider Agreements.
2010-10-01
... FEDERAL EMPLOYEES HEALTH BENEFITS ACQUISITION REGULATION GENERAL ADMINISTRATIVE MATTERS Large Provider... into any Large Provider Agreement; and (ii) Not less than 60 days before exercising renewals or other...
Factors Affecting Number of Diabetes Management Activities Provided by Pharmacists.
Lo, Annie; Lorenz, Kathleen; Cor, Ken; Simpson, Scot H
2016-12-01
Legislative changes since 2007 have given Alberta pharmacists additional authorizations and new practice settings, which should enhance the provision of clinical services to patients. This study examined whether these changes are related to the number of diabetes management activities provided by pharmacists. Cross-sectional surveys of Alberta pharmacists were conducted in 2006 and 2015. Both questionnaires contained 63 diabetes management activities, with response options to indicate how frequently each activity was provided. Respondents were grouped by survey year, practice setting, diabetes-specific training and additional authorizations. The numbers of diabetes management activities provided often or always were compared among groups by using analysis of variance. Data from 128 pharmacists participating in the 2006 survey were compared with 256 pharmacists participating in the 2015 survey; overall mean age was 41.6 (±10.9) years, 245 (64%) were women, mean duration of practice was 16.1 (±11.8) years, 280 (73%) were community pharmacists, 75 (20%) were certified diabetes educators (CDEs), and 100 (26%) had additional prescribing authorization (APA). Pharmacists provided a mean of 28.7 (95% CI 26.3 to 31.2) diabetes management activities in 2006 and 35.2 (95% CI 33.4 to 37.0) activities in 2015 (p<0.001). Pharmacists who were CDEs provided significantly more activities than other pharmacists (p<0.001). In 2015, working in a primary care network and having APA were also associated with provision of more activities (p<0.05 for both comparisons). Pharmacists provided more diabetes management activities in 2015 than in 2006. The number of diabetes management activities was also associated with being a CDE, working in a primary care network or having APA. Copyright © 2016 Canadian Diabetes Association. Published by Elsevier Inc. All rights reserved.
Service Provider Revenue Dependence of Offered Number of Service Classes
Directory of Open Access Journals (Sweden)
V. S. Aćimović-Raspopović
2011-06-01
Full Text Available In this paper, possible applications of a responsive pricing scheme and a Stackelberg game for pricing telecommunication services, with the service provider as leader and users acting as followers, are analyzed. We have classified users according to an elasticity criterion into inelastic, partially elastic and elastic users. Their preferences are modelled through utility functions, which describe users' sensitivity to changes in the quality of service and price. In the proposed algorithm, a bandwidth management server is responsible for performing automatic optimal bandwidth allocation to each user's session while maximizing its expected utility and the overall service provider's revenue. The pricing algorithm is used for congestion control and more efficient network capacity utilization. We have analyzed different scenarios of the proposed usage-based pricing algorithm. In particular, the influence of the number of service classes on price setting, in terms of service provider's revenue and total users' utility maximization, is discussed. The model is verified through numerous simulations performed with software that we have developed for that purpose.
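The leader-follower structure can be sketched in a few lines, under an assumed quadratic user utility (this toy model and all its parameter values are illustrative, not taken from the paper):

```python
# Follower (user): given price p, choose bandwidth q maximizing
# u(q) - p*q with u(q) = a*q - b*q**2/2, giving q*(p) = (a - p)/b.
def best_response(p, a=10.0, b=1.0):
    return max((a - p) / b, 0.0)

# Leader (provider): choose the price p maximizing revenue p * q*(p),
# anticipating the follower's best response (the Stackelberg structure).
def revenue(p, a=10.0, b=1.0):
    return p * best_response(p, a, b)

prices = [i / 100 for i in range(1, 1001)]
p_star = max(prices, key=revenue)
print(p_star)  # grid search lands on the closed-form optimum a/2 = 5.0
```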
48 CFR 1652.204-74 - Large provider agreements.
2010-10-01
... FEDERAL EMPLOYEES HEALTH BENEFITS ACQUISITION REGULATION CLAUSES AND FORMS CONTRACT CLAUSES Texts of FEHBP... Large Provider Agreement; and (ii) Not less than 60 days before exercising a renewal or other option, or... exercising a simple renewal or other option contemplated by a Large Provider Agreement that OPM previously...
Thermocapillary Bubble Migration: Thermal Boundary Layers for Large Marangoni Numbers
Balasubramaniam, R.; Subramanian, R. S.
1996-01-01
The migration of an isolated gas bubble in an immiscible liquid possessing a temperature gradient is analyzed in the absence of gravity. The driving force for the bubble motion is the shear stress at the interface which is a consequence of the temperature dependence of the surface tension. The analysis is performed under conditions for which the Marangoni number is large, i.e. energy is transferred predominantly by convection. Velocity fields in the limit of both small and large Reynolds numbers are used. The thermal problem is treated by standard boundary layer theory. The outer temperature field is obtained in the vicinity of the bubble. A similarity solution is obtained for the inner temperature field. For both small and large Reynolds numbers, the asymptotic values of the scaled migration velocity of the bubble in the limit of large Marangoni numbers are calculated. The results show that the migration velocity has the same scaling for both low and large Reynolds numbers, but with a different coefficient. Higher order thermal boundary layers are analyzed for the large Reynolds number flow field and the higher order corrections to the migration velocity are obtained. Results are also presented for the momentum boundary layer and the thermal wake behind the bubble, for large Reynolds number conditions.
On a strong law of large numbers for monotone measures
Czech Academy of Sciences Publication Activity Database
Agahi, H.; Mohammadpour, A.; Mesiar, Radko; Ouyang, Y.
2013-01-01
Roč. 83, č. 4 (2013), s. 1213-1218 ISSN 0167-7152 R&D Projects: GA ČR GAP402/11/0378 Institutional support: RVO:67985556 Keywords : capacity * Choquet integral * strong law of large numbers Subject RIV: BA - General Mathematics Impact factor: 0.531, year: 2013 http://library.utia.cas.cz/separaty/2013/E/mesiar-on a strong law of large numbers for monotone measures.pdf
A Chain Perspective on Large-scale Number Systems
Grijpink, J.H.A.M.
2012-01-01
As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena
Administrative and clinical denials by a large dental insurance provider
Directory of Open Access Journals (Sweden)
Geraldo Elias MIRANDA
2015-01-01
Full Text Available The objective of this study was to assess the prevalence and the type of claim denials (administrative, clinical or both) made by a large dental insurance plan. This was a cross-sectional, observational study, which retrospectively collected data from the claims and denial reports of a dental insurance company. The sample consisted of the payment claims submitted by network dentists, based on their procedure reports, reviewed in the third trimester of 2012. The denials were classified and grouped into 'administrative', 'clinical' or 'both'. The data were tabulated and submitted to uni- and bivariate analyses. The confidence intervals were 95% and the level of significance was set at 5%. The overall frequency of denials was 8.2% of the total number of procedures performed. The frequency of administrative denials was 72.88%, that of clinical denials 25.95%, and that of both, 1.17% (p < 0.05). It was concluded that the overall prevalence of denials in the studied sample was low. Administrative denials were the most prevalent. This type of denial could be reduced if all dental insurance providers had unified clinical and administrative protocols, and if dentists submitted all of the required documentation in accordance with these protocols.
The large numbers hypothesis and a relativistic theory of gravitation
International Nuclear Information System (INIS)
Lau, Y.K.; Prokhovnik, S.J.
1986-01-01
A way to reconcile Dirac's large numbers hypothesis and Einstein's theory of gravitation was recently suggested by Lau (1985). It is characterized by the conjecture of a time-dependent cosmological term and gravitational term in Einstein's field equations. Motivated by this conjecture and the large numbers hypothesis, we formulate here a scalar-tensor theory in terms of an action principle. The cosmological term is required to be spatially dependent as well as time dependent in general. The theory developed is applied to a cosmological model compatible with the large numbers hypothesis. The time-dependent form of the cosmological term and the scalar potential are then deduced. A possible explanation of the smallness of the cosmological term is also given, and the possible significance of the scalar field is speculated upon.
Fatal crashes involving large numbers of vehicles and weather.
Wang, Ying; Liang, Liming; Evans, Leonard
2017-12-01
Adverse weather has been recognized as a significant threat to traffic safety. However, relationships between fatal crashes involving large numbers of vehicles and weather have rarely been studied, owing to the low occurrence of crashes involving large numbers of vehicles. Using all 1,513,792 fatal crashes in the Fatality Analysis Reporting System (FARS) data, 1975-2014, we characterized these relationships. We found: (a) fatal crashes involving more than 35 vehicles are most likely to occur in snow or fog; (b) fatal crashes in rain are three times as likely to involve 10 or more vehicles as fatal crashes in good weather; (c) fatal crashes in snow [or fog] are 24 times [35 times] as likely to involve 10 or more vehicles as fatal crashes in good weather. Had the example used 20 vehicles, the risk ratios would be 6 for rain, 158 for snow, and 171 for fog. Driver deaths per fatal crash increase slowly with increasing numbers of involved vehicles when it is snowing or raining, but more steeply when clear or foggy. We conclude that, in order to reduce the risk of involvement in crashes involving large numbers of vehicles, drivers must reduce speed in fog, and in snow or rain reduce speed by even more than they already do. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
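The "X times as likely" comparisons above are risk ratios. A minimal sketch with made-up counts (the actual FARS tabulations are not reproduced here):

```python
def risk_ratio(big_adverse, total_adverse, big_clear, total_clear):
    """How many times more likely a fatal crash in adverse weather is to
    involve 10+ vehicles, relative to a fatal crash in good weather."""
    # Cross-multiplied form keeps integer counts exact before the division.
    return (big_adverse * total_clear) / (total_adverse * big_clear)

# Hypothetical counts chosen to reproduce finding (b): crashes in rain are
# 3 times as likely to involve 10+ vehicles as crashes in good weather.
print(risk_ratio(30, 100_000, 10, 100_000))  # -> 3.0
```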
On Independence for Capacities with Law of Large Numbers
Huang, Weihuan
2017-01-01
This paper introduces new notions of Fubini independence and Exponential independence of random variables under capacities to fit Ellsberg's model, and establishes the relationships among Fubini independence, Exponential independence, Maccheroni and Marinacci's independence, and Peng's independence. As an application, we give a weak law of large numbers for capacities under Exponential independence.
Teaching Multiplication of Large Positive Whole Numbers Using ...
African Journals Online (AJOL)
This study investigated the teaching of multiplication of large positive whole numbers using the grating method and the effect of this method on students' performance in junior secondary schools. The study was conducted in Obio Akpor Local Government Area of Rivers state. It was quasi- experimental. Two research ...
Lovelock inflation and the number of large dimensions
Ferrer, Francesc
2007-01-01
We discuss an inflationary scenario based on Lovelock terms. These higher order curvature terms can lead to inflation when there are more than three spatial dimensions. Inflation will end if the extra dimensions are stabilised, so that at most three dimensions are free to expand. This relates graceful exit to the number of large dimensions.
A large number of stepping motor network construction by PLC
Mei, Lin; Zhang, Kai; Hongqiang, Guo
2017-11-01
In a flexible automatic line the equipment is complex and the control modes are varied, so achieving orderly control and efficient information interaction among a large number of stepper and servo motors is difficult. Based on an existing flexible production line, this paper makes a comparative study of its network strategy. Following this study, an Ethernet + PROFIBUS communication configuration based on PROFINET IO and PROFIBUS is proposed, which can effectively improve the data-interaction efficiency of the equipment and provide stable data interaction.
Fluid Mechanics of Aquatic Locomotion at Large Reynolds Numbers
Govardhan, RN; Arakeri, JH
2011-01-01
There exists a huge range of fish species, besides other aquatic organisms such as squids and salps, that locomote in water at large Reynolds numbers, a regime of flow where inertial forces dominate viscous forces. In the present review, we discuss the fluid mechanics governing the locomotion of such organisms. Most fishes propel themselves by periodic undulatory motions of the body and tail, and the typical classification of their swimming modes is based on the fraction of their body...
Rotating thermal convection at very large Rayleigh numbers
Weiss, Stephan; van Gils, Dennis; Ahlers, Guenter; Bodenschatz, Eberhard
2016-11-01
The large-scale thermal convection systems in geophysics and astrophysics are usually influenced by Coriolis forces caused by the rotation of their celestial bodies. To better understand the influence of rotation on the convective flow field and the heat transport under these conditions, we study Rayleigh-Bénard convection using pressurized sulfur hexafluoride (SF6) at up to 19 bars in a cylinder of diameter D = 1.12 m and height L = 2.24 m. The gas is heated from below and cooled from above, and the convection cell sits on a rotating table inside a large pressure vessel (the "Uboot of Göttingen"). With this setup Rayleigh numbers of up to Ra = 10^15 can be reached, while Ekman numbers as low as Ek = 10^-8 are possible. The Prandtl number in these experiments is kept constant at Pr = 0.8. We report on heat flux measurements (expressed by the Nusselt number Nu) as well as measurements from more than 150 temperature probes inside the flow. We thank the Deutsche Forschungsgemeinschaft (DFG) for financial support through SFB963: "Astrophysical Flow Instabilities and Turbulence". The work of GA was supported in part by the US National Science Foundation through Grant DMR11-58514.
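For reference, the three control parameters quoted above follow from their standard definitions. A minimal sketch (the property values below are illustrative placeholders, not those of the pressurized SF6 experiment):

```python
def rayleigh(g, alpha, dT, L, nu, kappa):
    """Ra = g * alpha * dT * L^3 / (nu * kappa)."""
    return g * alpha * dT * L**3 / (nu * kappa)

def ekman(nu, omega, L):
    """Ek = nu / (2 * Omega * L^2)."""
    return nu / (2 * omega * L**2)

def prandtl(nu, kappa):
    """Pr = nu / kappa."""
    return nu / kappa

# Illustrative values: kinematic viscosity and thermal diffusivity in m^2/s.
nu, kappa = 8e-7, 1e-6
print(prandtl(nu, kappa))    # ~0.8, the value quoted above
print(ekman(nu, 0.1, 2.24))  # Ek shrinks with rotation rate and cell size
```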
Lepton number violation in theories with a large number of standard model copies
International Nuclear Information System (INIS)
Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich
2011-01-01
We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model, since, due to the low quantum gravity scale, black holes may induce TeV-scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_(B-L). Then, due to the existence of a specific compensation mechanism between contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed, with rates far beyond experimental reach.
Improving CASINO performance for models with large number of electrons
International Nuclear Information System (INIS)
Anton, L.; Alfe, D.; Hood, R.Q.; Tanqueray, D.
2009-01-01
Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers, which are straightforward to use on parallel computers. Nevertheless, some computations have reached the limit of available memory for models with more than 1000 electrons because of the need to store a large amount of data related to the electronic orbitals. Moreover, for systems with a large number of electrons, it is interesting to study whether the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) distributing the orbital data with MPI or Unix inter-process communication tools; (2) second-level parallelism for the configuration computation.
[Dual process in large number estimation under uncertainty].
Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento
2016-08-01
According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants' verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such a deliberative System 2 process on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.
The large number hypothesis and Einstein's theory of gravitation
International Nuclear Information System (INIS)
Yun-Kau Lau
1985-01-01
In an attempt to reconcile the large number hypothesis (LNH) with Einstein's theory of gravitation, a tentative generalization of Einstein's field equations with time-dependent cosmological and gravitational constants is proposed. A cosmological model consistent with the LNH is deduced. The coupling formula of the cosmological constant with matter is found, and as a consequence, the time-dependent formulae of the cosmological constant and the mean matter density of the Universe at the present epoch are then found. Einstein's theory of gravitation, whether with a zero or nonzero cosmological constant, becomes a limiting case of the new generalized field equations after the early epoch
Combining large number of weak biomarkers based on AUC.
Yan, Li; Tian, Lili; Liu, Song
2015-12-20
Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.
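A minimal sketch of the objective being maximized: the empirical AUC of a fixed linear combination, in its Mann-Whitney form (the data and weights below are made up; the paper's pairwise method for choosing the weights is not reproduced):

```python
def auc(case_scores, control_scores):
    """Empirical AUC: fraction of (case, control) pairs ranked correctly,
    counting ties as one half."""
    pairs = [(c, d) for c in case_scores for d in control_scores]
    wins = sum(1.0 if c > d else 0.5 if c == d else 0.0 for c, d in pairs)
    return wins / len(pairs)

def combine(rows, weights):
    """Score each subject by a linear combination of its markers."""
    return [sum(w * x for w, x in zip(weights, row)) for row in rows]

# Two weak markers measured on 3 cases and 3 controls (hypothetical data).
cases    = [(1.0, 0.2), (0.8, 0.9), (1.2, 0.4)]
controls = [(0.9, 0.5), (0.7, 0.3), (1.1, 0.4)]
w = (1.0, 1.0)
print(auc(combine(cases, w), combine(controls, w)))
```

Searching over `w` for the combination with the highest such AUC is the optimization problem the paper addresses.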
Quasi-isodynamic configuration with large number of periods
International Nuclear Information System (INIS)
Shafranov, V.D.; Isaev, M.Yu.; Mikhailov, M.I.; Subbotin, A.A.; Cooper, W.A.; Kalyuzhnyj, V.N.; Kasilov, S.V.; Nemov, V.V.; Kernbichler, W.; Nuehrenberg, C.; Nuehrenberg, J.; Zille, R.
2005-01-01
It has been previously reported that quasi-isodynamic (qi) stellarators with poloidal direction of the contours of B on a magnetic surface can exhibit very good fast-particle collisionless confinement. In addition, approaching the quasi-isodynamicity condition leads to diminished neoclassical transport and small bootstrap current. Calculations of local-mode stability show a tendency toward an increasing beta limit with increasing number of periods. Consideration of quasi-helically symmetric systems has demonstrated that with increasing aspect ratio (and number of periods) the optimized configuration approaches its straight symmetric counterpart, for which the optimal parameters and highest beta values were found by optimization of the boundary magnetic-surface cross-section. The qi systems considered here, with zero net toroidal current, do not have a symmetric analogue in the limit of large aspect ratio and finite rotational transform. Thus, it is not clear whether some invariant structure of the configuration period exists in the limit of negligible toroidal effect, and what the best possible parameters for it are. In the present paper the results of an optimization of a configuration with N = 12 periods are presented. Properties such as fast-particle confinement, effective ripple, the structural factor of the bootstrap current and MHD stability are considered. It is shown that the MHD stability limit here is higher than in the configurations with smaller numbers of periods considered earlier. Nevertheless, the toroidal effect in this configuration is still significant, so that a simple increase of the number of periods with proportional growth of the aspect ratio does not conserve the favourable neoclassical transport and ideal local-mode stability properties. (author)
Automatic trajectory measurement of large numbers of crowded objects
Li, Hui; Liu, Ye; Chen, Yan Qiu
2013-06-01
Complex motion patterns of natural systems, such as fish schools, bird flocks, and cell groups, have attracted great attention from scientists for years. Trajectory measurement of individuals is vital for quantitative and high-throughput study of their collective behaviors. However, such data are rare, mainly due to the challenges of detecting and tracking large numbers of objects with similar visual features and frequent occlusions. We present an automatic and effective framework to measure trajectories of large numbers of crowded oval-shaped objects, such as fish and cells. We first use a novel dual ellipse locator to detect the coarse position of each individual and then propose a variance minimization active contour method to obtain optimal segmentation results. For tracking, the cost matrix for assignment between consecutive frames is learned via a random forest classifier over many spatial, texture, and shape features. The optimal trajectories are found for the whole image sequence by solving two linear assignment problems. We evaluate the proposed method on many challenging data sets.
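The frame-to-frame linking step described above reduces to a linear assignment problem. As a minimal sketch (not the authors' implementation; the cost matrix below is made up), a brute-force solver over permutations makes the idea concrete:

```python
from itertools import permutations

def best_assignment(cost):
    """Exhaustively solve a small linear assignment problem.

    cost[i][j] is the cost of linking detection i in frame t to
    detection j in frame t+1 (e.g. a learned dissimilarity score).
    Returns (minimal total cost, tuple mapping i -> j).
    """
    n = len(cost)
    best = None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if best is None or total < best[0]:
            best = (total, perm)
    return best

# Hypothetical 3x3 cost matrix between two consecutive frames.
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
print(best_assignment(cost))  # -> (5, (1, 0, 2))
```

This O(n!) enumeration is only viable for toy sizes; practical trackers solve the same problem in polynomial time (e.g. with the Hungarian algorithm), but the objective being minimized is identical.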
Gentile statistics with a large maximum occupation number
International Nuclear Information System (INIS)
Dai Wusheng; Xie Mi
2004-01-01
In Gentile statistics the maximum occupation number can take on unrestricted integers, 1 < n < ∞. It is shown that the Bose-Einstein case is not recovered from Gentile statistics as n goes to N. Attention is also concentrated on the contribution of the ground state, which was ignored in the related literature. The thermodynamic behavior of a ν-dimensional Gentile ideal gas of particles with dispersion E = p^s/2m, where ν and s are arbitrary, is analyzed in detail. Moreover, we provide an alternative derivation of the partition function for Gentile statistics
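For intermediate (Gentile) statistics, the mean occupation number of a single level has a standard closed form that interpolates between the Fermi-Dirac (n = 1) and Bose-Einstein (n → ∞) limits. A quick numerical check of that interpolation (formula as commonly quoted, not necessarily the paper's notation):

```python
import math

def gentile_occupation(x, n):
    """Mean occupation number in Gentile statistics with maximum
    occupation n, where x = (epsilon - mu) / (k_B * T) > 0.

    Standard form: 1/(e^x - 1) - (n+1)/(e^((n+1)x) - 1).
    """
    return 1.0 / (math.exp(x) - 1.0) - (n + 1) / (math.exp((n + 1) * x) - 1.0)

x = 1.0
fermi_dirac = 1.0 / (math.exp(x) + 1.0)
bose_einstein = 1.0 / (math.exp(x) - 1.0)
print(gentile_occupation(x, 1), fermi_dirac)      # n = 1 recovers FD
print(gentile_occupation(x, 500), bose_einstein)  # large n approaches BE
```

The n = 1 agreement is exact algebraically: subtracting the two terms collapses to 1/(e^x + 1).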
The large numbers hypothesis and the Einstein theory of gravitation
International Nuclear Information System (INIS)
Dirac, P.A.M.
1979-01-01
A study of the relations between large dimensionless numbers leads to the belief that G, expressed in atomic units, varies with the epoch while the Einstein theory requires G to be constant. These two requirements can be reconciled by supposing that the Einstein theory applies with a metric that differs from the atomic metric. The theory can be developed with conservation of mass by supposing that the continual increase in the mass of the observable universe arises from a continual slowing down of the velocity of recession of the galaxies. This leads to a model of the Universe that was first proposed by Einstein and de Sitter (the E.S. model). The observations of the microwave radiation fit in with this model. The static Schwarzschild metric has to be modified to fit in with the E.S. model for large r. The modification is worked out, and also the motion of planets with the new metric. It is found that there is a difference between ephemeris time and atomic time, and also that there should be an inward spiralling of the planets, referred to atomic units, superposed on the motion given by ordinary gravitational theory. These are effects that can be checked by observation, but there is no conclusive evidence up to the present. (author)
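The "large dimensionless numbers" Dirac compared can be reproduced from standard constants. A sketch with approximate modern values (Dirac worked with a smaller age-of-Universe estimate, so his numbers differ somewhat):

```python
import math

# Approximate physical constants (SI units).
e      = 1.602e-19   # elementary charge, C
eps0   = 8.854e-12   # vacuum permittivity, F/m
G      = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
m_e    = 9.109e-31   # electron mass, kg
m_p    = 1.673e-27   # proton mass, kg
c      = 2.998e8     # speed of light, m/s
t_univ = 4.35e17     # ~13.8 Gyr in seconds

# N1: electric-to-gravitational force ratio for an electron-proton pair.
N1 = (e**2 / (4 * math.pi * eps0)) / (G * m_e * m_p)

# N2: age of the Universe measured in an "atomic" time unit
# (light-crossing time of the classical electron radius).
r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)
N2 = t_univ / (r_e / c)

print(f"N1 ~ {N1:.2e}, N2 ~ {N2:.2e}")  # both fall in the 10^39-10^41 range
```

The near-coincidence of these enormous, dimensionless ratios is exactly what the Large Numbers Hypothesis elevates to a principle, with a time-varying G as one consequence.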
A Characterization of Hypergraphs with Large Domination Number
Directory of Open Access Journals (Sweden)
Henning Michael A.
2016-05-01
Full Text Available Let H = (V, E) be a hypergraph with vertex set V and edge set E. A dominating set in H is a subset of vertices D ⊆ V such that for every vertex v ∈ V \ D there exists an edge e ∈ E for which v ∈ e and e ∩ D ≠ ∅. The domination number γ(H) is the minimum cardinality of a dominating set in H. It is known [Cs. Bujtás, M.A. Henning and Zs. Tuza, Transversals and domination in uniform hypergraphs, European J. Combin. 33 (2012) 62-71] that for k ≥ 5, if H is a hypergraph of order n and size m with all edges of size at least k and with no isolated vertex, then γ(H) ≤ (n + ⌊(k − 3)/2⌋m)/⌊3(k − 1)/2⌋. In this paper, we apply a recent result of the authors on hypergraphs with large transversal number [M.A. Henning and C. Löwenstein, A characterization of hypergraphs that achieve equality in the Chvátal-McDiarmid Theorem, Discrete Math. 323 (2014) 69-75] to characterize the hypergraphs achieving equality in this bound.
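On toy instances, the definitions above can be checked directly by brute force (illustration of the definition only; the paper's bound concerns hypergraphs with all edges of size at least k):

```python
from itertools import combinations

def domination_number(vertices, edges):
    """Brute-force the domination number of a hypergraph H = (V, E).

    D is dominating if every vertex v not in D lies in some edge e
    with e ∩ D nonempty. Exponential time; small instances only.
    """
    vs = list(vertices)
    for k in range(1, len(vs) + 1):
        for cand in combinations(vs, k):
            d = set(cand)
            if all(v in d or any(v in e and e & d for e in edges) for v in vs):
                return k
    return 0

# {2} dominates: every other vertex shares an edge with vertex 2.
print(domination_number({1, 2, 3, 4}, [{1, 2, 3}, {2, 3, 4}]))  # -> 1
# Disjoint edges force one dominator per edge.
print(domination_number({1, 2, 3, 4}, [{1, 2}, {3, 4}]))        # -> 2
```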
Chaotic scattering: the supersymmetry method for large number of channels
International Nuclear Information System (INIS)
Lehmann, N.; Saher, D.; Sokolov, V.V.; Sommers, H.J.
1995-01-01
We investigate a model of chaotic resonance scattering based on the random matrix approach. The Hermitian part of the effective Hamiltonian of resonance states is taken from the GOE, whereas the amplitudes of coupling to decay channels are treated as either random or fixed. A new version of the supersymmetry method is worked out to determine analytically the distribution of poles of the S-matrix in the complex energy plane as well as the mean value and two-point correlation function of its elements when the number of channels scales with the number of resonance states. Analytical formulae are compared with numerical simulations. All results obtained coincide in both models provided that the ratio m of the numbers of channels and resonances is small enough, and remain qualitatively similar for larger values of m. The relation between the pole distribution and the fluctuations in scattering is discussed. It is shown in particular that the clouds of poles of the S-matrix in the complex energy plane are separated from the real axis by a finite gap Γ_g which determines the correlation length in the scattering fluctuations and leads to the exponential asymptotics of the decay law of a complicated intermediate state. ((orig.))
Evidence for Knowledge of the Syntax of Large Numbers in Preschoolers
Barrouillet, Pierre; Thevenot, Catherine; Fayol, Michel
2010-01-01
The aim of this study was to provide evidence for knowledge of the syntax governing the verbal form of large numbers in preschoolers long before they are able to count up to these numbers. We reasoned that if such knowledge exists, it should facilitate the maintenance in short-term memory of lists of lexical primitives that constitute a number…
Particle creation and Dirac's large number hypothesis; and Reply
International Nuclear Information System (INIS)
Canuto, V.; Adams, P.J.; Hsieh, S.H.; Tsiang, E.; Steigman, G.
1976-01-01
The claim made by Steigman (Nature; 261:479 (1976)), that the creation of matter as postulated by Dirac (Proc. R. Soc.; A338:439 (1974)) is unnecessary, is here shown to be incorrect. It is stated that Steigman's claim that Dirac's Large Number Hypothesis (LNH) does not require particle creation is wrong because he has assumed that which he was seeking to prove, that is that rho does not contain matter creation. Steigman's claim that Dirac's LNH leads to nonsensical results in the very early Universe is superficially correct, but this only supports Dirac's contention that the LNH may not be valid in the very early Universe. In a reply Steigman points out that in Dirac's original cosmology R varies approximately as t^(1/3), and using this model the results and conclusions of the present author's paper do apply, but using a variation chosen by Canuto et al (T approximately t) Dirac's LNH cannot apply. Additionally it is observed that a cosmological theory which only predicts the present epoch is of questionable value. (U.K.)
A modified large number theory with constant G
Recami, Erasmo
1983-03-01
The inspiring “numerology” uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the “gravitational world” (cosmos) with the “strong world” (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the “Large Number Theory,” cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R̄/r̄ of the cosmos typical length R̄ to the hadron typical length r̄ is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the “cyclical big-bang” hypothesis, then R̄ and r̄ can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavšič.
The large lungs of elite swimmers: an increased alveolar number?
Armour, J; Donnelly, P M; Bye, P T
1993-02-01
In order to obtain further insight into the mechanisms relating to the large lung volumes of swimmers, tests of mechanical lung function, including lung distensibility (K) and elastic recoil, pulmonary diffusion capacity, and respiratory mouth pressures, together with anthropometric data (height, weight, body surface area, chest width, depth and surface area), were compared in eight elite male swimmers, eight elite male long distance athletes and eight control subjects. The differences in training profiles of each group were also examined. There was no significant difference in height between the subjects, but the swimmers were younger than both the runners and controls, and both the swimmers and controls were heavier than the runners. Of all the training variables, only the mean total distance in kilometers covered per week was significantly greater in the runners. Whether based on: (a) adolescent predicted values; or (b) adult male predicted values, swimmers had significantly increased total lung capacity ((a) 145 +/- 22%, (mean +/- SD) (b) 128 +/- 15%); vital capacity ((a) 146 +/- 24%, (b) 124 +/- 15%); and inspiratory capacity ((a) 155 +/- 33%, (b) 138 +/- 29%), but this was not found in the other two groups. Swimmers also had the largest chest surface area and chest width. Forced expiratory volume in one second (FEV1) was largest in the swimmers ((b) 122 +/- 17%) and FEV1 as a percentage of forced vital capacity (FEV1/FVC)% was similar for the three groups. Pulmonary diffusing capacity (DLCO) was also highest in the swimmers (117 +/- 18%). All of the other indices of lung function, including pulmonary distensibility (K), elastic recoil and diffusion coefficient (KCO), were similar. These findings suggest that swimmers may have achieved greater lung volumes than either runners or control subjects, not because of greater inspiratory muscle strength, or differences in height, fat free mass, alveolar distensibility, age at start of training or sternal length or
A NICE approach to managing large numbers of desktop PC's
International Nuclear Information System (INIS)
Foster, David
1996-01-01
The problems of managing desktop systems are far from resolved as we deploy increasing numbers of PCs, Macintoshes, and UN*X workstations. This paper will concentrate on the solution adopted at CERN for the management of the rapidly increasing number of desktop PCs in use in all parts of the laboratory. (author)
The Ramsey numbers of large cycles versus small wheels
Surahmat,; Baskoro, E.T.; Broersma, H.J.
2004-01-01
For two given graphs G and H, the Ramsey number R(G;H) is the smallest positive integer N such that for every graph F of order N the following holds: either F contains G as a subgraph or the complement of F contains H as a subgraph. In this paper, we determine the Ramsey number R(Cn;Wm) for m = 4
Turbulent flows at very large Reynolds numbers: new lessons learned
International Nuclear Information System (INIS)
Barenblatt, G I; Prostokishin, V M; Chorin, A J
2014-01-01
The universal (Reynolds-number-independent) von Kármán–Prandtl logarithmic law for the velocity distribution in the basic intermediate region of a turbulent shear flow is generally considered to be one of the fundamental laws of engineering science and is taught universally in fluid mechanics and hydraulics courses. We show here that this law is based on an assumption that cannot be considered to be correct and which does not correspond to experiment. Nor is Landau's derivation of this law quite correct. In this paper, an alternative scaling law explicitly incorporating the influence of the Reynolds number is discussed, as is the corresponding drag law. The study uses the concept of intermediate asymptotics and that of incomplete similarity in the similarity parameter. Yakov Borisovich Zeldovich played an outstanding role in the development of these ideas. This work is a tribute to his glowing memory. (100th anniversary of the birth of Ya B Zeldovich)
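The two competing descriptions can be written down and compared numerically. A hedged sketch, using the commonly quoted log-law constants κ ≈ 0.41, B ≈ 5.0 and the Barenblatt-style power law u+ = (ln Re/√3 + 5/2)(y+)^(3/(2 ln Re)); these constants are as usually cited in the literature and may differ from the paper's exact formulation:

```python
import math

KAPPA, B = 0.41, 5.0  # commonly quoted log-law constants (assumed values)

def u_log(y_plus):
    """Universal von Kármán-Prandtl log law (no Reynolds-number dependence)."""
    return math.log(y_plus) / KAPPA + B

def u_power(y_plus, re):
    """Barenblatt-style power law; the exponent shrinks as Re grows."""
    alpha = 3.0 / (2.0 * math.log(re))
    coeff = math.log(re) / math.sqrt(3.0) + 2.5
    return coeff * y_plus ** alpha

# At y+ = 1000 the two laws give similar magnitudes, but only the
# power law changes with the Reynolds number.
for re in (1e5, 1e6, 1e7):
    print(re, u_log(1000.0), u_power(1000.0, re))
```

The numerical closeness of the two profiles at moderate y+ is precisely why discriminating between them requires careful very-high-Reynolds-number data.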
The large Reynolds number - Asymptotic theory of turbulent boundary layers.
Mellor, G. L.
1972-01-01
A self-consistent, asymptotic expansion of the one-point, mean turbulent equations of motion is obtained. Results such as the velocity defect law and the law of the wall evolve in a relatively rigorous manner, and a systematic ordering of the mean velocity boundary layer equations and their interaction with the main stream flow are obtained. The analysis is extended to the turbulent energy equation and to a treatment of the small scale equilibrium range of Kolmogoroff; in velocity correlation space the two-thirds power law is obtained. Thus, the two well-known 'laws' of turbulent flow are imbedded in an analysis which provides a great deal of other information.
Gandiwa, E.
2013-01-01
Wildlife conservation in terrestrial ecosystems requires an understanding of processes influencing population sizes. Top-down and bottom-up processes are important in large herbivore population dynamics, with strength of these processes varying spatially and temporally. However, up until
Files synchronization from a large number of insertions and deletions
Ellappan, Vijayan; Kumari, Savera
2017-11-01
Synchronization between different versions of files is becoming a major issue that most applications face. To make applications more efficient, an economical algorithm is developed from the previously used “File Loading Algorithm”. I extend this algorithm in three ways: first, it deals with non-binary files; second, a backup is generated for uploaded files; and lastly, files are synchronized across insertions and deletions. A user can reconstruct a file from the former file while minimizing error, and interactive communication is provided by eliminating the frequency without any disturbance. The drawback of the previous system is overcome by using synchronization, in which multiple copies of each file/record are created, stored in a backup database, and efficiently restored in case of any unwanted deletion or loss of data. That is, we introduce a protocol that user B may use to reconstruct file X from file Y with suitably low probability of error. Synchronization algorithms find numerous areas of use, including data storage, file sharing, source code control systems, and cloud applications. For example, cloud storage services such as Dropbox synchronize between local copies and cloud backups each time users make changes to local versions. Similarly, synchronization tools are necessary in mobile devices. Specialized synchronization algorithms are used for video and sound editing. Synchronization tools are also capable of performing data duplication.
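The reconstruct-one-file-from-another idea can be sketched with Python's standard difflib: the sender encodes the new version as copy-ranges from the old version plus literal insertions, and the receiver rebuilds the new version from the old file and that patch. This is a minimal illustration of the principle, not the paper's protocol:

```python
import difflib

def make_patch(old, new):
    """Encode `new` as copy-ranges from `old` plus literal data to insert."""
    ops = difflib.SequenceMatcher(a=old, b=new).get_opcodes()
    patch = []
    for tag, i1, i2, j1, j2 in ops:
        if tag == "equal":
            patch.append(("copy", i1, i2))      # reuse bytes the peer has
        elif tag in ("replace", "insert"):
            patch.append(("data", new[j1:j2]))  # ship only the new bytes
        # "delete": nothing needs to be sent
    return patch

def apply_patch(old, patch):
    """Rebuild the new version from the old version and the patch."""
    parts = []
    for op in patch:
        parts.append(old[op[1]:op[2]] if op[0] == "copy" else op[1])
    return "".join(parts)

old = "the quick brown fox jumps"
new = "the slow brown foxes jump high"
patch = make_patch(old, new)
assert apply_patch(old, patch) == new  # receiver recovers the new file
```

Real synchronizers (rsync-style) work on content-defined or fixed-size blocks with rolling checksums rather than character opcodes, but the copy-plus-literal patch structure is the same.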
Klewicki, J C; Chini, G P; Gibson, J F
2017-03-13
Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier-Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
Providing cell phone numbers and email addresses to Patients: the physician's perspective
2011-01-01
Background The provision of cell phone numbers and email addresses enhances the accessibility of medical consultations, but can add to the burden of physicians' routine clinical practice and affect their free time. The objective was to assess the attitudes of physicians to providing their telephone number or email address to patients. Methods Primary care physicians in the southern region of Israel completed a structured questionnaire that related to the study objective. Results The study population included 120 primary care physicians with a mean age of 41.2 ± 8.5, 88 of them women (73.3%). Physicians preferred to provide their cell phone number rather than their email address (P = 0.0007). They preferred to answer their cell phones only during the daytime and at predetermined times, but would answer email most hours of the day, including weekends and holidays (P = 0.001). More physicians (79.7%) would have preferred allotted time for email communication than allotted time for cell phone communication (50%). However, they felt that email communication was more likely to lead to miscommunication than telephone calls (P = 0.0001). There were no differences between male and female physicians on the provision of cell phone numbers or email addresses to patients. Older physicians were more prepared to provide cell phone numbers than younger ones (P = 0.039). Conclusions The attitude of participating physicians was to provide their cell phone number or email address to some of their patients, but most of them preferred to give out their cell phone number. PMID:21426591
Providing cell phone numbers and email addresses to Patients: the physician's perspective
Directory of Open Access Journals (Sweden)
Freud Tamar
2011-03-01
Full Text Available Abstract Background The provision of cell phone numbers and email addresses enhances the accessibility of medical consultations, but can add to the burden of physicians' routine clinical practice and affect their free time. The objective was to assess the attitudes of physicians to providing their telephone number or email address to patients. Methods Primary care physicians in the southern region of Israel completed a structured questionnaire that related to the study objective. Results The study population included 120 primary care physicians with a mean age of 41.2 ± 8.5, 88 of them women (73.3%). Physicians preferred to provide their cell phone number rather than their email address (P = 0.0007). They preferred to answer their cell phones only during the daytime and at predetermined times, but would answer email most hours of the day, including weekends and holidays (P = 0.001). More physicians (79.7%) would have preferred allotted time for email communication than allotted time for cell phone communication (50%). However, they felt that email communication was more likely to lead to miscommunication than telephone calls (P = 0.0001). There were no differences between male and female physicians on the provision of cell phone numbers or email addresses to patients. Older physicians were more prepared to provide cell phone numbers than younger ones (P = 0.039). Conclusions The attitude of participating physicians was to provide their cell phone number or email address to some of their patients, but most of them preferred to give out their cell phone number.
Ikkai, Akiko; McCollough, Andrew W; Vogel, Edward K
2010-04-01
Visual working memory (VWM) helps to temporarily represent information from the visual environment and is severely limited in capacity. Recent work has linked various forms of neural activity to the ongoing representations in VWM. One piece of evidence comes from human event-related potential studies, which find a sustained contralateral negativity during the retention period of VWM tasks. This contralateral delay activity (CDA) has previously been shown to increase in amplitude as the number of memory items increases, up to the individual's working memory capacity limit. However, significant alternative hypotheses remain regarding the true nature of this activity. Here we test whether the CDA is modulated by the perceptual requirements of the memory items as well as whether it is determined by the number of locations that are being attended within the display. Our results provide evidence against these two alternative accounts and instead strongly support the interpretation that this activity reflects the current number of objects that are being represented in VWM.
Directory of Open Access Journals (Sweden)
Marion Hoehn
Full Text Available The effective population size (Ne) is proportional to the loss of genetic diversity and the rate of inbreeding, and its accurate estimation is crucial for the monitoring of small populations. Here, we integrate temporal studies of the gecko Oedura reticulata to compare genetic and demographic estimators of Ne. Because geckos have overlapping generations, our goal was to demographically estimate NbI, the inbreeding effective number of breeders, and to calculate the NbI/Na ratio (Na = number of adults) for four populations. Demographically estimated NbI ranged from 1 to 65 individuals. The mean reduction in the effective number of breeders relative to census size (NbI/Na) was 0.1 to 1.1. We identified the variance in reproductive success as the most important variable contributing to the reduction of this ratio. We used four methods to estimate the genetic-based inbreeding effective number of breeders, NbI(gen), and the variance effective population size, NeV(gen), from the genotype data. Two of these methods - a temporal moment-based (MBT) and a likelihood-based approach (TM3) - require at least two samples in time, while the other two were single-sample estimators - the linkage disequilibrium method with bias correction, LDNe, and the program ONeSAMP. The genetic-based estimates were fairly similar across methods and also similar to the demographic estimates, excluding those estimates in which upper confidence interval boundaries were uninformative. For example, LDNe and ONeSAMP estimates ranged from 14-55 and 24-48 individuals, respectively. However, temporal methods suffered from a large variation in confidence intervals and concerns about the prior information. We conclude that the single-sample estimators are an acceptable short-cut to estimate NbI for species such as geckos and will be of great importance for the monitoring of species in fragmented landscapes.
Automated flow cytometric analysis across large numbers of samples and cell types.
Chen, Xiaoyi; Hasan, Milena; Libri, Valentina; Urrutia, Alejandra; Beitz, Benoît; Rouilly, Vincent; Duffy, Darragh; Patin, Étienne; Chalmond, Bernard; Rogge, Lars; Quintana-Murci, Lluis; Albert, Matthew L; Schwikowski, Benno
2015-04-01
Multi-parametric flow cytometry is a key technology for characterization of immune cell phenotypes. However, robust high-dimensional post-analytic strategies for automated data analysis in large numbers of donors are still lacking. Here, we report a computational pipeline, called FlowGM, which minimizes operator input, is insensitive to compensation settings, and can be adapted to different analytic panels. A Gaussian Mixture Model (GMM)-based approach was utilized for initial clustering, with the number of clusters determined using Bayesian Information Criterion. Meta-clustering in a reference donor permitted automated identification of 24 cell types across four panels. Cluster labels were integrated into FCS files, thus permitting comparisons to manual gating. Cell numbers and coefficient of variation (CV) were similar between FlowGM and conventional gating for lymphocyte populations, but notably FlowGM provided improved discrimination of "hard-to-gate" monocyte and dendritic cell (DC) subsets. FlowGM thus provides rapid high-dimensional analysis of cell phenotypes and is amenable to cohort studies. Copyright © 2015. Published by Elsevier Inc.
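The GMM clustering at the core of such a pipeline can be illustrated with a tiny one-dimensional EM implementation. This is a from-scratch sketch for intuition, not FlowGM itself: real cytometry panels are multi-dimensional, and the number of components is chosen by the Bayesian Information Criterion rather than fixed at two.

```python
import math

def gauss(x, m, v):
    """Gaussian density with mean m and variance v."""
    return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

def em_two_components(xs, m1, m2, iters=50):
    """EM for a 1-D two-component Gaussian mixture (toy version)."""
    w1, v1, v2 = 0.5, 1.0, 1.0
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point.
        r = []
        for x in xs:
            p1 = w1 * gauss(x, m1, v1)
            p2 = (1 - w1) * gauss(x, m2, v2)
            r.append(p1 / (p1 + p2))
        # M-step: re-estimate weight, means, and variances.
        n1 = sum(r)
        n2 = len(xs) - n1
        m1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        m2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / n2
        v1 = max(sum(ri * (x - m1) ** 2 for ri, x in zip(r, xs)) / n1, 1e-9)
        v2 = max(sum((1 - ri) * (x - m2) ** 2 for ri, x in zip(r, xs)) / n2, 1e-9)
        w1 = n1 / len(xs)
    return m1, m2, w1

# Two well-separated "cell populations" along one marker dimension.
xs = [-0.2, -0.1, 0.0, 0.1, 0.2, 9.8, 9.9, 10.0, 10.1, 10.2]
m1, m2, w1 = em_two_components(xs, m1=2.0, m2=8.0)
print(m1, m2, w1)  # means near 0 and 10, mixing weight near 0.5
```

In a BIC-driven version, this fit would be repeated for several component counts and the count minimizing k·ln(n) − 2·ln(likelihood) retained.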
The Application Law of Large Numbers That Predicts The Amount of Actual Loss in Insurance of Life
Tinungki, Georgina Maria
2018-03-01
The law of large numbers is a statistical concept that uses the average number of events or risks in a sample or population to predict something. The larger the population over which the calculation is made, the more accurate the prediction. In the field of insurance, the law of large numbers is used to predict the risk of loss or claims of participants so that the premium can be calculated appropriately. For example, if on average one of every 100 insurance participants files an accident claim, then the premiums of 100 participants should be able to provide the sum assured for at least one accident claim. The more insurance participants included in the calculation, the more precise the prediction of claims and the calculation of the premium. Life insurance, as a tool for spreading risk, can only work if a life insurance company is able to bear the same risk in large numbers. Here applies what is called the law of large numbers: if the amount of exposure to losses increases, then the predicted loss will be closer to the actual loss. The use of the law of large numbers allows losses to be predicted better.
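The claim that predicted loss approaches actual loss as the pool grows can be checked by simulation. A sketch with made-up numbers (claim probability 1%, i.e. one claim per 100 insured on average):

```python
import random

def average_claim_error(pool_size, p_claim=0.01, trials=200, seed=1):
    """Average |observed claim frequency - p_claim| over many simulated pools."""
    rng = random.Random(seed)
    total_err = 0.0
    for _ in range(trials):
        claims = sum(rng.random() < p_claim for _ in range(pool_size))
        total_err += abs(claims / pool_size - p_claim)
    return total_err / trials

small = average_claim_error(100)     # small insurer: 100 policies
large = average_claim_error(10_000)  # large insurer: 10,000 policies
print(small, large)  # the large pool's claim frequency hugs 1% far more tightly
```

The deviation shrinks roughly as 1/√(pool size), which is why a large pool lets an insurer price premiums close to the expected loss with little safety margin.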
Law of Large Numbers: the Theory, Applications and Technology-based Education.
Dinov, Ivo D; Christou, Nicolas; Gould, Robert
2009-03-01
Modern approaches for technology-based blended education utilize a variety of recently developed novel pedagogical, computational and network resources. Such attempts employ technology to deliver integrated, dynamically-linked, interactive-content and heterogeneous learning environments, which may improve student comprehension and information retention. In this paper, we describe one such innovative effort of using technological tools to expose students in probability and statistics courses to the theory, practice and usability of the Law of Large Numbers (LLN). We base our approach on integrating pedagogical instruments with the computational libraries developed by the Statistics Online Computational Resource (www.SOCR.ucla.edu). To achieve this merger we designed a new interactive Java applet and a corresponding demonstration activity that illustrate the concept and the applications of the LLN. The LLN applet and activity have common goals - to provide graphical representation of the LLN principle, build lasting student intuition and present the common misconceptions about the law of large numbers. Both the SOCR LLN applet and activity are freely available online to the community to test, validate and extend (Applet: http://socr.ucla.edu/htmls/exp/Coin_Toss_LLN_Experiment.html, and Activity: http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_LLN).
Characterization of General TCP Traffic under a Large Number of Flows Regime
National Research Council Canada - National Science Library
Tinnakornsrisuphap, Peerapol; La, Richard J; Makowski, Armand M
2002-01-01
.... Accurate traffic modeling of a large number of short-lived TCP flows is extremely difficult due to the interaction between session, transport, and network layers, and the explosion of the size...
Hooshyar, M.; Wang, D.
2016-12-01
The empirical proportionality relationship, which indicates that the ratio of cumulative surface runoff and infiltration to their corresponding potentials are equal, is the basis of the extensively used Soil Conservation Service Curve Number (SCS-CN) method. The objective of this paper is to provide the physical basis of the SCS-CN method and its proportionality hypothesis from the infiltration excess runoff generation perspective. To achieve this purpose, an analytical solution of Richards' equation is derived for ponded infiltration in shallow water table environment under the following boundary conditions: 1) the soil is saturated at the land surface; and 2) there is a no-flux boundary which moves downward. The solution is established based on the assumptions of negligible gravitational effect, constant soil water diffusivity, and hydrostatic soil moisture profile between the no-flux boundary and water table. Based on the derived analytical solution, the proportionality hypothesis is a reasonable approximation for rainfall partitioning at the early stage of ponded infiltration in areas with a shallow water table for coarse textured soils.
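For reference, the SCS-CN relations discussed here (standard textbook form, in mm, with the usual initial abstraction Ia = 0.2S) can be written as a short function, and the proportionality F/S = Q/(P − Ia) checked directly:

```python
def scs_cn_runoff(p_mm, cn):
    """Direct runoff Q (mm) from rainfall P (mm) via the SCS-CN method.

    S  = 25400/CN - 254        (potential maximum retention, mm)
    Ia = 0.2 * S               (standard initial-abstraction assumption)
    Q  = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else 0
    """
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

p, cn = 100.0, 75
q = scs_cn_runoff(p, cn)
s = 25400.0 / cn - 254.0
f = (p - 0.2 * s) - q  # cumulative infiltration after initial abstraction
# Proportionality hypothesis: Q/(P - Ia) equals F/S by construction.
print(q, q / (p - 0.2 * s), f / s)
```

The equality of the last two printed ratios is algebraic, since F = (P − Ia)S/(P − Ia + S); the paper's contribution is giving that empirical relation a physical basis from Richards' equation.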
International Nuclear Information System (INIS)
Mahlstedt, J.
1977-01-01
The article deals with practical aspects of establishing a TSH-RIA for patients, with particular regard to predetermined quality criteria. Methodological suggestions are made for medium to large numbers of samples with the target of reducing monotonous precision working steps by means of simple aids. The quality criteria required are well met, while the test procedure is well adapted to the rhythm of work and may be carried out without loss of precision even with large numbers of samples. (orig.) [de
Similarities between 2D and 3D convection for large Prandtl number
Indian Academy of Sciences (India)
2016-06-18
RBC), we perform a comparative study of the spectra and fluxes of energy and entropy, and the scaling of large-scale quantities for large and infinite Prandtl numbers in two (2D) and three (3D) dimensions. We observe close ...
Very Large Data Volumes Analysis of Collaborative Systems with Finite Number of States
Ivan, Ion; Ciurea, Cristian; Pavel, Sorin
2010-01-01
The collaborative system with finite number of states is defined. A very large database is structured. Operations on large databases are identified. Repetitive procedures for collaborative systems operations are derived. The efficiency of such procedures is analyzed. (Contains 6 tables, 5 footnotes and 3 figures.)
On random number generators providing convergence more rapid than 1/√N
International Nuclear Information System (INIS)
Belov, V.A.
1982-01-01
To support the simulation of processes in high-energy physics, a practical test of the efficiency of applying quasirandom numbers to multiple integration by the Monte Carlo method is presented, together with a comparison of the well-known generators of quasirandom and pseudorandom numbers [ru
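The idea under test can be sketched in pure Python. A 2D Halton quasirandom sequence (built from van der Corput sequences in bases 2 and 3) typically converges faster than the 1/√N rate of pseudorandom Monte Carlo for smooth integrands; this is an illustrative sketch on ∫∫ xy dx dy = 1/4 over the unit square, not the generators compared in the paper.

```python
import random

def van_der_corput(n, base):
    """n-th element of the van der Corput low-discrepancy sequence in `base`
    (digits of n reflected about the radix point)."""
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def estimate(points):
    """Monte Carlo estimate of the integral of x*y over [0,1]^2 (exact: 1/4)."""
    return sum(x * y for x, y in points) / len(points)

N = 10_000
# Halton sequence: van der Corput in coprime bases for each coordinate
halton = [(van_der_corput(i, 2), van_der_corput(i, 3)) for i in range(1, N + 1)]
rng = random.Random(0)
pseudo = [(rng.random(), rng.random()) for _ in range(N)]

err_quasi = abs(estimate(halton) - 0.25)
err_pseudo = abs(estimate(pseudo) - 0.25)
```

For this smooth integrand the quasirandom error is typically an order of magnitude below the pseudorandom one at the same N, the kind of advantage the abstract's "convergence more rapid than 1/√N" refers to.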
Whelan, Simon
2007-10-01
Phylogenetic tree estimation plays a critical role in a wide variety of molecular studies, including molecular systematics, phylogenetics, and comparative genomics. Finding the optimal tree relating a set of sequences using score-based (optimality criterion) methods, such as maximum likelihood and maximum parsimony, may require all possible trees to be considered, which is not feasible even for modest numbers of sequences. In practice, trees are estimated using heuristics that represent a trade-off between topological accuracy and speed. I present a series of novel algorithms suitable for score-based phylogenetic tree reconstruction that demonstrably improve the accuracy of tree estimates while maintaining high computational speeds. The heuristics function by allowing the efficient exploration of large numbers of trees through novel hill-climbing and resampling strategies. These heuristics, and other computational approximations, are implemented for maximum likelihood estimation of trees in the program Leaphy, and its performance is compared to other popular phylogenetic programs. Trees are estimated from 4059 different protein alignments using a selection of phylogenetic programs and the likelihoods of the tree estimates are compared. Trees estimated using Leaphy are found to have equal to or better likelihoods than trees estimated using other phylogenetic programs in 4004 (98.6%) families and provide a unique best tree that no other program found in 1102 (27.1%) families. The improvement is particularly marked for larger families (80 to 100 sequences), where Leaphy finds a unique best tree in 81.7% of families.
Fuller, Nathaniel J.; Licata, Nicholas A.
2018-05-01
Obtaining a detailed understanding of the physical interactions between a cell and its environment often requires information about the flow of fluid surrounding the cell. Cells must be able to effectively absorb and discard material in order to survive. Strategies for nutrient acquisition and toxin disposal, which have been evolutionarily selected for their efficacy, should reflect knowledge of the physics underlying this mass transport problem. Motivated by these considerations, in this paper we discuss the results from an undergraduate research project on the advection-diffusion equation at small Reynolds number and large Péclet number. In particular, we consider the problem of mass transport for a Stokesian spherical swimmer. We approach the problem numerically and analytically through a rescaling of the concentration boundary layer. A biophysically motivated first-passage problem for the absorption of material by the swimming cell demonstrates quantitative agreement between the numerical and analytical approaches. We conclude by discussing the connections between our results and the design of smart toxin disposal systems.
More Than a "Number": Perspectives of Prenatal Care Quality from Mothers of Color and Providers.
Coley, Sheryl L; Zapata, Jasmine Y; Schwei, Rebecca J; Mihalovic, Glen Ellen; Matabele, Maya N; Jacobs, Elizabeth A; Anderson, Cynthie K
African American mothers and other mothers of historically underserved populations consistently have higher rates of adverse birth outcomes than White mothers. Increasing prenatal care use among these mothers may reduce these disparities. Most prenatal care research focuses on prenatal care adequacy rather than concepts of quality. Even less research examines the dual perspectives of African American mothers and prenatal care providers. In this qualitative study, we compared perceptions of prenatal care quality between African American and mixed race mothers and prenatal care providers. Prenatal care providers (n = 20) and mothers who recently gave birth (n = 19) completed semistructured interviews. Using a thematic analysis approach and Donabedian's conceptual model of health care quality, interviews were analyzed to identify key themes and summarize differences in perspectives between providers and mothers. Mothers and providers valued the tailoring of care based on individual needs and functional patient-provider relationships as key elements of prenatal care quality. Providers acknowledged the need for knowing the social context of patients, but mothers and providers differed in perspectives of "culturally sensitive" prenatal care. Although most mothers had positive prenatal care experiences, mothers also recalled multiple complications with providers' negative assumptions and disregard for mothers' options in care. Exploring strategies to strengthen patient-provider interactions and communication during prenatal care visits remains critical to address for facilitating continuity of care for mothers of color. These findings warrant further investigation of dual patient and provider perspectives of culturally sensitive prenatal care to address the service needs of African American and mixed race mothers. Copyright © 2017 Jacobs Institute of Women's Health. Published by Elsevier Inc. All rights reserved.
Secret Sharing Schemes with a large number of players from Toric Varieties
DEFF Research Database (Denmark)
Hansen, Johan P.
A general theory for constructing linear secret sharing schemes over a finite field $\\Fq$ from toric varieties is introduced. The number of players can be as large as $(q-1)^r-1$ for $r\\geq 1$. We present general methods for obtaining the reconstruction and privacy thresholds as well as conditions...... for multiplication on the associated secret sharing schemes. In particular we apply the method on certain toric surfaces. The main results are ideal linear secret sharing schemes where the number of players can be as large as $(q-1)^2-1$. We determine bounds for the reconstruction and privacy thresholds...
Energy Technology Data Exchange (ETDEWEB)
Kupavskii, A B; Raigorodskii, A M [M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Moscow (Russian Federation)
2013-10-31
We investigate in detail some properties of distance graphs constructed on the integer lattice. Such graphs find wide applications in problems of combinatorial geometry, in particular, such graphs were employed to answer Borsuk's question in the negative and to obtain exponential estimates for the chromatic number of the space. This work is devoted to the study of the number of cliques and the chromatic number of such graphs under certain conditions. Constructions of sequences of distance graphs are given, in which the graphs have unit length edges and contain a large number of triangles that lie on a sphere of radius 1/√3 (which is the minimum possible). At the same time, the chromatic numbers of the graphs depend exponentially on their dimension. The results of this work strengthen and generalize some of the results obtained in a series of papers devoted to related issues. Bibliography: 29 titles.
ON AN EXPONENTIAL INEQUALITY AND A STRONG LAW OF LARGE NUMBERS FOR MONOTONE MEASURES
Czech Academy of Sciences Publication Activity Database
Agahi, H.; Mesiar, Radko
2014-01-01
Roč. 50, č. 5 (2014), s. 804-813 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Choquet expectation * a strong law of large numbers * exponential inequality * monotone probability Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/E/mesiar-0438052.pdf
Strong Laws of Large Numbers for Arrays of Rowwise NA and LNQD Random Variables
Directory of Open Access Journals (Sweden)
Jiangfeng Wang
2011-01-01
Full Text Available Some strong laws of large numbers and strong convergence properties for arrays of rowwise negatively associated and linearly negative quadrant dependent random variables are obtained. The results obtained not only generalize the result of Hu and Taylor to negatively associated and linearly negative quadrant dependent random variables, but also improve it.
The lore of large numbers: some historical background to the anthropic principle
International Nuclear Information System (INIS)
Barrow, J.D.
1981-01-01
A description is given of how the study of numerological coincidences in physics and cosmology led first to the Large Numbers Hypothesis of Dirac and then to the suggestion of the Anthropic Principle in a variety of forms. The early history of 'coincidences' is discussed together with the work of Weyl, Eddington and Dirac. (author)
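The coincidence at the heart of Dirac's Large Numbers Hypothesis can be checked directly: the ratio of the electric to the gravitational force between a proton and an electron is a dimensionless number of order 10^39, the same order as the age of the universe expressed in atomic time units. A sketch using approximate CODATA-style constants (rounded values, adequate for an order-of-magnitude check):

```python
# Approximate physical constants (SI units)
k_e = 8.988e9        # Coulomb constant, N m^2 / C^2
e   = 1.602e-19      # elementary charge, C
G   = 6.674e-11      # gravitational constant, N m^2 / kg^2
m_p = 1.673e-27      # proton mass, kg
m_e = 9.109e-31      # electron mass, kg

# Electric-to-gravitational force ratio for a proton-electron pair;
# the separation distance cancels, leaving a pure number ~ 10^39.
force_ratio = (k_e * e**2) / (G * m_p * m_e)
```

It is the apparent equality of such enormous, seemingly unrelated pure numbers that led Dirac to postulate a time-varying gravitational coupling.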
The three-large-primes variant of the number field sieve
S.H. Cavallar
2002-01-01
The Number Field Sieve (NFS) is the asymptotically fastest known factoring algorithm for large integers. This method was proposed by John Pollard in 1988. Since then several variants have been implemented with the objective of improving the siever, which is the most time-consuming part of
SECRET SHARING SCHEMES WITH STRONG MULTIPLICATION AND A LARGE NUMBER OF PLAYERS FROM TORIC VARIETIES
DEFF Research Database (Denmark)
Hansen, Johan Peder
2017-01-01
This article considers Massey's construction of linear secret sharing schemes from toric varieties over a finite field $\\Fq$ with $q$ elements. The number of players can be as large as $(q-1)^r-1$ for $r\geq 1$. The schemes have strong multiplication; such schemes can be utilized in ...
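The toric-variety construction itself is too involved for a short sketch, but the underlying notion of a linear secret sharing scheme can be illustrated with the simplest member of the family, Shamir's polynomial scheme over a prime field. This is an illustrative analogue only, not the paper's construction (which achieves far more players per field size and strong multiplication):

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime, comfortably larger than any secret here

def make_shares(secret, threshold, n_players, seed=None):
    """Shamir secret sharing: pick a random polynomial of degree
    threshold-1 with constant term `secret`; player i's share is
    the point (i, f(i)) mod PRIME."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
    return [(i, f(i)) for i in range(1, n_players + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any
    `threshold` shares; fewer shares reveal nothing."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # multiply by the modular inverse of den (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(12345, threshold=3, n_players=5, seed=7)
assert reconstruct(shares[:3]) == 12345
```

In Massey's general framework, the evaluation points and polynomial space are replaced by codewords of a linear code; the toric schemes of the paper arise from codes on toric varieties.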
Optimal number of coarse-grained sites in different components of large biomolecular complexes.
Sinitskiy, Anton V; Saunders, Marissa G; Voth, Gregory A
2012-07-26
The computational study of large biomolecular complexes (molecular machines, cytoskeletal filaments, etc.) is a formidable challenge facing computational biophysics and biology. To achieve biologically relevant length and time scales, coarse-grained (CG) models of such complexes usually must be built and employed. One of the important early stages in this approach is to determine an optimal number of CG sites in different constituents of a complex. This work presents a systematic approach to this problem. First, a universal scaling law is derived and numerically corroborated for the intensity of the intrasite (intradomain) thermal fluctuations as a function of the number of CG sites. Second, this result is used for derivation of the criterion for the optimal number of CG sites in different parts of a large multibiomolecule complex. In the zeroth-order approximation, this approach validates the empirical rule of taking one CG site per fixed number of atoms or residues in each biomolecule, previously widely used for smaller systems (e.g., individual biomolecules). The first-order corrections to this rule are derived and numerically checked by the case studies of the Escherichia coli ribosome and Arp2/3 actin filament junction. In different ribosomal proteins, the optimal number of amino acids per CG site is shown to differ by a factor of 3.5, and an even wider spread may exist in other large biomolecular complexes. Therefore, the method proposed in this paper is valuable for the optimal construction of CG models of such complexes.
Calculation of large Reynolds number two-dimensional flow using discrete vortices with random walk
International Nuclear Information System (INIS)
Milinazzo, F.; Saffman, P.G.
1977-01-01
The numerical calculation of two-dimensional rotational flow at large Reynolds number is considered. The method of replacing a continuous distribution of vorticity by a finite number, N, of discrete vortices is examined, where the vortices move under their mutually induced velocities plus a random component to simulate effects of viscosity. The accuracy of the method is studied by comparison with the exact solution for the decay of a circular vortex. It is found, and analytical arguments are produced in support, that the quantitative error is significant unless N is large compared with a characteristic Reynolds number. The mutually induced velocities are calculated by both direct summation and by the ''cloud in cell'' technique. The latter method is found to produce comparable error and to be much faster
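The velocity computation (direct summation or "cloud in cell") is the expensive part of the method, but the random-walk treatment of viscosity is simple to illustrate. In the sketch below (a hypothetical minimal model, not the paper's code) each discrete vortex takes independent Gaussian steps of variance 2νΔt per direction, so an ensemble released at the origin spreads with ⟨r²⟩ = 4νt, mirroring the exact viscous decay of a circular vortex core:

```python
import random, math

def diffuse_vortices(n_vortices, nu, dt, n_steps, seed=1):
    """Model viscous diffusion by giving each discrete vortex an
    independent Gaussian random walk with variance 2*nu*dt per step
    in each direction (the random-walk analogue of 2D viscosity)."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * nu * dt)
    xs = [0.0] * n_vortices
    ys = [0.0] * n_vortices
    for _ in range(n_steps):
        for i in range(n_vortices):
            xs[i] += rng.gauss(0.0, sigma)
            ys[i] += rng.gauss(0.0, sigma)
    return xs, ys

nu, dt, steps = 1e-3, 0.01, 200
xs, ys = diffuse_vortices(5000, nu, dt, steps)
# For pure diffusion from a point release, <r^2> should grow as 4*nu*t.
mean_r2 = sum(x * x + y * y for x, y in zip(xs, ys)) / len(xs)
```

The statistical scatter of such ensemble averages decays only as 1/√N, which is one way to see why the paper finds the error significant unless N is large compared with the Reynolds number.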
Arbitrarily large numbers of kink internal modes in inhomogeneous sine-Gordon equations
Energy Technology Data Exchange (ETDEWEB)
González, J.A., E-mail: jalbertgonz@yahoo.es [Department of Physics, Florida International University, Miami, FL 33199 (United States); Department of Natural Sciences, Miami Dade College, 627 SW 27th Ave., Miami, FL 33135 (United States); Bellorín, A., E-mail: alberto.bellorin@ucv.ve [Escuela de Física, Facultad de Ciencias, Universidad Central de Venezuela, Apartado Postal 47586, Caracas 1041-A (Venezuela, Bolivarian Republic of); García-Ñustes, M.A., E-mail: monica.garcia@pucv.cl [Instituto de Física, Pontificia Universidad Católica de Valparaíso, Casilla 4059 (Chile); Guerrero, L.E., E-mail: lguerre@usb.ve [Departamento de Física, Universidad Simón Bolívar, Apartado Postal 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of); Jiménez, S., E-mail: s.jimenez@upm.es [Departamento de Matemática Aplicada a las TT.II., E.T.S.I. Telecomunicación, Universidad Politécnica de Madrid, 28040-Madrid (Spain); Vázquez, L., E-mail: lvazquez@fdi.ucm.es [Departamento de Matemática Aplicada, Facultad de Informática, Universidad Complutense de Madrid, 28040-Madrid (Spain)
2017-06-28
We prove analytically the existence of an infinite number of internal (shape) modes of sine-Gordon solitons in the presence of some inhomogeneous long-range forces, provided some conditions are satisfied. - Highlights: • We have found exact kink solutions to the perturbed sine-Gordon equation. • We have been able to study analytically the kink stability problem. • A kink equilibrated by an exponentially-localized perturbation has a finite number of oscillation modes. • A sufficiently broad equilibrating perturbation supports an infinite number of soliton internal modes.
Break down of the law of large numbers in Josephson junction series arrays
International Nuclear Information System (INIS)
Dominguez, D.; Cerdeira, H.A.
1995-01-01
We study underdamped Josephson junction series arrays that are globally coupled through a resistive shunting load and driven by an rf bias current. We find that they can be an experimental realization of many phenomena currently studied in globally coupled logistic maps. We find coherent, ordered, partially ordered and turbulent phases in the IV characteristics of the array. The ordered phase corresponds to giant Shapiro steps. In the turbulent phase there is a saturation of the broad band noise for a large number of junctions. This corresponds to a break down of the law of large numbers as seen in globally coupled maps. Coexisting with this, we find an emergence of novel pseudo-steps in the IV characteristics. This effect can be experimentally distinguished from the true Shapiro steps, which do not have broad band noise emission. (author). 21 refs, 5 figs
Breakdown of the law of large numbers in Josephson junction series arrays
International Nuclear Information System (INIS)
Dominguez, D.; Cerdeira, H.A.
1994-01-01
We study underdamped Josephson junction series arrays that are globally coupled through a resistive shunting load and driven by an rf bias current. We find that they can be an experimental realization of many phenomena currently studied in globally coupled logistic maps. We find coherent, ordered, partially ordered and turbulent phases in the IV characteristics of the array. The ordered phase corresponds to giant Shapiro steps. In the turbulent phase there is a saturation of the broad band noise for a large number of junctions. This corresponds to a break down of the law of large numbers as seen in the globally coupled maps. Coexisting with this, we find an emergence of novel pseudo-steps in the IV characteristics. This effect can be experimentally distinguished from the Shapiro steps, which do not have broad band noise emission. (author). 21 refs, 5 figs
The holographic dual of a Riemann problem in a large number of dimensions
Energy Technology Data Exchange (ETDEWEB)
Herzog, Christopher P.; Spillane, Michael [C.N. Yang Institute for Theoretical Physics, Department of Physics and Astronomy,Stony Brook University, Stony Brook, NY 11794 (United States); Yarom, Amos [Department of Physics, Technion,Haifa 32000 (Israel)
2016-08-22
We study properties of a non equilibrium steady state generated when two heat baths are initially in contact with one another. The dynamics of the system we study are governed by holographic duality in a large number of dimensions. We discuss the “phase diagram” associated with the steady state, the dual, dynamical, black hole description of this problem, and its relation to the fluid/gravity correspondence.
Phases of a stack of membranes in a large number of dimensions of configuration space
Borelli, M. E.; Kleinert, H.
2001-05-01
The phase diagram of a stack of tensionless membranes with nonlinear curvature energy and vertical harmonic interaction is calculated exactly in a large number of dimensions of configuration space. At low temperatures, the system forms a lamellar phase with spontaneously broken translational symmetry in the vertical direction. At a critical temperature, the stack disorders vertically in a meltinglike transition. The critical temperature is determined as a function of the interlayer separation l.
Early stage animal hoarders: are these owners of large numbers of adequately cared for cats?
Ramos, D.; da Cruz, N. O.; Ellis, Sarah; Hernandez, J. A. E.; Reche-Junior, A.
2013-01-01
Animal hoarding is a spectrum-based condition in which hoarders are often reported to have had normal and appropriate pet-keeping habits in childhood and early adulthood. Historically, research has focused largely on well established clinical animal hoarders with little work targeted towards the onset and development of animal hoarding. This study investigated whether a Brazilian population of owners of what might typically be considered an excessive number (20 or more) of cats were more like...
Loss of locality in gravitational correlators with a large number of insertions
Ghosh, Sudip; Raju, Suvrat
2017-09-01
We review lessons from the AdS/CFT correspondence that indicate that the emergence of locality in quantum gravity is contingent upon considering observables with a small number of insertions. Correlation functions, where the number of insertions scales with a power of the central charge of the CFT, are sensitive to nonlocal effects in the bulk theory, which arise from a combination of the effects of the bulk Gauss law and a breakdown of perturbation theory. To examine whether a similar effect occurs in flat space, we consider the scattering of massless particles in the bosonic string and the superstring in the limit where the number of external particles, n, becomes very large. We use estimates of the volume of the Weil-Petersson moduli space of punctured Riemann surfaces to argue that string amplitudes grow factorially in this limit. We verify this factorial behavior through an extensive numerical analysis of string amplitudes at large n. Our numerical calculations rely on the observation that, in the large n limit, the string scattering amplitude localizes on the Gross-Mende saddle points, even though individual particle energies are small. This factorial growth implies the breakdown of string perturbation theory for n ∼ (M_pl/E)^{d−2} in d dimensions, where E is the typical individual particle energy. We explore the implications of this breakdown for the black hole information paradox. We show that the loss of locality suggested by this breakdown is precisely sufficient to resolve the cloning and strong subadditivity paradoxes.
International Nuclear Information System (INIS)
Novak Pintarič, Zorka; Kravanja, Zdravko
2015-01-01
This paper presents a robust computational methodology for the synthesis and design of flexible HENs (Heat Exchanger Networks) having large numbers of uncertain parameters. This methodology combines several heuristic methods which progressively lead to a flexible HEN design at a specific level of confidence. During the first step, a HEN topology is generated under nominal conditions, followed by determining those points critical for flexibility. A significantly reduced multi-scenario model for flexible HEN design is formulated at the nominal point with the flexibility constraints at the critical points. The optimal design obtained is tested by stochastic Monte Carlo optimization and the flexibility index through solving one-scenario problems within a loop. The presented methodology is novel in the enormous reduction it achieves in the number of scenarios, and hence in computational effort, for HEN design problems. Despite several simplifications, the capability of designing flexible HENs with large numbers of uncertain parameters, which are typical throughout industry, is not compromised. An illustrative case study is presented for flexible HEN synthesis comprising 42 uncertain parameters. - Highlights: • Methodology for HEN (Heat Exchanger Network) design under uncertainty is presented. • The main benefit is solving HENs having large numbers of uncertain parameters. • Drastically reduced multi-scenario HEN design problem is formulated through several steps. • Flexibility of HEN is guaranteed at a specific level of confidence.
A full picture of large lepton number asymmetries of the Universe
Energy Technology Data Exchange (ETDEWEB)
Barenboim, Gabriela [Departament de Física Teòrica and IFIC, Universitat de València-CSIC, C/ Dr. Moliner, 50, Burjassot, E-46100 Spain (Spain); Park, Wan-Il, E-mail: Gabriela.Barenboim@uv.es, E-mail: wipark@jbnu.ac.kr [Department of Science Education (Physics), Chonbuk National University, 567 Baekje-daero, Jeonju, 561-756 (Korea, Republic of)
2017-04-01
A large lepton number asymmetry of O(0.1−1) at the present Universe might not only be allowed but also necessary for consistency among cosmological data. We show that, if a sizeable lepton number asymmetry were produced before the electroweak phase transition, the requirement of not producing too much baryon number asymmetry through sphaleron processes forces the high-scale lepton number asymmetry to be larger than about 0.3. Therefore a mild entropy release causing an O(10-100) suppression of the pre-existing particle density should take place, when the background temperature of the Universe is around T = O(10{sup −2}-10{sup 2}) GeV, for a large but experimentally consistent asymmetry to be present today. We also show that such a mild entropy production can be obtained by the late-time decays of the saxion, constraining the parameters of the Peccei-Quinn sector such as the mass and the vacuum expectation value of the saxion field to be m{sub φ} ∼> O(10) TeV and φ{sub 0} ∼> O(10{sup 14}) GeV, respectively.
Rousis, Nikolaos I; Bade, Richard; Bijlsma, Lubertus; Zuccato, Ettore; Sancho, Juan V; Hernandez, Felix; Castiglioni, Sara
2017-07-01
Assessing the presence of pesticides in environmental waters is particularly challenging because of the huge number of substances used which may end up in the environment. Furthermore, the occurrence of pesticide transformation products (TPs) and/or metabolites makes this task even harder. Most studies dealing with the determination of pesticides in water include only a small number of analytes and in many cases no TPs. The present study applied a screening method for the determination of a large number of pesticides and TPs in wastewater (WW) and surface water (SW) from Spain and Italy. Liquid chromatography coupled to high-resolution mass spectrometry (HRMS) was used to screen a database of 450 pesticides and TPs. Detection and identification were based on specific criteria, i.e. mass accuracy, fragmentation, and comparison of retention times when reference standards were available, or a retention time prediction model when standards were not available. Seventeen pesticides and TPs from different classes (fungicides, herbicides and insecticides) were found in WW in Italy and Spain, and twelve in SW. Generally, in both countries more compounds were detected in effluent WW than in influent WW, and in SW than WW. This might be due to the analytical sensitivity in the different matrices, but also to the presence of multiple sources of pollution. HRMS proved a good screening tool to determine a large number of substances in water and identify some priority compounds for further quantitative analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
Matzen, Laura E; Benz, Zachary O; Dixon, Kevin R; Posey, Jamie; Kroger, James K; Speed, Ann E
2010-05-01
Raven's Progressive Matrices is a widely used test for assessing intelligence and reasoning ability (Raven, Court, & Raven, 1998). Since the test is nonverbal, it can be applied to many different populations and has been used all over the world (Court & Raven, 1995). However, relatively few matrices are in the sets developed by Raven, which limits their use in experiments requiring large numbers of stimuli. For the present study, we analyzed the types of relations that appear in Raven's original Standard Progressive Matrices (SPMs) and created a software tool that can combine the same types of relations according to parameters chosen by the experimenter, to produce very large numbers of matrix problems with specific properties. We then conducted a norming study in which the matrices we generated were compared with the actual SPMs. This study showed that the generated matrices both covered and expanded on the range of problem difficulties provided by the SPMs.
Impact factors for Reggeon-gluon transition in N=4 SYM with large number of colours
Energy Technology Data Exchange (ETDEWEB)
Fadin, V.S., E-mail: fadin@inp.nsk.su [Budker Institute of Nuclear Physics of SD RAS, 630090 Novosibirsk (Russian Federation); Novosibirsk State University, 630090 Novosibirsk (Russian Federation); Fiore, R., E-mail: roberto.fiore@cs.infn.it [Dipartimento di Fisica, Università della Calabria, and Istituto Nazionale di Fisica Nucleare, Gruppo collegato di Cosenza, Arcavacata di Rende, I-87036 Cosenza (Italy)
2014-06-27
We calculate impact factors for Reggeon-gluon transition in supersymmetric Yang–Mills theory with four supercharges at large number of colours N{sub c}. In the next-to-leading order impact factors are not uniquely defined and must accord with BFKL kernels and energy scales. We obtain the impact factor corresponding to the kernel and the energy evolution parameter, which is invariant under Möbius transformation in momentum space, and show that it is also Möbius invariant up to terms taken into account in the BDS ansatz.
Do neutron stars disprove multiplicative creation in Dirac's large number hypothesis
International Nuclear Information System (INIS)
Qadir, A.; Mufti, A.A.
1980-07-01
Dirac's cosmology, based on his large number hypothesis, took the gravitational coupling to be decreasing with time and matter to be created as the square of time. Since the effects predicted by Dirac's theory are very small, it is difficult to find a ''clean'' test for it. Here we show that the observed radiation from pulsars is inconsistent with Dirac's multiplicative creation model, in which the matter created is proportional to the density of matter already present. Of course, this discussion makes no comment on the ''additive creation'' model, or on the revised version of Dirac's theory. (author)
Law of large numbers and central limit theorem for randomly forced PDE's
Shirikyan, A
2004-01-01
We consider a class of dissipative PDE's perturbed by an external random force. Under the condition that the distribution of perturbation is sufficiently non-degenerate, a strong law of large numbers (SLLN) and a central limit theorem (CLT) for solutions are established and the corresponding rates of convergence are estimated. It is also shown that the estimates obtained are close to being optimal. The proofs are based on the property of exponential mixing for the problem in question and some abstract SLLN and CLT for mixing-type Markov processes.
On the Convergence and Law of Large Numbers for the Non-Euclidean Lp -Means
Directory of Open Access Journals (Sweden)
George Livadiotis
2017-05-01
Full Text Available This paper describes and proves two important theorems that compose the Law of Large Numbers for the non-Euclidean L_p-means, known to be true for the Euclidean L_2-means: Let the L_p-mean estimator be the specific functional that estimates the L_p-mean of N independent and identically distributed random variables; then, (i) the expectation value of the L_p-mean estimator equals the mean of the distributions of the random variables; and (ii) the limit N → ∞ of the L_p-mean estimator also equals the mean of the distributions.
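Numerically, the L_p-mean of a sample can be computed as the minimizer of Σ|x_i − m|^p, which is convex for p ≥ 1. The sketch below (hypothetical, using a simple ternary search rather than the paper's estimator functional) illustrates the convergence claim for a symmetric distribution, where every L_p-mean should approach the distribution mean as N grows:

```python
import random

def lp_mean(xs, p, iters=100):
    """L_p-mean of a sample: the value m minimizing sum(|x - m|**p),
    found by ternary search (the objective is convex for p >= 1).
    p = 2 recovers the arithmetic mean, p = 1 the median."""
    lo, hi = min(xs), max(xs)
    def cost(m):
        return sum(abs(x - m) ** p for x in xs)
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if cost(m1) < cost(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

rng = random.Random(0)
# Symmetric (Gaussian) distribution centred at 5: all L_p-means agree.
xs = [rng.gauss(5.0, 2.0) for _ in range(5_000)]
estimates = {p: lp_mean(xs, p) for p in (1.0, 1.5, 2.0, 3.0)}
```

All four estimates cluster around 5, the common mean, in line with the limit statement of theorem (ii).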
Superposition of elliptic functions as solutions for a large number of nonlinear equations
International Nuclear Information System (INIS)
Khare, Avinash; Saxena, Avadh
2014-01-01
For a large number of nonlinear equations, both discrete and continuum, we demonstrate a kind of linear superposition. We show that whenever a nonlinear equation admits solutions in terms of both Jacobi elliptic functions cn(x, m) and dn(x, m) with modulus m, then it also admits solutions in terms of their sum as well as their difference. We have checked this in the case of several nonlinear equations such as the nonlinear Schrödinger equation, MKdV, a mixed KdV-MKdV system, a mixed quadratic-cubic nonlinear Schrödinger equation, the Ablowitz-Ladik equation, the saturable nonlinear Schrödinger equation, λϕ⁴, the discrete MKdV as well as for several coupled field equations. Further, for a large number of nonlinear equations, we show that whenever a nonlinear equation admits a periodic solution in terms of dn²(x, m), it also admits solutions in terms of dn²(x, m) ± √m cn(x, m) dn(x, m), even though cn(x, m)dn(x, m) is not a solution of these nonlinear equations. Finally, we also obtain superposed solutions of various forms for several coupled nonlinear equations.
International Nuclear Information System (INIS)
Patil, Sunil; Tafti, Danesh
2012-01-01
Highlights: ► Large eddy simulation. ► Wall layer modeling. ► Synthetic inlet turbulence. ► Swirl flows. - Abstract: Large eddy simulations of complex high Reynolds number flows are carried out with the near wall region being modeled with a zonal two layer model. A novel formulation for solving the turbulent boundary layer equation for the effective tangential velocity in a generalized co-ordinate system is presented and applied in the near wall zonal treatment. This formulation reduces the computational time in the inner layer significantly compared to the conventional two layer formulations present in the literature and is most suitable for complex geometries involving body fitted structured and unstructured meshes. The cost effectiveness and accuracy of the proposed wall model, used with the synthetic eddy method (SEM) to generate inlet turbulence, is investigated in turbulent channel flow, flow over a backward facing step, and confined swirling flows at moderately high Reynolds numbers. Predictions are compared with available DNS, experimental LDV data, as well as wall resolved LES. In all cases, there is at least an order of magnitude reduction in computational cost with no significant loss in prediction accuracy.
Conformal window in QCD for large numbers of colors and flavors
International Nuclear Information System (INIS)
Zhitnitsky, Ariel R.
2014-01-01
We conjecture that the phase transition in QCD at large number of colors N ≫ 1 is triggered by a drastic change in the instanton density. As a result, all physical observables also experience a sharp modification in their θ behavior. This conjecture is motivated by the holographic model of QCD, where the confinement-deconfinement phase transition indeed happens precisely at the temperature T = T_c at which the θ-dependence of the vacuum energy experiences a sudden change in behavior: from N² cos(θ/N) at T < T_c to cos θ · exp(−N) at T > T_c. This conjecture is also supported by recent lattice studies. We employ this conjecture to study a possible phase transition, as a function of κ ≡ N_f/N, from the confinement to the conformal phase in the Veneziano limit N_f ∼ N, when the numbers of flavors and colors are large but the ratio κ is finite. Technically, we consider an operator which gets its expectation value solely from non-perturbative instanton effects. When κ exceeds some critical value, κ > κ_c, the integral over instanton size is dominated by small-size instantons, making the instanton computations reliable, with the expected exp(−N) behavior. However, when κ < κ_c, the integral over instanton size is dominated by large-size instantons and the instanton expansion breaks down. This regime with κ < κ_c corresponds to the confinement phase. We also compute the variation of the critical κ_c(T, μ) when the temperature and chemical potential T, μ ≪ Λ_QCD vary slightly. We also discuss the scaling (x_i − x_j)^(−γ_det) in the conformal phase
Vicious random walkers in the limit of a large number of walkers
International Nuclear Information System (INIS)
Forrester, P.J.
1989-01-01
The vicious random walker problem on a line is studied in the limit of a large number of walkers. The multidimensional integral representing the probability that the p walkers all survive until time t (denoted P_t^(p)) is shown to be analogous to the partition function of a particular one-component Coulomb gas. By assuming the existence of the thermodynamic limit for the Coulomb gas, one can deduce asymptotic formulas for P_t^(p) in the large-p, large-t limit. A straightforward analysis gives rigorous asymptotic formulas for the probability that after a time t the walkers are in their initial configuration (this event is termed a reunion). Consequently, asymptotic formulas for the conditional probability of a reunion, given that all walkers survive, are derived. Also, an asymptotic formula for the conditional probability density that any walker will arrive at a particular point at time t, given that all p walkers survive, is calculated in the limit t ≫ p
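For small p and t the survival probability can be estimated directly; the following Monte Carlo sketch illustrates the quantity being analyzed (the lock-step ±1 dynamics, the initial spacing of two sites, and the trial count are illustrative assumptions, not details taken from the paper):

```python
import random

def survival_probability(p, t, trials=2000, seed=0):
    """Monte Carlo estimate of P_t^(p): the probability that p vicious
    walkers on the integer line, started two sites apart, never meet
    during t lock-step moves of +/-1 each."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        pos = [2 * i for i in range(p)]
        alive = True
        for _ in range(t):
            pos = [x + rng.choice((-1, 1)) for x in pos]
            if len(set(pos)) < p:  # two walkers collided: "vicious" annihilation
                alive = False
                break
        if alive:
            survived += 1
    return survived / trials
```

Brute-force sampling of this kind is only feasible for modest p and t; the asymptotic formulas of the paper describe precisely the large-p, large-t regime it cannot reach.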
47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.
2010-10-01
... require software but not hardware changes to provide portability (“Hardware Capable Switches”), within 60... queries, so that they can deliver calls from their networks to any party that has retained its number after switching from one telecommunications carrier to another. (c) [Reserved] (d) In the event a...
International Nuclear Information System (INIS)
Bricteux, L.; Duponcheel, M.; Winckelmans, G.; Tiselj, I.; Bartosiewicz, Y.
2012-01-01
Highlights: ► We perform direct and hybrid large eddy simulations of high-Reynolds, low-Prandtl turbulent wall-bounded flows with heat transfer. ► We use state-of-the-art numerical methods with low energy dissipation and low dispersion. ► We use recent multiscale subgrid-scale models. ► Important results concerning the establishment of a near-wall modeling strategy in RANS are provided. ► The turbulent Prandtl number predicted by our simulations differs from that proposed by some correlations in the literature. - Abstract: This paper deals with the issue of modeling convective turbulent heat transfer of a liquid metal with a Prandtl number down to 0.01, which is the order of magnitude of lead–bismuth eutectic in a liquid metal reactor. This work presents a DNS (direct numerical simulation) and an LES (large eddy simulation) of a channel flow at two different Reynolds numbers, and the results are analyzed in the frame of best-practice guidelines for RANS (Reynolds-averaged Navier–Stokes) computations used in industrial applications. They primarily show that the turbulent Prandtl number concept should be used with care and that even recently proposed correlations may not be sufficient.
International Nuclear Information System (INIS)
Ye Peng-Cheng; Pan Guang
2015-01-01
Due to the high speed of underwater vehicles, cavitation is generated inevitably, along with sound attenuation when the sound signal traverses the cavity region around the vehicle. Linear wave propagation is studied to obtain the influence of the bubbly liquid on acoustic wave propagation in the cavity region. The sound attenuation coefficient and the sound speed formula of the bubbly liquid are presented. Based on the sound attenuation coefficients for various vapor volume fractions, the attenuation of sound intensity is calculated under large-cavitation-number conditions. The result shows that the sound intensity attenuation is fairly small under certain conditions; consequently, the intensity attenuation can be neglected in engineering. (paper)
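As a back-of-the-envelope illustration of why the loss can be negligible, intensity across an attenuating region follows the standard dB attenuation law; the coefficient itself would come from the paper's bubbly-liquid formula (a function of vapor volume fraction and frequency), which is not reproduced here, so the values below are placeholders:

```python
def intensity_after(i0, alpha_db_per_m, distance_m):
    """Intensity remaining after a signal crosses an attenuating (bubbly)
    region of the given thickness, using the usual dB attenuation law.
    alpha_db_per_m is assumed known from the bubbly-liquid model."""
    loss_db = alpha_db_per_m * distance_m
    return i0 * 10.0 ** (-loss_db / 10.0)
```

For instance, an assumed 0.1 dB/m coefficient over a 0.5 m cavity removes only about 1% of the intensity, the kind of loss the authors argue can be ignored in engineering practice.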
Random number generators for large-scale parallel Monte Carlo simulations on FPGA
Lin, Y.; Wang, F.; Liu, B.
2018-05-01
Through parallelization, field programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs suitable for LPMC studies on FPGA. One of the newly proposed implementations, a parallel version of the additive lagged Fibonacci generator (parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
Spyropoulos, Evangelos T.; Holmes, Bayard S.
1997-01-01
The dynamic subgrid-scale model is employed in large-eddy simulations of flow over a cylinder at a Reynolds number, based on the diameter of the cylinder, of 90,000. The Centric SPECTRUM(trademark) finite element solver is used for the analysis. The far field sound pressure is calculated from Lighthill-Curle's equation using the computed fluctuating pressure at the surface of the cylinder. The sound pressure level at a location 35 diameters away from the cylinder and at an angle of 90 deg with respect to the wake's downstream axis was found to have a peak value of approximately 110 db. Slightly smaller peak values were predicted at the 60 deg and 120 deg locations. A grid refinement study suggests that the dynamic model demands mesh refinement beyond that used here.
System for high-voltage control in detectors with a large number of photomultipliers
International Nuclear Information System (INIS)
Donskov, S.V.; Kachanov, V.A.; Mikhajlov, Yu.V.
1985-01-01
A simple and inexpensive on-line system for high-voltage control, designed for detectors with a large number of photomultipliers, has been developed and manufactured. It was developed for the GAMC-type hodoscopic electromagnetic calorimeters, comprising up to 4 thousand photomultipliers. High-voltage variation is performed by a high-speed potentiometer rotated by a microengine. Block diagrams of the computer control electronics are presented. The high-voltage control system has been used for five years in IHEP and CERN accelerator experiments. Operating experience has shown that it is quite simple and convenient to operate. With about 6 thousand controlled channels in the two experiments, no potentiometer or microengine failures were observed
Early Peritonitis in a Large Peritoneal Dialysis Provider System in Colombia.
Vargas, Edgar; Blake, Peter G; Sanabria, Mauricio; Bunch, Alfonso; López, Patricia; Vesga, Jasmín; Buitrago, Alberto; Astudillo, Kindar; Devia, Martha; Sánchez, Ricardo
♦ BACKGROUND: Peritonitis is the most important complication of peritoneal dialysis (PD), and early peritonitis rate is predictive of the subsequent course on PD. Our aim was to calculate the early peritonitis rate and to identify characteristics and predisposing factors in a large nationwide PD provider network in Colombia. ♦ METHODS: This was a historical observational cohort study of all adult patients starting PD between January 1, 2012, and December 31, 2013, in 49 renal facilities in the Renal Therapy Services in Colombia. We studied the peritonitis rate in the first 90 days of treatment, its causative micro-organisms, its predictors and its variation with time on PD and between individual facilities. ♦ RESULTS: A total of 3,525 patients initiated PD, with 176 episodes of peritonitis during 752 patient-years of follow-up for a rate of 0.23 episodes per patient year equivalent to 1 every 52 months. In 41 of 49 units, the rate was better than 1 per 33 months, and in 45, it was better than 1 per 24 months. Peritonitis rates did not differ with age, ethnicity, socioeconomic status, or PD modality. We identified high incidence risk periods at 2 to 5 weeks after initiation of PD and again at 10 to 12 weeks. ♦ CONCLUSION: An excellent peritonitis rate was achieved across a large nationwide network. This occurred in the context of high nationwide PD utilization and despite high rates of socioeconomic deprivation. We propose that a key factor in achieving this was a standardized approach to management of patients. Copyright © 2017 International Society for Peritoneal Dialysis.
International Nuclear Information System (INIS)
Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel; Cuevas, Sergio; Ramos, Eduardo
2014-01-01
We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octupoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, “The diffusive strip method for scalar mixing in two dimensions,” J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques with theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye-visualization experiments of the scalar fields. A theoretical model based on log-normal probability density functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows one to predict the PDFs of the scalar in agreement with numerical and experimental results. This model also indicates that the PDFs of the scalar are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.
Decision process in MCDM with large number of criteria and heterogeneous risk preferences
Directory of Open Access Journals (Sweden)
Jian Liu
Full Text Available A new decision process is proposed to address the twin challenges of a large number of criteria in multi-criteria decision making (MCDM) problems and of decision makers with heterogeneous risk preferences. First, from the perspective of objective data, effective criteria are extracted based on the similarity relations between criterion values, and the criteria are weighted accordingly. Second, the corresponding theoretical models of risk-preference expectations are built, based on the possibility and similarity between criterion values, to resolve the problem of different interval numbers having the same expectation. The risk preferences (risk-seeking, risk-neutral, and risk-averse) are then embedded in the decision process, and the optimal decision object is selected according to the decision makers' risk preferences using the corresponding model. Finally, a new information-aggregation algorithm is proposed, based on maximizing the fairness of decision results in group decisions, to account for the coexistence of decision makers with heterogeneous risk preferences. The scientific rationality of the new method is verified through the analysis of a real case. Keywords: Heterogeneous, Risk preferences, Fairness, Decision process, Group decision
Estimating the cost of skin cancer detection by dermatology providers in a large health care system.
Matsumoto, Martha; Secrest, Aaron; Anderson, Alyce; Saul, Melissa I; Ho, Jonhan; Kirkwood, John M; Ferris, Laura K
2018-04-01
Data on the cost and efficiency of skin cancer detection through total body skin examination are scarce. To determine the number needed to screen (NNS) and biopsy (NNB) and cost per skin cancer diagnosed in a large dermatology practice in patients undergoing total body skin examination. This is a retrospective observational study. During 2011-2015, a total of 20,270 patients underwent 33,647 visits for total body skin examination; 9956 lesion biopsies were performed yielding 2763 skin cancers, including 155 melanomas. The NNS to detect 1 skin cancer was 12.2 (95% confidence interval [CI] 11.7-12.6) and 1 melanoma was 215 (95% CI 185-252). The NNB to detect 1 skin cancer was 3.0 (95% CI 2.9-3.1) and 1 melanoma was 27.8 (95% CI 23.3-33.3). In a multivariable model for NNS, age and personal history of melanoma were significant factors. Age switched from a protective factor to a risk factor at 51 years of age. The estimated cost per melanoma detected was $32,594 (95% CI $27,326-$37,475). Data are from a single health care system and based on physician coding. Melanoma detection through total body skin examination is most efficient in patients ≥50 years of age and those with a personal history of melanoma. Our findings will be helpful in modeling the cost effectiveness of melanoma screening by dermatologists. Copyright © 2017 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
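The headline numbers follow almost directly from the reported counts; the quick check below reproduces the raw ratios (the published NNS, NNB, and confidence intervals are model-based estimates, so the raw ratios only approximate them):

```python
# Counts reported in the abstract (2011-2015).
visits = 33647         # total-body skin examination visits
biopsies = 9956        # lesion biopsies performed
skin_cancers = 2763    # skin cancers diagnosed
melanomas = 155        # melanomas diagnosed

nns_cancer = visits / skin_cancers     # visits screened per skin cancer found (~12.2)
nns_melanoma = visits / melanomas      # visits screened per melanoma found (~217)
nnb_cancer = biopsies / skin_cancers   # biopsies per skin cancer found (~3.6)
```

The raw visits-per-cancer ratio matches the reported NNS of 12.2 exactly; the raw melanoma NNS (~217) and NNB (~3.6) differ somewhat from the published point estimates (215 and 3.0), presumably reflecting the authors' statistical modeling rather than simple division.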
DEFF Research Database (Denmark)
2006-01-01
A method of providing or transporting a timing signal between a number of circuits, electrical or optical, where each circuit is fed by a node. The nodes forward timing signals between each other, and at least one node is adapted to not transmit a timing signal before having received a timing signal from at least two nodes. In this manner, the direction of the timing skew between nodes and circuits is known, and data transport between the circuits is made easier.
CRISPR transcript processing: a mechanism for generating a large number of small interfering RNAs
Directory of Open Access Journals (Sweden)
Djordjevic Marko
2012-07-01
Full Text Available Abstract Background CRISPR/Cas (Clustered Regularly Interspaced Short Palindromic Repeats/CRISPR-associated sequences) is a recently discovered prokaryotic defense system against foreign DNA, including viruses and plasmids. The CRISPR cassette is transcribed as a continuous transcript (pre-crRNA), which is processed by Cas proteins into small RNA molecules (crRNAs) that are responsible for defense against invading viruses. Experiments in E. coli report that overexpression of cas genes generates a large number of crRNAs from only a few pre-crRNAs. Results We here develop a minimal model of CRISPR processing, which we parameterize based on available experimental data. From the model, we show that the system can generate a large amount of crRNA from only a small decrease in the amount of pre-crRNA. The relationship between the decrease of pre-crRNAs and the increase of crRNAs corresponds to strong linear amplification. Interestingly, this strong amplification crucially depends on fast non-specific degradation of pre-crRNA by an unidentified nuclease. We show that overexpression of cas genes above a certain level does not result in a further increase of crRNA, but that this saturation can be relieved if the rate of CRISPR transcription is increased. We furthermore show that a small increase of the CRISPR transcription rate can substantially decrease the extent of cas gene activation necessary to achieve a desired amount of crRNA. Conclusions The simple mathematical model developed here is able to explain existing experimental observations on CRISPR transcript processing in Escherichia coli. The model shows that a competition between specific pre-crRNA processing and non-specific degradation determines the steady-state levels of crRNA and is responsible for the strong linear amplification of crRNAs when cas genes are overexpressed. The model further shows how the disappearance of only a few pre-crRNA molecules normally present in the cell can lead to a large (two
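The competition the authors describe can be captured in a few lines. This steady-state sketch uses generic mass-action rates and invented parameter values to illustrate the amplification mechanism; it is not the paper's parameterized model:

```python
def steady_state(transcription, k_process, k_degrade_pre, n_per_pre, k_degrade_cr):
    """Steady-state levels in a minimal CRISPR-processing scheme: pre-crRNA is
    transcribed at a constant rate and is either processed by Cas proteins
    into n_per_pre crRNAs or degraded non-specifically; crRNA decays at its
    own (slower) rate."""
    pre = transcription / (k_process + k_degrade_pre)
    cr = n_per_pre * k_process * pre / k_degrade_cr
    return pre, cr

# cas overexpression corresponds to a larger processing rate k_process
pre_lo, cr_lo = steady_state(1.0, 0.1, 10.0, 60, 0.01)
pre_hi, cr_hi = steady_state(1.0, 1.0, 10.0, 60, 0.01)
```

When non-specific degradation dominates (k_degrade_pre much larger than k_process), raising the processing rate tenfold multiplies the crRNA level almost tenfold while the pre-crRNA level drops by only a few percent: the strong linear amplification of the abstract.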
Directory of Open Access Journals (Sweden)
KeeHyun Park
2015-01-01
Full Text Available In this paper, a multilayer secure biomedical data management system for managing a very large number of diverse personal health devices (PHDs) is proposed. The system has the following characteristics: it supports international standard communication protocols to achieve interoperability; it is integrated, in the sense that both a PHD communication system and a remote PHD management system work together as a single system; and it provides user/message authentication processes to securely transmit biomedical data measured by PHDs, based on the concept of a biomedical signature. Some experiments, including a stress test, have been conducted to show that the system proposed and constructed in this study performs very well even when a very large number of PHDs are used. For the stress test, up to 1,200 threads are created to represent the same number of PHD agents. The loss ratio of ISO/IEEE 11073 messages in the normal system is as high as 14% when 1,200 PHD agents are connected. On the other hand, no message loss occurs in the multilayered system proposed in this study, which demonstrates its superiority over the normal system under heavy traffic.
Dam risk reduction study for a number of large tailings dams in Ontario
Energy Technology Data Exchange (ETDEWEB)
Verma, N. [AMEC Earth and Environmental Ltd., Mississauga, ON (Canada); Small, A. [AMEC Earth and Environmental Ltd., Fredericton, NB (Canada); Martin, T. [AMEC Earth and Environmental, Burnaby, BC (Canada); Cacciotti, D. [AMEC Earth and Environmental Ltd., Sudbury, ON (Canada); Ross, T. [Vale Inco Ltd., Sudbury, ON (Canada)
2009-07-01
This paper discussed a risk reduction study conducted for 10 large tailings dams located at a central tailings facility in Ontario. Located near large industrial and urban developments, the tailings dams were built using an upstream method of construction that did not involve beach compaction or the provision of under-drainage. The study provided a historical background for the dam and presented results from investigations and instrumentation data. The methods used to develop the dam configurations were discussed, and remedial measures and risk assessment measures used on the dams were reviewed. The aim of the study was to address key sources of risk, which include the presence of high pore pressures and hydraulic gradients; the potential for liquefaction; slope instability; and the potential for overtopping. A borehole investigation was conducted and piezocone probes were used to obtain continuous data and determine soil and groundwater conditions. The study identified that the lower portion of the dam slopes were of concern. Erosion gullies could lead to larger scale failures, and elevated pore pressures could lead to the risk of seepage breakouts. It was concluded that remedial measures are now being conducted to ensure slope stability. 6 refs., 1 tab., 6 figs.
Space Situational Awareness of Large Numbers of Payloads From a Single Deployment
Segerman, A.; Byers, J.; Emmert, J.; Nicholas, A.
2014-09-01
The nearly simultaneous deployment of a large number of payloads from a single vehicle presents a new challenge for space object catalog maintenance and space situational awareness (SSA). Following two cubesat deployments last November, it took five weeks to catalog the resulting 64 orbits. The upcoming Kicksat mission will present an even greater SSA challenge, with its deployment of 128 chip-sized picosats. Although all of these deployments are in short-lived orbits, future deployments will inevitably occur at higher altitudes, with a longer term threat of collision with active spacecraft. With such deployments, individual scientific payload operators require rapid precise knowledge of their satellites' locations. Following the first November launch, the cataloguing did not initially associate a payload with each orbit, leaving this to the satellite operators. For short duration missions, the time required to identify an experiment's specific orbit may easily be a large fraction of the spacecraft's lifetime. For a Kicksat-type deployment, present tracking cannot collect enough observations to catalog each small object. The current approach is to treat the chip cloud as a single catalog object. However, the cloud dissipates into multiple subclouds and, ultimately, tiny groups of untrackable chips. One response to this challenge may be to mandate installation of a transponder on each spacecraft. Directional transponder transmission detections could be used as angle observations for orbit cataloguing. Of course, such an approach would only be employable with cooperative spacecraft. In other cases, a probabilistic association approach may be useful, with the goal being to establish the probability of an element being at a given point in space. This would permit more reliable assessment of the probability of collision of active spacecraft with any cloud element. This paper surveys the cataloguing challenges presented by large scale deployments of small spacecraft
Droplet Breakup in Asymmetric T-Junctions at Intermediate to Large Capillary Numbers
Sadr, Reza; Cheng, Way Lee
2017-11-01
Splitting of a parent droplet into multiple daughter droplets of desired sizes is often needed to enhance production and investigational efficiency in microfluidic devices. This can be done in an active or a passive mode, depending on whether an external power source is used. In this study, three-dimensional simulations were performed using the Volume-of-Fluid (VOF) method to analyze droplet splitting in asymmetric T-junctions with different outlet lengths. The parent droplet is divided into two uneven portions; the volumetric ratio of the daughter droplets, in theory, depends on the length ratio of the outlet branches. The study identified various breakup modes, such as primary, transition, bubble, and non-breakup, under various flow conditions and T-junction configurations. In addition, an analysis of the primary breakup regime was conducted to study the breakup mechanisms. The results show that the way a droplet splits in an asymmetric T-junction differs from the process in a symmetric T-junction. A model for the asymmetric breakup criteria at intermediate to large Capillary numbers is presented. The proposed model is an expanded version of a theoretically derived model for symmetric droplet breakup under similar flow conditions.
Growth of equilibrium structures built from a large number of distinct component types.
Hedges, Lester O; Mannige, Ranjan V; Whitelam, Stephen
2014-09-14
We use simple analytic arguments and lattice-based computer simulations to study the growth of structures made from a large number of distinct component types. Components possess 'designed' interactions, chosen to stabilize an equilibrium target structure in which each component type has a defined spatial position, as well as 'undesigned' interactions that allow components to bind in a compositionally-disordered way. We find that high-fidelity growth of the equilibrium target structure can happen in the presence of substantial attractive undesigned interactions, as long as the energy scale of the set of designed interactions is chosen appropriately. This observation may help explain why equilibrium DNA 'brick' structures self-assemble even if undesigned interactions are not suppressed [Ke et al. Science, 338, 1177, (2012)]. We also find that high-fidelity growth of the target structure is most probable when designed interactions are drawn from a distribution that is as narrow as possible. We use this result to suggest how to choose complementary DNA sequences in order to maximize the fidelity of multicomponent self-assembly mediated by DNA. We also comment on the prospect of growing macroscopic structures in this manner.
Source of vacuum electromagnetic zero-point energy and Dirac's large numbers hypothesis
International Nuclear Information System (INIS)
Simaciu, I.; Dumitrescu, G.
1993-01-01
The stochastic electrodynamics states that the zero-point fluctuation of the vacuum (ZPF) is an electromagnetic zero-point radiation with spectral density ρ(ω) = ℏω³/(2π²c³). Protons, free electrons and atoms are sources for this radiation: each of them absorbs and emits energy by interacting with the ZPF. At equilibrium the ZPF radiation is scattered by dipoles, with scattered spectral density ρ(ω, r) = ρ(ω)·c·σ(ω)/(4πr²). The dipole-radiation spectral density of the Universe is ρ = ∫₀^R n ρ(ω, r) 4πr² dr. But if σ_atom ≈ σ_e ≈ σ_T, then ρ ≈ ρ(ω)·σ_T·R·n. Moreover, if ρ = ρ(ω), then σ_T·R·n = 1. With R = GM/c² and σ_T ≅ (e²/m_e c²)² ∝ r_e², the relation σ_T·R·n = 1 is equivalent to R/r_e = e²/(G m_p m_e), i.e. the cosmological coincidence discussed in the context of Dirac's large-numbers hypothesis. (Author)
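The coincidence is easy to check numerically. The snippet below uses standard CGS-Gaussian constants; the Hubble-radius value is an illustrative round number, since the abstract does not specify one:

```python
# Order-of-magnitude check of the Dirac large-number coincidence (CGS-Gaussian).
G = 6.674e-8           # gravitational constant, cm^3 g^-1 s^-2
m_p = 1.6726e-24       # proton mass, g
m_e = 9.109e-28        # electron mass, g
e2 = (4.803e-10) ** 2  # electron charge squared, esu^2
r_e = 2.818e-13        # classical electron radius, cm
R = 1.3e28             # Hubble radius, cm (illustrative value)

electric_to_gravity = e2 / (G * m_p * m_e)  # electric/gravitational force ratio, ~2.3e39
cosmic_to_classical = R / r_e               # cosmic/classical length ratio, ~4.6e40
```

Both dimensionless ratios come out near 10^39-10^40, which is the "coincidence" Dirac's hypothesis addresses.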
Baker, Michael G; Zhang, Jane; Blakely, Tony; Crane, Julian; Saville-Smith, Kay; Howden-Chapman, Philippa
2016-02-16
Despite the importance of adequate, un-crowded housing as a prerequisite for good health, few large cohort studies have explored the health effects of housing conditions. The Social Housing Outcomes Worth (SHOW) Study was established to assess the relationship between housing conditions and health, particularly between household crowding and infectious diseases. This paper reports on the methods and feasibility of using a large administrative housing database for epidemiological research and on the characteristics of the social housing population. This prospective open cohort study was established in 2003 in collaboration with Housing New Zealand Corporation, which provides housing for approximately 5% of the population. The Study measures health outcomes using linked anonymised hospitalisation and mortality records provided by the New Zealand Ministry of Health. It was possible to match the majority (96%) of applicant and tenant household members with their National Health Index (NHI) number, allowing linkage to anonymised coded data on their hospitalisations and mortality. By December 2011, the study population consisted of 11,196 applicants and 196,612 tenants. Half were less than 21 years of age. About two-thirds identified as Māori or Pacific ethnicity. Household incomes were low. Of tenant households, 44% contained one or more smokers, compared with 33% for New Zealand as a whole. Exposure to household crowding, as measured by a deficit of one or more bedrooms, was common for applicants (52%) and tenants (38%) compared with New Zealanders as a whole (10%). This project has shown that an administrative housing database can be used to form a large cohort population and successfully link cohort members to their health records in a way that meets confidentiality and ethical requirements. This study also confirms that social housing tenants are a highly deprived population with relatively low incomes and high levels of exposure to household crowding and environmental
On the chromatic number of triangle-free graphs of large minimum degree
DEFF Research Database (Denmark)
Thomassen, Carsten
2002-01-01
We prove that, for each fixed real number c > 1/3, the triangle-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdős and Simonovits in 1973, who pointed out that there is no such result for c < 1/3.
On the chromatic number of pentagon-free graphs of large minimum degree
DEFF Research Database (Denmark)
Thomassen, Carsten
2007-01-01
We prove that, for each fixed real number c > 0, the pentagon-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdős and Simonovits in 1973. A similar result holds for any other fixed odd cycle, except the triangle.
Gilbert, Jack A; Field, Dawn; Huang, Ying; Edwards, Rob; Li, Weizhong; Gilna, Paul; Joint, Ian
2008-08-22
Sequencing the expressed genetic information of an ecosystem (metatranscriptome) can provide information about the response of organisms to varying environmental conditions. Until recently, metatranscriptomics has been limited to microarray technology and random cloning methodologies. The application of high-throughput sequencing technology is now enabling access to both known and previously unknown transcripts in natural communities. We present a study of a complex marine metatranscriptome obtained from random whole-community mRNA using the GS-FLX Pyrosequencing technology. Eight samples, four DNA and four mRNA, were processed from two time points in a controlled coastal ocean mesocosm study (Bergen, Norway) involving an induced phytoplankton bloom producing a total of 323,161,989 base pairs. Our study confirms the finding of the first published metatranscriptomic studies of marine and soil environments that metatranscriptomics targets highly expressed sequences which are frequently novel. Our alternative methodology increases the range of experimental options available for conducting such studies and is characterized by an exceptional enrichment of mRNA (99.92%) versus ribosomal RNA. Analysis of corresponding metagenomes confirms much higher levels of assembly in the metatranscriptomic samples and a far higher yield of large gene families with >100 members, approximately 91% of which were novel. This study provides further evidence that metatranscriptomic studies of natural microbial communities are not only feasible, but when paired with metagenomic data sets, offer an unprecedented opportunity to explore both structure and function of microbial communities--if we can overcome the challenges of elucidating the functions of so many never-seen-before gene families.
The Love of Large Numbers: A Popularity Bias in Consumer Choice.
Powell, Derek; Yu, Jingqi; DeWolf, Melissa; Holyoak, Keith J
2017-10-01
Social learning, the ability to learn from observing the decisions of other people and the outcomes of those decisions, is fundamental to human evolutionary and cultural success. The Internet now provides social evidence on an unprecedented scale. However, properly utilizing this evidence requires a capacity for statistical inference. We examined how people's interpretation of online review scores is influenced by the number of reviews, a potential indicator both of an item's popularity and of the precision of the average review score. Our task was designed to pit statistical information against social information. We modeled the behavior of an "intuitive statistician" using empirical prior information from millions of reviews posted on Amazon.com and then compared the model's predictions with the behavior of experimental participants. Under certain conditions, people preferred a product with more reviews to one with fewer reviews even though the statistical model indicated that the latter was likely to be of higher quality than the former. Overall, participants' judgments suggested that they failed to make meaningful statistical inferences.
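The shrinkage logic behind such an "intuitive statistician" can be illustrated with a simple pseudo-count model (a hypothetical sketch with made-up prior values, not the authors' empirical Amazon-derived model): an observed average score is pulled toward a prior mean, and the pull weakens as reviews accumulate.

```python
def shrunk_score(mean_score, n_reviews, prior_mean=4.0, prior_strength=10):
    """Posterior-mean-style estimate of item quality.

    The observed average is blended with a prior mean; prior_strength acts
    as a pseudo-count of 'phantom' reviews. Both defaults are illustrative,
    not values taken from the study.
    """
    return (prior_strength * prior_mean + n_reviews * mean_score) / (
        prior_strength + n_reviews
    )

# A 5.0-star item with 3 reviews vs. a 4.5-star item with 500 reviews:
print(shrunk_score(5.0, 3))    # pulled strongly toward the prior
print(shrunk_score(4.5, 500))  # stays close to its observed average
```

Under this kind of model, the sparsely reviewed 5.0-star item can come out *below* the heavily reviewed 4.5-star item, which is exactly the statistical judgment the participants tended not to make.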
Providing the Public with Online Access to Large Bibliographic Data Bases.
Firschein, Oscar; Summit, Roger K.
DIALOG, an interactive, computer-based information retrieval language, consists of a series of computer programs designed to make use of direct access memory devices in order to provide the user with a rapid means of identifying records within a specific memory bank. Using the system, a library user can be provided access to sixteen distinct and…
On the Behavior of ECN/RED Gateways Under a Large Number of TCP Flows: Limit Theorems
National Research Council Canada - National Science Library
Tinnakornsrisuphap, Peerapol; Makowski, Armand M
2005-01-01
.... As the number of competing flows becomes large, the asymptotic queue behavior at the gateway can be described by a simple recursion and the throughput behavior of individual TCP flows becomes asymptotically independent...
Directory of Open Access Journals (Sweden)
Margareth Regina Dibo
2013-07-01
Full Text Available Introduction Here, we evaluated sweeping methods used to estimate the number of immature Aedes aegypti in large containers. Methods III/IV instars and pupae at a 9:1 ratio were placed in three types of containers, each with three different water levels. Two sweeping methods were tested: water-surface sweeping and five-sweep netting. The data were analyzed using linear regression. Results The five-sweep netting technique was more suitable for drums and water tanks, while the water-surface sweeping method provided the best results for swimming pools. Conclusions Both sweeping methods are useful tools in epidemiological surveillance programs for the control of Aedes aegypti.
Krahulcová, Anna; Trávnícek, Pavel; Krahulec, František; Rejmánek, Marcel
2017-04-01
Aesculus L. (horse chestnut, buckeye) is a genus of 12-19 extant woody species native to the temperate Northern Hemisphere. This genus is known for unusually large seeds among angiosperms. While chromosome counts are available for many Aesculus species, only one has had its genome size measured. The aim of this study is to provide more genome size data and analyse the relationship between genome size and seed mass in this genus. Chromosome numbers in root tip cuttings were confirmed for four species and reported for the first time for three additional species. Flow cytometric measurements of 2C nuclear DNA values were conducted on eight species, and mean seed mass values were estimated for the same taxa. The same chromosome number, 2n = 40, was determined in all investigated taxa. Original measurements of 2C values for seven Aesculus species (eight taxa), added to just one reliable datum for A. hippocastanum, confirmed the notion that the genome size in this genus with relatively large seeds is surprisingly low, ranging from 0·955 pg/2C in A. parviflora to 1·275 pg/2C in A. glabra var. glabra. The chromosome number of 2n = 40 seems to be conclusively the universal 2n number for non-hybrid species in this genus. Aesculus genome sizes are relatively small, not only within its own family, Sapindaceae, but also within woody angiosperms. The genome sizes seem to be distinct and non-overlapping among the four major Aesculus clades. These results provide extra support for the most recent reconstruction of Aesculus phylogeny. The correlation between the 2C values and seed masses in the examined Aesculus species is slightly negative and not significant. However, when the four major clades are treated separately, there is a consistent positive association between larger genome size and larger seed mass within individual lineages. © The Author 2017. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For
The Efficiency and Productivity Analysis of Large Logistics Providers Services in Korea
Directory of Open Access Journals (Sweden)
Hong Gyun Park
2015-12-01
Full Text Available In the fierce competition in the global logistics markets, Korean logistics providers were deemed more vulnerable than global logistics providers in terms of quality and price competitiveness. To strengthen their competitiveness, logistics providers in Korea have focused on delivering integrated logistics services. In this regard, the Korean government enacted the "Integrated Logistics Industry Certification Act" in 2006 to assist integrated logistics providers in offering logistics services based on their specialization and differentiation. Several years have passed since the system was implemented, and an evaluation of its implementation was necessary. Hence, in our study, we examine the efficiency and productivity of fourteen certified Korean logistics providers using the DEA (Data Envelopment Analysis) method with five-year panel data since the inception of the Act. Through our static and dynamic analyses, we found that Pantos Logistics and HYUNDAI Glovis run their businesses at the highest level of efficiency, and that Hanjin Transportation was the most stable company in its logistics operations.
DEFF Research Database (Denmark)
Jensen, Michael Vincent; Walther, Jens Honore
2013-01-01
was investigated at a jet Reynolds number of 1.66 × 105 and a temperature difference between jet inlet and wall of 1600 K. The focus was on the convective heat transfer contribution as thermal radiation was not included in the investigation. A considerable influence of the turbulence intensity at the jet inlet...... to about 100% were observed. Furthermore, the variation in stagnation point heat transfer was examined for jet Reynolds numbers in the range from 1.10 × 105 to 6.64 × 105. Based on the investigations, a correlation is suggested between the stagnation point Nusselt number, the jet Reynolds number......, and the turbulence intensity at the jet inlet for impinging jet flows at high jet Reynolds numbers. Copyright © 2013 Taylor and Francis Group, LLC....
Large-eddy simulation of flow over a grooved cylinder up to transcritical Reynolds numbers
Cheng, W.
2017-11-27
We report wall-resolved large-eddy simulation (LES) of flow over a grooved cylinder up to the transcritical regime. The stretched-vortex subgrid-scale model is embedded in a general fourth-order finite-difference code discretization on a curvilinear mesh. In the present study grooves are equally distributed around the circumference of the cylinder, each of sinusoidal shape with height ε, invariant in the spanwise direction. Based on the two parameters ε/D and the Reynolds number Re_D = U∞D/ν, where U∞ is the free-stream velocity, D the diameter of the cylinder and ν the kinematic viscosity, two main sets of simulations are described. The first set varies ε/D while fixing Re_D. We study the flow deviation from the smooth-cylinder case, with emphasis on several important statistics such as the length of the mean-flow recirculation bubble L_B, the pressure coefficient C_p, the skin-friction coefficient C_f and the non-dimensional pressure-gradient parameter. It is found that, with increasing ε/D at fixed Re_D, some properties of the mean flow behave somewhat similarly to changes in the smooth-cylinder flow when Re_D is increased. This includes a shrinking L_B and a nearly constant minimum pressure coefficient. In contrast, while the non-dimensional pressure-gradient parameter remains nearly constant for the front part of the smooth-cylinder flow, it shows an oscillatory variation for the grooved-cylinder case. The second main set of LES varies Re_D with fixed ε/D. It is found that this range spans the subcritical and supercritical regimes and reaches the beginning of the transcritical flow regime. Mean-flow properties are diagnosed and compared with available experimental data, including C_p and the drag coefficient. The timewise variation of the lift and drag coefficients is also studied to elucidate the transition among the three regimes. Instantaneous images of the surface skin-friction vector field and of the three-dimensional Q-criterion field are utilized to further understand the dynamics of the near-surface flow
A Recommender System for an IPTV Service Provider: a Real Large-Scale Production Environment
Bambini, Riccardo; Cremonesi, Paolo; Turrin, Roberto
In this chapter we describe the integration of a recommender system into the production environment of Fastweb, one of the largest European IP Television (IPTV) providers. The recommender system implements both collaborative and content-based techniques, suitably tailored to the specific requirements of an IPTV architecture, such as the limited screen definition, the reduced navigation capabilities, and the strict time constraints. The algorithms are extensively analyzed by means of off-line and on-line tests, showing the effectiveness of the recommender system: up to 30% of the recommendations are followed by a purchase, with an estimated lift factor (increase in sales) of 15%.
International Nuclear Information System (INIS)
Caldirola, P.; Recami, E.
1978-01-01
By assuming covariance of physical laws under (discrete) dilatations, strong and gravitational interactions have been described in a unified way. In terms of the (additional, discrete) ''dilatational'' degree of freedom, our cosmos as well as hadrons can be considered as different states of the same system, or rather as similar systems. Moreover, a discrete hierarchy can be defined of ''universes'' which are governed by force fields with strengths inversely proportional to the ''universe'' radii. Inside each ''universe'' an equivalence principle holds, so that its characteristic field can be geometrized there. It is thus easy to derive a whole ''numerology'', i.e. relations among numbers analogous to the so-called Weyl-Eddington-Dirac ''large numbers''. For instance, the ''Planck mass'' happens to be nothing but the (average) magnitude of the strong charge of the hadron quarks. However, our ''numerology'' connects the (gravitational) macrocosmos with the (strong) microcosmos, rather than with the electromagnetic ones (as, e.g., in Dirac's version). Einstein-type scaled equations (with ''cosmological'' term) are suggested for the hadron interior, which - incidentally - yield a (classical) quark confinement in a very natural way and are compatible with ''asymptotic freedom''. Finally, within a ''bi-scale'' theory, further equations are proposed that provide a priori a classical field theory of strong interactions (between different hadrons). The relevant sections are 5.2, 7 and 8. (author)
Hooshyar, Milad; Wang, Dingbao
2016-08-01
The empirical proportionality relationship, which states that the ratios of cumulative surface runoff and infiltration to their corresponding potentials are equal, is the basis of the extensively used Soil Conservation Service Curve Number (SCS-CN) method. The objective of this paper is to provide the physical basis of the SCS-CN method and its proportionality hypothesis from the infiltration-excess runoff generation perspective. To achieve this purpose, an analytical solution of Richards' equation is derived for ponded infiltration in a shallow water table environment under the following boundary conditions: (1) the soil is saturated at the land surface; and (2) there is a no-flux boundary which moves downward. The solution is established based on the assumptions of negligible gravitational effect, constant soil water diffusivity, and a hydrostatic soil moisture profile between the no-flux boundary and the water table. Based on the derived analytical solution, the proportionality hypothesis is a reasonable approximation for rainfall partitioning at the early stage of ponded infiltration in areas with a shallow water table and coarse-textured soils.
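For reference, the standard curve-number relation that embodies this proportionality hypothesis can be written out directly. The sketch below uses the conventional US-customary form with the usual Ia = 0.2S convention; the input values in the comments are illustrative, not from the paper.

```python
def scs_runoff(P, CN, ia_ratio=0.2):
    """Direct runoff Q (inches) from rainfall P (inches) via the SCS-CN method.

    S  = potential maximum retention, S = 1000/CN - 10
    Ia = initial abstraction, conventionally Ia = 0.2 * S
    Q  = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0
    This form follows from the proportionality hypothesis Q/(P - Ia) = F/S,
    with F the cumulative infiltration after runoff begins.
    """
    S = 1000.0 / CN - 10.0
    Ia = ia_ratio * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)
```

For example, with P = 4.0 in and CN = 80, S = 2.5 in, Ia = 0.5 in, and Q = 3.5² / 6.0 ≈ 2.04 in; rainfall below Ia produces no runoff at all.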
Large Eddy Simulation of an SD7003 Airfoil: Effects of Reynolds number and Subgrid-scale modeling
DEFF Research Database (Denmark)
Sarlak Chivaee, Hamid
2017-01-01
This paper presents results of a series of numerical simulations in order to study the aerodynamic characteristics of the low-Reynolds-number Selig-Donovan airfoil, SD7003. The Large Eddy Simulation (LES) technique is used for all computations at chord-based Reynolds numbers of 10,000, 24,000 and 60...... the Reynolds number, and the effect is visible even at a relatively low chord Reynolds number of 60,000. Among the tested models, the dynamic Smagorinsky model gives the poorest predictions of the flow, with overprediction of lift and a larger separation on the airfoil's suction side. Among various models, the implicit
A large-scale survey of genetic copy number variations among Han Chinese residing in Taiwan
Directory of Open Access Journals (Sweden)
Wu Jer-Yuarn
2008-12-01
Full Text Available Abstract Background Copy number variations (CNVs) have recently been recognized as important structural variations in the human genome. CNVs can affect gene expression and thus may contribute to phenotypic differences. The copy number inferring tool (CNIT) is an effective hidden Markov model-based algorithm for estimating allele-specific copy number and predicting chromosomal alterations from single nucleotide polymorphism microarrays. The CNIT algorithm, which was constructed using data from 270 HapMap multi-ethnic individuals, was applied to identify CNVs from 300 unrelated Han Chinese individuals in Taiwan. Results Using stringent selection criteria, 230 regions with variable copy numbers were identified in the Han Chinese population; 133 (57.83%) had been reported previously, and 64 displayed a CNV allele frequency greater than 1%. The average size of the CNV regions was 322 kb (ranging from 1.48 kb to 5.68 Mb), and they covered a total of 2.47% of the human genome. A total of 196 of the CNV regions were simple deletions and 27 were simple amplifications. There were 449 genes and 5 microRNAs within these CNV regions; some of these genes are known to be associated with diseases. Conclusion The identified CNVs are characteristic of the Han Chinese population and should be considered when genetic studies are conducted. The CNV distribution in the human genome is still poorly characterized, and there is much diversity among different ethnic populations.
Forman, Ruth; Bramhall, Michael; Logunova, Larisa; Svensson-Frej, Marcus; Cruickshank, Sheena M; Else, Kathryn J
2016-05-31
Eosinophils are innate immune cells present in the intestine during steady state conditions. An intestinal eosinophilia is a hallmark of many infections, and an accumulation of eosinophils is also observed in the intestine during inflammatory disorders. Classically, the function of eosinophils has been associated with tissue destruction, due to the release of cytotoxic granule contents. However, recent evidence has demonstrated that the eosinophil plays a more diverse role in the immune system than previously acknowledged, including shaping adaptive immune responses and providing plasma cell survival factors during the steady state. Importantly, it is known that there are regional differences in the underlying immunology of the small and large intestine, but whether there are differences in the context of the intestinal eosinophil in the steady state or inflammation is not known. Our data demonstrate that there are fewer IgA(+) plasma cells in the small intestine of eosinophil-deficient ΔdblGATA-1 mice compared to eosinophil-sufficient wild-type mice, with the difference becoming significant post-infection with Toxoplasma gondii. Remarkably, and in complete contrast, the absence of eosinophils in the inflamed large intestine does not impact IgA(+) cell numbers during steady state, and is associated with a significant increase in IgA(+) cells post-infection with Trichuris muris compared to wild-type mice. Thus, the intestinal eosinophil appears to be less important in sustaining the IgA(+) cell pool in the large intestine compared to the small intestine, and in fact, our data suggest eosinophils play an inhibitory role. The dichotomy in the influence of the eosinophil over small and large intestinal IgA(+) cells did not depend on differences in plasma cell growth factors, recruitment potential or proliferation within the different regions of the gastrointestinal tract (GIT). We demonstrate for the first time that there are regional differences in the requirement of
Q-factorial Gorenstein toric Fano varieties with large Picard number
DEFF Research Database (Denmark)
Nill, Benjamin; Øbro, Mikkel
2010-01-01
In dimension $d$, ${\boldsymbol Q}$-factorial Gorenstein toric Fano varieties with Picard number $\rho_X$ correspond to simplicial reflexive polytopes with $\rho_X + d$ vertices. Casagrande showed that any $d$-dimensional simplicial reflexive polytope has at most $3d$ and $3d-1$ vertices if $d$ is even and odd, respectively. Moreover, for $d$ even there is up to unimodular equivalence only one such polytope with $3d$ vertices, corresponding to the product of $d/2$ copies of a del Pezzo surface of degree six. In this paper we completely classify all $d$-dimensional simplicial reflexive polytopes...... having $3d-1$ vertices, corresponding to $d$-dimensional ${\boldsymbol Q}$-factorial Gorenstein toric Fano varieties with Picard number $2d-1$. For $d$ even, there exist three such varieties, with two being singular, while for $d > 1$ odd there exist precisely two, both being nonsingular toric fiber......
A comment on "bats killed in large numbers at United States wind energy facilities"
Huso, Manuela M.P.; Dalthorp, Dan
2014-01-01
Widespread reports of bat fatalities caused by wind turbines have raised concerns about the impacts of wind power development. Reliable estimates of the total number killed and the potential effects on populations are needed, but it is crucial that they be based on sound data. In a recent BioScience article, Hayes (2013) estimated that over 600,000 bats were killed at wind turbines in the United States in 2012. The scientific errors in the analysis are numerous, with the two most serious being that the included sites constituted a convenience sample, not a representative sample, and that the individual site estimates are derived from such different methodologies that they are inherently not comparable. This estimate is almost certainly inaccurate, but whether the actual number is much smaller, much larger, or about the same is uncertain. An accurate estimate of total bat fatality is not currently possible, given the shortcomings of the available data.
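The convenience-sample problem the authors raise can be made concrete with a toy calculation (all numbers invented for illustration): extrapolating the mean of a non-representative sample to every site inflates the total whenever high-fatality sites are over-represented among the monitored ones.

```python
def naive_total(sampled_rates, n_sites):
    """Extrapolate the mean per-site fatality rate of the sampled sites
    to all n_sites. This implicitly assumes the sample is representative,
    which a convenience sample generally is not."""
    return sum(sampled_rates) / len(sampled_rates) * n_sites

# Hypothetical landscape: 100 sites, of which 10 are high-fatality
# (50 bats/yr each) and 90 are low-fatality (5 bats/yr each).
true_total = 10 * 50 + 90 * 5  # 950 bats/yr

# A convenience sample of 10 sites that over-represents high-fatality sites:
biased_sample = [50] * 8 + [5] * 2
print(true_total, naive_total(biased_sample, 100))
```

Here the naive extrapolation overestimates the true total more than fourfold; with the bias reversed, it would underestimate instead, which is why the direction of Hayes's error is itself uncertain.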
Large scale Direct Numerical Simulation of premixed turbulent jet flames at high Reynolds number
Attili, Antonio; Luca, Stefano; Lo Schiavo, Ermanno; Bisetti, Fabrizio; Creta, Francesco
2016-11-01
A set of direct numerical simulations of turbulent premixed jet flames at different Reynolds and Karlovitz numbers is presented. The simulations feature finite-rate chemistry with 16 species and 73 reactions and up to 22 billion grid points. The jet consists of a methane/air mixture with equivalence ratio ϕ = 0.7 and temperature varying between 500 and 800 K. The temperature and species concentrations in the coflow correspond to the equilibrium state of the burnt mixture. All the simulations are performed at 4 atm. The flame length, normalized by the jet width, decreases significantly as the Reynolds number increases. This is consistent with an increase of the turbulent flame speed due to the increased integral scale of turbulence. This behavior is typical of flames in the thin-reaction-zone regime, which are affected by turbulent transport in the preheat layer. Fractal dimension and topology of the flame surface, statistics of temperature gradients, and flame structure are investigated and the dependence of these quantities on the Reynolds number is assessed.
Efficient high speed communications over electrical powerlines for a large number of users
Energy Technology Data Exchange (ETDEWEB)
Lee, J.; Tripathi, K.; Latchman, H.A. [Florida Univ., Gainesville, FL (United States). Dept. of Electrical and Computer Engineering
2007-07-01
Affordable broadband Internet communication is currently available for residential use via cable modem and other forms of digital subscriber line (DSL). Powerline communication (PLC) systems were never considered seriously for communications due to their low speed and high development cost. However, thanks to technological advances, PLC is now spreading to local area networks and broadband-over-power-line systems. This paper presented a newly proposed modification of the standard HomePlug 1.0 medium access control (MAC) protocol that makes it a constant-contention-window-based scheme. HomePlug 1.0 was developed based on orthogonal frequency division multiplexing (OFDM) and carrier sense multiple access with collision avoidance (CSMA/CA). It is currently the most commonly used power line communication technology, supporting a transmission rate of up to 14 Mbps on the power line. However, the throughput performance of the original scheme degrades as the number of users increases. For that reason, a constant-contention-window-based MAC protocol for HomePlug 1.0 was proposed under the assumption that the number of active stations is known. An analytical framework based on Markov chains was developed to model this modified protocol under saturation conditions. Modeling results accurately matched the actual performance of the system. This paper revealed that the performance can be improved significantly if the protocol variables are parameterized in terms of the number of active stations. 15 refs., 1 tab., 6 figs.
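Why a fixed contention window collapses as the station count grows, and why parameterizing it by the number of active stations helps, can be sketched with the classic slotted-contention approximation (a textbook-style estimate under saturation, not the paper's Markov-chain model; τ ≈ 2/(W+1) is the usual approximation for a uniform backoff window):

```python
def slot_success_probability(n, W):
    """Probability that a slot carries exactly one transmission when n
    saturated stations each use a fixed contention window W.

    tau approximates the per-slot transmit probability of one station;
    a slot succeeds when exactly one of the n stations transmits.
    """
    tau = 2.0 / (W + 1)
    return n * tau * (1.0 - tau) ** (n - 1)

# With a fixed window, efficiency collapses as n grows, while a window
# scaled with the (known) number of active stations stays near its peak:
for n in (5, 20, 80):
    fixed = slot_success_probability(n, W=32)
    scaled = slot_success_probability(n, W=2 * n)
    print(n, round(fixed, 3), round(scaled, 3))
```

With W = 32, the success probability is reasonable at 5 stations but falls off sharply by 80 stations, whereas scaling W with n holds it near the 1/e-ish optimum, which mirrors the motivation for the proposed modification.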
Mohammed, Ali Ibrahim Ali
The understanding and treatment of brain disorders, as well as the development of intelligent machines, is hampered by the lack of knowledge of how the brain fundamentally functions. Over the past century, we have learned much about how individual neurons and neural networks behave; however, new tools are critically needed to interrogate how neural networks give rise to complex brain processes and disease conditions. Recent innovations in molecular techniques, such as optogenetics, have given neuroscientists unprecedented precision to excite, inhibit and record defined neurons. The impressive sensitivity of currently available optogenetic sensors and actuators has now made it possible to analyze a large number of individual neurons in the brains of behaving animals. To promote the use of these optogenetic tools, this thesis integrates cutting-edge optogenetic molecular sensors, which are ultrasensitive for imaging neuronal activity, with a custom wide-field optical microscope to analyze a large number of individual neurons in living brains. Wide-field microscopy provides a large field of view and spatial resolution approaching the Abbe diffraction limit of the fluorescence microscope. To demonstrate the advantages of this optical platform, we imaged a deep brain structure, the hippocampus, and tracked hundreds of neurons over time while a mouse was performing a memory task, to investigate how those individual neurons related to behavior. In addition, we tested our optical platform by investigating transient neural network changes upon mechanical perturbation related to blast injuries. In this experiment, all blast-exposed mice showed a consistent change in the neural network: a small portion of neurons showed a sustained calcium increase for an extended period of time, whereas the majority lost their activity. Finally, using an optogenetic silencer to control selected motor cortex neurons, we examined their contributions to the network pathology of the basal ganglia related to
Detailed Measurements of Rayleigh-Taylor Mixing at Large and Small Atwood Numbers
International Nuclear Information System (INIS)
Andrews, Malcolm J., Ph.D.
2004-01-01
This project has two major tasks: Task 1. The construction of a new air/helium facility to collect detailed measurements of Rayleigh-Taylor (RT) mixing at high Atwood number, and the distribution of these data to LLNL, LANL, and Alliance members for code validation and design purposes. Task 2. The collection of initial condition data from the new air/helium facility, for use with validation of RT simulation codes at LLNL and LANL. Also, studies of multi-layer mixing with the existing water channel facility. Over the last twelve (12) months there has been excellent progress, detailed in this report, with both tasks. As of December 10, 2004, the air/helium facility is complete and extensive testing and validation of diagnostics has been performed. Currently, experiments with air/helium up to Atwood numbers of 0.25 (the maximum is 0.75, but the highest Reynolds numbers are at 0.25) are being performed. The progress matches the project plan, as does the budget, and we expect this to continue for 2005. With interest expressed from LLNL we have continued with initial condition studies using the water channel. This work has also progressed well, with one of the graduate Research Assistants (Mr. Nick Mueschke) visiting LLNL the past two summers to work with Dr. O. Schilling. Several journal papers that describe the work are in preparation. Two M.Sc. degrees have been completed (Mr. Nick Mueschke and Mr. Wayne Kraft, 12/1/03). Nick and Wayne are both pursuing Ph.D.s funded by this DOE Alliances project. Presently three (3) Ph.D. graduate Research Assistants are supported on the project, and two (2) undergraduate Research Assistants. During the year two (2) journal papers and two (2) conference papers have been published, ten (10) presentations made at conferences, and three (3) invited presentations
Mapping Ad Hoc Communications Network of a Large Number Fixed-Wing UAV Swarm
2017-03-01
shows like "Agents of S.H.I.E.L.D.". Inspiration can come from the imaginative minds of people or from the world around us. Swarms have demonstrated a...high degree of success. Bees, ants, termites, and naked mole rats maintain large groups that distribute tasks among individuals in order to achieve...the application layer and not the transport layer. Real-world vehicle-to-vehicle packet delivery rates for the 50-UAV swarm event were described in
Analyzing the Large Number of Variables in Biomedical and Satellite Imagery
Good, Phillip I
2011-01-01
This book grew out of an online interactive course offered through statcourse.com, and it soon became apparent to the author that the course was too limited in terms of time and length in light of the broad backgrounds of the enrolled students. The statisticians who took the course needed to be brought up to speed both on the biological context and on the specialized statistical methods needed to handle large arrays. Biologists and physicians, even though fully knowledgeable concerning the procedures used to generate microarrays, EEGs, or MRIs, needed a full introduction to the resampling met
International Nuclear Information System (INIS)
Lee, Hwang; Kok, Pieter; Dowling, Jonathan P.; Cerf, Nicolas J.
2002-01-01
We propose a method for preparing maximal path entanglement with a definite photon number N, larger than two, using projective measurements. In contrast with previously known schemes, our method uses only linear optics. Specifically, we exhibit a way of generating four-photon, path-entangled states of the form |4,0⟩ + |0,4⟩, using only four beam splitters and two detectors. These states are of major interest as a resource for quantum interferometric sensors as well as for optical quantum lithography and quantum holography
Ji, H.; Burin, M.; Schartman, E.; Goodman, J.; Liu, W.
2006-01-01
Two plausible mechanisms have been proposed to explain rapid angular momentum transport during accretion processes in astrophysical disks: nonlinear hydrodynamic instabilities and magnetorotational instability (MRI). A laboratory experiment in a short Taylor-Couette flow geometry has been constructed in Princeton to study both mechanisms, with novel features for better controls of the boundary-driven secondary flows (Ekman circulation). Initial results on hydrodynamic stability have shown negligible angular momentum transport in Keplerian-like flows with Reynolds numbers approaching one million, casting strong doubt on the viability of nonlinear hydrodynamic instability as a source for accretion disk turbulence.
EUPAN enables pan-genome studies of a large number of eukaryotic genomes.
Hu, Zhiqiang; Sun, Chen; Lu, Kuang-Chen; Chu, Xixia; Zhao, Yue; Lu, Jinyuan; Shi, Jianxin; Wei, Chaochun
2017-08-01
Pan-genome analyses are routinely carried out for bacteria to interpret within-species gene presence/absence variations (PAVs). However, pan-genome analyses are rare for eukaryotes due to the large sizes and higher complexities of their genomes. Here we propose EUPAN, a eukaryotic pan-genome analysis toolkit enabling automatic large-scale eukaryotic pan-genome analyses and detection of gene PAVs at a relatively low sequencing depth. In previous studies, we demonstrated the effectiveness and high accuracy of EUPAN in the pan-genome analysis of 453 rice genomes, in which we also revealed widespread gene PAVs among individual rice genomes. Moreover, EUPAN can be directly applied to current re-sequencing projects primarily focusing on single nucleotide polymorphisms. EUPAN is implemented in Perl, R and C++. It is supported under Linux and preferred for a computer cluster with an LSF or SLURM job scheduling system. EUPAN together with its standard operating procedure (SOP) is freely available for non-commercial use (CC BY-NC 4.0) at http://cgm.sjtu.edu.cn/eupan/index.html . ccwei@sjtu.edu.cn or jianxin.shi@sjtu.edu.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
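The bookkeeping behind gene PAV analysis reduces to set operations over a presence/absence matrix: the pan-genome is the union of gene sets across genomes, the core genome their intersection. A minimal sketch with toy data (this is the general idea only, not EUPAN's actual pipeline):

```python
def pan_core_sizes(pav):
    """Given a presence/absence matrix {genome: set(genes)}, return the
    pan-genome (union) and core-genome (intersection) gene counts."""
    genomes = list(pav.values())
    pan = set().union(*genomes)           # genes present in any genome
    core = set.intersection(*genomes)     # genes present in every genome
    return len(pan), len(core)

# Toy matrix for three hypothetical genomes:
pav = {
    "g1": {"A", "B", "C"},
    "g2": {"A", "B", "D"},
    "g3": {"A", "C", "D"},
}
print(pan_core_sizes(pav))  # → (4, 1)
```

Genes B, C and D are dispensable (present in some but not all genomes); these per-gene PAV calls are exactly what grows interesting at the scale of hundreds of rice genomes.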
Number of deaths due to lung diseases: How large is the problem?
International Nuclear Information System (INIS)
Wagener, D.K.
1990-01-01
The importance of lung disease as an indicator of environmentally induced adverse health effects has been recognized by inclusion among the Health Objectives for the Nation. The 1990 Health Objectives for the Nation (US Department of Health and Human Services, 1986) include an objective that there should be virtually no new cases among newly exposed workers for four preventable occupational lung diseases: asbestosis, byssinosis, silicosis, and coal workers' pneumoconiosis. This brief communication describes two types of cause-of-death statistics, underlying cause and multiple cause, and demonstrates the differences between the two statistics using lung disease deaths among adult men. The choice of statistic has a large impact on estimated lung disease mortality rates. It may also have a large effect on the estimated mortality rates for other chronic diseases thought to be environmentally mediated. Issues of comorbidity and the way causes of death are reported become important in the interpretation of these statistics. The choice of which statistic to use when comparing data from a study population with national statistics may greatly affect the interpretation of the study findings.
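The gap between the two statistics can be shown with a small tally over hypothetical death records: an underlying-cause count credits only the first-listed cause of each death, while a multiple-cause count credits every condition mentioned on the certificate, so a disease that frequently appears as a contributing cause gets a much larger multiple-cause count.

```python
def death_counts(records):
    """Tally deaths by underlying cause vs. multiple-cause mentions.

    Each record is a list of causes with the underlying cause first,
    followed by any contributing causes (hypothetical example data).
    """
    underlying, multiple = {}, {}
    for causes in records:
        underlying[causes[0]] = underlying.get(causes[0], 0) + 1
        for c in set(causes):  # each cause counted once per death
            multiple[c] = multiple.get(c, 0) + 1
    return underlying, multiple

records = [
    ["lung disease", "heart disease"],   # lung disease is underlying
    ["heart disease", "lung disease"],   # lung disease only contributing
    ["heart disease"],
]
u, m = death_counts(records)
print(u.get("lung disease", 0), m.get("lung disease", 0))  # → 1 2
```

In this toy example, lung disease deaths double when counted by multiple cause, which is the kind of divergence the communication warns can reshape estimated mortality rates.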
Formation of free round jets with long laminar regions at large Reynolds numbers
Zayko, Julia; Teplovodskii, Sergey; Chicherina, Anastasia; Vedeneev, Vasily; Reshmin, Alexander
2018-04-01
The paper describes a new, simple method for the formation of free round jets with long laminar regions using a jet-forming device only ~1.5 jet diameters in size. Submerged jets of 0.12 m diameter at Reynolds numbers of 2000-12,560 are studied experimentally. It is shown that for the optimal regime the laminar region length reaches 5.5 diameters at a Reynolds number of ~10,000, which is not achievable with other methods of laminar jet formation. To explain the existence of the optimal regime, a steady flow calculation in the forming unit and a stability analysis of the outcoming jet velocity profiles are conducted. The shortening of the laminar regions, compared with the optimal regime, is explained by the higher incoming turbulence level at lower velocities and by the increase of perturbation growth rates at larger velocities. The initial laminar regions of free jets can be used to organise air curtains for the protection of objects in medicine and technology, by creating an air field with desired properties that is not mixed with ambient air. Free jets with long laminar regions can also be used for detailed studies of perturbation growth and transition to turbulence in round jets.
Directory of Open Access Journals (Sweden)
Jesús García Herrero
2003-07-01
Full Text Available This paper describes the application of evolution strategies to the design of interacting multiple model (IMM) tracking filters in order to fulfil a large table of performance specifications. These specifications define the desired filter performance in a thorough set of selected test scenarios, for different figures of merit and input conditions, imposing hundreds of performance goals. The design problem is stated as a numeric search in the filter parameter space to attain all specifications or, at least, to minimize in a compromise the excess over some specifications as much as possible, applying global optimization techniques from the field of evolutionary computation. In addition, a new methodology is proposed to integrate the specifications into a fitness function able to effectively guide the search to suitable solutions. The method has been applied to the design of an IMM tracker for a real-world civil air traffic control application: the accomplishment of specifications defined for the future European ARTAS system.
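The kind of fitness function described, one that penalizes only the excess over each performance goal so that any parameter set meeting all specifications scores zero, can be sketched as follows; the specification names and values are hypothetical:

```python
def fitness(perf, spec):
    """Sum of relative excesses over performance specifications.

    perf: achieved value per specification (lower is better, e.g. RMS error).
    spec: required maximum value per specification.
    Returns 0.0 when every specification is met; otherwise the sum of
    normalized excesses, which the evolution strategy tries to minimize.
    """
    excess = 0.0
    for name, required in spec.items():
        achieved = perf[name]
        if achieved > required:
            excess += (achieved - required) / required
    return excess

# hypothetical figures of merit for one candidate IMM parameter set
perf = {"rms_position_error": 120.0, "track_loss_rate": 0.01}
spec = {"rms_position_error": 100.0, "track_loss_rate": 0.02}
print(fitness(perf, spec))  # 0.2: only the position error exceeds its goal
```

Normalizing each excess by its requirement makes goals with different units commensurable, which is the point of folding hundreds of specifications into a single scalar objective.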
Jet Impingement Heat Transfer at High Reynolds Numbers and Large Density Variations
DEFF Research Database (Denmark)
Jensen, Michael Vincent; Walther, Jens Honore
2010-01-01
Jet impingement heat transfer from a round gas jet to a flat wall has been investigated numerically in a configuration with H/D=2, where H is the distance from the jet inlet to the wall and D is the jet diameter. The jet Reynolds number was 361,000 and the density ratio across the wall boundary layer was 3.3, due to a substantial temperature difference of 1600 K between jet and wall. Results are presented which indicate very high heat flux levels, and it is demonstrated that the jet inlet turbulence intensity significantly influences the heat transfer results, especially in the stagnation region. The results also show a noticeable difference in the heat transfer predictions when applying different turbulence models. Furthermore, calculations were performed to study the effect of applying temperature-dependent thermophysical properties versus constant properties and the effect of calculating the gas…
On the strong law of large numbers for $\\varphi$-subgaussian random variables
Zajkowski, Krzysztof
2016-01-01
For $p\\ge 1$ let $\\varphi_p(x)=x^2/2$ if $|x|\\le 1$ and $\\varphi_p(x)=1/p|x|^p-1/p+1/2$ if $|x|>1$. For a random variable $\\xi$ let $\\tau_{\\varphi_p}(\\xi)$ denote $\\inf\\{a\\ge 0:\\;\\forall_{\\lambda\\in\\mathbb{R}}\\; \\ln\\mathbb{E}\\exp(\\lambda\\xi)\\le\\varphi_p(a\\lambda)\\}$; $\\tau_{\\varphi_p}$ is a norm in a space $Sub_{\\varphi_p}=\\{\\xi:\\;\\tau_{\\varphi_p}(\\xi)1$) there exist positive constants $c$ and $\\alpha$ such that for every natural number $n$ the following inequality $\\tau_{\\varphi_p}(\\sum_{i=1...
Large boson number IBM calculations and their relationship to the Bohr model
International Nuclear Information System (INIS)
Thiamova, G.; Rowe, D.J.
2009-01-01
Recently, the SO(5) Clebsch-Gordan (CG) coefficients up to seniority v max =40 were computed in floating point arithmetic (T.A. Welsh, unpublished (2008)) and, in exact arithmetic, as square roots of rational numbers (M.A. Caprio et al., to be published in Comput. Phys. Commun.). It is shown in this paper that extending the QQQ model calculations set up in the work by D.J. Rowe and G. Thiamova (Nucl. Phys. A 760, 59 (2005)) to N=v max =40 is sufficient to obtain IBM results converged to the Bohr contraction limit. This is done by comparing some important matrix elements in both models, by looking at the seniority decomposition of low-lying states, and by examining the behavior of the energy and B(E2) transition strength ratios with increasing seniority. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Zhou, Ye [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Thornber, Ben [The Univ. of Sydney, Sydney, NSW (Australia)
2016-04-12
Here, implicit large-eddy simulation (ILES) has been utilized as an effective approach for calculating many complex flows at high Reynolds numbers. Richtmyer–Meshkov instability (RMI) induced flow can be viewed as homogeneous decaying turbulence (HDT) after the passage of the shock. In this article, a critical evaluation of three methods for estimating the effective Reynolds number and the effective kinematic viscosity is undertaken utilizing high-resolution ILES data. Effective Reynolds numbers based on the vorticity and dissipation rate, or on the integral and inner-viscous length scales, are found to be the most self-consistent when compared to the expected phenomenology and wind tunnel experiments.
DEFF Research Database (Denmark)
Lazarov, Boyan Stefanov; Ditlevsen, Ove
2005-01-01
The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elasti...
International Nuclear Information System (INIS)
Arvieu, R.
The assumptions and principles of the spectral distribution method are reviewed. The object of the method is to deduce information on nuclear spectra by constructing a frequency function which has the same first few moments as the exact frequency function, these moments then being calculated exactly. The method is applied to subspaces containing a large number of quasi-particles.
Directory of Open Access Journals (Sweden)
Huilin Huang
2014-01-01
Full Text Available We study strong limit theorems for hidden Markov chain fields indexed by an infinite tree with uniformly bounded degrees. We mainly establish the strong law of large numbers for hidden Markov chain fields indexed by such a tree and give the strong limit law of the conditional sample entropy rate.
Heidema, A.G.; Boer, J.M.A.; Nagelkerke, N.; Mariman, E.C.M.; A, van der D.L.; Feskens, E.J.M.
2006-01-01
Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods
Catering for large numbers of tourists: the McDonaldization of casual dining in Kruger National Park
Directory of Open Access Journals (Sweden)
Ferreira Sanette L.A.
2016-09-01
Full Text Available Since 2002 Kruger National Park (KNP) has been subject to a commercialisation strategy. Regarding income generation, SANParks sees KNP as the goose that lays the golden eggs. As part of SANParks' commercialisation strategy, and in response to providing services that are efficient, predictable and calculable for a large number of tourists, SANParks has allowed well-known branded restaurants to be established in certain rest camps in KNP. This innovation has raised a range of concerns and opinions among the public. This paper investigates the what and the where of casual dining experiences in KNP; describes how the catering services have evolved over the last 70 years; and evaluates current visitor perceptions of the introduction of franchised restaurants in the park. The main research instrument was a questionnaire survey. Survey findings confirmed that restaurant managers, park managers and visitors recognise franchised restaurants as positive contributors to the unique KNP experience. Park managers appraised the franchised restaurants as mechanisms for funding conservation.
Clinical Trials With Large Numbers of Variables: Important Advantages of Canonical Analysis.
Cleophas, Ton J
2016-01-01
Canonical analysis assesses the combined effects of a set of predictor variables on a set of outcome variables, but it is little used in clinical trials despite the omnipresence of multiple variables. The aim of this study was to assess the performance of canonical analysis as compared with traditional multivariate methods using multivariate analysis of covariance (MANCOVA). As an example, a simulated data file with 12 gene expression levels and 4 drug efficacy scores was used. The correlation coefficient between the 12 predictor and 4 outcome variables was 0.87 (P = 0.0001), meaning that 76% of the variability in the outcome variables was explained by the 12 covariates. Repeated testing after the removal of 5 unimportant predictor variables and 1 outcome variable produced virtually the same overall result. The MANCOVA identified identical unimportant variables, but it was unable to provide overall statistics. (1) Canonical analysis is remarkable because it can handle many more variables than traditional multivariate methods such as MANCOVA can. (2) At the same time, it accounts for the relative importance of the separate variables, their interactions and differences in units. (3) Canonical analysis provides overall statistics of the effects of sets of variables, whereas traditional multivariate methods only provide the statistics of the separate variables. (4) Unlike other methods for combining the effects of multiple variables, such as factor analysis/partial least squares, canonical analysis is scientifically entirely rigorous. (5) Limitations include that it is less flexible than factor analysis/partial least squares, because only 2 sets of variables are used and because multiple solutions instead of one are offered. We hope that this article will stimulate clinical investigators to start using this remarkable method.
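Canonical correlations can be computed from first principles as the singular values of the whitened cross-covariance matrix between the two variable sets; the NumPy sketch below is the textbook construction (with a small ridge term for numerical stability), not the software used in the study, and the simulated data are illustrative:

```python
import numpy as np

def canonical_correlations(X, Y, reg=1e-8):
    """Canonical correlations between data matrices X (n x p) and Y (n x q).

    Computed as the singular values of Sxx^{-1/2} Sxy Syy^{-1/2}, where the
    S matrices are sample (cross-)covariances of the centered data.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / (n - 1)

    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)   # symmetric, so eigh is appropriate
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(M, compute_uv=False)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                   # e.g. gene expression levels
Y = np.column_stack([X @ rng.normal(size=5),    # outcome driven by X
                     rng.normal(size=200)])     # pure-noise outcome
rho = canonical_correlations(X, Y)
print(rho)  # first correlation near 1, second small
```

With one outcome built as an exact linear combination of the predictors, the first canonical correlation is essentially 1 while the second, belonging to the noise outcome, stays small.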
Hydrodynamic interaction on large-Reynolds-number aligned bubbles: Drag effects
International Nuclear Information System (INIS)
Ramirez-Munoz, J.; Salinas-Rodriguez, E.; Soria, A.; Gama-Goicochea, A.
2011-01-01
Highlights: → The hydrodynamic interaction of a pair of aligned equal-sized bubbles is analyzed. → The leading bubble wake decreases the drag on the trailing bubble. → A new semi-analytical model for the trailing bubble's drag is presented. → The equilibrium distance between bubbles is predicted. - Abstract: The hydrodynamic interaction of two equal-sized spherical gas bubbles rising along a vertical line with a Reynolds number (Re) between 50 and 200 is analyzed. An approach to estimating the trailing bubble drag based on the search for a proper reference fluid velocity is proposed. Our main result is a new, simple semi-analytical model for the trailing bubble drag. Additionally, the equilibrium separation distance between bubbles is predicted. The proposed models agree quantitatively, down to small distances between bubbles, with reported data for 50 ≤ Re ≤ 200. The relative average error for the trailing bubble drag, Er, is found to be in the range 1.1 ≤ Er ≤ 1.7, i.e., of the same order as the analytical predictions in the literature.
KITSCH AND DURABLE DEVELOPMENT OF THE REGIONS THAT HAVE A LARGE NUMBER OF RELIGIOUS SETTLEMENTS
Directory of Open Access Journals (Sweden)
ENEA CONSTANTA
2016-06-01
Full Text Available We live in a world of contemporary kitsch, a world that merges the authentic and the false, where good taste often meets bad taste. The phenomenon is found everywhere: in art, in cheap literature, in media productions, in shows, in street dialogues, in homes, in politics; in other words, in everyday life. Kitsch has also entered tourism directly, and can be identified in all forms of tourism worldwide, but especially in religious tourism and pilgrimage, which have enjoyed unexpected success in recent years. This paper analyses the progressive evolution of religious tourist traffic and the ability of religious tourism destinations to remain competitive despite all these problems: to attract visitors and retain their loyalty, to remain unique in cultural terms, and to stay in permanent balance with an environment that the kitsch phenomenon has invaded, mixing disgracefully and dangerously with authentic spirituality. How trade, and more precisely its kitsch component, affects this environment is examined from the standpoint of the religious tourism offer, based on a survey of the major monastic ensembles of northern Oltenia. The research objectives were, on the one hand, the contributions and effects of the high number of visitors on the regions that hold religious sites and, on the other hand, the weight and effects of the commercial activity, whether genuine or kitsch, carried out in or near the monastic establishments of those regions. The study covered the northern region of Oltenia, where tourism demand is predominantly oriented toward religious tourism.
Secondary organic aerosol formation from a large number of reactive man-made organic compounds
Energy Technology Data Exchange (ETDEWEB)
Derwent, Richard G., E-mail: r.derwent@btopenworld.com [rdscientific, Newbury, Berkshire (United Kingdom); Jenkin, Michael E. [Atmospheric Chemistry Services, Okehampton, Devon (United Kingdom); Utembe, Steven R.; Shallcross, Dudley E. [School of Chemistry, University of Bristol, Bristol (United Kingdom); Murrells, Tim P.; Passant, Neil R. [AEA Environment and Energy, Harwell International Business Centre, Oxon (United Kingdom)
2010-07-15
A photochemical trajectory model has been used to examine the relative propensities of a wide variety of volatile organic compounds (VOCs) emitted by human activities to form secondary organic aerosol (SOA) under one set of highly idealised conditions representing northwest Europe. This study applied a detailed speciated VOC emission inventory and the Master Chemical Mechanism version 3.1 (MCM v3.1) gas phase chemistry, coupled with an optimised representation of gas-aerosol absorptive partitioning of 365 oxygenated chemical reaction product species. In all, SOA formation was estimated from the atmospheric oxidation of 113 emitted VOCs. A number of aromatic compounds, together with some alkanes and terpenes, showed significant propensities to form SOA. When these propensities were folded into the detailed speciated emission inventory, 15 organic compounds together accounted for 97% of the SOA formation potential of UK man-made VOC emissions, and 30 emission source categories accounted for 87% of this potential. After road transport and the chemical industry, SOA formation was dominated by the solvents sector, which accounted for 28% of the SOA formation potential.
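Absorptive gas-aerosol partitioning of the kind referenced is commonly formulated as a Pankow-type equilibrium in which each product i condenses in proportion to K_i·M_o/(1 + K_i·M_o), with M_o the absorbing organic mass; whether this exact formulation matches the paper's optimised representation is an assumption, and the concentrations and coefficients below are made up for illustration:

```python
def soa_mass(c_tot, k_part, seed=0.1, tol=1e-10, max_iter=1000):
    """Solve the absorptive-partitioning balance for organic aerosol mass.

    c_tot:  total (gas + aerosol) concentration of each product (ug/m3)
    k_part: partitioning coefficient of each product (m3/ug)
    seed:   pre-existing absorbing mass (ug/m3)
    Fixed-point iteration on M_o = seed + sum_i c_i * K_i M_o / (1 + K_i M_o).
    """
    m = seed
    for _ in range(max_iter):
        m_new = seed + sum(c * k * m / (1.0 + k * m)
                           for c, k in zip(c_tot, k_part))
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

m_o = soa_mass([10.0, 5.0], [0.1, 0.01])  # two hypothetical products
print(m_o)
```

The iteration converges because the right-hand side is increasing and bounded; the self-consistent M_o is the condensed SOA mass from which per-VOC formation propensities can be derived.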
Normal zone detectors for a large number of inductively coupled coils
International Nuclear Information System (INIS)
Owen, E.W.; Shimer, D.W.
1983-01-01
In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this paper uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent
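The location-and-measurement logic described, one combined set of equations per hypothesized normal-zone pattern, with the best-fitting set selected, can be sketched as an exhaustive least-squares search; the sensitivity matrix below is illustrative, not a real coil model:

```python
import numpy as np
from itertools import combinations

def locate_normal_zones(A, v, max_zones=2, tol=1e-6):
    """Pick the normal-zone hypothesis that best explains the measurements.

    A: (m x k) matrix mapping candidate normal-zone voltages to the m
       measurements (bridge outputs plus coil voltages).
    v: measured vector of length m.
    Tries every subset of up to max_zones candidate locations, solves the
    corresponding least-squares problem, and returns the subset with the
    smallest residual together with the estimated zone voltages.
    """
    k = A.shape[1]
    best = (None, None, np.inf)
    for size in range(1, max_zones + 1):
        for subset in combinations(range(k), size):
            sub = A[:, list(subset)]
            x, *_ = np.linalg.lstsq(sub, v, rcond=None)
            res = np.linalg.norm(sub @ x - v)
            if res < best[2] - tol:
                best = (subset, x, res)
    return best

# toy system: 4 measurements, 3 candidate zone locations
A = np.array([[1.0, 0.2, 0.0],
              [0.1, 1.0, 0.3],
              [0.0, 0.4, 1.0],
              [0.5, 0.5, 0.5]])
true = np.zeros(3)
true[1] = 2.0                 # one normal zone, in coil 1
v = A @ true
subset, x, res = locate_normal_zones(A, v)
print(subset, x)
```

The hypothesis whose combined equations are consistent with both measurement sets wins, which mirrors the paper's "choose the correct set, then read off the zone size" procedure.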
Normal zone detectors for a large number of inductively coupled coils. Revision 1
International Nuclear Information System (INIS)
Owen, E.W.; Shimer, D.W.
1983-01-01
In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this paper uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent. The effect on accuracy of changes in the system parameters is discussed
Normal zone detectors for a large number of inductively coupled coils
International Nuclear Information System (INIS)
Owen, E.W.; Shimer, D.W.
1983-01-01
In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this report uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication by a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent. An example of the detector design is given for four coils with realistic parameters. The effect on accuracy of changes in the system parameters is discussed.
Alboruto, Venus M.
2017-05-01
The study aimed to find out the effectiveness of using Strategic Intervention Materials (SIMs) as an innovative teaching practice in managing large Grade Eight Science classes to raise the performance of the students in terms of science process skills development and mastery of science concepts. Utilizing experimental research design with two groups of participants, which were purposefully chosen, it was obtained that there existed a significant difference in the performance of the experimental and control groups based on actual class observation and written tests on science process skills with a p-value of 0.0360 in favor of the experimental class. Further, results of written pre-test and post-test on science concepts showed that the experimental group with the mean of 24.325 (SD =3.82) performed better than the control group with the mean of 20.58 (SD =4.94), with a registered p-value of 0.00039. Therefore, the use of SIMs significantly contributed to the mastery of science concepts and the development of science process skills. Based on the findings, the following recommendations are offered: 1. that grade eight science teachers should use or adopt the SIMs used in this study to improve their students' performance; 2. training-workshop on developing SIMs must be conducted to help teachers develop SIMs to be used in their classes; 3. school administrators must allocate funds for the development and reproduction of SIMs to be used by the students in their school; and 4. every division should have a repository of SIMs for easy access of the teachers in the entire division.
Instrumentation and controls training in TVA: a story of large numbers
International Nuclear Information System (INIS)
Conner, D.L.
1981-01-01
The Tennessee Valley Authority currently has four nuclear units on line, with seventeen units planned prior to the year 2000. Providing a permanent instrumentation and controls workforce, projected to be approximately 700 trades and labor craftsmen, is a formidable task requiring the cooperative efforts of many diverse groups: management and labor, engineers and craftsmen, students and instructors. In TVA, the primary source of skilled I and C craftsmen is formal training programs conducted in cooperation with the craft bargaining unit, the International Brotherhood of Electrical Workers. The purpose of this paper is to describe and review the progress of two of these programs: the Instrument Mechanic Training Program and the Senior Instrument Mechanic Training Program.
Kim, Jibum; Shin, Hee-Choon; Rosen, Zohn; Kang, Jeong-han; Dykema, Jennifer; Muennig, Peter
2015-01-01
Privacy and confidentiality are often of great concern to respondents answering sensitive questions posed by interviewers. Using the 1993-2010 General Social Survey, we examined trends in the provision of social security numbers (SSNs) and correlates of those responses. Results indicate that the rate of SSN provision has declined over the past…
Tiselj, Iztok
2014-12-01
Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ~2300 wall units long and ~750 wall units wide, the size taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flow, was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows that the velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) agree within 1%-2%. Similar agreement is observed for the Pr = 1 temperature fields and also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations, of the standard and large computational domains at Pr = 0.01 show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of the large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play a visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of large scales. These large thermal structures represent some kind of an echo of the large-scale velocity structures: the highest temperature-velocity correlations are not observed between the instantaneous temperatures and
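The domain-comparison statistics quoted above (mean velocity, RMS fluctuations, Reynolds stresses) are plain ensemble averages over time and the homogeneous directions; a minimal sketch on synthetic samples rather than DNS data:

```python
import numpy as np

def turbulence_stats(u, v):
    """Plane-averaged statistics of the kind compared across DNS domains.

    u, v: arrays of shape (samples, ny) holding velocity samples over
    time/homogeneous directions at each wall-normal location.
    Returns the mean velocity, RMS fluctuation, and Reynolds shear
    stress <u'v'> as functions of wall-normal position.
    """
    u_mean = u.mean(axis=0)
    up = u - u_mean                 # fluctuation about the mean
    vp = v - v.mean(axis=0)
    u_rms = np.sqrt((up ** 2).mean(axis=0))
    uv = (up * vp).mean(axis=0)     # Reynolds shear stress component
    return u_mean, u_rms, uv

# synthetic stand-in: mean 10, RMS 2, and correlated v so <u'v'> = 2
rng = np.random.default_rng(1)
u = 10.0 + rng.normal(scale=2.0, size=(50000, 4))
v = 0.5 * (u - 10.0) + rng.normal(scale=1.0, size=(50000, 4))
u_mean, u_rms, uv = turbulence_stats(u, v)
```

The 1%-2% agreement statement in the abstract is about exactly such averages converging to the same profiles regardless of domain size; only higher-order scalar moments at Pr = 0.01 pick up the large-scale structures.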
What caused a large number of fatalities in the Tohoku earthquake?
Ando, M.; Ishida, M.; Nishikawa, Y.; Mizuki, C.; Hayashi, Y.
2012-04-01
The Mw 9.0 earthquake caused 20,000 deaths and missing persons in northeastern Japan. In the 115 years prior to this event, three historical tsunamis struck the region, one of which was a "tsunami earthquake" that resulted in a death toll of 22,000. Since then, numerous breakwaters were constructed along the entire northeastern coast, tsunami evacuation drills were carried out, and hazard maps were distributed to local residents in numerous communities. However, despite this construction and preparedness, the March 11 Tohoku earthquake caused numerous fatalities. The strong shaking lasted three minutes or longer, so all residents recognized it as the strongest and longest earthquake they had ever experienced. The tsunami inundated an enormous area of about 560 km2 across 35 cities along the coast of northeast Japan. To find out the reasons behind the high number of fatalities of the March 11 tsunami, we interviewed 150 tsunami survivors at public evacuation shelters in 7 cities, mainly in Iwate prefecture, in mid-April and early June 2011. Interviews lasted about 30 min or longer and focused on the survivors' evacuation behaviors and those they had observed. On the basis of the interviews, we found that residents' decisions not to evacuate immediately were partly due to, or influenced by, earthquake science results. Below are some of the factors that affected residents' decisions. 1. Earthquake hazard assessments turned out to be incorrect: expected earthquake magnitudes and resultant hazards in northeastern Japan assessed and publicized by the government were significantly smaller than the actual Tohoku earthquake. 2. Many residents did not receive accurate tsunami warnings: the first tsunami warnings were too small compared with the actual tsunami heights. 3. Previous frequent warnings with overestimated tsunami heights influenced the behavior of the residents. 4. Many local residents above 55 years old experienced
Surface modes of ultra-cold atomic clouds with very large number of vortices
Energy Technology Data Exchange (ETDEWEB)
Cazalilla, M A [Donostia International Physics Center, Donostia (Spain); [Abdus Salam International Centre for Theoretical Physics, Trieste (Italy)
2003-04-01
We study the surface modes of some of the vortex liquids recently found by means of exact diagonalizations in systems of rapidly rotating bosons. In contrast to the surface modes of Bose condensates, we find that the surface waves have a frequency linear in the excitation angular momentum, ℏl > 0. Furthermore, in analogy with the edge waves of electronic quantum Hall states, these excitations are chiral, that is, they can be excited only for values of l that increase the total angular momentum of the vortex liquid. However, differently from the quantum Hall phenomena for electrons, we also find other excitations that are approximately degenerate in the laboratory frame with the surface modes, and which decrease the total angular momentum by l quanta. The surface modes of the Laughlin state, as well as other scalar and vector boson states, are analyzed, and their observable properties characterized. We argue that measurement of the response of a vortex liquid to a weak time-dependent potential that imparts angular momentum to the system should provide valuable information to characterize the vortex liquid. In particular, the intensity of the signal of the surface waves in the dynamic structure factor has been studied and found to depend on the type of vortex liquid. We point out that the existence of surface modes has observable consequences on the density profile of the Laughlin state. These features are due to the strongly correlated behavior of atoms in the vortex liquids. We point out that these correlations should be responsible for a remarkable stability of some vortex liquids with respect to three-body losses. (author)
Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M
2006-04-21
Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation between large numbers of genetic and environmental predictors to disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis, neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN) and several non-parametric methods, which include the set association approach, combinatorial partitioning method (CPM), restricted partitioning method (RPM), multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. Therefore, they are less useful than the non-parametric methods to approach association studies with large numbers of predictor variables. GPNN on the other hand may be a useful approach to select and model important predictors, but its performance to select the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset of predictors with an important contribution to disease. The combinatorial methods give more insight in combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses we conclude that to approach genetic association
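As a concrete illustration of the dimension-reduction theme (reducing a large number of genetic predictors to a promising subset), here is a minimal univariate correlation screen in NumPy; it is deliberately simpler than the reviewed methods such as random forests or the set-association approach, and the data are simulated:

```python
import numpy as np

def screen_snps(genotypes, phenotype, top=10):
    """Rank SNPs by absolute correlation with a phenotype.

    genotypes: (n, m) minor-allele counts in {0, 1, 2}
    phenotype: (n,) 0/1 case-control status
    Returns the indices of the `top` SNPs by absolute point-biserial
    correlation, a crude first-pass filter before multivariate modelling.
    """
    g = genotypes - genotypes.mean(axis=0)
    p = phenotype - phenotype.mean()
    corr = (g * p[:, None]).sum(axis=0) / (
        np.sqrt((g ** 2).sum(axis=0) * (p ** 2).sum()) + 1e-12)
    return np.argsort(-np.abs(corr))[:top]

# simulate 500 subjects x 200 SNPs with SNP 42 truly associated
rng = np.random.default_rng(7)
G = rng.integers(0, 3, size=(500, 200)).astype(float)
y = (G[:, 42] + rng.normal(scale=0.8, size=500) > 2.0).astype(float)
top5 = screen_snps(G, y, top=5)
print(top5)
```

Such single-marker filters miss interaction effects, which is exactly the gap the combinatorial methods discussed above (CPM, RPM, MDR) are designed to fill.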
Dogan, Eda; Hearst, R Jason; Ganapathisubramani, Bharathram
2017-03-13
A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to 'simulate' high Reynolds number wall-turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows, as they demonstrate that these can be achieved at typical laboratory scales. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
Ross-Adams, H.; Lamb, A.D.; Dunning, M.J.; Halim, S.; Lindberg, J.; Massie, C.M.; Egevad, L.A.; Russell, R.; Ramos-Montoya, A.; Vowler, S.L.; Sharma, N.L.; Kay, J.; Whitaker, H.; Clark, J.; Hurst, R.; Gnanapragasam, V.J.; Shah, N.C.; Warren, A.Y.; Cooper, C.S.; Lynch, A.G.; Stark, R.; Mills, I.G.; Grönberg, H.; Neal, D.E.
2015-01-01
Background Understanding the heterogeneous genotypes and phenotypes of prostate cancer is fundamental to improving the way we treat this disease. As yet, there are no validated descriptions of prostate cancer subgroups derived from integrated genomics linked with clinical outcome. Methods In a study of 482 tumour, benign and germline samples from 259 men with primary prostate cancer, we used integrative analysis of copy number alterations (CNA) and array transcriptomics to identify genomic loci that affect expression levels of mRNA in an expression quantitative trait loci (eQTL) approach, to stratify patients into subgroups that we then associated with future clinical behaviour, and compared with either CNA or transcriptomics alone. Findings We identified five separate patient subgroups with distinct genomic alterations and expression profiles based on 100 discriminating genes in our separate discovery and validation sets of 125 and 103 men. These subgroups were able to consistently predict biochemical relapse (p = 0.0017 and p = 0.016, respectively) and were further validated in a third cohort with long-term follow-up (p = 0.027). We show the relative contributions of gene expression and copy number data to phenotype, and demonstrate the improved power gained from integrative analyses. We confirm alterations in six genes previously associated with prostate cancer (MAP3K7, MELK, RCBTB2, ELAC2, TPD52, ZBTB4), and also identify 94 genes not previously linked to prostate cancer progression that would not have been detected using either transcript or copy number data alone. We confirm a number of previously published molecular changes associated with high risk disease, including MYC amplification, and NKX3-1, RB1 and PTEN deletions, as well as over-expression of PCA3 and AMACR, and loss of MSMB in tumour tissue. A subset of the 100 genes outperforms established clinical predictors of poor prognosis (PSA, Gleason score), as well as previously published gene
Directory of Open Access Journals (Sweden)
Annelies CEULEMANS
2014-03-01
Full Text Available Many studies tested the association between numerical magnitude processing and mathematical achievement, with conflicting findings reported for individuals with mathematical learning disorders. Some of the inconsistencies might be explained by the number of non-symbolic stimuli or dot collections used in studies. It has been hypothesized that there is an object-file system for ‘small’ and an analogue magnitude system for ‘large’ numbers. This two-system account has been supported by the set size limit of the object-file system (three items). A boundary was defined, accordingly, categorizing numbers below four as ‘small’ and from four and above as ‘large’. However, data on ‘small’ number processing and on the ‘boundary’ between small and large numbers are missing. In this contribution we provide data from infants discriminating between the number sets 4 vs. 8 and 1 vs. 4, both containing the number four combined with a large and a small number respectively. Participants were 25 and 26 full-term 9-month-olds for 4 vs. 8 and 1 vs. 4 respectively. The stimuli (dots) were controlled for continuous variables. Eye-tracking was combined with the habituation paradigm. The results showed that the infants were successful in discriminating 1 from 4, but failed to discriminate 4 from 8 dots. This finding supports the assumption of the number four as a ‘small’ number and enlarges the object-file system’s limit. This study might help to explain inconsistencies between studies. Moreover, the information may be useful in answering parents’ questions about challenges that vulnerable children with number processing problems, such as children with mathematical learning disorders, might encounter. In addition, the study might give some information on the stimuli that can be used to effectively foster children’s magnitude processing skills.
2012-01-01
Background Today patients can consult with their treating physician by cell phone or e-mail. These means of communication enhance the quality of medical care and increase patient satisfaction, but they can also impinge on physicians’ free time and their patient schedule while at work. The objective of this study is to assess the attitudes and practice of patients on obtaining the cell phone number or e-mail address of their physician for the purpose of medical consultation. Methods Personal interviews with patients, 18 years of age or above, selected by random sampling from the roster of adults insured by Clalit Health Services, Southern Division. The total response rate was 41%. The questionnaire included questions on the attitude and practice of patients towards obtaining their physician’s cell phone number or e-mail address. Comparisons were performed using Chi-square tests to analyze statistically significant differences of categorical variables. Two-tailed p values less than 0.05 were considered statistically significant, with a power of 0.8. Results The study sample included 200 patients with a mean age of 46.6 ± 17.1, of whom 110 were women (55%). Ninety-three (46.5%) responded that they would be very interested in obtaining their physician’s cell phone number, and an additional 83 (41.5%) would not object to obtaining it. Of the 171 patients (85.5%) who had e-mail addresses, 25 (14.6%) said they would be very interested in obtaining their physician’s e-mail address, 85 (49.7%) said they would not object to getting it, and 61 (35.7%) were not interested. In practice only one patient had requested the physician’s e-mail address and none actually had it. Conclusions Patients favored cell phones over e-mail for consulting with their treating physicians. With new technologies such as cell phones and e-mail in common use, it is important to determine how they can be best used and how they should be integrated into the flow of clinical practice
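The chi-square comparisons of categorical responses described in the Methods can be sketched on a 2 × 2 contingency table. The counts below are hypothetical illustrations, not the study's data, and the Pearson statistic is computed directly rather than with a statistics package:

```python
import numpy as np

def chi2_stat(table):
    """Pearson chi-square statistic for a two-way contingency table."""
    t = np.asarray(table, dtype=float)
    row = t.sum(axis=1, keepdims=True)          # row totals
    col = t.sum(axis=0, keepdims=True)          # column totals
    expected = row @ col / t.sum()              # expected counts under independence
    return ((t - expected) ** 2 / expected).sum()

# Hypothetical 2 x 2 table: interest in the physician's cell number by sex
table = [[50, 43],   # women: interested / not interested
         [43, 64]]   # men:   interested / not interested
chi2 = chi2_stat(table)
```

With one degree of freedom, the result is compared against the 3.84 critical value for a two-tailed p below 0.05, matching the significance threshold the study reports.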
Directory of Open Access Journals (Sweden)
Peleg Roni
2012-08-01
Full Text Available Abstract Background Today patients can consult with their treating physician by cell phone or e-mail. These means of communication enhance the quality of medical care and increase patient satisfaction, but they can also impinge on physicians’ free time and their patient schedule while at work. The objective of this study is to assess the attitudes and practice of patients on obtaining the cell phone number or e-mail address of their physician for the purpose of medical consultation. Methods Personal interviews with patients, 18 years of age or above, selected by random sampling from the roster of adults insured by Clalit Health Services, Southern Division. The total response rate was 41%. The questionnaire included questions on the attitude and practice of patients towards obtaining their physician’s cell phone number or e-mail address. Comparisons were performed using Chi-square tests to analyze statistically significant differences of categorical variables. Two-tailed p values less than 0.05 were considered statistically significant, with a power of 0.8. Results The study sample included 200 patients with a mean age of 46.6 ± 17.1, of whom 110 were women (55%). Ninety-three (46.5%) responded that they would be very interested in obtaining their physician’s cell phone number, and an additional 83 (41.5%) would not object to obtaining it. Of the 171 patients (85.5%) who had e-mail addresses, 25 (14.6%) said they would be very interested in obtaining their physician’s e-mail address, 85 (49.7%) said they would not object to getting it, and 61 (35.7%) were not interested. In practice only one patient had requested the physician’s e-mail address and none actually had it. Conclusions Patients favored cell phones over e-mail for consulting with their treating physicians. With new technologies such as cell phones and e-mail in common use, it is important to determine how they can be best used and how they should be integrated into the flow of clinical practice.
Directory of Open Access Journals (Sweden)
Bao Wang
2014-01-01
Full Text Available We study the strong law of large numbers for the frequencies of occurrence of states and ordered couples of states for countable Markov chains indexed by an infinite tree with uniformly bounded degree, which extends the corresponding results for countable Markov chains indexed by a Cayley tree and generalizes the related results for finite Markov chains indexed by a uniformly bounded tree.
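As an informal illustration of the frequency convergence the abstract builds on (in the simpler, non-tree setting), a short simulation can show the empirical state frequencies of a finite Markov chain approaching its stationary distribution; the transition matrix below is an arbitrary two-state example, not from the paper:

```python
import numpy as np

def empirical_frequencies(P, n_steps, start=0, seed=0):
    """Simulate a finite Markov chain and return empirical state frequencies."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    counts = np.zeros(n)
    state = start
    for _ in range(n_steps):
        counts[state] += 1
        # draw the next state from the current row of the transition matrix
        state = rng.choice(n, p=P[state])
    return counts / n_steps

# Two-state chain whose stationary distribution is (0.6, 0.4)
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
freq = empirical_frequencies(P, 100_000)
```

Solving pi = pi P for this matrix gives pi = (0.6, 0.4), and the simulated frequencies land close to those values, which is the strong-law behaviour the paper generalizes to tree-indexed chains.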
Pettigrew, Luisa M; Kumpunen, Stephanie; Mays, Nicholas; Rosen, Rebecca; Posaner, Rachel
2018-03-01
Over the past decade, collaboration between general practices in England to form new provider networks and large-scale organisations has been driven largely by grassroots action among GPs. However, it is now being increasingly advocated for by national policymakers. Expectations of what scaling up general practice in England will achieve are significant. To review the evidence of the impact of new forms of large-scale general practice provider collaborations in England. Systematic review. Embase, MEDLINE, Health Management Information Consortium, and Social Sciences Citation Index were searched for studies reporting the impact on clinical processes and outcomes, patient experience, workforce satisfaction, or costs of new forms of provider collaborations between general practices in England. A total of 1782 publications were screened. Five studies met the inclusion criteria and four examined the same general practice networks, limiting generalisability. Substantial financial investment was required to establish the networks and the associated interventions that were targeted at four clinical areas. Quality improvements were achieved through standardised processes, incentives at network level, information technology-enabled performance dashboards, and local network management. The fifth study of a large-scale multisite general practice organisation showed that it may be better placed to implement safety and quality processes than conventional practices. However, unintended consequences may arise, such as perceptions of disenfranchisement among staff and reductions in continuity of care. Good-quality evidence of the impacts of scaling up general practice provider organisations in England is scarce. As more general practice collaborations emerge, evaluation of their impacts will be important to understand which work, in which settings, how, and why. © British Journal of General Practice 2018.
On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System
Makki, Behrooz; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim
2015-01-01
In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.
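The kind of outage analysis described can be illustrated by Monte Carlo simulation. The sketch below assumes an i.i.d. Rayleigh block-fading channel and open-loop capacity, a common textbook model rather than the paper's exact setup, and the SNR and rate values are arbitrary:

```python
import numpy as np

def outage_probability(nt, nr, snr_db, rate, trials=2000, seed=1):
    """Monte Carlo estimate of P(capacity < rate) for an i.i.d. Rayleigh MIMO channel."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    outages = 0
    for _ in range(trials):
        # complex Gaussian channel matrix, unit average power per entry
        H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        # open-loop capacity with equal power allocation across transmit antennas
        cap = np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T).real)
        if cap < rate:
            outages += 1
    return outages / trials

# Outage drops sharply as antennas are added at fixed rate and SNR
p2 = outage_probability(2, 2, snr_db=10, rate=6)
p8 = outage_probability(8, 8, snr_db=10, rate=6)
```

Sweeping the antenna count until the estimated outage falls below a target constraint mirrors the paper's question of how many antennas a "large-but-finite" system actually needs.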
On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System
Makki, Behrooz
2015-11-12
In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.
Rutkowski, David J.; Prusinski, Ellen L.
2011-01-01
The staff of the Center for Evaluation & Education Policy (CEEP) at Indiana University is often asked about how international large-scale assessments influence U.S. educational policy. This policy brief is designed to provide answers to some of the most frequently asked questions encountered by CEEP researchers concerning the three most popular…
Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.
2013-04-01
The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
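The OSEM family of algorithms mentioned above is built on the multiplicative MLEM update (OSEM with a single subset reduces to MLEM). A minimal sketch on a toy noiseless system follows; the system matrix and 3-voxel image are arbitrary illustrations, not a PET geometry:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Toy MLEM reconstruction (OSEM with a single subset).

    A: system matrix (detector bins x image voxels), y: measured counts."""
    x = np.ones(A.shape[1])          # flat initial image
    sens = A.sum(axis=0)             # sensitivity image, A^T applied to ones
    for _ in range(n_iter):
        proj = A @ x                 # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / sens    # multiplicative update keeps x nonnegative
    return x

# Tiny noiseless example: recover a 3-voxel image from 4 projection bins
rng = np.random.default_rng(2)
A = rng.uniform(0.1, 1.0, size=(4, 3))
x_true = np.array([1.0, 3.0, 2.0])
y = A @ x_true
x_rec = mlem(A, y)
```

The multiplicative form is why MLEM/OSEM preserve nonnegativity, and the per-iteration forward and back projections are what make OSEM costly when the number of channels reaches the 10^6 scale discussed in the abstract.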
DEFF Research Database (Denmark)
Lazarov, Boyan Stefanov; Ditlevsen, Ove
2005-01-01
The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elastically within the yield limits and ideally plastically outside these, without accumulating eigenstresses. Within the elastic domain the frame is modeled as a linearly damped oscillator. The white noise excitation acts on the mass of the first floor making the movement of the elastic bottom floors simulate a ground...
Cronin, J. W.; Frisch, H. J.; Shochet, M. J.; Boymond, J. P.; Mermod, R.; Piroue, P. A.; Sumner, R. L.
1974-07-15
In an experiment at the Fermi National Accelerator Laboratory we have compared the production of large transverse momentum hadrons from targets of W, Ti, and Be bombarded by 300 GeV protons. The hadron yields were measured at 90 degrees in the proton-nucleon c.m. system with a magnetic spectrometer equipped with two Cerenkov counters and a hadron calorimeter. The production cross-sections have a dependence on the atomic number A that grows with transverse momentum p_T, eventually leveling off proportional to A^1.1.
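The A-dependence reported above, a cross-section scaling as A to some power, is typically extracted by a straight-line fit in log-log space. The yields below are synthetic numbers constructed to follow A^1.1 exactly, not the experiment's data:

```python
import numpy as np

# Illustrative per-nucleus mass numbers for Be, Ti, W targets
A = np.array([9.0, 47.9, 183.8])
# Synthetic yields following sigma = c * A^1.1 (c = 2.5 is arbitrary)
sigma = 2.5 * A ** 1.1

# Power-law exponent from a degree-1 fit in log-log space:
# log(sigma) = alpha * log(A) + log(c)
alpha, log_c = np.polyfit(np.log(A), np.log(sigma), 1)
```

Because the synthetic points are exactly collinear in log space, the fit recovers the exponent 1.1 to machine precision; with real measured yields the scatter of the points would give an uncertainty on alpha.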
Energy Technology Data Exchange (ETDEWEB)
Andersson, Bertil; Holmberg, Rikard
2010-08-15
This report presents a summary of experience from a large number of construction inspections of wind power projects. The working method is based on the collection of construction experience in the form of questionnaires. The questionnaires were supplemented by a number of in-depth interviews to understand in more detail what is perceived to be a problem and whether there were suggestions for improvements. The results in this report are based on inspection protocols from 174 wind turbines, which corresponds to about one-third of the power plants built in the time period. In total the questionnaires included 4683 inspection remarks as well as about one hundred free text comments. 52 of the 174 inspected power stations were rejected, corresponding to 30%. It has not been possible to identify any over-represented type of remark as a main cause of rejection, but the rejection is usually based on a total number of remarks that is too large. The average number of remarks for a power plant is 27. Most power stations have between 20 and 35 remarks. The most common remarks concern shortcomings in marking and documentation. These are easily adjusted, and may be regarded as less serious. There are, however, a number of remarks which are recurrent and quite serious, mainly regarding gearboxes, education and lightning protection. Usually these are also easily adjusted, but the consequences if not corrected can be very large. The consequences may be either shortened life of expensive components, e.g. oil problems in gearboxes, or increased probability of serious accidents, e.g. maladjusted lightning protection. In the report, comparisons between power stations with various construction periods, sizes, suppliers, geography and topography are also presented. The general conclusion is that the differences are small. The results of the evaluation of questionnaires correspond well with the results of the in-depth interviews with clients. The problem that clients agreed upon as the greatest is the lack
Asan, Onur; Holden, Richard J; Flynn, Kathryn E; Yang, Yushi; Azam, Laila; Scanlon, Matthew C
2016-07-20
The purpose of this study was to explore providers' perspectives on the use of a novel technology, the "Large Customizable Interactive Monitor" (LCIM), a novel application of the electronic health record system implemented in a Pediatric Intensive Care Unit. We employed a qualitative approach to collect and analyze data from pediatric intensive care physicians, pediatric nurse practitioners, and acute care specialists. Using semi-structured interviews, we collected data from January to April 2015. The research team analyzed the transcripts using an iterative coding method to identify common themes. Study results highlight contextual data on providers' use routines of the LCIM. Findings from thirty-six interviews were classified into three groups: 1) providers' familiarity with the LCIM; 2) providers' use routines (i.e., when and how they use it); and 3) reasons why they use or do not use it. It is important to conduct baseline studies of the use of novel technologies; training and orientation affect the adoption and use patterns of new technology. This study is notable for being the first to investigate an LCIM system, a next-generation system implemented in the pediatric critical care setting. Our study revealed this next-generation health information technology might have great potential for family-centered rounds, team education during rounds, and family education and engagement in their child's health in the patient room. This study also highlights the effect of training and orientation on the adoption patterns of new technology.
Yoon, Sung Hwan
2017-10-12
According to previous theory, pulsating propagation in a premixed flame only appears when the reduced Lewis number, β(Le − 1), is larger than a critical value (Sivashinsky criterion: 4(1 + √3) ≈ 11), where β represents the Zel'dovich number (for general premixed flames, β ≈ 10), which requires a Lewis number Le > 2.1. However, few experimental observations have been reported because the critical reduced Lewis number for the onset of pulsating instability is beyond what can be reached in experiments. Furthermore, the coupling with the unavoidable hydrodynamic instability limits the observation of pure pulsating instabilities in flames. Here, we describe a novel method to observe the pulsating instability. We utilize a thermoacoustic field caused by interaction between heat release and acoustic pressure fluctuations of the downward-propagating premixed flames in a tube to enhance conductive heat loss at the tube wall and radiative heat loss at the open end of the tube, owing to the extended flame residence time of the diminished flame surface area, i.e., a flat flame. The thermoacoustic field allowed pure observation of the pulsating motion since the primary acoustic force suppressed the intrinsic hydrodynamic instability resulting from thermal expansion. By employing this method, we have provided new experimental observations of the pulsating instability for premixed flames. The Lewis number (i.e., Le ≈ 1.86) was less than the critical value suggested previously.
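The Sivashinsky criterion β(Le − 1) > 4(1 + √3) translates directly into a critical Lewis number for a given Zel'dovich number; a one-line sketch, using the abstract's typical value β ≈ 10:

```python
import math

def critical_lewis(beta):
    """Critical Lewis number from the pulsating-instability criterion
    beta * (Le - 1) > 4 * (1 + sqrt(3))."""
    return 1.0 + 4.0 * (1.0 + math.sqrt(3.0)) / beta

# For a typical premixed flame, beta ~ 10
le_crit = critical_lewis(10.0)
```

This reproduces the Le > 2.1 threshold quoted in the abstract, and shows why the observed Le ≈ 1.86 sits below the classical critical value.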
Directory of Open Access Journals (Sweden)
Daniel Pettersson
2016-01-01
later the growing importance of transnational agencies and international, regional and national assessments. How to reference this article: Pettersson, D., Popkewitz, T. S., & Lindblad, S. (2016). On the Use of Educational Numbers: Comparative Constructions of Hierarchies by Means of Large-Scale Assessments. Espacio, Tiempo y Educación, 3(1), 177-202. doi: http://dx.doi.org/10.14516/ete.2016.003.001.10
Bergfelder-Drüing, Sarah; Grosse-Brinkhaus, Christine; Lind, Bianca; Erbe, Malena; Schellander, Karl; Simianer, Henner; Tholen, Ernst
2015-01-01
The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between the Large White and Landrace breeds were detected. PMID:25781935
Directory of Open Access Journals (Sweden)
Sarah Bergfelder-Drüing
Full Text Available The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between the Large White and Landrace breeds were detected.
Directory of Open Access Journals (Sweden)
Varala Kranthi
2007-05-01
Full Text Available Abstract Background Extensive computational and database tools are available to mine genomic and genetic databases for model organisms, but little genomic data is available for many species of ecological or agricultural significance, especially those with large genomes. Genome surveys using conventional sequencing techniques are powerful, particularly for detecting sequences present in many copies per genome. However these methods are time-consuming and have potential drawbacks. High-throughput 454 sequencing provides an alternative method by which much information can be gained quickly and cheaply from high-coverage surveys of genomic DNA. Results We sequenced 78 million base-pairs of randomly sheared soybean DNA which passed our quality criteria. Computational analysis of the survey sequences provided global information on the abundant repetitive sequences in soybean. The sequence was used to determine the copy number across regions of large genomic clones or contigs and discover higher-order structures within satellite repeats. We have created an annotated, online database of sequences present in multiple copies in the soybean genome. The low bias of pyrosequencing against repeat sequences is demonstrated by the overall composition of the survey data, which matches well with past estimates of repetitive DNA content obtained by DNA re-association kinetics (Cot analysis). Conclusion This approach provides a potential aid to conventional or shotgun genome assembly, by allowing rapid assessment of copy number in any clone or clone-end sequence. In addition, we show that partial sequencing can provide access to partial protein-coding sequences.
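The copy-number assessment described reduces, in its simplest form, to a ratio of survey-read depth over a region against the depth observed over known single-copy sequence. The sketch below is a minimal illustration of that idea; the depth values are invented, not from the soybean dataset:

```python
def copy_number(region_depth, single_copy_depth):
    """Estimate the copy number of a region from survey-read depth,
    normalized by the depth over known single-copy sequence."""
    if single_copy_depth <= 0:
        raise ValueError("single-copy depth must be positive")
    return region_depth / single_copy_depth

# Illustrative: a satellite repeat covered at 920x in the survey reads,
# while single-copy genes average 4x coverage
cn = copy_number(920.0, 4.0)
```

In practice the per-region depth would come from mapping the survey reads against clone or contig sequence, and sampling noise would put error bars on the ratio.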
Montagnini, Marcos; Smith, Heather M; Price, Deborah M; Ghosh, Bidisha; Strodtman, Linda
2018-01-01
In the United States, most deaths occur in hospitals, with approximately 25% of hospitalized patients having palliative care needs. Therefore, the provision of good end-of-life (EOL) care to these patients is a priority. However, research assessing staff preparedness for the provision of EOL care to hospitalized patients is lacking. To assess health-care professionals' self-perceived competencies regarding the provision of EOL care in hospitalized patients. Descriptive study of self-perceived EOL care competencies among health-care professionals. The study instrument (End-of-Life Questionnaire) contains 28 questions assessing knowledge, attitudes, and behaviors related to the provision of EOL care. Health-care professionals (nursing, medicine, social work, psychology, physical, occupational and respiratory therapy, and spiritual care) at a large academic medical center participated in the study. Means were calculated for each item, and comparisons of mean scores were conducted via t tests. Analysis of variance was used to identify differences among groups. A total of 1197 questionnaires were completed. The greatest self-perceived competency was in providing emotional support for patients/families, and the least self-perceived competency was in providing continuity of care. When compared to nurses, physicians had higher scores on EOL care attitudes, behaviors, and communication. Physicians and nurses had higher scores on most subscales than other health-care providers. Differences in self-perceived EOL care competencies were identified among disciplines, particularly between physicians and nurses. The results provide evidence for assessing health-care providers to identify their specific training needs before implementing educational programs on EOL care.
Factors associated with self-reported number of teeth in a large national cohort of Thai adults
Directory of Open Access Journals (Sweden)
Yiengprugsawan Vasoontara
2011-11-01
Full Text Available Abstract Background Oral health in later life results from an individual's lifelong accumulation of experiences at the personal, community and societal levels. There is little information relating oral health outcomes to risk factors in Asian middle-income settings such as Thailand today. Methods Data derived from a cohort of 87,134 adults enrolled in Sukhothai Thammathirat Open University who completed self-administered questionnaires in 2005. Cohort members were aged between 15 and 87 years and resided throughout Thailand. This is a large study of self-reported number of teeth among Thai adults. Bivariate and multivariate logistic regressions were used to analyse factors associated with self-reported number of teeth. Results After adjusting for covariates, being female (OR = 1.28), older age (OR = 10.6), having low income (OR = 1.45), having lower education (OR = 1.33), and being a lifetime urban resident (OR = 1.37) were statistically associated with having fewer teeth. Conclusions This study addresses the gap in knowledge on factors associated with self-reported number of teeth. The promotion of healthy childhoods and adult lifestyles are important public health interventions to increase tooth retention in middle and older age.
International Nuclear Information System (INIS)
Guerrero, M; Li, X Allen
2003-01-01
Numerous studies of early-stage breast cancer treated with breast conserving surgery (BCS) and radiotherapy (RT) have been published in recent years. Both external beam radiotherapy (EBRT) and/or brachytherapy (BT) with different fractionation schemes are currently used. The present RT practice is largely based on empirical experience and it lacks a reliable modelling tool to compare different RT modalities or to design new treatment strategies. The purpose of this work is to derive a plausible set of radiobiological parameters that can be used for RT treatment planning. The derivation is based on existing clinical data and is consistent with the analysis of a large number of published clinical studies on early-stage breast cancer. A large number of published clinical studies on the treatment of early breast cancer with BCS plus RT (including whole breast EBRT with or without a boost to the tumour bed, whole breast EBRT alone, brachytherapy alone) and RT alone are compiled and analysed. The linear quadratic (LQ) model is used in the analysis. Three of these clinical studies are selected to derive a plausible set of LQ parameters. The potential doubling time T_pot is set a priori in the derivation according to in vitro measurements from the literature. The impact of considering lower or higher T_pot is investigated. The effects of inhomogeneous dose distributions are considered using clinically representative dose volume histograms. The derived LQ parameters are used to compare a large number of clinical studies using different regimes (e.g., RT modality and/or different fractionation schemes with different prescribed doses) in order to validate their applicability. The values of the equivalent uniform dose (EUD) and biologically effective dose (BED) are used as a common metric to compare the biological effectiveness of each treatment regime. We have obtained a plausible set of radiobiological parameters for breast cancer. This set of parameters is consistent with in vitro
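The BED metric used above follows from the LQ model without repopulation: BED = nd(1 + d/(α/β)) for n fractions of d Gy. A minimal sketch comparing two fractionation schedules follows; the α/β of 4 Gy and the hypofractionated schedule are illustrative assumptions, not the paper's derived parameters:

```python
def bed(n, d, alpha_beta):
    """Biologically effective dose for n fractions of d Gy (LQ model, no repopulation)."""
    return n * d * (1.0 + d / alpha_beta)

def eqd2(n, d, alpha_beta):
    """Equivalent total dose delivered in 2-Gy fractions."""
    return bed(n, d, alpha_beta) / (1.0 + 2.0 / alpha_beta)

# Compare a conventional schedule (25 x 2 Gy) with a hypofractionated one
# (16 x 2.66 Gy), assuming an illustrative alpha/beta of 4 Gy
bed_conv = bed(25, 2.0, 4.0)
bed_hypo = bed(16, 2.66, 4.0)
```

Expressing each regime as a BED (or the equivalent EQD2) puts different prescribed doses and fraction sizes on the common scale the abstract uses to compare treatment regimes.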
International Nuclear Information System (INIS)
Karmarkar, M.G.; Sreeramulu, K.; Kulshreshta, P.K.
2003-01-01
Accelerator multipole magnets are characterized by high field gradients and are powered with relatively high-current excitation coils. Due to space limitations in the magnet core/poles, a compact coil geometry is also necessary. The coils are made of several insulated turns of hollow copper conductor. The high current densities in these coils require cooling with low-conductivity water. Additionally, during operation the coils are subjected to thermal fatigue stresses. A large number of coils (650 nos.) with different geometries were required for all multipole magnets, such as quadrupoles (QP) and sextupoles (SP). Improved techniques for winding, insulation and epoxy consolidation were developed in-house at M D Lab, and all coils have been successfully made. The improved technology and production techniques adopted for the magnet coils, and their inspection, are briefly discussed in this paper. (author)
International Nuclear Information System (INIS)
Thompson, G.A.; Davies, H.M.; McDonald, N.
1985-01-01
A method termed product-selective blotting has been developed for screening large numbers of samples for enzyme activity. The technique is particularly well suited to detection of enzymes in native electrophoresis gels. The principle of the method was demonstrated by blotting samples from glutaminase or glutamate synthase reactions into an agarose gel embedded with ion-exchange resin under conditions favoring binding of product (glutamate) over substrates and other substances in the reaction mixture. After washes to remove these unbound substances, the product was measured using either fluorometric staining or radiometric techniques. Glutaminase activity in native electrophoresis gels was visualized by a related procedure in which substrates and products from reactions run in the electrophoresis gel were blotted directly into a resin-containing image gel. Considering the selective-binding materials available for use in the image gel, along with the possible detection systems, this method has potentially broad application
McConnell, R; Kolthammer, WS; Richerme, P; Müllers, A; Walz, J; Grzonka, D; Zielinski, M; Fitzakerley, D; George, MC; Hessels, EA; Storry, CH; Weel, M
2016-01-01
Lasers are used to control the production of highly excited positronium atoms (Ps*). The laser light excites Cs atoms to Rydberg states that have a large cross section for resonant charge-exchange collisions with cold trapped positrons. For each trial with 30 million trapped positrons, more than 700 000 of the created Ps* have trajectories near the axis of the apparatus, and are detected using Stark ionization. This number of Ps* is 500 times higher than realized in an earlier proof-of-principle demonstration (2004 Phys. Lett. B 597 257). A second charge exchange of these near-axis Ps* with trapped antiprotons could be used to produce cold antihydrogen, and this antihydrogen production is expected to be increased by a similar factor.
International Nuclear Information System (INIS)
Peng Huanwu
2005-01-01
Taking Dirac's large number hypothesis as true, we have shown [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703] the inconsistency of applying Einstein's theory of general relativity with a fixed gravitational constant G to cosmology, and a modified theory for varying G is found, which reduces to Einstein's theory outside the gravitating body for phenomena of short duration over small distances, thereby agreeing with all the crucial tests formerly supporting Einstein's theory. The modified theory, when applied to the usual homogeneous cosmological model, gives rise to a variable cosmological tensor term determined by the derivatives of G, in place of the cosmological constant term usually introduced ad hoc. Without any free parameter, the theoretical Hubble relation obtained from the modified theory seems not to contradict observations, as Dr. Wang's preliminary analysis of the recent data indicates [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703]. As a complement to [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703], we study in this paper the modification of electromagnetism due to Dirac's large number hypothesis in more detail, to show that the approximation of geometric optics still leads to null geodesics for the path of light, and that the general relation between the luminosity distance and the proper geometric distance remains valid in our theory as in Einstein's, and we give the equations for a homogeneous cosmological model involving matter plus electromagnetic radiation. Finally, we consider the impact of the modification on quantum mechanics and statistical mechanics, and arrive at a systematic theory of evolving natural constants, including Planck's ħ as well as Boltzmann's k_B, by finding their cosmologically combined counterparts with factors of appropriate powers of G that may remain truly constant over cosmologically long times.
Directory of Open Access Journals (Sweden)
Emilie Sapin
Full Text Available We recently discovered, using Fos immunostaining, that the tuberal and mammillary hypothalamus contain a massive population of neurons specifically activated during paradoxical sleep (PS) hypersomnia. We further showed that some of the activated neurons of the tuberal hypothalamus express the melanin-concentrating hormone (MCH) neuropeptide and that icv injection of MCH induces a strong increase in PS quantity. However, the chemical nature of the majority of the neurons activated during PS had not been characterized. To determine whether these neurons are GABAergic, we combined in situ hybridization of GAD(67) mRNA with immunohistochemical detection of Fos in control, PS-deprived and PS-hypersomniac rats. We found that 74% of the very large population of Fos-labeled neurons located in the tuberal hypothalamus after PS hypersomnia were GAD-positive. We further demonstrated, combining MCH immunohistochemistry and GAD(67) in situ hybridization, that 85% of the MCH neurons were also GAD-positive. Finally, based on the number of Fos-ir/GAD(+), Fos-ir/MCH(+), and GAD(+)/MCH(+) double-labeled neurons counted from three sets of double-staining, we uncovered that around 80% of the large number of Fos-ir/GAD(+) neurons located in the tuberal hypothalamus after PS hypersomnia do not contain MCH. Based on these and previous results, we propose that the non-MCH Fos/GABAergic neuronal population could be involved in PS induction and maintenance while the Fos/MCH/GABAergic neurons could be involved in the homeostatic regulation of PS. Further investigations will be needed to corroborate this original hypothesis.
Neggers, R.
2017-12-01
Recent advances in supercomputing have introduced a "grey zone" in the representation of cumulus convection in general circulation models, in which this process is partially resolved. Cumulus parameterizations need to be made scale-aware and scale-adaptive to be able to deal with this situation, conceptually and practically. A potential way forward is offered by schemes formulated in terms of discretized Cloud Size Densities, or CSDs. Advantages include i) the introduction of scale-awareness at the foundation of the scheme, and ii) the possibility of applying size-filtering to parameterized convective transport and clouds. The CSD is a new variable that requires closure; this concerns its shape and its range, but also the variability in cloud number that can appear due to i) subsampling effects and ii) organization in a cloud field. The goal of this study is to gain insight by means of sub-domain analyses of various large-domain LES realizations of cumulus cloud populations. For a series of three-dimensional snapshots, each with a different degree of organization, the cloud size distribution is calculated in all subdomains, for a range of subdomain sizes. The standard deviation of the number of clouds of a certain size is found to decrease with the subdomain size, following a power-law scaling corresponding to an inverse-linear dependence. Cloud number variability also increases with cloud size; this reflects that subsampling affects the largest clouds first, due to their typically larger neighbor spacing. Rewriting this dependence in terms of two dimensionless groups, by dividing by cloud number and cloud size respectively, yields a data collapse. Organization in the cloud field is found to act on top of this primary dependence, by enhancing the cloud number variability at the smaller sizes. This behavior reflects that small clouds start to "live" on top of larger structures such as cold pools, which favor or inhibit their formation (as illustrated by the attached figure of cloud masks).
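The pure subsampling contribution to cloud number variability can be illustrated with an idealized, spatially random (Poisson) point field in place of LES cloud data: the count in a subdomain of linear size L has mean ∝ L² and standard deviation ∝ L, so the relative variability falls off inverse-linearly with L. A minimal sketch (all sizes and counts are arbitrary choices, not values from the study):

```python
import numpy as np

# Idealized "cloud field": points scattered uniformly at random over a
# square domain. For each subdomain size L, count the points per subdomain
# and compute the relative count variability std(N)/mean(N), which for a
# Poisson field scales as 1/L (halving for each doubling of L).

rng = np.random.default_rng(0)
domain = 512                       # linear size of the full domain
n_clouds = 20000                   # number of point "clouds"
x = rng.uniform(0, domain, n_clouds)
y = rng.uniform(0, domain, n_clouds)

rels = {}
for L in (32, 64, 128):            # subdomain linear sizes
    nbins = domain // L
    counts, _, _ = np.histogram2d(x, y, bins=nbins,
                                  range=[[0, domain], [0, domain]])
    rels[L] = counts.std() / counts.mean()
    print(f"L={L:4d}  std/mean = {rels[L]:.3f}")
```

Organization in a real cloud field would add variability on top of this random-field baseline, which is the separation the study exploits.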
International Nuclear Information System (INIS)
Li Zheng; Zhang Dongjie; Ma Linwei; West, Logan; Ni Weidou
2011-01-01
CCS is seen as an important and strategic technology option for China to reduce its CO2 emissions, and has received tremendous attention both around the world and in China. Scholars are divided on the role CCS should play, making the future of CCS in China highly uncertain. This paper presents the overall circumstances for CCS development in China, including the threats and opportunities for large-scale deployment of CCS, the initial barriers and advantages that China currently possesses, and the current progress of CCS demonstration in China. The paper proposes the implementation of a limited number of larger-scale, fully integrated CCS demonstration projects and explains the potential benefits that could be garnered. The problems with China's current CCS demonstration work are analyzed, and some targeted policies are proposed based on those observations. These policy suggestions can effectively solve these problems, help China soon gain the benefits of CCS demonstration, and make a great contribution to China's large CO2 reduction mission. - Highlights: → We analyze the overall circumstances for CCS development in China in detail. → China can garner multiple benefits by conducting several large, integrated CCS demos. → We present the current progress in CCS demonstration in China in detail. → Some problems exist with China's current CCS demonstration work. → Some focused policies are suggested to improve CCS demonstration in China.
International Nuclear Information System (INIS)
Kun, S.Yu.
1985-01-01
On the basis of the symmetrized Simonius representation of the S matrix, the statistical properties of its fluctuating component in the presence of direct reactions are investigated. The case is considered in which the resonance levels are strongly overlapping and there are many open channels, assuming that the compound-nucleus cross sections coupling different channels are equal. It is shown that, using the averaged unitarity condition on the real energy axis, one can eliminate both resonance-resonance and channel-channel correlations from the partial transition amplitudes. As a result, we derive the basic points of the Ericson fluctuation theory of nuclear cross sections, independently of the relation between the resonance overlapping and the number of open channels, and the validity of the Hauser-Feshbach model is established. If the number of open channels is large, the time of uniform population of compound-nucleus configurations, for an open excited nuclear system, is much smaller than the Poincaré time. The lifetime of the compound nucleus is discussed.
Law of large numbers for the SIR model with random vertex weights on Erdős-Rényi graph
Xue, Xiaofeng
2017-11-01
In this paper we are concerned with the SIR model with random vertex weights on the Erdős-Rényi graph G(n, p). The Erdős-Rényi graph G(n, p) is generated from the complete graph C_n with n vertices by independently deleting each edge with probability (1 - p). We assign i.i.d. copies of a positive random variable ρ to each vertex as the vertex weights. In the SIR model, each vertex is in one of the three states 'susceptible', 'infective' and 'removed'. An infective vertex infects a given susceptible neighbor at a rate proportional to the product of the weights of these two vertices. An infective vertex becomes removed at a constant rate. A removed vertex will never be infected again. We assume that at t = 0 there is no removed vertex and the number of infective vertices follows a Bernoulli distribution B(n, θ). Our main result is a law of large numbers for the model. We give two deterministic functions H_S(ψ_t), H_V(ψ_t) for t ≥ 0 and show that for any t ≥ 0, H_S(ψ_t) is the limit proportion of susceptible vertices and H_V(ψ_t) is the limit of the mean capability of an infective vertex to infect a given susceptible neighbor at moment t, as n grows to infinity.
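The dynamics described above can be sketched as a small Monte Carlo simulation: an Erdős-Rényi graph, i.i.d. vertex weights, infection of a susceptible neighbor at rate λ·ρ_i·ρ_j, and removal at constant rate μ. This uses a small-time-step approximation of the continuous-time chain, and every parameter value is an illustrative choice, not one from the paper:

```python
import numpy as np

# Monte Carlo sketch of the weighted SIR model on G(n, p).
# States: 0 = susceptible, 1 = infective, 2 = removed.
# Each vertex is initially infective with probability theta, and a
# susceptible vertex j is infected by an infective neighbor i at rate
# lam * rho_i * rho_j; infectives are removed at rate mu.
# Discrete time steps of size dt approximate the continuous-time dynamics.

rng = np.random.default_rng(1)
n, p = 400, 0.05
lam, mu, theta = 0.3, 1.0, 0.05
dt, t_max = 0.01, 5.0

upper = np.triu(rng.random((n, n)) < p, 1)     # Erdős-Rényi edges
adj = (upper | upper.T).astype(float)          # symmetric adjacency matrix
rho = rng.uniform(0.5, 1.5, n)                 # i.i.d. positive vertex weights

state = np.where(rng.random(n) < theta, 1, 0)

t = 0.0
while t < t_max and (state == 1).any():
    infective = (state == 1).astype(float)
    # total infection rate felt by each vertex j: lam * rho_j * sum_i rho_i
    pressure = lam * rho * (adj @ (rho * infective))
    new_inf = (state == 0) & (rng.random(n) < pressure * dt)
    new_rem = (state == 1) & (rng.random(n) < mu * dt)
    state[new_inf] = 1
    state[new_rem] = 2
    t += dt

print("final proportions S/I/R:",
      [(state == s).mean().round(3) for s in (0, 1, 2)])
```

Averaging the susceptible proportion over many such runs for growing n is what the law of large numbers in the paper makes precise.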
Energy Technology Data Exchange (ETDEWEB)
Monty, J.P.; Lien, K.; Chong, M.S. [University of Melbourne, Department of Mechanical Engineering, Parkville, VIC (Australia); Allen, J.J. [New Mexico State University, Department of Mechanical Engineering, Las Cruces, NM (United States)
2011-12-15
A high-Reynolds-number boundary-layer wind-tunnel facility at New Mexico State University was fitted with a regularly distributed braille surface. The surface was such that braille dots were closely packed in the streamwise direction and sparsely spaced in the spanwise direction. This novel surface had an unexpected influence on the flow: the energy of the very large-scale features of wall turbulence (approximately six times the boundary-layer thickness in length) became significantly attenuated, even into the logarithmic region. To the authors' knowledge, this is the first experimental study to report a modification of 'superstructures' in a rough-wall turbulent boundary layer. The result gives rise to the possibility that flow control through very small, passive surface roughness may be possible at high Reynolds numbers, without the prohibitive drag penalty anticipated heretofore. Evidence was also found for the uninhibited existence of the near-wall cycle, well known to smooth-wall-turbulence researchers, in the spanwise space between roughness elements. (orig.)
Directory of Open Access Journals (Sweden)
Mandy Muller
Full Text Available Human Papillomaviruses (HPV cause widespread infections in humans, resulting in latent infections or diseases ranging from benign hyperplasia to cancers. HPV-induced pathologies result from complex interplays between viral proteins and the host proteome. Given the major public health concern due to HPV-associated cancers, most studies have focused on the early proteins expressed by HPV genotypes with high oncogenic potential (designated high-risk HPV or HR-HPV. To advance the global understanding of HPV pathogenesis, we mapped the virus/host interaction networks of the E2 regulatory protein from 12 genotypes representative of the range of HPV pathogenicity. Large-scale identification of E2-interaction partners was performed by yeast two-hybrid screenings of a HaCaT cDNA library. Based on a high-confidence scoring scheme, a subset of these partners was then validated for pair-wise interaction in mammalian cells with the whole range of the 12 E2 proteins, allowing a comparative interaction analysis. Hierarchical clustering of E2-host interaction profiles mostly recapitulated HPV phylogeny and provides clues to the involvement of E2 in HPV infection. A set of cellular proteins could thus be identified discriminating, among the mucosal HPV, E2 proteins of HR-HPV 16 or 18 from the non-oncogenic genital HPV. The study of the interaction networks revealed a preferential hijacking of highly connected cellular proteins and the targeting of several functional families. These include transcription regulation, regulation of apoptosis, RNA processing, ubiquitination and intracellular trafficking. The present work provides an overview of E2 biological functions across multiple HPV genotypes.
Muller, Mandy; Jacob, Yves; Jones, Louis; Weiss, Amélie; Brino, Laurent; Chantier, Thibault; Lotteau, Vincent; Favre, Michel; Demeret, Caroline
2012-01-01
Human Papillomaviruses (HPV) cause widespread infections in humans, resulting in latent infections or diseases ranging from benign hyperplasia to cancers. HPV-induced pathologies result from complex interplays between viral proteins and the host proteome. Given the major public health concern due to HPV-associated cancers, most studies have focused on the early proteins expressed by HPV genotypes with high oncogenic potential (designated high-risk HPV or HR-HPV). To advance the global understanding of HPV pathogenesis, we mapped the virus/host interaction networks of the E2 regulatory protein from 12 genotypes representative of the range of HPV pathogenicity. Large-scale identification of E2-interaction partners was performed by yeast two-hybrid screenings of a HaCaT cDNA library. Based on a high-confidence scoring scheme, a subset of these partners was then validated for pair-wise interaction in mammalian cells with the whole range of the 12 E2 proteins, allowing a comparative interaction analysis. Hierarchical clustering of E2-host interaction profiles mostly recapitulated HPV phylogeny and provides clues to the involvement of E2 in HPV infection. A set of cellular proteins could thus be identified discriminating, among the mucosal HPV, E2 proteins of HR-HPV 16 or 18 from the non-oncogenic genital HPV. The study of the interaction networks revealed a preferential hijacking of highly connected cellular proteins and the targeting of several functional families. These include transcription regulation, regulation of apoptosis, RNA processing, ubiquitination and intracellular trafficking. The present work provides an overview of E2 biological functions across multiple HPV genotypes.
Huber, Stefan; Nuerk, Hans-Christoph; Reips, Ulf-Dietrich; Soltanlou, Mojtaba
2017-12-23
Symbolic magnitude comparison is one of the most well-studied cognitive processes in research on numerical cognition. However, while the cognitive mechanisms of symbolic magnitude processing have been intensively studied, previous studies have paid less attention to individual differences influencing symbolic magnitude comparison. Employing a two-digit number comparison task in an online setting, we replicated previous effects, including the distance effect, the unit-decade compatibility effect, and the effect of cognitive control on the adaptation to filler items, in a large-scale study of 452 adults. Additionally, we observed that the most influential individual differences were participants' first language, time spent playing computer games and gender, followed by reported alcohol consumption, age and mathematical ability. Participants who used a first language with a left-to-right reading/writing direction were faster than those who read and wrote in the right-to-left direction. Reported playing time for computer games was correlated with faster reaction times. Female participants showed slower reaction times and a larger unit-decade compatibility effect than male participants. Participants who reported never consuming alcohol showed overall slower response times than others. Older participants were slower, but more accurate. Finally, higher grades in mathematics were associated with faster reaction times. We conclude that typical experiments on numerical cognition that employ a keyboard as an input device can also be run in an online setting. Moreover, while individual differences have no influence on domain-specific magnitude processing (apart from age, which increases the decade distance effect), they generally influence performance on a two-digit number comparison task.
Feldmann, Daniel; Bauer, Christian; Wagner, Claus
2018-03-01
We present results from direct numerical simulations (DNS) of turbulent pipe flow at shear Reynolds numbers up to Reτ = 1500 using different computational domains with lengths up to ?. The objectives are to analyse the effect of the finite size of the periodic pipe domain on large flow structures in dependence on Reτ, and to assess the minimum ? required for the relevant turbulent scales to be captured and the minimum Reτ for very large-scale motions (VLSM) to be analysed. Analysing one-point statistics revealed that the mean velocity profile is invariant for ?. The wall-normal location at which deviations occur in shorter domains changes strongly with increasing Reτ, from the near-wall region to the outer layer, where VLSM are believed to live. The root-mean-square velocity profiles exhibit domain-length dependencies for pipes shorter than 14R and 7R, depending on Reτ. For all Reτ, the higher-order statistical moments show only weak dependencies, and only for the shortest domain considered here. However, the analysis of one- and two-dimensional pre-multiplied energy spectra revealed that even for larger ?, not all physically relevant scales are fully captured, even though the aforementioned statistics are in good agreement with the literature. We found ? to be sufficiently large to capture VLSM-relevant turbulent scales in the considered range of Reτ, based on our definition of an integral energy threshold of 10%. The requirement to capture at least 1/10 of the global maximum energy level is justified by a 14% increase of the streamwise turbulence intensity in the outer region between Reτ = 720 and 1500, which can be related to VLSM-relevant length scales. Based on this scaling anomaly, we found Reτ ⪆ 1500 to be a necessary minimum requirement to investigate VLSM-related effects in pipe flow, even though the streamwise energy spectra do not yet indicate sufficient scale separation between the most energetic and the very long motions.
2014-01-01
Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients’ experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team’s reflexive statements to illustrate the development of our methods. PMID:24951054
Toye, Francine; Seers, Kate; Allcock, Nick; Briggs, Michelle; Carr, Eloise; Barker, Karen
2014-06-21
Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients' experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team's reflexive statements to illustrate the development of our methods.
International Nuclear Information System (INIS)
Selander, W.N.; Lane, F.E.; Rowat, J.H.
1995-05-01
A groundwater mass transfer calculation is an essential part of the performance assessment for radioactive waste disposal facilities. AECL's IRUS (Intrusion Resistant Underground Structure) facility, which is designed for the near-surface disposal of low-level radioactive waste (LLRW), is to be situated in the sandy overburden at AECL's Chalk River Laboratories. Flow in the sandy aquifers at the proposed IRUS site is relatively homogeneous and advection-dominated (large Peclet numbers). Mass transfer along the mean direction of flow from the IRUS site may be described using the one-dimensional advection-dispersion equation, for which a Green's function representation of downstream radionuclide flux is convenient. This report shows that in advection-dominated aquifers, dispersive attenuation of initial contaminant releases depends principally on two time scales: the source duration and the pulse breakthrough time. Numerical investigation shows further that the maximum downstream flux or concentration depends on these time scales in a simple characteristic way that is minimally sensitive to the shape of the initial source pulse. (author). 11 refs., 2 tabs., 3 figs
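The breakthrough behaviour discussed above can be illustrated with the textbook Green's function of the one-dimensional advection-dispersion equation for an instantaneous unit release; the velocity, dispersion coefficient and travel distance below are illustrative values, not the IRUS site parameters:

```python
import numpy as np

# Textbook Green's function of the 1-D advection-dispersion equation for an
# instantaneous unit release at x = 0, t = 0 (not the report's own
# expressions): with mean velocity v and dispersion coefficient D,
#   g(x, t) = exp(-(x - v t)^2 / (4 D t)) / sqrt(4 pi D t)
# Advection dominance is measured by the Peclet number Pe = v L / D.

def greens_1d(x, t, v, D):
    return np.exp(-(x - v * t) ** 2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

v, D, L = 1.0, 0.01, 10.0          # illustrative: Pe = v L / D = 1000
t = np.linspace(0.1, 20.0, 2000)
flux = v * greens_1d(L, t, v, D)   # advective flux past the plane x = L

t_peak = t[np.argmax(flux)]
print(f"Pe = {v * L / D:.0f}, breakthrough peak near t = {t_peak:.2f} "
      f"(advective travel time L/v = {L / v:.1f})")
```

At large Peclet number the pulse arrives as a narrow peak near the advective travel time L/v, which is why the source duration and pulse breakthrough time emerge as the two controlling time scales.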
Ooi, Seng-Keat
2005-11-01
Lock-exchange gravity current flows produced by the instantaneous release of a heavy fluid are investigated using well-resolved three-dimensional large eddy simulations (LES) at Grashof numbers up to 8×10^9. It is found that the 3-D simulations correctly predict a constant front velocity over the initial slumping phase and a front-speed decrease proportional to t^(-1/3) (the time t is measured from the release) over the inviscid phase, in agreement with theory. The evolution of the current in the simulations is found to be similar to that observed experimentally by Hacker et al. (1996). The effect of the dynamic LES model on the solutions is discussed. The energy budget of the current is discussed and the contribution of the turbulent dissipation to the total dissipation is analyzed. The limitations of less expensive 2-D simulations are discussed, in particular their failure to correctly predict the spatio-temporal distributions of the bed shear stress, which is important in determining the amount of sediment the gravity current can entrain when it advances over a loose bed.
Yilmaz, Zeynep; Szatkiewicz, Jin P; Crowley, James J; Ancalade, NaEshia; Brandys, Marek K; van Elburg, Annemarie; de Kovel, Carolien G F; Adan, Roger A H; Hinney, Anke; Hebebrand, Johannes; Gratacos, Monica; Fernandez-Aranda, Fernando; Escaramis, Georgia; Gonzalez, Juan R; Estivill, Xavier; Zeggini, Eleftheria; Sullivan, Patrick F; Bulik, Cynthia M
2017-08-01
Anorexia nervosa (AN) is a serious and heritable psychiatric disorder. To date, studies of copy number variants (CNVs) have been limited and inconclusive because of small sample sizes. We conducted a case-only genome-wide CNV survey in 1983 female AN cases included in the Genetic Consortium for Anorexia Nervosa. Following stringent quality control procedures, we investigated whether pathogenic CNVs in regions previously implicated in psychiatric and neurodevelopmental disorders were present in AN cases. We observed two instances of the well-established pathogenic CNVs in AN cases. In addition, one case had a deletion in the 13q12 region, overlapping with a deletion reported previously in two AN cases. As a secondary aim, we also examined our sample for CNVs over 1 Mbp in size. Out of the 40 instances of such large CNVs that were not implicated previously for AN or neuropsychiatric phenotypes, two of them contained genes with previous neuropsychiatric associations, and only five of them had no associated reports in public CNV databases. Although ours is the largest study of its kind in AN, larger datasets are needed to comprehensively assess the role of CNVs in the etiology of AN.
Meng, Xuhui; Guo, Zhaoli
2015-10-01
A lattice Boltzmann model with a multiple-relaxation-time (MRT) collision operator is proposed for incompressible miscible flow with a large viscosity ratio as well as a high Péclet number in this paper. The equilibria in the present model are motivated by the lattice kinetic scheme previously developed by Inamuro et al. [Philos. Trans. R. Soc. London, Ser. A 360, 477 (2002), 10.1098/rsta.2001.0942]. The fluid viscosity and diffusion coefficient depend on both the corresponding relaxation times and additional adjustable parameters in this model. As a result, the corresponding relaxation times can be adjusted in proper ranges to enhance the performance of the model. Numerical validations of the Poiseuille flow and a diffusion-reaction problem demonstrate that the proposed model has second-order accuracy in space. Thereafter, the model is used to simulate flow through a porous medium, and the results show that the proposed model has the advantage to obtain a viscosity-independent permeability, which makes it a robust method for simulating flow in porous media. Finally, a set of simulations are conducted on the viscous miscible displacement between two parallel plates. The results reveal that the present model can be used to simulate, to a high level of accuracy, flows with large viscosity ratios and/or high Péclet numbers. Moreover, the present model is shown to provide superior stability in the limit of high kinematic viscosity. In summary, the numerical results indicate that the present lattice Boltzmann model is an ideal numerical tool for simulating flow with a large viscosity ratio and/or a high Péclet number.
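The numerical difficulty the model addresses can be sketched with the generic lattice-units BGK relations (c_s² = 1/3), which tie viscosity and diffusivity to their relaxation times; these are not the additional adjustable parameters of the MRT scheme above, but they show why a large viscosity ratio or a high Péclet number pushes a relaxation time toward the 0.5 stability limit:

```python
# Generic BGK relations in lattice units (c_s^2 = 1/3), shown for
# illustration only; the MRT model described above introduces extra
# adjustable parameters precisely to relax these constraints:
#   nu = c_s^2 * (tau_nu - 0.5)   (kinematic viscosity)
#   Dm = c_s^2 * (tau_D  - 0.5)   (diffusion coefficient)

CS2 = 1.0 / 3.0

def tau_from_nu(nu):
    """Relaxation time needed for a given transport coefficient."""
    return nu / CS2 + 0.5

def dm_from_tau(tau_d):
    """Diffusion coefficient implied by a given relaxation time."""
    return CS2 * (tau_d - 0.5)

U, L = 0.05, 100   # illustrative characteristic lattice velocity and length
for tau_d in (0.52, 0.505, 0.5005):
    Dm = dm_from_tau(tau_d)
    print(f"tau_D = {tau_d:<7} Dm = {Dm:.2e}  Pe = U*L/Dm = {U * L / Dm:.0f}")
```

As the table shows, reaching high Péclet numbers with plain BGK forces τ_D ever closer to 0.5, where the scheme loses stability; decoupling the coefficients from the relaxation times is the design motivation behind models of this kind.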
International Nuclear Information System (INIS)
McJeon, Haewon C.; Clarke, Leon; Kyle, Page; Wise, Marshall; Hackbarth, Andrew; Bryant, Benjamin P.; Lempert, Robert J.
2011-01-01
Advanced low-carbon energy technologies can substantially reduce the cost of stabilizing atmospheric carbon dioxide concentrations. Understanding the interactions between these technologies and their impact on the costs of stabilization can help inform energy policy decisions. Many previous studies have addressed this challenge by exploring a small number of representative scenarios that represent particular combinations of future technology developments. This paper uses a combinatorial approach in which scenarios are created for all combinations of the technology development assumptions that underlie a smaller, representative set of scenarios. We estimate stabilization costs for 768 runs of the Global Change Assessment Model (GCAM), based on 384 different combinations of assumptions about the future performance of technologies and two stabilization goals. Graphical depiction of the distribution of stabilization costs provides first-order insights about the full data set and individual technologies. We apply a formal scenario discovery method to obtain more nuanced insights about the combinations of technology assumptions most strongly associated with high-cost outcomes. Many of the fundamental insights from traditional representative scenario analysis still hold under this comprehensive combinatorial analysis. For example, the importance of carbon capture and storage (CCS) and the substitution effect among supply technologies are consistently demonstrated. The results also provide more clarity regarding insights not easily demonstrated through representative scenario analysis. For example, they show more clearly how certain supply technologies can provide a hedge against high stabilization costs, and that aggregate end-use efficiency improvements deliver relatively consistent stabilization cost reductions. Furthermore, the results indicate that a lack of CCS options combined with lower technological advances in the buildings sector or the transportation sector is
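The combinatorial scenario design described above amounts to taking the Cartesian product of per-technology assumption sets; the dimensions and levels below are hypothetical placeholders, not the actual GCAM assumption set (which yielded 384 combinations and 768 runs):

```python
import itertools

# Sketch of a combinatorial scenario design: every combination of
# per-technology assumptions becomes one model run per stabilization goal.
# The dimensions and levels here are hypothetical placeholders, not the
# actual GCAM technology assumptions.

tech_levels = {
    "ccs":        ["none", "reference", "advanced"],
    "nuclear":    ["reference", "advanced"],
    "renewables": ["reference", "advanced"],
    "end_use":    ["reference", "advanced"],
}
goals = ["450ppm", "550ppm"]      # two stabilization goals

scenarios = [
    dict(zip(tech_levels, combo), goal=goal)
    for combo in itertools.product(*tech_levels.values())
    for goal in goals
]

print(f"{len(scenarios)} model runs "
      f"({len(scenarios) // len(goals)} technology combinations x {len(goals)} goals)")
```

Sweeping the full product rather than a handful of representative scenarios is what allows scenario discovery methods to isolate the assumption combinations associated with high-cost outcomes.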
Martin, Peter; Davies, Roger; Macdougall, Amy; Ritchie, Benjamin; Vostanis, Panos; Whale, Andy; Wolpert, Miranda
2017-09-01
Case-mix classification is a focus of international attention in considering how best to manage and fund services, by providing a basis for fairer comparison of resource utilization. Yet there is little evidence of the best ways to establish case mix for child and adolescent mental health services (CAMHS). Our aims were to develop a case-mix classification for CAMHS that is clinically meaningful and predictive of the number of appointments attended, and to investigate the influence of presenting problems, context and complexity factors, and provider variation. We analysed 4573 completed episodes of outpatient care from 11 English CAMHS. Cluster analysis, regression trees and a conceptual classification based on clinical best practice guidelines were compared regarding their ability to predict the number of appointments, using mixed-effects negative binomial regression. The conceptual classification is clinically meaningful and did as well as data-driven classifications in accounting for the number of appointments. There was little evidence for effects of complexity or context factors, with the possible exception of school attendance problems. Substantial variation in resource provision between providers was not explained well by case mix. The conceptually derived classification merits further testing and development in the context of collaborative decision making.
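Negative binomial regression is used here because appointment counts are overdispersed (variance exceeds the mean), which a Poisson model cannot capture. A minimal sketch of that property, with made-up case-mix groups and a method-of-moments estimate of the dispersion parameter (the real analysis fits a full mixed-effects regression):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical case-mix groups with different expected appointment counts.
group_means = {"anxiety": 6.0, "conduct": 4.0, "complex_neuro": 12.0}
theta = 2.0  # NB dispersion: Var = mu + mu^2 / theta (overdispersed vs Poisson)

def simulate_counts(mu, theta, n, rng):
    # Negative binomial counts via the gamma-Poisson mixture representation.
    lam = rng.gamma(shape=theta, scale=mu / theta, size=n)
    return rng.poisson(lam)

counts = {g: simulate_counts(mu, theta, 5000, rng) for g, mu in group_means.items()}

for g, x in counts.items():
    mu_hat, var_hat = x.mean(), x.var()
    theta_hat = mu_hat**2 / (var_hat - mu_hat)  # method-of-moments dispersion
    print(g, round(mu_hat, 1), round(theta_hat, 1))
```

For every group the sample variance exceeds the mean, which is exactly the feature that motivates the negative binomial over the Poisson for count outcomes like attended appointments.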
International Nuclear Information System (INIS)
Chiang, Yi-Kuan; Gebhardt, Karl; Overzier, Roderik
2014-01-01
To demonstrate the feasibility of studying the epoch of massive galaxy cluster formation in a more systematic manner using current and future galaxy surveys, we report the discovery of a large sample of protocluster candidates in the 1.62 deg^2 COSMOS/UltraVISTA field traced by optical/infrared-selected galaxies using photometric redshifts. By comparing properly smoothed three-dimensional galaxy density maps of the observations and a set of matched simulations incorporating the dominant observational effects (galaxy selection and photometric redshift uncertainties), we first confirm that the observed ∼15 comoving Mpc-scale galaxy clustering is consistent with ΛCDM models. Further, using the relation between high-z overdensity and present-day cluster mass calibrated in these matched simulations, we find 36 candidate structures at 1.6 < z < 3.1, showing overdensities consistent with the progenitors of M_z=0 ∼ 10^15 M_☉ clusters. Taking into account the significant upward scattering of lower-mass structures, the probabilities for the candidates to have at least M_z=0 ∼ 10^14 M_☉ are ∼70%. For each structure, about 15%-40% of photometric galaxy candidates are expected to be true protocluster members that will merge into a cluster-scale halo by z = 0. With solely photometric redshifts, we successfully rediscover two spectroscopically confirmed structures in this field, suggesting that our algorithm is robust. This work generates a large sample of uniformly selected protocluster candidates, providing rich targets for spectroscopic follow-up and subsequent studies of cluster formation. Meanwhile, it demonstrates the potential for probing early cluster formation with upcoming redshift surveys such as the Hobby-Eberly Telescope Dark Energy Experiment and the Subaru Prime Focus Spectrograph survey.
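The detection pipeline, smoothing a 3-D galaxy density grid, converting to overdensity, and flagging peaks above a calibrated threshold, can be sketched as follows. The grid size, smoothing scale, and 5-sigma cut are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(42)

def gaussian_smooth(field, sigma):
    """FFT-based Gaussian smoothing (periodic boundaries), numpy only."""
    freqs = np.meshgrid(*[np.fft.fftfreq(n) for n in field.shape], indexing="ij")
    k2 = sum(f**2 for f in freqs)
    kernel = np.exp(-2.0 * (np.pi * sigma) ** 2 * k2)
    return np.fft.ifftn(np.fft.fftn(field) * kernel).real

# Toy galaxy number counts on a 3-D grid (cells ~ a few comoving Mpc).
grid = rng.poisson(lam=2.0, size=(64, 64, 64)).astype(float)

# Inject a "protocluster": a localized excess of galaxies.
grid[30:34, 30:34, 30:34] += 6.0

# Smooth (an assumed sigma of 2 cells standing in for the ~15 Mpc scale) and
# convert to overdensity delta = n/<n> - 1, as done for both the data and
# the matched simulations.
smoothed = gaussian_smooth(grid, sigma=2.0)
delta = smoothed / smoothed.mean() - 1.0

# Candidate = peak above a threshold that the real analysis calibrates
# against simulated z=0 cluster masses; 5 sigma here is an arbitrary stand-in.
threshold = 5.0 * delta.std()
peak = np.unravel_index(np.argmax(delta), delta.shape)
print(peak, bool(delta.max() > threshold))
```

The injected excess is recovered as the strongest overdensity peak; in the real analysis the mapping from peak overdensity to z = 0 cluster mass comes from the matched simulations rather than a fixed sigma cut.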
Lee, Lesley R.; Mason, Sara S.; Babiak-Vazquez, Adriana; Ray, Stacie L.; Van Baalen, Mary
2015-01-01
Since the 2010 NASA authorization to make the Life Sciences Data Archive (LSDA) and Lifetime Surveillance of Astronaut Health (LSAH) data archives more accessible by the research and operational communities, demand for data has greatly increased. Correspondingly, both the number and scope of requests have increased, from 142 requests fulfilled in 2011 to 224 in 2014, and with some datasets comprising up to 1 million data points. To meet the demand, the LSAH and LSDA Repositories project was launched, which allows active and retired astronauts to authorize full, partial, or no access to their data for research without individual, study-specific informed consent. A one-on-one personal informed consent briefing is required to fully communicate the implications of the several tiers of consent. Due to the need for personal contact to conduct Repositories consent meetings, the rate of consenting has not kept up with demand for individualized, possibly attributable data. As a result, other methods had to be implemented to allow the release of large datasets, such as release of only de-identified data. However, the compilation of large, de-identified data sets places a significant resource burden on LSAH and LSDA and may result in diminished scientific usefulness of the dataset. As a result, LSAH and LSDA worked with the JSC Institutional Review Board Chair, Astronaut Office physicians, and NASA Office of General Counsel personnel to develop a "Remote Consenting" process for retrospective data mining studies. This is particularly useful since the majority of the astronaut cohort is retired from the agency and living outside the Houston area. Originally planned as a method to send informed consent briefing slides and consent forms only by mail, Remote Consenting has evolved into a means to accept crewmember decisions on individual studies via their method of choice: email or paper copy by mail. To date, 100 emails have been sent to request participation in eight HRP
Bulf, Hermann; de Hevia, Maria Dolores; Macchi Cassia, Viola
2016-01-01
Numbers are represented as ordered magnitudes along a spatially oriented number line. While culture and formal education modulate the direction of this number-space mapping, it is a matter of debate whether its emergence is entirely driven by cultural experience. By registering 8-9-month-old infants' eye movements, this study shows that numerical…
Heping, Wang; Xiaoguang, Li; Duyang, Zang; Rui, Hu; Xingguo, Geng
2017-11-01
This paper explores phase separation in a magnetic field using a lattice Boltzmann method (LBM) coupled with magnetohydrodynamics (MHD). The left vertical wall was kept at a constant magnetic field. Simulations were conducted under a strong magnetic field to enhance phase separation and increase the size of the separated phases. The focus was on the effect of magnetic intensity, characterized by the Hartmann number (Ha), on the phase separation properties. The numerical investigation was carried out for different governing parameters, namely Ha and the component ratio of the mixed liquid. The morphological evolutions of phase separation in different magnetic fields were demonstrated. The patterns showed that slanted elliptical phases were created by increasing Ha, due to the formation and growth of magnetic torque and force. The growth kinetics of magnetic phase separation were characterized by the spherically averaged structure factor and the ratio of the separated phases to the total system. The results indicate that increasing Ha can increase the average size of the separated phases and accelerate the spinodal decomposition and domain growth stages. Especially for larger component ratios of the mixed phases, the degree of separation was also significantly improved by increasing the magnetic intensity. These numerical results provide guidance for setting the optimum conditions for phase separation induced by a magnetic field.
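The spherically averaged structure factor used to quantify domain growth can be computed from any scalar field via an FFT followed by radial binning. A 2-D sketch on a synthetic domain pattern (the real analysis applies this to the LBM composition field; all sizes and filter scales below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-D "order parameter" field standing in for the simulated composition
# field: white noise low-pass filtered to produce smooth domains.
n = 128
f = np.fft.fftfreq(n)
kx, ky = np.meshgrid(f, f, indexing="ij")
k = np.sqrt(kx**2 + ky**2)
phi = np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n)))
                   * np.exp(-((k / 0.05) ** 2))).real
phi -= phi.mean()

# Circularly averaged structure factor S(k): the quantity the paper uses
# to track the growth of separated domains.
S = np.abs(np.fft.fft2(phi)) ** 2 / phi.size
k_bins = np.linspace(0.0, 0.5, 26)
which = np.digitize(k.ravel(), k_bins)
S_avg = np.array([S.ravel()[which == i].mean() if np.any(which == i) else 0.0
                  for i in range(1, len(k_bins))])

# The characteristic domain size scales as 1 / k_peak of S(k).
k_mid = 0.5 * (k_bins[1:] + k_bins[:-1])
k_peak = k_mid[np.argmax(S_avg)]
print(round(float(k_peak), 3))
```

During coarsening, the peak of S(k) moves to smaller k over time, which is how the spinodal decomposition and domain growth stages are read off the structure factor.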
Willems, Sara M.; Wright, D.J.; Day, Felix R.; Trajanoska, Katerina; Joshi, P.K.; Morris, John A.; Matteini, Amy M.; Garton, Fleur C.; Grarup, Niels; Oskolkov, Nikolay; Thalamuthu, Anbupalam; Mangino, Massimo; Liu, Jun; Demirkan, Ayse; Lek, Monkol; Xu, Liwen; Wang, Guan; Oldmeadow, Christopher; Gaulton, Kyle J.; Lotta, Luca A.; Miyamoto-Mikami, Eri; Rivas, Manuel A.; White, Tom; Loh, Po Ru; Aadahl, Mette; Amin, Najaf; Attia, John R.; Austin, Krista; Benyamin, Beben; Brage, Søren; Cheng, Yu Ching; Ciȩszczyk, Paweł; Derave, Wim; Eriksson, Karl Fredrik; Eynon, Nir; Linneberg, Allan; Lucia, Alejandro; Massidda, Myosotis; Mitchell, Braxton D.; Miyachi, Motohiko; Murakami, Haruka; Padmanabhan, Sandosh; Pandey, Ashutosh; Papadimitriou, Ioannis; Rajpal, Deepak K.; Sale, Craig; Schnurr, Theresia M.; Sessa, Francesco; Shrine, Nick; Tobin, Martin D.; Varley, Ian; Wain, Louise V.; Wray, Naomi R.; Lindgren, Cecilia M.; MacArthur, Daniel G.; Waterworth, Dawn M.; McCarthy, Mark I.; Pedersen, Oluf; Khaw, Kay Tee; Kiel, Douglas P.; Pitsiladis, Yannis; Fuku, Noriyuki; Franks, Paul W.; North, Kathryn N.; Duijn, Van C.M.; Mather, Karen A.; Hansen, Torben; Hansson, Ola; Spector, Tim D.; Murabito, Joanne M.; Richards, J.B.; Rivadeneira, Fernando; Langenberg, Claudia; Perry, John R.B.; Wareham, Nick J.; Scott, Robert A.; Oei, Ling; Zheng, Hou Feng; Forgetta, Vincenzo; Leong, Aaron; Ahmad, Omar S.; Laurin, Charles; Mokry, Lauren E.; Ross, Stephanie; Elks, Cathy E.; Bowden, Jack; Warrington, Nicole M.; Murray, Anna; Ruth, Katherine S.; Tsilidis, Konstantinos K.; Medina-Gómez, Carolina; Estrada, Karol; Bis, Joshua C.; Chasman, Daniel I.; Demissie, Serkalem; Enneman, Anke W.; Hsu, Yi Hsiang; Ingvarsson, Thorvaldur; Kähönen, Mika; Kammerer, Candace; Lacroix, Andrea Z.; Li, Guo; Liu, Ching Ti; Liu, Yongmei; Lorentzon, Mattias; Mägi, Reedik; Mihailov, Evelin; Milani, Lili; Moayyeri, Alireza; Nielson, Carrie M.; Sham, Pack Chung; Siggeirsdotir, Kristin; Sigurdsson, Gunnar; Stefansson, Kari; 
Trompet, Stella; Thorleifsson, Gudmar; Vandenput, Liesbeth; Velde, Van Der Nathalie; Viikari, Jorma; Xiao, Su Mei; Zhao, Jing Hua; Evans, Daniel S.; Cummings, Steven R.; Cauley, Jane; Duncan, Emma L.; Groot, De Lisette C.P.G.M.; Esko, Tonu; Gudnason, Vilmundar; Harris, Tamara B.; Jackson, Rebecca D.; Jukema, J.W.; Ikram, Arfan M.A.; Karasik, David; Kaptoge, Stephen; Kung, Annie Wai Chee; Lehtimäki, Terho; Lyytikäinen, Leo Pekka; Lips, Paul; Luben, Robert; Metspalu, Andres; Meurs, van Joyce B.; Minster, Ryan L.; Orwoll, Erick; Oei, Edwin; Psaty, Bruce M.; Raitakari, Olli T.; Ralston, Stuart W.; Ridker, Paul M.; Robbins, John A.; Smith, Albert V.; Styrkarsdottir, Unnur; Tranah, Gregory J.; Thorstensdottir, Unnur; Uitterlinden, Andre G.; Zmuda, Joseph; Zillikens, M.C.; Ntzani, Evangelia E.; Evangelou, Evangelos; Ioannidis, John P.A.; Evans, David M.; Ohlsson, Claes
2017-01-01
Hand grip strength is a widely used proxy of muscular fitness, a marker of frailty, and predictor of a range of morbidities and all-cause mortality. To investigate the genetic determinants of variation in grip strength, we perform a large-scale genetic discovery analysis in a combined sample of
DEFF Research Database (Denmark)
Willems, Sara M.; Wright, Daniel J.; Day, Felix R.
2017-01-01
Hand grip strength is a widely used proxy of muscular fitness, a marker of frailty, and predictor of a range of morbidities and all-cause mortality. To investigate the genetic determinants of variation in grip strength, we perform a large-scale genetic discovery analysis in a combined sample of 1...
Gomes-Pereira, José Nuno; Carmo, Vanda; Catarino, Diana; Jakobsen, Joachim; Alvarez, Helena; Aguilar, Ricardo; Hart, Justin; Giacomello, Eva; Menezes, Gui; Stefanni, Sergio; Colaço, Ana; Morato, Telmo; Santos, Ricardo S.; Tempera, Fernando; Porteiro, Filipe
2017-11-01
Many fish species are well-known obligatory inhabitants of shallow-water tropical coral reefs, but such associations are difficult to study in deep-water environments. We address the association between two deep-sea fish with low mobility and large sessile invertebrates using a compilation of 20 years of unpublished in situ observations. Data were collected on Northeast Atlantic (NEA) island slopes and seamounts, from the Azores to the Canary Islands, comprising 127 new records of the circalittoral labrid Lappanella fasciata and 15 of the upper bathyal ophidiid Benthocometes robustus. Observations by divers, remotely operated vehicles (ROV SP, Luso, Victor, Falcon Seaeye), towed vehicles (Greenpeace) and manned submersibles (LULA, Nautile) validated the species' association with cold-water corals (CWC) and large hydrozoans. L. fasciata occurred from the lower infralittoral (41 m) throughout the circalittoral, down to the upper bathyal at 398 m depth. Smaller fishes (<10 cm) were observed at shallower depths, with larger fishes (10-15 cm) occurring alone or in smaller groups at greater depths. The labrids favoured areas with large sessile invertebrates (>10 cm), which provide habitat. Gathered evidence supports CWC and hydroid gardens as Essential Fish Habitats for both species, which are therefore sensitive to environmental and anthropogenic impacts on these Vulnerable Marine Ecosystems. The Mediterranean distribution of L. fasciata is extended to NEA seamounts and island slopes, and the amphi-Atlantic distribution of B. robustus is bridged with molecular data support. Both species are expected to occur throughout the Macaronesian and Mediterranean island slopes and shallow seamounts in habitats with large sessile invertebrates.
DEFF Research Database (Denmark)
Scott, Robert A; Lagou, Vasiliki; Welch, Ryan P
2012-01-01
Through genome-wide association meta-analyses of up to 133,010 individuals of European ancestry without diabetes, including individuals newly genotyped using the Metabochip, we have increased the number of confirmed loci influencing glycemic traits to 53, of which 33 also increase type 2 diabetes...
Scott, Robert A.; Lagou, Vasiliki; Welch, Ryan P.; Wheeler, Eleanor; Montasser, May E.; Luan, Jian'an; Mägi, Reedik; Strawbridge, Rona J.; Rehnberg, Emil; Gustafsson, Stefan; Kanoni, Stavroula; Rasmussen-Torvik, Laura J.; Yengo, Loïc; Lecoeur, Cecile; Shungin, Dmitry; Sanna, Serena; Sidore, Carlo; Johnson, Paul C. D.; Jukema, J. Wouter; Johnson, Toby; Mahajan, Anubha; Verweij, Niek; Thorleifsson, Gudmar; Hottenga, Jouke-Jan; Shah, Sonia; Smith, Albert V.; Sennblad, Bengt; Gieger, Christian; Salo, Perttu; Perola, Markus; Timpson, Nicholas J.; Evans, David M.; Pourcain, Beate St; Wu, Ying; Andrews, Jeanette S.; Hui, Jennie; Bielak, Lawrence F.; Zhao, Wei; Horikoshi, Momoko; Navarro, Pau; Isaacs, Aaron; O'Connell, Jeffrey R.; Stirrups, Kathleen; Vitart, Veronique; Hayward, Caroline; Esko, Tõnu; Mihailov, Evelin; Fraser, Ross M.; Fall, Tove; Voight, Benjamin F.; Raychaudhuri, Soumya; Chen, Han; Lindgren, Cecilia M.; Morris, Andrew P.; Rayner, Nigel W.; Robertson, Neil; Rybin, Denis; Liu, Ching-Ti; Beckmann, Jacques S.; Willems, Sara M.; Chines, Peter S.; Jackson, Anne U.; Kang, Hyun Min; Stringham, Heather M.; Song, Kijoung; Tanaka, Toshiko; Peden, John F.; Goel, Anuj; Hicks, Andrew A.; An, Ping; Müller-Nurasyid, Martina; Franco-Cereceda, Anders; Folkersen, Lasse; Marullo, Letizia; Jansen, Hanneke; Oldehinkel, Albertine J.; Bruinenberg, Marcel; Pankow, James S.; North, Kari E.; Forouhi, Nita G.; Loos, Ruth J. F.; Edkins, Sarah; Varga, Tibor V.; Hallmans, Göran; Oksa, Heikki; Antonella, Mulas; Nagaraja, Ramaiah; Trompet, Stella; Ford, Ian; Bakker, Stephan J. L.; Kong, Augustine; Kumari, Meena; Gigante, Bruna; Herder, Christian; Munroe, Patricia B.; Caulfield, Mark; Antti, Jula; Mangino, Massimo; Small, Kerrin; Miljkovic, Iva; Liu, Yongmei; Atalay, Mustafa; Kiess, Wieland; James, Alan L.; Rivadeneira, Fernando; Uitterlinden, Andre G.; Palmer, Colin N. A.; Doney, Alex S. 
F.; Willemsen, Gonneke; Smit, Johannes H.; Campbell, Susan; Polasek, Ozren; Bonnycastle, Lori L.; Hercberg, Serge; Dimitriou, Maria; Bolton, Jennifer L.; Fowkes, Gerard R.; Kovacs, Peter; Lindström, Jaana; Zemunik, Tatijana; Bandinelli, Stefania; Wild, Sarah H.; Basart, Hanneke V.; Rathmann, Wolfgang; Grallert, Harald; Maerz, Winfried; Kleber, Marcus E.; Boehm, Bernhard O.; Peters, Annette; Pramstaller, Peter P.; Province, Michael A.; Borecki, Ingrid B.; Hastie, Nicholas D.; Rudan, Igor; Campbell, Harry; Watkins, Hugh; Farrall, Martin; Stumvoll, Michael; Ferrucci, Luigi; Waterworth, Dawn M.; Bergman, Richard N.; Collins, Francis S.; Tuomilehto, Jaakko; Watanabe, Richard M.; de Geus, Eco J. C.; Penninx, Brenda W.; Hofman, Albert; Oostra, Ben A.; Psaty, Bruce M.; Vollenweider, Peter; Wilson, James F.; Wright, Alan F.; Hovingh, G. Kees; Metspalu, Andres; Uusitupa, Matti; Magnusson, Patrik K. E.; Kyvik, Kirsten O.; Kaprio, Jaakko; Price, Jackie F.; Dedoussis, George V.; Deloukas, Panos; Meneton, Pierre; Lind, Lars; Boehnke, Michael; Shuldiner, Alan R.; van Duijn, Cornelia M.; Morris, Andrew D.; Toenjes, Anke; Peyser, Patricia A.; Beilby, John P.; Körner, Antje; Kuusisto, Johanna; Laakso, Markku; Bornstein, Stefan R.; Schwarz, Peter E. H.; Lakka, Timo A.; Rauramaa, Rainer; Adair, Linda S.; Smith, George Davey; Spector, Tim D.; Illig, Thomas; de Faire, Ulf; Hamsten, Anders; Gudnason, Vilmundur; Kivimaki, Mika; Hingorani, Aroon; Keinanen-Kiukaanniemi, Sirkka M.; Saaristo, Timo E.; Boomsma, Dorret I.; Stefansson, Kari; van der Harst, Pim; Dupuis, Josée; Pedersen, Nancy L.; Sattar, Naveed; Harris, Tamara B.; Cucca, Francesco; Ripatti, Samuli; Salomaa, Veikko; Mohlke, Karen L.; Balkau, Beverley; Froguel, Philippe; Pouta, Anneli; Jarvelin, Marjo-Riitta; Wareham, Nicholas J.; Bouatia-Naji, Nabila; McCarthy, Mark I.; Franks, Paul W.; Meigs, James B.; Teslovich, Tanya M.; Florez, Jose C.; Langenberg, Claudia; Ingelsson, Erik; Prokopenko, Inga; Barroso, Inês
2012-01-01
Through genome-wide association meta-analyses of up to 133,010 individuals of European ancestry without diabetes, including individuals newly genotyped using the Metabochip, we have increased the number of confirmed loci influencing glycemic traits to 53, of which 33 also increase type 2 diabetes
Kremer, Y; Léger, J-F; Lapole, R; Honnorat, N; Candela, Y; Dieudonné, S; Bourdieu, L
2008-07-07
Acousto-optic deflectors (AOD) are promising ultrafast scanners for non-linear microscopy. Their use has been limited until now by their small scanning range and by the spatial and temporal dispersions of the laser beam going through the deflectors. We show that the use of an AOD of large aperture (13 mm), compared with standard deflectors, allows access to a much larger field of view while minimizing spatio-temporal distortions. An acousto-optic modulator (AOM) placed at a distance from the AOD is used to compensate for spatial and temporal dispersions. Fine tuning of the AOM-AOD setup using frequency-resolved optical gating (GRENOUILLE) allows elimination of pulse-front tilt, whereas spatial chirp is minimized thanks to the large-aperture AOD.
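Why aperture matters: the number of resolvable spots of an AOD is roughly N ≈ τ·Δf, with τ = D/v the acoustic transit time across the aperture D and Δf the drive bandwidth. A back-of-envelope comparison, where the bandwidth, the small-aperture size, and the acoustic velocity are assumed values (slow-shear TeO2 is typical), not figures from the paper:

```python
# Number of resolvable spots of an acousto-optic deflector: N ~ tau * df,
# where tau = D / v is the acoustic transit time across the aperture.
# All numeric values below are assumptions for illustration.

def resolvable_spots(aperture_m, bandwidth_hz, acoustic_velocity=620.0):
    transit_time = aperture_m / acoustic_velocity  # acoustic fill time (s)
    return transit_time * bandwidth_hz

small = resolvable_spots(4.2e-3, 25e6)   # hypothetical standard-aperture AOD
large = resolvable_spots(13e-3, 25e6)    # 13 mm aperture as in the paper
print(round(small), round(large), round(large / small, 1))
```

The gain in resolvable spots scales linearly with aperture (13/4.2 ≈ 3.1x here), which is the sense in which a large aperture enlarges the accessible field of view, at the cost of a longer transit time and hence a slower random-access rate.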
Arencibia-Jorge, R.; Leydesdorff, L.; Chinchilla-Rodríguez, Z.; Rousseau, R.; Paris, S.W.
2009-01-01
The Web of Science interface counts at most 100,000 retrieved items from a single query. If the query results in a dataset containing more than 100,000 items, the number of retrieved items is indicated as >100,000. The problem studied here is how to find the exact number of items in a query that
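The standard workaround for such a cap, and the spirit of the problem studied here, is to partition the query into disjoint slices (for example, by publication year) and recurse until every slice is exactly countable, then sum the slice counts. A sketch with a simulated interface (`fake_count` and its counts are invented stand-ins for Web of Science):

```python
# Recover an exact count from an interface that caps counts at 100,000 by
# recursively bisecting the query into disjoint year slices.

CAP = 100_000

# Invented per-year counts standing in for the real database.
TRUE_COUNTS = {year: 30_000 + 2_000 * (year - 1990) for year in range(1990, 2011)}

def fake_count(years):
    """Interface behaviour: exact below the cap, None meaning '>100,000'."""
    total = sum(TRUE_COUNTS[y] for y in years)
    return total if total <= CAP else None

def exact_count(years):
    """Recursively bisect the year range until every slice is countable."""
    n = fake_count(years)
    if n is not None:
        return n
    mid = len(years) // 2
    return exact_count(years[:mid]) + exact_count(years[mid:])

years = list(range(1990, 2011))
print(exact_count(years), sum(TRUE_COUNTS.values()))
```

Because the slices are disjoint and exhaustive, the recursion returns the exact total even though no single query ever sees it.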
Newman, Lareen; Bidargaddi, Niranjan; Schrader, Geoffrey
2016-10-01
Despite evidence of benefits of telehealth networks in increasing access to, or providing, previously unavailable mental health services, care providers still prefer traditional approaches. For psychiatric assessment, digital technology can offer improvements over analog systems for the technical and, subsequently, the social quality of provider-client interaction. This is in turn expected to support greater provider uptake and enhanced patient benefits. Within the framework of Innovation Diffusion Theory, we studied service providers' experiences of an existing regional telehealth network for mental health care practice twelve months after digitisation, in order to identify the benefits of digital telehealth over an analog system for mental health care purposes in rural Australia. Qualitative interviews and focus groups were conducted with over 40 service providers from June to September 2013 in South Australia, ranging from the metropolitan central operations to health providers located up to 600 km away in rural and remote areas of the same state. Participants included rural mental health teams, directors of nursing at rural hospitals, metropolitan-based psychiatrists and registrars, the metropolitan-based mental health team dedicated to rural provider support, rural GPs, administrative staff, and the executive group of the state rural health department. Fieldwork was conducted 12 months after the analog system was digitised. The interview and focus group data were analysed using thematic analysis, focusing on three key areas of Innovation Diffusion Theory: relative advantage, technical complexity and technical compatibility. Five themes with 11 sub-themes were identified: (1) "Existing Uses", with three sub-themes: current mental health use, use by GPs, and use for staff support; (2) "Relative Advantage", with four sub-themes: improved technical quality, improved clinical practice, time and cost benefits for providers, and improved patient care; (3) "Technical
Bevelhimer, Mark S; Deng, Z Daniel; Scherelis, Constantin
2016-01-01
Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step at understanding potential environmental impacts. Underwater recordings were obtained from passing vessels and natural underwater sound sources in static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where sounds created by flowing water are part of all measurements, both natural ambient and anthropogenic sources. Vessel sizes ranged from a small fishing boat with 60 hp outboard motor to an 18-unit barge train being pushed upstream by tugboat. As expected, large vessels with large engines created the highest sound levels, which were, on average, 40 dB greater than the sound created by an operating hydrokinetic turbine. A comparison of sound levels from the same sources at different distances using both spherical and cylindrical sound attenuation functions suggests that spherical model results more closely approximate observed sound attenuation.
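The two attenuation models compared in the study are simple spreading laws: transmission loss of 20·log10(r/r0) for spherical spreading and 10·log10(r/r0) for cylindrical spreading. A sketch with an assumed (hypothetical) source level, not a measured one from the study:

```python
import math

# Received level under the two spreading-loss models compared in the study:
# spherical TL = 20*log10(r/r0), cylindrical TL = 10*log10(r/r0).

def received_level(source_db, r, r0=1.0, model="spherical"):
    factor = 20.0 if model == "spherical" else 10.0
    return source_db - factor * math.log10(r / r0)

# Hypothetical source level (dB re 1 uPa at 1 m) for a large vessel.
source = 170.0
for r in (10, 100, 1000):
    sph = received_level(source, r, model="spherical")
    cyl = received_level(source, r, model="cylindrical")
    print(r, sph, cyl)
```

The two models diverge by 10·log10(r) dB, so comparing measured levels of the same source at several distances, as done in the study, indicates which spreading law better approximates the site's actual attenuation.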
DEFF Research Database (Denmark)
Willems, Sara M; Wright, Daniel J.; Day, Felix R
2017-01-01
with involvement of psychomotor impairment (PEX14, LRPPRC and KANSL1). Mendelian randomization analyses are consistent with a causal effect of higher genetically predicted grip strength on lower fracture risk. In conclusion, our findings provide new biological insight into the mechanistic underpinnings of grip strength and the causal role of muscular strength in age-related morbidities and mortality.
Whiteley, Andrew R; Coombs, Jason A; Cembrola, Matthew; O'Donnell, Matthew J; Hudy, Mark; Nislow, Keith H; Letcher, Benjamin H
2015-07-01
The effective number of breeders that give rise to a cohort (N(b)) is a promising metric for genetic monitoring of species with overlapping generations; however, more work is needed to understand factors that contribute to variation in this measure in natural populations. We tested hypotheses related to interannual variation in N(b) in two long-term studies of brook trout populations. We found no supporting evidence for our initial hypothesis that N(b) reflects N(c) (defined as the number of adults in a population at the time of reproduction). N(b) was stable relative to N(c) and did not follow trends in abundance (one stream negative, the other positive). We used stream flow estimates to test the alternative hypothesis that environmental factors constrain N(b). We observed an intermediate optimum autumn stream flow for both N(b) (R^2 = 0.73, P = 0.02) and full-sibling family evenness (R^2 = 0.77, P = 0.01) in one population and a negative correlation between autumn stream flow and full-sibling family evenness in the other population (r = -0.95, P = 0.02). Evidence for greater reproductive skew at the lowest and highest autumn flows was consistent with suboptimal conditions at flow extremes. A series of additional tests provided no supporting evidence for a related hypothesis that density-dependent reproductive success was responsible for the lack of relationship between N(b) and N(c) (so-called genetic compensation). This work provides evidence that N(b) is a useful metric of population-specific individual reproductive contribution for genetic monitoring across populations, and the link we provide between stream flow and N(b) could be used to help predict population resilience to environmental change. © 2015 John Wiley & Sons Ltd.
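The intermediate-optimum pattern can be recovered by regressing N(b) on autumn flow with a quadratic term and reading the optimum off the vertex at -b/(2a). The data below are invented for illustration; only the method mirrors the analysis:

```python
import numpy as np

# Made-up (flow, N(b)) observations with an interior maximum, standing in
# for the brook trout data; the fitted optimum and R^2 are not the paper's.
flow = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
nb   = np.array([38., 55., 68., 74., 75., 66., 52., 35.])

a, b, c = np.polyfit(flow, nb, deg=2)   # quadratic regression
optimum = -b / (2 * a)                  # vertex = flow with maximal N(b)

# Goodness of fit (R^2) of the quadratic model
pred = np.polyval([a, b, c], flow)
ss_res = np.sum((nb - pred) ** 2)
ss_tot = np.sum((nb - nb.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(float(optimum), 2), round(float(r2), 2))
```

A negative leading coefficient confirms the vertex is a maximum; in the study, the analogous interior optimum (together with greater reproductive skew at flow extremes) is what implicates suboptimal conditions at both low and high flows.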
Scott, Robert A; Lagou, Vasiliki; Welch, Ryan P; Wheeler, Eleanor; Montasser, May E; Luan, Jian’an; Mägi, Reedik; Strawbridge, Rona J; Rehnberg, Emil; Gustafsson, Stefan; Kanoni, Stavroula; Rasmussen-Torvik, Laura J; Yengo, Loïc; Lecoeur, Cecile; Shungin, Dmitry; Sanna, Serena; Sidore, Carlo; Johnson, Paul C D; Jukema, J Wouter; Johnson, Toby; Mahajan, Anubha; Verweij, Niek; Thorleifsson, Gudmar; Hottenga, Jouke-Jan; Shah, Sonia; Smith, Albert V; Sennblad, Bengt; Gieger, Christian; Salo, Perttu; Perola, Markus; Timpson, Nicholas J; Evans, David M; Pourcain, Beate St; Wu, Ying; Andrews, Jeanette S; Hui, Jennie; Bielak, Lawrence F; Zhao, Wei; Horikoshi, Momoko; Navarro, Pau; Isaacs, Aaron; O’Connell, Jeffrey R; Stirrups, Kathleen; Vitart, Veronique; Hayward, Caroline; Esko, Tönu; Mihailov, Evelin; Fraser, Ross M; Fall, Tove; Voight, Benjamin F; Raychaudhuri, Soumya; Chen, Han; Lindgren, Cecilia M; Morris, Andrew P; Rayner, Nigel W; Robertson, Neil; Rybin, Denis; Liu, Ching-Ti; Beckmann, Jacques S; Willems, Sara M; Chines, Peter S; Jackson, Anne U; Kang, Hyun Min; Stringham, Heather M; Song, Kijoung; Tanaka, Toshiko; Peden, John F; Goel, Anuj; Hicks, Andrew A; An, Ping; Müller-Nurasyid, Martina; Franco-Cereceda, Anders; Folkersen, Lasse; Marullo, Letizia; Jansen, Hanneke; Oldehinkel, Albertine J; Bruinenberg, Marcel; Pankow, James S; North, Kari E; Forouhi, Nita G; Loos, Ruth J F; Edkins, Sarah; Varga, Tibor V; Hallmans, Göran; Oksa, Heikki; Antonella, Mulas; Nagaraja, Ramaiah; Trompet, Stella; Ford, Ian; Bakker, Stephan J L; Kong, Augustine; Kumari, Meena; Gigante, Bruna; Herder, Christian; Munroe, Patricia B; Caulfield, Mark; Antti, Jula; Mangino, Massimo; Small, Kerrin; Miljkovic, Iva; Liu, Yongmei; Atalay, Mustafa; Kiess, Wieland; James, Alan L; Rivadeneira, Fernando; Uitterlinden, Andre G; Palmer, Colin N A; Doney, Alex S F; Willemsen, Gonneke; Smit, Johannes H; Campbell, Susan; Polasek, Ozren; Bonnycastle, Lori L; Hercberg, Serge; Dimitriou, Maria; Bolton, 
Jennifer L; Fowkes, Gerard R; Kovacs, Peter; Lindström, Jaana; Zemunik, Tatijana; Bandinelli, Stefania; Wild, Sarah H; Basart, Hanneke V; Rathmann, Wolfgang; Grallert, Harald; Maerz, Winfried; Kleber, Marcus E; Boehm, Bernhard O; Peters, Annette; Pramstaller, Peter P; Province, Michael A; Borecki, Ingrid B; Hastie, Nicholas D; Rudan, Igor; Campbell, Harry; Watkins, Hugh; Farrall, Martin; Stumvoll, Michael; Ferrucci, Luigi; Waterworth, Dawn M; Bergman, Richard N; Collins, Francis S; Tuomilehto, Jaakko; Watanabe, Richard M; de Geus, Eco J C; Penninx, Brenda W; Hofman, Albert; Oostra, Ben A; Psaty, Bruce M; Vollenweider, Peter; Wilson, James F; Wright, Alan F; Hovingh, G Kees; Metspalu, Andres; Uusitupa, Matti; Magnusson, Patrik K E; Kyvik, Kirsten O; Kaprio, Jaakko; Price, Jackie F; Dedoussis, George V; Deloukas, Panos; Meneton, Pierre; Lind, Lars; Boehnke, Michael; Shuldiner, Alan R; van Duijn, Cornelia M; Morris, Andrew D; Toenjes, Anke; Peyser, Patricia A; Beilby, John P; Körner, Antje; Kuusisto, Johanna; Laakso, Markku; Bornstein, Stefan R; Schwarz, Peter E H; Lakka, Timo A; Rauramaa, Rainer; Adair, Linda S; Smith, George Davey; Spector, Tim D; Illig, Thomas; de Faire, Ulf; Hamsten, Anders; Gudnason, Vilmundur; Kivimaki, Mika; Hingorani, Aroon; Keinanen-Kiukaanniemi, Sirkka M; Saaristo, Timo E; Boomsma, Dorret I; Stefansson, Kari; van der Harst, Pim; Dupuis, Josée; Pedersen, Nancy L; Sattar, Naveed; Harris, Tamara B; Cucca, Francesco; Ripatti, Samuli; Salomaa, Veikko; Mohlke, Karen L; Balkau, Beverley; Froguel, Philippe; Pouta, Anneli; Jarvelin, Marjo-Riitta; Wareham, Nicholas J; Bouatia-Naji, Nabila; McCarthy, Mark I; Franks, Paul W; Meigs, James B; Teslovich, Tanya M; Florez, Jose C; Langenberg, Claudia; Ingelsson, Erik; Prokopenko, Inga; Barroso, Inês
2012-01-01
Through genome-wide association meta-analyses of up to 133,010 individuals of European ancestry without diabetes, including individuals newly genotyped using the Metabochip, we have raised the number of confirmed loci influencing glycemic traits to 53, of which 33 also increase type 2 diabetes risk (q < 0.05). Loci influencing fasting insulin showed association with lipid levels and fat distribution, suggesting impact on insulin resistance. Gene-based analyses identified further biologically plausible loci, suggesting that additional loci beyond those reaching genome-wide significance are likely to represent real associations. This conclusion is supported by an excess of directionally consistent and nominally significant signals between discovery and follow-up studies. Functional follow-up of these newly discovered loci will further improve our understanding of glycemic control. PMID:22885924
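The directional-consistency argument is essentially a sign test: under the null of no true association, each follow-up effect matches the discovery direction with probability 0.5, so an excess of consistent signals can be assessed against a Binomial(n, 0.5). The counts used below are illustrative, not taken from the paper:

```python
from math import comb

# One-sided binomial sign test for an excess of directionally consistent
# discovery/follow-up effect estimates under the null p = 0.5.

def sign_test_p(k, n):
    """P(X >= k) for X ~ Binomial(n, 0.5)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

p = sign_test_p(38, 50)  # e.g. 38 of 50 sub-threshold loci consistent
print(p < 0.001)
```

A split near 25/50 would be unremarkable, whereas 38/50 is far into the tail; that asymmetry is what licenses the conclusion that many sub-threshold signals reflect real associations.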
Fabre, Dominique; Singhal, Sunil; De Montpreville, Vincent; Decante, Benoit; Mussot, Sacha; Chataigner, Olivier; Mercier, Olaf; Kolb, Frederic; Dartevelle, Philippe G; Fadel, Elie
2009-07-01
Airway replacement after long-segment tracheal resection for benign and malignant disease remains a challenging problem because of the lack of a substitute conduit. Ideally, an airway substitute should be well vascularized, rigid, and autologous to avoid infections, airway stenosis, and the need for immunosuppression. We report the development of an autologous tracheal substitute for long-segment tracheal resection that satisfies these criteria and demonstrates excellent short-term functional results in a large-animal study. Twelve adult pigs underwent long-segment (6 cm, 60% of total length) tracheal resection. Autologous costal cartilage strips measuring 6 cm x 2 mm were harvested from the chest wall and inserted at regular 0.5-cm intervals between dermal layers of a cervical skin flap. The neotrachea was then scaffolded by rotating the composite cartilage skin flap around a silicone stent measuring 6 cm in length and 1.4 cm in diameter. The neotrachea replaced the long segment of tracheal resection, and the donor flap site was closed with a double-Z plasty. Animals were killed at 1 week (group I, n = 4), 2 weeks (group II, n = 4), and 5 weeks (group III, n = 4). In group III the stent was removed 1 week before death. Viability of the neotrachea was monitored by means of daily flexible bronchoscopy and histologic examination at autopsy. Long-term morbidity and mortality were determined by monitoring weight gain, respiratory distress, and survival. There was no mortality during the study period. Weight gain was appropriate in all animals. Daily bronchoscopy and postmortem histologic evaluation confirmed excellent viability of the neotrachea. There was no evidence of suture-line dehiscence. Five animals had distal granulomas that were removed by using rigid bronchoscopy. In group III 1 animal had tracheomalacia, which was successfully managed by means of insertion of a silicon stent. Airway reconstruction with autologous cervical skin flaps scaffolded with costal
Directory of Open Access Journals (Sweden)
Yuguo Tao
2016-01-01
Carrier-selective contacts with low minority-carrier recombination and efficient majority-carrier transport are mandatory to eliminate metal-induced recombination and reach higher energy conversion efficiency in silicon (Si) solar cells. In the present study, the carrier-selective contact consists of an ultra-thin tunnel oxide and a phosphorus-doped polycrystalline Si (poly-Si) thin film formed by plasma-enhanced chemical vapor deposition (PECVD) and subsequent thermal crystallization. It is shown that the poly-Si film properties (doping level, crystallization and dopant activation anneal temperature) are crucial for achieving excellent contact passivation quality. It is also demonstrated quantitatively that the tunnel oxide plays a critical role in this tunnel oxide passivated contact (TOPCON) scheme to realize the desired carrier selectivity. The presence of the tunnel oxide increases the implied Voc (iVoc) by ~125 mV. An iVoc value as high as 728 mV is achieved on a symmetric structure with TOPCON on both sides. Large-area (239 cm^2) n-type Czochralski (Cz) Si solar cells are fabricated with a homogeneous implanted boron emitter and screen-printed contact on the front and TOPCON on the back, achieving 21.2% cell efficiency. Detailed analysis shows that the performance of these cells is mainly limited by boron emitter recombination on the front side.
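The ~125 mV iVoc gain can be related to recombination through the ideal-diode relation Voc ≈ (kT/q)·ln(Jsc/J0), under which a voltage gain dV corresponds to a reduction of the saturation (recombination) current J0 by a factor exp(dV/Vt). This diode-law framing is our back-of-envelope simplification, not an analysis from the paper:

```python
import math

# For an ideality-1 diode, Voc ~ (kT/q) * ln(Jsc / J0), so an iVoc gain dV
# implies the saturation current J0 drops by exp(dV / Vt).

kT_q = 0.02585  # thermal voltage at 300 K, in volts
dV = 0.125      # reported ~125 mV iVoc gain from adding the tunnel oxide

j0_reduction = math.exp(dV / kT_q)
print(round(j0_reduction))
```

A 125 mV gain thus corresponds to roughly a two-orders-of-magnitude reduction in the recombination current at the contact, which conveys the passivation role the tunnel oxide plays in the TOPCON scheme.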
Large negative thermal expansion provided by metal-organic framework MOF-5: A first-principles study
International Nuclear Information System (INIS)
Wang, Lei; Wang, Cong; Sun, Ying; Shi, Kewen; Deng, Sihao; Lu, Huiqing
2016-01-01
The thermodynamic properties and negative thermal expansion (NTE) behavior of metal-organic framework MOF-5 are investigated within the quasi-harmonic approximation, by using density functional theory. For nanoporous MOF-5, the temperature dependence of bulk modulus increases with increasing temperature, indicating that the resistance to compression is enhanced gradually. The large NTE behavior is obtained, which agrees reasonably with the experimental data. From the Grüneisen parameter as a function of temperature, it can be found that low-frequency phonons are closely associated with the NTE of MOF-5. The corresponding vibrational modes can be viewed as the results of local deformations (translation, rotation, twisting) of BDC (1,4-benzenedicarboxylate) linker and zinc clusters. The lowest-frequency phonon mode (the transverse motion of carboxylate groups and benzene ring, zinc clusters being as rigid units) is confirmed to be most responsible for thermal contraction. - Highlights: • The related thermodynamic properties and NTE behavior of MOF-5 are investigated by first principles. • Contrary to other inorganic NTE materials, bulk modulus of MOF-5 increases on heating. • The low-frequency phonons are closely associated with the NTE of MOF-5. • The NTE-contributing vibrational modes are elucidated clearly.
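Within the quasi-harmonic approximation, the sign of thermal expansion follows from the mode Grüneisen parameters; a textbook expression (a general sketch, not a formula quoted from this paper) is:

```latex
\alpha_V(T) = \frac{1}{B\,V}\sum_i \gamma_i\, c_{V,i}(T),
\qquad
\gamma_i = -\frac{\partial \ln \omega_i}{\partial \ln V},
```

where B is the bulk modulus and c_{V,i} the mode heat capacity. NTE arises when low-frequency modes with negative γ_i dominate the heat-capacity-weighted sum, which is exactly the mechanism attributed above to the transverse motions of the carboxylate groups and benzene rings.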
International Nuclear Information System (INIS)
Peletier, Mark A.; Redig, Frank; Vafayi, Kiamars
2014-01-01
We consider three one-dimensional continuous-time Markov processes on a lattice, each of which models the conduction of heat: the family of Brownian Energy Processes with parameter m (BEP(m)), a Generalized Brownian Energy Process, and the Kipnis-Marchioro-Presutti (KMP) process. The hydrodynamic limit of each of these three processes is a parabolic equation, the linear heat equation in the case of the BEP(m) and the KMP, and a nonlinear heat equation for the Generalized Brownian Energy Process with parameter a (GBEP(a)). We prove the hydrodynamic limit rigorously for the BEP(m), and give a formal derivation for the GBEP(a). We then formally derive the pathwise large-deviation rate functional for the empirical measure of the three processes. These rate functionals imply gradient-flow structures for the limiting linear and nonlinear heat equations. We contrast these gradient-flow structures with those for processes describing the diffusion of mass, most importantly the class of Wasserstein gradient-flow systems. The linear and nonlinear heat-equation gradient-flow structures are each driven by entropy terms of the form −log ρ; they involve dissipation or mobility terms of order ρ² for the linear heat equation, and a nonlinear function of ρ for the nonlinear heat equation.
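The entropy- and mobility-driven structure described above can be made concrete: taking the entropy functional E(ρ) = −∫ log ρ dx with mobility ρ² recovers the linear heat equation (a formal computation consistent with the abstract's terms):

```latex
E(\rho) = -\int \log \rho \, dx,
\qquad
\frac{\delta E}{\delta \rho} = -\frac{1}{\rho},
```

```latex
\partial_t \rho
= \partial_x\!\Big(\rho^2\, \partial_x \frac{\delta E}{\delta \rho}\Big)
= \partial_x\!\Big(\rho^2 \cdot \rho^{-2}\, \partial_x \rho\Big)
= \partial_{xx} \rho .
```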
Faulhaber, Anja K; Dittmer, Anke; Blind, Felix; Wächter, Maximilian A; Timm, Silja; Sütfeld, Leon R; Stephan, Achim; Pipa, Gordon; König, Peter
2018-01-22
Ethical thought experiments such as the trolley dilemma have been investigated extensively in the past, showing that humans act in utilitarian ways, trying to cause as little overall damage as possible. These trolley dilemmas have gained renewed attention over the past few years, especially due to the necessity of implementing moral decisions in autonomous driving vehicles (ADVs). We conducted a set of experiments in which participants experienced modified trolley dilemmas as drivers in virtual reality environments. Participants had to make decisions between driving in one of two lanes where different obstacles came into view. Eventually, the participants had to decide which of the objects they would crash into. Obstacles included a variety of human-like avatars of different ages and group sizes. Furthermore, the influence of sidewalks as potential safe harbors and a condition implicating self-sacrifice were tested. Results showed that participants, in general, decided in a utilitarian manner, sparing the highest number of avatars possible with a limited influence by the other variables. These findings, which are in line with the utilitarian approach to moral decision making, support the argument for an obligatory ethics setting implemented in ADVs.
Directory of Open Access Journals (Sweden)
Joshua M Pevnick
Full Text Available Personal fitness trackers (PFTs) have substantial potential to improve healthcare. To quantify and characterize early adopters who shared their PFT data with providers, we used bivariate statistics and logistic regression to compare patients who shared any PFT data vs. patients who did not. A patient portal was used to invite 79,953 registered portal users to share their data. Of 66,105 users included in our analysis, 499 (0.8%) uploaded data during an initial 37-day study period. Bivariate and regression analysis showed that early adopters were more likely than non-adopters to be younger, male, white, health system employees, and to have higher BMIs. Neither comorbidities nor utilization predicted adoption. Our results demonstrate that patients had little intrinsic desire to share PFT data with their providers, and suggest that patients most at risk for poor health outcomes are least likely to share PFT data. Marketing, incentives, and/or cultural change may be needed to induce such data-sharing.
Fiedler, Klaus; Kareev, Yaakov
2006-01-01
Adaptive decision making requires that contingencies between decision options and their relative assets be assessed accurately and quickly. The present research addresses the challenging notion that contingencies may be more visible from small than from large samples of observations. An algorithmic account for such a seemingly paradoxical effect…
Yilmaz, Zeynep; Szatkiewicz, Jin P; Crowley, James J; Ancalade, NaEshia; Brandys, Marek K; van Elburg, Annemarie; de Kovel, Carolien G F; Adan, Roger A H; Hinney, Anke; Hebebrand, Johannes; Gratacos, Monica; Fernandez-Aranda, Fernando; Escaramis, Georgia; Gonzalez, Juan R; Estivill, Xavier; Zeggini, Eleftheria; Sullivan, Patrick F; Bulik, Cynthia M; Genetic Consortium for Anorexia Nervosa, Wellcome Trust Case Control Consortium 3
Anorexia nervosa (AN) is a serious and heritable psychiatric disorder. To date, studies of copy number variants (CNVs) have been limited and inconclusive because of small sample sizes. We conducted a case-only genome-wide CNV survey in 1,983 female AN cases included in the Genetic Consortium for
Maekawa, S.; Ankersmit, Bart; Neuhaus, E.; Schellen, H.L.; Beltran, V.; Boersma, F.; Padfield, T.; Borchersen, K.
2007-01-01
Our Lord in the Attic is a historic house museum located in the historic center of Amsterdam, The Netherlands. It is a typical 17th century Dutch canal house, with a hidden Church in the attic. The Church was used regularly until 1887 when the house became a museum. The annual total number of
Rhodes, Jonathan R.; Lunney, Daniel; Callaghan, John; McAlpine, Clive A.
2014-01-01
Roads and vehicular traffic are among the most pervasive of threats to biodiversity because they fragment habitat, increase mortality, and open up new areas for the exploitation of natural resources. However, the number of vehicles on roads is increasing rapidly and this is likely to continue into the future, putting increased pressure on wildlife populations. Consequently, a major challenge is the planning of road networks to accommodate increased numbers of vehicles, while minimising impacts on wildlife. Nonetheless, we currently have few principles for guiding decisions on road network planning to reduce impacts on wildlife in real landscapes. We addressed this issue by developing an approach for quantifying the impact on wildlife mortality of two alternative mechanisms for accommodating growth in vehicle numbers: (1) increasing the number of roads, and (2) increasing traffic volumes on existing roads. We applied this approach to a koala (Phascolarctos cinereus) population in eastern Australia and quantified the relative impact of each strategy on mortality. We show that, in most cases, accommodating growth in traffic through increases in volumes on existing roads has a lower impact than building new roads. An exception is where the existing road network has very low road density, but very high traffic volumes on each road. These findings have important implications for how we design road networks to reduce their impacts on biodiversity. PMID:24646891
Czech Academy of Sciences Publication Activity Database
Krahulcová, Anna; Trávníček, Pavel; Krahulec, František; Rejmánek, M.
2017-01-01
Roč. 119, č. 6 (2017), s. 957-964 ISSN 0305-7364 Institutional support: RVO:67985939 Keywords : Aesculus * chromosome number * genome size * phylogeny * seed mass Subject RIV: EF - Botanics OBOR OECD: Plant sciences, botany Impact factor: 4.041, year: 2016
Milner, Rachel; Parrish, Jonathan; Wright, Adrienne; Gnarpe, Judy; Keenan, Louanne
2015-01-01
In a large-enrollment, introductory biochemistry course for nonmajors, the authors provide students with formative feedback through practice questions in PDF format. Recently, they investigated possible benefits of providing the practice questions via an online game (Brainspan). Participants were randomly assigned to either the online game group…
Patel, Nitin R; Lind, Jason D; Antinori, Nicole
2015-01-01
Background: Secure email messaging is part of a national transformation initiative in the United States to promote new models of care that support enhanced patient-provider communication. To date, only a limited number of large-scale studies have evaluated users' experiences in using secure email messaging. Objective: To quantitatively assess veteran patients' experiences in using secure email messaging in a large patient sample. Methods: A cross-sectional mail-delivered paper-and-pencil survey study was conducted with a sample of respondents identified as registered for the Veterans Health Administration's Web-based patient portal (My HealtheVet) who opted to use secure messaging. The survey collected demographic data and assessed computer and health literacy and secure messaging use. Analyses conducted on survey data include frequencies and proportions, chi-square tests, and one-way analysis of variance. Results: The majority of respondents (N=819) reported using secure messaging 6 months or longer (n=499, 60.9%). They reported secure messaging to be helpful for completing medication refills (n=546, 66.7%), managing appointments (n=343, 41.9%), looking up test results (n=350, 42.7%), and asking health-related questions (n=340, 41.5%). Notably, some respondents reported using secure messaging to address sensitive health topics (n=67, 8.2%). Survey responses indicated that younger age (P=.039) and higher levels of education (P=.025) and income (P=.003) were associated with more frequent use of secure messaging. Females were more likely to report using secure messaging more often, compared with their male counterparts (P=.098). Minorities were more likely to report using secure messaging more often, at least once a month, compared with nonminorities (P=.086). Individuals with higher levels of health literacy reported more frequent use of secure messaging (P=.007), greater satisfaction (P=.002), and indicated that secure messaging is a useful (P=.002) and easy
Directory of Open Access Journals (Sweden)
Shuo Zhang
2017-04-01
Full Text Available Abstract In this paper, we consider a size-dependent renewal risk model with stopping time claim-number process. In this model, we do not make any assumption on the dependence structure of claim sizes and inter-arrival times. We study large deviations of the aggregate amount of claims. For the subexponential heavy-tailed case, we obtain a precise large-deviation formula; our method substantially relies on a martingale for the structure of our models.
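For orientation, the classical precise large-deviation asymptotic for compound sums with subexponential claim-size tail F̄ has the following shape (the generic textbook form; the exact conditions and the stopping-time claim-number setting are those of the paper):

```latex
\Pr\big(S(t) - \mu\,\lambda(t) > x\big) \sim \lambda(t)\,\overline{F}(x),
\qquad t \to \infty,
```

uniformly for x ≥ γ λ(t) with any fixed γ > 0, where S(t) is the aggregate claim amount, λ(t) = E N(t) the mean claim number, and μ the mean claim size.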
Ágg, Bence; Meienberg, Janine; Kopps, Anna M.; Fattorini, Nathalie; Stengl, Roland; Daradics, Noémi; Pólos, Miklós; Bors, András; Radovits, Tamás; Merkely, Béla; De Backer, Julie; Szabolcs, Zoltán; Mátyás, Gábor
2018-01-01
Copy number variations (CNVs) comprise about 10% of reported disease-causing mutations in Mendelian disorders. Nevertheless, pathogenic CNVs may have been under-detected due to the lack or insufficient use of appropriate detection methods. In this report, on the example of the diagnostic odyssey of a patient with Marfan syndrome (MFS) harboring a hitherto unreported 32-kb FBN1 deletion, we highlight the need for and the feasibility of testing for CNVs (>1 kb) in Mendelian disorders in the current next-generation sequencing (NGS) era. PMID:29850152
Kozitskiy, Sergey
2018-05-01
Numerical simulation of nonstationary dissipative structures in 3D double-diffusive convection has been performed by using the previously derived system of complex Ginzburg-Landau type amplitude equations, valid in a neighborhood of Hopf bifurcation points. Simulation has shown that the state of spatiotemporal chaos develops in the system. It has the form of nonstationary structures that depend on the parameters of the system. The shape of structures does not depend on the initial conditions, and a limited number of spectral components participate in their formation.
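For reference, the canonical single complex Ginzburg-Landau equation, whose parameter ranges are well known to exhibit spatiotemporal chaos, reads as follows (the paper uses a derived system of CGL-type amplitude equations, so this is only the generic textbook form):

```latex
\partial_t A = A + (1 + i b)\,\Delta A - (1 + i c)\,|A|^2 A ,
```

where A is the complex amplitude and b, c are real dispersion parameters.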
Korchagova, V. N.; Kraposhin, M. V.; Marchevsky, I. K.; Smirnova, E. V.
2017-11-01
A droplet impact on a deep pool can induce macro-scale or micro-scale effects like a crown splash, a high-speed jet, formation of secondary droplets or thin liquid films, etc. It depends on the diameter and velocity of the droplet, liquid properties, effects of external forces, and other factors that a set of dimensionless criteria can account for. In the present research, we considered the droplet and the pool to consist of the same viscous incompressible liquid. We took surface tension into account but neglected gravity forces. We used two open-source codes (OpenFOAM and Gerris) for our computations. We review the possibility of using these codes for simulation of processes in free-surface flows that may take place after a droplet impact on the pool. Both codes simulated several modes of droplet impact. We estimated the effect of liquid properties with respect to the Reynolds number and Weber number. Numerical simulation enabled us to find boundaries between different modes of droplet impact on a deep pool and to plot corresponding mode maps. The ratio of liquid density to that of the surrounding gas induces several changes in mode maps. Increasing this density ratio suppresses the crown splash.
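The governing dimensionless criteria are straightforward to compute; a minimal sketch (the droplet values below are illustrative assumptions, not parameters from the study):

```python
def reynolds(rho, D, U, mu):
    """Reynolds number: inertial vs. viscous forces, rho*D*U/mu."""
    return rho * D * U / mu

def weber(rho, D, U, sigma):
    """Weber number: inertial forces vs. surface tension, rho*D*U**2/sigma."""
    return rho * D * U**2 / sigma

# Illustrative water droplet: D = 2 mm, U = 3 m/s
rho, mu, sigma = 1000.0, 1.0e-3, 0.072  # kg/m^3, Pa*s, N/m
D, U = 2.0e-3, 3.0

Re = reynolds(rho, D, U, mu)
We = weber(rho, D, U, sigma)
print(f"Re = {Re:.0f}, We = {We:.0f}")
```

Mode maps like those described above are typically plotted in (Re, We) coordinates, with the liquid-to-gas density ratio entering as a separate third parameter.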
International Nuclear Information System (INIS)
Luner, S.J.
1978-01-01
A double antibody assay for thyroxine using ¹²⁵I as the label was carried out on 10-μl samples in Microtiter V-plates. After an additional centrifugation to compact the precipitates, the plates were placed in contact with x-ray film overnight and the spots were scanned. In the 20 to 160 ng/ml range the average coefficient of variation for thyroxine concentration determined on the basis of film spot optical density was 11 percent, compared to 4.8 percent obtained using a standard gamma counter. Eliminating the need for each sample to spend on the order of 1 min in a crystal well detector makes the method convenient for large-scale applications involving more than 3000 samples per day.
How to implement a quantum algorithm on a large number of qubits by controlling one central qubit
Zagoskin, Alexander; Ashhab, Sahel; Johansson, J. R.; Nori, Franco
2010-03-01
It is desirable to minimize the number of control parameters needed to perform a quantum algorithm. We show that, under certain conditions, an entire quantum algorithm can be efficiently implemented by controlling a single central qubit in a quantum computer. We also show that the different system parameters do not need to be designed accurately during fabrication. They can be determined through the response of the central qubit to external driving. Our proposal is well suited for hybrid architectures that combine microscopic and macroscopic qubits. More details can be found in: A.M. Zagoskin, S. Ashhab, J.R. Johansson, F. Nori, Quantum two-level systems in Josephson junctions as naturally formed qubits, Phys. Rev. Lett. 97, 077001 (2006); and S. Ashhab, J.R. Johansson, F. Nori, Rabi oscillations in a qubit coupled to a quantum two-level system, New J. Phys. 8, 103 (2006).
Yano, T.; Nishino, K.; Kawamura, H.; Ueno, I.; Matsumoto, S.
2015-02-01
This paper reports the experimental results on the instability and associated roll structures (RSs) of Marangoni convection in liquid bridges formed under the microgravity environment on the International Space Station. The geometry of interest is high aspect ratio (AR = height/diameter ≥ 1.0) liquid bridges of high Prandtl number fluids (Pr = 67 and 207) suspended between coaxial disks heated differentially. The unsteady flow field and associated RSs were revealed with the three-dimensional particle tracking velocimetry. It is found that the flow field after the onset of instability exhibits oscillations with azimuthal mode number m = 1 and associated RSs traveling in the axial direction. The RSs travel in the same direction as the surface flow (co-flow direction) for 1.00 ≤ AR ≤ 1.25 while they travel in the opposite direction (counter-flow direction) for AR ≥ 1.50, thus showing the change of traveling directions with AR. This traveling direction for AR ≥ 1.50 is reversed to the co-flow direction when the temperature difference between the disks is increased to the condition far beyond the critical one. This change of traveling directions is accompanied by the increase of the oscillation frequency. The characteristics of the RSs for AR ≥ 1.50, such as the azimuthal mode of oscillation, the dimensionless oscillation frequency, and the traveling direction, are in reasonable agreement with those of the previous sounding rocket experiment for AR = 2.50 and those of the linear stability analysis of an infinite liquid bridge.
Energy Technology Data Exchange (ETDEWEB)
Han, Wang [Technical Univ. of Darmstadt (Germany); Wang, Haiou [Univ. of New South Wales, Sydney, NSW (Australia); Kuenne, Guido [Technical Univ. of Darmstadt (Germany); Hawkes, Evatt R. [Univ. of New South Wales, Sydney, NSW (Australia); Chen, Jacqueline H. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Janicka, Johannes [Technical Univ. of Darmstadt (Germany); Hasse, Christian [Technical Univ. of Darmstadt (Germany)
2017-12-01
This supplementary material complements the article and provides additional information on the chemical mechanism used in this work, boundary conditions for the LES configuration and table generation, comparisons of axial velocities, results from a LES/finite-rate chemistry (FRC) approach, and results from the LES/DTF/SPF approach with a particular chemistry table that is generated using a single strained premixed flamelet solution.
A LARGE NUMBER OF z > 6 GALAXIES AROUND A QSO AT z = 6.43: EVIDENCE FOR A PROTOCLUSTER?
International Nuclear Information System (INIS)
Utsumi, Yousuke; Kashikawa, Nobunari; Miyazaki, Satoshi; Komiyama, Yutaka; Goto, Tomotsugu; Furusawa, Hisanori; Overzier, Roderik
2010-01-01
QSOs have been thought to be important for tracing highly biased regions in the early universe, from which the present-day massive galaxies and galaxy clusters formed. While overdensities of star-forming galaxies have been found around QSOs at 2 < z < 5, the case at z > 6 is less clear. Previous studies with the Hubble Space Telescope (HST) have reported the detection of small excesses of faint dropout galaxies in some QSO fields, but these surveys probed a relatively small region surrounding the QSOs. To overcome this problem, we have observed the most distant QSO at z = 6.4 using the large field of view of the Suprime-Cam (34' x 27'). Newly installed red-sensitive fully depleted CCDs allowed us to select Lyman break galaxies (LBGs) at z ∼ 6.4 more efficiently. We found seven LBGs in the QSO field, whereas only one exists in a comparison field. The significance of this apparent excess is difficult to quantify without spectroscopic confirmation and additional control fields. The Poisson probability to find seven objects when one expects four is ∼10%, while the probability to find seven objects in one field and only one in the other is less than 0.4%, suggesting that the QSO field is significantly overdense relative to the control field. These conclusions are supported by a comparison with a cosmological smoothed particle hydrodynamics simulation which includes the higher order clustering of galaxies. We find some evidence that the LBGs are distributed in a ring-like shape centered on the QSO with a radius of ∼3 Mpc. There are no candidate LBGs within 2 Mpc from the QSO, i.e., galaxies are clustered around the QSO but appear to avoid the very center. These results suggest that the QSO is embedded in an overdense region when defined on a sufficiently large scale (i.e., larger than an HST/ACS pointing). This suggests that the QSO was indeed born in a massive halo. The central deficit of galaxies may indicate that (1) the strong UV radiation from the QSO suppressed galaxy formation in
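The quoted Poisson significance is easy to reproduce: with an expectation of four objects, the chance of finding seven or more is about 11%, matching the ∼10% in the text. A stdlib-only check:

```python
import math

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam), via the complementary lower tail."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                     for i in range(k))

p = poisson_sf(7, 4.0)  # P(X >= 7) when 4 objects are expected
print(round(p, 3))
```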
Qu, Long; Nettleton, Dan; Dekkers, Jack C M
2012-12-01
Given a large number of t-statistics, we consider the problem of approximating the distribution of noncentrality parameters (NCPs) by a continuous density. This problem is closely related to the control of false discovery rates (FDR) in massive hypothesis testing applications, e.g., microarray gene expression analysis. Our methodology is similar to, but improves upon, the existing approach by Ruppert, Nettleton, and Hwang (2007, Biometrics, 63, 483-495). We provide parametric, nonparametric, and semiparametric estimators for the distribution of NCPs, as well as estimates of the FDR and local FDR. In the parametric situation, we assume that the NCPs follow a distribution that leads to an analytically available marginal distribution for the test statistics. In the nonparametric situation, we use convex combinations of basis density functions to estimate the density of the NCPs. A sequential quadratic programming procedure is developed to maximize the penalized likelihood. The smoothing parameter is selected with the approximate network information criterion. A semiparametric estimator is also developed to combine both parametric and nonparametric fits. Simulations show that, under a variety of situations, our density estimates are closer to the underlying truth and our FDR estimates are improved compared with alternative methods. Data-based simulations and the analyses of two microarray datasets are used to evaluate the performance in realistic situations. © 2012, The International Biometric Society.
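The paper's density-based estimators go well beyond a short example, but the quantity they refine, the FDR, is classically controlled with the Benjamini-Hochberg step-up rule; a generic stdlib sketch (not the authors' method):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Indices of hypotheses rejected at FDR level q (BH step-up rule)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:  # compare sorted p-value to its threshold
            k_max = rank
    return sorted(order[:k_max])  # reject the k_max smallest p-values

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(pvals, q=0.05))  # rejects the two smallest p-values
```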
Whitmore, Stephen A.; Petersen, Brian J.; Scott, David D.
1996-01-01
This paper develops a dynamic model for pressure sensors in continuum and rarefied flows with longitudinal temperature gradients. The model was developed from the unsteady Navier-Stokes momentum, energy, and continuity equations and was linearized using small perturbations. The energy equation was decoupled from momentum and continuity assuming a polytropic flow process. Rarefied flow conditions were accounted for using a slip flow boundary condition at the tubing wall. The equations were radially averaged and solved assuming gas properties remain constant along a small tubing element. This fundamental solution was used as a building block for arbitrary geometries where fluid properties may also vary longitudinally in the tube. The problem was solved recursively starting at the transducer and working upstream in the tube. Dynamic frequency response tests were performed for continuum flow conditions in the presence of temperature gradients. These tests validated the recursive formulation of the model. Model steady-state behavior was analyzed using the final value theorem. Tests were performed for rarefied flow conditions and compared to the model steady-state response to evaluate the regime of applicability. Model comparisons were excellent for Knudsen numbers up to 0.6. Beyond this point, molecular effects caused model analyses to become inaccurate.
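The Knudsen number that bounds the model's applicability (Kn up to 0.6 above) can be estimated from a hard-sphere mean free path; a sketch with illustrative values (the molecular diameter and tubing bore are assumptions, not from the paper):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(T, p, d=3.7e-10):
    """Hard-sphere mean free path: k_B*T / (sqrt(2)*pi*d**2*p)."""
    return K_B * T / (math.sqrt(2) * math.pi * d**2 * p)

lam = mean_free_path(300.0, 101325.0)  # air at room conditions, ~70 nm
Kn = lam / 1.0e-3                      # assumed 1 mm pressure tubing
print(f"lambda = {lam:.2e} m, Kn = {Kn:.1e}")  # deep continuum regime here
```

At low pressures (e.g. high-altitude flight test), λ grows inversely with p, pushing Kn toward the slip and transitional regimes the model addresses.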
Fu, Rao; Gong, Jun
2017-11-01
Ribosomal (r)RNA and rDNA have been golden molecular markers in microbial ecology. However, it remains poorly understood how ribotype copy number (CN)-based characteristics are linked with diversity, abundance, and activity of protist populations and communities observed at organismal levels. Here, we applied a single-cell approach to quantify ribotype CNs in two ciliate species reared at different temperatures. We found that in actively growing cells, the per-cell rDNA and rRNA CNs scaled with cell volume (CV) to 0.44 and 0.58 powers, respectively. The modeled rDNA and rRNA concentrations thus appear to be much higher in smaller than in larger cells. The observed rRNA:rDNA ratio scaled with CV to the 0.14 power. The maximum growth rate could be well predicted by a combination of per-cell ribotype CN and temperature. Our empirical data and modeling on single-cell ribotype scaling are in agreement with both the metabolic theory of ecology and the growth rate hypothesis, providing a quantitative framework for linking cellular rDNA and rRNA CNs with body size, growth (activity), and biomass stoichiometry. This study also demonstrates that the expression rate of rRNA genes is constrained by cell size, and favors biomass rather than abundance-based interpretation of quantitative ribotype data in population and community ecology of protists. © 2017 The Authors. Journal of Eukaryotic Microbiology published by Wiley Periodicals, Inc. on behalf of International Society of Protistologists.
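Scaling exponents like the 0.44 and 0.58 powers above are typically obtained by least squares in log-log space; a self-contained sketch on synthetic data (the constants are invented for illustration):

```python
import math

def fit_power_law(x, y):
    """Fit y = a * x**b by ordinary least squares on (log x, log y)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic cells whose copy number scales as CV**0.44 (exponent from the text)
cv = [10.0 * 1.5**i for i in range(20)]   # cell volumes
cn = [3.0 * v**0.44 for v in cv]          # per-cell copy numbers
a, b = fit_power_law(cv, cn)
print(round(a, 2), round(b, 2))  # recovers 3.0 and 0.44 on noise-free data
```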
De Leon, Samantha; Connelly-Flores, Alison; Mostashari, Farzad; Shih, Sarah C
2010-01-01
Electronic health records (EHRs) are expected to transform and improve the way medicine is practiced. However, providers perceive many barriers toward implementing new health information technology. Specifically, they are most concerned about the potentially negative impact on their practice finances and productivity. This study compares the productivity of 75 providers at a large urban primary care practice from January 2005 to February 2009, before and after implementing an EHR system, using longitudinal mixed model analyses. While decreases in productivity were observed at the time the EHR system was implemented, most providers quickly recovered, showing increases in productivity per month shortly after EHR implementation. Overall, providers had significant productivity increases of 1.7% per month per provider from pre- to post-EHR adoption. The majority of the productivity gains occurred after the practice instituted a pay-for-performance program, enabled by the data capture of the EHRs. Coupled with pay-for-performance, EHRs can spur rapid gains in provider productivity.
2014-01-01
Background Genomic disorders are caused by copy number changes that may exhibit recurrent breakpoints processed by nonallelic homologous recombination. However, region-specific disease-associated copy number changes have also been observed which exhibit non-recurrent breakpoints. The mechanisms underlying these non-recurrent copy number changes have not yet been fully elucidated. Results We analyze large NF1 deletions with non-recurrent breakpoints as a model to investigate the full spectrum of causative mechanisms, and observe that they are mediated by various DNA double strand break repair mechanisms, as well as aberrant replication. Further, two of the 17 NF1 deletions with non-recurrent breakpoints, identified in unrelated patients, occur in association with the concomitant insertion of SINE/variable number of tandem repeats/Alu (SVA) retrotransposons at the deletion breakpoints. The respective breakpoints are refractory to analysis by standard breakpoint-spanning PCRs and are only identified by means of optimized PCR protocols designed to amplify across GC-rich sequences. The SVA elements are integrated within SUZ12P intron 8 in both patients, and were mediated by target-primed reverse transcription of SVA mRNA intermediates derived from retrotranspositionally active source elements. Both SVA insertions occurred during early postzygotic development and are uniquely associated with large deletions of 1 Mb and 867 kb, respectively, at the insertion sites. Conclusions Since active SVA elements are abundant in the human genome and the retrotranspositional activity of many SVA source elements is high, SVA insertion-associated large genomic deletions encompassing many hundreds of kilobases could constitute a novel and as yet under-appreciated mechanism underlying large-scale copy number changes in the human genome. PMID:24958239
International Nuclear Information System (INIS)
Hasegawa, K.; Lim, C.S.; Ogure, K.
2003-01-01
We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario.
Directory of Open Access Journals (Sweden)
Nori Matsunami
Full Text Available Structural variation is thought to play a major etiological role in the development of autism spectrum disorders (ASDs), and numerous studies documenting the relevance of copy number variants (CNVs) in ASD have been published since 2006. To determine if large ASD families harbor high-impact CNVs that may have broader impact in the general ASD population, we used the Affymetrix genome-wide human SNP array 6.0 to identify 153 putative autism-specific CNVs present in 55 individuals with ASD from 9 multiplex ASD pedigrees. To evaluate the actual prevalence of these CNVs as well as 185 CNVs reportedly associated with ASD from published studies, many of which are insufficiently powered, we designed a custom Illumina array and used it to interrogate these CNVs in 3,000 ASD cases and 6,000 controls. Additional single nucleotide variants (SNVs) on the array identified 25 CNVs that we did not detect in our family studies at the standard SNP array resolution. After molecular validation, our results demonstrated that 15 CNVs identified in high-risk ASD families also were found in two or more ASD cases with odds ratios greater than 2.0, strengthening their support as ASD risk variants. In addition, of the 25 CNVs identified using SNV probes on our custom array, 9 also had odds ratios greater than 2.0, suggesting that these CNVs also are ASD risk variants. Eighteen of the validated CNVs have not been reported previously in individuals with ASD and three have only been observed once. Finally, we confirmed the association of 31 of 185 published ASD-associated CNVs in our dataset with odds ratios greater than 2.0, suggesting they may be of clinical relevance in the evaluation of children with ASDs. Taken together, these data provide strong support for the existence and application of high-impact CNVs in the clinical genetic evaluation of children with ASD.
Ph.H.B.F. Franses (Philip Hans); P.C. Verhoef (Peter); J.C. Hoekstra (Janny)
2002-01-01
The authors examine the effect of relational constructs (e.g., satisfaction, trust, and affective and calculative commitment) on customer referrals and the number of services purchased, as well as the moderating effect of age of the relationship on these relationships. The research
Directory of Open Access Journals (Sweden)
Guilherme Mourão
2010-10-01
Full Text Available The jabiru stork, Jabiru mycteria (Lichtenstein, 1819, a large, long-legged wading bird occurring in lowland wetlands from southern Mexico to northern Argentina, is considered endangered in a large portion of its distribution range. We conducted aerial surveys to estimate the number of jabiru active nests in the Brazilian Pantanal (140,000 km² in September of 1991-1993, 1998, 2000-2002, and 2004. Corrected densities of active nests were regressed against the annual hydrologic index (AHI, an index of flood extension in the Pantanal based on the water level of the Paraguay River. Annual nest density was a non-linear function of the AHI, modeled by the equation 6.5 · 10-8 · AHI1.99 (corrected r² = 0.72, n = 7. We applied this model to the AHI between 1900 and 2004. The results indicate that the number of jabiru nests may have varied from about 220 in 1971 to more than 23,000 in the nesting season of 1921, and the estimates for our study period (1991 to 2004 averaged about 12,400 nests. Our model indicates that the inter-annual variations in flooding extent can determine dramatic changes in the number of active jabiru nests. Since the jabiru stork responds negatively to drier conditions in the Pantanal, direct human-induced changes in the hydrological patterns, as well as the effects of global climate change, may strongly jeopardize the population in the region.
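The fitted regression above can be applied directly. A minimal sketch, assuming AHI enters the model as a plain positive index; the AHI values used below are illustrative, not actual Paraguay River data.

```python
def predicted_nests(ahi: float) -> float:
    """Fitted power-law model from the study: nests = 6.5e-8 * AHI**1.99.

    With an exponent near 2, doubling the hydrologic index roughly
    quadruples the predicted number of active nests.
    """
    return 6.5e-8 * ahi ** 1.99
```

The near-quadratic exponent is what lets the model span the roughly 100-fold range (about 220 to more than 23,000 nests) reported for 1900-2004.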
Directory of Open Access Journals (Sweden)
Barbara Castelnuovo
Full Text Available INTRODUCTION: Starting in June 2010 the Infectious Diseases Institute (IDI) clinic (a large urban HIV out-patient facility) switched to provider-based Electronic Medical Records (EMR) from paper EMR entered in the database by data-entry clerks. Standardized clinic forms were eliminated but providers still fill in free-text clinical notes in patients' physical files. The objective of this study was to compare the rate of errors in the database before and after the introduction of the provider-based EMR. METHODS AND FINDINGS: Data in the database before and after the provider-based EMR were compared with the information in the patients' files and classified as correct, incorrect, and missing. We calculated the proportion of incorrect, missing and total errors for key variables (toxicities, opportunistic infections, reasons for treatment change and interruption). Proportions of total errors were compared using the chi-square test. A survey of the users of the EMR was also conducted. We compared data from 2,382 visits (from 100 individuals) of a retrospective validation conducted in 2007 with 34,957 visits (from 10,920 individuals) of a prospective validation conducted in April-August 2011. The total proportion of errors decreased from 66.5% in 2007 to 2.1% in 2011 for opportunistic infections, from 51.9% to 3.5% for ART toxicity, from 82.8% to 12.5% for reasons for ART interruption and from 94.1% to 0.9% for reasons for ART switch (all P<0.0001). The survey showed that 83% of the providers agreed that provider-based EMR led to improvement of clinical care, 80% reported improved access to patients' records, and 80% appreciated the automation of providers' tasks. CONCLUSIONS: The introduction of provider-based EMR improved the quality of data collected with a significant reduction in missing and incorrect information. The majority of providers and clients expressed satisfaction with the new system. We recommend the use of provider-based EMR in large HIV programs in Sub
Flegel, Ashlie B.; Giel, Paul W.; Welch, Gerard E.
2014-01-01
The effects of high inlet turbulence intensity on the aerodynamic performance of a variable speed power turbine blade are examined over large incidence and Reynolds number ranges. These results are compared to previous measurements made in a low turbulence environment. Both high and low turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The current study extends this to LPT-relevant turbulence levels while perhaps sacrificing transitional flow effects. Assessing the effects of turbulence at these large incidence and Reynolds number variations complements the existing database. Downstream total pressure and exit angle data were acquired for 10 incidence angles ranging from +15.8° to -51.0°. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12×10⁵ to 2.12×10⁶ and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hotwire located 0.415 axial-chord upstream of the blade row. The inlet turbulence levels ranged from 8 to 15 percent for the current study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated as needed for computational inlet boundary conditions. Downstream flow field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7 percent axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction surface separation that was evident at many of the low Tu conditions has been eliminated. At
Directory of Open Access Journals (Sweden)
TRIFINA, L.
2011-02-01
Full Text Available This paper analyzes the influence of the extrinsic information scaling coefficient on a double-iterative decoding algorithm for space-time turbo codes with a large number of antennas. The max-log-APP algorithm is used, scaling both the extrinsic information in the turbo decoder and that used at the input of the interference-canceling block. Scaling coefficients of 0.7 or 0.75 lead to a 0.5 dB coding gain compared to the no-scaling case, for one or more iterations to cancel the spatial interferences.
Directory of Open Access Journals (Sweden)
Débora Jardim-Messeder
2017-12-01
Full Text Available Carnivorans are a diverse group of mammals that includes carnivorous, omnivorous and herbivorous, domesticated and wild species, with a large range of brain sizes. Carnivory is one of several factors expected to be cognitively demanding for carnivorans due to a requirement to outsmart larger prey. On the other hand, large carnivoran species have high hunting costs and unreliable feeding patterns, which, given the high metabolic cost of brain neurons, might put them at risk of metabolic constraints regarding how many brain neurons they can afford, especially in the cerebral cortex. For a given cortical size, do carnivoran species have more cortical neurons than the herbivorous species they prey upon? We find they do not; carnivorans (cat, mongoose, dog, hyena, lion) share with non-primates, including artiodactyls (the typical prey of large carnivorans), roughly the same relationship between cortical mass and number of neurons, which suggests that carnivorans are subject to the same evolutionary scaling rules as other non-primate clades. However, there are a few important exceptions. Carnivorans stand out in that the usual relationship between larger body, larger cortical mass and larger number of cortical neurons only applies to small and medium-sized species, and not beyond dogs: we find that the golden retriever dog has more cortical neurons than the striped hyena, African lion and even brown bear, even though the latter species have up to three times larger cortices than dogs. Remarkably, the brown bear cerebral cortex, the largest examined, only has as many neurons as the ten times smaller cat cerebral cortex, although it does have the expected ten times as many non-neuronal cells in the cerebral cortex compared to the cat. We also find that raccoons have dog-like numbers of neurons in their cat-sized brain, which makes them comparable to primates in neuronal density. Comparison of domestic and wild species suggests that the neuronal
International Nuclear Information System (INIS)
Vengalattore, M.; Conroy, R.S.; Prentiss, M.G.
2004-01-01
The phase space density of dense, cylindrical clouds of atoms in a 2D magneto-optic trap is investigated. For a large number of trapped atoms (>10⁸), the density of a spherical cloud is limited by photon reabsorption. However, as the atom cloud is deformed to reduce the radial optical density, the temperature of the atoms decreases due to the suppression of multiple scattering, leading to an increase in the phase space density. A density of 2×10⁻⁴ has been achieved in a magneto-optic trap containing 2×10⁸ atoms.
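The dimensionless phase space density quoted above is conventionally n·λ_dB³, with λ_dB the thermal de Broglie wavelength. A minimal sketch of that standard definition; the atomic mass in the usage is Rb-87, an assumption since the abstract does not name the species.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
KB = 1.380649e-23       # Boltzmann constant, J/K

def thermal_de_broglie(mass_kg: float, temp_k: float) -> float:
    """Thermal de Broglie wavelength: lambda = h / sqrt(2*pi*m*kB*T)."""
    h = 2.0 * math.pi * HBAR
    return h / math.sqrt(2.0 * math.pi * mass_kg * KB * temp_k)

def phase_space_density(n_per_m3: float, mass_kg: float, temp_k: float) -> float:
    """Dimensionless phase space density n * lambda**3; quantum degeneracy sets in near 1."""
    return n_per_m3 * thermal_de_broglie(mass_kg, temp_k) ** 3
```

Because λ_dB grows as T^(-1/2), the cooling from suppressed multiple scattering raises the phase space density even at fixed atom number, which is the mechanism the abstract describes.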
Purushothaman, Jasmine; Kharusi, Lubna Al; Mills, Claudia E; Ghielani, Hamed; Marzouki, Mohammad Al
2013-12-11
A bloom of the hydromedusan jellyfish, Timoides agassizii, occurred in February 2011 off the coast of Sohar, Al Batinah, Sultanate of Oman, in the Gulf of Oman. This species was first observed in 1902 in great numbers off Haddummati Atoll in the Maldive Islands in the Indian Ocean and has rarely been seen since. The species appeared briefly in large numbers off Oman in 2011, and subsequent observation of our 2009 samples of zooplankton from Sohar revealed that it was also present in low numbers (two collected) in one sample in 2009; these are the first records in the Indian Ocean north of the Maldives. Medusae collected off Oman were almost identical to those recorded previously from the Maldive Islands, Papua New Guinea, the Marshall Islands, Guam, the South China Sea, and Okinawa. T. agassizii is a species that likely lives for several months. It was present in our plankton samples together with large numbers of the oceanic siphonophore Physalia physalis only during a single month's samples, suggesting that the temporary bloom off Oman was likely due to the arrival of mature, open ocean medusae into nearshore waters. We see no evidence that T. agassizii has established a new population along Oman, since if so, it would likely have been present in more than one sample period. We are unable to deduce further details of the life cycle of this species from blooms of many mature individuals nearshore, about a century apart. Examination of a single damaged T. agassizii medusa from Guam calls into question the existence of its congener, T. latistyla, known only from a single specimen.
Bennett, Ruth, Ed.; And Others
An introduction to the Hupa number system is provided in this workbook, one in a series of numerous materials developed to promote the use of the Hupa language. The book is written in English with Hupa terms used only for the names of numbers. The opening pages present the numbers from 1-10, giving the numeral, the Hupa word, the English word, and…
Hwang, Yeonsoo; Yoon, Dukyong; Ahn, Eun Kyoung; Hwang, Hee; Park, Rae Woong
2016-12-01
To determine the risk factors and rate of medication administration error (MAE) alerts by analyzing large-scale medication administration data and related error logs automatically recorded in a closed-loop medication administration system using radio-frequency identification and barcodes. The subject hospital adopted a closed-loop medication administration system. All medication administrations in the general wards were automatically recorded in real-time using radio-frequency identification, barcodes, and hand-held point-of-care devices. MAE alert logs were recorded throughout the full year of 2012. We evaluated risk factors for MAE alerts including administration time, order type, medication route, the number of medication doses administered, and factors associated with nurse practices by logistic regression analysis. A total of 2 874 539 medication dose records from 30 232 patients (882.6 patient-years) were included in 2012. We identified 35 082 MAE alerts (1.22% of total medication doses). The MAE alerts were significantly related to administration at non-standard time [odds ratio (OR) 1.559, 95% confidence interval (CI) 1.515-1.604], emergency order (OR 1.527, 95%CI 1.464-1.594), and the number of medication doses administered (OR 0.993, 95%CI 0.992-0.993). Medication route, nurse's employment duration, and working schedule were also significantly related. The MAE alert rate was 1.22% over the 1-year observation period in the hospital examined in this study. The MAE alerts were significantly related to administration time, order type, medication route, the number of medication doses administered, nurse's employment duration, and working schedule. The real-time closed-loop medication administration system contributed to improving patient safety by preventing potential MAEs. Copyright © 2016 John Wiley & Sons, Ltd.
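Odds ratios and confidence intervals like those reported here follow from logistic-regression coefficients in the standard way: exponentiate the coefficient, and exponentiate the coefficient plus or minus z times its standard error. A minimal sketch; the coefficient and standard error in the test are back-calculated illustrations, not values taken from the paper.

```python
import math

def odds_ratio(beta: float) -> float:
    """Odds ratio implied by a logistic-regression coefficient beta."""
    return math.exp(beta)

def odds_ratio_ci(beta: float, se: float, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% CI for the OR: exponentiate beta +/- z*SE (Wald interval)."""
    return math.exp(beta - z * se), math.exp(beta + z * se)
```

For instance, a coefficient of ln(1.559) reproduces the reported OR of 1.559 for non-standard administration time; an interval like the reported 1.515-1.604 then corresponds to a small standard error on that coefficient.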
Energy Technology Data Exchange (ETDEWEB)
Chiu, J; Ma, L [Department of Radiation Oncology, University of California San Francisco School of Medicine, San Francisco, CA (United States)
2015-06-15
Purpose: To develop a treatment delivery and planning strategy by increasing the number of beams to minimize dose to brain tissue surrounding a target, while maximizing dose coverage to the target. Methods: We analyzed 14 different treatment plans via Leksell PFX and 4C. For standardization, single tumor cases were chosen. Original treatment plans were compared with two optimized plans. The number of beams was increased in treatment plans by varying tilt angles of the patient head, while maintaining the original isocenter and the beam positions in the x-, y- and z-axes, collimator size, and beam blocking. PFX optimized plans increased beam numbers with three pre-set tilt angles (70, 90, and 110 degrees), and 4C optimized plans increased beam numbers with tilt angles varying arbitrarily over a range of 30 to 150 degrees. Optimized treatment plans were compared dosimetrically with original treatment plans. Results: Comparing total normal tissue isodose volumes between original and optimized plans, the low-level percentage isodose volumes decreased in all plans. Despite the addition of multiple beams up to a factor of 25, beam-on times for 1 tilt angle versus 3 or more tilt angles were comparable (<1 min.). In 64% (9/14) of the studied cases, the volume percentage decreased by >5%, with the highest value reaching 19%. The addition of more tilt angles correlates with a greater decrease in normal brain irradiated volume. Selectivity and coverage for original and optimized plans remained comparable. Conclusion: Adding a large number of additional focused beams with variable patient head tilt improves dose fall-off for brain radiosurgery. The study demonstrates the technical feasibility of adding beams to decrease target volume.
Lekala, M. L.; Chakrabarti, B.; Das, T. K.; Rampho, G. J.; Sofianos, S. A.; Adam, R. M.; Haldar, S. K.
2017-05-01
We study the ground state and the low-lying excitations of a trapped Bose gas in an isotropic harmonic potential for very small (~3) to very large (~10⁷) particle numbers. We use the two-body correlated basis functions and the shape-dependent van der Waals interaction in our many-body calculations. We present an exhaustive study of the effect of inter-atomic correlations and the accuracy of the mean-field equations considering a wide range of particle numbers. We calculate the ground-state energy and the one-body density for different values of the van der Waals parameter C₆. We compare our results with those of the modified Gross-Pitaevskii results, the correlated Hartree hypernetted-chain equations (which also utilize the two-body correlated basis functions), as well as of the diffusion Monte Carlo for hard sphere interactions. We observe the effect of the attractive tail of the van der Waals potential in the calculations of the one-body density over the truly repulsive zero-range potential as used in the Gross-Pitaevskii equation and discuss the finite-size effects. We also present the low-lying collective excitations, which are well described by a hydrodynamic model in the large particle limit.
Kolstein, M.; De Lorenzo, G.; Chmeissani, M.
2014-04-01
The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10⁶). For the Compton camera especially, with its large number of readout channels, image reconstruction presents a major challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources, and with a distance between the field of view and the first detector plane equal to 100 mm, which corresponds to a realistic nuclear medicine environment.
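The structure of an LM-OSEM iteration can be sketched compactly. This is a hedged, generic sketch, not the VIP implementation: each list-mode event is represented here by a dense row of system-matrix weights, whereas a real Compton-camera reconstruction computes the cone-surface weights on the fly, and the subset handling and sensitivity normalization below are simplified.

```python
import numpy as np

def lm_osem(event_rows: np.ndarray, lam0: np.ndarray, sens: np.ndarray,
            n_iter: int = 5, n_subsets: int = 4) -> np.ndarray:
    """List-mode OSEM sketch.

    event_rows: (n_events, n_voxels) array; row i holds the system-matrix
                weights a_ij linking detected event i to each image voxel j.
    lam0:       initial (nonnegative) voxel intensity estimate.
    sens:       per-voxel sensitivity image used to normalize the update.
    """
    lam = lam0.astype(float).copy()
    subsets = np.array_split(np.arange(event_rows.shape[0]), n_subsets)
    for _ in range(n_iter):
        for sub in subsets:
            a = event_rows[sub]
            fwd = a @ lam                                # expected value per event
            back = a.T @ (1.0 / np.maximum(fwd, 1e-12))  # backproject event ratios
            lam *= back / np.maximum(sens, 1e-12)        # multiplicative EM update
    return lam
```

The multiplicative update preserves nonnegativity of the image, and cycling through event subsets is what gives OSEM its speed-up over plain list-mode EM.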
Directory of Open Access Journals (Sweden)
Mohsen Champour
2015-09-01
Full Text Available The objective of this study was to compare the efficacy of tulathromycin (TUL with a combination of florfenicol (FFC and long-acting oxytetracycline (LAOTC in the treatment of naturally occurring undifferentiated respiratory diseases in large numbers of sheep. In this study, seven natural outbreaks of sheep pneumonia in Garmsar, Iran were considered. From these outbreaks, 400 sheep exhibiting the signs of respiratory diseases were selected, and the sheep were randomly divided into two equal groups. The first group was treated with a single injection of TUL (dosed at 2.5 mg/kg body weight, and the second group was treated with concurrent injections of FFC (dosed at 40 mg/kg bwt and LAOTC (dosed at 20 mg/kg bwt. In the first group, 186 (93% sheep were found to be cured 5 days after the injection, and 14 (7% sheep needed further treatment, of which 6 (3% were cured, and 8 (4% died. In the second group, 172 (86% sheep were cured after the injections, but 28 (14% sheep needed further treatment, of which 10 (5% were cured, and 18 (9% died. This study revealed that TUL was more efficacious as compared to the combined treatment using FFC and LAOTC. As the first report, this field trial describes the successful treatment of undifferentiated respiratory diseases in large numbers of sheep. Thus, TUL can be used for the treatment of undifferentiated respiratory diseases in sheep. [J Adv Vet Anim Res 2015; 2(3.000: 279-284
Makki, Behrooz
2016-03-22
This paper investigates the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. Our results are obtained for different fading conditions, and the effect of the power amplifier efficiency/feedback error probability on the performance of the MIMO-HARQ systems is analyzed. Then, we use some recent results on the achievable rates of finite block-length codes to analyze the effect of the codeword lengths on the system performance. Moreover, we derive closed-form expressions for the asymptotic performance of the MIMO-HARQ systems when the number of antennas increases. Our analytical and numerical results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 1972-2012 IEEE.
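The outage probabilities studied here can be estimated numerically. A hedged Monte Carlo sketch for an i.i.d. Rayleigh-fading MIMO channel without HARQ or finite-block-length effects; the rate and SNR values in the test are illustrative, not taken from the paper.

```python
import numpy as np

def outage_probability(n_t: int, n_r: int, rate_bits: float, snr: float,
                       trials: int = 2000, seed: int = 0) -> float:
    """Estimate P(capacity < rate) for an n_r x n_t Rayleigh MIMO channel.

    Instantaneous capacity: log2 det(I + (snr/n_t) * H H^H), with equal
    power allocation across transmit antennas and snr given in linear units.
    """
    rng = np.random.default_rng(seed)
    outages = 0
    for _ in range(trials):
        h = (rng.standard_normal((n_r, n_t)) +
             1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2.0)  # unit-variance entries
        gram = np.eye(n_r) + (snr / n_t) * (h @ h.conj().T)
        capacity = np.log2(np.linalg.det(gram).real)
        outages += capacity < rate_bits
    return outages / trials
```

Sweeping `n_t = n_r` upward at a fixed target rate shows the outage probability collapsing quickly, which is the qualitative point that modest antenna counts can already meet demanding outage constraints.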
Directory of Open Access Journals (Sweden)
Karsten Laursen
Full Text Available BACKGROUND: The Baltic/Wadden Sea eider Somateria mollissima flyway population is decreasing, and this trend is also reflected in the large eider colony at Christiansø situated in the Baltic Sea. This colony showed a 15-fold increase from 1925 until the mid-1990s, followed by a rapid decline in recent years, although the causes of this trend remain unknown. Most birds from the colony winter in the Wadden Sea, from which environmental data and information on the size of the stock of the main diet, the mussel Mytilus edulis, exist. We hypothesised that changes in nutrients and water temperature in the Wadden Sea had an effect on the ecosystem affecting the size of mussel stocks, the principal food item for eiders, thereby influencing the number of breeding eiders in the Christiansø colony. METHODOLOGY/PRINCIPAL FINDINGS: A positive relationship between the amount of fertilizer used by farmers and the concentration of phosphorus in the Wadden Sea (with a time lag of one year) allowed analysis of the predictions concerning effects of nutrients for the period 1925-2010. (1) Increasing amounts of fertilizer used in agriculture increased the amount of nutrients in the marine environment, thereby increasing the mussel stocks in the Wadden Sea. (2) The number of eiders at Christiansø increased when the amount of fertilizer increased. Finally, (3) the number of eiders in the colony at Christiansø increased with the amount of mussel stocks in the Wadden Sea. CONCLUSIONS/SIGNIFICANCE: The trend in the number of eiders at Christiansø is representative of the entire flyway population, and since nutrient reduction in the marine environment occurs in most parts of Northwest Europe, we hypothesize that this environmental candidate parameter is involved in the overall regulation of the Baltic/Wadden Sea eider population during recent decades.
Flegel, Ashlie Brynn; Giel, Paul W.; Welch, Gerard E.
2014-01-01
The effects of inlet turbulence intensity on the aerodynamic performance of a variable speed power turbine blade are examined over large incidence and Reynolds number ranges. Both high and low turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The high turbulence study extends this to LPT-relevant turbulence levels while perhaps sacrificing transitional flow effects. Downstream total pressure and exit angle data were acquired for ten incidence angles ranging from +15.8° to -51.0°. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12×10⁵ to 2.12×10⁶ and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hotwire located 0.415 axial-chord upstream of the blade row. The inlet turbulence levels ranged from 0.25-0.4 percent for the low Tu tests and 8-15 percent for the high Tu study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated as needed for computational inlet boundary conditions. Downstream flow field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7 percent axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction surface separation that was evident at many of the low Tu conditions has been eliminated. At the extreme positive and negative incidence angles, the data show substantial differences in the exit flow field. These differences are attributable to both the higher inlet Tu directly and to the thinner inlet endwall
Directory of Open Access Journals (Sweden)
Cigudosa Juan C
2011-05-01
Full Text Available Abstract Background Recent observations point towards the existence of a large number of neighborhoods composed of functionally-related gene modules that lie together in the genome. This local component in the distribution of functionality across chromosomes probably constrains chromosomal architecture itself by limiting the ways in which genes can be arranged and distributed across the genome. As a direct consequence, it is presumable that diseases such as cancer, harboring DNA copy number alterations (CNAs), will have a symptomatology strongly dependent on modules of functionally-related genes rather than on a unique "important" gene. Methods We carried out a systematic analysis of more than 140,000 observations of CNAs in cancers and searched for enrichment of gene functional modules associated with high frequencies of losses or gains. Results The analysis of CNAs in cancers clearly demonstrates the existence of a significant pattern of loss of gene modules functionally related to cancer initiation and progression, along with the amplification of modules of genes related to unspecific defense against xenobiotics (probably chemotherapeutical agents). With the extension of this analysis to an Array-CGH dataset (glioblastomas from The Cancer Genome Atlas), we demonstrate the validity of this approach to investigate the functional impact of CNAs. Conclusions The presented results indicate promising clinical and therapeutic implications. Our findings also point directly to the necessity of adopting a function-centric, rather than a gene-centric, view in the understanding of phenotypes or diseases harboring CNAs.
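The module-enrichment search described above can be illustrated with a one-sided hypergeometric test, the standard tool for asking whether a functional module is over-represented among genes hit by high-frequency losses or gains. This is a minimal sketch with hypothetical gene names and counts; the study's actual pipeline and thresholds are not specified in the abstract.

```python
from scipy.stats import hypergeom

def module_enrichment_p(n_genome, module_genes, altered_genes):
    """One-sided hypergeometric test: is a functional module over-represented
    among genes affected by high-frequency copy number losses/gains?
    Illustrative sketch only; gene names and counts below are hypothetical."""
    overlap = len(module_genes & altered_genes)
    # P(X >= overlap) when drawing len(altered_genes) genes from a genome of
    # n_genome genes that contains len(module_genes) module members
    return hypergeom.sf(overlap - 1, n_genome,
                        len(module_genes), len(altered_genes))

# Hypothetical example: 12 of 40 module genes fall inside a 500-gene loss hotspot
module = {f"GENE{i}" for i in range(40)}
lost = {f"GENE{i}" for i in range(12)} | {f"OTHER{i}" for i in range(488)}
p = module_enrichment_p(20000, module, lost)
print(p < 1e-6)  # far more overlap than the ~1 gene expected by chance
```

A non-enriched overlap (about one shared gene, the chance expectation here) yields a large p-value, so ranking modules by this statistic separates systematically targeted modules from background.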
Directory of Open Access Journals (Sweden)
Chuanchuan Xie
2017-01-01
Full Text Available The interaction of dielectrophoresis (DEP) particles in an electric field has been observed in many experiments, known as the "particle chains phenomenon". However, studies in 3D models (spherical particles) are rarely reported due to their complexity and significant computational cost. In this paper, we employed the iterative dipole moment (IDM) method to study the 3D interaction of a large number of dense DEP particles randomly distributed on a plane perpendicular to a uniform alternating current (AC) electric field in a bounded or unbounded space. The numerical results indicated that the particles cannot move out of the initial plane. Similar particles (either all positive or all negative DEP particles) always repelled each other and did not form chains. Dissimilar particles (a mixture of positive and negative DEP particles) always attracted each other and formed particle chains consisting of alternately arranged positive and negative DEP particles. The particle chain patterns can take many random forms depending on the initial particle distribution, the electric properties of the particles/fluid, the particle sizes and the number of particles. It is also found that the particle chain patterns can be effectively manipulated by tuning the frequency of the AC field, and an almost uniform distribution of particles in a bounded plane chip can be achieved when all of the particles are similar, which may have potential applications in particle manipulation in microfluidics.
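The repel/attract dichotomy reported above follows the sign of the product of the particles' Clausius-Mossotti factors: field-aligned induced dipoles sitting side by side in the plane perpendicular to E repel when their moments have the same sign and attract otherwise. The sketch below uses the point-dipole interaction energy as a stand-in; it is not the IDM solver used in the paper, and every numeric value is hypothetical.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cm_factor(eps_p, eps_f):
    """Real part of the DC-limit Clausius-Mossotti factor; its sign decides
    positive vs. negative DEP. (Sketch: frequency dependence omitted.)"""
    return (eps_p - eps_f) / (eps_p + 2 * eps_f)

def side_by_side_dipole_energy(K1, K2, a1, a2, E, r, eps_f):
    """Interaction energy of two field-aligned point dipoles whose separation
    lies in the plane perpendicular to E (the geometry of the paper).
    U > 0 -> repulsion, U < 0 -> attraction; all arguments hypothetical."""
    p1 = 4 * math.pi * eps_f * EPS0 * a1**3 * K1 * E  # effective dipole moments
    p2 = 4 * math.pi * eps_f * EPS0 * a2**3 * K2 * E
    return p1 * p2 / (4 * math.pi * eps_f * EPS0 * r**3)

# Two positive-DEP particles (K > 0) side by side: U > 0, they repel.
# A positive/negative pair (K1 > 0 > K2): U < 0, they attract and chain.
u_same = side_by_side_dipole_energy(0.9, 0.8, 1e-6, 1e-6, 1e5, 5e-6, 78)
u_mixed = side_by_side_dipole_energy(0.9, -0.4, 1e-6, 1e-6, 1e5, 5e-6, 78)
print(u_same > 0, u_mixed < 0)
```

This sign rule is why chains of alternating positive/negative particles form while same-sign suspensions spread out toward a uniform in-plane distribution.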
Directory of Open Access Journals (Sweden)
Anna Twardosz
2011-04-01
Full Text Available Diffuse large B-cell lymphoma is the commonest histological type of malignant lymphoma, and remains incurable in many cases. Developing more efficient immunotherapy strategies will require better understanding of the disorders of immune responses in cancer patients. NKT (natural killer-like T) cells were originally described as a unique population of T cells with the co-expression of NK cell markers. Apart from their role in protecting against microbial pathogens and controlling autoimmune diseases, NKT cells have been recently revealed as one of the key players in the immune responses against tumors. The objective of this study was to evaluate the frequency of CD3+/CD16+CD56+ cells in the peripheral blood of 28 diffuse large B-cell lymphoma (DLBCL) patients in correlation with clinical and laboratory parameters. Median percentages of CD3+/CD16+CD56+ cells were significantly lower in patients with DLBCL compared to healthy donors (7.37% vs. 9.01%, p = 0.01; 4.60% vs. 5.81%, p = 0.03), although there were no differences in absolute counts. The frequency and the absolute numbers of CD3+/CD16+CD56+ cells were lower in advanced clinical stages than in earlier ones. The median percentage of CD3+/CD16+CD56+ cells in patients in Ann Arbor stages 1-2 was 5.55% vs. 3.15% in stages 3-4 (p = 0.02), with median absolute counts respectively 0.26 G/L vs. 0.41 G/L (p = 0.02). The percentage and absolute numbers of CD3+/CD16+CD56+ cells were significantly higher in DLBCL patients without B-symptoms compared to the patients with B-symptoms (5.51% vs. 2.46%, p = 0.04; 0.21 G/L vs. 0.44 G/L, p = 0.04). The percentage of CD3+/CD16+CD56+ cells correlated adversely with serum lactate dehydrogenase (R = -0.445; p < 0.05), which might influence NKT count. These figures suggest a relationship between higher tumor burden and more aggressive disease and decreased NKT numbers. But it remains to be explained whether low NKT cell counts in the peripheral blood of patients with DLBCL are the result
Wang, Fei; Chen, Quanjiao; Li, Shuntang; Zhang, Chenyao; Li, Shanshan; Liu, Min; Mei, Kun; Li, Chunhua; Ma, Lixin; Yu, Xiaolan
2017-06-01
Linear DNA vaccines provide effective vaccination. However, their application is limited by high cost and small scale of the conventional polymerase chain reaction (PCR) generally used to obtain sufficient amounts of DNA effective against epidemic diseases. In this study, a two-step, large-scale PCR was established using a low-cost DNA polymerase, RKOD, expressed in Pichia pastoris. Two linear DNA vaccines encoding influenza H1N1 hemagglutinin (HA) 1, LEC-HA, and PTO-LEC-HA (with phosphorothioate-modified primers), were produced by the two-step PCR. Protective effects of the vaccines were evaluated in a mouse model. BALB/c mice were immunized three times with the vaccines or a control DNA fragment. All immunized animals were challenged by intranasal administration of a lethal dose of influenza H1N1 virus 2 weeks after the last immunization. Sera of the immunized animals were tested for the presence of HA-specific antibodies, and the total IFN-γ responses induced by linear DNA vaccines were measured. The results showed that the DNA vaccines but not the control DNA induced strong antibody and IFN-γ responses. Additionally, the PTO-LEC-HA vaccine effectively protected the mice against the lethal homologous mouse-adapted virus, with a survival rate of 100% versus 70% in the LEC-HA-vaccinated group, showing that the PTO-LEC-HA vaccine was more effective than LEC-HA. In conclusion, the results indicated that the linear H1N1 HA-coding DNA vaccines induced significant immune responses and protected mice against a lethal virus challenge. Thus, the low-cost, two-step, large-scale PCR can be considered a potential tool for rapid manufacturing of linear DNA vaccines against emerging infectious diseases. Copyright © 2017 Elsevier B.V. All rights reserved.
Whitaker, Katherine E.; van Dokkum, Pieter G.; Brammer, Gabriel; Momcheva, Ivelina G.; Skelton, Rosalind; Franx, Marijn; Kriek, Mariska; Labbé, Ivo; Fumagalli, Mattia; Lundgren, Britt F.; Nelson, Erica J.; Patel, Shannon G.; Rix, Hans-Walter
2013-06-01
Quiescent galaxies at z ~ 2 have been identified in large numbers based on rest-frame colors, but only a small number of these galaxies have been spectroscopically confirmed to show either strong Balmer or metal absorption lines in their rest-frame optical spectra. Here, we median stack the rest-frame optical spectra for 171 photometrically quiescent galaxies at 1.4 < z < 2.2 from the 3D-HST grism survey. In addition to Hβ (λ4861 Å), we unambiguously identify metal absorption lines in the stacked spectrum, including the G band (λ4304 Å), Mg I (λ5175 Å), and Na I (λ5894 Å). This finding demonstrates that galaxies with relatively old stellar populations already existed when the universe was ~3 Gyr old, and that rest-frame color selection techniques can efficiently select them. We find an average age of 1.3^{+0.1}_{-0.3} Gyr when fitting a simple stellar population to the entire stack. We confirm our previous result from medium-band photometry that the stellar age varies with the colors of quiescent galaxies: the reddest 80% of galaxies are dominated by metal lines and have a relatively old mean age of 1.6^{+0.5}_{-0.4} Gyr, whereas the bluest (and brightest) galaxies have strong Balmer lines and a spectroscopic age of 0.9^{+0.2}_{-0.1} Gyr. Although the spectrum is dominated by an evolved stellar population, we also find [O III] and Hβ emission. Interestingly, this emission is more centrally concentrated than the continuum, with L_{[O III]} = (1.7 ± 0.3) × 10^{40} erg s^{-1}, indicating residual central star formation or nuclear activity.
Carlson, H. W.
1979-01-01
A new linearized-theory pressure-coefficient formulation was studied. The new formulation is intended to provide more accurate estimates of detailed pressure loadings for improved stability analysis and for analysis of critical structural design conditions. The approach is based on the use of oblique-shock and Prandtl-Meyer expansion relationships for accurate representation of the variation of pressures with surface slopes in two-dimensional flow and linearized-theory perturbation velocities for evaluation of local three-dimensional aerodynamic interference effects. The applicability and limitations of the modification to linearized theory are illustrated through comparisons with experimental pressure distributions for delta wings covering a Mach number range from 1.45 to 4.60 and angles of attack from 0 to 25 degrees.
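The Prandtl-Meyer expansion relationship mentioned above is standard compressible-flow material; a minimal implementation of the expansion function ν(M) is sketched below. The paper's full formulation also blends oblique-shock relations with linearized-theory perturbation velocities, which are not reproduced here.

```python
import math

def prandtl_meyer(M, gamma=1.4):
    """Prandtl-Meyer function nu(M) in radians for supersonic Mach number M.
    This is the textbook relation used (together with oblique-shock results)
    to tie surface slope to pressure in two-dimensional supersonic flow."""
    if M < 1.0:
        raise ValueError("Prandtl-Meyer function requires M >= 1")
    g = gamma
    term = math.sqrt((g - 1.0) / (g + 1.0) * (M * M - 1.0))
    return (math.sqrt((g + 1.0) / (g - 1.0)) * math.atan(term)
            - math.atan(math.sqrt(M * M - 1.0)))

print(math.degrees(prandtl_meyer(2.0)))  # ~26.38 degrees for gamma = 1.4
```

The difference ν(M2) − ν(M1) gives the flow turning angle across an isentropic expansion, which is how surface-slope changes map to local Mach number and hence pressure.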
Lind, Mads V; Savolainen, Otto I; Ross, Alastair B
2016-08-01
Data quality is critical for epidemiology, and as scientific understanding expands, the range of data available for epidemiological studies and the types of tools used for measurement have also expanded. It is essential for the epidemiologist to have a grasp of the issues involved with different measurement tools. One tool that is increasingly being used for measuring biomarkers in epidemiological cohorts is mass spectrometry (MS), because of the high specificity and sensitivity of MS-based methods and the expanding range of biomarkers that can be measured. Further, the ability of MS to quantify many biomarkers simultaneously is advantageous compared to single-biomarker methods. However, as with all methods used to measure biomarkers, there are a number of pitfalls to consider which may have an impact on results when used in epidemiology. In this review we discuss the use of MS for biomarker analyses, focusing on metabolites and their application and potential issues related to large-scale epidemiology studies, the use of MS "omics" approaches for biomarker discovery, and how MS-based results can be used for increasing biological knowledge gained from epidemiological studies. Better understanding of the possibilities and possible problems related to MS-based measurements will help the epidemiologist in their discussions with analytical chemists and lead to the use of the most appropriate statistical tools for these data.
Stephan, Carl N
2014-03-01
By pooling independent study means (x¯), the T-Tables use the central limit theorem and law of large numbers to average out study-specific sampling bias and instrument errors and, in turn, triangulate upon human population means (μ). Since their first publication in 2008, new data from >2660 adults have been collected (c.30% of the original sample) making a review of the T-Table's robustness timely. Updated grand means show that the new data have negligible impact on the previously published statistics: maximum change = 1.7 mm at gonion; and ≤1 mm at 93% of all landmarks measured. This confirms the utility of the 2008 T-Table as a proxy to soft tissue depth population means and, together with updated sample sizes (8851 individuals at pogonion), earmarks the 2013 T-Table as the premier mean facial soft tissue depth standard for craniofacial identification casework. The utility of the T-Table, in comparison with shorths and 75-shormaxes, is also discussed. © 2013 American Academy of Forensic Sciences.
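The pooling idea behind the T-Tables, averaging independent study means so that study-specific sampling bias and instrument errors cancel, reduces to a sample-size-weighted grand mean. A minimal sketch with hypothetical depths follows; the published tables involve considerably more bookkeeping (landmark definitions, measurement-method corrections, and so on).

```python
def pooled_mean(study_means, study_ns):
    """Sample-size-weighted grand mean across independent studies -- the
    pooling idea behind the T-Tables (study x-bar values triangulating on
    the population mean mu). Figures below are hypothetical."""
    total_n = sum(study_ns)
    return sum(m * n for m, n in zip(study_means, study_ns)) / total_n

# Hypothetical soft tissue depths (mm) at one landmark from three studies
means = [5.2, 5.6, 5.0]
ns = [120, 300, 80]
print(round(pooled_mean(means, ns), 2))  # 5.41
```

By the law of large numbers, adding further studies moves this pooled estimate toward the population mean, which is why the >2660 new adults shifted the grand means so little.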
Directory of Open Access Journals (Sweden)
Chunrong Mi
2017-01-01
Full Text Available Species distribution models (SDMs) have become an essential tool in ecology, biogeography, evolution and, more recently, in conservation biology. How to generalize species distributions in large undersampled areas, especially with few samples, is a fundamental issue of SDMs. In order to explore this issue, we used the best available presence records for the Hooded Crane (Grus monacha, n = 33), White-naped Crane (Grus vipio, n = 40), and Black-necked Crane (Grus nigricollis, n = 75) in China as three case studies, employing four powerful and commonly used machine learning algorithms to map the breeding distributions of the three species: TreeNet (Stochastic Gradient Boosting, Boosted Regression Tree Model), Random Forest, CART (Classification and Regression Tree) and Maxent (Maximum Entropy Models). In addition, we developed an ensemble forecast by averaging the predicted probabilities of the above four models. Commonly used model performance metrics (Area under ROC (AUC) and true skill statistic (TSS)) were employed to evaluate model accuracy. The latest satellite tracking data and compiled literature data were used as two independent testing datasets to confront model predictions. We found Random Forest demonstrated the best performance for most assessment methods, provided a better model fit to the testing data, and achieved better species range maps for each crane species in undersampled areas. Random Forest has been generally available for more than 20 years and has been known to perform extremely well in ecological predictions. However, while increasingly on the rise, its potential is still widely underused in conservation, (spatial) ecological applications and for inference. Our results show that it informs ecological and biogeographical theories as well as being suitable for conservation applications, specifically when the study area is undersampled. This method helps to save model-selection time and effort, and allows robust and rapid
Mi, Chunrong; Huettmann, Falk; Guo, Yumin; Han, Xuesong; Wen, Lijia
2017-01-01
Species distribution models (SDMs) have become an essential tool in ecology, biogeography, evolution and, more recently, in conservation biology. How to generalize species distributions in large undersampled areas, especially with few samples, is a fundamental issue of SDMs. In order to explore this issue, we used the best available presence records for the Hooded Crane ( Grus monacha , n = 33), White-naped Crane ( Grus vipio , n = 40), and Black-necked Crane ( Grus nigricollis , n = 75) in China as three case studies, employing four powerful and commonly used machine learning algorithms to map the breeding distributions of the three species: TreeNet (Stochastic Gradient Boosting, Boosted Regression Tree Model), Random Forest, CART (Classification and Regression Tree) and Maxent (Maximum Entropy Models). In addition, we developed an ensemble forecast by averaging the predicted probabilities of the above four models. Commonly used model performance metrics (Area under ROC (AUC) and true skill statistic (TSS)) were employed to evaluate model accuracy. The latest satellite tracking data and compiled literature data were used as two independent testing datasets to confront model predictions. We found Random Forest demonstrated the best performance for most assessment methods, provided a better model fit to the testing data, and achieved better species range maps for each crane species in undersampled areas. Random Forest has been generally available for more than 20 years and has been known to perform extremely well in ecological predictions. However, while increasingly on the rise, its potential is still widely underused in conservation, (spatial) ecological applications and for inference. Our results show that it informs ecological and biogeographical theories as well as being suitable for conservation applications, specifically when the study area is undersampled. This method helps to save model-selection time and effort, and allows robust and rapid
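The ensemble forecast and the TSS metric described in this abstract can be sketched compactly: the ensemble is a cell-by-cell average of each model's predicted occurrence probability, and TSS is sensitivity + specificity − 1 on binarized predictions. The toy probabilities below are hypothetical, and the 0.5 threshold is an assumption, not something stated in the abstract.

```python
import numpy as np

def ensemble_mean(prob_maps):
    """Average the predicted occurrence probabilities of several SDMs
    (e.g. TreeNet, Random Forest, CART, Maxent) cell by cell."""
    return np.mean(np.stack(prob_maps), axis=0)

def true_skill_statistic(y_true, y_prob, threshold=0.5):
    """TSS = sensitivity + specificity - 1 for binarized predictions."""
    y_pred = y_prob >= threshold
    tp = np.sum(y_pred & (y_true == 1)); fn = np.sum(~y_pred & (y_true == 1))
    tn = np.sum(~y_pred & (y_true == 0)); fp = np.sum(y_pred & (y_true == 0))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens + spec - 1.0

# Hypothetical probabilities from four models over six grid cells
models = [np.array([.9, .8, .7, .3, .2, .1]),
          np.array([.8, .9, .6, .4, .1, .2]),
          np.array([.7, .7, .8, .2, .3, .1]),
          np.array([.9, .6, .7, .3, .2, .2])]
ens = ensemble_mean(models)
truth = np.array([1, 1, 1, 0, 0, 0])
print(true_skill_statistic(truth, ens))  # 1.0 on this toy grid
```

TSS ranges from −1 to +1, with 0 meaning no better than random; unlike overall accuracy it is insensitive to prevalence, which is why it is favored alongside AUC for presence/absence maps.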
Andrews, George E
1994-01-01
Although mathematics majors are usually conversant with number theory by the time they have completed a course in abstract algebra, other undergraduates, especially those in education and the liberal arts, often need a more basic introduction to the topic. In this book the author solves the problem of maintaining the interest of students at both levels by offering a combinatorial approach to elementary number theory. In studying number theory from such a perspective, mathematics majors are spared repetition and provided with new insights, while other students benefit from the consequent simpl
Schenk, Janina; Hohberg, Karin; Helder, Hans; Ristau, Kai; Traunspurger, Walter
2017-01-01
Reliable and well-developed DNA barcode databases are indispensable for the identification of microscopic life. However, effectiveness of molecular barcoding in identifying terrestrial specimens, and nematodes in particular, has received little attention. In this study, ca 600 ribosomal large
Directory of Open Access Journals (Sweden)
Kelsie H. Okamura
2018-01-01
Full Text Available Objective: Public-sector behavioral health systems seeking to implement evidence-based treatments (EBTs) may face challenges selecting EBTs given their limited resources. This study describes and illustrates one method to calculate cost related to training and consultation to assist system-level decisions about which EBTs to select. Methods: Training, consultation, and indirect labor costs were calculated for seven commonly implemented EBTs. Using extant literature, we then estimated the diagnoses and populations for which each EBT was indicated. Diagnostic and demographic information from Medicaid claims data were obtained from a large behavioral health payer organization and used to estimate the number of covered people with whom the EBT could be used and to calculate implementation-associated costs per consumer. Results: Findings suggest substantial cost to therapists and service systems related to EBT training and consultation. Training and consultation costs varied by EBT, from Dialectical Behavior Therapy at $238.07 to Cognitive Behavioral Therapy at $0.18 per potential consumer served. Total cost did not correspond with the number of prospective consumers served by an EBT. Conclusion: A cost metric that accounts for the prospective recipients of a given EBT within a given population may provide insight into how systems should prioritize training efforts. Future policy should consider the financial burden of EBT implementation in relation to the context of the population being served and begin a dialog in creating incentives for EBT use.
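The cost metric described above, training-related cost spread over the prospective consumer base, is arithmetically simple. A hedged sketch with hypothetical figures (the abstract does not give the cost breakdown for any specific EBT beyond the two per-consumer endpoints):

```python
def cost_per_potential_consumer(training, consultation, indirect_labor, n_eligible):
    """Implementation cost metric sketched from the description above:
    total training-related cost divided by the number of covered people
    for whom the EBT is indicated. All figures below are hypothetical."""
    return (training + consultation + indirect_labor) / n_eligible

# Hypothetical: an EBT costing $50,000 to stand up, indicated for 25,000
# covered consumers, comes to $2.00 per potential consumer.
print(cost_per_potential_consumer(30000, 15000, 5000, 25000))  # 2.0
```

Because the denominator varies enormously across diagnoses, an expensive-to-train EBT with a broad indication can still rank cheaper per consumer than a low-cost, narrowly indicated one, which is the paper's central point.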
International Nuclear Information System (INIS)
Hey, J D
2014-01-01
As a sequel to an earlier study (Hey 2009 J. Phys. B: At. Mol. Opt. Phys. 42 125701), we consider further the application of the line strength formula derived by Watson (2006 J. Phys. B: At. Mol. Opt. Phys. 39 L291) to transitions arising from states of very high principal quantum number in hydrogenic atoms and ions (Rydberg–Rydberg transitions, n > 1000). It is shown how apparent difficulties associated with the use of recurrence relations, derived (Hey 2006 J. Phys. B: At. Mol. Opt. Phys. 39 2641) by the ladder operator technique of Infeld and Hull (1951 Rev. Mod. Phys. 23 21), may be eliminated by a very simple numerical device, whereby this method may readily be applied up to n ≈ 10 000. Beyond this range, programming of the method may entail greater care and complexity. The use of the numerically efficient McLean–Watson formula for such cases is again illustrated by the determination of radiative lifetimes and comparison of present results with those from an asymptotic formula. The question of the influence on the results of the omission or inclusion of fine structure is considered by comparison with calculations based on the standard Condon–Shortley line strength formula. Interest in this work on the radial matrix elements for large n and n′ is related to measurements of radio recombination lines from tenuous space plasmas, e.g. Stepkin et al (2007 Mon. Not. R. Astron. Soc. 374 852), Bell et al (2011 Astrophys. Space Sci. 333 377), to the calculation of electron impact broadening parameters for such spectra (Watson 2006 J. Phys. B: At. Mol. Opt. Phys. 39 1889) and comparison with other theoretical methods (Peach 2014 Adv. Space Res. in press), to the modelling of physical processes in H II regions (Roshi et al 2012 Astrophys. J. 749 49), and the evaluation of bound–bound transitions from states of high n during primordial cosmological recombination (Grin and Hirata 2010 Phys. Rev. D 81 083005, Ali-Haïmoud and Hirata 2010 Phys. Rev. D 82 063521)
Tajalizadehkhoob, S.
2018-01-01
In theory, hosting providers can play an important role in fighting cybercrime and misuse. This is because many online threats, be they high-profile or mundane, use online storage infrastructure maintained by hosting providers at the core of their criminal operations.
However, in practice, we
J. Vogt (Julia); K. Bengesser (Kathrin); K.B.M. Claes (Kathleen B.M.); K. Wimmer (Katharina); V.-F. Mautner (Victor-Felix); R. van Minkelen (Rick); E. Legius (Eric); H. Brems (Hilde); M. Upadhyaya (Meena); J. Högel (Josef); C. Lazaro (Conxi); T. Rosenbaum (Thorsten); S. Bammert (Simone); L. Messiaen (Ludwine); D.N. Cooper (David); H. Kehrer-Sawatzki (Hildegard)
2014-01-01
textabstractBackground: Genomic disorders are caused by copy number changes that may exhibit recurrent breakpoints processed by nonallelic homologous recombination. However, region-specific disease-associated copy number changes have also been observed which exhibit non-recurrent breakpoints. The
International Nuclear Information System (INIS)
Dalton, G.R.; Tulenko, J.S.; Zhou, X.
1990-01-01
The University of Florida is part of a multiuniversity research effort, sponsored by the US Department of Energy which is under way to develop and deploy an advanced semi-autonomous robotic system for use in nuclear power stations. This paper reports on the development of the computer tools necessary to gain convenient graphic access to the intelligence implicit in a large complex data base such as that in a nuclear reactor plant. This program is integrated as a man/machine interface within the larger context of the total computerized robotic planning and control system. The portion of the project described here addresses the connection between the three-dimensional displays on an interactive graphic workstation and a data-base computer running a large data-base server program. Programming the two computers to work together to accept graphic queries and return answers on the graphic workstation is a key part of the interactive capability developed
Weiss, Stephan; Wei, Ping; Ahlers, Guenter
2015-11-01
Turbulent thermal convection under rotation shows a remarkable variety of different flow states. The Nusselt number (Nu) at slow rotation rates (expressed as the dimensionless inverse Rossby number 1/Ro), for example, is not a monotonic function of 1/Ro. Different 1/Ro-ranges can be observed with different slopes ∂Nu / ∂ (1 / Ro) . Some of these ranges are connected by sharp transitions where ∂Nu / ∂ (1 / Ro) changes discontinuously. We investigate different regimes in cylindrical samples of aspect ratio Γ = 1 by measuring temperatures at the sidewall of the sample for various Prandtl numbers in the range 3 Deutsche Forschungsgemeinschaft.
Directory of Open Access Journals (Sweden)
Theodore M. Porter
2012-12-01
Full Text Available The struggle over cure rate measures in nineteenth-century asylums provides an exemplary instance of how, when used for official assessments of institutions, these numbers become sites of contestation. The evasion of goals and corruption of measures tends to make these numbers "funny" in the sense of becoming dishonest, while the mismatch between boring, technical appearances and cunning backstage manipulations supplies dark humor. The dangers are evident in recent efforts to decentralize the functions of governments and corporations using incentives based on quantified targets.
Murty, M Ram
2014-01-01
This book provides an introduction to the topic of transcendental numbers for upper-level undergraduate and graduate students. The text is constructed to support a full course on the subject, including descriptions of both relevant theorems and their applications. While the first part of the book focuses on introducing key concepts, the second part presents more complex material, including applications of Baker’s theorem, Schanuel’s conjecture, and Schneider’s theorem. These later chapters may be of interest to researchers interested in examining the relationship between transcendence and L-functions. Readers of this text should possess basic knowledge of complex analysis and elementary algebraic number theory.
Barnes, John
2016-01-01
In this intriguing book, John Barnes takes us on a journey through aspects of numbers much as he took us on a geometrical journey in Gems of Geometry. Similarly originating from a series of lectures for adult students at Reading and Oxford University, this book touches on a variety of amusing and fascinating topics regarding numbers and their uses both ancient and modern. The author intrigues and challenges his audience with both fundamental number topics such as prime numbers and cryptography, and themes of daily needs and pleasures such as counting one's assets, keeping track of time, and enjoying music. Puzzles and exercises at the end of each lecture offer additional inspiration, and numerous illustrations accompany the reader. Furthermore, a number of appendices provide in-depth insights into diverse topics such as Pascal's triangle, the Rubik cube, Mersenne's curious keyboards, and many others. A running theme is the question of what our favourite number is. Written in an engaging and witty sty...
DEFF Research Database (Denmark)
Imam, Ayesha; Jin, Guang; Sillesen, Martin
2015-01-01
Abstract We have previously shown that early treatment with fresh frozen plasma (FFP) is neuroprotective in a swine model of hemorrhagic shock (HS) and traumatic brain injury (TBI). However, it remains unknown whether this strategy would be beneficial in a more clinical polytrauma model. Yorkshire...... as well as cerebral perfusion pressures. Levels of cerebral eNOS were higher in the FFP-treated group (852.9 vs. 816.4 ng/mL; p=0.03), but no differences in brain levels of ET-1 were observed. Early administration of FFP is neuroprotective in a complex, large animal model of polytrauma, hemorrhage...
Directory of Open Access Journals (Sweden)
Yang Ding
2016-09-01
Full Text Available Like mammals, fish possess interferon regulatory factor 3 (IRF3)/IRF7-dependent type I IFN responses, but the exact mechanism by which IRF3/IRF7 regulate the type I IFNs remains largely unknown. In this study, we identified two type I IFNs in the perciform fish large yellow croaker Larimichthys crocea, one of which belongs to the fish IFNd subgroup, and the other is assigned to a novel subgroup of group I IFNs in fish, tentatively termed IFNh. The two IFN genes are constitutively expressed in all examined tissues, but with varied expression levels. Both IFN genes can be rapidly induced in head kidney and spleen tissues by polyinosinic-polycytidylic acid. The recombinant IFNh was shown to be more potent than IFNd in triggering a rapid induction of the antiviral genes MxA and PKR, suggesting that they may play distinct roles in regulating early antiviral immunity. Strikingly, IFNd, but not IFNh, could induce the gene expression of itself and IFNh through a positive feedback loop mediated by the IFNd-dependent activation of IRF3 and IRF7. Furthermore, our data demonstrate that the induction of IFNd can be enhanced by the dimeric formation of IRF3 and IRF7, while the IFNh expression mainly involves IRF3. Taken together, our findings demonstrate that the IFN responses are diverse in fish and are likely to be regulated by distinct mechanisms.
DEFF Research Database (Denmark)
Ingason, Andrés; Giegling, Ina; Cichon, Sven
2010-01-01
The Abelson helper integration site 1 (AHI1) gene locus on chromosome 6q23 is among a group of candidate loci for schizophrenia susceptibility that were initially identified by linkage followed by linkage disequilibrium mapping, and subsequent replication of the association in an independent sample....... Here, we present results of a replication study of AHI1 locus markers, previously implicated in schizophrenia, in a large European sample (in total 3907 affected and 7429 controls). Furthermore, we perform a meta-analysis of the implicated markers in 4496 affected and 18,920 controls. Both...... as the neighbouring phosphodiesterase 7B (PDE7B)-may be considered candidates for involvement in the genetic aetiology of schizophrenia....
International Nuclear Information System (INIS)
Puntambekar, A.M.; Karmarkar, M.G.
2003-01-01
Superconducting (Sc) spool correctors of different types, namely Sextupole (MCS), Decapole (MCD) and Octupole (MCO), are incorporated in each of the main dipoles of the Large Hadron Collider (LHC). In all, 2464 MCS and 1232 MCDO magnets are required to equip all 1232 dipoles of the LHC. The coils, wound from thin rectangular-section Sc wires, are the heart of the magnet assembly, and its performance in terms of field quality and cold quench training largely depends on the precise and robust construction of these coils. Under the DAE-CERN collaboration, CAT was entrusted with the responsibility of making these magnets for the LHC. Starting with the development of manual fixtures and prototyping using soldering, more advanced special Automatic Coil Winding and Ultrasonic Welding (USW) systems for the production of large numbers of coils and magnets were built at CAT. The paper briefly describes the various developments in this area. (author)
Energy Technology Data Exchange (ETDEWEB)
Cooperstein, G; Mosher, D; Stephanakis, S J; Weber, B V; Young, F C [Naval Research Laboratory, Washington, DC (United States); Swanekamp, S B [JAYCOR, Vienna, VA (United States)
1997-12-31
Backscattered electrons from anodes with high-atomic-number substrates cause early-time anode-plasma formation from the surface layer leading to faster, more intense electron beam pinching, and lower diode impedance. A simple derivation of Child-Langmuir current from a thin hollow cathode shows the same dependence on the diode aspect ratio as critical current. Using this fact, it is shown that the diode voltage and current follow relativistic Child-Langmuir theory until the anode plasma is formed, and then follows critical current after the beam pinches. With thin hollow cathodes, electron beam pinching can be suppressed at low voltages (< 800 kV) even for high currents and high-atomic-number anodes. Electron beam pinching can also be suppressed at high voltages for low-atomic-number anodes as long as the electron current densities remain below the plasma turn-on threshold. (author). 8 figs., 2 refs.
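The Child-Langmuir current mentioned above can be illustrated with its familiar nonrelativistic planar form; the paper itself uses the relativistic expression and a hollow-cathode aspect-ratio scaling, neither of which is reproduced in this sketch.

```python
import math

EPS0 = 8.854e-12    # vacuum permittivity, F/m
E_CHARGE = 1.602e-19  # electron charge, C
M_E = 9.109e-31     # electron mass, kg

def child_langmuir_j(V, d):
    """Nonrelativistic Child-Langmuir space-charge-limited current density
    (A/m^2) for gap voltage V (volts) across a planar gap d (m).
    Sketch only -- the diode physics in the paper is relativistic."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) * V**1.5 / d**2

# 1 MV across a 1 cm gap: ~2.33e7 A/m^2 (the classic 2.33 uA/V^1.5 scaling)
print(child_langmuir_j(1e6, 0.01))
```

The V^{3/2}/d^2 scaling is the baseline against which the abstract's observation makes sense: the diode follows (relativistic) Child-Langmuir behavior until anode plasma forms, after which the current jumps to the critical-current branch.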
DEFF Research Database (Denmark)
Hansen, S K; Gjesing, A P; Rasmussen, S K
2004-01-01
The class III allele of the variable-number-of-tandem-repeats polymorphism located 5' of the insulin gene (INS-VNTR) has been associated with Type 2 diabetes and altered birthweight. It has also been suggested, although inconsistently, that the class III allele plays a role in glucose-induced ins...
Ancellet, Gerard; Daskalakis, Nikos; Raut, Jean Christophe; Quennehen, Boris; Ravetta, Francois; Hair, Jonathan; Tarasick, David; Schlager, Hans; Weinheimer, Andrew J.; Thompson, Anne M.;
2016-01-01
The goals of the paper are to: (1) present tropospheric ozone (O3) climatologies for summer 2008 based on a large number of measurements made during the International Polar Year, when the Polar Study using Aircraft, Remote Sensing, Surface Measurements, and Models of Climate Chemistry, Aerosols, and Transport (POLARCAT) campaigns were conducted; and (2) investigate the processes that determine O3 concentrations in two different regions (Canada and Greenland) that were thoroughly studied using measurements from 3 aircraft and 7 ozonesonde stations. This paper provides an integrated analysis of these observations, and the latitudinal and vertical variability of tropospheric ozone north of 55°N during this period is discussed using a regional model (WRF-Chem). Ozone, CO and potential vorticity (PV) distributions are extracted from the simulation at the measurement locations. The model is able to reproduce the O3 latitudinal and vertical variability, but a negative O3 bias of 6-15 ppbv is found in the free troposphere above 4 km, especially over Canada. Average ozone concentrations are of the order of 65 ppbv at altitudes above 4 km both over Canada and Greenland, while they are less than 50 ppbv in the lower troposphere. The relative influence of stratosphere-troposphere exchange (STE) and of ozone production related to local biomass burning (BB) emissions is discussed using differences between average values of O3, CO and PV for Southern and Northern Canada or Greenland and two vertical ranges in the troposphere: 0-4 km and 4-8 km. For Canada, the model CO distribution and the weak correlation (less than 30%) of O3 with PV suggest that STE is not the major contributor to average tropospheric ozone at latitudes below 70°N, because local BB emissions were significant during the summer 2008 period. Conversely, over Greenland significant STE is found, according to the better O3 versus PV
Planning Alternative Organizational Frameworks For a Large Scale Educational Telecommunications System Served by Fixed/Broadcast Satellites. Memorandum Number 73/3.
Walkmeyer, John
Considerations relating to the design of organizational structures for development and control of large scale educational telecommunications systems using satellites are explored. The first part of the document deals with four issues of system-wide concern. The first is user accessibility to the system, including proximity to entry points, ability…
DEFF Research Database (Denmark)
El-Galaly, Tarec Christoffer; Villa, Diego; Michaelsen, Thomas Yssing
2017-01-01
Purpose Development of secondary central nervous system involvement (SCNS) in patients with diffuse large B-cell lymphoma is associated with poor outcomes. The CNS International Prognostic Index (CNS-IPI) has been proposed for identifying patients at greatest risk, but the optimal model is unknown.
Kiers, Henk A.L.; Marchetti, G.M.
1994-01-01
Recently, a number of methods have been proposed for the exploratory analysis of mixtures of qualitative and quantitative variables. In these methods, an object-by-object similarity matrix is constructed for each variable, and these matrices are subsequently analyzed by means of three-way methods like
Maeda, Naohiro; Narukawa, Masataka; Ishimaru, Yoshiro; Yamamoto, Kurumi; Misaka, Takumi; Abe, Keiko
2017-05-01
The connections between taste receptor cells (TRCs) and innervating gustatory neurons are formed in a mutually dependent manner during development. To investigate whether a change in the ratio of cell types that compose taste buds influences the number of innervating gustatory neurons, we analyzed the proportion of gustatory neurons that transmit sour taste signals in adult Skn-1a-/- mice, in which the number of sour TRCs is greatly increased. We generated polycystic kidney disease 1 like 3-wheat germ agglutinin (pkd1l3-WGA)/Skn-1a+/+ and pkd1l3-WGA/Skn-1a-/- mice by crossing Skn-1a-/- mice and pkd1l3-WGA transgenic mice, in which the neural pathways of sour taste signals can be visualized. The number of WGA-positive cells in the circumvallate papillae is 3-fold higher in taste buds of pkd1l3-WGA/Skn-1a-/- mice relative to pkd1l3-WGA/Skn-1a+/+ mice. Intriguingly, the ratio of WGA-positive neurons to P2X2-expressing gustatory neurons in nodose/petrosal ganglia was similar between pkd1l3-WGA/Skn-1a+/+ and pkd1l3-WGA/Skn-1a-/- mice. In conclusion, an alteration in the ratio of cell types that compose taste buds does not influence the number of gustatory neurons that transmit sour taste signals. Copyright © 2017. Published by Elsevier B.V.
Sievert, K-D; Chapple, C; Herschorn, S; Joshi, M; Zhou, J; Nardo, C; Nitti, V W
2014-10-01
A prespecified pooled analysis of two placebo-controlled, phase 3 trials evaluated whether the number of prior anticholinergics used or the reason for their discontinuation affected the treatment response to onabotulinumtoxinA 100U in overactive bladder (OAB) patients with urinary incontinence (UI). Patients with symptoms of OAB received intradetrusor injections of onabotulinumtoxinA 100U or placebo, sparing the trigone. Change from baseline at week 12 in UI episodes/day, the proportion of patients reporting a positive response ('greatly improved' or 'improved') on the treatment benefit scale (TBS), micturition and urgency were evaluated by number of prior anticholinergics (1, 2 or ≥ 3) and reason for their discontinuation (insufficient efficacy or side effects). Adverse events (AE) were assessed. Patients had taken an average of 2.4 anticholinergics before study enrolment. OnabotulinumtoxinA reduced UI episodes/day from baseline vs. placebo, regardless of the number of prior anticholinergics (-2.82 vs. -1.52 for one prior anticholinergic; -2.58 vs. -0.58 for two; and -2.92 vs. -0.73 for three or more; all comparisons statistically significant) and regardless of the reason for discontinuation. OnabotulinumtoxinA reduced the episodes of urgency and frequency of micturition vs. placebo in all groups. Treatment was well tolerated, with a comparable incidence of AEs in all groups. In patients with symptoms of OAB who were inadequately managed by one or more anticholinergics, onabotulinumtoxinA 100U provided significant and similar treatment benefit and safety profile regardless of the number of prior anticholinergics used or reason for inadequate management of OAB. ClinicalTrials.gov: NCT00910845, NCT00910520. © 2014 The Authors. International Journal of Clinical Practice published by John Wiley & Sons Ltd.
DEFF Research Database (Denmark)
Madarasz, Wendy; Manzardo, Ann; Mortensen, Erik Lykke
2012-01-01
Objective: Psychiatric comorbidities are common among psychiatric patients and typically associated with poorer clinical prognoses. Subjects of a large Danish birth cohort were used to study the relation between mortality and co-occurring psychiatric diagnoses. Method: We searched the Danish Central Psychiatric Research Registry for 8109 birth cohort members aged 45 years. Lifetime psychiatric diagnoses (International Classification of Diseases, Revision 10, group F codes, Mental and Behavioural Disorders, and one Z code) for identified subjects were organized into 14 mutually exclusive...
Masuta, Taisuke; Shimizu, Koichiro; Yokoyama, Akihiko
In Japan, from the viewpoints of global warming countermeasures and energy security, it is expected that a smart grid will be established: a power system into which a large amount of generation from renewable energy sources, such as wind power and photovoltaics, can be integrated. Measures for power system stability and reliability are necessary because large-scale integration of these renewable sources causes problems such as frequency fluctuation and distribution voltage rise, and the Battery Energy Storage System (BESS) is one effective solution. Because of the high cost of the BESS, our research group has studied the application of controllable loads, such as Heat Pump Water Heaters (HPWHs) and Electric Vehicles (EVs), to power system control in order to reduce the required BESS capacity. This paper proposes a new coordinated Load Frequency Control (LFC) method for the conventional power plants, the BESS, the HPWHs, and the EVs. The performance of the proposed LFC method is evaluated by numerical simulations on a power system model with a large integration of wind power and photovoltaic generation.
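The role of fast-responding storage or controllable load in LFC can be illustrated with a minimal one-bus frequency model (a hedged sketch; the inertia constant `H`, damping `D`, and proportional BESS gain are illustrative values, not taken from the paper):

```python
import numpy as np

H, D = 5.0, 1.0                            # inertia constant (s) and load damping (pu), assumed
dt = 0.1
t = np.arange(0.0, 60.0, dt)
dP = 0.05 * np.sin(2 * np.pi * t / 30.0)   # slow renewable output fluctuation (pu)

def simulate(K_bess):
    """Integrate the swing equation 2H*df/dt = dP + P_bess - D*f."""
    f, out = 0.0, []
    for p in dP:
        p_bess = -K_bess * f               # BESS (or controllable load) counteracts the deviation
        f += dt * (p + p_bess - D * f) / (2 * H)
        out.append(f)
    return np.array(out)

no_bess = simulate(0.0)
with_bess = simulate(20.0)
# frequency excursions shrink sharply when storage participates in LFC
```

Swapping the proportional BESS law for HPWH/EV setpoint shifts changes only the `p_bess` line, which is why controllable loads can substitute for part of the BESS capacity.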
Denman, William T; Tuason, Pacifico M; Ahmed, Mohammed I; Brennen, Loralie M; Cepeda, M Soledad; Carr, Daniel B
2007-02-01
Pediatric sedation is of paramount importance but can be challenging. Fear and anticipatory anxiety before invasive procedures often lead to uncooperativeness. A novel device (PediSedate) provides sedation through a combination of inhaled nitrous oxide and distraction (a video game). We evaluated the acceptability and safety of the PediSedate device in children. We enrolled children between 3 and 9 years old who were scheduled to undergo surgical procedures that required general inhalational anesthesia. After the device was applied, each child played a video game while listening to the audio portion of the game through the earphones. Nitrous oxide in oxygen was administered via the nasal piece of the headset, starting at 50% and increasing to 70% in 10% increments every 8 min. Treatment failures, vital signs, arterial oxygen saturation, depth of sedation, airway patency, side effects, acceptance of the device and parental satisfaction were all evaluated. Of 100 children included, treatment failure occurred in 18%, mainly because of poor tolerance of the device. At least 96% of the children who completed the study exhibited an excellent degree of sedation, 22% had side effects, and none experienced serious airway obstruction. Nausea and vomiting were the most common side effects, and no patients had hemodynamic instability. The PediSedate device combines nonpharmacologic and pharmacologic methods of sedation. Most of the children we evaluated were able to tolerate the PediSedate device and achieved an adequate degree of sedation.
Lee, M.; Leiter, K.; Eisner, C.; Breuer, A.; Wang, X.
2017-09-01
In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases, due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort against most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev filter subspace (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× fewer matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold) where the wanted spectrum is quite narrow compared to the full spectrum, we observe 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.
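A minimal block Davidson iteration shows the blocked structure the paper builds on (a simplified sketch: it uses a diagonal preconditioner in place of the paper's inner J-D correction-equation solves, and all names here are illustrative):

```python
import numpy as np

def block_davidson(A, k, block=4, tol=1e-8, max_iter=200):
    """Find the k lowest eigenpairs of a symmetric matrix A using blocked expansions."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    V = np.linalg.qr(rng.standard_normal((n, block)))[0]
    d = np.diag(A)
    for _ in range(max_iter):
        H = V.T @ A @ V                      # Rayleigh-Ritz projection
        w, S = np.linalg.eigh(H)
        X = V @ S[:, :k]                     # Ritz vectors for the k lowest values
        R = A @ X - X * w[:k]                # blocked residuals (one blocked matvec)
        if np.linalg.norm(R) < tol:
            return w[:k], X
        denom = d[:, None] - w[:k][None, :]
        denom[np.abs(denom) < 1e-6] = 1e-6
        T = R / denom                        # diagonal (Jacobi) correction, not J-D's solve
        if V.shape[1] + T.shape[1] > n // 2:
            V = X                            # restart from current Ritz vectors
        V = np.linalg.qr(np.hstack([V, T]))[0]
    return w[:k], X

# demo on a diagonally dominant symmetric matrix (Davidson's classic regime)
A = np.diag(np.arange(1.0, 101.0))
pert = np.random.default_rng(1).standard_normal((100, 100)) * 0.01
A = A + (pert + pert.T) / 2
vals, vecs = block_davidson(A, 4)
```

The blocked residual update `A @ X` is where the paper's "blocked matrix-vector multiplies" saving appears: one multiply serves the whole block instead of one per eigenpair.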
International Nuclear Information System (INIS)
Le Quere, P.; Weisman, C.; Paillere, H.; Vierendeels, J.; Dick, E.; Becker, R.; Braack, M.; Locke, J.
2005-01-01
Heat transfer by natural convection and conduction in enclosures occurs in numerous practical situations, including the cooling of nuclear reactors. For large temperature differences, the flow becomes compressible, with a strong coupling between the continuity, momentum and energy equations through the equation of state, and its properties (viscosity, heat conductivity) also vary with temperature, making the Boussinesq flow approximation inappropriate and inaccurate. There are very few reference solutions in the literature on non-Boussinesq natural convection flows. We propose here a test case problem which extends the well-known De Vahl Davis differentially heated square cavity problem to the case of large temperature differences for which the Boussinesq approximation is no longer valid. The paper is split in two parts: in this first part, we propose as yet unpublished reference solutions for cases characterized by a non-dimensional temperature difference of 0.6, Ra = 10^6 (constant property and variable property cases) and Ra = 10^7 (variable property case). These reference solutions were produced after a first international workshop organized by CEA and LIMSI in January 2000, in which the above authors volunteered to produce accurate numerical solutions from which the present reference solutions could be established. (authors)
International Nuclear Information System (INIS)
Husin Wagiran; Wan Mohd Nasir Wan Kadir
1997-01-01
In neutron scattering processes, the effect of multiple scattering is to cause an effective increase in the measured cross-sections, owing to the increased probability of neutron scattering interactions in the sample. Analysis of how the effective cross-section varies with thickness is very complicated because of complicated sample geometries and the variation of the scattering cross-section with energy. The Monte Carlo method is one possible method for treating multiple scattering processes in an extended sample. In this method many approximations have to be made, and accurate microscopic cross-section data are needed at various angles. In the present work, a Monte Carlo simulation programme suitable for a small computer was developed. The programme was capable of predicting the number of neutrons scattered from aluminium samples of various thicknesses at all angles between 0° and 360° in 10° increments. To keep the programme simple and capable of running on a microcomputer in reasonable time, the calculations were done in a two-dimensional coordinate system. The number of neutrons predicted by this model shows good agreement with previous experimental results
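A two-dimensional multiple-scattering simulation of this kind can be sketched as follows (a toy model with made-up mean free path and absorption probability, not the paper's programme):

```python
import numpy as np

def simulate_slab(thickness, mfp=1.0, p_absorb=0.1, n=20000, seed=1):
    """Track neutrons through a 2-D slab: exponential free paths, isotropic scattering."""
    rng = np.random.default_rng(seed)
    single = multiple = 0
    for _ in range(n):
        x, theta, n_scat = 0.0, 0.0, 0           # enter travelling along +x
        while True:
            x += rng.exponential(mfp) * np.cos(theta)
            if x >= thickness or x < 0.0:        # escaped the slab
                break
            if rng.random() < p_absorb:          # absorbed: history ends
                n_scat = -1
                break
            theta = rng.uniform(0.0, 2 * np.pi)  # isotropic 2-D scatter
            n_scat += 1
        single += n_scat == 1
        multiple += n_scat > 1
    return single / n, multiple / n

s_thin, m_thin = simulate_slab(0.5)
s_thick, m_thick = simulate_slab(3.0)
# the multiple-scattering fraction grows with thickness, inflating measured cross-sections
```

Binning the exit direction `theta` into 10° sectors would reproduce the angle-resolved counts the abstract describes.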
Masuda, Y; Misztal, I; Tsuruta, S; Legarra, A; Aguilar, I; Lourenco, D A L; Fragomeni, B O; Lawlor, T J
2016-03-01
The objectives of this study were to develop and evaluate an efficient implementation of the computation of the inverse of the genomic relationship matrix with the recursion algorithm, called the algorithm for proven and young (APY), in single-step genomic BLUP. We validated genomic predictions for young bulls with more than 500,000 genotyped animals in final score for US Holsteins. Phenotypic data included 11,626,576 final scores on 7,093,380 US Holstein cows, and genotypes were available for 569,404 animals. Daughter deviations for young bulls with no classified daughters in 2009, but at least 30 classified daughters in 2014, were computed using all the phenotypic data. Genomic predictions for the same bulls were calculated with single-step genomic BLUP using phenotypes up to 2009. We calculated the inverse of the genomic relationship matrix, G_APY^(-1), based on a direct inversion of the genomic relationship matrix for a small subset of genotyped animals (core animals) and extended that information to noncore animals by recursion. We tested several sets of core animals, including 9,406 bulls with at least 1 classified daughter; 9,406 bulls and 1,052 classified dams of bulls; 9,406 bulls and 7,422 classified cows; and random samples of 5,000 to 30,000 animals. Validation reliability was assessed by the coefficient of determination from regression of daughter deviation on genomic predictions for the predicted young bulls. The reliabilities were 0.39 with 5,000 randomly chosen core animals; 0.45 with the 9,406 bulls and 7,422 cows as core animals; and 0.44 with the remaining sets. With phenotypes truncated in 2009 and the preconditioned conjugate gradient used to solve the mixed model equations, the number of rounds to convergence was 1,343 for core animals defined by bulls; 2,066 for bulls and cows; and at most 1,629 for 10,000 random animals. With complete phenotype data, the number of rounds decreased to 858, 1,299, and at most 1,092, respectively. Setting up G_APY^(-1
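The APY structure can be illustrated numerically: when noncore relationships are, by construction, linear combinations of core ones plus a diagonal residual, the recursion-based inverse is exact (a small synthetic sketch; the matrix sizes and values are illustrative, not Holstein data):

```python
import numpy as np

rng = np.random.default_rng(7)
nc, nn = 5, 8                          # core and noncore animal counts (toy sizes)
L = rng.standard_normal((nc, nc))
Gcc = L @ L.T + nc * np.eye(nc)        # SPD core block of G
P = 0.3 * rng.standard_normal((nc, nn))
Gcn = Gcc @ P                          # noncore relationships via recursion on core
m = rng.uniform(0.5, 1.5, nn)          # diagonal residual variances
Gnn = P.T @ Gcc @ P + np.diag(m)
G = np.block([[Gcc, Gcn], [Gcn.T, Gnn]])

# APY inverse: invert only the small core block, handle noncore by recursion
Gcc_inv = np.linalg.inv(Gcc)
T = np.hstack([-P.T, np.eye(nn)])      # note P = Gcc^-1 Gcn
Ginv_apy = np.zeros_like(G)
Ginv_apy[:nc, :nc] = Gcc_inv
Ginv_apy += T.T @ np.diag(1.0 / m) @ T
```

The cost driver is the inversion of `Gcc` only (core × core), plus diagonal work for the noncore animals, which is why APY scales to hundreds of thousands of genotypes.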
Kujawski, Joseph T.; Gliese, Ulrik B.; Cao, N. T.; Zeuch, M. A.; White, D.; Chornay, D. J; Lobell, J. V.; Avanov, L. A.; Barrie, A. C.; Mariano, A. J.;
2015-01-01
Each half of the Dual Electron Spectrometer (DES) of the Fast Plasma Investigation (FPI) on NASA's Magnetospheric MultiScale (MMS) mission utilizes a microchannel plate Chevron stack feeding 16 separate detection channels each with a dedicated anode and amplifier/discriminator chip. The desire to detect events on a single channel with a temporal spacing of 100 ns and a fixed dead-time drove our decision to use an amplifier/discriminator with a very fast (GHz class) front end. Since the inherent frequency response of each pulse in the output of the DES microchannel plate system also has frequency components above a GHz, this produced a number of design constraints not normally expected in electronic systems operating at peak speeds of 10 MHz. Additional constraints are imposed by the geometry of the instrument requiring all 16 channels along with each anode and amplifier/discriminator to be packaged in a relatively small space. We developed an electrical model for board level interactions between the detector channels to allow us to design a board topology which gave us the best detection sensitivity and lowest channel to channel crosstalk. The amplifier/discriminator output was designed to prevent the outputs from one channel from producing triggers on the inputs of other channels. A number of Radio Frequency design techniques were then applied to prevent signals from other subsystems (e.g. the high voltage power supply, command and data handling board, and Ultraviolet stimulation for the MCP) from generating false events. These techniques enabled us to operate the board at its highest sensitivity when operated in isolation and at very high sensitivity when placed into the overall system.
Large Diversity of Porcine Yersinia enterocolitica 4/O:3 in Eight European Countries Assessed by Multiple-Locus Variable-Number Tandem-Repeat Analysis.
Alakurtti, Sini; Keto-Timonen, Riikka; Virtanen, Sonja; Martínez, Pilar Ortiz; Laukkanen-Ninios, Riikka; Korkeala, Hannu
2016-06-01
A total of 253 multiple-locus variable-number tandem-repeat analysis (MLVA) types among 634 isolates were discovered while studying the genetic diversity of porcine Yersinia enterocolitica 4/O:3 isolates from eight European countries. Six variable-number tandem-repeat (VNTR) loci, V2A, V4, V5, V6, V7, and V9, were used to study the isolates from 82 farms in Belgium (n = 93, 7 farms), England (n = 41, 8 farms), Estonia (n = 106, 12 farms), Finland (n = 70, 13 farms), Italy (n = 111, 20 farms), Latvia (n = 66, 3 farms), Russia (n = 60, 10 farms), and Spain (n = 87, 9 farms). Cluster analysis revealed mainly country-specific clusters, and only one MLVA type, consisting of two isolates, was found in two countries: Russia and Italy. Farm-specific clusters were also discovered, but the same MLVA types could be found on different farms. Analysis of multiple isolates originating either from the same tonsils (n = 4) or from the same farm, but 6 months apart, revealed both identical and different MLVA types. MLVA showed a very good discriminatory ability, with a Simpson's discriminatory index (DI) of 0.989. DIs for VNTR loci V2A, V4, V5, V6, V7, and V9 were 0.916, 0.791, 0.901, 0.877, 0.912, and 0.785, respectively, when studying all isolates together, but variation was evident between isolates originating from different countries. Locus V4 in the Spanish isolates and locus V9 in the Latvian isolates did not differentiate the isolates (DI = 0.000), and locus V9 in the English isolates showed very low discriminatory power (DI = 0.049). The porcine Y. enterocolitica 4/O:3 isolates were diverse, but the variation in DI demonstrates that the well-discriminating loci V2A, V5, V6, and V7 should be included in the MLVA protocol when maximal discriminatory power is needed.
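The Simpson's discriminatory index reported above can be computed directly from MLVA type counts (standard formula; the example labels are made up):

```python
from collections import Counter

def simpsons_di(labels):
    """Simpson's index of diversity: D = 1 - sum n_i(n_i - 1) / (N(N - 1))."""
    counts = Counter(labels).values()
    N = sum(counts)
    return 1.0 - sum(n * (n - 1) for n in counts) / (N * (N - 1))

# toy typing result: four isolates share one MLVA type, the rest are unique
d = simpsons_di(["A", "A", "A", "A", "B", "C", "D", "E"])
```

A DI of 1.0 means every isolate gets a distinct type; 0.0 means the locus does not differentiate at all, exactly the behaviour reported for locus V4 in the Spanish isolates.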
Hemmann, Jethro L.; Saurel, Olivier; Ochsner, Andrea M.; Stodden, Barbara K.; Kiefer, Patrick; Milon, Alain; Vorholt, Julia A.
2016-01-01
Methylobacterium extorquens AM1 uses dedicated cofactors for one-carbon unit conversion. Based on the sequence identities of enzymes and activity determinations, a methanofuran analog was proposed to be involved in formaldehyde oxidation in Alphaproteobacteria. Here, we report the structure of the cofactor, which we termed methylofuran. Using an in vitro enzyme assay and LC-MS, methylofuran was identified in cell extracts and further purified. From the exact mass and MS-MS fragmentation pattern, the structure of the cofactor was determined to consist of a polyglutamic acid side chain linked to a core structure similar to the one present in archaeal methanofuran variants. NMR analyses showed that the core structure contains a furan ring. However, instead of the tyramine moiety that is present in methanofuran cofactors, a tyrosine residue is present in methylofuran, which was further confirmed by MS through the incorporation of a 13C-labeled precursor. Methylofuran was present as a mixture of different species with varying numbers of glutamic acid residues in the side chain ranging from 12 to 24. Notably, the glutamic acid residues were not solely γ-linked, as is the case for all known methanofurans, but were identified by NMR as a mixture of α- and γ-linked amino acids. Considering the unusual peptide chain, the elucidation of the structure presented here sets the basis for further research on this cofactor, which is probably the largest cofactor known so far. PMID:26895963
Directory of Open Access Journals (Sweden)
Olga I. Hristodulo
2015-01-01
Full Text Available The article covers the urgency of establishing a scientific and educational geoportal as a single data center for the Republic of Bashkortostan, providing quick access to a distributed network of geospatial data and geoservices for all responsible and interested parties. We considered the main tasks, functions and architecture of a scientific and educational geoportal for different types of users. We also carried out a comparative analysis of the basic technologies for the development of mapping services and information systems, which represent the major structural elements of geoportals. As an example, we considered information retrieval problems of the scientific and educational geoportal for the Republic of Bashkortostan.
Bade, Richard; Bijlsma, Lubertus; Miller, Thomas H; Barron, Leon P; Sancho, Juan Vicente; Hernández, Felix
2015-12-15
The recent development of broad-scope high resolution mass spectrometry (HRMS) screening methods has resulted in a much improved capability for new compound identification in environmental samples. However, positive identifications at the ng/L concentration level rely on analytical reference standards for chromatographic retention time (tR) and mass spectral comparisons. Chromatographic tR prediction can play a role in increasing confidence in suspect screening efforts for new compounds in the environment, especially when standards are not available, but reliable methods are lacking. The current work focuses on the development of artificial neural networks (ANNs) for tR prediction in gradient reversed-phase liquid chromatography, applied along with HRMS data to suspect screening of wastewater and environmental surface water samples. Based on a compound tR dataset of >500 compounds, an optimized 4-layer back-propagation multi-layer perceptron model enabled predictions for 85% of all compounds to within 2 min of their measured tR for training (n=344) and verification (n=100) datasets. To evaluate the ANN's ability to generalize to new data, the model was further tested using 100 randomly selected compounds and revealed 95% prediction accuracy within the 2-minute elution interval. Given the increasing concern about the presence of drug metabolites and other transformation products (TPs) in the aquatic environment, the model was applied along with HRMS data for preliminary identification of pharmaceutically-related compounds in real samples. Examples of compounds where reference standards were subsequently acquired and later confirmed are also presented. To our knowledge, this work presents for the first time the successful application of an accurate retention time predictor and HRMS data-mining using the largest number of compounds to preliminarily identify new or emerging contaminants in wastewater and surface waters. Copyright © 2015 Elsevier B.V. All rights reserved.
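A back-propagation MLP for tR prediction can be sketched in a few lines (a toy with two synthetic descriptors and an invented target function standing in for the paper's >500-compound dataset; the real model was a 4-layer network over many molecular descriptors):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (300, 2))           # two mock molecular descriptors
y = 5.0 + 4.0 * X[:, 0] + 2.0 * X[:, 1] ** 2   # synthetic retention times (min)
y_mean = y.mean()
yc = y - y_mean                                # centre the target for stable training

# one hidden tanh layer trained by plain batch gradient descent on 0.5*MSE
W1 = 0.5 * rng.standard_normal((2, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    Hid = np.tanh(X @ W1 + b1)
    pred = (Hid @ W2 + b2).ravel()
    err = pred - yc                            # dL/dpred
    gW2 = Hid.T @ err[:, None] / len(y); gb2 = np.array([err.mean()])
    dHid = (err[:, None] @ W2.T) * (1.0 - Hid ** 2)   # back-propagate through tanh
    gW1 = X.T @ dHid / len(y); gb1 = dHid.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

t_pred = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel() + y_mean
within_2min = np.mean(np.abs(t_pred - y) < 2.0)   # fraction inside the paper's 2-min window
```

The 2-minute window used here mirrors the paper's accuracy criterion; in practice the descriptors would be computed from molecular structure and the held-out split would drive the generalization check.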
Ribéreau-Gayon, Agathe; Rando, Carolyn; Schuliar, Yves; Chapenoire, Stéphane; Crema, Enrico R; Claes, Julien; Seret, Bernard; Maleret, Vincent; Morgan, Ruth M
2017-03-01
Accurate determination of the origin and timing of trauma is key in medicolegal investigations when the cause and manner of death are unknown. However, distinguishing between criminal and accidental perimortem trauma and postmortem modifications can be challenging when facing unidentified trauma. Postmortem examination of the immersed victims of the Yemenia airplane crash (Comoros, 2009) demonstrated the challenges in diagnosing extensive unusual circular lesions found on the corpses. The objective of this study was to identify the origin and timing of occurrence (peri- or postmortem) of the lesions. A retrospective multidisciplinary study using autopsy reports (n = 113) and postmortem digital photos (n = 3,579) was conducted. Of the 113 victims recovered from the crash, 62 (54.9%) presented unusual lesions (n = 560), with a median number of 7 (IQR 3-13) and a maximum of 27 per corpse. The majority of lesions were elliptic (58%) and had an area smaller than 10 cm² (82.1%). Some lesions (6.8%) also showed clear tooth notches on their edges. These findings identified most of the lesions as consistent with postmortem bite marks from cookiecutter sharks (Isistius spp.). This suggests that cookiecutter sharks were important agents in the degradation of the corpses and thus introduced potential cognitive bias into the investigation of the cause and manner of death. A novel set of evidence-based identification criteria for cookiecutter bite marks on human bodies is developed to facilitate more accurate medicolegal diagnosis of cookiecutter bites.
Therapy Provider Phase Information
U.S. Department of Health & Human Services — The Therapy Provider Phase Information dataset is a tool for providers to search by their National Provider Identifier (NPI) number to determine their phase for...
Directory of Open Access Journals (Sweden)
Tiessen Axel
2012-02-01
Full Text Available Abstract Background The sizes of proteins are relevant to their biochemical structure and biological function. The statistical distribution of protein lengths across a diverse set of taxa can provide hints about the evolution of proteomes. Results Using the full genomic sequences of 1,302 prokaryotic and 140 eukaryotic species, two datasets containing 1.2 and 6.1 million proteins were generated and analyzed statistically. The lengthwise distribution of proteins can be roughly described with a gamma-type or log-normal model, depending on the species. However, the shape parameter of the gamma model does not have a fixed value of 2, as previously suggested, but varies between 1.5 and 3 in different species. A gamma model with unrestricted shape parameter described the distributions best in ~48% of the species, whereas the log-normal distribution described the observed protein sizes better in 42% of the species. The restricted gamma function and the sum-of-exponentials distribution fitted better in only ~5% of the species. Eukaryotic proteins have an average size of 472 aa, whereas bacterial (320 aa) and archaeal (283 aa) proteins are significantly smaller (33-40% on average). Average protein sizes in different phylogenetic groups were: Alveolata (628 aa), Amoebozoa (533 aa), Fornicata (543 aa), Placozoa (453 aa), Eumetazoa (486 aa), Fungi (487 aa), Stramenopila (486 aa), Viridiplantae (392 aa). Amino acid composition is biased according to protein size. Protein length correlated negatively with %C, %M, %K, %F, %R, %W, %Y and positively with %D, %E, %Q, %S and %T. Prokaryotic proteins had a different protein size bias for %E, %G, %K and %M compared with eukaryotes. Conclusions Mathematical modeling of empirical protein length distributions can be used to assess the quality of small-ORF annotation in genomic releases (detection of too many false-positive small ORFs). There is a negative correlation between average protein size and total number of
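The gamma shape parameter discussed above can be estimated quickly by the method of moments (a sketch on synthetic lengths, since the paper's proteome data are not reproduced here; shape 2.5 and scale 180 are invented values inside the reported ranges):

```python
import numpy as np

rng = np.random.default_rng(42)
# synthetic "protein lengths" from a gamma law: mean = shape * scale = 450 aa
lengths = rng.gamma(shape=2.5, scale=180.0, size=50000)

# method-of-moments fit: mean = k*theta, variance = k*theta^2
m, v = lengths.mean(), lengths.var()
shape_hat = m * m / v      # estimated shape k
scale_hat = v / m          # estimated scale theta
```

Comparing such a fitted shape against the fixed value 2 is essentially the test the paper performs per species; a full analysis would use maximum likelihood and compare against a log-normal fit.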
Nakayama, Noriyuki; Yano, Hirohito; Egashira, Yusuke; Enomoto, Yukiko; Ohe, Naoyuki; Kanemura, Nobuhiro; Kitagawa, Junichi; Iwama, Toru
2018-01-01
Commercially available fibrin glue (Com-FG), which is used commonly worldwide, is produced with pooled human plasma from multiple donors. However, it contains added bovine aprotinin, which carries risks of infection, allogenic immunity, and allergic reactions. We evaluated the efficacy, reliability, and safety of completely autologous fibrin glue (CAFG). From August 2014 to February 2016, prospective data were collected and analyzed from 153 patients. CAFG was prepared with the CryoSeal System using autologous blood and was applied during neurosurgical procedures. Using CAFG-soaked oxidized regenerated cellulose and/or polyglycolic acid sheets, we performed pinpoint hemostasis, transposed the offending vessels in a microvascular decompression, and covered the dural incision to prevent cerebrospinal fluid leakage. The CryoSeal System generated a mean of 4.51 mL (range, 3.0-8.4 mL) of CAFG from 400 mL of autologous blood. Com-FG products were not used in our procedures. Only 6 patients required an additional allogeneic blood transfusion. The hemostatic effective rate was 96.1% (147 of 153 patients). Only 1 patient, who received transsphenoidal surgery for a pituitary adenoma, presented with the complication of delayed postoperative cerebrospinal fluid leakage (0.65%). No patient developed allergic reactions or systemic complications associated with the use of CAFG. CAFG effectively provides hemostatic, adhesive, and safety performance. The timing of solidification and the three-dimensional shaping of CAFG-soaked oxidized regenerated cellulose and/or polyglycolic acid sheets can be controlled thanks to slow fibrin formation. The cost of preparing CAFG is similar to that of Com-FG products, so it can easily be used at most institutions. Copyright © 2017 Elsevier Inc. All rights reserved.
Medicare Provider Data - Hospice Providers
U.S. Department of Health & Human Services — The Hospice Utilization and Payment Public Use File provides information on services provided to Medicare beneficiaries by hospice providers. The Hospice PUF...
Hyperreal Numbers for Infinite Divergent Series
Bartlett, Jonathan
2018-01-01
Treating divergent series properly has been an ongoing issue in mathematics. However, many of the problems with divergent series stem from the fact that they were discovered before a number system existed that could handle them. The infinities that resulted from divergent series led to contradictions within the real number system, but these contradictions are largely alleviated by the hyperreal number system. Hyperreal numbers provide a framework for dealing with divergent series...
Koninck, Jean-Marie De
2009-01-01
Who would have thought that listing the positive integers along with their most remarkable properties could end up being such an engaging and stimulating adventure? The author uses this approach to explore elementary and advanced topics in classical number theory. A large variety of numbers are contemplated: Fermat numbers, Mersenne primes, powerful numbers, sublime numbers, Wieferich primes, insolite numbers, Sastry numbers, voracious numbers, to name only a few. The author also presents short proofs of miscellaneous results and constantly challenges the reader with a variety of old and new n
Gaming the Law of Large Numbers
Hoffman, Thomas R.; Snapp, Bart
2012-01-01
Many view mathematics as a rich and wonderfully elaborate game. In turn, games can be used to illustrate mathematical ideas. Fibber's Dice, an adaptation of the game Liar's Dice, is a fast-paced game that rewards gutsy moves and favors the underdog. It also brings to life concepts arising in the study of probability. In particular, Fibber's Dice…
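The probability concept the game brings to life can be demonstrated directly. The sketch below does not simulate Fibber's Dice itself; it simply rolls a fair die repeatedly and shows the sample mean settling toward the expected value of 3.5, which is the law of large numbers at work.

```python
import random

def running_mean_of_rolls(n_rolls, seed=42):
    """Roll a fair six-sided die n_rolls times; return the sample mean."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_rolls):
        total += rng.randint(1, 6)
    return total / n_rolls

# The sample mean approaches the expected value 3.5 as rolls accumulate.
for n in (10, 1000, 100000):
    print(n, running_mean_of_rolls(n))
```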
Indian Academy of Sciences (India)
Admin
Triangular number, figurate number, rangoli, Brahmagupta–Pell equation, Jacobi triple product identity.
Figure 1. The first four triangular numbers.
Anuradha S Garge completed her PhD from Pune University in 2008 under the supervision of Prof. S A Katre. Her research interests include K-theory and number theory.
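The link between triangular numbers and the Brahmagupta–Pell equation mentioned in the keywords can be made concrete: asking which triangular numbers T_n = n(n+1)/2 are perfect squares m^2 leads to the Pell-type equation (2n+1)^2 − 8m^2 = 1. A brute-force sketch:

```python
import math

def triangular(n):
    """n-th triangular number T_n = n(n+1)/2."""
    return n * (n + 1) // 2

def square_triangular(limit):
    """Triangular numbers T_1..T_limit that are also perfect squares.
    T_n = m^2 is equivalent to (2n+1)^2 - 8m^2 = 1, a Pell-type
    (Brahmagupta-Pell) equation, which is why it enters the story."""
    out = []
    for n in range(1, limit + 1):
        t = triangular(n)
        r = math.isqrt(t)
        if r * r == t:
            out.append(t)
    return out

print([triangular(n) for n in range(1, 5)])  # [1, 3, 6, 10]
print(square_triangular(300))  # [1, 36, 1225, 41616]
```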
Directory of Open Access Journals (Sweden)
Schwarzweller Christoph
2015-02-01
Full Text Available In this article we introduce Proth numbers and prove two theorems on such numbers being prime [3]. We also give revised versions of Pocklington’s theorem and of the Legendre symbol. Finally, we prove Pepin’s theorem and that the fifth Fermat number is not prime.
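Proth's theorem and Pepin's test lend themselves to a short computational sketch. This is an illustration in Python, not the Mizar formalization the article describes; the witness range is an arbitrary choice.

```python
def is_proth(n):
    """Proth number: n = k * 2^m + 1 with k odd and k < 2^m."""
    if n < 3 or n % 2 == 0:
        return False
    k, m = n - 1, 0
    while k % 2 == 0:
        k //= 2
        m += 1
    return k < 2 ** m

def proth_prime(n, witnesses=range(2, 50)):
    """Proth's theorem: a Proth number n is prime iff some a satisfies
    a^((n-1)/2) = -1 (mod n).  Failing every small witness strongly
    suggests (but, over a finite range, does not prove) compositeness."""
    if not is_proth(n):
        raise ValueError("not a Proth number")
    return any(pow(a, (n - 1) // 2, n) == n - 1 for a in witnesses)

print([n for n in range(3, 100) if is_proth(n)])
# Pepin's test is the special case for Fermat numbers with witness 3:
# F5 = 2^32 + 1 is a Proth number, and 3^((F5-1)/2) != -1 (mod F5)
# shows it is composite (F5 = 641 * 6700417).
F5 = 2 ** 32 + 1
print(proth_prime(F5))  # False
```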
Mendonça, J. Ricardo G.
2012-01-01
We define a new class of numbers based on the first occurrence of certain patterns of zeros and ones in the expansion of irrational numbers in a given base, and call them Sagan numbers, since they were first mentioned, in a special case, by the North American astronomer Carl E. Sagan in his science-fiction novel "Contact." Sagan numbers hold connections with a wealth of mathematical ideas. We describe some properties of the newly defined numbers and indicate directions for further amusement.
Mojola, Sanyu A
2014-01-01
This paper draws on ethnographic and interview based fieldwork to explore accounts of intimate relationships between widowed women and poor young men that emerged in the wake of economic crisis and a devastating HIV epidemic among the Luo ethnic group in Western Kenya. I show how the cooptation of widow inheritance practices in the wake of an overwhelming number of widows as well as economic crisis resulted in widows becoming providing women and poor young men becoming kept men. I illustrate how widows in this setting, by performing a set of practices central to what it meant to be a man in this society – pursuing and providing for their partners - were effectively doing masculinity. I will also show how young men, rather than being feminized by being kept, deployed other sets of practices to prove their masculinity and live in a manner congruent with cultural ideals. I argue that ultimately, women’s practice of masculinity in large part seemed to serve patriarchal ends. It not only facilitated the fulfillment of patriarchal expectations of femininity – to being inherited – but also served, in the end, to provide a material base for young men’s deployment of legitimizing and culturally valued sets of masculine practice. PMID:25489121
Directory of Open Access Journals (Sweden)
Kopij Grzegorz
2017-12-01
Full Text Available During the years 1994–2009, the number of White Stork pairs breeding in the city of Wrocław (293 km2) fluctuated between 5 pairs in 1999 and 19 pairs in 2004. Most nests were clumped in two sites in the Odra river valley. Two nests were located only ca. 1 km from the city hall. The fluctuations in numbers can be linked to the availability of feeding grounds and weather. In years when grass was mowed in the Odra valley, the number of White Storks was higher than in years when the grass was left unattended. Overall, the mean number of fledglings per successful pair during the years 1995–2009 was slightly higher in the rural than in the urban area. Contrary to expectation, the mean number of fledglings per successful pair was highest in the year of highest population density. In two rural counties adjacent to Wrocław, the number of breeding pairs was similar to that in the city in 1994/95 (15 vs. 13 pairs). However, in 2004 the number of breeding pairs in the city almost doubled compared to that in the neighboring counties (10 vs. 19 pairs). After a sharp decline between 2004 and 2008, populations in both areas were similar in 2009 (5 vs. 4 pairs), but much lower than in 1994–1995. Wrocław is probably the only large city (>100,000 people) in Poland where the White Stork has developed a sizeable, although fluctuating, breeding population. One of the most powerful roles city-nesting White Storks may play is engaging citizens directly with nature, thereby facilitating environmental education and awareness.
Petersen, T Kyle
2015-01-01
This text presents the Eulerian numbers in the context of modern enumerative, algebraic, and geometric combinatorics. The book first studies Eulerian numbers from a purely combinatorial point of view, then embarks on a tour of how these numbers arise in the study of hyperplane arrangements, polytopes, and simplicial complexes. Some topics include a thorough discussion of gamma-nonnegativity and real-rootedness for Eulerian polynomials, as well as the weak order and the shard intersection order of the symmetric group. The book also includes a parallel story of Catalan combinatorics, wherein the Eulerian numbers are replaced with Narayana numbers. Again there is a progression from combinatorics to geometry, including discussion of the associahedron and the lattice of noncrossing partitions. The final chapters discuss how both the Eulerian and Narayana numbers have analogues in any finite Coxeter group, with many of the same enumerative and geometric properties. There are four supplemental chapters throughout, ...
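The Eulerian numbers the book studies count permutations of 1..n by number of descents and satisfy the classical recurrence A(n,k) = (k+1)A(n−1,k) + (n−k)A(n−1,k−1), with each row summing to n!. A minimal sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def eulerian(n, k):
    """Eulerian number A(n, k): permutations of 1..n with k descents,
    via the recurrence A(n,k) = (k+1)A(n-1,k) + (n-k)A(n-1,k-1)."""
    if k < 0 or k >= n:
        return 1 if (n == 0 and k == 0) else 0
    if n == 1:
        return 1 if k == 0 else 0
    return (k + 1) * eulerian(n - 1, k) + (n - k) * eulerian(n - 1, k - 1)

print([eulerian(4, k) for k in range(4)])  # [1, 11, 11, 1]
print(sum(eulerian(5, k) for k in range(5)))  # 120 = 5!
```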
Indian Academy of Sciences (India)
Transfinite Numbers. What is Infinity? S M Srivastava. In a series of revolutionary articles written during the last quarter of the nineteenth century, the great German mathematician Georg Cantor removed the age-old mistrust of infinity and created an exceptionally beautiful and useful theory of transfinite numbers. This is.
Energy Technology Data Exchange (ETDEWEB)
Arkesteijn, L; Van Huis, G; Reckman, E
1987-01-01
The Dutch government aims to realize a wind power capacity in The Netherlands of 1000 MW in the year 2000. Environmental impacts of the erection of a large number of 200 kW and 1 MW wind turbines are studied. Four siting models have been developed in which attention is paid to environmental and economic aspects, the possibilities to introduce the electric power into the national power grid and the availability and reliability of enough wind. Noise pollution and danger for birds are to be avoided. The choice between the construction of wind parks where a number of wind turbines is concentrated in a small area or a more dispersed construction is somewhat difficult if all relevant factors are to be taken into consideration. Without government's interference the target of 1000 MW in the year 2000 will probably not be attained. It is therefore desirable to practise an active energy policy in favor of wind energy, for which many ways are possible.
Ji, Caleb; Khovanova, Tanya; Park, Robin; Song, Angela
2015-01-01
In this paper, we consider a game played on a rectangular $m \\times n$ gridded chocolate bar. Each move, a player breaks the bar along a grid line. Each move after that consists of taking any piece of chocolate and breaking it again along existing grid lines, until just $mn$ individual squares remain. This paper enumerates the number of ways to break an $m \\times n$ bar, which we call chocolate numbers, and introduces four new sequences related to these numbers. Using various techniques, we p...
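The enumeration can be sketched with the standard interleaving argument: after the first break splits a piece into two smaller pieces, the remaining breaks on the two pieces can be shuffled together independently. The recurrence below is a reconstruction of that counting idea, not necessarily the exact technique used in the paper.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def chocolate(m, n):
    """Number of ways to fully break an m x n bar along grid lines
    (a 'chocolate number').  A first break leaving a piece of p squares
    (needing p - 1 further breaks) lets the remaining mn - 2 breaks be
    interleaved in comb(mn - 2, p - 1) ways."""
    if m * n == 1:
        return 1
    total = 0
    for i in range(1, m):  # vertical cuts into i x n and (m - i) x n
        total += comb(m * n - 2, i * n - 1) * chocolate(i, n) * chocolate(m - i, n)
    for j in range(1, n):  # horizontal cuts into m x j and m x (n - j)
        total += comb(m * n - 2, m * j - 1) * chocolate(m, j) * chocolate(m, n - j)
    return total

print([chocolate(1, n) for n in range(1, 5)])  # [1, 1, 2, 6] = (n-1)!
print(chocolate(2, 2))  # 4
```

For a 1 × n strip the n − 1 cuts are independent, so the count reduces to (n − 1)!, a handy sanity check on the recurrence.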
Neutrino number of the universe
International Nuclear Information System (INIS)
Kolb, E.W.
1981-01-01
The influence of grand unified theories on the lepton number of the universe is reviewed. A scenario is presented for the generation of a large (>> 1) lepton number and a small (<< 1) baryon number. 15 references
Number names and number understanding
DEFF Research Database (Denmark)
Ejersbo, Lisser Rye; Misfeldt, Morten
2014-01-01
This paper concerns the results from the first year of a three-year research project on the relationship between Danish number names and their corresponding digits in the canonical base-10 system. The project aims to develop a system to help students understand the base-10 system… the Danish number names are more complicated than in other languages. Keywords: a research project in grades 0 and 1 in a Danish school, base-10 system, two-digit number names, semiotic and cognitive perspectives.
Templates, Numbers & Watercolors.
Clemesha, David J.
1990-01-01
Describes how a second-grade class used large templates to draw and paint five-digit numbers. The lesson integrated artistic knowledge and vocabulary with their mathematics lesson in place value. Students learned how draftspeople use templates, and they studied number paintings by Charles Demuth and Jasper Johns. (KM)
Driscoll, Daniel G.; Bunkers, Matthew J.; Carter, Janet M.; Stamm, John F.; Williamson, Joyce E.
2010-01-01
The Black Hills area of western South Dakota has a history of damaging flash floods that have resulted primarily from exceptionally strong rain-producing thunderstorms. The best known example is the catastrophic storm system of June 9-10, 1972, which caused severe flooding in several major drainages near Rapid City and resulted in 238 deaths. More recently, severe thunderstorms caused flash flooding near Piedmont and Hermosa on August 17, 2007. Obtaining a thorough understanding of peak-flow characteristics for low-probability floods will require a comprehensive long-term approach involving (1) documentation of scientific information for extreme events such as these; (2) long-term collection of systematic peak-flow records; and (3) regional assessments of a wide variety of peak-flow information. To that end, the U.S. Geological Survey cooperated with the South Dakota Department of Transportation and National Weather Service to produce this report, which provides documentation regarding the August 17, 2007, storm and associated flooding and provides a context through examination of other large storm and flood events in the Black Hills area. The area affected by the August 17, 2007, storms and associated flooding generally was within the area affected by the larger storm of June 9-10, 1972. The maximum observed 2007 precipitation totals of between 10.00 and 10.50 inches occurred within about 2-3 hours in a small area about 5 miles west of Hermosa. The maximum documented precipitation amount in 1972 was 15.0 inches, and precipitation totals of 10.0 inches or more were documented for 34 locations within an area of about 76 square miles. A peak flow of less than 1 cubic foot per second occurred upstream from the 2007 storm extent for streamflow-gaging station 06404000 (Battle Creek near Keystone); whereas, the 1972 peak flow of 26,200 cubic feet per second was large, relative to the drainage area of only 58.6 square miles. Farther downstream along Battle Creek, a 2007
Indian Academy of Sciences (India)
this is a characteristic difference between finite and infinite sets and created an immensely useful branch of mathematics based on this idea which had a great impact on the whole of mathematics. For example, the question of what is a number (finite or infinite) is almost a philosophical one. However Cantor's work turned it ...
Chon, Yongho
2013-01-01
One of the main reasons for reforming long-term care systems is a deficient existing service infrastructure for the elderly. This article provides an overview of why and how the Korean government expanded long-term care infrastructure through the introduction of a new compulsory insurance system, with a particular focus on the market-friendly policies used to expand the infrastructure. Then, the positive results of the expansion of the long-term care infrastructure and the challenges that have emerged are examined. Finally, it is argued that the Korean government should actively implement a range of practical policies and interventions within the new system.
International Nuclear Information System (INIS)
Todorov, T.D.
1980-01-01
The set of asymptotic numbers A, as a system of generalized numbers including the system of real numbers R as well as infinitely small (infinitesimal) and infinitely large numbers, is introduced. The detailed algebraic properties of A, which are unusual compared with known algebraic structures, are studied. It is proved that the set of asymptotic numbers A cannot be isomorphically embedded as a subspace in any group, ring or field, but some particular subsets of asymptotic numbers are shown to be groups, rings, and fields. The algebraic operations, their additive and multiplicative forms, and the algebraic properties are constructed in an appropriate way. It is shown that the asymptotic numbers give rise to a new type of generalized functions quite analogous to the distributions of Schwartz, allowing, however, the operation of multiplication. A possible application of these functions to quantum theory is discussed
Varadhan, S R S
2016-01-01
The theory of large deviations deals with rates at which probabilities of certain events decay as a natural parameter in the problem varies. This book, which is based on a graduate course on large deviations at the Courant Institute, focuses on three concrete sets of examples: (i) diffusions with small noise and the exit problem, (ii) large time behavior of Markov processes and their connection to the Feynman-Kac formula and the related large deviation behavior of the number of distinct sites visited by a random walk, and (iii) interacting particle systems, their scaling limits, and large deviations from their expected limits. For the most part the examples are worked out in detail, and in the process the subject of large deviations is developed. The book will give the reader a flavor of how large deviation theory can help in problems that are not posed directly in terms of large deviations. The reader is assumed to have some familiarity with probability, Markov processes, and interacting particle systems.
Wu, Liang; Ehlin-Henriksson, Barbro; Zhou, Xiaoying; Zhu, Hong; Ernberg, Ingemar; Kis, Lorand L; Klein, George
2017-12-01
Diffuse large B-cell lymphoma (DLBCL), the most common type of malignant lymphoma, accounts for 30% of adult non-Hodgkin lymphomas. Epstein-Barr virus (EBV)-positive DLBCL of the elderly is a newly recognized subtype that accounts for 8-10% of DLBCLs in Asian countries, but is less common in Western populations. Five DLBCL-derived cell lines were employed to characterize patterns of EBV latent gene expression, as well as response to cytokines and chemotaxis. Interleukin-4 and interleukin-21 modified LMP1, EBNA1 and EBNA2 expression depending on cell phenotype and type of EBV latent programme (type I, II or III). These cytokines also affected CXCR4- or CCR7-mediated chemotaxis in two of the cell lines, Farage (type III) and Val (type II). Further, we investigated the effect of EBV by using dominant-negative EBV nuclear antigen 1 (dnEBNA1) to eliminate EBV genomes. This resulted in decreased chemotaxis. By employing an alternative way to eliminate EBV genomes, Roscovitine, we show an increase in apoptosis in the EBV-positive lines. These results show that EBV plays an important role in EBV-positive DLBCL lines with regard to survival and chemotactic response. Our findings provide evidence for the impact of the microenvironment on EBV-carrying DLBCL cells and might have therapeutic implications. © 2017 John Wiley & Sons Ltd.
Provider software buyer's guide.
1994-03-01
To help long term care providers find new ways to improve quality of care and efficiency, Provider magazine presents the fourth annual listing of software firms marketing computer programs for all areas of nursing facility operations. On the following five pages, more than 80 software firms display their wares, with programs such as minimum data set and care planning, dietary, accounting and financials, case mix, and medication administration records. The guide also charts compatible hardware, integration ability, telephone numbers, company contacts, and easy-to-use reader service numbers.
High Reynolds Number Turbulence
National Research Council Canada - National Science Library
Smits, Alexander J
2007-01-01
The objectives of the grant were to provide a systematic study to fill the gap between existing research on low Reynolds number turbulent flows to the kinds of turbulent flows encountered on full-scale vehicles...
The MIXMAX random number generator
Savvidy, Konstantin G.
2015-11-01
In this paper, we study the randomness properties of unimodular matrix random number generators. Under well-known conditions, these discrete-time dynamical systems have the highly desirable K-mixing properties which guarantee high-quality random numbers. It is found that some widely used random number generators have poor Kolmogorov entropy and consequently fail empirical tests of randomness. These tests show that the lowest acceptable value of the Kolmogorov entropy is around 50. Next, we provide a solution to the problem of determining the maximal period of unimodular matrix generators of pseudo-random numbers. We formulate the necessary and sufficient condition to attain the maximum period and present a family of specific generators in the MIXMAX family with superior performance and excellent statistical properties. Finally, we construct three efficient algorithms for operations with the MIXMAX matrix, which is a multi-dimensional generalization of the famous cat map: the first computes multiplication by the MIXMAX matrix in O(N) operations; the second recursively computes its characteristic polynomial in O(N^2) operations; and the third applies skips of a large number of steps S to the sequence in O(N^2 log(S)) operations.
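The matrix-recursion idea behind such generators can be illustrated with the cat map the abstract mentions. The sketch below uses the 2×2 Arnold cat map over Z_p as a toy analogue only; the real MIXMAX generator uses a much larger N×N matrix over the field with p = 2^61 − 1, and none of its actual parameters appear here.

```python
def cat_map_step(state, p):
    """One step of a toy unimodular-matrix generator: the 2-D Arnold
    cat map [[2, 1], [1, 1]] (determinant 1) acting on Z_p x Z_p."""
    x, y = state
    return ((2 * x + y) % p, (x + y) % p)

def period(seed, p):
    """Length of the orbit of `seed` (nonzero) under the map."""
    state = cat_map_step(seed, p)
    steps = 1
    while state != seed:
        state = cat_map_step(state, p)
        steps += 1
    return steps

# Finding the period of the generator for a tiny modulus; the MIXMAX
# papers solve the analogous maximal-period problem for huge matrices.
print(period((1, 0), 5))  # 10
```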
Diamond, Harold G; Cheung, Man Ping
2016-01-01
"Generalized numbers" is a multiplicative structure introduced by A. Beurling to study how independent prime number theory is from the additivity of the natural numbers. The results and techniques of this theory apply to other systems having the character of prime numbers and integers; for example, it is used in the study of the prime number theorem (PNT) for ideals of algebraic number fields. Using both analytic and elementary methods, this book presents many old and new theorems, including several of the authors' results, and many examples of extremal behavior of g-number systems. Also, the authors give detailed accounts of the L^2 PNT theorem of J. P. Kahane and of the example created with H. L. Montgomery, showing that additive structure is needed for proving the Riemann hypothesis. Other interesting topics discussed are propositions "equivalent" to the PNT, the role of multiplicative convolution and Chebyshev's prime number formula for g-numbers, and how Beurling theory provides an interpretation of the ...
Talking probabilities: communicating probabilistic information with words and numbers
Renooij, S.; Witteman, C.L.M.
1999-01-01
The number of knowledge-based systems that build on Bayesian belief networks is increasing. The construction of such a network, however, requires a large number of probabilities in numerical form. This is often considered a major obstacle, one of the reasons being that experts are reluctant to provide
Quantum random number generator
Pooser, Raphael C.
2016-05-10
A quantum random number generator (QRNG) and a photon generator for a QRNG are provided. The photon generator may be operated in a spontaneous mode below a lasing threshold to emit photons. Photons emitted from the photon generator may have at least one random characteristic, which may be monitored by the QRNG to generate a random number. In one embodiment, the photon generator may include a photon emitter and an amplifier coupled to the photon emitter. The amplifier may enable the photon generator to be used in the QRNG without introducing significant bias in the random number and may enable multiplexing of multiple random numbers. The amplifier may also desensitize the photon generator to fluctuations in power supplied thereto while operating in the spontaneous mode. In one embodiment, the photon emitter and amplifier may be a tapered diode amplifier.
Directory of Open Access Journals (Sweden)
Oli Brown
2008-10-01
Full Text Available Estimates of the potential number of ‘climate change migrants’ vary hugely. In order to persuade policymakers of the need to act and to provide a sound basis for appropriate responses, there is an urgent need for better analysis, better data and better predictions.
Directory of Open Access Journals (Sweden)
R. A. Mollin
1986-01-01
Full Text Available A powerful number is a positive integer n satisfying the property that p^2 divides n whenever the prime p divides n; i.e., in the canonical prime decomposition of n, no prime appears with exponent 1. In [1], S.W. Golomb introduced and studied such numbers. In particular, he asked whether (25, 27) is the only pair of consecutive odd powerful numbers. This question was settled in [2] by W.A. Sentance, who gave necessary and sufficient conditions for the existence of such pairs. The first result of this paper is a generalization of Sentance's result, giving necessary and sufficient conditions for the existence of pairs of powerful numbers spaced evenly apart. This result leads us naturally to consider integers which are representable as a proper difference of two powerful numbers, i.e. n = p1 − p2 where p1 and p2 are powerful numbers with gcd(p1, p2) = 1. Golomb (op. cit.) conjectured that 6 is not a proper difference of two powerful numbers, and that there are infinitely many numbers which cannot be represented as a proper difference of two powerful numbers. The antithesis of this conjecture was proved by W.L. McDaniel [3], who verified that every non-zero integer is in fact a proper difference of two powerful numbers in infinitely many ways. McDaniel's proof is essentially an existence proof. The second result of this paper is a simpler proof of McDaniel's result as well as an effective algorithm (in the proof) for explicitly determining infinitely many such representations. However, in both our proof and McDaniel's proof, one of the powerful numbers is almost always a perfect square (namely, one is always a perfect square when n ≢ 2 (mod 4)). We provide in §2 a proof that all even integers are representable in infinitely many ways as a proper nonsquare difference, i.e., a proper difference of two powerful numbers neither of which is a perfect square. This, in conjunction with the odd case in [4], shows that every integer is representable in
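A powerful-number check by trial factorization makes the definition concrete, and quickly exhibits Golomb's pair (25, 27) of consecutive odd powerful numbers:

```python
def is_powerful(n):
    """n is powerful iff every prime in its factorization appears
    with exponent >= 2 (vacuously true for n = 1)."""
    if n < 1:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            count = 0
            while n % d == 0:
                n //= d
                count += 1
            if count < 2:
                return False
        d += 1
    return n == 1  # a leftover prime factor would have exponent 1

powerful = [n for n in range(1, 101) if is_powerful(n)]
print(powerful)  # [1, 4, 8, 9, 16, 25, 27, 32, 36, 49, 64, 72, 81, 100]
print(is_powerful(25) and is_powerful(27))  # True: consecutive odd pair
```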
International Nuclear Information System (INIS)
Hellmann, U.
1992-01-01
Participation of the public in licensing procedures for large-scale projects has been an item of discussion since the sixties in the legal sciences and on the political level. The introduction of the environmental impact assessment (EIA) as a legal requirement in EC law and its implementation in practice was the occasion to once again investigate the principle of participation of the public in the current legal framework. The study in hand reviews the legal provisions found in administrative law, constitutional law and European Community law governing the right of participation of the public and also takes a look at the situation in practice. The results show both the legal status and conditions of enforcement as prevailing after the coming into force in 1989 of the Act on Performance of an EIA, as well as inadequacies and deficits in the current legal framework. (orig.) [de
Earthquake number forecasts testing
Kagan, Yan Y.
2017-10-01
We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are especially required to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for the catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be shown to be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study upper statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values for the skewness and the kurtosis increase for the smaller magnitude threshold and increase with even greater intensity for small temporal subdivision of catalogues. The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness
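The overdispersion argument can be sketched numerically: sample a negative binomial as a gamma-Poisson mixture and check that the sample variance exceeds the sample mean (for a Poisson they would be equal). The parameter values below are illustrative, not fitted to any earthquake catalogue.

```python
import math
import random

def sample_negative_binomial(r, p, rng):
    """Draw from NB(r, p) via the gamma-Poisson mixture: a Poisson
    count whose rate is Gamma(shape=r, scale=(1-p)/p)."""
    lam = rng.gammavariate(r, (1 - p) / p)
    # Knuth's Poisson sampler (adequate for modest rates)
    threshold = math.exp(-lam)
    k, prod = 0, rng.random()
    while prod > threshold:
        k += 1
        prod *= rng.random()
    return k

rng = random.Random(7)
counts = [sample_negative_binomial(2.0, 0.4, rng) for _ in range(20000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# A Poisson fit forces var == mean; var > mean (overdispersion, here
# roughly 7.5 vs 3.0) is the signature favouring the negative binomial.
print(round(mean, 2), round(var, 2), var > mean)
```

The second NB parameter is exactly what absorbs the clustering that the one-parameter Poisson cannot represent.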
DEFF Research Database (Denmark)
Korsby, Trine Mygind
2017-01-01
Taking a point of departure in negotiations for access to a phone number for a brothel abroad, the article demonstrates how a group of pimps in Eastern Romania attempt to extend their local business into the rest of the EU. The article shows how the phone number works as a micro-infrastructure in… …in turn cultivate and maximize uncertainty about themselves in others. When making the move to go abroad into unknown terrains, accessing the infrastructure generated by the phone number can provide certainty and consolidate one’s position within criminal networks abroad. However, at the same time, mishandling the phone number can be dangerous and in that sense produce new doubts and uncertainties.
Energy Technology Data Exchange (ETDEWEB)
Paquette, S.; Bonin, H.W.; Baskin, M.; Bowen, K.; Switzer, Z., E-mail: Stephane.Paquette@rmc.ca [Royal Military College of Canada, Dept. of Chemistry and Chemical Engineering, Kingston, Ontario (Canada)
2014-07-01
A feasibility study of a power plant using the Super Near Boiling 25 MWt (SNB25) nuclear reactor as a heat source and capable of supporting the electrical and thermal requirements for a base the size of Canadian Forces Base (CFB) Kingston in the Arctic was carried out. Such a power plant would allow the Canadian Armed Forces (CAF) to have a self-sustaining operational base in the Arctic to conduct Search and Rescue (SAR) and sovereignty missions. The thermal and electrical requirements for a base the size of CFB Kingston are determined to be 31.63 MWt and 7.16 MWe, respectively. Using the Heating Degree Days (HDD) approach to account for temperature differences between Southern Ontario and the Arctic, a base the size of CFB Kingston in the Arctic would require 75.16 MWt to operate. A chemical engineering software program, UniSim, was used to simulate the energy cycle of the base which consisted of a district heating loop to provide hot water and an Organic Rankine Cycle (ORC) using n-pentane as the working fluid to provide the electrical energy. The UniSim simulations determined that the cycle would use six shell and tube heat exchangers, two axial gas turbines coupled to generators, and twelve centrifugal pumps, in addition to a group of five SNB25 reactors that could provide 25.03 MWt and 2.63 MWe to a base in the Arctic with energy requirements about a third of those of CFB Kingston. The design foresees redundancy which is essential to safe operation in the Arctic. (author)
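The Heating Degree Days scaling used to translate the Kingston load into an Arctic requirement amounts to a simple ratio. The HDD figures in the sketch are hypothetical placeholders, not values from the study; they are chosen only so the ratio reproduces the reported 31.63 → 75.16 MWt scaling (about 2.38×).

```python
def scale_heating_load(base_load_mwt, hdd_base, hdd_target):
    """Scale a thermal load by the ratio of heating degree days (HDD).
    HDD inputs here are illustrative assumptions, not study values."""
    return base_load_mwt * hdd_target / hdd_base

# Kingston-like vs. Arctic-like HDD (hypothetical values).
print(round(scale_heating_load(31.63, 4000, 9504), 1))
```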
International Nuclear Information System (INIS)
Malcolm J. Andrews
2006-01-01
This project had two major tasks: Task 1. The construction of a new air/helium facility to collect detailed measurements of Rayleigh-Taylor (RT) mixing at high Atwood number, and the distribution of these data to LLNL, LANL, and Alliance members for code validation and design purposes. Task 2. The collection of initial-condition data from the new air/helium facility, for use in validation of RT simulation codes at LLNL and LANL. This report describes work done in the last twelve (12) months of the project, and also contains a summary of the complete work done over the three (3) year life of the project. As of April 1, 2006, the air/helium facility (Task 1) is complete, and extensive testing and validation of diagnostics has been performed. The initial-condition studies (Task 2) are also complete. Detailed experiments with air/helium at Atwood numbers up to 0.1 have been completed, as have experiments at Atwood numbers of 0.25. Within the last three (3) months we have been able to run the facility successfully at Atwood numbers of 0.5. The progress matches the project plan, as does the budget. We have finished the initial-condition studies using the water channel, and this work has been accepted for publication in the Journal of Fluid Mechanics (the top fluid mechanics journal). Mr. Nick Mueschke and Mr. Wayne Kraft are continuing their studies toward PhDs in the same field, and will also continue their collaboration visits to LANL and LLNL. Over its three (3) year life the project has supported two (2) Ph.D.s and three (3) M.S. degrees, and produced nine (9) international journal publications, twenty-four (24) conference publications, and numerous other reports. The highlight of the project has been our close collaboration with LLNL (Dr. Oleg Schilling) and LANL (Drs. Dimonte, Ristorcelli, Gore, and Harlow)
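The Atwood numbers quoted above follow from the density contrast A = (ρ₁ − ρ₂)/(ρ₁ + ρ₂). The sketch below computes A for air against a helium-air mixture; the room-temperature densities are nominal assumptions, not values from the report.

```python
def atwood(rho_heavy, rho_light):
    """Atwood number A = (rho1 - rho2) / (rho1 + rho2)."""
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def helium_fraction_for_atwood(target_a, rho_air=1.204, rho_he=0.166):
    """Volume fraction of helium to mix into air so the pair
    (pure air, helium-air mixture) reaches a target Atwood number.
    Densities in kg/m^3 at roughly 20 C (nominal assumptions)."""
    rho_mix = rho_air * (1 - target_a) / (1 + target_a)
    return (rho_air - rho_mix) / (rho_air - rho_he)

print(round(atwood(1.204, 0.166), 3))  # pure air vs pure helium: 0.758
print(round(helium_fraction_for_atwood(0.5), 2))  # mix for A = 0.5
```

Pure air against pure helium gives A ≈ 0.76, which is why running the facility at A = 0.5 (a diluted helium stream) was already a substantial step up from the A = 0.1 and 0.25 experiments.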
DEFF Research Database (Denmark)
Levin, Bruce R; McCall, Ingrid C.; Perrot, Veronique
2017-01-01
We postulate that the inhibition of growth and low rates of mortality of bacteria exposed to ribosome-binding antibiotics deemed bacteriostatic can be attributed almost uniquely to these drugs reducing the number of ribosomes contributing to protein synthesis, i.e., the number of effective......-targeting bacteriostatic antibiotics, the time before these bacteria start to grow again when the drugs are removed, referred to as the post-antibiotic effect (PAE), is markedly greater for constructs with fewer rrn operons than for those with more rrn operons. We interpret the results of these other experiments reported...... here as support for the hypothesis that the reduction in the effective number of ribosomes due to binding to these structures provides a sufficient explanation for the action of bacteriostatic antibiotics that target these structures....
Investigating the Randomness of Numbers
Pendleton, Kenn L.
2009-01-01
The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…
Number Sense on the Number Line
Woods, Dawn Marie; Ketterlin Geller, Leanne; Basaraba, Deni
2018-01-01
A strong foundation in early number concepts is critical for students' future success in mathematics. Research suggests that visual representations, like a number line, support students' development of number sense by helping them create a mental representation of the order and magnitude of numbers. In addition, explicitly sequencing instruction…
Energy Technology Data Exchange (ETDEWEB)
Nelson, R.N. (ed.)
1985-05-01
This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.
Richardson, Thomas M.
2014-01-01
We introduce the super Patalan numbers, a generalization of the super Catalan numbers in the sense of Gessel, and prove a number of properties analogous to those of the super Catalan numbers. The super Patalan numbers generalize the super Catalan numbers similarly to how the Patalan numbers generalize the Catalan numbers.
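The abstract does not define the super Patalan numbers themselves, but Gessel's super Catalan numbers that they generalize have the closed form S(m, n) = (2m)!(2n)!/(m! n! (m+n)!); a minimal sketch:

```python
from math import factorial

def super_catalan(m: int, n: int) -> int:
    """Gessel's super Catalan number S(m, n) = (2m)!(2n)! / (m! n! (m+n)!),
    which is always an integer despite the division."""
    num = factorial(2 * m) * factorial(2 * n)
    den = factorial(m) * factorial(n) * factorial(m + n)
    return num // den

# S(1, n) = 2 * Catalan(n), giving twice the Catalan sequence:
print([super_catalan(1, n) for n in range(5)])  # [2, 2, 4, 10, 28]
```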
International Nuclear Information System (INIS)
Mallozzi, P.J.; Epstein, H.M.
1985-01-01
This invention provides an apparatus for providing x-rays to an object that may be in an ordinary environment such as air at approximately atmospheric pressure. The apparatus comprises: means (typically a laser beam) for directing energy onto a target to produce x-rays of a selected spectrum and intensity at the target; a fluid-tight enclosure around the target; means for maintaining the pressure in the first enclosure substantially below atmospheric pressure; a fluid-tight second enclosure adjoining the first enclosure, the common wall portion having an opening large enough to permit x-rays to pass through but small enough to allow the pressure reducing means to evacuate gas from the first enclosure at least as fast as it enters through the opening; the second enclosure filled with a gas that is highly transparent to x-rays; the wall of the second enclosure to which the x-rays travel having a portion that is highly transparent to x-rays (usually a beryllium or plastic foil), so that the object to which the x-rays are to be provided may be located outside the second enclosure and adjacent thereto and thus receive the x-rays substantially unimpeded by air or other intervening matter. The apparatus is particularly suited to obtaining EXAFS (extended x-ray absorption fine structure) data on a material
Why healthcare providers merge.
Postma, Jeroen; Roos, Anne-Fleur
2016-04-01
In many OECD countries, healthcare sectors have become increasingly concentrated as a result of mergers. However, detailed empirical insight into why healthcare providers merge is lacking. Also, we know little about the influence of national healthcare policies on mergers. We fill this gap in the literature by conducting a survey study on mergers among 848 Dutch healthcare executives, of which 35% responded (resulting in a study sample of 239 executives). A total of 65% of the respondents were involved in at least one merger between 2005 and 2012. During this period, Dutch healthcare providers faced a number of policy changes, including increasing competition, more pressure from purchasers, growing financial risks, de-institutionalisation of long-term care and decentralisation of healthcare services to municipalities. Our empirical study shows that healthcare providers predominantly merge to improve the provision of healthcare services and to strengthen their market position. Efficiency and financial considerations are also important drivers of merger activity in healthcare. We find that motives for merger are related to changes in health policies, in particular to the increasing pressure from competitors, insurers and municipalities.
March, S; Powietzka, J; Stallmann, C; Swart, E
2015-02-01
Since 1970 the health insurance system in Germany has shrunk by more than 90%, to 132 statutory health insurance funds (SHI) at present. For studies using data from different SHI, this development means a reduction of contacts and a higher workload when requesting data, because fusions bind resources in the health insurance funds. In order to avoid selection effects in studies among the insured, all SHI must be contacted. Additionally, 15 controlling institutions on the state and national level have to agree, as determined in § 75 of Book X of the German Social Code (SGB X). The lidA study, a German cohort study on work, age and health, intends to link primary and secondary data from all SHI of those insured who have given their agreement for participation. Since the beginning of the study in 2009 the number of SHI has been reduced by 70. Of the 6 585 interviews in 2011, approximately half of the interviewees agreed in written form that their individual health insurance data can be linked. This portion of the insured is dispersed among 95 SHI. At this point, 11 contracts with SHI are realised (covering approximately 50% of the insured) and 8 data controlling authorities have been contacted. The problems involved in the fusion of SHI and their meaning for research are explained in this article. The fusion of SHI makes sense in the long term: it will lead to a reduction of the contacts and contracts that researchers have to establish in order to analyse the data. Therefore, this article also discusses the alternative of creating a meta-data set combining the data from the different SHI. © Georg Thieme Verlag KG Stuttgart · New York.
Schmidt number for quantum operations
International Nuclear Information System (INIS)
Huang Siendong
2006-01-01
To understand how entangled states behave under local quantum operations is an open problem in quantum-information theory. The Jamiolkowski isomorphism provides a natural way to study this problem in terms of quantum states. We introduce the Schmidt number for quantum operations by this duality and clarify how the Schmidt number of a quantum state changes under a local quantum operation. Some characterizations of quantum operations with Schmidt number k are also provided
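As a concrete illustration of the state-side notion that the operation version extends, the Schmidt number of a pure bipartite state is the rank of its coefficient matrix; a minimal NumPy sketch (an illustration, not code from the paper):

```python
import numpy as np

def schmidt_number(state: np.ndarray, dim_a: int, dim_b: int, tol: float = 1e-12) -> int:
    """Schmidt rank of a pure bipartite state vector: the number of
    nonzero singular values of its reshaped coefficient matrix."""
    coeffs = state.reshape(dim_a, dim_b)
    singular_values = np.linalg.svd(coeffs, compute_uv=False)
    return int(np.sum(singular_values > tol))

# A product state |00> has Schmidt number 1; a Bell state has Schmidt number 2.
product = np.array([1, 0, 0, 0], dtype=float)
bell = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)
print(schmidt_number(product, 2, 2), schmidt_number(bell, 2, 2))  # 1 2
```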
Elementary number theory with programming
Lewinter, Marty
2015-01-01
A successful presentation of the fundamental concepts of number theory and computer programming Bridging an existing gap between mathematics and programming, Elementary Number Theory with Programming provides a unique introduction to elementary number theory with fundamental coverage of computer programming. Written by highly-qualified experts in the fields of computer science and mathematics, the book features accessible coverage for readers with various levels of experience and explores number theory in the context of programming without relying on advanced prerequisite knowledge and con
2004-01-01
Thanks to CERN's team of surveyors, the Organization's stand at the Night of Science attracted a large number of visitors: the technology and tools used by the surveyors, such as the Terrameter shown here, drew many people to the CERN stand
Energy Technology Data Exchange (ETDEWEB)
Alday, Luis F.; Bissi, Agnese; Łukowski, Tomasz [Mathematical Institute, University of Oxford,Andrew Wiles Building, Radcliffe Observatory Quarter,Woodstock Road, Oxford, OX2 6GG (United Kingdom)
2015-11-16
Using conformal field theory (CFT) arguments we derive an infinite number of constraints on the large spin expansion of the anomalous dimensions and structure constants of higher spin operators. These arguments rely only on analyticity, unitarity, crossing-symmetry and the structure of the conformal partial wave expansion. We obtain results both for perturbative CFT, to all orders in the perturbation parameter, and non-perturbatively. For the case of conformal gauge theories this provides a proof of the reciprocity principle to all orders in perturbation theory and provides a new "reciprocity" principle for structure constants. We argue that these results extend also to non-conformal theories.
Number-unconstrained quantum sensing
Mitchell, Morgan W.
2017-12-01
Quantum sensing is commonly described as a constrained optimization problem: maximize the information gained about an unknown quantity using a limited number of particles. Important sensors including gravitational wave interferometers and some atomic sensors do not appear to fit this description, because there is no external constraint on particle number. Here, we develop the theory of particle-number-unconstrained quantum sensing, and describe how optimal particle numbers emerge from the competition of particle-environment and particle-particle interactions. We apply the theory to optical probing of an atomic medium modeled as a resonant, saturable absorber, and observe the emergence of well-defined finite optima without external constraints. The results contradict some expectations from number-constrained quantum sensing and show that probing with squeezed beams can give a large sensitivity advantage over classical strategies when each is optimized for particle number.
Chunking of Large Multidimensional Arrays
Energy Technology Data Exchange (ETDEWEB)
Rotem, Doron; Otoo, Ekow J.; Seshadri, Sridhar
2007-02-28
Data intensive scientific computations as well as on-line analytical processing applications are done on very large datasets that are modeled as k-dimensional arrays. The storage organization of such arrays on disks is done by partitioning the large global array into fixed size hyper-rectangular sub-arrays called chunks or tiles that form the units of data transfer between disk and memory. Typical queries involve the retrieval of sub-arrays in a manner that accesses all chunks that overlap the query results. An important metric of the storage efficiency is the expected number of chunks retrieved over all such queries. The question that immediately arises is "what shapes of array chunks give the minimum expected number of chunks over a query workload?" In this paper we develop two probabilistic mathematical models of the problem and provide exact solutions using steepest descent and geometric programming methods. Experimental results, using synthetic workloads on real life data sets, show that our chunking is much more efficient than the existing approximate solutions.
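Under the simple probabilistic model in which a range query lands at a uniformly random offset, a query of extent q_i meets about 1 + q_i/c_i chunks in dimension i. The sketch below (an illustration under that assumption, not the paper's steepest-descent or geometric-programming solution) brute-forces the chunk shape of a fixed volume that minimizes the expected chunk count:

```python
from itertools import product
from math import prod

def expected_chunks(query_extent, chunk_shape):
    """Expected chunks overlapped by a range query at a uniformly random
    offset: one chunk plus q_i / c_i expected boundary crossings per dimension."""
    return prod(1 + q / c for q, c in zip(query_extent, chunk_shape))

def best_chunk_shape(query_extent, chunk_volume, max_side=64):
    """Brute-force the chunk shape of the given volume that minimizes the
    expected number of chunks touched by the given query extent."""
    dims = len(query_extent)
    best = None
    for shape in product(range(1, max_side + 1), repeat=dims):
        if prod(shape) != chunk_volume:
            continue
        cost = expected_chunks(query_extent, shape)
        if best is None or cost < best[1]:
            best = (shape, cost)
    return best

# For 40 x 10 range queries and 64-element chunks, the best shape is
# elongated along the longer query axis, matching the 4:1 aspect ratio.
print(best_chunk_shape((40, 10), 64))  # ((16, 4), 12.25)
```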
Large transverse momentum phenomena
International Nuclear Information System (INIS)
Brodsky, S.J.
1977-09-01
It is pointed out that it is particularly significant that the quantum numbers of the leading particles are strongly correlated with the quantum numbers of the incident hadrons, indicating that the valence quarks themselves are transferred to large p_t. The crucial question is how they get there. Various hadron reactions are discussed covering the structure of exclusive reactions, inclusive reactions, normalization of inclusive cross sections, charge correlations, and jet production at large transverse momentum. 46 references
Outpatient provider concentration and commercial colonoscopy prices.
Pozen, Alexis
2015-01-01
The objective was to evaluate the magnitude of various contributors to outpatient commercial colonoscopy prices, including market- and provider-level factors, especially market share. We used adjudicated fee-for-service facility claims from a large commercial insurer for colonoscopies occurring in hospital outpatient departments or ambulatory surgery centers from October 2005 to December 2012. Claims were matched to provider- and market-level data. Linear fixed effects regressions of negotiated colonoscopy price were run on provider, system, and market characteristics. Markets were defined as counties. There were 178,433 claims from 169 providers (104 systems). The mean system market share was 76% (SD = 0.34) and the mean real (deflated) price was US$1363 (SD = 374), ranging from US$169 to US$2748. For every percentage point increase in a system or individual facility's bed share, relative price increased by 2 to 4 percentage points; this result was stable across a number of specifications. Market population and price were also consistently positively related, though this relation was small in magnitude. No other factor explained price as strongly as market share. Price variation for colonoscopy was driven primarily by market share, of particular concern as the number of mergers increases in the wake of the recession and the Affordable Care Act. Whether the variation is justified by better quality care requires further research to determine whether quality is subsumed in prices. © The Author(s) 2015.
Number words and number symbols a cultural history of numbers
Menninger, Karl
1992-01-01
Classic study discusses number sequence and language and explores written numerals and computations in many cultures. "The historian of mathematics will find much to interest him here both in the contents and viewpoint, while the casual reader is likely to be intrigued by the author's superior narrative ability."
Photon number projection using non-number-resolving detectors
International Nuclear Information System (INIS)
Rohde, Peter P; Webb, James G; Huntington, Elanor H; Ralph, Timothy C
2007-01-01
Number-resolving photo-detection is necessary for many quantum optics experiments, especially in the application of entangled state preparation. Several schemes have been proposed for approximating number-resolving photo-detection using non-number-resolving detectors. Such techniques include multi-port detection and time-division multiplexing. We provide a detailed analysis and comparison of different number-resolving detection schemes, with a view to creating a useful reference for experimentalists. We show that the ideal architecture for projective measurements is a function of the detector's dark count and efficiency parameters. We also describe a process for selecting an appropriate topology given actual experimental component parameters
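One standard multi-port scheme splits the input across N binary (click/no-click) detectors; the chance that n photons are counted correctly is the chance they all land on distinct detectors, N!/((N-n)! N^n). This assumes ideal, lossless detectors with no dark counts, so it is an illustrative model rather than the paper's full analysis:

```python
from math import perm

def prob_all_distinct(n_photons: int, n_detectors: int) -> float:
    """Probability that n photons, uniformly split over N non-number-resolving
    detectors, all hit distinct detectors (ideal detectors assumed):
    N! / ((N - n)! * N^n)."""
    if n_photons > n_detectors:
        return 0.0
    return perm(n_detectors, n_photons) / n_detectors ** n_photons

# With 4 detectors, two photons are resolved correctly 75% of the time;
# 16 detectors raise that to 93.75%.
print(prob_all_distinct(2, 4), prob_all_distinct(2, 16))  # 0.75 0.9375
```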
Visuospatial Priming of the Mental Number Line
Stoianov, Ivilin; Kramer, Peter; Umilta, Carlo; Zorzi, Marco
2008-01-01
It has been argued that numbers are spatially organized along a "mental number line" that facilitates left-hand responses to small numbers, and right-hand responses to large numbers. We hypothesized that whenever the representations of visual and numerical space are concurrently activated, interactions can occur between them, before response…
Essays on the theory of numbers
Dedekind, Richard
1963-01-01
Two classic essays by great German mathematician: one provides an arithmetic, rigorous foundation for the irrational numbers, the other is an attempt to give the logical basis for transfinite numbers and properties of the natural numbers.
Teaching multiplication of large positive whole numbers using ...
African Journals Online (AJOL)
KEY WORDS: Grating Method, History of Mathematics, Long Multiplication. ... The Wolfram mathworld (n.d.) opined that the ... A further simple random sampling was carried out to select an intact class of 40 students from each of the sampled ...
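The grating (lattice) method named in the keywords can be sketched in a few lines; this is a generic illustration of the algorithm, not code from the article:

```python
def lattice_multiply(a: int, b: int) -> int:
    """Multiply two positive whole numbers with the lattice (grating) method:
    fill a grid with digit-by-digit products, then sum along the diagonals
    with carries, as in the paper-and-pencil procedure."""
    xs = [int(d) for d in str(a)]
    ys = [int(d) for d in str(b)]
    # Each cell (i, j) contributes its product to diagonal i + j,
    # counted from the least-significant end.
    diagonals = [0] * (len(xs) + len(ys))
    for i, x in enumerate(reversed(xs)):
        for j, y in enumerate(reversed(ys)):
            diagonals[i + j] += x * y
    # Resolve carries along the diagonals, least significant first.
    result, carry = 0, 0
    for power, total in enumerate(diagonals):
        total += carry
        result += (total % 10) * 10 ** power
        carry = total // 10
    result += carry * 10 ** len(diagonals)
    return result

print(lattice_multiply(469, 37))  # 17353
```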
Boll weevil: experimental sterilization of large numbers by fractionated irradiation
International Nuclear Information System (INIS)
Haynes, J.W.; Wright, J.E.; Davich, T.B.; Roberson, J.; Griffin, J.G.; Darden, E.
1978-01-01
Boll weevils, Anthonomus grandis grandis Boheman, 9 days after egg implantation in the larval diet were transported from the Boll Weevil Research Laboratory, Mississippi State, MS, to the Comparative Animal Research Laboratory, Oak Ridge, TN, and irradiated with 6.9 krad (test 1) or 7.2 krad (test 2) of ⁶⁰Co gamma rays delivered in 25 equal doses over 100 h. In test 1, from 600 individual pairs of T (treated) males x N (normal) females, only 114 eggs hatched from a sample of 950 eggs, and 47 adults emerged from a sample of 1042 eggs. Also, from 600 pairs of T females x N males, 6 eggs hatched of a sample of 6 eggs and 12 adults emerged from a sample of 20 eggs. In test 2, from 700 individual pairs of T males x N females, 54 eggs hatched from a sample of 1510, and 10 adults emerged from a sample of 1703 eggs. Also, in T females x N males matings, 1 egg hatched of a sample of 3, and no adults emerged from a sample of 4. Transportation and handling in the 2nd test reduced adult emergence an avg of 49%. Thus the 2 replicates in test 2 resulted in 3.4 × 10⁵ and 4.3 × 10⁵ irradiated weevils emerging/day for 7 days. Bacterial contamination of weevils was low
Large numbers hypothesis. IV - The cosmological constant and quantum physics
Adams, P. J.
1983-01-01
In standard physics quantum field theory is based on a flat vacuum space-time. This quantum field theory predicts a nonzero cosmological constant. Hence the gravitational field equations do not admit a flat vacuum space-time. This dilemma is resolved using the units covariant gravitational field equations. This paper shows that the field equations admit a flat vacuum space-time with nonzero cosmological constant if and only if the canonical LNH is valid. This allows an interpretation of the LNH phenomena in terms of a time-dependent vacuum state. If this is correct then the cosmological constant must be positive.
Rabi-vibronic resonance with large number of vibrational quanta
Glenn, R.; Raikh, M. E.
2011-01-01
We study theoretically the Rabi oscillations of a resonantly driven two-level system linearly coupled to a harmonic oscillator (vibrational mode) with frequency ω_0. We show that for weak coupling, ω_p ≪ ω_0, where ω_p is the polaronic shift, Rabi oscillations are strongly modified in the vicinity of the Rabi-vibronic resonance Ω_R = ω_0, where Ω_R is the Rabi frequency. The width of the resonance is (Ω_R − ω_0) ∼ ω_p^(2/3) ω_0^(1/3) ...
Our prescription drugs kill us in large numbers
DEFF Research Database (Denmark)
Gøtzsche, Peter C
2014-01-01
Our prescription drugs are the third leading cause of death after heart disease and cancer in the United States and Europe. Around half of those who die have taken their drugs correctly; the other half die because of errors, such as too high a dose or use of a drug despite contraindications. Our...
Directory of Open Access Journals (Sweden)
T. Pathinathan
2015-01-01
Full Text Available In this paper we define the diamond fuzzy number with the help of the triangular fuzzy number. We include basic arithmetic operations, like addition and subtraction of diamond fuzzy numbers, with examples. We define the diamond fuzzy matrix with some matrix properties. We have defined the Nested diamond fuzzy number and the Linked diamond fuzzy number, and have further classified the Right Linked diamond fuzzy number and the Left Linked diamond fuzzy number. Finally we have verified the arithmetic operations for the above mentioned types of diamond fuzzy numbers.
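The abstract does not spell out the diamond fuzzy arithmetic itself, but the triangular fuzzy numbers it builds on have standard component-wise operations; a minimal sketch under that standard definition:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TriangularFuzzyNumber:
    """A triangular fuzzy number (a, b, c): membership rises linearly
    from a to the peak at b, then falls linearly to c."""
    a: float
    b: float
    c: float

    def __add__(self, other):
        # Component-wise addition, the standard extension-principle result.
        return TriangularFuzzyNumber(self.a + other.a, self.b + other.b, self.c + other.c)

    def __sub__(self, other):
        # (a1, b1, c1) - (a2, b2, c2) = (a1 - c2, b1 - b2, c1 - a2).
        return TriangularFuzzyNumber(self.a - other.c, self.b - other.b, self.c - other.a)

x = TriangularFuzzyNumber(1, 2, 3)
y = TriangularFuzzyNumber(2, 4, 6)
print(x + y)  # TriangularFuzzyNumber(a=3, b=6, c=9)
print(x - y)  # TriangularFuzzyNumber(a=-5, b=-2, c=1)
```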
From Calculus to Number Theory
Indian Academy of Sciences (India)
A. Raghuram
2016-11-04
Nov 4, 2016 ... diverges to infinity. This means given any number M, however large, we can add sufficiently many terms in the above series to make the sum larger than M. This was first proved by Nicole Oresme (1323-1382), a brilliant French philosopher of his time.
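The claim is easy to check numerically; a short sketch (illustrative, not from the article) that finds how many terms of the harmonic series are needed to push the partial sum past M:

```python
def terms_needed(M: float) -> int:
    """Number of terms of the harmonic series 1 + 1/2 + 1/3 + ...
    needed for the partial sum to first exceed M."""
    total, n = 0.0, 0
    while total <= M:
        n += 1
        total += 1.0 / n
    return n

# The partial sums grow roughly like ln(n), so the counts explode:
print([terms_needed(M) for M in (1, 2, 5, 10)])  # [2, 4, 83, 12367]
```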
DEFF Research Database (Denmark)
Simonsen, Karina T; Gallego, Sandra F; Færgeman, Nils J.
2012-01-01
For more than ten years the nematode Caenorhabditis elegans has proven to be a valuable model for studies of the host response to various bacterial and fungal pathogens. When exposed to a pathogenic organism, a clear response is elicited in the nematode, which is characterized by specific ... alterations on the transcriptional and translational levels. Early on, researchers took advantage of the possibility to conduct large-scale investigations of the C. elegans immune response. Multiple studies demonstrated that C. elegans does indeed mount a protective response against invading pathogens, thus ... rendering this small nematode a very useful and simple host model for the study of innate immunity and host-pathogen interactions. Here, we provide an overview of key aspects of innate immunity in C. elegans revealed by recent whole-genome transcriptomics and proteomics studies of the global response of C...
Management systems for service providers
International Nuclear Information System (INIS)
Bolokonya, Herbert Chiwalo
2015-02-01
In the field of radiation safety and protection there are a number of institutions involved in achieving different goals and strategies. These strategies and objectives are pursued using a number of tools and systems; one of these is the management system. This study reviewed the management system concept for Technical Service Providers in the field of radiation safety and protection, with the main focus on personal monitoring services provided by personal dosimetry laboratories. A number of key issues were found to be essential to an efficient management system: laboratory accreditation and approval, customer-driven operating criteria, and the control of records and good reporting. (au)
Burkhart, Jerry
2009-01-01
Prime numbers are often described as the "building blocks" of natural numbers. This article shows how the author and his students took this idea literally by using prime factorizations to build numbers with blocks. In this activity, students explore many concepts of number theory, including the relationship between greatest common factors and…
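The "building blocks" idea can be made concrete in code: factor each number into its prime blocks, then assemble the greatest common factor from the blocks the numbers share. A generic illustration, not the classroom activity itself:

```python
def prime_factorization(n: int) -> dict:
    """Prime factorization of n by trial division, e.g. 360 -> {2: 3, 3: 2, 5: 1}."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def gcf(a: int, b: int) -> int:
    """Greatest common factor built from the shared prime 'blocks':
    each common prime taken to the smaller of its two exponents."""
    fa, fb = prime_factorization(a), prime_factorization(b)
    result = 1
    for p in fa.keys() & fb.keys():
        result *= p ** min(fa[p], fb[p])
    return result

print(prime_factorization(360))  # {2: 3, 3: 2, 5: 1}
print(gcf(360, 84))              # 12
```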
Vazzana, Anthony; Garth, David
2007-01-01
One of the oldest branches of mathematics, number theory is a vast field devoted to studying the properties of whole numbers. Offering a flexible format for a one- or two-semester course, Introduction to Number Theory uses worked examples, numerous exercises, and two popular software packages to describe a diverse array of number theory topics.
On the number of special numbers
Indian Academy of Sciences (India)
without loss of any generality to be the first k primes), then the equation a + b = c has .... This is an elementary exercise in partial summation (see [12]). Thus ... This is easily done by inserting a stronger form of the prime number theorem into the.
A large electrically excited synchronous generator
DEFF Research Database (Denmark)
2014-01-01
This invention relates to a large electrically excited synchronous generator (100), comprising a stator (101), and a rotor or rotor coreback (102) comprising an excitation coil (103) generating a magnetic field during use, wherein the rotor or rotor coreback (102) further comprises a plurality...... adjacent neighbouring poles. In this way, a large electrically excited synchronous generator (EESG) is provided that readily enables a relatively large number of poles, compared to a traditional EESG, since the excitation coil in this design provides MMF for all the poles, whereas in a traditional EESG...... each pole needs its own excitation coil, which limits the number of poles as each coil will take up too much space between the poles....
Classical theory of algebraic numbers
Ribenboim, Paulo
2001-01-01
Gauss created the theory of binary quadratic forms in "Disquisitiones Arithmeticae", and Kummer invented ideals and the theory of cyclotomic fields in his attempt to prove Fermat's Last Theorem. These were the starting points for the theory of algebraic numbers, developed in the classical papers of Dedekind, Dirichlet, Eisenstein, Hermite and many others. This theory, enriched with more recent contributions, is of basic importance in the study of diophantine equations and arithmetic algebraic geometry, including methods in cryptography. This book has a clear and thorough exposition of the classical theory of algebraic numbers, and contains a large number of exercises as well as worked out numerical examples. The Introduction is a recapitulation of results about principal ideal domains, unique factorization domains and commutative fields. Part One is devoted to residue classes and quadratic residues. In Part Two one finds the study of algebraic integers, ideals, units, class numbers, the theory of decomposition, iner...
It's a Girl! Random Numbers, Simulations, and the Law of Large Numbers
Goodwin, Chris; Ortiz, Enrique
2015-01-01
Modeling using mathematics and making inferences about mathematical situations are becoming more prevalent in most fields of study. Descriptive statistics cannot be used to generalize about a population or make predictions of what can occur. Instead, inference must be used. Simulation and sampling are essential in building a foundation for…
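A minimal simulation of the birth scenario suggested by the title (an illustrative sketch, not the authors' classroom materials) shows the law of large numbers at work:

```python
import random

def proportion_of_girls(n_births: int, seed: int = 0) -> float:
    """Simulate n independent births (girl with probability 0.5) and
    return the observed proportion of girls."""
    rng = random.Random(seed)
    girls = sum(rng.random() < 0.5 for _ in range(n_births))
    return girls / n_births

# The sample proportion settles toward 0.5 as the number of births grows:
for n in (10, 100, 10_000, 1_000_000):
    print(n, proportion_of_girls(n))
```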
Grešak, Rozalija
2015-01-01
The field of real numbers is usually constructed using Dedekind cuts. In this thesis we focus on the construction of the field of real numbers as the metric completion of the rational numbers using Cauchy sequences. In a similar manner we construct the field of p-adic numbers and describe some of their basic and topological properties. We then construct the complex p-adic numbers and compare them with the ordinary complex numbers. We conclude the thesis by giving a motivation for the int...
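The p-adic absolute value that drives the completion can be sketched directly; an illustrative helper, not code from the thesis:

```python
from fractions import Fraction

def p_adic_valuation(x: Fraction, p: int) -> int:
    """v_p(x): the exponent of p in x, extended to rationals as
    v_p(a/b) = v_p(a) - v_p(b)."""
    if x == 0:
        raise ValueError("v_p(0) is +infinity")
    def vp(n: int) -> int:
        v = 0
        while n % p == 0:
            n //= p
            v += 1
        return v
    return vp(abs(x.numerator)) - vp(x.denominator)

def p_adic_norm(x: Fraction, p: int) -> Fraction:
    """|x|_p = p^(-v_p(x)): numbers divisible by a high power of p
    are p-adically small."""
    v = p_adic_valuation(x, p)
    return Fraction(1, p ** v) if v >= 0 else Fraction(p ** (-v), 1)

# In the 2-adic metric, 8 is closer to 0 than 1/2 is:
print(p_adic_norm(Fraction(8), 2), p_adic_norm(Fraction(1, 2), 2))  # 1/8 2
```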
Compendium of Experimental Cetane Numbers
Energy Technology Data Exchange (ETDEWEB)
Yanowitz, Janet [Ecoengineering, Sharonville, OH (United States); Ratcliff, Matthew A. [National Renewable Energy Lab. (NREL), Golden, CO (United States); McCormick, Robert L. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Taylor, J. D. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Murphy, M. J. [Battelle, Columbus, OH (United States)
2017-02-22
This report is an updated version of the 2014 Compendium of Experimental Cetane Number Data and presents a compilation of measured cetane numbers for pure chemical compounds. It includes all available single-compound cetane number data found in the scientific literature up until December 2016 as well as a number of previously unpublished values, most measured over the past decade at the National Renewable Energy Laboratory. This version of the compendium contains cetane values for 496 pure compounds, including 204 hydrocarbons and 292 oxygenates. A total of 176 individual measurements are new to this version of the compendium, all of them collected using ASTM Method D6890, which utilizes an Ignition Quality Tester (IQT), a type of constant-volume combustion chamber. For many compounds, numerous measurements are included, often collected by different researchers using different methods. The text of this document is unchanged from the 2014 version, except for the numbers of compounds in Section 3.1, the Appendices, Table 1. Primary Cetane Number Data Sources and Table 2. Number of Measurements Included in Compendium. Cetane number is a relative ranking of a fuel's autoignition characteristics for use in compression ignition engines. It is based on the amount of time between fuel injection and ignition, also known as ignition delay. The cetane number is typically measured either in a single-cylinder engine or a constant-volume combustion chamber. Values in the previous compendium derived from octane numbers have been removed and replaced with a brief analysis of the correlation between cetane numbers and octane numbers. The discussion on the accuracy and precision of the most commonly used methods for measuring cetane number has been expanded, and the data have been annotated extensively to provide additional information that will help the reader judge the relative reliability of individual results.
On the number of special numbers
Indian Academy of Sciences (India)
We now apply the theory of the Thue equation to obtain an effective bound on m. Indeed, by Lemma 3.2, we can write m² = ba³ and m² − 4 = cd³ with b, c cubefree. By the above, both b, c are bounded since they are cubefree and all their prime factors are less than e^63727. Now we have a finite number of Thue equations:
International Nuclear Information System (INIS)
Kaneko, K.
1987-01-01
A relationship between the number projection and the shell model methods is investigated in the case of a single-j shell. We can find a one-to-one correspondence between the number projected and the shell model states
Gallistel, C R
2017-12-01
The representation of discrete and continuous quantities appears to be ancient and pervasive in animal brains. Because numbers are the natural carriers of these representations, we may discover that in brains, it's numbers all the way down.
Medical service provider networks.
Mougeot, Michel; Naegelen, Florence
2018-05-17
In many countries, health insurers or health plans choose to contract either with any willing providers or with preferred providers. We compare these mechanisms when two medical services are imperfect substitutes in demand and are supplied by two different firms. In both cases, the reimbursement is higher when patients select the in-network provider(s). We show that these mechanisms yield lower prices, lower providers' and insurer's profits, and lower expense than in the uniform-reimbursement case. Whatever the degree of product differentiation, a not-for-profit insurer should prefer selective contracting and select a reimbursement such that the out-of-pocket expense is null. Although all providers join the network under any-willing-provider contracting in the absence of third-party payment, an asymmetric equilibrium may exist when this billing arrangement is implemented. Copyright © 2018 John Wiley & Sons, Ltd.
Random number generation and creativity.
Bains, William
2008-01-01
A previous paper suggested that humans can generate genuinely random numbers. I tested this hypothesis by repeating the experiment with a larger number of highly numerate subjects, asking them to call out a sequence of digits selected from 0 through 9. The resulting sequences were substantially non-random, with an excess of sequential pairs of numbers and a deficit of repeats of the same number, in line with previous literature. However, the previous literature suggests that humans generate random numbers with substantial conscious effort, and distractions which reduce that effort reduce the randomness of the numbers. I reduced my subjects' concentration by asking them to call out in another language, and with alcohol - neither affected the randomness of their responses. This suggests that the ability to generate random numbers is a 'basic' function of the human mind, even if those numbers are not mathematically 'random'. I hypothesise that there is a 'creativity' mechanism which, while not truly random, provides novelty as part of the mind's defence against closed programming loops, and that testing for the effects seen here in people more or less familiar with numbers or with spontaneous creativity could identify more features of this process. It is possible that training to perform better at simple random generation tasks could help to increase creativity, through training people to reduce the conscious mind's suppression of the 'spontaneous', creative response to new questions.
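The two departures from randomness the study reports (an excess of sequential pairs, a deficit of repeats) can be quantified with simple adjacent-pair statistics. A sketch with illustrative function names, using a simulated uniform sequence as the baseline:

```python
import random

def pair_rates(digits):
    """Rates of immediate repeats (same digit twice) and +1 steps (mod 10)
    among adjacent pairs; for uniform random digits both are near 0.10."""
    pairs = list(zip(digits, digits[1:]))
    repeat_rate = sum(a == b for a, b in pairs) / len(pairs)
    step_rate = sum((b - a) % 10 == 1 for a, b in pairs) / len(pairs)
    return repeat_rate, step_rate

random.seed(0)
uniform = [random.randrange(10) for _ in range(100000)]
repeat_rate, step_rate = pair_rates(uniform)
# Human-generated sequences in the study showed step_rate above and
# repeat_rate below this ~0.10 baseline.
```

Comparing a subject's sequence against this baseline is one simple way to operationalize "substantially non-random".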
DEFF Research Database (Denmark)
Andersen, Torben
2014-01-01
had a marked singular and an unmarked plural. Synchronically, however, the singular is arguably the basic member of the number category as revealed by the use of the two numbers. In addition, some nouns have a collective form, which is grammatically singular. Number also plays a role...
DEFF Research Database (Denmark)
Elvik, Rune; Bjørnskau, Torkel
2017-01-01
Highlights: • 26 studies of the safety-in-numbers effect are reviewed. • The existence of a safety-in-numbers effect is confirmed. • Results are consistent. • Causes of the safety-in-numbers effect are incompletely known....
de Mestre, Neville
2008-01-01
Prime numbers are important as the building blocks for the set of all natural numbers, because prime factorisation is an important and useful property of all natural numbers. Students can discover them by using the method known as the Sieve of Eratosthenes, named after the Greek geographer and astronomer who lived from c. 276-194 BC. Eratosthenes…
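The sieve method the passage describes can be sketched in a few lines (the function name and limit are illustrative):

```python
def sieve(limit):
    """Sieve of Eratosthenes: return all primes <= limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]          # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # cross off every multiple of p, starting at p*p
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]

primes = sieve(30)   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Starting the crossing-off at p*p is the classical refinement: smaller multiples of p were already removed by smaller primes.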
Observer variability in estimating numbers: An experiment
Erwin, R.M.
1982-01-01
Census estimates of bird populations provide an essential framework for a host of research and management questions. However, with some exceptions, the reliability of numerical estimates and the factors influencing them have received insufficient attention. Independent of the problems associated with habitat type, weather conditions, cryptic coloration, etc., estimates may vary widely due only to intrinsic differences in observers' abilities to estimate numbers. Lessons learned in the field of perceptual psychology may be usefully applied to 'real world' problems in field ornithology. Based largely on dot discrimination tests in the laboratory, it was found that numerical abundance, density of objects, spatial configuration, color, background, and other variables influence individual accuracy in estimating numbers. The primary purpose of the present experiment was to assess the effects of observer, prior experience, and numerical range on accuracy in estimating numbers of waterfowl from black-and-white photographs. By using photographs of animals rather than black dots, I felt the results could be applied more meaningfully to field situations. Further, reinforcement was provided throughout some experiments to examine the influence of training on accuracy.
CERN experiment provides first glimpse inside cold antihydrogen
2002-01-01
"The ATRAP experiment at the Antiproton Decelerator at CERN has detected and measured large numbers of cold antihydrogen atoms. Relying on ionization of the cold antiatoms when they pass through a strong electric field gradient, the ATRAP measurement provides the first glimpse inside an antiatom, and the first information about the physics of antihydrogen. The results have been accepted for publication in Physical Review Letters" (1 page).
Gauge transformations with fractional winding numbers
International Nuclear Information System (INIS)
Abouelsaood, A.
1996-01-01
The role which gauge transformations of noninteger winding numbers might play in non-Abelian gauge theories is studied. The phase factor acquired by the semiclassical physical states in an arbitrary background gauge field when they undergo a gauge transformation of an arbitrary real winding number is calculated in the path integral formalism assuming that a θFF term added to the Lagrangian plays the same role as in the case of integer winding numbers. Requiring that these states provide a representation of the group of "large" gauge transformations, a condition on the allowed backgrounds is obtained. It is shown that this representability condition is only satisfied in the monopole sector of a spontaneously broken gauge theory, but not in the vacuum sector of an unbroken or a spontaneously broken non-Abelian gauge theory. It is further shown that the recent proof of the vanishing of the θ parameter when gauge transformations of arbitrary fractional winding numbers are allowed breaks down in precisely those cases where the representability condition is obeyed because certain gauge transformations needed for the proof, and whose existence is assumed, are either spontaneously broken or cannot be globally defined as a result of a topological obstruction. copyright 1996 The American Physical Society
Number to finger mapping is topological.
Plaisier, M.A.; Smeets, J.B.J.
2011-01-01
It has been shown that humans associate fingers with numbers because finger counting strategies interact with numerical judgements. At the same time, there is evidence that there is a relation between number magnitude and space as small to large numbers seem to be represented from left to right. In
Niederreiter, Harald
2015-01-01
This textbook effectively builds a bridge from basic number theory to recent advances in applied number theory. It presents the first unified account of the four major areas of application where number theory plays a fundamental role, namely cryptography, coding theory, quasi-Monte Carlo methods, and pseudorandom number generation, allowing the authors to delineate the manifold links and interrelations between these areas. Number theory, which Carl-Friedrich Gauss famously dubbed the queen of mathematics, has always been considered a very beautiful field of mathematics, producing lovely results and elegant proofs. While only very few real-life applications were known in the past, today number theory can be found in everyday life: in supermarket bar code scanners, in our cars’ GPS systems, in online banking, etc. Starting with a brief introductory course on number theory in Chapter 1, which makes the book more accessible for undergraduates, the authors describe the four main application areas in Chapters...
Providing free autopoweroff plugs
DEFF Research Database (Denmark)
Jensen, Carsten Lynge; Hansen, Lars Gårn; Fjordbak, Troels
2012-01-01
Experimental evidence of the effect of providing households with cheap energy saving technology is sparse. We present results from a field experiment in which autopoweroff plugs were provided free of charge to randomly selected households. We use propensity score matching to find treatment effects...
Providing solutions to engineering problems
International Nuclear Information System (INIS)
Connop, R.P.P.
1991-01-01
BNFL has acquired unique experience over a period of 40 years in specifying, designing and constructing spent fuel reprocessing and associated waste management plant. This experience is currently used to support a pound 5.5 billion capital investment programme. This paper reviews a number of engineering problems and their solutions to highlight BNFL experience in providing comprehensive specification, design and engineering and project management services. (author)
Deuschel, Jean-Dominique; Deuschel, Jean-Dominique
2001-01-01
This is the second printing of the book first published in 1988. The first four chapters of the volume are based on lectures given by Stroock at MIT in 1987. They form an introduction to the basic ideas of the theory of large deviations and make a suitable package on which to base a semester-length course for advanced graduate students with a strong background in analysis and some probability theory. A large selection of exercises presents important material and many applications. The last two chapters present various non-uniform results (Chapter 5) and outline the analytic approach that allow
DEFF Research Database (Denmark)
Jørgensen, Claus Bjørn; Suetens, Sigrid; Tyran, Jean-Robert
We investigate the "law of small numbers" using a unique panel data set on lotto gambling. Because we can track individual players over time, we can measure how they react to outcomes of recent lotto drawings. We can therefore test whether they behave as if they believe they can predict lotto numbers based on recent drawings. While most players pick the same set of numbers week after week without regard to numbers drawn or anything else, we find that those who do change act on average in the way predicted by the law of small numbers as formalized in recent behavioral theory. In particular, on average they move away from numbers that have recently been drawn, as suggested by the "gambler's fallacy", and move toward numbers that are on streak, i.e. have been drawn several weeks in a row, consistent with the "hot hand fallacy".
Ore, Oystein
2017-01-01
Number theory is the branch of mathematics concerned with the counting numbers, 1, 2, 3, … and their multiples and factors. Of particular importance are odd and even numbers, squares and cubes, and prime numbers. But in spite of their simplicity, you will meet a multitude of topics in this book: magic squares, cryptarithms, finding the day of the week for a given date, constructing regular polygons, pythagorean triples, and many more. In this revised edition, John Watkins and Robin Wilson have updated the text to bring it in line with contemporary developments. They have added new material on Fermat's Last Theorem, the role of computers in number theory, and the use of number theory in cryptography, and have made numerous minor changes in the presentation and layout of the text and the exercises.
Experimental determination of Ramsey numbers.
Bian, Zhengbing; Chudak, Fabian; Macready, William G; Clark, Lane; Gaitan, Frank
2013-09-27
Ramsey theory is a highly active research area in mathematics that studies the emergence of order in large disordered structures. Ramsey numbers mark the threshold at which order first appears and are extremely difficult to calculate due to their explosive rate of growth. Recently, an algorithm that can be implemented using adiabatic quantum evolution has been proposed that calculates the two-color Ramsey numbers R(m,n). Here we present results of an experimental implementation of this algorithm and show that it correctly determines the Ramsey numbers R(3,3) and R(m,2) for 4≤m≤8. The R(8,2) computation used 84 qubits of which 28 were computational qubits. This computation is the largest experimental implementation of a scientifically meaningful adiabatic evolution algorithm that has been done to date.
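For the smallest case reported, R(3,3), the threshold can also be verified classically by exhaustive search — a different technique from the adiabatic quantum algorithm the paper implements, but useful for seeing what the number means (helper names are illustrative):

```python
from itertools import combinations, product

def has_mono_triangle(coloring, n):
    """coloring maps each edge (i, j), i < j, of K_n to color 0 or 1."""
    return any(
        coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def forces_mono_triangle(n):
    """True iff every 2-coloring of K_n's edges contains a monochromatic triangle."""
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(dict(zip(edges, colors)), n)
        for colors in product((0, 1), repeat=len(edges))
    )

below = forces_mono_triangle(5)   # K_5 admits a triangle-free 2-coloring
at = forces_mono_triangle(6)      # every 2-coloring of K_6 has one, so R(3,3) = 6
```

The "explosive growth" in the abstract is visible here: K_6 already has 2^15 = 32,768 colorings to check, and this brute-force approach becomes hopeless within a few more vertices.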
Godefroy, Gilles
2004-01-01
Numbers are fascinating. The fascination begins in childhood, when we first learn to count. It continues as we learn arithmetic, algebra, geometry, and so on. Eventually, we learn that numbers not only help us to measure the world, but also to understand it and, to some extent, to control it. In The Adventure of Numbers, Gilles Godefroy follows the thread of our expanding understanding of numbers to lead us through the history of mathematics. His goal is to share the joy of discovering and understanding this great adventure of the mind. The development of mathematics has been punctuated by a n
DEFF Research Database (Denmark)
Suetens, Sigrid; Galbo-Jørgensen, Claus B.; Tyran, Jean-Robert Karl
2016-01-01
We investigate the ‘law of small numbers’ using a data set on lotto gambling that allows us to measure players’ reactions to draws. While most players pick the same set of numbers week after week, we find that those who do change react on average as predicted by the law of small numbers...... as formalized in recent behavioral theory. In particular, players tend to bet less on numbers that have been drawn in the preceding week, as suggested by the ‘gambler’s fallacy’, and bet more on a number if it was frequently drawn in the recent past, consistent with the ‘hot-hand fallacy’....
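The statistical fact behind both fallacies is that lotto draws are independent, so conditioning on last week's outcome does not move the odds. A simulation sketch (the 6-of-36 format and sample sizes are illustrative assumptions, not the rules of the data set studied):

```python
import random

POOL, K = 36, 6   # assumed lotto format: draw 6 numbers out of 36

def draw():
    return set(random.sample(range(1, POOL + 1), K))

random.seed(1)
hits_after_hit, trials = 0, 0
prev = draw()
for _ in range(20000):
    cur = draw()
    if 7 in prev:                    # condition on 7 having been drawn last week
        trials += 1
        hits_after_hit += 7 in cur   # does it repeat this week?
    prev = cur

rate = hits_after_hit / trials       # stays near K / POOL = 1/6
```

Betting less on last week's numbers (gambler's fallacy) or more on streaking numbers (hot-hand fallacy) therefore cannot improve on this base rate.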
Intuitive numbers guide decisions
Directory of Open Access Journals (Sweden)
Ellen Peters
2008-12-01
Full Text Available Measuring reaction times to number comparisons is thought to reveal a processing stage in elementary numerical cognition linked to internal, imprecise representations of number magnitudes. These intuitive representations of the mental number line have been demonstrated across species and human development but have been little explored in decision making. This paper develops and tests hypotheses about the influence of such evolutionarily ancient, intuitive numbers on human decisions. We demonstrate that individuals with more precise mental-number-line representations are higher in numeracy (number skills), consistent with previous research with children. Individuals with more precise representations (compared to those with less precise representations) also were more likely to choose larger, later amounts over smaller, immediate amounts, particularly with a larger proportional difference between the two monetary outcomes. In addition, they were more likely to choose an option with a larger proportional but smaller absolute difference compared to those with less precise representations. These results are consistent with intuitive number representations underlying: (a) perceived differences between numbers, (b) the extent to which proportional differences are weighed in decisions, and, ultimately, (c) the valuation of decision options. Human decision processes involving numbers important to health and financial matters may be rooted in elementary, biological processes shared with other species.
Hirst, Keith
1994-01-01
Number and geometry are the foundations upon which mathematics has been built over some 3000 years. This book is concerned with the logical foundations of number systems from integers to complex numbers. The author has chosen to develop the ideas by illustrating the techniques used throughout mathematics rather than using a self-contained logical treatise. The idea of proof has been emphasised, as has the illustration of concepts from a graphical, numerical and algebraic point of view. Having laid the foundations of the number system, the author has then turned to the analysis of infinite proc
Credential Service Provider (CSP)
Department of Veterans Affairs — Provides a VA operated Level 1 and Level 2 credential for individuals who require access to VA applications, yet cannot obtain a credential from another VA accepted...
U.S. Department of Health & Human Services — The MAX Provider Characteristics (PC) File Implementation Report describes the design, implementation, and results of the MAXPC prototype, which was based on three...
Lepton family number violation
International Nuclear Information System (INIS)
Herczeg, P.
1999-01-01
At present there is evidence from neutrino oscillation searches that the neutrinos are in fact massive particles and that they mix. If confirmed, this would imply that the conservation of LFN is not exact. Lepton family number violation (LFNV) has been searched for with impressive sensitivities in many processes involving charged leptons. The present experimental limits on some of them (those which the author shall consider here) are shown in Table 1. These stringent limits are not inconsistent with the neutrino oscillation results since, given the experimental bounds on the masses of the known neutrinos and the neutrino mass squared differences required by the oscillation results, the effects of LFNV from neutrino mixing would be too small to be seen elsewhere (see Section 2). The purpose of experiments searching for LFNV involving the charged leptons is to probe the existence of other sources of LFNV. Such sources are present in many extensions of the SM. In this lecture the author shall discuss some of the possibilities, focusing on processes that require muon beams. Other LFNV processes, such as the decays of the kaons and of the τ, provide complementary information. In the next Section he shall consider some sources of LFNV that do not require an extension of the gauge group of the SM (the added leptons or Higgs bosons may of course originate from models with extended gauge groups). In Section 3 he discusses LFNV in left-right symmetric models. In Section 4 he considers LFNV in supersymmetric models, first in R-parity conserving supersymmetric grand unified models, and then in the minimal supersymmetric standard model with R-parity violation. The last section is a brief summary of the author's conclusions
Energy Technology Data Exchange (ETDEWEB)
Jung, Hannes [DESY, Hamburg (Germany); De Roeck, Albert [CERN, Genf (Switzerland); Bartles, Jochen [Univ. Hamburg (DE). Institut fuer Theoretische Physik II] (and others)
2008-09-15
More than 100 people participated in a discussion session at the DIS08 workshop on the topic What HERA may provide. A summary of the discussion with a structured outlook and list of desirable measurements and theory calculations is given. (orig.)
International Nuclear Information System (INIS)
Jung, Hannes; De Roeck, Albert; Bartles, Jochen
2008-09-01
More than 100 people participated in a discussion session at the DIS08 workshop on the topic What HERA may provide. A summary of the discussion with a structured outlook and list of desirable measurements and theory calculations is given. (orig.)
U.S. Department of Health & Human Services — The POS file consists of two data files, one for CLIA labs and one for 18 other provider types. The file names are CLIA and OTHER. If downloading the file, note it...
International Nuclear Information System (INIS)
Coveyou, R.R.
1974-01-01
The subject of random number generation is currently controversial. Differing opinions on this subject seem to stem from implicit or explicit differences in philosophy; in particular, from differing ideas concerning the role of probability in the real world of physical processes, electronic computers, and Monte Carlo calculations. An attempt is made here to reconcile these views. The role of stochastic ideas in mathematical models is discussed. In illustration of these ideas, a mathematical model of the use of random number generators in Monte Carlo calculations is constructed. This model is used to set up criteria for the comparison and evaluation of random number generators. (U.S.)
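One concrete criterion of the kind such a model can formalize is empirical uniformity of a generator's output. A minimal sketch, pairing a textbook-style linear congruential generator with a chi-square statistic (constants and names are illustrative, not from the report):

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Minimal linear congruential generator yielding floats in [0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

def chi_square_uniformity(samples, bins=10):
    """Chi-square statistic against the uniform distribution on [0, 1).
    For a good generator it is comparable to bins - 1."""
    counts = [0] * bins
    for u in samples:
        counts[min(int(u * bins), bins - 1)] += 1
    expected = len(samples) / bins
    return sum((n - expected) ** 2 / expected for n in counts)

gen = lcg(seed=42)
samples = [next(gen) for _ in range(10000)]
stat = chi_square_uniformity(samples)
```

A statistic far above the chi-square distribution's tail for bins − 1 degrees of freedom would flag non-uniformity; of course, passing one such test says nothing about higher-dimensional structure, which is exactly why systematic evaluation criteria are needed.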
Weiss, Edwin
1998-01-01
Careful organization and clear, detailed proofs characterize this methodical, self-contained exposition of basic results of classical algebraic number theory from a relatively modern point of view. This volume presents most of the number-theoretic prerequisites for a study of either class field theory (as formulated by Artin and Tate) or the contemporary treatment of analytical questions (as found, for example, in Tate's thesis). Although concerned exclusively with algebraic number fields, this treatment features axiomatic formulations with a considerable range of applications. Modern abstract te
Cohn, Harvey
1980-01-01
""A very stimulating book ... in a class by itself."" - American Mathematical MonthlyAdvanced students, mathematicians and number theorists will welcome this stimulating treatment of advanced number theory, which approaches the complex topic of algebraic number theory from a historical standpoint, taking pains to show the reader how concepts, definitions and theories have evolved during the last two centuries. Moreover, the book abounds with numerical examples and more concrete, specific theorems than are found in most contemporary treatments of the subject.The book is divided into three parts
Crossley, John N
1987-01-01
This book presents detailed studies of the development of three kinds of number. In the first part the development of the natural numbers from Stone-Age times right up to the present day is examined not only from the point of view of pure history but also taking into account archaeological, anthropological and linguistic evidence. The dramatic change caused by the introduction of logical theories of number in the 19th century is also treated and this part ends with a non-technical account of the very latest developments in the area of Gödel's theorem. The second part is concerned with the deve
Professor Stewart's incredible numbers
Stewart, Ian
2015-01-01
Ian Stewart explores the astonishing properties of numbers from 1 to 10 to zero and infinity, including one figure that, if you wrote it out, would span the universe. He looks at every kind of number you can think of - real, imaginary, rational, irrational, positive and negative - along with several you might have thought you couldn't think of. He explains the insights of the ancient mathematicians, shows how numbers have evolved through the ages, and reveals the way numerical theory enables everyday life. Under Professor Stewart's guidance you will discover the mathematics of codes,
LeVeque, William J
1996-01-01
This excellent textbook introduces the basics of number theory, incorporating the language of abstract algebra. A knowledge of such algebraic concepts as group, ring, field, and domain is not assumed, however; all terms are defined and examples are given - making the book self-contained in this respect.The author begins with an introductory chapter on number theory and its early history. Subsequent chapters deal with unique factorization and the GCD, quadratic residues, number-theoretic functions and the distribution of primes, sums of squares, quadratic equations and quadratic fields, diopha
Kneusel, Ronald T
2015-01-01
This is a book about numbers and how those numbers are represented in and operated on by computers. It is crucial that developers understand this area because the numerical operations allowed by computers, and the limitations of those operations, especially in the area of floating point math, affect virtually everything people try to do with computers. This book aims to fill this gap by exploring, in sufficient but not overwhelming detail, just what it is that computers do with numbers. Divided into two parts, the first deals with standard representations of integers and floating point numb
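The floating-point limitations the book highlights are easy to demonstrate: in IEEE 754 double precision, 0.1 has no exact binary representation, and the error surfaces in ordinary arithmetic:

```python
import struct
from fractions import Fraction

# Binary floating point cannot represent 0.1 exactly.
total = 0.1 + 0.2
print(total == 0.3)   # False
print(total)          # 0.30000000000000004

# Bit pattern of the IEEE 754 double closest to 0.1
bits = struct.unpack('<Q', struct.pack('<d', 0.1))[0]
print(hex(bits))      # 0x3fb999999999999a

# The exact rational value actually stored for 0.1
print(Fraction(0.1))  # 3602879701896397/36028797018963968
```

The repeating `999...a` pattern in the bit string is the binary analogue of a repeating decimal: 1/10 is a repeating fraction in base 2, rounded at the 53rd significand bit.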
Sierpinski, Waclaw
1988-01-01
Since the publication of the first edition of this work, considerable progress has been made in many of the questions examined. This edition has been updated and enlarged, and the bibliography has been revised.The variety of topics covered here includes divisibility, diophantine equations, prime numbers (especially Mersenne and Fermat primes), the basic arithmetic functions, congruences, the quadratic reciprocity law, expansion of real numbers into decimal fractions, decomposition of integers into sums of powers, some other problems of the additive theory of numbers and the theory of Gaussian
Corry, Leo
2015-01-01
The world around us is saturated with numbers. They are a fundamental pillar of our modern society, and accepted and used with hardly a second thought. But how did this state of affairs come to be? In this book, Leo Corry tells the story behind the idea of number from the early days of the Pythagoreans, up until the turn of the twentieth century. He presents an overview of how numbers were handled and conceived in classical Greek mathematics, in the mathematics of Islam, in European mathematics of the middle ages and the Renaissance, during the scientific revolution, all the way through to the
Dudley, Underwood
2008-01-01
Ideal for a first course in number theory, this lively, engaging text requires only a familiarity with elementary algebra and the properties of real numbers. Author Underwood Dudley, who has written a series of popular mathematics books, maintains that the best way to learn mathematics is by solving problems. In keeping with this philosophy, the text includes nearly 1,000 exercises and problems-some computational and some classical, many original, and some with complete solutions. The opening chapters offer sound explanations of the basics of elementary number theory and develop the fundamenta
Source-Independent Quantum Random Number Generation
Cao, Zhu; Zhou, Hongyi; Yuan, Xiao; Ma, Xiongfeng
2016-01-01
Quantum random number generators can provide genuine randomness by appealing to the fundamental principles of quantum mechanics. In general, a physical generator contains two parts—a randomness source and its readout. The source is essential to the quality of the resulting random numbers; hence, it needs to be carefully calibrated and modeled to achieve information-theoretical provable randomness. However, in practice, the source is a complicated physical system, such as a light source or an atomic ensemble, and any deviations in the real-life implementation from the theoretical model may affect the randomness of the output. To close this gap, we propose a source-independent scheme for quantum random number generation in which output randomness can be certified, even when the source is uncharacterized and untrusted. In our randomness analysis, we make no assumptions about the dimension of the source. For instance, multiphoton emissions are allowed in optical implementations. Our analysis takes into account the finite-key effect with the composable security definition. In the limit of large data size, the length of the input random seed is exponentially small compared to that of the output random bit. In addition, by modifying a quantum key distribution system, we experimentally demonstrate our scheme and achieve a randomness generation rate of over 5×10^3 bit/s.
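As a classical illustration of why the source model matters, the von Neumann extractor removes a fixed bias from independent bits, but its guarantee collapses for correlated or adversarial sources — the gap that source-independent certification addresses. A sketch (not part of the paper's protocol):

```python
import random

def von_neumann_extract(bits):
    """Pair up bits; emit 0 for a (0,1) pair, 1 for (1,0), discard (0,0)/(1,1).
    Output is unbiased only if input bits are independent with a fixed bias."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

random.seed(0)
biased = [1 if random.random() < 0.8 else 0 for _ in range(100000)]  # 80/20 source
unbiased = von_neumann_extract(biased)
mean = sum(unbiased) / len(unbiased)   # close to 0.5 despite the biased source
```

With independent bits of bias p, both kept pair types occur with probability p(1 − p), which is why the outputs are balanced; correlated bits break that symmetry, and no calibration of this simple kind can certify an untrusted physical source.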
Source-Independent Quantum Random Number Generation
Directory of Open Access Journals (Sweden)
Zhu Cao
2016-02-01
Full Text Available Quantum random number generators can provide genuine randomness by appealing to the fundamental principles of quantum mechanics. In general, a physical generator contains two parts—a randomness source and its readout. The source is essential to the quality of the resulting random numbers; hence, it needs to be carefully calibrated and modeled to achieve information-theoretical provable randomness. However, in practice, the source is a complicated physical system, such as a light source or an atomic ensemble, and any deviations in the real-life implementation from the theoretical model may affect the randomness of the output. To close this gap, we propose a source-independent scheme for quantum random number generation in which output randomness can be certified, even when the source is uncharacterized and untrusted. In our randomness analysis, we make no assumptions about the dimension of the source. For instance, multiphoton emissions are allowed in optical implementations. Our analysis takes into account the finite-key effect with the composable security definition. In the limit of large data size, the length of the input random seed is exponentially small compared to that of the output random bit. In addition, by modifying a quantum key distribution system, we experimentally demonstrate our scheme and achieve a randomness generation rate of over 5×10^{3} bit/s.
African Journals Online (AJOL)
OLUWOLE
Agro-Science Journal of Tropical Agriculture, Food, Environment and Extension. Volume 9 Number 1 ... of persistent dumping of cheap subsidized food imports from developed ... independence of the inefficiency effects in the two estimation ...
International Development Research Centre (IDRC) Digital Library (Canada)
Operating a Demographic Surveillance System (DSS) like this one requires a blend of high-tech number-crunching ability and .... views follow a standardized format that takes several ... general levels of health and to the use of health services.
Solar Indices - Sunspot Numbers
National Oceanic and Atmospheric Administration, Department of Commerce — Collection includes a variety of indices related to solar activity contributed by a number of national and private solar observatories located worldwide. This...
Schwartz, Richard Evan
2014-01-01
In the American Mathematical Society's first-ever book for kids (and kids at heart), mathematician and author Richard Evan Schwartz leads math lovers of all ages on an innovative and strikingly illustrated journey through the infinite number system. By means of engaging, imaginative visuals and endearing narration, Schwartz manages the monumental task of presenting the complex concept of Big Numbers in fresh and relatable ways. The book begins with small, easily observable numbers before building up to truly gigantic ones, like a nonillion, a tredecillion, a googol, and even ones too huge for names! Any person, regardless of age, can benefit from reading this book. Readers will find themselves returning to its pages for a very long time, perpetually learning from and growing with the narrative as their knowledge deepens. Really Big Numbers is a wonderful enrichment for any math education program and is enthusiastically recommended to every teacher, parent and grandparent, student, child, or other individual i...
Building Service Provider Capabilities
DEFF Research Database (Denmark)
Brandl, Kristin; Jaura, Manya; Ørberg Jensen, Peter D.
2015-01-01
In this paper we study whether and how the interaction between clients and the service providers contributes to the development of capabilities in service provider firms. In situations where such a contribution occurs, we analyze how different types of activities in the production process of the services, such as sequential or reciprocal task activities, influence the development of different types of capabilities. We study five cases of offshore-outsourced knowledge-intensive business services that are distinguished according to their reciprocal or sequential task activities in their production process. We find that clients influence the development of human capital capabilities and management capabilities in reciprocally produced services, while in sequentially produced services clients influence the development of organizational capital capabilities and management capital capabilities.
Indian Academy of Sciences (India)
One could endlessly churn out congruent numbers following the method in Box 1 without being certain when a given number n (or n × m², for some integer m) will appear on the list. Continuing in this way would exhaust one's computing resources, not to mention one's patience! Also, this procedure is of no avail if n is not ...
Directory of Open Access Journals (Sweden)
Saveliev Peter
2005-01-01
Full Text Available Suppose , are manifolds, are maps. The well-known coincidence problem studies the coincidence set . The number is called the codimension of the problem. More general is the preimage problem. For a map and a submanifold of , it studies the preimage set , and the codimension is . In case of codimension , the classical Nielsen number is a lower estimate of the number of points in changing under homotopies of , and for an arbitrary codimension, of the number of components of . We extend this theory to take into account other topological characteristics of . The goal is to find a "lower estimate" of the bordism group of . The answer is the Nielsen group defined as follows. In the classical definition, the Nielsen equivalence of points of based on paths is replaced with an equivalence of singular submanifolds of based on bordisms. We let , then the Nielsen group of order is the part of preserved under homotopies of . The Nielsen number of order is the rank of this group (then . These numbers are new obstructions to removability of coincidences and preimages. Some examples and computations are provided.
Large Scale Self-Organizing Information Distribution System
National Research Council Canada - National Science Library
Low, Steven
2005-01-01
This project investigates issues in "large-scale" networks. Here "large-scale" refers to networks with large number of high capacity nodes and transmission links, and shared by a large number of users...
Number of core samples: Mean concentrations and confidence intervals
International Nuclear Information System (INIS)
Jensen, L.; Cromar, R.D.; Wilmarth, S.R.; Heasler, P.G.
1995-01-01
This document provides estimates of how well the mean concentrations of analytes are known as a function of the number of core samples, composite samples, and replicate analyses. The estimates are based upon core composite data from nine recently sampled single-shell tanks. The results can be used when determining the number of core samples needed to 'characterize' the waste from similar single-shell tanks. A standard way of expressing uncertainty in the estimate of a mean is with a 95% confidence interval (CI). The authors investigate how the width of a 95% CI on the mean concentration decreases as the number of observations increases. Specifically, the tables and figures show how the relative half-width (RHW) of a 95% CI decreases as the number of core samples increases. The RHW of a CI is a unitless measure of uncertainty. The general conclusions are as follows: (1) the RHW decreases dramatically as the number of core samples is increased; the decrease is much smaller when the number of composited samples or the number of replicate analyses is increased; (2) if the mean concentration of an analyte needs to be estimated with a small RHW, then a large number of core samples is required. The estimated numbers of core samples given in the tables and figures were determined by specifying different sizes of the RHW. Four nominal sizes were examined: 10%, 25%, 50%, and 100% of the observed mean concentration. For a majority of analytes, the number of core samples required to achieve an accuracy within 10% of the mean concentration is extremely large. In many cases, however, two or three core samples are sufficient to achieve a RHW of approximately 50 to 100%. Because many of the analytes in the data have small concentrations, this level of accuracy may be satisfactory for some applications.
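As a rough illustration of how the RHW shrinks with the number of core samples, here is a minimal sketch. It assumes approximately normal measurement error and uses the large-sample critical value 1.96 rather than the Student-t quantile a careful small-n analysis would use; the mean and standard deviation figures are invented for illustration.

```python
import math

def relative_half_width(sample_mean, sample_sd, n, z=1.96):
    """Relative half-width of an approximate 95% confidence interval on the
    mean: half-width divided by the mean (a unitless measure of uncertainty).
    z = 1.96 is the normal-approximation critical value; for very small n a
    Student-t quantile would give a somewhat wider interval."""
    half_width = z * sample_sd / math.sqrt(n)
    return half_width / sample_mean

# Hypothetical analyte: mean concentration 10, standard deviation 8.
for n in (2, 3, 10, 40):
    print(n, round(relative_half_width(10.0, 8.0, n), 2))
```

Note the square-root behavior: quadrupling the number of samples only halves the RHW, which is why reaching 10% accuracy requires so many more cores than reaching 50%.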
Providing Compassion through Flow
Directory of Open Access Journals (Sweden)
Lydia Royeen
2015-07-01
Full Text Available Meg Kral, MS, OTR/L, CLT, is the cover artist for the Summer 2015 issue of The Open Journal of Occupational Therapy. Her untitled piece of art is an oil painting and is a re-creation of a photograph taken while on vacation. Meg is currently supervisor of outpatient services at Rush University Medical Center. She is lymphedema certified and has a specific interest in breast cancer lymphedema. Art and occupational therapy serve similar purposes for Meg: both provide a sense of flow. She values the outcomes, whether it is a piece of art or improved functional status
Quantum random number generator
Soubusta, Jan; Haderka, Ondrej; Hendrych, Martin
2001-03-01
Since the reflection or transmission of a quantum particle at a beamsplitter is an inherently random quantum process, a device built on this principle suffers from the drawbacks of neither pseudo-random computer generators nor classical noise sources. Nevertheless, a number of physical conditions necessary for high-quality random number generation must be satisfied. Fortunately, in a quantum optics realization they can be well controlled. We present a simple random number generator based on the division of weak light pulses at a beamsplitter. The randomness of the generated bit stream is supported by passing the data through a series of 15 statistical tests. The device generates at a rate of 109.7 kbit/s.
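A toy simulation can illustrate the principle, though not the optical hardware: each weak pulse is transmitted or reflected with some probability, and a von Neumann post-processing step (one common way to remove bias from an imperfect splitting ratio) cleans up the raw stream. The 60:40 splitting ratio below is an invented example.

```python
import random

def beamsplitter_bits(n, p_transmit=0.5, rng=random.random):
    """Simulate n raw bits: 1 if the photon is transmitted, 0 if reflected."""
    return [1 if rng() < p_transmit else 0 for _ in range(n)]

def von_neumann_debias(bits):
    """Pairwise debiasing: 01 -> 0, 10 -> 1, and 00/11 pairs are discarded.
    The output is unbiased even when the raw bits are not 50:50."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

random.seed(1)
raw = beamsplitter_bits(10000, p_transmit=0.6)   # a deliberately biased splitter
clean = von_neumann_debias(raw)
```

The price of debiasing is throughput: on average only p(1-p) of the raw pairs survive, which is one reason real devices aim for a splitting ratio as close to 50:50 as possible.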
Alizée Dauvergne
2010-01-01
What makes the LHC the biggest particle accelerator in the world? Here are some of the numbers that characterise the LHC, and their equivalents in terms that are easier for us to imagine. Circumference: ~27 km. Distance covered by the beam in 10 hours: ~10 billion km, a round trip to Neptune. Number of times a single proton travels around the ring each second: 11 245. Speed of protons first entering the LHC: 299 732 500 m/s, 99.9998% of the speed of light. Speed of protons when they collide: 299 789 760 m/s, 99.9999991% of the speed of light. Collision temperature: ~10^16 °C, ove...
Energy providers: customer expectations
International Nuclear Information System (INIS)
Pridham, N.F.
1997-01-01
The deregulation of the gas and electric power industries, and how it will impact on customer service and pricing rates was discussed. This paper described the present situation, reviewed core competencies, and outlined future expectations. The bottom line is that major energy consumers are very conscious of energy costs and go to great lengths to keep them under control. At the same time, solutions proposed to reduce energy costs must benefit all classes of consumers, be they industrial, commercial, institutional or residential. Deregulation and competition at an accelerated pace is the most likely answer. This may be forced by external forces such as foreign energy providers who are eager to enter the Canadian energy market. It is also likely that the competition and convergence between gas and electricity is just the beginning, and may well be overshadowed by other deregulated industries as they determine their core competencies
Samuel, Pierre
2008-01-01
Algebraic number theory introduces students not only to new algebraic notions but also to related concepts: groups, rings, fields, ideals, quotient rings and quotient fields, homomorphisms and isomorphisms, modules, and vector spaces. Author Pierre Samuel notes that students benefit from their studies of algebraic number theory by encountering many concepts fundamental to other branches of mathematics - algebraic geometry, in particular.This book assumes a knowledge of basic algebra but supplements its teachings with brief, clear explanations of integrality, algebraic extensions of fields, Gal
Iwaniec, Henryk
2004-01-01
Analytic Number Theory distinguishes itself by the variety of tools it uses to establish results, many of which belong to the mainstream of arithmetic. One of the main attractions of analytic number theory is the vast diversity of concepts and methods it includes. The main goal of the book is to show the scope of the theory, both in classical and modern directions, and to exhibit its wealth and prospects, its beautiful theorems and powerful techniques. The book is written with graduate students in mind, and the authors tried to balance between clarity, completeness, and generality. The exercis
CONFUSION WITH TELEPHONE NUMBERS
Telecom Service
2002-01-01
The area code is now required for all telephone calls within Switzerland. Unfortunately this is causing some confusion. CERN has received complaints that incoming calls intended for CERN mobile phones are being directed to private subscribers. This is caused by mistakenly dialing the WRONG code (e.g. 022) in front of the mobile number. In order to avoid these problems, please inform your correspondents that the correct numbers are: 079 201 XXXX from Switzerland; 0041 79 201 XXXX from other countries. Telecom Service
An introduction to Catalan numbers
Roman, Steven
2015-01-01
This textbook provides an introduction to the Catalan numbers and their remarkable properties, along with their various applications in combinatorics. Intended to be accessible to students new to the subject, the book begins with more elementary topics before progressing to more mathematically sophisticated topics. Each chapter focuses on a specific combinatorial object counted by these numbers, including paths, trees, tilings of a staircase, null sums in Zn+1, interval structures, partitions, permutations, semiorders, and more. Exercises are included at the end of book, along with hints and solutions, to help students obtain a better grasp of the material. The text is ideal for undergraduate students studying combinatorics, but will also appeal to anyone with a mathematical background who has an interest in learning about the Catalan numbers. “Roman does an admirable job of providing an introduction to Catalan numbers of a different nature from the previous ones. He has made an excellent choice o...
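The closed form C_n = binomial(2n, n) / (n + 1) makes the Catalan numbers easy to compute; a minimal sketch:

```python
from math import comb

def catalan(n):
    """n-th Catalan number via the closed form C_n = C(2n, n) / (n + 1).
    The division is always exact, so integer floor division is safe."""
    return comb(2 * n, n) // (n + 1)

# The same sequence counts lattice paths, binary trees, staircase tilings, ...
print([catalan(n) for n in range(8)])  # [1, 1, 2, 5, 14, 42, 132, 429]
```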
The large-s field-reversed configuration experiment
International Nuclear Information System (INIS)
Hoffman, A.L.; Carey, L.N.; Crawford, E.A.; Harding, D.G.; DeHart, T.E.; McDonald, K.F.; McNeil, J.L.; Milroy, R.D.; Slough, J.T.; Maqueda, R.; Wurden, G.A.
1993-01-01
The Large-s Experiment (LSX) was built to study the formation and equilibrium properties of field-reversed configurations (FRCs) as the scale size increases. The dynamic, field-reversed theta-pinch method of FRC creation produces axial and azimuthal deformations and makes formation difficult, especially in large devices with large s (number of internal gyroradii) where it is difficult to achieve initial plasma uniformity. However, with the proper technique, these formation distortions can be minimized and are then observed to decay with time. This suggests that the basic stability and robustness of FRCs formed, and in some cases translated, in smaller devices may also characterize larger FRCs. Elaborate formation controls were included on LSX to provide the initial uniformity and symmetry necessary to minimize formation disturbances, and stable FRCs could be formed up to the design goal of s = 8. For s ≤ 4, the formation distortions decayed away completely, resulting in symmetric equilibrium FRCs with record confinement times up to 0.5 ms, agreeing with previous empirical scaling laws (τ∝sR). Above s = 4, reasonably long-lived (up to 0.3 ms) configurations could still be formed, but the initial formation distortions were so large that they never completely decayed away, and the equilibrium confinement was degraded from the empirical expectations. The LSX was only operational for 1 yr, and it is not known whether s = 4 represents a fundamental limit for good confinement in simple (no ion beam stabilization) FRCs or whether it simply reflects a limit of present formation technology. Ideally, s could be increased through flux buildup from neutral beams. Since the addition of kinetic or beam ions will probably be desirable for heating, sustainment, and further stabilization of magnetohydrodynamic modes at reactor-level s values, neutral beam injection is the next logical step in FRC development. 24 refs., 21 figs., 2 tabs
Fourier analysis in combinatorial number theory
International Nuclear Information System (INIS)
Shkredov, Il'ya D
2010-01-01
In this survey applications of harmonic analysis to combinatorial number theory are considered. Discussion topics include classical problems of additive combinatorics, colouring problems, higher-order Fourier analysis, theorems about sets of large trigonometric sums, results on estimates for trigonometric sums over subgroups, and the connection between combinatorial and analytic number theory. Bibliography: 162 titles.
Fourier analysis in combinatorial number theory
Energy Technology Data Exchange (ETDEWEB)
Shkredov, Il' ya D [M. V. Lomonosov Moscow State University, Moscow (Russian Federation)
2010-09-16
In this survey applications of harmonic analysis to combinatorial number theory are considered. Discussion topics include classical problems of additive combinatorics, colouring problems, higher-order Fourier analysis, theorems about sets of large trigonometric sums, results on estimates for trigonometric sums over subgroups, and the connection between combinatorial and analytic number theory. Bibliography: 162 titles.
Chambers, David W
2008-01-01
This essay presents an alternative to the traditional view that ethics means judging individual behavior against standards of right and wrong. Instead, ethics is understood as creating ethical communities through the promises we make to each other. The "aim" of ethics is to demonstrate in our own behavior a credible willingness to work to create a mutually better world. The "game" of ethics then becomes searching for strategies that overlap with others' strategies so that we are all better for intending to act on a basis of reciprocal trust. This is a difficult process because we have partial, simultaneous, shifting, and inconsistent views of the world. But despite the reality that we each "frame" ethics in personal terms, it is still possible to create sufficient common understanding to prosper together. Large ethics does not make it a prerequisite for moral behavior that everyone adheres to a universally agreed set of ethical principles; all that is necessary is sufficient overlap in commitment to searching for better alternatives.
Large Retailers’ Financial Services
Risso, Mario
2010-01-01
Over the last few years, large retailers offering financial services have grown considerably in the financial services sector. Retailers are increasing the breadth and complexity of the financial services they offer. Large retail companies provide financial services to their customers following different strategic paths. The provision of financial services in retailers' offerings is implemented in several different ways related to the strategies, the structures and the degree of financial know...
African Journals Online (AJOL)
OLUWOLE
with large succulent leaves while the male plants produce leaves that are scrawny, small and less attractive. Farmers therefore prefer the females to the males. Unfortunately, it is very difficult to differentiate the males from the females until the plants begin to flower. At this stage of development, the male plants begin to show.
Wetherell, Chris
2017-01-01
This is an edited extract from the keynote address given by Dr. Chris Wetherell at the 26th Biennial Conference of the Australian Association of Mathematics Teachers Inc. The author investigates the surprisingly rich structure that exists within a simple arrangement of numbers: the times tables.
Bell, Eric Temple
1991-01-01
From one of the foremost interpreters for lay readers of the history and meaning of mathematics: a stimulating account of the origins of mathematical thought and the development of numerical theory. It probes the work of Pythagoras, Galileo, Berkeley, Einstein, and others, exploring how "number magic" has influenced religion, philosophy, science, and mathematics
International Nuclear Information System (INIS)
Khan, T.A.; Baum, J.W.; Beckman, M.C.
1993-10-01
This document contains information dealing with the lessons learned from the experience of nuclear plants. In this issue the authors tried to avoid the 'tyranny' of numbers and concentrated on the main lessons learned. Topics include: filtration devices for air pollution abatement, crack repair and inspection, and remote handling equipment
Uniform random number generators
Farr, W. R.
1971-01-01
Methods are presented for the generation of random numbers with uniform and normal distributions. Subprogram listings of Fortran generators for the Univac 1108, SDS 930, and CDC 3200 digital computers are also included. The generators are of the mixed multiplicative type, and the mathematical method employed is that of Marsaglia and Bray.
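A minimal sketch of such a generator pair is below. The mixed (multiplicative-plus-additive) congruential recurrence is the one the summary names; the multiplier and modulus here are illustrative stand-ins, not the constants of the cited Univac/SDS/CDC listings, and the polar transform shown is the Marsaglia-Bray method for producing normal deviates from uniform ones.

```python
import math

def lcg(seed, a=1103515245, c=12345, m=2**31):
    """Mixed congruential generator: x_{k+1} = (a*x_k + c) mod m,
    scaled to uniform floats in [0, 1). Constants are illustrative."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

def polar_normals(uniform):
    """Marsaglia-Bray polar method: map pairs of uniform deviates to pairs
    of independent standard-normal deviates, rejecting points that fall
    outside the unit disc."""
    while True:
        u = 2.0 * next(uniform) - 1.0
        v = 2.0 * next(uniform) - 1.0
        s = u * u + v * v
        if 0.0 < s < 1.0:
            f = math.sqrt(-2.0 * math.log(s) / s)
            yield u * f
            yield v * f

gen = lcg(seed=42)
uniforms = [next(gen) for _ in range(1000)]        # uniform deviates
normals = polar_normals(lcg(seed=7))
sample = [next(normals) for _ in range(10000)]     # normal deviates
```

The rejection step accepts a fraction π/4 of the pairs, so roughly 27% of the uniform input is discarded in exchange for avoiding trigonometric calls.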
International Nuclear Information System (INIS)
1994-01-01
The key energy figures give statistical data on the production and consumption of, and foreign trade in, each form of energy, worldwide and in France. A chapter is devoted to the environment and gives quantitative data on the pollutant emissions associated with energy use
Trudgian, Timothy
2009-01-01
One of the difficulties in any teaching of mathematics is to bridge the divide between the abstract and the intuitive. Throughout school one encounters increasingly abstract notions, which are more and more difficult to relate to everyday experiences. This article examines a familiar approach to thinking about negative numbers, that is an…
Indian Academy of Sciences (India)
Typical complexity numbers: say 1000 tones, 100 users, transmission every 10 msec. Full crosstalk cancellation would require a matrix multiplication of order 100×100 for all the tones: 1000×100×100×100 operations every second for the ...
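The arithmetic behind that operation count can be checked directly; this sketch assumes the 100×100 cancellation multiply is applied per tone, once per 10 ms transmission:

```python
# Back-of-envelope count for full crosstalk cancellation:
# a 100x100 cancellation multiply per tone, 1000 tones,
# one transmission every 10 ms (100 transmissions per second).
users = 100
tones = 1000
transmissions_per_second = 100        # one every 10 ms

ops_per_tone = users * users          # 100 * 100 multiply-accumulates
ops_per_second = ops_per_tone * tones * transmissions_per_second
print(ops_per_second)                 # 1_000_000_000, i.e. ~10^9 ops/s
```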
Indian Academy of Sciences (India)
IAS Admin
improved by Selberg [4] in 1941 who showed that a pos- ... be seen by entries of his first letter to G H Hardy in ... tary in the technical sense of the word, employed com- ..... III: On the expression of a number as a sum of primes, Acta Math.,.
Energy Technology Data Exchange (ETDEWEB)
Jung, Hannes; /DESY; De Roeck, Albert; /CERN; Bartels, Jochen; /Hamburg U., Inst. Theor. Phys. II; Behnke, Olaf; Blumlein, Johannes; /DESY; Brodsky, Stanley; /SLAC /Durham U., IPPP; Cooper-Sarkar, Amanda; /Oxford U.; Deak, Michal; /DESY; Devenish, Robin; /Oxford U.; Diehl, Markus; /DESY; Gehrmann, Thomas; /Zurich U.; Grindhammer, Guenter; /Munich, Max Planck Inst.; Gustafson, Gosta; /CERN /Lund U., Dept. Theor. Phys.; Khoze, Valery; /Durham U., IPPP; Knutsson, Albert; /DESY; Klein, Max; /Liverpool U.; Krauss, Frank; /Durham U., IPPP; Kutak, Krzysztof; /DESY; Laenen, Eric; /NIKHEF, Amsterdam; Lonnblad, Leif; /Lund U., Dept. Theor. Phys.; Motyka, Leszek; /Hamburg U., Inst. Theor. Phys. II /Birmingham U. /Southern Methodist U. /DESY /Piemonte Orientale U., Novara /CERN /Paris, LPTHE /Hamburg U. /Penn State U.
2011-11-10
More than 100 people participated in a discussion session at the DIS08 workshop on the topic 'What HERA may provide'. A summary of the discussion, with a structured outlook and a list of desirable measurements and theory calculations, is given. The HERA accelerator and the HERA experiments H1, HERMES and ZEUS stopped running at the end of June 2007, after 15 years of very successful operation since the first collisions in 1992. A total luminosity of ~500 pb^-1 was accumulated by each of the collider experiments H1 and ZEUS. Over the years the increasingly better understood and upgraded detectors and HERA accelerator contributed significantly to this success. The physics program remains in full swing, and plenty of new results were presented at DIS08 which are approaching the anticipated final precision, fulfilling and exceeding the physics plans and the previsions of the upgrade program. Most of the analyses presented at DIS08 were still based on the so-called HERA I data sample, i.e. data taken until 2000, before the shutdown for the luminosity upgrade. This sample has an integrated luminosity of ~100 pb^-1, and the four times larger statistics sample from HERA II is still in the process of being analyzed.
Altermatic number of categorical product of graphs
DEFF Research Database (Denmark)
Alishahi, Meysam; Hajiabolhassan, Hossein
2018-01-01
In this paper, we prove some relaxations of Hedetniemi's conjecture in terms of altermatic number and strong altermatic number of graphs, two combinatorial parameters introduced by the present authors Alishahi and Hajiabolhassan (2015) providing two sharp lower bounds for the chromatic number of ...
Graphing Powers and Roots of Complex Numbers.
Embse, Charles Vonder
1993-01-01
Using De Moivre's theorem and a parametric graphing utility, examines powers and roots of complex numbers and allows students to establish connections between the visual and numerical representations of complex numbers. Provides a program to numerically verify the roots of complex numbers. (MDH)
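De Moivre's theorem gives all n roots of a complex number directly; here is a minimal numerical sketch using Python's cmath (not the parametric graphing program the article describes):

```python
import cmath

def nth_roots(z, n):
    """All n-th roots of z via De Moivre's theorem:
    z = r(cos t + i sin t)  =>  roots are r^(1/n) * cis((t + 2*pi*k)/n),
    for k = 0, 1, ..., n-1."""
    r, theta = cmath.polar(z)
    return [cmath.rect(r ** (1.0 / n), (theta + 2 * cmath.pi * k) / n)
            for k in range(n)]

roots = nth_roots(8 + 0j, 3)                      # cube roots of 8
check = [abs(w ** 3 - 8) < 1e-9 for w in roots]   # numerically verify each root
```

Plotting the roots shows they sit at the vertices of a regular n-gon on a circle of radius r^(1/n), which is the visual connection the article exploits.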
The algebras of large N matrix mechanics
Energy Technology Data Exchange (ETDEWEB)
Halpern, M.B.; Schwartz, C.
1999-09-16
Extending early work, we formulate the large N matrix mechanics of general bosonic, fermionic and supersymmetric matrix models, including Matrix theory: The Hamiltonian framework of large N matrix mechanics provides a natural setting in which to study the algebras of the large N limit, including (reduced) Lie algebras, (reduced) supersymmetry algebras and free algebras. We find in particular a broad array of new free algebras which we call symmetric Cuntz algebras, interacting symmetric Cuntz algebras, symmetric Bose/Fermi/Cuntz algebras and symmetric Cuntz superalgebras, and we discuss the role of these algebras in solving the large N theory. Most important, the interacting Cuntz algebras are associated to a set of new (hidden!) local quantities which are generically conserved only at large N. A number of other new large N phenomena are also observed, including the intrinsic nonlocality of the (reduced) trace class operators of the theory and a closely related large N field identification phenomenon which is associated to another set (this time nonlocal) of new conserved quantities at large N.
5000 sustainable workplaces - Wood energy provides work
International Nuclear Information System (INIS)
Keel, A.
2009-01-01
This article presents the results of a study made by the Swiss Wood Energy Association on the regional and national added value resulting from large wood-fired installations in Switzerland. The number of workplaces created by these installations is also noted. Wood energy is quoted as not only being a way of using forest wastes but also as being a creator of employment. Large wood-fired heating installations are commented on and efforts to promote this type of energy supply even further are discussed. The study indicates which professions benefit from the use of wood energy and quantifies the number of workplaces per megawatt of installed power that result.
1985-02-01
of the blade. The Darrieus VAWT has more complex aerodynamics. This type of wind turbine produces power as a result of the tangential thrust as... [Figure 78: Wind Turbine Configurations. (a) Horizontal Axis Propeller-Type; (b) Vertical Axis Darrieus-Type] [Contents: 5.1 Sailplanes; 5.2 Wind Turbines; 6. Concluding Remarks; 7. Recommendations for Future Research; References; Figures] LOW REYNOLDS NUMBER
Threatened corals provide underexplored microbial habitats.
Directory of Open Access Journals (Sweden)
Shinichi Sunagawa
2010-03-01
Full Text Available Contemporary in-depth sequencing of environmental samples has provided novel insights into microbial community structures, revealing that their diversity had been previously underestimated. Communities in marine environments are commonly composed of a few dominant taxa and a high number of taxonomically diverse, low-abundance organisms. However, studying the roles and genomic information of these "rare" organisms remains challenging, because little is known about their ecological niches and the environmental conditions to which they respond. Given the current threat to coral reef ecosystems, we investigated the potential of corals to provide highly specialized habitats for bacterial taxa including those that are rarely detected or absent in surrounding reef waters. The analysis of more than 350,000 small subunit ribosomal RNA (16S rRNA sequence tags and almost 2,000 nearly full-length 16S rRNA gene sequences revealed that rare seawater biosphere members are highly abundant or even dominant in diverse Caribbean corals. Closely related corals (in the same genus/family harbored similar bacterial communities. At higher taxonomic levels, however, the similarities of these communities did not correlate with the phylogenetic relationships among corals, opening novel questions about the evolutionary stability of coral-microbial associations. Large proportions of OTUs (28.7-49.1% were unique to the coral species of origin. Analysis of the most dominant ribotypes suggests that many uncovered bacterial taxa exist in coral habitats and await future exploration. Our results indicate that coral species, and by extension other animal hosts, act as specialized habitats of otherwise rare microbes in marine ecosystems. Here, deep sequencing provided insights into coral microbiota at an unparalleled resolution and revealed that corals harbor many bacterial taxa previously not known. Given that two of the coral species investigated are listed as threatened under
Workflow management in large distributed systems
International Nuclear Information System (INIS)
Legrand, I; Newman, H; Voicu, R; Dobre, C; Grigoras, C
2011-01-01
The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.
Workflow management in large distributed systems
Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.
2011-12-01
The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.
LHCb: Managing Large Data Productions in LHCb
Tsaregorodtsev, A
2009-01-01
LHC experiments are producing very large volumes of data, either accumulated from the detectors or generated via Monte-Carlo modeling. The data should be processed as quickly as possible to provide users with the input for their analysis. Processing multiple hundreds of terabytes of data necessitates the generation, submission and tracking of a huge number of grid jobs running all over the Computing Grid. Manipulation of these large and complex workloads is impossible without powerful production management tools. In LHCb, the DIRAC Production Management System (PMS) is used to accomplish this task. It enables production managers and end-users to deal with all kinds of data generation, processing and storage. Application workflow tools allow jobs to be defined as complex sequences of elementary application steps expressed as Directed Acyclic Graphs. Specialized databases and a number of dedicated software agents ensure automated, data-driven job creation and submission. The productions are accomplished by thorough ...
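The ordering constraint encoded by such a Directed Acyclic Graph can be resolved with a standard topological sort. The sketch below (Kahn's algorithm, with invented step names) illustrates the idea only, not DIRAC's actual implementation:

```python
from collections import deque

def topological_order(steps, deps):
    """Order workflow steps so every step runs after its prerequisites
    (Kahn's algorithm). `deps` maps step -> set of prerequisite steps."""
    indegree = {s: len(deps.get(s, ())) for s in steps}
    children = {s: [] for s in steps}
    for s, prereqs in deps.items():
        for p in prereqs:
            children[p].append(s)
    ready = deque(s for s in steps if indegree[s] == 0)
    order = []
    while ready:
        s = ready.popleft()
        order.append(s)
        for c in children[s]:
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    if len(order) != len(steps):
        raise ValueError("cycle detected: workflow is not a DAG")
    return order

# Hypothetical production chain: simulate -> digitize -> reconstruct -> merge
steps = ["simulate", "digitize", "reconstruct", "merge"]
deps = {"digitize": {"simulate"}, "reconstruct": {"digitize"},
        "merge": {"reconstruct"}}
```

Steps whose indegree reaches zero at the same time are independent and can be submitted to the grid in parallel, which is what makes the DAG representation useful for scheduling.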
LeVeque, William J
2002-01-01
Classic two-part work now available in a single volume assumes no prior theoretical knowledge on reader's part and develops the subject fully. Volume I is a suitable first course text for advanced undergraduate and beginning graduate students. Volume II requires a much higher level of mathematical maturity, including a working knowledge of the theory of analytic functions. Contents range from chapters on binary quadratic forms to the Thue-Siegel-Roth Theorem and the Prime Number Theorem. Includes numerous problems and hints for their solutions. 1956 edition. Supplementary Reading. List of Symb
DEFF Research Database (Denmark)
Wanscher, Jørgen Bundgaard; Sørensen, Majken Vildrik
2006-01-01
Random numbers are used for a great variety of applications in almost any field of computer and economic sciences today. Examples ranges from stock market forecasting in economics, through stochastic traffic modelling in operations research to photon and ray tracing in graphics. The construction...... distributions into others with most of the required characteristics. In essence, a uniform sequence which is transformed into a new sequence with the required distribution. The subject of this article is to consider the well known highly uniform Halton sequence and modifications to it. The intent is to generate...
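Each coordinate of a Halton point is a van der Corput value, obtained by reflecting the digits of the index in a prime base about the radix point; a minimal sketch:

```python
def halton(index, base):
    """index-th element (1-based) of the van der Corput sequence in `base`.
    Pairing coordinates generated with distinct prime bases gives the
    multi-dimensional Halton sequence."""
    result, f = 0.0, 1.0
    i = index
    while i > 0:
        f /= base              # next digit weight: 1/base, 1/base^2, ...
        result += f * (i % base)
        i //= base
    return result

# First few points of the 2-D Halton sequence (bases 2 and 3):
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 6)]
```

Unlike pseudo-random draws, these points fill the unit square with low discrepancy, which is why such sequences are popular for the simulation applications the abstract lists.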
A course in mathematical statistics and large sample theory
Bhattacharya, Rabi; Patrangenaru, Victor
2016-01-01
This graduate-level textbook is primarily aimed at graduate students of statistics, mathematics, science, and engineering who have had an undergraduate course in statistics, an upper division course in analysis, and some acquaintance with measure theoretic probability. It provides a rigorous presentation of the core of mathematical statistics. Part I of this book constitutes a one-semester course on basic parametric mathematical statistics. Part II deals with the large sample theory of statistics — parametric and nonparametric, and its contents may be covered in one semester as well. Part III provides brief accounts of a number of topics of current interest for practitioners and other disciplines whose work involves statistical methods. Large Sample theory with many worked examples, numerical calculations, and simulations to illustrate theory Appendices provide ready access to a number of standard results, with many proofs Solutions given to a number of selected exercises from Part I Part II exercises with ...
Asymptotic numbers, asymptotic functions and distributions
International Nuclear Information System (INIS)
Todorov, T.D.
1979-07-01
The asymptotic functions are a new type of generalized functions. But they are not functionals on some space of test functions, as the distributions of Schwartz are. They are mappings of the set denoted by A into A, where A is the set of the asymptotic numbers introduced by Christov. For its part, A is a totally ordered set of generalized numbers including the system of real numbers R as well as infinitesimals and infinitely large numbers. Any two asymptotic functions can be multiplied. On the other hand, the distributions have realizations as asymptotic functions in a certain sense. (author)
Summing large-N towers in colour flow evolution
International Nuclear Information System (INIS)
Plaetzer, Simon
2013-12-01
We consider soft gluon evolution in the colour flow basis. We give explicit expressions for the colour structure of the (one-loop) soft anomalous dimension matrix for an arbitrary number of partons, and show how the successive exponentiation of classes of large-N contributions can be achieved to provide a systematic expansion of the evolution in terms of colour-suppressed contributions.
Leyland, P.; Lenstra, A.K.; Dodson, B.; Muffett, A.; Wagstaff, S.; Fieker, C.; Kohel, D.R.
2002-01-01
We report the factorization of a 135-digit integer by the triple-large-prime variation of the multiple polynomial quadratic sieve. Previous workers [6][10] had suggested that using more than two large primes would be counterproductive, because of the greatly increased number of false reports from
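The large-prime idea behind this variation can be illustrated with a toy sketch (hypothetical names; not the authors' implementation). Two partial relations x_i^2 ≡ s_i · p (mod N) that share the same large prime p multiply into one full relation whose non-square part s_1·s_2 is smooth, since p^2 is a perfect square; with three large primes the matching generalizes from pairs to cycles in a graph of primes:

```python
from collections import defaultdict

def combine_partials(partials):
    """Combine single-large-prime partial relations into full relations.

    Each partial is (x, smooth, p), meaning x^2 = smooth * p (mod N) with
    `smooth` factoring over the factor base. Two partials sharing p give
    (x1 * x2)^2 = smooth1 * smooth2 * p^2 (mod N), whose non-square part
    is smooth, so it can join the linear-algebra stage like a full relation.
    """
    by_prime = defaultdict(list)
    for x, smooth, p in partials:
        by_prime[p].append((x, smooth))
    fulls = []
    for p, group in by_prime.items():
        # chain consecutive partials that share the large prime p
        for (x1, s1), (x2, s2) in zip(group, group[1:]):
            fulls.append((x1 * x2, s1 * s2 * p * p))
    return fulls

# Two partials share the large prime 11; the third (prime 13) stays unmatched.
fulls = combine_partials([(3, 2, 11), (5, 6, 11), (7, 10, 13)])
# -> [(15, 1452)]   since 2 * 6 * 11**2 = 1452
```

The "false reports" the abstract refers to are candidate relations whose cofactor fails to split into the expected number of large primes; the paper's contribution is showing that, despite the extra factoring work, three large primes still pay off at this scale.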
Systematic control of large computer programs
International Nuclear Information System (INIS)
Goedbloed, J.P.; Klieb, L.
1986-07-01
A package of CCL, UPDATE, and FORTRAN procedures is described which facilitates the systematic control and development of large scientific computer programs. The package provides a general tool box for this purpose which contains many conveniences for the systematic administration of files, editing, reformatting of line printer output files, etc. In addition, a small number of procedures is devoted to the problem of structured development of a large computer program which is used by a group of scientists. The essence of the method is contained in three procedures N, R, and X for the creation of a new UPDATE program library, its revision, and its execution, respectively, and a procedure REVISE which provides a joint editor-UPDATE session combining the advantages of the two systems, viz. speed and rigor. (Auth.)
Signals of lepton number violation
Panella, O; Srivastava, Y N
1999-01-01
The production of like-sign dileptons (LSD) in the high-energy lepton-number-violating (ΔL = +2) reaction pp → 2 jets + l⁺l⁺ (l = e, μ, τ), of interest for the experiments to be performed at the forthcoming Large Hadron Collider (LHC), is reported, taking up a composite-model scenario in which the exchanged virtual composite neutrino is assumed to be a Majorana particle. Numerical estimates of the corresponding signal cross-section, implementing kinematical cuts needed to suppress the standard model background, are presented; these show that in some regions of the parameter space the total number of LSD events is well above the background. Assuming non-observation of the LSD signal, it is found that the LHC would exclude a composite Majorana neutrino up to 700 GeV (if one requires 10 events for discovery). The sensitivity of LHC experiments to the parameter space is then compared to that of the next generation of neutrinoless double beta decay (ββ₀ν) experiments, GENIUS, and i...
van Gijn, J
2000-01-01
The round figure for the current year has stirred people's minds in anticipation. Numbers have acquired great significance also in today's medical science. The Paris physician Pierre Charles Alexandre Louis (1787-1872) is considered the founding father of the numerical method in medicine. At first the principle of aggregating data from different individuals aroused much resistance and even disgust: Claude Bernard was a leading figure among those who warned that one will never find a mean in nature, and that grouping findings together obscures the true relationship between biological phenomena. True enough, statistical significance is not a characteristic of nature itself. Significant differences or risk reductions do not necessarily imply clinical relevance, and results obtained in a group of patients are rarely applicable to an individual patient in the consultation room. Likewise, the health of a human being cannot be captured in biochemical, radiological or other technical measures, nor in disease-specific scales that reduce well-being to one or two digits. The editors of this journal will remain keen on publishing numerical studies that contribute to evidence-based medicine, but at the same time they will continue to foster the art of reporting illness from the point of view of the sick person.
DEFF Research Database (Denmark)
Triantafillou, Peter
2015-01-01
Like the Supreme Audit Institutions of many other OECD countries, the Danish National Audit Office has stepped up its performance auditing of public administrations and agencies in order to ensure that they provide value for money. But how do performance audits contribute to making state...... institutions change their conduct? Based on the case of the Danish National Audit Office's auditing of the quality of teaching at Danish universities, this paper seeks to show, first, that the quantification and, by the same token, simplification of the practices subjected to performance auditing are key...