Large numbers hypothesis. II - Electromagnetic radiation
Adams, P. J.
1983-01-01
This paper develops the theory of electromagnetic radiation in the units covariant formalism incorporating Dirac's large numbers hypothesis (LNH). A direct field-to-particle technique is used to obtain the photon propagation equation, which explicitly involves the photon replication rate. This replication rate is fixed uniquely by requiring that the form of a free-photon distribution function be preserved, as required by the 2.7 K cosmic radiation. One finds that with this particular photon replication rate the units covariant formalism developed in Paper I actually predicts that the ratio of photon number to proton number in the universe varies as t^(1/4), precisely in accord with LNH. The cosmological red-shift law is also derived and is shown to differ considerably from the standard form νR = const.
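For orientation, the scalings behind these statements can be recalled schematically (a standard reading of Dirac's LNH, added here for context and not quoted from the paper):

```latex
% Epoch in atomic time units (a "large number" of order 10^39):
t \sim \frac{T}{e^{2}/(m_{e}c^{3})} \approx 10^{39}
% Dirac's LNH ties the other large dimensionless numbers to powers of t:
\frac{e^{2}}{G\,m_{p}m_{e}} \sim t, \qquad N_{p} \sim t^{2}
% hence a photon-to-proton ratio growing as t^{1/4} gives
\frac{N_{\gamma}}{N_{p}} \sim t^{1/4} \approx 10^{39/4} \approx 10^{9.75}
% consistent with an observed ratio of order 10^9 - 10^10.
```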
The large numbers hypothesis and a relativistic theory of gravitation
International Nuclear Information System (INIS)
Lau, Y.K.; Prokhovnik, S.J.
1986-01-01
A way to reconcile Dirac's large numbers hypothesis and Einstein's theory of gravitation was recently suggested by Lau (1985). It is characterized by the conjecture of a time-dependent cosmological term and gravitational term in Einstein's field equations. Motivated by this conjecture and the large numbers hypothesis, we formulate here a scalar-tensor theory in terms of an action principle. The cosmological term is required to be spatially dependent as well as time dependent in general. The theory developed is applied to a cosmological model compatible with the large numbers hypothesis. The time-dependent form of the cosmological term and the scalar potential are then deduced. A possible explanation of the smallness of the cosmological term is also given, and the possible significance of the scalar field is speculated upon.
The large number hypothesis and Einstein's theory of gravitation
International Nuclear Information System (INIS)
Yun-Kau Lau
1985-01-01
In an attempt to reconcile the large number hypothesis (LNH) with Einstein's theory of gravitation, a tentative generalization of Einstein's field equations with time-dependent cosmological and gravitational constants is proposed. A cosmological model consistent with the LNH is deduced. The coupling formula of the cosmological constant with matter is found, and as a consequence, the time-dependent formulae of the cosmological constant and the mean matter density of the Universe at the present epoch are then found. Einstein's theory of gravitation, whether with a zero or nonzero cosmological constant, becomes a limiting case of the new generalized field equations after the early epoch
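Schematically, a generalization of the kind described here promotes the two constants to functions of the epoch (a sketch of the general form only; the paper's precise coupling formula is not reproduced):

```latex
R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda(t)\,g_{\mu\nu}
  = \frac{8\pi G(t)}{c^{4}}\,T_{\mu\nu}
% with an LNH-motivated scaling G(t) \propto 1/t and a coupling formula tying
% \Lambda(t) to the matter content; constant G and \Lambda recover Einstein's
% theory as a limiting case after the early epoch.
```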
Particle creation and Dirac's large number hypothesis; and Reply
International Nuclear Information System (INIS)
Canuto, V.; Adams, P.J.; Hsieh, S.H.; Tsiang, E.; Steigman, G.
1976-01-01
The claim made by Steigman (Nature; 261:479 (1976)) that the creation of matter as postulated by Dirac (Proc. R. Soc.; A338:439 (1974)) is unnecessary is here shown to be incorrect. Steigman's claim that Dirac's Large Numbers Hypothesis (LNH) does not require particle creation is wrong because he has assumed that which he was seeking to prove, namely that rho does not contain matter creation. Steigman's claim that Dirac's LNH leads to nonsensical results in the very early Universe is superficially correct, but this only supports Dirac's contention that the LNH may not be valid in the very early Universe. In a reply, Steigman points out that in Dirac's original cosmology R ~ t^(1/3), and using this model the results and conclusions of the present author's paper do apply, but using a variation chosen by Canuto et al (T ~ t) Dirac's LNH cannot apply. Additionally it is observed that a cosmological theory which only predicts the present epoch is of questionable value. (U.K.)
The large numbers hypothesis and the Einstein theory of gravitation
International Nuclear Information System (INIS)
Dirac, P.A.M.
1979-01-01
A study of the relations between large dimensionless numbers leads to the belief that G, expressed in atomic units, varies with the epoch, while the Einstein theory requires G to be constant. These two requirements can be reconciled by supposing that the Einstein theory applies with a metric that differs from the atomic metric. The theory can be developed with conservation of mass by supposing that the continual increase in the mass of the observable universe arises from a continual slowing down of the velocity of recession of the galaxies. This leads to a model of the Universe that was first proposed by Einstein and de Sitter (the E.S. model). The observations of the microwave radiation fit in with this model. The static Schwarzschild metric has to be modified to fit in with the E.S. model for large r. The modification is worked out, and also the motion of planets with the new metric. It is found that there is a difference between ephemeris time and atomic time, and also that there should be an inward spiralling of the planets, referred to atomic units, superposed on the motion given by ordinary gravitational theory. These are effects that can be checked by observation, but there is no conclusive evidence up to the present. (author)
Do neutron stars disprove multiplicative creation in Dirac's large number hypothesis
International Nuclear Information System (INIS)
Qadir, A.; Mufti, A.A.
1980-07-01
Dirac's cosmology, based on his large number hypothesis, took the gravitational coupling to be decreasing with time and the amount of matter to grow as the square of time. Since the effects predicted by Dirac's theory are very small, it is difficult to find a ''clean'' test for it. Here we show that the observed radiation from pulsars is inconsistent with Dirac's multiplicative creation model, in which the matter created is proportional to the density of matter already present. Of course, this discussion makes no comment on the ''additive creation'' model, or on the revised version of Dirac's theory. (author)
Source of vacuum electromagnetic zero-point energy and Dirac's large numbers hypothesis
International Nuclear Information System (INIS)
Simaciu, I.; Dumitrescu, G.
1993-01-01
The stochastic electrodynamics states that zero-point fluctuation of the vacuum (ZPF) is an electromagnetic zero-point radiation with spectral density ρ(ω) = ħω³/2π²c³. Protons, free electrons and atoms are sources for this radiation. Each of them absorbs and emits energy by interacting with ZPF. At equilibrium ZPF radiation is scattered by dipoles. Scattered radiation spectral density is ρ(ω,r) = ρ(ω)·c·σ(ω)/4πr². The dipole radiation spectral density of the Universe is ρ_s = ∫₀^R n ρ(ω,r) 4πr² dr. But if σ_atom ≅ σ_e ≅ σ_T, then ρ_s = ρ(ω)σ_T R n. Moreover, if ρ_s = ρ(ω), then σ_T R n = 1. With R = GM/c² and σ_T ≅ (e²/m_e c²)² ∝ r_e², the relation σ_T R n ≅ 1 is equivalent to R/r_e = e²/Gm_p m_e, i.e. the cosmological coincidence discussed in the context of Dirac's large-numbers hypothesis. (Author)
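As an order-of-magnitude check on the quoted coincidence (Gaussian-units arithmetic added here, not taken from the paper):

```latex
\frac{e^{2}}{G\,m_{p}m_{e}}
  \approx \frac{(4.8\times10^{-10}\,\mathrm{esu})^{2}}
               {(6.7\times10^{-8})(1.7\times10^{-24}\,\mathrm{g})(9.1\times10^{-28}\,\mathrm{g})}
  \approx 2\times10^{39}
% comparable to R/r_e ~ 10^{28} cm / 2.8x10^{-13} cm ~ 4x10^{40}
% for a Hubble-scale R and the classical electron radius r_e.
```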
International Nuclear Information System (INIS)
Peng Huanwu
2005-01-01
Taking Dirac's large number hypothesis as true, we have shown [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703] the inconsistency of applying Einstein's theory of general relativity with a fixed gravitational constant G to cosmology, and a modified theory for varying G is found, which reduces to Einstein's theory outside the gravitating body for phenomena of short duration and small distances, thereby agreeing with all the crucial tests formerly supporting Einstein's theory. The modified theory, when applied to the usual homogeneous cosmological model, gives rise to a variable cosmological tensor term determined by the derivatives of G, in place of the cosmological constant term usually introduced ad hoc. Without any free parameter, the theoretical Hubble relation obtained from the modified theory seems not to be in contradiction with observations, as Dr. Wang's preliminary analysis of the recent data indicates [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703]. As a complement to Commun. Theor. Phys. (Beijing, China) 42 (2004) 703, we shall study in this paper the modification of electromagnetism due to Dirac's large number hypothesis in more detail, to show that the approximation of geometric optics still leads to null geodesics for the path of light, and that the general relation between the luminosity distance and the proper geometric distance is still valid in our theory as in Einstein's theory, and to give the equations for a homogeneous cosmological model involving matter plus electromagnetic radiation. Finally we consider the impact of the modification on quantum mechanics and statistical mechanics, and arrive at a systematic theory of evolving natural constants, including Planck's h-bar as well as Boltzmann's k_B, by finding out their cosmologically combined counterparts with factors of appropriate powers of G that may remain truly constant over cosmologically long times.
Large number discrimination by mosquitofish.
Directory of Open Access Journals (Sweden)
Christian Agrillo
BACKGROUND: Recent studies have demonstrated that fish display rudimentary numerical abilities similar to those observed in mammals and birds. The mechanisms underlying the discrimination of small quantities (<4) were recently investigated while, to date, no study has examined the discrimination of large numerosities in fish. METHODOLOGY/PRINCIPAL FINDINGS: Subjects were trained to discriminate between two sets of small geometric figures using social reinforcement. In the first experiment mosquitofish were required to discriminate 4 from 8 objects with or without experimental control of the continuous variables that co-vary with number (area, space, density, total luminance). Results showed that fish can use numerical information alone to compare quantities, but that they preferentially use cumulative surface area as a proxy for number when this information is available. A second experiment investigated the influence of the total number of elements on the discrimination of large quantities. Fish proved able to discriminate up to 100 vs. 200 objects, without showing any significant decrease in accuracy compared with the 4 vs. 8 discrimination. The third experiment investigated the influence of the ratio between the numerosities. Performance was found to decrease with decreasing numerical distance. Fish were able to discriminate numbers when ratios were 1:2 or 2:3 but not when the ratio was 3:4. The performance of a sample of undergraduate students, tested non-verbally using the same sets of stimuli, largely overlapped that of the fish. CONCLUSIONS/SIGNIFICANCE: Fish are able to use pure numerical information when discriminating between quantities larger than 4 units. As observed in human and non-human primates, the numerical system of fish appears to have virtually no upper limit, while the numerical ratio has a clear effect on performance. These similarities further reinforce the view of a common origin of non-verbal numerical systems in all vertebrates.
A large scale test of the gaming-enhancement hypothesis
Directory of Open Access Journals (Sweden)
Andrew K. Przybylski
2016-11-01
A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people's gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses was compared. Results provided no substantive evidence supporting the idea that having a preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support for the null hypothesis over what was predicted. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work.
A large scale test of the gaming-enhancement hypothesis.
Przybylski, Andrew K; Wang, John C
2016-01-01
A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people's gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses was compared. Results provided no substantive evidence supporting the idea that having a preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support for the null hypothesis over what was predicted. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work.
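To make the Bayesian comparison concrete, a common shortcut approximates the Bayes factor from the BIC difference of two nested regressions; the sketch below uses synthetic data and invented variable names, not the authors' dataset or their exact estimator:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1847                                   # matches the study's sample size
gaming_hours = rng.gamma(2.0, 1.5, n)      # synthetic predictor
reasoning = rng.normal(100, 15, n)         # synthetic outcome, unrelated by design

null_fit = sm.OLS(reasoning, np.ones(n)).fit()                     # intercept only
alt_fit = sm.OLS(reasoning, sm.add_constant(gaming_hours)).fit()   # + gaming effect

# BIC approximation to the Bayes factor BF01 = p(data|H0) / p(data|H1)
bf01 = np.exp((alt_fit.bic - null_fit.bic) / 2)
print(f"BF01 = {bf01:.2f} (values > 1 favour the null)")
```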
Prospective detection of large prediction errors: a hypothesis testing approach
International Nuclear Information System (INIS)
Ruan, Dan
2010-01-01
Real-time motion management is important in radiotherapy. In addition to effective monitoring schemes, prediction is required to compensate for system latency, so that treatment can be synchronized with tumor motion. However, it is difficult to predict tumor motion at all times, and it is critical to determine when large prediction errors may occur. Such information can be used to pause the treatment beam or adjust monitoring/prediction schemes. In this study, we propose a hypothesis testing approach for detecting instants corresponding to potentially large prediction errors in real time. We treat the future tumor location as a random variable, and obtain its empirical probability distribution with the kernel density estimation-based method. Under the null hypothesis, the model probability is assumed to be a concentrated Gaussian centered at the prediction output. Under the alternative hypothesis, the model distribution is assumed to be non-informative uniform, which reflects the situation that the future position cannot be inferred reliably. We derive the likelihood ratio test (LRT) for this hypothesis testing problem and show that with the method of moments for estimating the null hypothesis Gaussian parameters, the LRT reduces to a simple test on the empirical variance of the predictive random variable. This conforms to the intuition to expect a (potentially) large prediction error when the estimate is associated with high uncertainty, and to expect an accurate prediction when the uncertainty level is low. We tested the proposed method on patient-derived respiratory traces. The 'ground-truth' prediction error was evaluated by comparing the prediction values with retrospective observations, and the large prediction regions were subsequently delineated by thresholding the prediction errors. The receiver operating characteristic curve was used to describe the performance of the proposed hypothesis testing method. Clinical implication was represented by miss
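The practical upshot, flagging an instant as risky when the predictive distribution is too spread out, can be sketched as follows (an illustrative reconstruction with an invented threshold, and scipy's Gaussian KDE standing in for the paper's kernel density estimator):

```python
import numpy as np
from scipy.stats import gaussian_kde

def high_uncertainty(history, n_draws=2000, var_threshold=0.45):
    """True if the empirical predictive distribution is too uncertain:
    the LRT described above reduces to a test on this variance."""
    kde = gaussian_kde(history)          # empirical density of recent positions
    draws = kde.resample(n_draws)[0]     # samples of the future position
    return draws.var() > var_threshold

# toy respiratory-like trace; flag windows with high predictive variance
rng = np.random.default_rng(3)
t = np.linspace(0, 30, 600)
trace = np.sin(2 * np.pi * t / 4) + 0.05 * rng.standard_normal(t.size)
window = 40
flags = [high_uncertainty(trace[i - window:i]) for i in range(window, t.size)]
print(f"{sum(flags)} of {len(flags)} instants flagged")
```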
Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection
DEFF Research Database (Denmark)
Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus
2010-01-01
We propose a novel and efficient way of performing local image segmentation. For many applications a threshold of pixel intensities is sufficient, but determining the appropriate threshold value can be difficult. In cases with large global intensity variation the threshold value has to be adapted locally. We propose a method based on large scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated background. The images are acquired with a microscope, and we show how the method can handle transparent particles with significant glare points. The method generalizes to other problems. This is illustrated by applying the method to camera calibration images and MRI of the midsagittal plane for gray and white matter separation and segmentation.
Modified large number theory with constant G
International Nuclear Information System (INIS)
Recami, E.
1983-01-01
The inspiring ''numerology'' uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the ''gravitational world'' (cosmos) with the ''strong world'' (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the ''Large Number Theory,'' cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R-bar/r-bar of the cosmos typical length R-bar to the hadron typical length r-bar is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the ''cyclical big-bang'' hypothesis, then R-bar and r-bar can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavsic.
Large numbers hypothesis. IV - The cosmological constant and quantum physics
Adams, P. J.
1983-01-01
In standard physics quantum field theory is based on a flat vacuum space-time. This quantum field theory predicts a nonzero cosmological constant. Hence the gravitational field equations do not admit a flat vacuum space-time. This dilemma is resolved using the units covariant gravitational field equations. This paper shows that the field equations admit a flat vacuum space-time with nonzero cosmological constant if and only if the canonical LNH is valid. This allows an interpretation of the LNH phenomena in terms of a time-dependent vacuum state. If this is correct then the cosmological constant must be positive.
Forecasting distribution of numbers of large fires
Haiganoush K. Preisler; Jeff Eidenshink; Stephen Howard; Robert E. Burgan
2015-01-01
Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however they indicate neither the chance that a large fire will occur, nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the...
Thermal convection for large Prandtl numbers
Grossmann, Siegfried; Lohse, Detlef
2001-01-01
The Rayleigh-Bénard theory by Grossmann and Lohse [J. Fluid Mech. 407, 27 (2000)] is extended towards very large Prandtl numbers Pr. The Nusselt number Nu is found here to be independent of Pr. However, for fixed Rayleigh numbers Ra a maximum in the Nu(Pr) dependence is predicted. We moreover offer
Visual Working Memory and Number Sense: Testing the Double Deficit Hypothesis in Mathematics
Toll, Sylke W. M.; Kroesbergen, Evelyn H.; Van Luit, Johannes E. H.
2016-01-01
Background: Evidence exists that there are two main underlying cognitive factors in mathematical difficulties: working memory and number sense. It is suggested that real math difficulties appear when both working memory and number sense are weak, here referred to as the double deficit (DD) hypothesis. Aims: The aim of this study was to test the DD…
Large number discrimination in newborn fish.
Directory of Open Access Journals (Sweden)
Laura Piffer
Quantitative abilities have been reported in a wide range of species, including fish. Recent studies have shown that adult guppies (Poecilia reticulata) can spontaneously select the larger number of conspecifics. In particular, the evidence collected in the literature suggests the existence of two distinct systems of number representation: a precise system up to 4 units, and an approximate system for larger numbers. Spontaneous numerical abilities, however, seem to be limited to 4 units at birth, and it is currently unclear whether or not the large number system is absent during the first days of life. In the present study, we investigated whether newborn guppies can be trained to discriminate between large quantities. Subjects were required to discriminate between groups of dots with a 0.50 ratio (e.g., 7 vs. 14) in order to obtain a food reward. To dissociate the roles of number and of the continuous quantities that co-vary with numerical information (such as cumulative surface area, space and density), three different experiments were set up: in Exp. 1 number and continuous quantities were simultaneously available; in Exp. 2 we controlled for continuous quantities and only numerical information was available; in Exp. 3 numerical information was made irrelevant and only continuous quantities were available. Subjects successfully solved the tasks in Exp. 1 and 2, providing the first evidence of large number discrimination in newborn fish. No discrimination was found in Exp. 3, meaning that number acuity is better than spatial acuity. A comparison with the onset of numerical abilities observed in shoal-choice tests suggests that training procedures can promote the development of numerical abilities in guppies.
Visual working memory and number sense: Testing the double deficit hypothesis in mathematics.
Toll, Sylke W M; Kroesbergen, Evelyn H; Van Luit, Johannes E H
2016-09-01
Evidence exists that there are two main underlying cognitive factors in mathematical difficulties: working memory and number sense. It is suggested that real math difficulties appear when both working memory and number sense are weak, here referred to as the double deficit (DD) hypothesis. The aim of this study was to test the DD hypothesis within a longitudinal time span of 2 years. A total of 670 children participated. The mean age was 4.96 years at the start of the study and 7.02 years at the end of the study. At the end of the first year of kindergarten, both visual-spatial working memory and number sense were measured by two different tasks. At the end of first grade, mathematical performance was measured with two tasks, one for math facts and one for math problems. Multiple regressions revealed that both visual working memory and symbolic number sense are predictors of mathematical performance in first grade. Symbolic number sense appears to be the strongest predictor for both math areas (math facts and math problems). Non-symbolic number sense only predicts performance in math problems. Multivariate analyses of variance showed that a combination of visual working memory and number sense deficits (NSDs) leads to the lowest performance on mathematics. Our DD hypothesis was confirmed. Both visual working memory and symbolic number sense in kindergarten are related to mathematical performance 2 years later, and a combination of visual working memory and NSDs leads to low performance in mathematical performance. © 2016 The British Psychological Society.
Forecasting distribution of numbers of large fires
Eidenshink, Jeffery C.; Preisler, Haiganoush K.; Howard, Stephen; Burgan, Robert E.
2014-01-01
Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however they indicate neither the chance that a large fire will occur, nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the Monitoring Trends in Burn Severity project, and satellite and surface observations of fuel conditions in the form of the Fire Potential Index, to estimate two aspects of fire danger: 1) the probability that a 1 acre ignition will result in a 100+ acre fire, and 2) the probabilities of having at least 1, 2, 3, or 4 large fires within a Predictive Services Area in the forthcoming week. These statistical processes are the main thrust of the paper and are used to produce two daily national forecasts that are available from the U.S. Geological Survey, Earth Resources Observation and Science Center and via the Wildland Fire Assessment System. A validation study of our forecasts for the 2013 fire season demonstrated good agreement between observed and forecasted values.
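The second forecast quantity reduces to elementary probability once a per-ignition growth probability is in hand; a minimal sketch with invented numbers (not the project's fitted model):

```python
from scipy.stats import binom

# Illustrative values: 25 expected 1-acre ignitions in a Predictive Services
# Area next week, each with a 3% chance of growing past 100 acres.
n_ignitions, p_large = 25, 0.03

for k in (1, 2, 3, 4):
    p_at_least_k = 1 - binom.cdf(k - 1, n_ignitions, p_large)  # P(>= k large fires)
    print(f"P(at least {k} large fires) = {p_at_least_k:.3f}")
```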
Hierarchies in Quantum Gravity: Large Numbers, Small Numbers, and Axions
Stout, John Eldon
Our knowledge of the physical world is mediated by relatively simple, effective descriptions of complex processes. By their very nature, these effective theories obscure any phenomena outside their finite range of validity, discarding information crucial to understanding the full, quantum gravitational theory. However, we may gain enormous insight into the full theory by understanding how effective theories with extreme characteristics--for example, those which realize large-field inflation or have disparate hierarchies of scales--can be naturally realized in consistent theories of quantum gravity. The work in this dissertation focuses on understanding the quantum gravitational constraints on these "extreme" theories in well-controlled corners of string theory. Axion monodromy provides one mechanism for realizing large-field inflation in quantum gravity. These models spontaneously break an axion's discrete shift symmetry and, assuming that the corrections induced by this breaking remain small throughout the excursion, create a long, quasi-flat direction in field space. This weakly-broken shift symmetry has been used to construct a dynamical solution to the Higgs hierarchy problem, dubbed the "relaxion." We study this relaxion mechanism and show that--without major modifications--it can not be naturally embedded within string theory. In particular, we find corrections to the relaxion potential--due to the ten-dimensional backreaction of monodromy charge--that conflict with naive notions of technical naturalness and render the mechanism ineffective. The super-Planckian field displacements necessary for large-field inflation may also be realized via the collective motion of many aligned axions. However, it is not clear that string theory provides the structures necessary for this to occur. We search for these structures by explicitly constructing the leading order potential for C4 axions and computing the maximum possible field displacement in all compactifications of
New feature for an old large number
International Nuclear Information System (INIS)
Novello, M.; Oliveira, L.R.A.
1986-01-01
A new context for the appearance of the Eddington number (10³⁹), which is due to the examination of elastic scattering of scalar particles (ΠK → ΠK) non-minimally coupled to gravity, is presented. (author)
Rethinking the starch digestion hypothesis for AMY1 copy number variation in humans.
Fernández, Catalina I; Wiley, Andrea S
2017-08-01
Alpha-amylase exists across taxonomic kingdoms with a deep evolutionary history of gene duplications that resulted in several α-amylase paralogs. Copy number variation (CNV) in the salivary α-amylase gene (AMY1) exists in many taxa, but among primates, humans appear to have higher average AMY1 copies than nonhuman primates. Additionally, AMY1 CNV in humans has been associated with the starch content of diets, and one known function of α-amylase is its involvement in starch digestion. Thus high AMY1 CNV is considered to result from selection favoring more efficient starch digestion in the Homo lineage. Here, we present several lines of evidence that challenge the hypothesis that increased AMY1 CNV is an adaptation to starch consumption. We observe that α-amylase plays a very limited role in starch digestion, with additional steps required for starch digestion and glucose metabolism. Specifically, we note that α-amylase hydrolysis only produces a minute amount of free glucose, with further enzymatic digestion and glucose absorption being rate-limiting steps for glucose availability. Indeed α-amylase is nonessential for starch digestion, since sucrase-isomaltase and maltase-glucoamylase can hydrolyze whole starch granules while releasing glucose. While higher AMY1 copy number and CNV among human populations may result from natural selection, existing evidence does not support starch digestion as the major selective force. We report that in humans α-amylase is expressed in several other tissues where it may have potential roles of evolutionary significance. © 2017 Wiley Periodicals, Inc.
The lore of large numbers: some historical background to the anthropic principle
International Nuclear Information System (INIS)
Barrow, J.D.
1981-01-01
A description is given of how the study of numerological coincidences in physics and cosmology led first to the Large Numbers Hypothesis of Dirac and then to the suggestion of the Anthropic Principle in a variety of forms. The early history of 'coincidences' is discussed together with the work of Weyl, Eddington and Dirac. (author)
Visual working memory and number sense : Testing the double deficit hypothesis in mathematics
Toll, Sylke; Kroesbergen, Evelyn; Van Luit, Johannes E H
2016-01-01
Background: Evidence exists that there are two main underlying cognitive factors in mathematical difficulties: working memory and number sense. It is suggested that real math difficulties appear when both working memory and number sense are weak, here referred to as the double deficit (DD)
Thermocapillary Bubble Migration: Thermal Boundary Layers for Large Marangoni Numbers
Balasubramaniam, R.; Subramanian, R. S.
1996-01-01
The migration of an isolated gas bubble in an immiscible liquid possessing a temperature gradient is analyzed in the absence of gravity. The driving force for the bubble motion is the shear stress at the interface which is a consequence of the temperature dependence of the surface tension. The analysis is performed under conditions for which the Marangoni number is large, i.e. energy is transferred predominantly by convection. Velocity fields in the limit of both small and large Reynolds numbers are used. The thermal problem is treated by standard boundary layer theory. The outer temperature field is obtained in the vicinity of the bubble. A similarity solution is obtained for the inner temperature field. For both small and large Reynolds numbers, the asymptotic values of the scaled migration velocity of the bubble in the limit of large Marangoni numbers are calculated. The results show that the migration velocity has the same scaling for both low and large Reynolds numbers, but with a different coefficient. Higher order thermal boundary layers are analyzed for the large Reynolds number flow field and the higher order corrections to the migration velocity are obtained. Results are also presented for the momentum boundary layer and the thermal wake behind the bubble, for large Reynolds number conditions.
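For reference, the dimensionless groups governing this problem are conventionally built from the thermocapillary reference velocity (standard definitions under the usual notation, not quoted from the paper):

```latex
v_{0} = \frac{\left|\mathrm{d}\sigma/\mathrm{d}T\right|\,\left|\nabla T_{\infty}\right| R}{\mu},
\qquad
\mathrm{Ma} = \frac{v_{0}R}{\kappa},
\qquad
\mathrm{Re} = \frac{v_{0}R}{\nu}
% R: bubble radius; \mu, \nu, \kappa: dynamic viscosity, kinematic viscosity
% and thermal diffusivity of the liquid. Large Ma means heat is transported
% predominantly by convection, the regime analyzed above.
```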
Lapidus, Michel L
2015-08-06
This research expository article not only contains a survey of earlier work but also contains a main new result, which we first describe. Given c ≥ 0, the spectral operator a = a_c can be thought of intuitively as the operator which sends the geometry onto the spectrum of a fractal string of dimension not exceeding c. Rigorously, it turns out to coincide with a suitable quantization of the Riemann zeta function ζ = ζ(s): a = ζ(∂), where ∂ = ∂(c) is the infinitesimal shift of the real line acting on the weighted Hilbert space H_c = L²(ℝ, e^(−2ct)dt). In this paper, we establish a new asymmetric criterion for the Riemann hypothesis (RH), expressed in terms of the invertibility of the spectral operator for all values of the dimension parameter c ∈ (0, 1/2) (i.e. for all c in the left half of the critical interval (0,1)). This corresponds (conditionally) to a mathematical (and perhaps also, physical) 'phase transition' occurring in the midfractal case when c = 1/2. Both the universality and the non-universality of ζ = ζ(s) in the right (resp., left) critical strip {1/2 < Re(s) < 1} (resp., {0 < Re(s) < 1/2}) play a key role in this context. These new results are presented here. We also briefly discuss earlier joint work on the complex dimensions of fractal strings, and we survey earlier related work of the author with Maier and with Herichi, respectively, in which were established symmetric criteria for the RH, expressed, respectively, in terms of a family of natural inverse spectral problems for fractal strings of Minkowski dimension D ∈ (0,1), with D ≠ 1/2, and of the quasi-invertibility of the family of truncated spectral operators a_c^(T) (with T > 0). © 2015 The Author(s) Published by the Royal Society. All rights reserved.
On a strong law of large numbers for monotone measures
Czech Academy of Sciences Publication Activity Database
Agahi, H.; Mohammadpour, A.; Mesiar, Radko; Ouyang, Y.
2013-01-01
Roč. 83, č. 4 (2013), s. 1213-1218 ISSN 0167-7152 R&D Projects: GA ČR GAP402/11/0378 Institutional support: RVO:67985556 Keywords : capacity * Choquet integral * strong law of large numbers Subject RIV: BA - General Mathematics Impact factor: 0.531, year: 2013 http://library.utia.cas.cz/separaty/2013/E/mesiar-on a strong law of large numbers for monotone measures.pdf
A Chain Perspective on Large-scale Number Systems
Grijpink, J.H.A.M.
2012-01-01
As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena
Fatal crashes involving large numbers of vehicles and weather.
Wang, Ying; Liang, Liming; Evans, Leonard
2017-12-01
Adverse weather has been recognized as a significant threat to traffic safety. However, relationships between fatal crashes involving large numbers of vehicles and weather are rarely studied, owing to the low occurrence of crashes involving large numbers of vehicles. By using all 1,513,792 fatal crashes in the Fatality Analysis Reporting System (FARS) data, 1975-2014, we successfully described these relationships. We found: (a) fatal crashes involving more than 35 vehicles are most likely to occur in snow or fog; (b) fatal crashes in rain are three times as likely to involve 10 or more vehicles as fatal crashes in good weather; (c) fatal crashes in snow [or fog] are 24 times [35 times] as likely to involve 10 or more vehicles as fatal crashes in good weather. If the example had used 20 vehicles, the risk ratios would be 6 for rain, 158 for snow, and 171 for fog. To reduce the risk of involvement in fatal crashes with large numbers of vehicles, drivers should slow down more than they currently do under adverse weather conditions. Driver deaths per fatal crash increase slowly with increasing numbers of involved vehicles when it is snowing or raining, but more steeply when clear or foggy. We conclude that in order to reduce the risk of involvement in crashes involving large numbers of vehicles, drivers must reduce speed in fog, and in snow or rain, reduce speed by even more than they already do. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
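The quoted risk ratios are conditional-probability ratios over crash records; a schematic computation on synthetic counts (invented distributions, not FARS data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# One row per fatal crash; invented heavy-tailed vehicle counts whose tail
# thickens in adverse weather (Lomax/Pareto II; smaller shape = heavier tail).
weathers = ["clear"] * 9000 + ["rain"] * 800 + ["snow"] * 150 + ["fog"] * 50
shape = {"clear": 3.0, "rain": 2.3, "snow": 1.5, "fog": 1.4}
crashes = pd.DataFrame({
    "weather": weathers,
    "n_vehicles": [1 + int(2 * rng.pareto(shape[w])) for w in weathers],
})

# P(crash involves >= 10 vehicles | weather), relative to good weather
p10 = crashes.groupby("weather")["n_vehicles"].apply(lambda v: (v >= 10).mean())
print(p10 / p10["clear"])
```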
On Independence for Capacities with Law of Large Numbers
Huang, Weihuan
2017-01-01
This paper introduces new notions of Fubini independence and Exponential independence of random variables under capacities to fit Ellsberg's model, and establishes the relationships between Fubini independence, Exponential independence, Maccheroni and Marinacci's independence and Peng's independence. As an application, we give a weak law of large numbers for capacities under Exponential independence.
Teaching Multiplication of Large Positive Whole Numbers Using ...
African Journals Online (AJOL)
This study investigated the teaching of multiplication of large positive whole numbers using the grating method and the effect of this method on students' performance in junior secondary schools. The study was conducted in Obio Akpor Local Government Area of Rivers State. It was quasi-experimental. Two research ...
Lovelock inflation and the number of large dimensions
Ferrer, Francesc
2007-01-01
We discuss an inflationary scenario based on Lovelock terms. These higher order curvature terms can lead to inflation when there are more than three spatial dimensions. Inflation will end if the extra dimensions are stabilised, so that at most three dimensions are free to expand. This relates graceful exit to the number of large dimensions.
Solar test of Dirac's large number hypothesis. [multiplicative creation model for solar evolution
Chin, C.-W.; Stothers, R.
1975-01-01
An investigation is conducted regarding the implications of Dirac's theories (1973, 1974) concerning the creation of new matter. It is found that Dirac's theory of multiplicative creation, but not his theory of additive creation, is consistent with known facts about the sun. According to the theory of additive creation, matter is formed uniformly throughout space; the concept of multiplicative creation implies that existing matter multiplies itself in proportion to the amount of matter already present.
A large number of stepping motor network construction by PLC
Mei, Lin; Zhang, Kai; Hongqiang, Guo
2017-11-01
In a flexible automated line the equipment is complex and the control modes are flexible, so realizing orderly control and information interaction among a large number of stepping and servo motors becomes a difficult problem. Based on an existing flexible production line, this paper makes a comparative study of its network strategy. Following this research, an Ethernet + PROFIBUS communication configuration based on PROFINET IO and PROFIBUS is proposed, which can effectively improve the data-interaction efficiency of the equipment and stabilize the exchanged data.
Fluid Mechanics of Aquatic Locomotion at Large Reynolds Numbers
Govardhan, RN; Arakeri, JH
2011-01-01
There exists a huge range of fish species, besides other aquatic organisms like squids and salps, that locomote in water at large Reynolds numbers, a regime of flow where inertial forces dominate viscous forces. In the present review, we discuss the fluid mechanics governing the locomotion of such organisms. Most fishes propel themselves by periodic undulatory motions of the body and tail, and the typical classification of their swimming modes is based on the fraction of their body...
Rotating thermal convection at very large Rayleigh numbers
Weiss, Stephan; van Gils, Dennis; Ahlers, Guenter; Bodenschatz, Eberhard
2016-11-01
The large scale thermal convection systems in geo- and astrophysics are usually influenced by Coriolis forces caused by the rotation of their celestial bodies. To better understand the influence of rotation on the convective flow field and the heat transport under these conditions, we study Rayleigh-Bénard convection, using pressurized sulfur hexafluoride (SF6) at up to 19 bar in a cylinder of diameter D = 1.12 m and height L = 2.24 m. The gas is heated from below and cooled from above, and the convection cell sits on a rotating table inside a large pressure vessel (the "Uboot of Göttingen"). With this setup Rayleigh numbers of up to Ra = 10¹⁵ can be reached, while Ekman numbers as low as Ek = 10⁻⁸ are possible. The Prandtl number in these experiments is kept constant at Pr = 0.8. We report on heat flux measurements (expressed by the Nusselt number Nu) as well as measurements from more than 150 temperature probes inside the flow. We thank the Deutsche Forschungsgemeinschaft (DFG) for financial support through SFB963: "Astrophysical Flow Instabilities and Turbulence". The work of GA was supported in part by the US National Science Foundation through Grant DMR11-58514.
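The control and response parameters quoted here have their standard definitions, recalled for convenience (notation assumed, not taken from the abstract):

```latex
\mathrm{Ra} = \frac{\alpha g\,\Delta T\,L^{3}}{\nu\kappa},\qquad
\mathrm{Ek} = \frac{\nu}{2\Omega L^{2}},\qquad
\mathrm{Pr} = \frac{\nu}{\kappa},\qquad
\mathrm{Nu} = \frac{qL}{\lambda\,\Delta T}
% \alpha: isobaric expansion coefficient, g: gravity, \Delta T: applied
% temperature difference, L: cell height, \nu: kinematic viscosity,
% \kappa: thermal diffusivity, \Omega: rotation rate, q: heat flux,
% \lambda: thermal conductivity.
```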
Lepton number violation in theories with a large number of standard model copies
International Nuclear Information System (INIS)
Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich
2011-01-01
We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, the violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model, since due to the low quantum gravity scale black holes may induce TeV-scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_{B-L}. Then, due to the existence of a specific compensation mechanism between contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed, with rates far beyond experimental reach.
Gray, Stephen J; Gallo, David A
2016-02-01
Belief in paranormal psychic phenomena is widespread in the United States, with over a third of the population believing in extrasensory perception (ESP). Why do some people believe, while others are skeptical? According to the cognitive differences hypothesis, individual differences in the way people process information about the world can contribute to the creation of psychic beliefs, such as differences in memory accuracy (e.g., selectively remembering a fortune teller's correct predictions) or analytical thinking (e.g., relying on intuition rather than scrutinizing evidence). While this hypothesis is prevalent in the literature, few have attempted to empirically test it. Here, we provided the most comprehensive test of the cognitive differences hypothesis to date. In 3 studies, we used online screening to recruit groups of strong believers and strong skeptics, matched on key demographics (age, sex, and years of education). These groups were then tested in laboratory and online settings using multiple cognitive tasks and other measures. Our cognitive testing showed that there were no consistent group differences on tasks of episodic memory distortion, autobiographical memory distortion, or working memory capacity, but skeptics consistently outperformed believers on several tasks tapping analytical or logical thinking as well as vocabulary. These findings demonstrate cognitive similarities and differences between these groups and suggest that differences in analytical thinking and conceptual knowledge might contribute to the development of psychic beliefs. We also found that psychic belief was associated with greater life satisfaction, demonstrating benefits associated with psychic beliefs and highlighting the role of both cognitive and noncognitive factors in understanding these individual differences.
Improving CASINO performance for models with large number of electrons
International Nuclear Information System (INIS)
Anton, L.; Alfe, D.; Hood, R.Q.; Tanqueray, D.
2009-01-01
Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers which are straightforward to use on parallel computers. Nevertheless some computations have reached the limit of the memory resources for models with more than 1000 electrons because of the need to store a large amount of electronic-orbital data. Besides that, for systems with a large number of electrons, it is interesting to study whether the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) orbital data distributed with MPI or Unix inter-process communication tools; (2) second-level parallelism for the configuration computation.
[Dual process in large number estimation under uncertainty].
Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento
2016-08-01
According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such deliberative process by System 2 on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.
Combining large number of weak biomarkers based on AUC.
Yan, Li; Tian, Lili; Liu, Song
2015-12-20
Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.
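For concreteness, the empirical AUC of any fixed linear combination can be scored with the Mann-Whitney statistic; the sketch below evaluates a naive equal-weight combination of many weak synthetic markers (a generic illustration, not the paper's pairwise algorithm):

```python
import numpy as np
from scipy.stats import mannwhitneyu

def combination_auc(x_cases, x_controls, w):
    """Empirical AUC of the linear marker combination x @ w."""
    s1, s0 = x_cases @ w, x_controls @ w
    u, _ = mannwhitneyu(s1, s0, alternative="greater")
    return u / (len(s1) * len(s0))       # U/(n1*n2) estimates P(score1 > score0)

rng = np.random.default_rng(2)
p = 50                                   # many weak markers
cases = rng.normal(0.15, 1.0, (200, p))  # small mean shift per marker
controls = rng.normal(0.0, 1.0, (300, p))

w_equal = np.ones(p) / p
print(f"equal-weight AUC: {combination_auc(cases, controls, w_equal):.3f}")
```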
Quasi-isodynamic configuration with large number of periods
International Nuclear Information System (INIS)
Shafranov, V.D.; Isaev, M.Yu.; Mikhailov, M.I.; Subbotin, A.A.; Cooper, W.A.; Kalyuzhnyj, V.N.; Kasilov, S.V.; Nemov, V.V.; Kernbichler, W.; Nuehrenberg, C.; Nuehrenberg, J.; Zille, R.
2005-01-01
It has been previously reported that quasi-isodynamic (qi) stellarators with poloidal direction of the contours of B on a magnetic surface can exhibit very good fast-particle collisionless confinement. In addition, approaching the quasi-isodynamicity condition leads to diminished neoclassical transport and small bootstrap current. The calculations of local-mode stability show that there is a tendency toward an increasing beta limit with increasing number of periods. The consideration of quasi-helically symmetric systems has demonstrated that with increasing aspect ratio (and number of periods) the optimized configuration approaches its straight symmetric counterpart, for which the optimal parameters and highest beta values were found by optimization of the boundary magnetic surface cross-section. The qi systems considered here with zero net toroidal current do not have a symmetric analogue in the limit of large aspect ratio and finite rotational transform. Thus, it is not clear whether some invariant structure of the configuration period exists in the limit of negligible toroidal effect, and what the best possible parameters for it are. In the present paper the results of an optimization of a configuration with N = 12 periods are presented. Such properties as fast-particle confinement, effective ripple, structural factor of the bootstrap current and MHD stability are considered. It is shown that the MHD stability limit here is larger than in configurations with smaller numbers of periods considered earlier. Nevertheless, the toroidal effect in this configuration is still significant, so that a simple increase of the number of periods and proportional growth of the aspect ratio do not conserve the favourable neoclassical transport and ideal local-mode stability properties. (author)
Automatic trajectory measurement of large numbers of crowded objects
Li, Hui; Liu, Ye; Chen, Yan Qiu
2013-06-01
Complex motion patterns of natural systems, such as fish schools, bird flocks, and cell groups, have attracted great attention from scientists for years. Trajectory measurement of individuals is vital for quantitative and high-throughput study of their collective behaviors. However, such data are rare mainly due to the challenges of detection and tracking of large numbers of objects with similar visual features and frequent occlusions. We present an automatic and effective framework to measure trajectories of large numbers of crowded oval-shaped objects, such as fish and cells. We first use a novel dual ellipse locator to detect the coarse position of each individual and then propose a variance minimization active contour method to obtain the optimal segmentation results. For tracking, cost matrix of assignment between consecutive frames is trainable via a random forest classifier with many spatial, texture, and shape features. The optimal trajectories are found for the whole image sequence by solving two linear assignment problems. We evaluate the proposed method on many challenging data sets.
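The linking step at the end reduces to a linear assignment problem once a cost matrix is available; a minimal sketch with a plain Euclidean-distance cost standing in for the trained random-forest cost described above:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(prev_pos, curr_pos, max_jump=15.0):
    """Match detections between consecutive frames by minimum total cost."""
    # cost[i, j] = distance from object i in frame t to detection j in t+1
    cost = np.linalg.norm(prev_pos[:, None, :] - curr_pos[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    # discard implausible links, e.g. an object that left the field of view
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_jump]

prev_pos = np.array([[10.0, 12.0], [40.0, 41.0], [70.0, 15.0]])
curr_pos = np.array([[12.0, 14.0], [38.5, 44.0], [71.0, 17.5]])
print(link_frames(prev_pos, curr_pos))    # [(0, 0), (1, 1), (2, 2)]
```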
Hussain, A. K. M. F.
1980-01-01
Comparisons of the distributions of large scale structures in turbulent flow with distributions based on time dependent signals from stationary probes and the Taylor hypothesis are presented. The study investigated a region in the near field of a 7.62 cm circular air jet at Re = 32,000, with coherent structures induced through small-amplitude controlled excitation and stable vortex pairing in the jet column mode. Hot-wire and X-wire anemometry were employed to establish phase-averaged spatial distributions of longitudinal and lateral velocities, coherent Reynolds stress and vorticity, background turbulent intensities, streamlines and pseudo-stream functions. The Taylor hypothesis was used to calculate spatial distributions of the phase-averaged properties, with results indicating that use of the local time-average velocity or streamwise velocity produces large distortions.
A Characterization of Hypergraphs with Large Domination Number
Directory of Open Access Journals (Sweden)
Henning Michael A.
2016-05-01
Let H = (V, E) be a hypergraph with vertex set V and edge set E. A dominating set in H is a subset of vertices D ⊆ V such that for every vertex v ∈ V \ D there exists an edge e ∈ E for which v ∈ e and e ∩ D ≠ ∅. The domination number γ(H) is the minimum cardinality of a dominating set in H. It is known [Cs. Bujtás, M.A. Henning and Zs. Tuza, Transversals and domination in uniform hypergraphs, European J. Combin. 33 (2012) 62-71] that for k ≥ 5, if H is a hypergraph of order n and size m with all edges of size at least k and with no isolated vertex, then γ(H) ≤ (n + ⌊(k − 3)/2⌋m)/⌊3(k − 1)/2⌋. In this paper, we apply a recent result of the authors on hypergraphs with large transversal number [M.A. Henning and C. Löwenstein, A characterization of hypergraphs that achieve equality in the Chvátal-McDiarmid Theorem, Discrete Math. 323 (2014) 69-75] to characterize the hypergraphs achieving equality in this bound.
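The domination condition itself is mechanical to verify; a small checker for candidate dominating sets (an illustration of the definition only, unrelated to the characterization proof):

```python
def is_dominating(vertices, edges, candidate):
    """Every vertex outside `candidate` must share an edge with it."""
    d = set(candidate)
    return all(
        any(v in e and d & set(e) for e in edges)
        for v in set(vertices) - d
    )

# toy 3-uniform hypergraph on six vertices
V = range(1, 7)
E = [{1, 2, 3}, {3, 4, 5}, {4, 5, 6}]
print(is_dominating(V, E, {3, 4}))   # True
print(is_dominating(V, E, {1}))      # False: e.g. vertex 4 is not covered
```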
A modified large number theory with constant G
Recami, Erasmo
1983-03-01
The inspiring “numerology” uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the “gravitational world” (cosmos) with the “strong world” (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the “Large Number Theory,” cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R̄/r̄ of the cosmos typical length R̄ to the hadron typical length r̄ is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the “cyclical big-bang” hypothesis, then R̄ and r̄ can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavšič.
The large lungs of elite swimmers: an increased alveolar number?
Armour, J; Donnelly, P M; Bye, P T
1993-02-01
In order to obtain further insight into the mechanisms relating to the large lung volumes of swimmers, tests of mechanical lung function, including lung distensibility (K) and elastic recoil, pulmonary diffusion capacity, and respiratory mouth pressures, together with anthropometric data (height, weight, body surface area, chest width, depth and surface area), were compared in eight elite male swimmers, eight elite male long distance athletes and eight control subjects. The differences in training profiles of each group were also examined. There was no significant difference in height between the subjects, but the swimmers were younger than both the runners and controls, and both the swimmers and controls were heavier than the runners. Of all the training variables, only the mean total distance in kilometers covered per week was significantly greater in the runners. Whether based on (a) adolescent predicted values or (b) adult male predicted values, swimmers had significantly increased total lung capacity ((a) 145 ± 22% (mean ± SD), (b) 128 ± 15%); vital capacity ((a) 146 ± 24%, (b) 124 ± 15%); and inspiratory capacity ((a) 155 ± 33%, (b) 138 ± 29%), but this was not found in the other two groups. Swimmers also had the largest chest surface area and chest width. Forced expiratory volume in one second (FEV1) was largest in the swimmers ((b) 122 ± 17%) and FEV1 as a percentage of forced vital capacity (FEV1/FVC%) was similar for the three groups. Pulmonary diffusing capacity (DLCO) was also highest in the swimmers (117 ± 18%). All of the other indices of lung function, including pulmonary distensibility (K), elastic recoil and diffusion coefficient (KCO), were similar. These findings suggest that swimmers may have achieved greater lung volumes than either runners or control subjects, not because of greater inspiratory muscle strength, or differences in height, fat free mass, alveolar distensibility, age at start of training or sternal length or
A NICE approach to managing large numbers of desktop PC's
International Nuclear Information System (INIS)
Foster, David
1996-01-01
The problems of managing desktop systems are far from resolved as we deploy increasing numbers of PCs, Macintoshes and UN*X workstations. This paper will concentrate on the solution adopted at CERN for the management of the rapidly increasing number of desktop PCs in use in all parts of the laboratory. (author)
The Ramsey numbers of large cycles versus small wheels
Surahmat,; Baskoro, E.T.; Broersma, H.J.
2004-01-01
For two given graphs G and H, the Ramsey number R(G;H) is the smallest positive integer N such that for every graph F of order N the following holds: either F contains G as a subgraph or the complement of F contains H as a subgraph. In this paper, we determine the Ramsey number R(Cn;Wm) for m = 4
Turbulent flows at very large Reynolds numbers: new lessons learned
International Nuclear Information System (INIS)
Barenblatt, G I; Prostokishin, V M; Chorin, A J
2014-01-01
The universal (Reynolds-number-independent) von Kármán–Prandtl logarithmic law for the velocity distribution in the basic intermediate region of a turbulent shear flow is generally considered to be one of the fundamental laws of engineering science and is taught universally in fluid mechanics and hydraulics courses. We show here that this law is based on an assumption that cannot be considered to be correct and which does not correspond to experiment. Nor is Landau's derivation of this law quite correct. In this paper, an alternative scaling law explicitly incorporating the influence of the Reynolds number is discussed, as is the corresponding drag law. The study uses the concept of intermediate asymptotics and that of incomplete similarity in the similarity parameter. Yakov Borisovich Zeldovich played an outstanding role in the development of these ideas. This work is a tribute to his glowing memory. (100th anniversary of the birth of Ya. B. Zeldovich)
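In wall units, the two competing descriptions at issue can be written side by side (standard forms; the power-law constants shown are those proposed in Barenblatt's earlier work, quoted here as an assumption):

```latex
% universal von Karman--Prandtl log law (Re-independent):
u^{+} = \frac{1}{\kappa}\ln y^{+} + B, \qquad \kappa \approx 0.4,\; B \approx 5
% Reynolds-number-dependent scaling law (incomplete similarity):
u^{+} = \left(\tfrac{1}{\sqrt{3}}\ln\mathrm{Re} + \tfrac{5}{2}\right)
        (y^{+})^{\,3/(2\ln\mathrm{Re})}
% the exponent 3/(2 ln Re) vanishes only in the limit Re -> infinity,
% so the log law is never reached at any finite Reynolds number.
```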
Chaotic scattering: the supersymmetry method for large number of channels
International Nuclear Information System (INIS)
Lehmann, N.; Saher, D.; Sokolov, V.V.; Sommers, H.J.
1995-01-01
We investigate a model of chaotic resonance scattering based on the random matrix approach. The Hermitian part of the effective Hamiltonian of resonance states is taken from the GOE, whereas the amplitudes of coupling to decay channels are considered either random or fixed. A new version of the supersymmetry method is worked out to determine analytically the distribution of poles of the S-matrix in the complex energy plane as well as the mean value and two-point correlation function of its elements when the number of channels scales with the number of resonance states. Analytical formulae are compared with numerical simulations. All results obtained coincide in both models provided that the ratio m of the numbers of channels and resonances is small enough, and remain qualitatively similar for larger values of m. The relation between the pole distribution and the fluctuations in scattering is discussed. It is shown in particular that the clouds of poles of the S-matrix in the complex energy plane are separated from the real axis by a finite gap Γ_g which determines the correlation length in the scattering fluctuations and leads to the exponential asymptotics of the decay law of a complicated intermediate state. ((orig.))
Gentile statistics with a large maximum occupation number
International Nuclear Information System (INIS)
Dai Wusheng; Xie Mi
2004-01-01
In Gentile statistics the maximum occupation number can take on unrestricted integers, 1 < n < ∞. We show that the Bose-Einstein case is not recovered from Gentile statistics as n goes to N. Attention is also concentrated on the contribution of the ground state, which was ignored in the related literature. The thermodynamic behavior of a ν-dimensional Gentile ideal gas of particles of dispersion E = p^s/2m, where ν and s are arbitrary, is analyzed in detail. Moreover, we provide an alternative derivation of the partition function for Gentile statistics.
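For a single mode, the Gentile grand partition function is the finite geometric sum Z_n(x) = Σ_{k=0}^{n} x^k with x = exp(-β(ε-μ)), and the mean occupation follows by logarithmic differentiation. The short sketch below is our own illustration (not the paper's derivation) of how the intermediate statistics interpolates between the Fermi-Dirac (n = 1) and Bose-Einstein (n → ∞) limits:

```python
def gentile_occupation(x, n):
    """Mean occupation of a single mode in Gentile statistics with maximum
    occupation n, where x = exp(-beta*(eps - mu)):
        <k> = x d/dx ln Z_n(x) = x/(1-x) - (n+1) x^(n+1) / (1 - x^(n+1))."""
    return x / (1 - x) - (n + 1) * x ** (n + 1) / (1 - x ** (n + 1))

x = 0.5
print(gentile_occupation(x, 1))       # n = 1: Fermi-Dirac, x/(1+x) = 1/3
print(x / (1 + x))
print(gentile_occupation(x, 10**6))   # n -> infinity: Bose-Einstein, x/(1-x) = 1
print(x / (1 - x))
```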
Directory of Open Access Journals (Sweden)
Morecroft Michael D
2001-07-01
Abstract Background The Resource Dispersion Hypothesis (RDH) proposes a mechanism for the passive formation of social groups where resources are dispersed, even in the absence of any benefits of group living per se. Despite supportive modelling, it lacks empirical testing. The RDH predicts that, rather than Territory Size (TS) increasing monotonically with Group Size (GS) to account for increasing metabolic needs, TS is constrained by the dispersion of resource patches, whereas GS is independently limited by their richness. We conducted multiple-year tests of these predictions using data from the long-term study of badgers Meles meles in Wytham Woods, England. The study has long failed to identify direct benefits from group living and, consequently, alternative explanations for their large group sizes have been sought. Results TS was not consistently related to resource dispersion, nor was GS consistently related to resource richness. Results differed according to data groupings and whether territories were mapped using minimum convex polygons or traditional methods. Habitats differed significantly in resource availability, but there was also evidence that food resources may be spatially aggregated within habitat types as well as between them. Conclusions This is, we believe, the largest ever test of the RDH and builds on the long-term project that initiated part of the thinking behind the hypothesis. Support for predictions was mixed and depended on year and the method used to map territory borders. We suggest that within-habitat patchiness, as well as model assumptions, should be further investigated for improved tests of the RDH in the future.
Gandiwa, E.
2013-01-01
Wildlife conservation in terrestrial ecosystems requires an understanding of processes influencing population sizes. Top-down and bottom-up processes are important in large herbivore population dynamics, with strength of these processes varying spatially and temporally. However, up until
Newbery, David M; Chuyong, George B; Zimmermann, Lukas
2006-01-01
Mast fruiting is a distinctive reproductive trait in trees. This rain forest study, at a nutrient-poor site with a seasonal climate in tropical Africa, provides new insights into the causes of this mode of phenological patterning. At Korup, Cameroon, 150 trees of the large, ectomycorrhizal caesalp, Microberlinia bisulcata, were recorded almost monthly for leafing, flowering and fruiting during 1995-2000. The series was extended to 1988-2004 with less detailed data. Individual transitions in phenology were analysed. Masting occurred when the dry season before fruiting was drier, and the one before that was wetter, than average. Intervals between events were usually 2 or 3 yr. Masting was associated with early leaf exchange, followed by mass flowering, and was highly synchronous in the population. Trees at higher elevation showed more fruiting. Output declined between 1995 and 2000. Mast fruiting in M. bisulcata appears to be driven by climate variation and is regulated by internal tree processes. The resource-limitation hypothesis was supported. An 'alternative bearing' system seems to underlie masting. That ectomycorrhizal habit facilitates masting in trees is strongly implied.
Brandt, Mark J
2013-05-01
System justification theory (SJT) posits that members of low-status groups are more likely to see their social systems as legitimate than members of high-status groups because members of low-status groups experience a sense of dissonance between system motivations and self/group motivations (Jost, Pelham, Sheldon, & Sullivan, 2003). The author examined the status-legitimacy hypothesis using data from 3 representative sets of data from the United States (American National Election Studies and General Social Surveys) and throughout the world (World Values Survey; total N across studies = 151,794). Multilevel models revealed that the average effect across years in the United States and countries throughout the world was most often directly contrary to the status-legitimacy hypothesis or was practically zero. In short, the status-legitimacy effect is not a robust phenomenon. Two theoretically relevant moderator variables (inequality and civil liberties) were also tested, revealing weak evidence, null evidence, or contrary evidence to the dissonance-inspired status-legitimacy hypothesis. In sum, the status-legitimacy effect is not robust and is unlikely to be the result of dissonance. These results are used to discuss future directions for research, the current state of SJT, and the interpretation of theoretically relevant but contrary and null results.
Energy Technology Data Exchange (ETDEWEB)
Sankaranarayanan, K [Rijksuniversiteit Leiden (Netherlands). Lab. voor Stralengenetica en Chemische Mutagenese; Cohen (J.A.) Instituut voor Radiopathologie en Stralenbescherming, Leiden (Netherlands)]
1976-06-01
The arm number hypothesis proposed by Brewen and colleagues in 1973 has been examined in the light of information thus far available from mammalian studies. In experiments with peripheral blood lymphocytes (radiation in vitro), a linear relationship between dicentric yield and the effective chromosome arm number of the species was obtained in the mouse, Chinese hamster, goat, sheep, pig, wallaby and man. However, the data are not consistent with such a relationship in several primate species (marmoset, rhesus monkey, cynomolgus monkey, squirrel monkey and the slow loris), the cat and the dog. In the rabbit, the data are conflicting. In the mouse and the Chinese hamster the frequencies of reciprocal translocations recorded in spermatocytes descended from irradiated spermatogonia are in line with the expectation based on the arm number hypothesis, whereas in the golden hamster, rabbit and the rhesus they are not. In man and the marmoset, the limited data are not inconsistent with a 2-fold higher sensitivity of these species relative to the mouse although they do not rule out a difference as high as 4-fold. In the guinea-pig, the situation is unclear. New data on the transmission of reciprocal translocations in mice suggest that the frequency in the F1 progeny may be close to one-quarter of that recorded in the spermatocytes of the irradiated fathers (spermatogonial irradiation) at an exposure level of 150 R, whereas at higher exposures, the reduction factor is about one-eighth, the latter being in line with the earlier finding. All these results taken together suggest that inter-specific extrapolation from the radiosensitivity of somatic cells (to dicentric induction) to that of germ cells (to translocation induction) is fraught with uncertainty at present. Certain aspects that need to be studied in more detail in the context of induced chromosome aberrations are discussed.
Characterization of General TCP Traffic under a Large Number of Flows Regime
National Research Council Canada - National Science Library
Tinnakornsrisuphap, Peerapol; La, Richard J; Makowski, Armand M
2002-01-01
.... Accurate traffic modeling of a large number of short-lived TCP flows is extremely difficult due to the interaction between session, transport, and network layers, and the explosion of the size...
International Nuclear Information System (INIS)
Mahlstedt, J.
1977-01-01
The article deals with practical aspects of establishing a TSH-RIA for patients, with particular regard to predetermined quality criteria. Methodological suggestions are made for medium to large numbers of samples with the target of reducing monotonous precision working steps by means of simple aids. The quality criteria required are well met, while the test procedure is well adapted to the rhythm of work and may be carried out without loss of precision even with large numbers of samples. (orig.)
Similarities between 2D and 3D convection for large Prandtl number
Indian Academy of Sciences (India)
2016-06-18
Using simulations of Rayleigh-Bénard convection (RBC), we perform a comparative study of the spectra and fluxes of energy and entropy, and the scaling of large-scale quantities, for large and infinite Prandtl numbers in two (2D) and three (3D) dimensions. We observe close ...
Very Large Data Volumes Analysis of Collaborative Systems with Finite Number of States
Ivan, Ion; Ciurea, Cristian; Pavel, Sorin
2010-01-01
The collaborative system with finite number of states is defined. A very large database is structured. Operations on large databases are identified. Repetitive procedures for collaborative systems operations are derived. The efficiency of such procedures is analyzed. (Contains 6 tables, 5 footnotes and 3 figures.)
Evidence for Knowledge of the Syntax of Large Numbers in Preschoolers
Barrouillet, Pierre; Thevenot, Catherine; Fayol, Michel
2010-01-01
The aim of this study was to provide evidence for knowledge of the syntax governing the verbal form of large numbers in preschoolers long before they are able to count up to these numbers. We reasoned that if such knowledge exists, it should facilitate the maintenance in short-term memory of lists of lexical primitives that constitute a number…
Greenfield, P E; Roberts, D H; Burke, B F
1980-05-02
A full 12-hour synthesis at 6-centimeter wavelength with the Very Large Array confirms the major features previously reported for the double quasar 0957+561. In addition, the existence of radio jets apparently associated with both quasars is demonstrated. Gravitational lens models are now favored on the basis of recent optical observations, and the radio jets place severe constraints on such models. Further radio observations of the double quasar are needed to establish the expected relative time delay in variations between the images.
Fuller, Nathaniel J.; Licata, Nicholas A.
2018-05-01
Obtaining a detailed understanding of the physical interactions between a cell and its environment often requires information about the flow of fluid surrounding the cell. Cells must be able to effectively absorb and discard material in order to survive. Strategies for nutrient acquisition and toxin disposal, which have been evolutionarily selected for their efficacy, should reflect knowledge of the physics underlying this mass transport problem. Motivated by these considerations, in this paper we discuss the results from an undergraduate research project on the advection-diffusion equation at small Reynolds number and large Péclet number. In particular, we consider the problem of mass transport for a Stokesian spherical swimmer. We approach the problem numerically and analytically through a rescaling of the concentration boundary layer. A biophysically motivated first-passage problem for the absorption of material by the swimming cell demonstrates quantitative agreement between the numerical and analytical approaches. We conclude by discussing the connections between our results and the design of smart toxin disposal systems.
Secret Sharing Schemes with a large number of players from Toric Varieties
DEFF Research Database (Denmark)
Hansen, Johan P.
A general theory for constructing linear secret sharing schemes over a finite field $\\Fq$ from toric varieties is introduced. The number of players can be as large as $(q-1)^r-1$ for $r\\geq 1$. We present general methods for obtaining the reconstruction and privacy thresholds as well as conditions for multiplication on the associated secret sharing schemes. In particular we apply the method on certain toric surfaces. The main results are ideal linear secret sharing schemes where the number of players can be as large as $(q-1)^2-1$. We determine bounds for the reconstruction and privacy thresholds
Klewicki, J C; Chini, G P; Gibson, J F
2017-03-13
Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier-Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'.
Energy Technology Data Exchange (ETDEWEB)
Kupavskii, A B; Raigorodskii, A M [M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Moscow (Russian Federation)
2013-10-31
We investigate in detail some properties of distance graphs constructed on the integer lattice. Such graphs find wide applications in problems of combinatorial geometry, in particular, such graphs were employed to answer Borsuk's question in the negative and to obtain exponential estimates for the chromatic number of the space. This work is devoted to the study of the number of cliques and the chromatic number of such graphs under certain conditions. Constructions of sequences of distance graphs are given, in which the graphs have unit length edges and contain a large number of triangles that lie on a sphere of radius 1/√3 (which is the minimum possible). At the same time, the chromatic numbers of the graphs depend exponentially on their dimension. The results of this work strengthen and generalize some of the results obtained in a series of papers devoted to related issues. Bibliography: 29 titles.
ON AN EXPONENTIAL INEQUALITY AND A STRONG LAW OF LARGE NUMBERS FOR MONOTONE MEASURES
Czech Academy of Sciences Publication Activity Database
Agahi, H.; Mesiar, Radko
2014-01-01
Roč. 50, č. 5 (2014), s. 804-813 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Choquet expectation * a strong law of large numbers * exponential inequality * monotone probability Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/E/mesiar-0438052.pdf
Strong Laws of Large Numbers for Arrays of Rowwise NA and LNQD Random Variables
Directory of Open Access Journals (Sweden)
Jiangfeng Wang
2011-01-01
Some strong laws of large numbers and strong convergence properties for arrays of rowwise negatively associated and linearly negative quadrant dependent random variables are obtained. The results obtained not only generalize the result of Hu and Taylor to negatively associated and linearly negative quadrant dependent random variables, but also improve it.
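A quick numerical illustration of a strong law for dependent variables: antithetic pairs (U, 1-U) are a textbook example of negatively associated random variables, and the running mean still settles at E[U] = 1/2. The sketch below is ours, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Interleave antithetic pairs (U, 1-U): a standard negatively associated
# sequence. The strong law still drives the running mean to E[U] = 0.5.
u = rng.random(50_000)
x = np.empty(100_000)
x[0::2], x[1::2] = u, 1.0 - u

running_mean = np.cumsum(x) / np.arange(1, x.size + 1)
for n in (10, 100, 10_000, 100_000):
    print(n, running_mean[n - 1])   # approaches 0.5 as n grows
```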
The three-large-primes variant of the number field sieve
S.H. Cavallar
2002-01-01
The Number Field Sieve (NFS) is the asymptotically fastest known factoring algorithm for large integers. This method was proposed by John Pollard in 1988. Since then several variants have been implemented with the objective of improving the siever, which is the most time-consuming part of
SECRET SHARING SCHEMES WITH STRONG MULTIPLICATION AND A LARGE NUMBER OF PLAYERS FROM TORIC VARIETIES
DEFF Research Database (Denmark)
Hansen, Johan Peder
2017-01-01
This article considers Massey's construction for building linear secret sharing schemes from toric varieties over a finite field $\\Fq$ with $q$ elements. The number of players can be as large as $(q-1)^r-1$ for $r\\geq 1$. The schemes have strong multiplication; such schemes can be utilized in ...
Optimal number of coarse-grained sites in different components of large biomolecular complexes.
Sinitskiy, Anton V; Saunders, Marissa G; Voth, Gregory A
2012-07-26
The computational study of large biomolecular complexes (molecular machines, cytoskeletal filaments, etc.) is a formidable challenge facing computational biophysics and biology. To achieve biologically relevant length and time scales, coarse-grained (CG) models of such complexes usually must be built and employed. One of the important early stages in this approach is to determine an optimal number of CG sites in different constituents of a complex. This work presents a systematic approach to this problem. First, a universal scaling law is derived and numerically corroborated for the intensity of the intrasite (intradomain) thermal fluctuations as a function of the number of CG sites. Second, this result is used for derivation of the criterion for the optimal number of CG sites in different parts of a large multibiomolecule complex. In the zeroth-order approximation, this approach validates the empirical rule of taking one CG site per fixed number of atoms or residues in each biomolecule, previously widely used for smaller systems (e.g., individual biomolecules). The first-order corrections to this rule are derived and numerically checked by the case studies of the Escherichia coli ribosome and Arp2/3 actin filament junction. In different ribosomal proteins, the optimal number of amino acids per CG site is shown to differ by a factor of 3.5, and an even wider spread may exist in other large biomolecular complexes. Therefore, the method proposed in this paper is valuable for the optimal construction of CG models of such complexes.
Calculation of large Reynolds number two-dimensional flow using discrete vortices with random walk
International Nuclear Information System (INIS)
Milinazzo, F.; Saffman, P.G.
1977-01-01
The numerical calculation of two-dimensional rotational flow at large Reynolds number is considered. The method of replacing a continuous distribution of vorticity by a finite number, N, of discrete vortices is examined, where the vortices move under their mutually induced velocities plus a random component to simulate effects of viscosity. The accuracy of the method is studied by comparison with the exact solution for the decay of a circular vortex. It is found, and analytical arguments are produced in support, that the quantitative error is significant unless N is large compared with a characteristic Reynolds number. The mutually induced velocities are calculated by both direct summation and by the "cloud in cell" technique. The latter method is found to produce comparable error and to be much faster.
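The method the paper analyzes is easy to sketch: each vortex moves with the regularized Biot-Savart velocity induced by the others, plus a Gaussian random walk of variance 2νΔt per component to mimic viscous diffusion. The minimal Python version below uses our own parameters and regularization choice, not the authors', and runs the circular-vortex decay used as the accuracy benchmark:

```python
import numpy as np

def step(pos, gamma, dt, nu, rng, eps=1e-3):
    """One step of a 2D random-walk vortex method: desingularised
    Biot-Savart velocities plus a Gaussian walk of variance 2*nu*dt."""
    dx = pos[:, 0][:, None] - pos[:, 0][None, :]
    dy = pos[:, 1][:, None] - pos[:, 1][None, :]
    r2 = dx**2 + dy**2 + eps**2               # eps regularises the self-term
    u = -(gamma[None, :] * dy / r2).sum(axis=1) / (2 * np.pi)
    v = (gamma[None, :] * dx / r2).sum(axis=1) / (2 * np.pi)
    walk = rng.normal(scale=np.sqrt(2 * nu * dt), size=pos.shape)
    return pos + dt * np.column_stack([u, v]) + walk

# Decaying circular vortex: N discrete vortices seeded in a small disc.
N, nu, dt = 400, 1e-3, 0.01
rng = np.random.default_rng(1)
r = 0.1 * np.sqrt(rng.random(N))
th = 2 * np.pi * rng.random(N)
pos = np.column_stack([r * np.cos(th), r * np.sin(th)])
gamma = np.full(N, 1.0 / N)                   # total circulation 1

for _ in range(100):
    pos = step(pos, gamma, dt, nu, rng)
print("RMS radius after t = 1:", np.sqrt((pos**2).sum(axis=1).mean()))
```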
Break down of the law of large numbers in Josephson junction series arrays
International Nuclear Information System (INIS)
Dominguez, D.; Cerdeira, H.A.
1995-01-01
We study underdamped Josephson junction series arrays that are globally coupled through a resistive shunting load and driven by an rf bias current. We find that they can be an experimental realization of many phenomena currently studied in globally coupled logistic maps. We find coherent, ordered, partially ordered and turbulent phases in the IV characteristics of the array. The ordered phase corresponds to giant Shapiro steps. In the turbulent phase there is a saturation of the broad band noise for a large number of junctions. This corresponds to a breakdown of the law of large numbers as seen in globally coupled maps. Coexisting with this, we find an emergence of novel pseudo-steps in the IV characteristics. This effect can be experimentally distinguished from the true Shapiro steps, which do not have broad band noise emission. (author). 21 refs, 5 figs
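The analogy with globally coupled logistic maps can be checked directly: if the law of large numbers held, the variance of the mean field would fall off as 1/N, whereas in the turbulent phase it is reported to saturate. A minimal sketch follows; the parameter values are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

def mean_field_variance(N, a=1.99, eps=0.1, steps=4000):
    """Variance of the mean field h_n = <f(x_n)> for N globally coupled
    logistic maps x_{n+1}(i) = (1-eps) f(x_n(i)) + eps h_n, f(x) = 1 - a x^2."""
    rng = np.random.default_rng(2)
    x = rng.uniform(-1, 1, N)
    h = np.empty(steps)
    for n in range(steps):
        fx = 1.0 - a * x**2
        h[n] = fx.mean()
        x = (1 - eps) * fx + eps * h[n]
    return h[500:].var()          # discard the transient

# Under the law of large numbers the variance would scale like 1/N;
# in the turbulent phase it is observed to saturate instead.
for N in (100, 1000, 10_000):
    print(N, mean_field_variance(N))
```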
The holographic dual of a Riemann problem in a large number of dimensions
Energy Technology Data Exchange (ETDEWEB)
Herzog, Christopher P.; Spillane, Michael [C.N. Yang Institute for Theoretical Physics, Department of Physics and Astronomy,Stony Brook University, Stony Brook, NY 11794 (United States); Yarom, Amos [Department of Physics, Technion,Haifa 32000 (Israel)
2016-08-22
We study properties of a non equilibrium steady state generated when two heat baths are initially in contact with one another. The dynamics of the system we study are governed by holographic duality in a large number of dimensions. We discuss the “phase diagram” associated with the steady state, the dual, dynamical, black hole description of this problem, and its relation to the fluid/gravity correspondence.
Phases of a stack of membranes in a large number of dimensions of configuration space
Borelli, M. E.; Kleinert, H.
2001-05-01
The phase diagram of a stack of tensionless membranes with nonlinear curvature energy and vertical harmonic interaction is calculated exactly in a large number of dimensions of configuration space. At low temperatures, the system forms a lamellar phase with spontaneously broken translational symmetry in the vertical direction. At a critical temperature, the stack disorders vertically in a meltinglike transition. The critical temperature is determined as a function of the interlayer separation l.
Early stage animal hoarders: are these owners of large numbers of adequately cared for cats?
Ramos, D.; da Cruz, N. O.; Ellis, Sarah; Hernandez, J. A. E.; Reche-Junior, A.
2013-01-01
Animal hoarding is a spectrum-based condition in which hoarders are often reported to have had normal and appropriate pet-keeping habits in childhood and early adulthood. Historically, research has focused largely on well established clinical animal hoarders with little work targeted towards the onset and development of animal hoarding. This study investigated whether a Brazilian population of owners of what might typically be considered an excessive number (20 or more) of cats were more like...
Loss of locality in gravitational correlators with a large number of insertions
Ghosh, Sudip; Raju, Suvrat
2017-09-01
We review lessons from the AdS/CFT correspondence that indicate that the emergence of locality in quantum gravity is contingent upon considering observables with a small number of insertions. Correlation functions, where the number of insertions scales with a power of the central charge of the CFT, are sensitive to nonlocal effects in the bulk theory, which arise from a combination of the effects of the bulk Gauss law and a breakdown of perturbation theory. To examine whether a similar effect occurs in flat space, we consider the scattering of massless particles in the bosonic string and the superstring in the limit, where the number of external particles, n, becomes very large. We use estimates of the volume of the Weil-Petersson moduli space of punctured Riemann surfaces to argue that string amplitudes grow factorially in this limit. We verify this factorial behavior through an extensive numerical analysis of string amplitudes at large n. Our numerical calculations rely on the observation that, in the large n limit, the string scattering amplitude localizes on the Gross-Mende saddle points, even though individual particle energies are small. This factorial growth implies the breakdown of string perturbation theory for n ~ (M_pl/E)^(d-2) in d dimensions, where E is the typical individual particle energy. We explore the implications of this breakdown for the black hole information paradox. We show that the loss of locality suggested by this breakdown is precisely sufficient to resolve the cloning and strong subadditivity paradoxes.
International Nuclear Information System (INIS)
Novak Pintarič, Zorka; Kravanja, Zdravko
2015-01-01
This paper presents a robust computational methodology for the synthesis and design of flexible HENs (Heat Exchanger Networks) having large numbers of uncertain parameters. This methodology combines several heuristic methods which progressively lead to a flexible HEN design at a specific level of confidence. During the first step, a HEN topology is generated under nominal conditions, followed by determining those points critical for flexibility. A significantly reduced multi-scenario model for flexible HEN design is formulated at the nominal point with the flexibility constraints at the critical points. The optimal design obtained is tested by stochastic Monte Carlo optimization and the flexibility index through solving one-scenario problems within a loop. The methodology presented here is novel in the enormous reduction it achieves in the number of scenarios, and hence in computational effort, for HEN design problems. Despite several simplifications, the capability of designing flexible HENs with large numbers of uncertain parameters, which are typical throughout industry, is not compromised. An illustrative case study is presented for flexible HEN synthesis comprising 42 uncertain parameters. - Highlights: • Methodology for HEN (Heat Exchanger Network) design under uncertainty is presented. • The main benefit is solving HENs having large numbers of uncertain parameters. • Drastically reduced multi-scenario HEN design problem is formulated through several steps. • Flexibility of HEN is guaranteed at a specific level of confidence.
A full picture of large lepton number asymmetries of the Universe
Energy Technology Data Exchange (ETDEWEB)
Barenboim, Gabriela [Departament de Física Teòrica and IFIC, Universitat de València-CSIC, C/ Dr. Moliner, 50, Burjassot, E-46100 Spain (Spain); Park, Wan-Il, E-mail: Gabriela.Barenboim@uv.es, E-mail: wipark@jbnu.ac.kr [Department of Science Education (Physics), Chonbuk National University, 567 Baekje-daero, Jeonju, 561-756 (Korea, Republic of)
2017-04-01
A large lepton number asymmetry of O(0.1-1) in the present Universe might not only be allowed but also be necessary for consistency among cosmological data. We show that, if a sizeable lepton number asymmetry were produced before the electroweak phase transition, the requirement of not producing too much baryon number asymmetry through sphaleron processes forces the high-scale lepton number asymmetry to be larger than about 0.3. Therefore a mild entropy release causing O(10-100) suppression of pre-existing particle density should take place, when the background temperature of the Universe is around T = O(10^-2-10^2) GeV, for a large but experimentally consistent asymmetry to be present today. We also show that such a mild entropy production can be obtained by the late-time decays of the saxion, constraining the parameters of the Peccei-Quinn sector such as the mass and the vacuum expectation value of the saxion field to be m_φ ≳ O(10) TeV and φ_0 ≳ O(10^14) GeV, respectively.
Rousis, Nikolaos I; Bade, Richard; Bijlsma, Lubertus; Zuccato, Ettore; Sancho, Juan V; Hernandez, Felix; Castiglioni, Sara
2017-07-01
Assessing the presence of pesticides in environmental waters is particularly challenging because of the huge number of substances used which may end up in the environment. Furthermore, the occurrence of pesticide transformation products (TPs) and/or metabolites makes this task even harder. Most studies dealing with the determination of pesticides in water include only a small number of analytes and in many cases no TPs. The present study applied a screening method for the determination of a large number of pesticides and TPs in wastewater (WW) and surface water (SW) from Spain and Italy. Liquid chromatography coupled to high-resolution mass spectrometry (HRMS) was used to screen a database of 450 pesticides and TPs. Detection and identification were based on specific criteria, i.e. mass accuracy, fragmentation, and comparison of retention times when reference standards were available, or a retention time prediction model when standards were not available. Seventeen pesticides and TPs from different classes (fungicides, herbicides and insecticides) were found in WW in Italy and Spain, and twelve in SW. Generally, in both countries more compounds were detected in effluent WW than in influent WW, and in SW than WW. This might be due to the analytical sensitivity in the different matrices, but also to the presence of multiple sources of pollution. HRMS proved a good screening tool to determine a large number of substances in water and identify some priority compounds for further quantitative analysis.
Directory of Open Access Journals (Sweden)
Emilie Sapin
We recently discovered, using Fos immunostaining, that the tuberal and mammillary hypothalamus contain a massive population of neurons specifically activated during paradoxical sleep (PS) hypersomnia. We further showed that some of the activated neurons of the tuberal hypothalamus express the melanin concentrating hormone (MCH) neuropeptide and that icv injection of MCH induces a strong increase in PS quantity. However, the chemical nature of the majority of the neurons activated during PS had not been characterized. To determine whether these neurons are GABAergic, we combined in situ hybridization of GAD67 mRNA with immunohistochemical detection of Fos in control, PS-deprived and PS-hypersomniac rats. We found that 74% of the very large population of Fos-labeled neurons located in the tuberal hypothalamus after PS hypersomnia were GAD-positive. We further demonstrated, combining MCH immunohistochemistry and GAD67 in situ hybridization, that 85% of the MCH neurons were also GAD-positive. Finally, based on the number of Fos-ir/GAD+, Fos-ir/MCH+, and GAD+/MCH+ double-labeled neurons counted from three sets of double-staining, we uncovered that around 80% of the large number of the Fos-ir/GAD+ neurons located in the tuberal hypothalamus after PS hypersomnia do not contain MCH. Based on these and previous results, we propose that the non-MCH Fos/GABAergic neuronal population could be involved in PS induction and maintenance, while the Fos/MCH/GABAergic neurons could be involved in the homeostatic regulation of PS. Further investigations will be needed to corroborate this original hypothesis.
Impact factors for Reggeon-gluon transition in N=4 SYM with large number of colours
Energy Technology Data Exchange (ETDEWEB)
Fadin, V.S., E-mail: fadin@inp.nsk.su [Budker Institute of Nuclear Physics of SD RAS, 630090 Novosibirsk (Russian Federation); Novosibirsk State University, 630090 Novosibirsk (Russian Federation); Fiore, R., E-mail: roberto.fiore@cs.infn.it [Dipartimento di Fisica, Università della Calabria, and Istituto Nazionale di Fisica Nucleare, Gruppo collegato di Cosenza, Arcavacata di Rende, I-87036 Cosenza (Italy)
2014-06-27
We calculate impact factors for Reggeon-gluon transition in supersymmetric Yang-Mills theory with four supercharges at large number of colours N_c. In the next-to-leading order impact factors are not uniquely defined and must accord with BFKL kernels and energy scales. We obtain the impact factor corresponding to the kernel and the energy evolution parameter, which is invariant under Möbius transformation in momentum space, and show that it is also Möbius invariant up to terms taken into account in the BDS ansatz.
Law of large numbers and central limit theorem for randomly forced PDE's
Shirikyan, A
2004-01-01
We consider a class of dissipative PDE's perturbed by an external random force. Under the condition that the distribution of perturbation is sufficiently non-degenerate, a strong law of large numbers (SLLN) and a central limit theorem (CLT) for solutions are established and the corresponding rates of convergence are estimated. It is also shown that the estimates obtained are close to being optimal. The proofs are based on the property of exponential mixing for the problem in question and some abstract SLLN and CLT for mixing-type Markov processes.
On the Convergence and Law of Large Numbers for the Non-Euclidean Lp -Means
Directory of Open Access Journals (Sweden)
George Livadiotis
2017-05-01
This paper describes and proves two important theorems that compose the Law of Large Numbers for the non-Euclidean L_p-means, known to be true for the Euclidean L_2-means: Let the L_p-mean estimator be the specific functional that estimates the L_p-mean of N independent and identically distributed random variables; then, (i) the expectation value of the L_p-mean estimator equals the mean of the distributions of the random variables; and (ii) the limit N → ∞ of the L_p-mean estimator also equals the mean of the distributions.
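The L_p-mean of a sample can be computed as the minimizer of Σ_i |x_i - m|^p, which is how the scipy-based sketch below (our own illustration, not the paper's construction) estimates it. For a symmetric distribution the L_p-mean coincides with the ordinary mean, so the estimator should approach μ as N grows:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lp_mean(x, p):
    """L_p-mean estimator: the m minimising sum_i |x_i - m|^p
    (p = 2 recovers the arithmetic mean, p = 1 the sample median)."""
    res = minimize_scalar(lambda m: (np.abs(x - m) ** p).sum(),
                          bounds=(float(x.min()), float(x.max())),
                          method="bounded")
    return res.x

# Symmetric (Gaussian) data centred at mu = 2: the L_p-mean estimator
# should converge to 2 with growing sample size.
rng = np.random.default_rng(3)
for n in (100, 10_000, 1_000_000):
    x = rng.normal(loc=2.0, size=n)
    print(n, lp_mean(x, p=1.5))
```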
Superposition of elliptic functions as solutions for a large number of nonlinear equations
International Nuclear Information System (INIS)
Khare, Avinash; Saxena, Avadh
2014-01-01
For a large number of nonlinear equations, both discrete and continuum, we demonstrate a kind of linear superposition. We show that whenever a nonlinear equation admits solutions in terms of both Jacobi elliptic functions cn(x, m) and dn(x, m) with modulus m, then it also admits solutions in terms of their sum as well as difference. We have checked this in the case of several nonlinear equations such as the nonlinear Schrödinger equation, MKdV, a mixed KdV-MKdV system, a mixed quadratic-cubic nonlinear Schrödinger equation, the Ablowitz-Ladik equation, the saturable nonlinear Schrödinger equation, λϕ^4, the discrete MKdV as well as for several coupled field equations. Further, for a large number of nonlinear equations, we show that whenever a nonlinear equation admits a periodic solution in terms of dn^2(x, m), it also admits solutions in terms of dn^2(x, m) ± √m cn(x, m) dn(x, m), even though cn(x, m)dn(x, m) is not a solution of these nonlinear equations. Finally, we also obtain superposed solutions of various forms for several coupled nonlinear equations.
Law of Large Numbers: the Theory, Applications and Technology-based Education.
Dinov, Ivo D; Christou, Nicolas; Gould, Robert
2009-03-01
Modern approaches for technology-based blended education utilize a variety of recently developed novel pedagogical, computational and network resources. Such attempts employ technology to deliver integrated, dynamically-linked, interactive-content and heterogeneous learning environments, which may improve student comprehension and information retention. In this paper, we describe one such innovative effort of using technological tools to expose students in probability and statistics courses to the theory, practice and usability of the Law of Large Numbers (LLN). We base our approach on integrating pedagogical instruments with the computational libraries developed by the Statistics Online Computational Resource (www.SOCR.ucla.edu). To achieve this merger we designed a new interactive Java applet and a corresponding demonstration activity that illustrate the concept and the applications of the LLN. The LLN applet and activity have common goals - to provide graphical representation of the LLN principle, build lasting student intuition and present the common misconceptions about the law of large numbers. Both the SOCR LLN applet and activity are freely available online to the community to test, validate and extend (Applet: http://socr.ucla.edu/htmls/exp/Coin_Toss_LLN_Experiment.html, and Activity: http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_LLN).
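In the spirit of the applet described above, a few lines of Python reproduce both the LLN itself and the misconception the activity targets: the proportion of heads converges to 1/2, while the raw excess of heads over tails typically keeps wandering. This demo is ours, not the SOCR implementation:

```python
import numpy as np

# Coin-toss LLN demonstration: the running proportion of heads approaches
# p = 0.5, but the difference heads - tails does not shrink to zero.
rng = np.random.default_rng(4)
tosses = rng.integers(0, 2, size=100_000)   # 1 = heads, 0 = tails
heads = np.cumsum(tosses)
for k in (10, 1000, 100_000):
    print(k, "proportion:", heads[k - 1] / k,
          "heads - tails:", 2 * heads[k - 1] - k)
```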
International Nuclear Information System (INIS)
Patil, Sunil; Tafti, Danesh
2012-01-01
Highlights: • Large eddy simulation. • Wall layer modeling. • Synthetic inlet turbulence. • Swirl flows. - Abstract: Large eddy simulations of complex high Reynolds number flows are carried out with the near wall region being modeled with a zonal two layer model. A novel formulation for solving the turbulent boundary layer equation for the effective tangential velocity in a generalized co-ordinate system is presented and applied in the near wall zonal treatment. This formulation reduces the computational time in the inner layer significantly compared to the conventional two layer formulations present in the literature and is most suitable for complex geometries involving body fitted structured and unstructured meshes. The cost effectiveness and accuracy of the proposed wall model, used with the synthetic eddy method (SEM) to generate inlet turbulence, is investigated in turbulent channel flow, flow over a backward facing step, and confined swirling flows at moderately high Reynolds numbers. Predictions are compared with available DNS, experimental LDV data, as well as wall resolved LES. In all cases, there is at least an order of magnitude reduction in computational cost with no significant loss in prediction accuracy.
Conformal window in QCD for large numbers of colors and flavors
International Nuclear Information System (INIS)
Zhitnitsky, Ariel R.
2014-01-01
We conjecture that the phase transitions in QCD at large number of colors, N ≫ 1, are triggered by a drastic change in the instanton density. As a result, all physical observables also experience a sharp modification in their θ behavior. This conjecture is motivated by the holographic model of QCD, where the confinement-deconfinement phase transition indeed happens precisely at the temperature T = T_c where the θ-dependence of the vacuum energy experiences a sudden change in behavior: from N^2 cos(θ/N) at T < T_c to cos θ exp(−N) at T > T_c. This conjecture is also supported by recent lattice studies. We employ this conjecture to study a possible phase transition as a function of κ ≡ N_f/N from the confinement to the conformal phase in the Veneziano limit N_f ∼ N, when the number of flavors and colors are large but the ratio κ is finite. Technically, we consider an operator which gets its expectation value solely from non-perturbative instanton effects. When κ exceeds some critical value, κ > κ_c, the integral over instanton size is dominated by small-size instantons, making the instanton computations reliable, with expected exp(−N) behavior. However, when κ < κ_c, the integral over instanton size is dominated by large-size instantons, and the instanton expansion breaks down. This regime, κ < κ_c, corresponds to the confinement phase. We also compute the variation of the critical κ_c(T, μ) when the temperature and chemical potential T, μ ≪ Λ_QCD slightly vary. We also discuss the scaling (x_i − x_j)^(−γ_det) in the conformal phase.
Vicious random walkers in the limit of a large number of walkers
International Nuclear Information System (INIS)
Forrester, P.J.
1989-01-01
The vicious random walker problem on a line is studied in the limit of a large number of walkers. The multidimensional integral representing the probability that the p walkers will survive a time t (denoted P_t^(p)) is shown to be analogous to the partition function of a particular one-component Coulomb gas. By assuming the existence of the thermodynamic limit for the Coulomb gas, one can deduce asymptotic formulas for P_t^(p) in the large-p, large-t limit. A straightforward analysis gives rigorous asymptotic formulas for the probability that after a time t the walkers are in their initial configuration (this event is termed a reunion). Consequently, asymptotic formulas for the conditional probability of a reunion, given that all walkers survive, are derived. Also, an asymptotic formula for the conditional probability density that any walker will arrive at a particular point in time t, given that all p walkers survive, is calculated in the limit t >> p.
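For small p the survival probability P_t^(p) is straightforward to estimate by Monte Carlo: start p walkers at distinct even sites, step each by ±1, and count the runs in which the initial ordering is never violated (the walkers never meet). The sketch below is our own illustration, not the paper's Coulomb-gas analysis:

```python
import numpy as np

def survival_probability(p, t, trials=5_000):
    """Monte Carlo estimate of P_t^(p): the probability that p vicious
    walkers started at 0, 2, 4, ... all survive (never meet) for t steps."""
    rng = np.random.default_rng(5)
    start = 2 * np.arange(p)
    alive = 0
    for _ in range(trials):
        pos = start + np.cumsum(rng.choice([-1, 1], size=(t, p)), axis=0)
        if np.all(np.diff(pos, axis=1) > 0):   # ordering preserved => no meeting
            alive += 1
    return alive / trials

for t in (10, 50, 100):
    print(t, survival_probability(p=3, t=t))
```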
Automated flow cytometric analysis across large numbers of samples and cell types.
Chen, Xiaoyi; Hasan, Milena; Libri, Valentina; Urrutia, Alejandra; Beitz, Benoît; Rouilly, Vincent; Duffy, Darragh; Patin, Étienne; Chalmond, Bernard; Rogge, Lars; Quintana-Murci, Lluis; Albert, Matthew L; Schwikowski, Benno
2015-04-01
Multi-parametric flow cytometry is a key technology for characterization of immune cell phenotypes. However, robust high-dimensional post-analytic strategies for automated data analysis in large numbers of donors are still lacking. Here, we report a computational pipeline, called FlowGM, which minimizes operator input, is insensitive to compensation settings, and can be adapted to different analytic panels. A Gaussian Mixture Model (GMM)-based approach was utilized for initial clustering, with the number of clusters determined using Bayesian Information Criterion. Meta-clustering in a reference donor permitted automated identification of 24 cell types across four panels. Cluster labels were integrated into FCS files, thus permitting comparisons to manual gating. Cell numbers and coefficient of variation (CV) were similar between FlowGM and conventional gating for lymphocyte populations, but notably FlowGM provided improved discrimination of "hard-to-gate" monocyte and dendritic cell (DC) subsets. FlowGM thus provides rapid high-dimensional analysis of cell phenotypes and is amenable to cohort studies.
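The core of a GMM-plus-BIC pipeline of this kind is compact. The sketch below uses scikit-learn on synthetic stand-in data; it illustrates the general technique, not the FlowGM code itself:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm_bic(data, k_range=range(2, 16)):
    """Fit Gaussian mixtures for several cluster counts and keep the one
    with the lowest Bayesian Information Criterion.
    data: array of shape (events, fluorescence channels)."""
    best, best_bic = None, np.inf
    for k in k_range:
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              n_init=3, random_state=0).fit(data)
        bic = gmm.bic(data)
        if bic < best_bic:
            best, best_bic = gmm, bic
    return best

# Toy stand-in for compensated flow cytometry events (3 channels).
rng = np.random.default_rng(6)
data = np.vstack([rng.normal(c, 0.3, size=(500, 3)) for c in (0.0, 1.5, 3.0)])
model = fit_gmm_bic(data)
print("chosen clusters:", model.n_components)
labels = model.predict(data)    # cluster label per event
```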
International Nuclear Information System (INIS)
Ye Peng-Cheng; Pan Guang
2015-01-01
Due to the high speed of underwater vehicles, cavitation is generated inevitably along with the sound attenuation when the sound signal traverses through the cavity region around the underwater vehicle. The linear wave propagation is studied to obtain the influence of bubbly liquid on the acoustic wave propagation in the cavity region. The sound attenuation coefficient and the sound speed formula of the bubbly liquid are presented. Based on the sound attenuation coefficients with various vapor volume fractions, the attenuation of sound intensity is calculated under large cavitation number conditions. The result shows that the sound intensity attenuation is fairly small in a certain condition. Consequently, the intensity attenuation can be neglected in engineering. (paper)
Random number generators for large-scale parallel Monte Carlo simulations on FPGA
Lin, Y.; Wang, F.; Liu, B.
2018-05-01
Through parallelization, field programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGA presents both new constraints and new opportunities for the implementations of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four of the RNGs used in previous FPGA-based MC studies and newly proposed FPGA implementations for two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
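An additive lagged Fibonacci generator is defined by the recurrence x_n = (x_{n-s} + x_{n-r}) mod 2^m. The Python sketch below shows the serial recurrence with the classic lags (s, r) = (24, 55); the LCG-based seeding is our illustrative choice, and the FPGA-parallel variant proposed in the paper (independent streams with separate lag registers) is not reproduced here:

```python
class AdditiveLaggedFibonacci:
    """Minimal additive lagged Fibonacci generator,
    x_n = (x_{n-s} + x_{n-r}) mod 2^m, with classic lags (s, r) = (24, 55).
    Seeding via a small LCG is illustrative, not production-grade."""

    def __init__(self, seed=12345, s=24, r=55, m=32):
        self.s, self.r, self.mask, self.i = s, r, (1 << m) - 1, 0
        self.buf, state = [], seed
        for _ in range(r):                       # fill the lag register
            state = (1103515245 * state + 12345) & self.mask
            self.buf.append(state)

    def next(self):
        # x_{n-s} sits s slots behind the write position; buf[i % r]
        # currently holds x_{n-r} and is overwritten by the new value.
        x = (self.buf[(self.i - self.s) % self.r]
             + self.buf[self.i % self.r]) & self.mask
        self.buf[self.i % self.r] = x
        self.i += 1
        return x

rng = AdditiveLaggedFibonacci()
print([rng.next() for _ in range(5)])
```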
Spyropoulos, Evangelos T.; Holmes, Bayard S.
1997-01-01
The dynamic subgrid-scale model is employed in large-eddy simulations of flow over a cylinder at a Reynolds number, based on the diameter of the cylinder, of 90,000. The Centric SPECTRUM(trademark) finite element solver is used for the analysis. The far field sound pressure is calculated from Lighthill-Curle's equation using the computed fluctuating pressure at the surface of the cylinder. The sound pressure level at a location 35 diameters away from the cylinder and at an angle of 90 deg with respect to the wake's downstream axis was found to have a peak value of approximately 110 dB. Slightly smaller peak values were predicted at the 60 deg and 120 deg locations. A grid refinement study suggests that the dynamic model demands mesh refinement beyond that used here.
System for high-voltage control detectors with large number photomultipliers
International Nuclear Information System (INIS)
Donskov, S.V.; Kachanov, V.A.; Mikhajlov, Yu.V.
1985-01-01
A simple and inexpensive on-line system for high-voltage control, designed for detectors with a large number of photomultipliers, has been developed and manufactured. It was developed for the GAMC type hodoscopic electromagnetic calorimeters, comprising up to 4 thousand photomultipliers. High voltage variation is performed by a high-speed potentiometer rotated by a microengine. Block diagrams of the computer control electronics are presented. The high-voltage control system has been used for five years in the IHEP and CERN accelerator experiments. The operation experience has shown that it is quite simple and convenient in operation. With about 6 thousand controlled channels in both experiments, no potentiometer or microengine failures were observed.
International Nuclear Information System (INIS)
Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel; Cuevas, Sergio; Ramos, Eduardo
2014-01-01
We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM) which was recently introduced (P. Meunier and E. Villermaux, “The diffusive strip method for scalar mixing in two-dimensions,” J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows to predict the PDFs of scalar in agreement with numerical and experimental results. This model also indicates that the PDFs of scalar are asymptotically close to log-normal at late stages, except for the large concentration levels which correspond to low stretching factors
Decision process in MCDM with large number of criteria and heterogeneous risk preferences
Directory of Open Access Journals (Sweden)
Jian Liu
A new decision process is proposed to address the challenge of multi-criteria decision making (MCDM) problems with a large number of criteria and decision makers with heterogeneous risk preferences. First, from the perspective of objective data, the effective criteria are extracted based on the similarity relations between criterion values, and the criteria are weighted, respectively. Second, the corresponding types of theoretic models of risk preference expectations are built, based on the possibility and similarity between criterion values, to solve the problem of different interval numbers with the same expectation. Then, the risk preferences (risk-seeking, risk-neutral and risk-averse) are embedded in the decision process. Later, the optimal decision object is selected according to the risk preferences of decision makers based on the corresponding theoretic model. Finally, a new algorithm of information aggregation model is proposed based on fairness maximization of decision results for the group decision, considering the coexistence of decision makers with heterogeneous risk preferences. The scientific rationality of this new method is verified through the analysis of a real case. Keywords: Heterogeneous, Risk preferences, Fairness, Decision process, Group decision
Whelan, Simon
2007-10-01
Phylogenetic tree estimation plays a critical role in a wide variety of molecular studies, including molecular systematics, phylogenetics, and comparative genomics. Finding the optimal tree relating a set of sequences using score-based (optimality criterion) methods, such as maximum likelihood and maximum parsimony, may require all possible trees to be considered, which is not feasible even for modest numbers of sequences. In practice, trees are estimated using heuristics that represent a trade-off between topological accuracy and speed. I present a series of novel algorithms suitable for score-based phylogenetic tree reconstruction that demonstrably improve the accuracy of tree estimates while maintaining high computational speeds. The heuristics function by allowing the efficient exploration of large numbers of trees through novel hill-climbing and resampling strategies. These heuristics, and other computational approximations, are implemented for maximum likelihood estimation of trees in the program Leaphy, and its performance is compared to other popular phylogenetic programs. Trees are estimated from 4059 different protein alignments using a selection of phylogenetic programs and the likelihoods of the tree estimates are compared. Trees estimated using Leaphy are found to have equal to or better likelihoods than trees estimated using other phylogenetic programs in 4004 (98.6%) families and provide a unique best tree that no other program found in 1102 (27.1%) families. The improvement is particularly marked for larger families (80 to 100 sequences), where Leaphy finds a unique best tree in 81.7% of families.
CRISPR transcript processing: a mechanism for generating a large number of small interfering RNAs
Directory of Open Access Journals (Sweden)
Djordjevic Marko
2012-07-01
Abstract Background CRISPR/Cas (Clustered Regularly Interspaced Short Palindromic Repeats/CRISPR-associated sequences) is a recently discovered prokaryotic defense system against foreign DNA, including viruses and plasmids. The CRISPR cassette is transcribed as a continuous transcript (pre-crRNA), which is processed by Cas proteins into small RNA molecules (crRNAs) that are responsible for defense against invading viruses. Experiments in E. coli report that overexpression of cas genes generates a large number of crRNAs from only few pre-crRNAs. Results We here develop a minimal model of CRISPR processing, which we parameterize based on available experimental data. From the model, we show that the system can generate a large amount of crRNAs based on only a small decrease in the amount of pre-crRNAs. The relationship between the decrease of pre-crRNAs and the increase of crRNAs corresponds to strong linear amplification. Interestingly, this strong amplification crucially depends on fast non-specific degradation of pre-crRNA by an unidentified nuclease. We show that overexpression of cas genes above a certain level does not result in further increase of crRNA, but that this saturation can be relieved if the rate of CRISPR transcription is increased. We furthermore show that a small increase of CRISPR transcription rate can substantially decrease the extent of cas gene activation necessary to achieve a desired amount of crRNA. Conclusions The simple mathematical model developed here is able to explain existing experimental observations on CRISPR transcript processing in Escherichia coli. The model shows that a competition between specific pre-crRNA processing and non-specific degradation determines the steady-state levels of crRNA and is responsible for strong linear amplification of crRNAs when cas genes are overexpressed. The model further shows how disappearance of only a few pre-crRNA molecules normally present in the cell can lead to a large (two
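The competition described here can be captured by a two-species rate model: pre-crRNA is made at rate α and lost either to Cas processing (rate k, each event yielding L crRNAs) or to non-specific degradation (rate δ_p); crRNA decays at rate δ_c. At steady state pre = α/(δ_p + k) and crRNA = Lkα/((δ_p + k)δ_c), so when δ_p dominates, a small drop in pre-crRNA buys a large linear gain in crRNA, saturating at Lα/δ_c. The sketch below uses illustrative parameters of ours, not values fitted in the paper:

```python
from scipy.integrate import solve_ivp

# Illustrative rates: alpha = pre-crRNA synthesis, delta_p = non-specific
# degradation, L = crRNAs per processed transcript, delta_c = crRNA decay.
alpha, delta_p, L, delta_c = 1.0, 10.0, 20.0, 0.1

def rhs(t, y, k):
    pre, cr = y
    return [alpha - (delta_p + k) * pre,     # pre-crRNA balance
            L * k * pre - delta_c * cr]      # crRNA balance

# Sweeping k mimics increasing cas overexpression: pre-crRNA drops only
# slightly while crRNA grows strongly, then saturates near L*alpha/delta_c.
for k in (0.1, 1.0, 10.0):
    sol = solve_ivp(rhs, (0, 200), [0.0, 0.0], args=(k,), rtol=1e-8)
    pre, cr = sol.y[:, -1]
    print(f"k={k:5.1f}  pre-crRNA={pre:7.4f}  crRNA={cr:8.2f}")
```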
Space Situational Awareness of Large Numbers of Payloads From a Single Deployment
Segerman, A.; Byers, J.; Emmert, J.; Nicholas, A.
2014-09-01
The nearly simultaneous deployment of a large number of payloads from a single vehicle presents a new challenge for space object catalog maintenance and space situational awareness (SSA). Following two cubesat deployments last November, it took five weeks to catalog the resulting 64 orbits. The upcoming Kicksat mission will present an even greater SSA challenge, with its deployment of 128 chip-sized picosats. Although all of these deployments are in short-lived orbits, future deployments will inevitably occur at higher altitudes, with a longer-term threat of collision with active spacecraft. With such deployments, individual scientific payload operators require rapid, precise knowledge of their satellites' locations. Following the first November launch, the cataloguing did not initially associate a payload with each orbit, leaving this to the satellite operators. For short-duration missions, the time required to identify an experiment's specific orbit may easily be a large fraction of the spacecraft's lifetime. For a Kicksat-type deployment, present tracking cannot collect enough observations to catalog each small object. The current approach is to treat the chip cloud as a single catalog object. However, the cloud dissipates into multiple subclouds and, ultimately, tiny groups of untrackable chips. One response to this challenge may be to mandate installation of a transponder on each spacecraft. Directional transponder transmission detections could be used as angle observations for orbit cataloguing. Of course, such an approach would only be employable with cooperative spacecraft. In other cases, a probabilistic association approach may be useful, with the goal being to establish the probability of an element being at a given point in space. This would permit more reliable assessment of the probability of collision of active spacecraft with any cloud element. This paper surveys the cataloguing challenges presented by large-scale deployments of small spacecraft.
Droplet Breakup in Asymmetric T-Junctions at Intermediate to Large Capillary Numbers
Sadr, Reza; Cheng, Way Lee
2017-11-01
Splitting of a parent droplet into multiple daughter droplets of desired sizes is often required to enhance production and investigational efficiency in microfluidic devices. This can be done in an active or a passive mode, depending on whether an external power source is used. In this study, three-dimensional simulations were performed using the Volume-of-Fluid (VOF) method to analyze droplet splitting in asymmetric T-junctions with different outlet lengths. The parent droplet is divided into two uneven portions; the volumetric ratio of the daughter droplets, in theory, depends on the length ratios of the outlet branches. The study identified various breakup modes, such as primary, transition, bubble, and non-breakup, under various flow conditions and configurations of the T-junctions. In addition, an analysis of the primary breakup regimes was conducted to study the breakup mechanisms. The results show that the way a droplet splits in an asymmetric T-junction differs from the process in a symmetric T-junction. A model for the asymmetric breakup criteria at intermediate to large capillary numbers is presented. The proposed model is an extended version of a theoretically derived model for symmetric droplet breakup under similar flow conditions.
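Two quantities from this abstract lend themselves to a back-of-envelope check: the capillary number Ca = μU/σ that separates the breakup regimes, and the idealised volume split, which the abstract notes depends "in theory" on the outlet length ratio. The sketch below assumes the simplest such theory, equal driving pressure with Hagen-Poiseuille resistance proportional to branch length; that is an illustrative assumption, not the paper's model.

```python
def capillary_number(mu: float, u: float, sigma: float) -> float:
    """Ca = mu*U/sigma: viscous stress relative to interfacial tension."""
    return mu * u / sigma

def daughter_volume_fraction(l_short: float, l_long: float) -> float:
    """Fraction of parent volume entering the shorter branch, assuming
    flow rates inversely proportional to branch length (placeholder)."""
    q_short, q_long = 1.0 / l_short, 1.0 / l_long
    return q_short / (q_short + q_long)

# Water-like carrier fluid at 5 cm/s with a weak surfactant (illustrative).
print(capillary_number(mu=1.0e-3, u=0.05, sigma=5e-3))  # Ca = 0.01
print(daughter_volume_fraction(1.0, 2.0))               # ~0.667 to short arm
```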
Growth of equilibrium structures built from a large number of distinct component types.
Hedges, Lester O; Mannige, Ranjan V; Whitelam, Stephen
2014-09-14
We use simple analytic arguments and lattice-based computer simulations to study the growth of structures made from a large number of distinct component types. Components possess 'designed' interactions, chosen to stabilize an equilibrium target structure in which each component type has a defined spatial position, as well as 'undesigned' interactions that allow components to bind in a compositionally-disordered way. We find that high-fidelity growth of the equilibrium target structure can happen in the presence of substantial attractive undesigned interactions, as long as the energy scale of the set of designed interactions is chosen appropriately. This observation may help explain why equilibrium DNA 'brick' structures self-assemble even if undesigned interactions are not suppressed [Ke et al. Science, 338, 1177, (2012)]. We also find that high-fidelity growth of the target structure is most probable when designed interactions are drawn from a distribution that is as narrow as possible. We use this result to suggest how to choose complementary DNA sequences in order to maximize the fidelity of multicomponent self-assembly mediated by DNA. We also comment on the prospect of growing macroscopic structures in this manner.
On the chromatic number of triangle-free graphs of large minimum degree
DEFF Research Database (Denmark)
Thomassen, Carsten
2002-01-01
We prove that, for each fixed real number c > 1/3, the triangle-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdős and Simonovits in 1973, who pointed out that there is no such result for c < 1/3.
The Application of the Law of Large Numbers to Predict the Amount of Actual Loss in Life Insurance
Tinungki, Georgina Maria
2018-03-01
The law of large numbers is a statistical concept concerning the average number of events or risks in a sample or population, used for prediction. The larger the population over which the calculation is made, the more accurate the prediction. In the field of insurance, the law of large numbers is used to predict the risk of loss or claims among participants so that the premium can be calculated appropriately. For example, if on average one of every 100 insurance participants files an accident claim, then the premiums of 100 participants should be able to provide the sum assured for at least one accident claim. The larger the number of insurance participants included in the calculation, the more precise the prediction of claims and the calculation of the premium. Life insurance, as a tool for spreading risk, can only work if a life insurance company is able to bear the same risk in large numbers. Here applies what is called the law of large numbers: as the amount of exposure to losses increases, the predicted loss comes closer to the actual loss. Its use allows the number of losses to be predicted better.
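The premium argument is easy to reproduce numerically. The simulation below, with an assumed 1-in-100 claim probability and an arbitrary sum assured, shows the observed claim frequency (and hence the actuarially fair premium per participant) stabilising as the pool grows, which is exactly the convergence the law of large numbers promises.

```python
import random

random.seed(42)

p_claim = 0.01           # one claim expected per 100 participants (assumed)
sum_assured = 100_000.0  # payout per claim (illustrative)

for n in (100, 1_000, 10_000, 100_000):
    claims = sum(random.random() < p_claim for _ in range(n))
    observed = claims / n
    fair_premium = observed * sum_assured
    print(f"n={n:>7}: claim rate {observed:.4f}, "
          f"fair premium per head ~{fair_premium:,.0f}")
```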
On the chromatic number of pentagon-free graphs of large minimum degree
DEFF Research Database (Denmark)
Thomassen, Carsten
2007-01-01
We prove that, for each fixed real number c > 0, the pentagon-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdős and Simonovits in 1973. A similar result holds for any other fixed odd cycle, except the triangle.
On the Behavior of ECN/RED Gateways Under a Large Number of TCP Flows: Limit Theorems
National Research Council Canada - National Science Library
Tinnakornsrisuphap, Peerapol; Makowski, Armand M
2005-01-01
… As the number of competing flows becomes large, the asymptotic queue behavior at the gateway can be described by a simple recursion, and the throughput behavior of individual TCP flows becomes asymptotically independent …
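For concreteness, the sketch below implements the textbook RED marking profile of Floyd and Jacobson, a linear ramp in marking probability between two queue thresholds. It is included only to make the gateway mechanism concrete; it is not the limit recursion derived in the paper, and it omits RED's count-since-last-mark adjustment.

```python
def red_mark_prob(avg_q: float, min_th: float, max_th: float,
                  max_p: float) -> float:
    """Probability that RED marks/drops an arriving packet, given the
    EWMA-averaged queue length avg_q (basic linear profile)."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

# Example: marking ramps from 0 to 0.1 between thresholds of 5 and 15 packets.
for q in (2, 5, 10, 15, 20):
    print(q, red_mark_prob(q, min_th=5, max_th=15, max_p=0.1))
```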
DEFF Research Database (Denmark)
Jensen, Michael Vincent; Walther, Jens Honore
2013-01-01
… was investigated at a jet Reynolds number of 1.66 × 10^5 and a temperature difference between jet inlet and wall of 1600 K. The focus was on the convective heat transfer contribution, as thermal radiation was not included in the investigation. A considerable influence of the turbulence intensity at the jet inlet … to about 100% were observed. Furthermore, the variation in stagnation point heat transfer was examined for jet Reynolds numbers in the range from 1.10 × 10^5 to 6.64 × 10^5. Based on the investigations, a correlation is suggested between the stagnation point Nusselt number, the jet Reynolds number, and the turbulence intensity at the jet inlet for impinging jet flows at high jet Reynolds numbers.
Is the Aluminum Hypothesis Dead?
2014-01-01
The Aluminum Hypothesis, the idea that aluminum exposure is involved in the etiology of Alzheimer disease, dates back to a 1965 demonstration that aluminum causes neurofibrillary tangles in the brains of rabbits. Initially the focus of intensive research, the Aluminum Hypothesis has gradually been abandoned by most researchers. Yet, despite this current indifference, the Aluminum Hypothesis continues to attract the attention of a small group of scientists and aluminum continues to be viewed with concern by some of the public. This review article discusses reasons that mainstream science has largely abandoned the Aluminum Hypothesis and explores a possible reason for some in the general public continuing to view aluminum with mistrust. PMID:24806729
Arbitrarily large numbers of kink internal modes in inhomogeneous sine-Gordon equations
Energy Technology Data Exchange (ETDEWEB)
González, J.A., E-mail: jalbertgonz@yahoo.es [Department of Physics, Florida International University, Miami, FL 33199 (United States); Department of Natural Sciences, Miami Dade College, 627 SW 27th Ave., Miami, FL 33135 (United States); Bellorín, A., E-mail: alberto.bellorin@ucv.ve [Escuela de Física, Facultad de Ciencias, Universidad Central de Venezuela, Apartado Postal 47586, Caracas 1041-A (Venezuela, Bolivarian Republic of); García-Ñustes, M.A., E-mail: monica.garcia@pucv.cl [Instituto de Física, Pontificia Universidad Católica de Valparaíso, Casilla 4059 (Chile); Guerrero, L.E., E-mail: lguerre@usb.ve [Departamento de Física, Universidad Simón Bolívar, Apartado Postal 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of); Jiménez, S., E-mail: s.jimenez@upm.es [Departamento de Matemática Aplicada a las TT.II., E.T.S.I. Telecomunicación, Universidad Politécnica de Madrid, 28040-Madrid (Spain); Vázquez, L., E-mail: lvazquez@fdi.ucm.es [Departamento de Matemática Aplicada, Facultad de Informática, Universidad Complutense de Madrid, 28040-Madrid (Spain)
2017-06-28
We prove analytically the existence of an infinite number of internal (shape) modes of sine-Gordon solitons in the presence of some inhomogeneous long-range forces, provided some conditions are satisfied. - Highlights: • We have found exact kink solutions to the perturbed sine-Gordon equation. • We have been able to study analytically the kink stability problem. • A kink equilibrated by an exponentially-localized perturbation has a finite number of oscillation modes. • A sufficiently broad equilibrating perturbation supports an infinite number of soliton internal modes.
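In generic form (with F(x) standing in for the paper's inhomogeneous long-range force), the setting is the perturbed sine-Gordon equation; the internal (shape) modes are the discrete eigenvalues of the linearisation about the static kink. A sketch of that structure, under the stated assumptions:

```latex
% Perturbed sine-Gordon equation, static kink \phi_K equilibrated by F(x),
% and the eigenvalue problem whose discrete spectrum gives internal modes.
\begin{align}
  \phi_{tt} - \phi_{xx} + \sin\phi &= F(x), \\
  \phi(x,t) &= \phi_K(x) + f(x)\,e^{i\omega t}, \\
  -f''(x) + \cos\!\bigl(\phi_K(x)\bigr)\, f(x) &= \omega^2 f(x).
\end{align}
```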
Large-eddy simulation of flow over a grooved cylinder up to transcritical Reynolds numbers
Cheng, W.
2017-11-27
We report wall-resolved large-eddy simulation (LES) of flow over a grooved cylinder up to the transcritical regime. The stretched-vortex subgrid-scale model is embedded in a general fourth-order finite-difference code discretization on a curvilinear mesh. In the present study grooves are equally distributed around the circumference of the cylinder, each of sinusoidal shape with a given height, invariant in the spanwise direction. Based on the two governing parameters, the groove height relative to the cylinder diameter and the Reynolds number (formed from the free-stream velocity, the diameter of the cylinder, and the kinematic viscosity), two main sets of simulations are described. The first set varies the relative groove height at fixed Reynolds number. We study the deviation of the flow from the smooth-cylinder case, with emphasis on several important statistics such as the length of the mean-flow recirculation bubble, the pressure coefficient, the skin-friction coefficient, and the non-dimensional pressure gradient parameter. It is found that, with increasing groove height at fixed Reynolds number, some properties of the mean flow behave somewhat similarly to the smooth-cylinder flow at increasing Reynolds number. This includes a shrinking recirculation bubble and a nearly constant minimum pressure coefficient. In contrast, while the non-dimensional pressure gradient parameter remains nearly constant over the front part of the smooth cylinder, it shows an oscillatory variation in the grooved-cylinder case. The second main set of LES varies the Reynolds number at fixed relative groove height. This range spans the subcritical and supercritical regimes and reaches the beginning of the transcritical flow regime. Mean-flow properties are diagnosed and compared with available experimental data, including the pressure coefficient and the drag coefficient. The timewise variation of the lift and drag coefficients is also studied to elucidate the transition among the three regimes. Instantaneous images of the surface skin-friction vector field and of the three-dimensional Q-criterion field are utilized to further understand the dynamics of the near-surface flow.
Skinner, Mark F; Hopwood, David
2004-03-01
Repetitive linear enamel hypoplasia (rLEH) is often observed in recent large-bodied apes from Africa and Asia, as well as at Mid- to Late Miocene sites from Spain to China. The ubiquity and periodicity of rLEH are not understood. Its potential as an ontogenetic marker of developmental stress in threatened species (as well as their ancient relatives) makes rLEH an important if enigmatic problem. We report research designed to determine the periodicity of rLEH among West African Pan troglodytes (12 male, 32 female), Gorilla gorilla (10 male, 10 female), and Bornean and Sumatran Pongo pygmaeus (11 male, 9 female, 9 unknown) from collections in Europe. Two methods were employed. In the common chimpanzees and gorillas, the space between adjacent, macroscopically visible LEH grooves on teeth with two or more episodes was expressed as an absolute measure and as a ratio of complete unworn crown height. In the orangutans, the number of perikymata between episode onsets, as well as the duration of rLEH, was determined from scanning electron micrographs of casts of incisors and canines. We conclude that stress in the form of LEH commences as early as 2.5 years of age in all taxa and lasts for several years, and even longer in orangutans; the stress is not chronic but episodic; the stressor has a strong tendency to occur in pulses of two occurrences each; and large apes from both land masses exhibit rLEH with an average periodicity of 6 months (or multiples thereof; Sumatran orangutans seem to show only annual stress), but this needs further research. This is supported by evidence of spacing between rLEH as well as perikymata counts. Duration of stress in orangutans averages about 6 weeks. Finally, the semiannual stressor transcends geographic and temporal boundaries and is attributed to regular moisture cycles associated with the intertropical convergence zone, modified by the monsoon. While seasonal cycles can influence both disease and nutritional stress, it is likely the combination of …
Large Eddy Simulation of an SD7003 Airfoil: Effects of Reynolds number and Subgrid-scale modeling
DEFF Research Database (Denmark)
Sarlak Chivaee, Hamid
2017-01-01
This paper presents results of a series of numerical simulations studying the aerodynamic characteristics of the low-Reynolds-number Selig-Donovan airfoil SD7003. The Large Eddy Simulation (LES) technique is used for all computations at chord-based Reynolds numbers of 10,000, 24,000 and 60,000. … the Reynolds number, and the effect is visible even at a relatively low chord Reynolds number of 60,000. Among the tested models, the dynamic Smagorinsky gives the poorest predictions of the flow, with overprediction of lift and a larger separation on the airfoil's suction side. Among various models, the implicit …
A large-scale survey of genetic copy number variations among Han Chinese residing in Taiwan
Directory of Open Access Journals (Sweden)
Wu Jer-Yuarn
2008-12-01
Background: Copy number variations (CNVs) have recently been recognized as important structural variations in the human genome. CNVs can affect gene expression and thus may contribute to phenotypic differences. The copy number inferring tool (CNIT) is an effective hidden Markov model-based algorithm for estimating allele-specific copy number and predicting chromosomal alterations from single nucleotide polymorphism microarrays. The CNIT algorithm, which was constructed using data from 270 HapMap multi-ethnic individuals, was applied to identify CNVs from 300 unrelated Han Chinese individuals in Taiwan. Results: Using stringent selection criteria, 230 regions with variable copy numbers were identified in the Han Chinese population; 133 (57.8%) had been reported previously, and 64 displayed a CNV allele frequency greater than 1%. The average size of the CNV regions was 322 kb (ranging from 1.48 kb to 5.68 Mb), covering a total of 2.47% of the human genome. A total of 196 of the CNV regions were simple deletions and 27 were simple amplifications. There were 449 genes and 5 microRNAs within these CNV regions; some of these genes are known to be associated with diseases. Conclusion: The identified CNVs are characteristic of the Han Chinese population and should be considered when genetic studies are conducted. The CNV distribution in the human genome is still poorly characterized, and there is much diversity among different ethnic populations.
Q-factorial Gorenstein toric Fano varieties with large Picard number
DEFF Research Database (Denmark)
Nill, Benjamin; Øbro, Mikkel
2010-01-01
In dimension $d$, ${\boldsymbol Q}$-factorial Gorenstein toric Fano varieties with Picard number $\rho_X$ correspond to simplicial reflexive polytopes with $\rho_X + d$ vertices. Casagrande showed that any $d$-dimensional simplicial reflexive polytope has at most $3d$ and $3d-1$ vertices if $d$ is even and odd, respectively. Moreover, for $d$ even there is up to unimodular equivalence only one such polytope with $3d$ vertices, corresponding to the product of $d/2$ copies of a del Pezzo surface of degree six. In this paper we completely classify all $d$-dimensional simplicial reflexive polytopes having $3d-1$ vertices, corresponding to $d$-dimensional ${\boldsymbol Q}$-factorial Gorenstein toric Fano varieties with Picard number $2d-1$. For $d$ even, there exist three such varieties, with two being singular, while for $d > 1$ odd there exist precisely two, both being nonsingular toric fiber …
A comment on "bats killed in large numbers at United States wind energy facilities"
Huso, Manuela M.P.; Dalthorp, Dan
2014-01-01
Widespread reports of bat fatalities caused by wind turbines have raised concerns about the impacts of wind power development. Reliable estimates of the total number killed and the potential effects on populations are needed, but it is crucial that they be based on sound data. In a recent BioScience article, Hayes (2013) estimated that over 600,000 bats were killed at wind turbines in the United States in 2012. The scientific errors in the analysis are numerous, with the two most serious being that the included sites constituted a convenience sample, not a representative sample, and that the individual site estimates are derived from such different methodologies that they are inherently not comparable. This estimate is almost certainly inaccurate, but whether the actual number is much smaller, much larger, or about the same is uncertain. An accurate estimate of total bat fatality is not currently possible, given the shortcomings of the available data.
Large scale Direct Numerical Simulation of premixed turbulent jet flames at high Reynolds number
Attili, Antonio; Luca, Stefano; Lo Schiavo, Ermanno; Bisetti, Fabrizio; Creta, Francesco
2016-11-01
A set of direct numerical simulations of turbulent premixed jet flames at different Reynolds and Karlovitz numbers is presented. The simulations feature finite-rate chemistry with 16 species and 73 reactions and up to 22 billion grid points. The jet consists of a methane/air mixture with equivalence ratio ϕ = 0.7 and temperature varying between 500 and 800 K. The temperature and species concentrations in the coflow correspond to the equilibrium state of the burnt mixture. All the simulations are performed at 4 atm. The flame length, normalized by the jet width, decreases significantly as the Reynolds number increases. This is consistent with an increase of the turbulent flame speed due to the increased integral scale of turbulence. This behavior is typical of flames in the thin-reaction-zone regime, which are affected by turbulent transport in the preheat layer. The fractal dimension and topology of the flame surface, statistics of temperature gradients, and flame structure are investigated, and the dependence of these quantities on the Reynolds number is assessed.
Efficient high speed communications over electrical powerlines for a large number of users
Energy Technology Data Exchange (ETDEWEB)
Lee, J.; Tripathi, K.; Latchman, H.A. [Florida Univ., Gainesville, FL (United States). Dept. of Electrical and Computer Engineering
2007-07-01
Affordable broadband Internet communication is currently available for residential use via cable modem and other forms of digital subscriber line (DSL). Powerline communication (PLC) systems were long not considered seriously for communications due to their low speed and high development cost. However, due to technological advances, PLC is now spreading to local area networks and broadband-over-powerline systems. This paper presents a newly proposed modification to the standard HomePlug 1.0 MAC protocol that turns it into a constant contention-window-based scheme. HomePlug 1.0 was developed based on orthogonal frequency division multiplexing (OFDM) and carrier sense multiple access with collision avoidance (CSMA/CA). It is currently the most commonly used power line communication technology, supporting a transmission rate of up to 14 Mbps on the power line. However, the throughput of the original scheme degrades severely as the number of users increases. For that reason, a constant contention-window-based medium access control protocol for HomePlug 1.0 is proposed under the assumption that the number of active stations is known. An analytical framework based on Markov chains was developed to model this modified protocol under saturation conditions. Modeling results accurately matched the actual performance of the system. This paper shows that the performance can be improved significantly if the protocol variables are parameterized in terms of the number of active stations. 15 refs., 1 tab., 6 figs.
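The intuition behind parameterizing the contention window by the number of active stations can be seen in a Bianchi-style slotted approximation, sketched below. A station with constant window W attempts transmission in a slot with probability roughly τ = 2/(W + 1); the per-slot success probability nτ(1 - τ)^(n-1) is maximised when τ ≈ 1/n, i.e. when W grows with n. This is a generic simplification, not HomePlug 1.0's actual timing or priority mechanism, and the numbers are illustrative.

```python
def success_prob(n: int, W: int) -> float:
    """Probability that exactly one of n saturated stations transmits
    in a slot, under a constant contention window W."""
    tau = 2.0 / (W + 1)                    # per-slot attempt probability
    return n * tau * (1 - tau) ** (n - 1)

for n in (2, 8, 32, 128):
    fixed = success_prob(n, W=64)          # one window size for all loads
    adapted = success_prob(n, W=2 * n)     # window scaled with station count
    print(f"n={n:>3}: fixed-W success {fixed:.3f}, adapted-W {adapted:.3f}")
```

With the fixed window, slots are mostly wasted idle at small n and mostly lost to collisions at large n, while the adapted window keeps the success probability near the 1/e optimum across the whole range.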
Detailed Measurements of Rayleigh-Taylor Mixing at Large and Small Atwood Numbers
International Nuclear Information System (INIS)
Andrews, Malcolm J.
2004-01-01
This project has two major tasks: Task 1, the construction of a new air/helium facility to collect detailed measurements of Rayleigh-Taylor (RT) mixing at high Atwood number, and the distribution of these data to LLNL, LANL, and Alliance members for code validation and design purposes; and Task 2, the collection of initial-condition data from the new air/helium facility for use in validating RT simulation codes at LLNL and LANL, together with studies of multi-layer mixing with the existing water channel facility. Over the last twelve months there has been excellent progress, detailed in this report, with both tasks. As of December 10, 2004, the air/helium facility is complete and extensive testing and validation of diagnostics has been performed. Currently, experiments with air/helium up to Atwood numbers of 0.25 (the maximum is 0.75, but the highest Reynolds numbers are at 0.25) are being performed. The progress matches the project plan, as does the budget, and we expect this to continue for 2005. With interest expressed from LLNL, we have continued with initial-condition studies using the water channel. This work has also progressed well, with one of the graduate research assistants (Mr. Nick Mueschke) visiting LLNL the past two summers to work with Dr. O. Schilling. Several journal papers that describe the work are in preparation. Two M.Sc. degrees have been completed (Mr. Nick Mueschke and Mr. Wayne Kraft, 12/1/03); both are pursuing Ph.D.s funded by this DOE Alliances project. Presently three Ph.D. graduate research assistants and two undergraduate research assistants are supported on the project. During the year, two journal papers and two conference papers were published, ten presentations were made at conferences, and three invited presentations were given.
Mapping Ad Hoc Communications Network of a Large Number Fixed-Wing UAV Swarm
2017-03-01
shows like "Agents of S.H.I.E.L.D". Inspiration can come from the imaginative minds of people or from the world around us. Swarms have demonstrated a...high degree of success. Bees , ants, termites, and naked mole rats maintain large groups that distribute tasks among individuals in order to achieve...the application layer and not the transport layer. Real- world vehicle-to-vehicle packet delivery rates for the 50-UAV swarm event were de- scribed in
Analyzing the Large Number of Variables in Biomedical and Satellite Imagery
Good, Phillip I
2011-01-01
This book grew out of an online interactive course offered through statcourse.com, and it soon became apparent to the author that the course was too limited in terms of time and length in light of the broad backgrounds of the enrolled students. The statisticians who took the course needed to be brought up to speed both on the biological context and on the specialized statistical methods needed to handle large arrays. Biologists and physicians, even though fully knowledgeable concerning the procedures used to generate microarrays, EEGs, or MRIs, needed a full introduction to the resampling methods …
International Nuclear Information System (INIS)
Lee, Hwang; Kok, Pieter; Dowling, Jonathan P.; Cerf, Nicolas J.
2002-01-01
We propose a method for preparing maximal path entanglement with a definite photon number N, larger than two, using projective measurements. In contrast with previously known schemes, our method uses only linear optics. Specifically, we exhibit a way of generating four-photon, path-entangled states of the form |4,0⟩ + |0,4⟩, using only four beam splitters and two detectors. These states are of major interest as a resource for quantum interferometric sensors as well as for optical quantum lithography and quantum holography.
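These are the so-called "N00N" states; their value for lithography and sensing comes from the fact that the two branches of the superposition acquire a relative phase Nφ, rather than φ, in an interferometer. Written out with the normalisation that the plain-text record drops:

```latex
% Path-entangled photon-number states referred to above: the general
% N-photon case and the four-photon state produced by the scheme.
\begin{equation}
  |\psi_N\rangle = \tfrac{1}{\sqrt{2}}\bigl(|N,0\rangle + |0,N\rangle\bigr),
  \qquad
  |\psi_4\rangle = \tfrac{1}{\sqrt{2}}\bigl(|4,0\rangle + |0,4\rangle\bigr).
\end{equation}
```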
Ji, H.; Burin, M.; Schartman, E.; Goodman, J.; Liu, W.
2006-01-01
Two plausible mechanisms have been proposed to explain rapid angular momentum transport during accretion processes in astrophysical disks: nonlinear hydrodynamic instabilities and magnetorotational instability (MRI). A laboratory experiment in a short Taylor-Couette flow geometry has been constructed in Princeton to study both mechanisms, with novel features for better controls of the boundary-driven secondary flows (Ekman circulation). Initial results on hydrodynamic stability have shown negligible angular momentum transport in Keplerian-like flows with Reynolds numbers approaching one million, casting strong doubt on the viability of nonlinear hydrodynamic instability as a source for accretion disk turbulence.
Dam risk reduction study for a number of large tailings dams in Ontario
Energy Technology Data Exchange (ETDEWEB)
Verma, N. [AMEC Earth and Environmental Ltd., Mississauga, ON (Canada); Small, A. [AMEC Earth and Environmental Ltd., Fredericton, NB (Canada); Martin, T. [AMEC Earth and Environmental, Burnaby, BC (Canada); Cacciotti, D. [AMEC Earth and Environmental Ltd., Sudbury, ON (Canada); Ross, T. [Vale Inco Ltd., Sudbury, ON (Canada)
2009-07-01
This paper discussed a risk reduction study conducted for 10 large tailings dams located at a central tailings facility in Ontario. Located near large industrial and urban developments, the tailings dams were built using an upstream method of construction that did not involve beach compaction or the provision of under-drainage. The study provided a historical background for the dam and presented results from investigations and instrumentation data. The methods used to develop the dam configurations were discussed, and remedial measures and risk assessment measures used on the dams were reviewed. The aim of the study was to address key sources of risk, which include the presence of high pore pressures and hydraulic gradients; the potential for liquefaction; slope instability; and the potential for overtopping. A borehole investigation was conducted and piezocone probes were used to obtain continuous data and determine soil and groundwater conditions. The study identified that the lower portion of the dam slopes were of concern. Erosion gullies could lead to larger scale failures, and elevated pore pressures could lead to the risk of seepage breakouts. It was concluded that remedial measures are now being conducted to ensure slope stability. 6 refs., 1 tab., 6 figs.
EUPAN enables pan-genome studies of a large number of eukaryotic genomes.
Hu, Zhiqiang; Sun, Chen; Lu, Kuang-Chen; Chu, Xixia; Zhao, Yue; Lu, Jinyuan; Shi, Jianxin; Wei, Chaochun
2017-08-01
Pan-genome analyses are routinely carried out for bacteria to interpret within-species gene presence/absence variations (PAVs). However, pan-genome analyses are rare for eukaryotes due to the large sizes and higher complexities of their genomes. Here we propose EUPAN, a eukaryotic pan-genome analysis toolkit enabling automatic large-scale eukaryotic pan-genome analyses and detection of gene PAVs at a relatively low sequencing depth. In previous studies, we demonstrated the effectiveness and high accuracy of EUPAN in the pan-genome analysis of 453 rice genomes, in which we also revealed widespread gene PAVs among individual rice genomes. Moreover, EUPAN can be directly applied to current re-sequencing projects primarily focusing on single nucleotide polymorphisms. EUPAN is implemented in Perl, R, and C++. It is supported under Linux and preferred for a computer cluster with the LSF or SLURM job scheduling system. EUPAN, together with its standard operating procedure (SOP), is freely available for non-commercial use (CC BY-NC 4.0) at http://cgm.sjtu.edu.cn/eupan/index.html. Contact: ccwei@sjtu.edu.cn or jianxin.shi@sjtu.edu.cn. Supplementary data are available at Bioinformatics online.
Number of deaths due to lung diseases: How large is the problem?
International Nuclear Information System (INIS)
Wagener, D.K.
1990-01-01
The importance of lung disease as an indicator of environmentally induced adverse health effects has been recognized by its inclusion among the Health Objectives for the Nation. The 1990 Health Objectives for the Nation (US Department of Health and Human Services, 1986) include the objective that there should be virtually no new cases among newly exposed workers for four preventable occupational lung diseases: asbestosis, byssinosis, silicosis, and coal workers' pneumoconiosis. This brief communication describes two types of cause-of-death statistics, underlying cause and multiple cause, and demonstrates the differences between the two using lung disease deaths among adult men. The choice of statistic has a large impact on estimated lung disease mortality rates. It may also have a large effect on estimated mortality rates for other chronic diseases thought to be environmentally mediated. Issues of comorbidity and the way causes of death are reported become important in the interpretation of these statistics. The choice of which statistic to use when comparing data from a study population with national statistics may greatly affect the interpretation of the study findings.
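The distinction between the two statistics reduces to how a death record is counted. The toy example below, with invented records, counts each death toward the target lung diseases once under the underlying-cause rule and once under the multiple-cause rule; multiple-cause counts are necessarily at least as large.

```python
# Toy records: each death has one underlying cause but may mention
# several conditions on the certificate.
deaths = [
    {"underlying": "asbestosis",    "mentions": ["asbestosis", "lung cancer"]},
    {"underlying": "heart disease", "mentions": ["heart disease", "silicosis"]},
    {"underlying": "silicosis",     "mentions": ["silicosis"]},
]
target = {"asbestosis", "byssinosis", "silicosis",
          "coal workers' pneumoconiosis"}

underlying_count = sum(d["underlying"] in target for d in deaths)
multiple_count = sum(any(c in target for c in d["mentions"]) for d in deaths)
print(underlying_count, multiple_count)  # 2 vs 3: multiple-cause counts more
```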
Formation of free round jets with long laminar regions at large Reynolds numbers
Zayko, Julia; Teplovodskii, Sergey; Chicherina, Anastasia; Vedeneev, Vasily; Reshmin, Alexander
2018-04-01
The paper describes a new, simple method for forming free round jets with long laminar regions using a jet-forming device about 1.5 jet diameters in size. Submerged jets of 0.12 m diameter at Reynolds numbers of 2000 to 12,560 are studied experimentally. It is shown that for the optimal regime the laminar region length reaches 5.5 diameters at a Reynolds number of about 10,000, which is not achievable with other methods of laminar jet formation. To explain the existence of the optimal regime, a steady flow calculation in the forming unit and a stability analysis of the outcoming jet velocity profiles are conducted. The shortening of the laminar regions, compared with the optimal regime, is explained by the higher incoming turbulence level at lower velocities and by the increase of perturbation growth rates at larger velocities. The initial laminar regions of free jets can be used to organise air curtains for the protection of objects in medicine and technology, by creating an air field with desired properties that does not mix with ambient air. Free jets with long laminar regions can also be used for detailed studies of perturbation growth and transition to turbulence in round jets.
Directory of Open Access Journals (Sweden)
Jesús García Herrero
2003-07-01
This paper describes the application of evolution strategies to the design of interacting multiple model (IMM) tracking filters in order to fulfill a large table of performance specifications. These specifications define the desired filter performance in a thorough set of selected test scenarios, for different figures of merit and input conditions, imposing hundreds of performance goals. The design problem is stated as a numeric search in the filter parameter space to attain all specifications or, at least, to minimize in a compromise the excess over some specifications as much as possible, applying global optimization techniques from the field of evolutionary computation. In addition, a new methodology is proposed to integrate the specifications in a fitness function able to effectively guide the search to suitable solutions. The method has been applied to the design of an IMM tracker for a real-world civil air traffic control application: the accomplishment of specifications defined for the future European ARTAS system.
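At its core, an evolution strategy of this kind is a few lines of code: mutate the parameter vector with Gaussian noise and keep the child if the fitness does not get worse. The (1+1)-ES sketch below uses a sphere function as a stand-in for the paper's specification-based fitness; the dimension, step size, and iteration budget are arbitrary illustrative choices.

```python
import random

random.seed(1)

def fitness(x):
    """Toy stand-in for 'excess over specifications': minimise |x|^2."""
    return sum(v * v for v in x)

x = [random.uniform(-5, 5) for _ in range(4)]   # e.g. 4 filter parameters
sigma = 0.5                                      # fixed mutation step size
for _ in range(2000):
    child = [v + random.gauss(0, sigma) for v in x]
    if fitness(child) <= fitness(x):             # keep non-worsening children
        x = child
print([round(v, 3) for v in x], fitness(x))
```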
Jet Impingement Heat Transfer at High Reynolds Numbers and Large Density Variations
DEFF Research Database (Denmark)
Jensen, Michael Vincent; Walther, Jens Honore
2010-01-01
Jet impingement heat transfer from a round gas jet to a flat wall has been investigated numerically in a configuration with H/D = 2, where H is the distance from the jet inlet to the wall and D is the jet diameter. The jet Reynolds number was 361,000 and the density ratio across the wall boundary layer was 3.3, due to a substantial temperature difference of 1600 K between jet and wall. Results are presented which indicate very high heat flux levels, and it is demonstrated that the jet inlet turbulence intensity significantly influences the heat transfer results, especially in the stagnation region. The results also show a noticeable difference in the heat transfer predictions when applying different turbulence models. Furthermore, calculations were performed to study the effect of applying temperature-dependent thermophysical properties versus constant properties and the effect of calculating the gas …
On the strong law of large numbers for $\\varphi$-subgaussian random variables
Zajkowski, Krzysztof
2016-01-01
For $p\\ge 1$ let $\\varphi_p(x)=x^2/2$ if $|x|\\le 1$ and $\\varphi_p(x)=1/p|x|^p-1/p+1/2$ if $|x|>1$. For a random variable $\\xi$ let $\\tau_{\\varphi_p}(\\xi)$ denote $\\inf\\{a\\ge 0:\\;\\forall_{\\lambda\\in\\mathbb{R}}\\; \\ln\\mathbb{E}\\exp(\\lambda\\xi)\\le\\varphi_p(a\\lambda)\\}$; $\\tau_{\\varphi_p}$ is a norm in a space $Sub_{\\varphi_p}=\\{\\xi:\\;\\tau_{\\varphi_p}(\\xi)1$) there exist positive constants $c$ and $\\alpha$ such that for every natural number $n$ the following inequality $\\tau_{\\varphi_p}(\\sum_{i=1...
Large boson number IBM calculations and their relationship to the Bohr model
International Nuclear Information System (INIS)
Thiamova, G.; Rowe, D.J.
2009-01-01
Recently, the SO(5) Clebsch-Gordan (CG) coefficients up to seniority v_max = 40 were computed in floating point arithmetic (T.A. Welsh, unpublished (2008)) and, in exact arithmetic, as square roots of rational numbers (M.A. Caprio et al., to be published in Comput. Phys. Commun.). It is shown in this paper that extending the QQQ model calculations set up in the work by D.J. Rowe and G. Thiamova (Nucl. Phys. A 760, 59 (2005)) to N = v_max = 40 is sufficient to obtain IBM results converged to the Bohr contraction limit. This is done by comparing some important matrix elements in both models, by looking at the seniority decomposition of low-lying states, and by examining the behavior of the energy and B(E2) transition strength ratios with increasing seniority.
Energy Technology Data Exchange (ETDEWEB)
Zhou, Ye [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Thornber, Ben [The Univ. of Sydney, Sydney, NSW (Australia)
2016-04-12
Here, implicit large-eddy simulation (ILES) has been utilized as an effective approach for calculating many complex flows at high Reynolds numbers. Richtmyer–Meshkov instability (RMI) induced flow can be viewed as homogeneous decaying turbulence (HDT) after the passage of the shock. In this article, a critical evaluation of three methods for estimating the effective Reynolds number and the effective kinematic viscosity is undertaken utilizing high-resolution ILES data. Effective Reynolds numbers based on the vorticity and dissipation rate, or on the integral and inner-viscous length scales, are found to be the most self-consistent when compared with the expected phenomenology and wind tunnel experiments.
Hypothesis Designs for Three-Hypothesis Test Problems
Yan Li; Xiaolong Pu
2010-01-01
As a helpful guide for applications, the alternative hypotheses of three-hypothesis test problems are designed in this paper under the required error probabilities and average sample number. Asymptotic formulas and the proposed numerical quadrature formulas are adopted, respectively, to obtain the hypothesis designs and the corresponding sequential test schemes under Koopman-Darmois distributions. The example of the normal mean test shows that our methods are qu…
DEFF Research Database (Denmark)
Lazarov, Boyan Stefanov; Ditlevsen, Ove
2005-01-01
The object of study is a stationary Gaussian white-noise-excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elastic…
International Nuclear Information System (INIS)
Arvieu, R.
The assumptions and principles of the spectral distribution method are reviewed. The object of the method is to deduce information on nuclear spectra by constructing a frequency function which has the same first few moments as the exact frequency function, these moments then being exactly calculated. The method is applied to subspaces containing a large number of quasiparticles. (Original in French.)
Directory of Open Access Journals (Sweden)
Huilin Huang
2014-01-01
We study strong limit theorems for hidden Markov chain fields indexed by an infinite tree with uniformly bounded degrees. We mainly establish the strong law of large numbers for such fields and give the strong limit law of the conditional sample entropy rate.
Heidema, A.G.; Boer, J.M.A.; Nagelkerke, N.; Mariman, E.C.M.; van der A, D.L.; Feskens, E.J.M.
2006-01-01
Genetic epidemiologists have taken on the challenge of identifying genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods …
Hydrodynamic interaction on large-Reynolds-number aligned bubbles: Drag effects
International Nuclear Information System (INIS)
Ramirez-Munoz, J.; Salinas-Rodriguez, E.; Soria, A.; Gama-Goicochea, A.
2011-01-01
Highlights: • The hydrodynamic interaction of a pair of aligned equal-sized bubbles is analyzed. • The leading bubble wake decreases the drag on the trailing bubble. • A new semi-analytical model for the trailing bubble's drag is presented. • The equilibrium distance between bubbles is predicted. Abstract: The hydrodynamic interaction of two equal-sized spherical gas bubbles rising along a vertical line with a Reynolds number (Re) between 50 and 200 is analyzed. An approach to estimating the trailing bubble drag based on the search for a proper reference fluid velocity is proposed. Our main result is a new, simple semi-analytical model for the trailing bubble drag. Additionally, the equilibrium separation distance between bubbles is predicted. The proposed models agree quantitatively with reported data for 50 ≤ Re ≤ 200, down to small distances between bubbles. The relative average error for the trailing bubble drag, Er, is found to be in the range 1.1 ≤ Er ≤ 1.7, i.e., it is of the same order as the analytical predictions in the literature.
Hydrodynamic interaction on large-Reynolds-number aligned bubbles: Drag effects
Energy Technology Data Exchange (ETDEWEB)
Ramirez-Munoz, J., E-mail: jrm@correo.azc.uam.mx [Departamento de Energia, Universidad Autonoma Metropolitana-Azcapotzalco, Av. San Pablo 180, Col. Reynosa Tamaulipas, 02200 Mexico D.F. (Mexico); Centro de Investigacion en Polimeros, Marcos Achar Lobaton No. 2, Tepexpan, 55885 Acolman, Edo. de Mexico (Mexico); Salinas-Rodriguez, E.; Soria, A. [Departamento de IPH, Universidad Autonoma Metropolitana-Iztapalapa, San Rafael Atlixco 186, Col. Vicentina, Iztapalapa, 09340 Mexico D.F. (Mexico); Gama-Goicochea, A. [Centro de Investigacion en Polimeros, Marcos Achar Lobaton No. 2, Tepexpan, 55885 Acolman, Edo. de Mexico (Mexico)
2011-07-15
Highlights: • The hydrodynamic interaction of a pair of aligned equal-sized bubbles is analyzed. • The leading bubble wake decreases the drag on the trailing bubble. • A new semi-analytical model for the trailing bubble's drag is presented. • The equilibrium distance between bubbles is predicted. Abstract: The hydrodynamic interaction of two equal-sized spherical gas bubbles rising along a vertical line with a Reynolds number (Re) between 50 and 200 is analyzed. An approach to estimating the trailing bubble drag based on the search for a proper reference fluid velocity is proposed. Our main result is a new, simple semi-analytical model for the trailing bubble drag. Additionally, the equilibrium separation distance between bubbles is predicted. The proposed models agree quantitatively with reported data for 50 ≤ Re ≤ 200, down to small distances between bubbles. The relative average error for the trailing bubble drag, Er, is found to be in the range 1.1 ≤ Er ≤ 1.7, i.e., it is of the same order as the analytical predictions in the literature.
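To make the drag-reduction mechanism concrete, the sketch below combines the classical high-Reynolds-number drag law for a single clean spherical bubble, C_D = (48/Re)(1 - 2.211/sqrt(Re)) (Levich-Moore), with a crude wake-deficit correction for the trailing bubble. The 1/s deficit law and the constant k are placeholder assumptions for illustration only, not the semi-analytical model actually derived in the paper.

```python
import math

def cd_single(re: float) -> float:
    """Levich-Moore drag coefficient of a single spherical bubble."""
    return 48.0 / re * (1.0 - 2.211 / math.sqrt(re))

def cd_trailing(re: float, s: float, k: float = 2.0) -> float:
    """Trailing-bubble drag referred to the free-stream velocity U, with
    an assumed wake velocity u_wake/U = k/s at separation s (in radii)."""
    deficit = min(k / s, 0.9)               # keep the placeholder model sane
    # drag ~ C_D(Re based on relative velocity) * (relative velocity / U)^2
    return cd_single(re * (1 - deficit)) * (1 - deficit) ** 2

re = 100.0
print(f"single bubble:  C_D = {cd_single(re):.4f}")
for s in (3.0, 5.0, 10.0, 20.0):            # separation in bubble radii
    print(f"s = {s:>4.0f} radii: trailing C_D = {cd_trailing(re, s):.4f}")
```

As expected from the highlights, the trailing bubble's effective drag falls well below the single-bubble value at close separations and recovers toward it as the bubbles move apart.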
KITSCH AND THE SUSTAINABLE DEVELOPMENT OF REGIONS THAT HAVE A LARGE NUMBER OF RELIGIOUS SETTLEMENTS
Directory of Open Access Journals (Sweden)
ENEA CONSTANTA
2016-06-01
We live in a world of contemporary kitsch, a world that merges the authentic and the false, where good taste often meets bad taste. The phenomenon is found everywhere: in art, in cheap literature, in media productions, in shows, in street dialogue, in homes, in politics; in other words, in everyday life. Kitsch has also entered tourism directly, and can be identified in all forms of tourism worldwide, but especially in religious tourism and pilgrimage, which have seen unexpected growth in recent years. This paper analyzes the evolution of religious tourist traffic and the ability of religious tourism destinations to remain competitive despite these problems: to attract visitors and earn their loyalty, to remain unique in cultural terms, and to stay in permanent balance with the environment, taking into account that the religious environment has been invaded by kitsch, which mixes dangerously and disgracefully with authentic spirituality. How trade, and more precisely the commercial components of kitsch, affects the environment of religious tourism is highlighted on the basis of a survey of the major monastic ensembles in northern Oltenia. The research objectives were, on the one hand, the contributions and effects of the high number of visitors on the regions that hold religious sites, and on the other hand, the weight and effects of the commercial activity, whether authentic or kitsch, carried out in or near the monastic establishments of those regions. The study covered the northern region of Oltenia, where tourism demand is predominantly oriented toward religious tourism.
Secondary organic aerosol formation from a large number of reactive man-made organic compounds
Energy Technology Data Exchange (ETDEWEB)
Derwent, Richard G., E-mail: r.derwent@btopenworld.com [rdscientific, Newbury, Berkshire (United Kingdom); Jenkin, Michael E. [Atmospheric Chemistry Services, Okehampton, Devon (United Kingdom); Utembe, Steven R.; Shallcross, Dudley E. [School of Chemistry, University of Bristol, Bristol (United Kingdom); Murrells, Tim P.; Passant, Neil R. [AEA Environment and Energy, Harwell International Business Centre, Oxon (United Kingdom)
2010-07-15
A photochemical trajectory model has been used to examine the relative propensities of a wide variety of volatile organic compounds (VOCs) emitted by human activities to form secondary organic aerosol (SOA) under one set of highly idealised conditions representing northwest Europe. This study applied a detailed speciated VOC emission inventory and the Master Chemical Mechanism version 3.1 (MCM v3.1) gas-phase chemistry, coupled with an optimised representation of gas-aerosol absorptive partitioning of 365 oxygenated chemical reaction product species. In all, SOA formation was estimated for the atmospheric oxidation of 113 emitted VOCs. A number of aromatic compounds, together with some alkanes and terpenes, showed significant propensities to form SOA. When these propensities were folded into the detailed speciated emission inventory, 15 organic compounds together accounted for 97% of the SOA formation potential of UK man-made VOC emissions, and 30 emission source categories accounted for 87% of this potential. After road transport and the chemical industry, SOA formation was dominated by the solvents sector, which accounted for 28% of the SOA formation potential.
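The absorptive-partitioning step can be stated compactly. In the standard Pankow/Odum framework (which this sketch assumes; the paper's optimised representation may differ in detail), a product with partitioning coefficient Kp (m³ µg⁻¹) in the presence of absorbing organic aerosol mass M0 (µg m⁻³) has condensed-phase fraction F = Kp·M0/(1 + Kp·M0); summing F-weighted yields over the product species is what turns gas-phase chemistry into an SOA estimate. Values below are illustrative.

```python
def particle_fraction(kp: float, m0: float) -> float:
    """Condensed-phase fraction F = Kp*M0 / (1 + Kp*M0)."""
    return kp * m0 / (1.0 + kp * m0)

# Illustrative: Kp in m^3/ug, absorbing aerosol mass M0 = 10 ug/m^3.
for kp in (0.001, 0.01, 0.1):
    print(f"Kp={kp}: particle-phase fraction = "
          f"{particle_fraction(kp, 10.0):.3f}")
```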
Normal zone detectors for a large number of inductively coupled coils
International Nuclear Information System (INIS)
Owen, E.W.; Shimer, D.W.
1983-01-01
In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this paper uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent
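The detection idea can be put schematically in a few lines of linear algebra: predict the purely inductive part of each coil voltage from the (known) inductance matrix and the measured current ramps, and attribute any residual voltage to a resistive normal zone. The numbers below are invented for illustration, and the sketch ignores the bridge circuits' practical role of cancelling the inductive terms in hardware.

```python
import numpy as np

M = np.array([[1.0, 0.3, 0.1],
              [0.3, 0.8, 0.2],
              [0.1, 0.2, 1.2]])        # self/mutual inductances (H), assumed
dIdt = np.array([5.0, -2.0, 1.0])      # measured current ramps (A/s)
I = np.array([1000.0, 1000.0, 1000.0]) # coil currents (A)

v_inductive = M @ dIdt                 # what a fully superconducting
                                       # system should read
v_measured = v_inductive + np.array([0.0, 0.05, 0.0])  # a 50 mV normal
                                       # zone hidden in coil 2

v_resistive = v_measured - v_inductive
zone = int(np.argmax(np.abs(v_resistive)))
print(f"normal zone in coil {zone + 1}, "
      f"R ~ {v_resistive[zone] / I[zone] * 1e6:.1f} micro-ohm")
```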
The Love of Large Numbers: A Popularity Bias in Consumer Choice.
Powell, Derek; Yu, Jingqi; DeWolf, Melissa; Holyoak, Keith J
2017-10-01
Social learning, the ability to learn from observing the decisions of other people and the outcomes of those decisions, is fundamental to human evolutionary and cultural success. The Internet now provides social evidence on an unprecedented scale. However, properly utilizing this evidence requires a capacity for statistical inference. We examined how people's interpretation of online review scores is influenced by the number of reviews, a potential indicator both of an item's popularity and of the precision of the average review score. Our task was designed to pit statistical information against social information. We modeled the behavior of an "intuitive statistician" using empirical prior information from millions of reviews posted on Amazon.com and then compared the model's predictions with the behavior of experimental participants. Under certain conditions, people preferred a product with more reviews to one with fewer reviews even though the statistical model indicated that the latter was likely to be of higher quality than the former. Overall, participants' judgments suggested that they failed to make meaningful statistical inferences.
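A minimal version of such an "intuitive statistician" is mean shrinkage toward a prior: the observed average is weighted by the review count while the prior mean acts as m pseudo-reviews. The prior values below are illustrative stand-ins, not the empirical Amazon prior estimated in the paper.

```python
def shrunk_score(mean_obs: float, n_reviews: int,
                 prior_mean: float = 3.9, m: float = 10.0) -> float:
    """Posterior-mean style estimate: prior acts as m pseudo-reviews."""
    return (m * prior_mean + n_reviews * mean_obs) / (m + n_reviews)

# A high mean from six reviews can still outrank a slightly lower mean
# from five hundred reviews under this model:
print(shrunk_score(4.8, n_reviews=6))    # ~4.24
print(shrunk_score(4.2, n_reviews=500))  # ~4.19
```

This is the kind of inference the experiments probe: the statistical model sometimes favors the item with fewer reviews, while participants tended to follow the larger review count.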
Normal zone detectors for a large number of inductively coupled coils. Revision 1
International Nuclear Information System (INIS)
Owen, E.W.; Shimer, D.W.
1983-01-01
In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this paper uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent. The effect on accuracy of changes in the system parameters is discussed
Normal zone detectors for a large number of inductively coupled coils
International Nuclear Information System (INIS)
Owen, E.W.; Shimer, D.W.
1983-01-01
In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this report uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent. An example of the detector design is given for four coils with realistic parameters. The effect on accuracy of changes in the system parameters is discussed.
Alboruto, Venus M.
2017-05-01
The study aimed to determine the effectiveness of using Strategic Intervention Materials (SIMs) as an innovative teaching practice in managing large Grade Eight science classes to raise student performance in terms of science process skills development and mastery of science concepts. Utilizing an experimental research design with two purposefully chosen groups of participants, a significant difference was found in the performance of the experimental and control groups based on actual class observation and written tests on science process skills, with a p-value of 0.0360 in favor of the experimental class. Further, results of written pre-tests and post-tests on science concepts showed that the experimental group, with a mean of 24.325 (SD = 3.82), performed better than the control group, with a mean of 20.58 (SD = 4.94), with a registered p-value of 0.00039. Therefore, the use of SIMs significantly contributed to the mastery of science concepts and the development of science process skills. Based on the findings, the following recommendations are offered: 1. grade eight science teachers should use or adapt the SIMs used in this study to improve their students' performance; 2. training workshops on developing SIMs must be conducted to help teachers develop SIMs to be used in their classes; 3. school administrators must allocate funds for the development and reproduction of SIMs to be used by the students in their school; and 4. every division should have a repository of SIMs for easy access by teachers in the entire division.
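The reported concept-test comparison can be approximately re-run from the summary statistics alone with a two-sample t-test. The abstract does not state the group sizes, so n = 40 per class below is an assumed value and the resulting p-value is only illustrative (it lands near the reported 0.00039 for plausible class sizes).

```python
from scipy.stats import ttest_ind_from_stats

# Means and SDs taken from the abstract; group sizes are ASSUMED (n=40).
t, p = ttest_ind_from_stats(mean1=24.325, std1=3.82, nobs1=40,
                            mean2=20.58,  std2=4.94, nobs2=40)
print(f"t = {t:.2f}, p = {p:.5f}")
```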
Gilbert, Jack A; Field, Dawn; Huang, Ying; Edwards, Rob; Li, Weizhong; Gilna, Paul; Joint, Ian
2008-08-22
Sequencing the expressed genetic information of an ecosystem (metatranscriptome) can provide information about the response of organisms to varying environmental conditions. Until recently, metatranscriptomics has been limited to microarray technology and random cloning methodologies. The application of high-throughput sequencing technology is now enabling access to both known and previously unknown transcripts in natural communities. We present a study of a complex marine metatranscriptome obtained from random whole-community mRNA using the GS-FLX Pyrosequencing technology. Eight samples, four DNA and four mRNA, were processed from two time points in a controlled coastal ocean mesocosm study (Bergen, Norway) involving an induced phytoplankton bloom producing a total of 323,161,989 base pairs. Our study confirms the finding of the first published metatranscriptomic studies of marine and soil environments that metatranscriptomics targets highly expressed sequences which are frequently novel. Our alternative methodology increases the range of experimental options available for conducting such studies and is characterized by an exceptional enrichment of mRNA (99.92%) versus ribosomal RNA. Analysis of corresponding metagenomes confirms much higher levels of assembly in the metatranscriptomic samples and a far higher yield of large gene families with >100 members, approximately 91% of which were novel. This study provides further evidence that metatranscriptomic studies of natural microbial communities are not only feasible, but when paired with metagenomic data sets, offer an unprecedented opportunity to explore both structure and function of microbial communities--if we can overcome the challenges of elucidating the functions of so many never-seen-before gene families.
Tiselj, Iztok
2014-12-01
Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ~2300 wall units long and ~750 wall units wide, the size taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flow, was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) that agree within 1%-2%. Similar agreement is observed for the Pr = 1 temperature fields and also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations, of the standard and large computational domains at Pr = 0.01 show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play a visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of large scales. These large thermal structures represent some kind of an echo of the large-scale velocity structures: the highest temperature-velocity correlations are not observed between the instantaneous temperatures and …
What caused a large number of fatalities in the Tohoku earthquake?
Ando, M.; Ishida, M.; Nishikawa, Y.; Mizuki, C.; Hayashi, Y.
2012-04-01
The Mw 9.0 earthquake caused about 20,000 deaths and missing persons in northeastern Japan. In the 115 years prior to this event, three historical tsunamis struck the region, one of which was a "tsunami earthquake" that resulted in a death toll of 22,000. Since then, numerous breakwaters have been constructed along the entire northeastern coast, tsunami evacuation drills have been carried out, and hazard maps have been distributed to local residents in numerous communities. Despite these construction and preparedness efforts, however, the March 11 Tohoku earthquake caused numerous fatalities. The strong shaking lasted three minutes or longer, so all residents recognized it as the strongest and longest earthquake they had ever experienced. The tsunami inundated an enormous area, about 560 km² across 35 cities along the coast of northeast Japan. To find out the reasons behind the high number of fatalities due to the March 11 tsunami, we interviewed 150 tsunami survivors at public evacuation shelters in 7 cities, mainly in Iwate prefecture, in mid-April and early June 2011. Interviews lasted about 30 min or longer and focused on the survivors' evacuation behaviors and those they had observed. On the basis of the interviews, we found that residents' decisions not to evacuate immediately were partly due to, or influenced by, earthquake science results. Below are some of the factors that affected residents' decisions. 1. Earthquake hazard assessments turned out to be incorrect: expected earthquake magnitudes and resultant hazards in northeastern Japan, as assessed and publicized by the government, were significantly smaller than the actual Tohoku earthquake. 2. Many residents did not receive accurate tsunami warnings: the first tsunami warnings reported heights that were too small compared with the actual tsunami. 3. Previous frequent warnings with overestimated tsunami heights influenced the behavior of the residents. 4. Many local residents above 55 years old experienced
Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M
2006-04-21
Genetic epidemiologists have taken up the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation of large numbers of genetic and environmental predictors to disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis; neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN); and several non-parametric methods, which include the set association approach, the combinatorial partitioning method (CPM), the restricted partitioning method (RPM), the multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. They are therefore less useful than the non-parametric methods for association studies with large numbers of predictor variables. GPNN, on the other hand, may be a useful approach to select and model important predictors, but its ability to select the important effects in the presence of large numbers of predictors remains to be examined. Both the set association approach and the random forests approach can handle a large number of predictors and are useful in reducing these predictors to a subset with an important contribution to disease. The combinatorial methods give more insight into combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses we conclude that to approach genetic association
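As a hedged illustration of the random forests screening idea mentioned above (not code from the commentary), the sketch below ranks a large number of synthetic SNP markers by variable importance and recovers two hypothetical causal loci; all parameters are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n, p = 500, 2000                      # many more markers than subjects
X = rng.integers(0, 3, size=(n, p))   # SNP genotypes coded 0/1/2
# two hypothetical causal markers with additive effects on disease risk
logit = 0.8 * X[:, 10] + 0.8 * X[:, 20] - 1.5
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:10]
print("top-ranked markers:", top)     # the causal indices 10 and 20 should rank high
```

This is exactly the "reduce a large predictor set to an important subset" use case the commentary attributes to random forests.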
Dogan, Eda; Hearst, R Jason; Ganapathisubramani, Bharathram
2017-03-13
A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to 'simulate' high Reynolds number wall-turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows, as they demonstrate that these interactions can be studied at typical laboratory scales. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
Matzen, Laura E; Benz, Zachary O; Dixon, Kevin R; Posey, Jamie; Kroger, James K; Speed, Ann E
2010-05-01
Raven's Progressive Matrices is a widely used test for assessing intelligence and reasoning ability (Raven, Court, & Raven, 1998). Since the test is nonverbal, it can be applied to many different populations and has been used all over the world (Court & Raven, 1995). However, the sets developed by Raven contain relatively few matrices, which limits their use in experiments requiring large numbers of stimuli. For the present study, we analyzed the types of relations that appear in Raven's original Standard Progressive Matrices (SPMs) and created a software tool that can combine the same types of relations according to parameters chosen by the experimenter, producing very large numbers of matrix problems with specific properties. We then conducted a norming study in which the generated matrices were compared with the actual SPMs. This study showed that the generated matrices both covered and expanded on the range of problem difficulties provided by the SPMs.
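A toy numeric analogue of such a parameterized generator (the actual tool combines the figural relation types of the SPMs; here two additive relations, a row step and a column step, generate arbitrarily many problems):

```python
import random

def generate_matrix(row_step_range=(1, 4), col_step_range=(1, 4), seed=None):
    """Generate one 3x3 numeric matrix problem governed by two additive
    relations, with the bottom-right cell removed and distractor answers."""
    rng = random.Random(seed)
    start = rng.randint(1, 9)
    r = rng.randint(*row_step_range)        # increment applied down each row
    c = rng.randint(*col_step_range)        # increment applied across each column
    grid = [[start + i * r + j * c for j in range(3)] for i in range(3)]
    answer = grid[2][2]
    grid[2][2] = None                       # the cell the participant must complete
    options = {answer, answer + r, answer - c, answer + r + c}
    return grid, answer, sorted(options)

grid, answer, options = generate_matrix(seed=42)
print(grid, "answer:", answer, "options:", options)
```

Varying the relation types and step ranges is what lets such a generator target specific difficulty levels.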
Directory of Open Access Journals (Sweden)
Bao Wang
2014-01-01
We study the strong law of large numbers for the frequencies of occurrence of states and ordered couples of states for countable Markov chains indexed by an infinite tree with uniformly bounded degree, which extends the corresponding results for countable Markov chains indexed by a Cayley tree and generalizes the related results for finite Markov chains indexed by a uniformly bounded tree.
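A hedged numerical illustration of the flavour of this result (a 2-state chain run down a complete binary tree, a special case of a uniformly bounded tree; the transition matrix and depth are arbitrary choices):

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.2, 0.8]])              # transition matrix of a 2-state chain
rng = np.random.default_rng(0)

# Grow a complete binary tree level by level; each vertex draws its state
# from P conditioned on its parent's state.
states = [np.array([0])]                # root in state 0
for level in range(1, 18):
    parents = np.repeat(states[-1], 2)              # two children per vertex
    u = rng.random(parents.size)
    states.append((u < P[parents, 1]).astype(int))  # P[s, 1] = prob of state 1

all_states = np.concatenate(states)
freq = np.bincount(all_states) / all_states.size
pi = np.array([0.4, 0.6])               # stationary distribution of P
print(freq, pi)                         # empirical frequencies approach pi
```

The law of large numbers in the abstract asserts exactly this kind of convergence of state frequencies along a single realization on the tree.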
On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System
Makki, Behrooz; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim
2015-01-01
In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. We study the effect of spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.
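A Monte Carlo sketch of the kind of computation involved (uncorrelated Rayleigh fading, no HARQ; the SNR, rate and 1% outage target are illustrative, not the paper's settings):

```python
import numpy as np

def outage_prob(nt, nr, snr_db, rate, trials=20000, rng=None):
    """Monte Carlo outage probability of an uncorrelated Rayleigh MIMO link."""
    rng = rng or np.random.default_rng(0)
    snr = 10 ** (snr_db / 10)
    count = 0
    for _ in range(trials):
        H = (rng.standard_normal((nr, nt))
             + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        cap = np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T).real)
        count += cap < rate                 # outage: capacity below target rate
    return count / trials

# smallest symmetric antenna count meeting a 1% outage constraint
for n in range(1, 9):
    p = outage_prob(n, n, snr_db=5, rate=4)
    print(n, p)
    if p <= 0.01:
        break
```

Spatial correlation would enter by pre- and post-multiplying H with correlation matrix square roots, which degrades the outage performance for a fixed antenna count.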
International Nuclear Information System (INIS)
Bricteux, L.; Duponcheel, M.; Winckelmans, G.; Tiselj, I.; Bartosiewicz, Y.
2012-01-01
Highlights: ► We perform direct and hybrid large-eddy simulations of high-Reynolds-number, low-Prandtl-number turbulent wall-bounded flows with heat transfer. ► We use state-of-the-art numerical methods with low energy dissipation and low dispersion. ► We use recent multiscale subgrid-scale models. ► Important results concerning the establishment of a near-wall modelling strategy in RANS are provided. ► The turbulent Prandtl number predicted by our simulations differs from that proposed by some correlations in the literature. - Abstract: This paper deals with the issue of modeling convective turbulent heat transfer of a liquid metal with a Prandtl number down to 0.01, which is the order of magnitude of lead-bismuth eutectic in a liquid-metal reactor. This work presents a DNS (direct numerical simulation) and a LES (large eddy simulation) of a channel flow at two different Reynolds numbers, and the results are analyzed in the framework of best-practice guidelines for RANS (Reynolds averaged Navier-Stokes) computations used in industrial applications. They primarily show that the turbulent Prandtl number concept should be used with care and that even recently proposed correlations may not be sufficient.
Variability: A Pernicious Hypothesis.
Noddings, Nel
1992-01-01
The hypothesis of greater male variability in test results is discussed in its historical context, and reasons feminists have objected to the hypothesis are considered. The hypothesis acquires political importance if it is considered that variability results from biological, rather than cultural, differences. (SLD)
Directory of Open Access Journals (Sweden)
FELICIA RAMONA BIRAU
2012-05-01
In this article, the concept of the capital market is analysed using the Fractal Market Hypothesis, which is a modern, complex and unconventional alternative to classical finance methods. The Fractal Market Hypothesis is in sharp opposition to the Efficient Market Hypothesis and explores the application of chaos theory and fractal geometry to finance. The Fractal Market Hypothesis rests on certain assumptions. Thus, it is emphasized that investors do not react immediately to the information they receive and, of course, may interpret that information differently. Also, the Fractal Market Hypothesis refers to the way that liquidity and investment horizons influence the behaviour of financial investors.
Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.
2013-04-01
The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
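For orientation, the OSEM family mentioned above accelerates a multiplicative expectation-maximization update; the sketch below shows the classic MLEM form on a toy system matrix (illustrative only; the paper's OE algorithm is a different, sampling-based reconstruction):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Classic MLEM update for emission tomography:
    lam <- lam / (A^T 1) * A^T (y / (A lam)).
    OSEM applies the same update over ordered subsets of the rows of A."""
    lam = np.ones(A.shape[1])
    sens = A.sum(axis=0)                  # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ lam                    # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        lam *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return lam

rng = np.random.default_rng(0)
A = rng.random((60, 30))                  # toy system matrix (detector bin x voxel)
x_true = rng.random(30)
y = rng.poisson(A @ x_true).astype(float) # Poisson measurements
print(np.round(mlem(A, y)[:5], 3), np.round(x_true[:5], 3))
```

The memory problem the abstract points to arises because A has (channels x voxels) entries; OE avoids storing and repeatedly applying this full matrix.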
Directory of Open Access Journals (Sweden)
KeeHyun Park
2015-01-01
In this paper, a multilayer secure biomedical data management system for managing a very large number of diverse personal health devices (PHDs) is proposed. The system has the following characteristics: it supports international standard communication protocols to achieve interoperability; it is integrated, in the sense that both a PHD communication system and a remote PHD management system work together as a single system; and it provides user/message authentication processes to securely transmit biomedical data measured by PHDs, based on the concept of a biomedical signature. Some experiments, including a stress test, have been conducted to show that the system proposed and constructed in this study performs very well even when a very large number of PHDs are used. For the stress test, up to 1,200 threads are made to represent the same number of PHD agents. The loss ratio of the ISO/IEEE 11073 messages in the normal system is as high as 14% when 1,200 PHD agents are connected. On the other hand, no message loss occurs in the multilayered system proposed in this study, which demonstrates the superiority of the multilayered system to the normal system with regard to heavy traffic.
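A minimal sketch of this kind of thread-based stress test, assuming a single bounded in-memory buffer standing in for the gateway (thread counts, buffer size and delays are illustrative, not the paper's setup):

```python
import queue
import threading
import time

N_AGENTS, MSGS_PER_AGENT = 1200, 10
inbox = queue.Queue(maxsize=500)           # bounded gateway buffer
lost, processed = 0, 0
lock = threading.Lock()

def agent(aid):
    global lost
    for seq in range(MSGS_PER_AGENT):
        try:
            inbox.put_nowait((aid, seq))   # message is dropped if the buffer is full
        except queue.Full:
            with lock:
                lost += 1

def server():
    global processed
    while True:
        try:
            inbox.get(timeout=0.5)
        except queue.Empty:
            return
        processed += 1
        time.sleep(0.0002)                 # per-message processing delay

srv = threading.Thread(target=server)
srv.start()
agents = [threading.Thread(target=agent, args=(a,)) for a in range(N_AGENTS)]
for t in agents:
    t.start()
for t in agents:
    t.join()
srv.join()
total = N_AGENTS * MSGS_PER_AGENT
print(f"processed {processed}, lost {lost} ({lost / total:.1%} of {total})")
```

A multilayered design, in the spirit of the paper, would insert intermediate buffering layers so that bursts from many agents never overflow a single bounded queue.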
Qu, Long; Nettleton, Dan; Dekkers, Jack C M
2012-12-01
Given a large number of t-statistics, we consider the problem of approximating the distribution of noncentrality parameters (NCPs) by a continuous density. This problem is closely related to the control of false discovery rates (FDR) in massive hypothesis testing applications, e.g., microarray gene expression analysis. Our methodology is similar to, but improves upon, the existing approach by Ruppert, Nettleton, and Hwang (2007, Biometrics, 63, 483-495). We provide parametric, nonparametric, and semiparametric estimators for the distribution of NCPs, as well as estimates of the FDR and local FDR. In the parametric situation, we assume that the NCPs follow a distribution that leads to an analytically available marginal distribution for the test statistics. In the nonparametric situation, we use convex combinations of basis density functions to estimate the density of the NCPs. A sequential quadratic programming procedure is developed to maximize the penalized likelihood. The smoothing parameter is selected with the approximate network information criterion. A semiparametric estimator is also developed to combine both parametric and nonparametric fits. Simulations show that, under a variety of situations, our density estimates are closer to the underlying truth and our FDR estimates are improved compared with alternative methods. Data-based simulations and the analyses of two microarray datasets are used to evaluate the performance in realistic situations. © 2012, The International Biometric Society.
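For context, the classical p-value route to FDR estimation that such NCP-density methods improve upon can be sketched as follows (Benjamini-Hochberg with Storey's pi0 plug-in on simulated noncentral t-statistics; this is not the authors' estimator):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
m, m1, df = 5000, 500, 10
ncp = rng.normal(3, 1, size=m1)                 # noncentrality parameters of the non-nulls
t = np.concatenate([stats.t.rvs(df, size=m - m1, random_state=rng),
                    stats.nct.rvs(df, ncp, random_state=rng)])
p = 2 * stats.t.sf(np.abs(t), df)               # two-sided p-values

lam = 0.5                                       # Storey's pi0 estimator
pi0 = min(1.0, np.mean(p > lam) / (1 - lam))

order = np.argsort(p)                           # Benjamini-Hochberg with pi0 plug-in
q = pi0 * m * p[order] / np.arange(1, m + 1)
q = np.minimum.accumulate(q[::-1])[::-1]        # enforce monotone q-values
print(f"estimated pi0 = {pi0:.3f}, discoveries at FDR 5%: {(q <= 0.05).sum()}")
```

Estimating the full NCP density, as in the paper, additionally yields local FDR estimates rather than only tail-area FDR control.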
DEFF Research Database (Denmark)
Lazarov, Boyan Stefanov; Ditlevsen, Ove
2005-01-01
The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elastically within the yield limits and ideally plastically outside them, without accumulating eigenstresses. Within the elastic domain the frame is modeled as a linearly damped oscillator. The white noise excitation acts on the mass of the first floor, making the movement of the elastic bottom floors simulate a ground...
Cronin, J. W.; Frisch, H. J.; Shochet, M. J.; Boymond, J. P.; Mermod, R.; Piroue, P. A.; Sumner, R. L.
1974-07-15
In an experiment at the Fermi National Accelerator Laboratory we have compared the production of large transverse momentum hadrons from targets of W, Ti, and Be bombarded by 300 GeV protons. The hadron yields were measured at 90 degrees in the proton-nucleon c.m. system with a magnetic spectrometer equipped with two Cerenkov counters and a hadron calorimeter. The production cross-sections have a dependence on the atomic number A that grows with the transverse momentum P⊥, eventually leveling off proportional to A^1.1.
Energy Technology Data Exchange (ETDEWEB)
Andersson, Bertil; Holmberg, Rikard
2010-08-15
This report presents a summary of experience from a large number of construction inspections of wind power projects. The working method is based on the collection of construction experience in the form of questionnaires. The questionnaires were supplemented by a number of in-depth interviews to understand in more detail what is perceived to be a problem and whether there were suggestions for improvements. The results in this report are based on inspection protocols from 174 wind turbines, which corresponds to about one-third of the power plants built in the period. In total the questionnaires included 4683 inspection remarks as well as about one hundred free-text comments. 52 of the 174 inspected power stations were rejected, corresponding to 30%. It has not been possible to identify any over-represented type of remark as a main cause of rejection; the rejection is usually based on a total number of remarks that is too large. The average number of remarks for a power plant is 27. Most power stations have between 20 and 35 remarks. The most common remarks concern shortcomings in marking and documentation. These are easily adjusted and may be regarded as less serious. There are, however, a number of remarks which are recurrent and quite serious, mainly regarding gearboxes, education and lightning protection. Usually these are also easily adjusted, but the consequences if they are not corrected can be very large. The consequences may be either a shortened life of expensive components, e.g. oil problems in gearboxes, or an increased probability of serious accidents, e.g. maladjusted lightning protection. In the report, comparisons between power stations of various construction periods, sizes, suppliers, geographies and topographies are also presented. The general conclusion is that the differences are small. The results of the evaluation of the questionnaires correspond well with the results of the in-depth interviews with clients. The problem that clients agreed upon as the greatest is the lack
Directory of Open Access Journals (Sweden)
Daniel Pettersson
2016-01-01
later the growing importance of transnational agencies and international, regional and national assessments. How to reference this article: Pettersson, D., Popkewitz, T. S., & Lindblad, S. (2016). On the Use of Educational Numbers: Comparative Constructions of Hierarchies by Means of Large-Scale Assessments. Espacio, Tiempo y Educación, 3(1), 177-202. doi: http://dx.doi.org/10.14516/ete.2016.003.001.10
Bergfelder-Drüing, Sarah; Grosse-Brinkhaus, Christine; Lind, Bianca; Erbe, Malena; Schellander, Karl; Simianer, Henner; Tholen, Ernst
2015-01-01
The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between the Large White and Landrace breeds were detected. PMID:25781935
Robust and distributed hypothesis testing
Gül, Gökhan
2017-01-01
This book generalizes and extends the available theory in robust and decentralized hypothesis testing. In particular, it presents a robust test for modeling errors which is independent of the assumptions that a sufficiently large number of samples is available and that the distance is the KL-divergence. Here, the distance can be chosen from a much more general model, which includes the KL-divergence as a very special case. This is then extended by various means. A minimax robust test that is robust against both outliers as well as modeling errors is presented. Minimax robustness properties of the given tests are also explicitly proven for fixed sample size and sequential probability ratio tests. The theory of robust detection is extended to robust estimation, and the theory of robust distributed detection is extended to classes of distributions which are not necessarily stochastically bounded. It is shown that the quantization functions for the decision rules can also be chosen as non-monotone. Finally, the boo...
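As a concrete, hedged illustration of the minimax-robust flavour of test discussed here (a classic Huber-style clipped likelihood ratio between two Gaussian hypotheses, not necessarily the book's construction):

```python
import numpy as np
from scipy import stats

def clipped_lr_test(x, c_low=0.5, c_high=2.0, threshold=1.0):
    """Censored likelihood-ratio test between H0: N(0,1) and H1: N(1,1).
    The pointwise LR is clipped to [c_low, c_high] so that gross outliers
    have bounded influence on the decision."""
    lr = stats.norm.pdf(x, loc=1) / stats.norm.pdf(x, loc=0)
    lr_clipped = np.clip(lr, c_low, c_high)
    statistic = np.prod(lr_clipped)
    return statistic, statistic > threshold      # decide H1 if above threshold

rng = np.random.default_rng(0)
x = rng.normal(1, 1, size=20)
x[0] = 50.0                                      # a gross outlier
print(clipped_lr_test(x))                        # the outlier cannot dominate the decision
```

Without clipping, the single corrupted sample would drive the likelihood ratio to an extreme value; the censored statistic keeps the test's error probabilities bounded over a neighbourhood of the nominal models.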
Physiopathological Hypothesis of Cellulite
de Godoy, José Maria Pereira; de Godoy, Maria de Fátima Guerreiro
2009-01-01
A series of questions are asked concerning this condition, including as regards its name, the consensus about the histopathological findings, the physiological hypothesis and the treatment of the disease. We established a hypothesis for cellulite and confirmed that the clinical response is compatible with this hypothesis. Hence this novel approach brings a modern physiological concept with a physiopathologic basis and clinical proof of the hypothesis. We emphasize that the choice of patient, correct diagnosis of cellulite and the technique employed are fundamental to success. PMID:19756187
Factors associated with self-reported number of teeth in a large national cohort of Thai adults
Directory of Open Access Journals (Sweden)
Yiengprugsawan Vasoontara
2011-11-01
Background: Oral health in later life results from an individual's lifelong accumulation of experiences at the personal, community and societal levels. There is little information relating oral health outcomes to risk factors in Asian middle-income settings such as Thailand today. Methods: Data derived from a cohort of 87,134 adults enrolled in Sukhothai Thammathirat Open University who completed self-administered questionnaires in 2005. Cohort members were aged between 15 and 87 years and resided throughout Thailand. This is a large study of self-reported number of teeth among Thai adults. Bivariate and multivariate logistic regressions were used to analyse factors associated with self-reported number of teeth. Results: After adjusting for covariates, being female (OR = 1.28), older age (OR = 10.6), having low income (OR = 1.45), having lower education (OR = 1.33), and being a lifetime urban resident (OR = 1.37) were statistically significantly associated with self-reported number of teeth. Conclusions: This study addresses the gap in knowledge on factors associated with self-reported number of teeth. The promotion of healthy childhoods and adult lifestyles are important public health interventions to increase tooth retention in middle and older age.
Life Origination Hydrate Hypothesis (LOH-Hypothesis
Directory of Open Access Journals (Sweden)
Victor Ostrovskii
2012-01-01
The paper develops the Life Origination Hydrate Hypothesis (LOH-hypothesis), according to which living-matter simplest elements (LMSEs), which are N-bases, riboses, nucleosides, nucleotides, DNA- and RNA-like molecules, amino-acids, and proto-cells, repeatedly originated on the basis of thermodynamically controlled, natural, and inevitable processes governed by universal physical and chemical laws from CH4, niters, and phosphates under the Earth's surface or seabed within the crystal cavities of the honeycomb methane-hydrate structure at low temperatures; the chemical processes passed slowly through all successive chemical steps in the direction that is determined by a gradual decrease in the Gibbs free energy of reacting systems. The hypothesis formulation method is based on the thermodynamic directedness of natural movement and consists of an attempt to mentally backtrack on the progression of nature and thus reveal principal milestones along its route. The changes in Gibbs free energy are estimated for different steps of the living-matter origination process; special attention is paid to the processes of proto-cell formation. Just the occurrence of the gas-hydrate periodic honeycomb matrix filled with LMSEs almost completely in its final state accounts for size limitation in the DNA functional groups and the nonrandom location of N-bases in the DNA chains. The slowness of the low-temperature chemical transformations and their "thermodynamic front" guide the gross process of living matter origination and its successive steps. It is shown that the hypothesis is thermodynamically justified and testable and that many observed natural phenomena count in its favor.
International Nuclear Information System (INIS)
Guerrero, M; Li, X Allen
2003-01-01
Numerous studies of early-stage breast cancer treated with breast conserving surgery (BCS) and radiotherapy (RT) have been published in recent years. External beam radiotherapy (EBRT) and/or brachytherapy (BT) with different fractionation schemes are currently used. Present RT practice is largely based on empirical experience and lacks a reliable modelling tool to compare different RT modalities or to design new treatment strategies. The purpose of this work is to derive a plausible set of radiobiological parameters that can be used for RT treatment planning. The derivation is based on existing clinical data and is consistent with the analysis of a large number of published clinical studies on early-stage breast cancer. A large number of published clinical studies on the treatment of early breast cancer with BCS plus RT (including whole-breast EBRT with or without a boost to the tumour bed, whole-breast EBRT alone, and brachytherapy alone) and RT alone are compiled and analysed. The linear-quadratic (LQ) model is used in the analysis. Three of these clinical studies are selected to derive a plausible set of LQ parameters. The potential doubling time T_pot is set a priori in the derivation according to in vitro measurements from the literature. The impact of considering a lower or higher T_pot is investigated. The effects of inhomogeneous dose distributions are considered using clinically representative dose-volume histograms. The derived LQ parameters are used to compare a large number of clinical studies using different regimes (e.g., RT modality and/or different fractionation schemes with different prescribed doses) in order to validate their applicability. The values of the equivalent uniform dose (EUD) and biologically effective dose (BED) are used as a common metric to compare the biological effectiveness of each treatment regime. We have obtained a plausible set of radiobiological parameters for breast cancer. This set of parameters is consistent with in vitro
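The two comparison metrics named above have standard closed forms; a minimal sketch (the alpha/beta ratio of 4 Gy and the schedules are illustrative assumptions, not values derived in the paper):

```python
import numpy as np

def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose of the LQ model (no repopulation term):
    BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

def geud(doses, volumes, a):
    """Generalized EUD over a dose-volume histogram: (sum v_i * D_i^a)^(1/a)."""
    v = np.asarray(volumes, dtype=float)
    v /= v.sum()
    return (np.sum(v * np.asarray(doses, dtype=float) ** a)) ** (1 / a)

# comparing two hypothetical whole-breast schedules on a common BED scale
print(bed(25, 2.0, alpha_beta=4.0))    # 25 x 2.0 Gy
print(bed(16, 2.66, alpha_beta=4.0))   # 16 x 2.66 Gy (hypofractionated)
print(geud([48, 50, 52], [0.2, 0.6, 0.2], a=-10))  # toy inhomogeneous target DVH
```

Putting every regimen on the BED/EUD scale is what allows the paper to test one parameter set against many fractionation schemes at once.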
Krahulcová, Anna; Trávnícek, Pavel; Krahulec, František; Rejmánek, Marcel
2017-04-01
Aesculus L. (horse chestnut, buckeye) is a genus of 12-19 extant woody species native to the temperate Northern Hemisphere. This genus is known for unusually large seeds among angiosperms. While chromosome counts are available for many Aesculus species, only one has had its genome size measured. The aim of this study is to provide more genome size data and analyse the relationship between genome size and seed mass in this genus. Chromosome numbers in root tip cuttings were confirmed for four species and reported for the first time for three additional species. Flow cytometric measurements of 2C nuclear DNA values were conducted on eight species, and mean seed mass values were estimated for the same taxa. The same chromosome number, 2n = 40, was determined in all investigated taxa. Original measurements of 2C values for seven Aesculus species (eight taxa), added to just one reliable datum for A. hippocastanum, confirmed the notion that the genome size in this genus with relatively large seeds is surprisingly low, ranging from 0.955 pg 2C⁻¹ in A. parviflora to 1.275 pg 2C⁻¹ in A. glabra var. glabra. The chromosome number of 2n = 40 seems to be conclusively the universal 2n number for non-hybrid species in this genus. Aesculus genome sizes are relatively small, not only within its own family, Sapindaceae, but also within woody angiosperms. The genome sizes seem to be distinct and non-overlapping among the four major Aesculus clades. These results provide extra support for the most recent reconstruction of Aesculus phylogeny. The correlation between the 2C values and seed masses in the examined Aesculus species is slightly negative and not significant. However, when the four major clades are treated separately, there is a consistent positive association between larger genome size and larger seed mass within individual lineages. © The Author 2017. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved.
Directory of Open Access Journals (Sweden)
Margareth Regina Dibo
2013-07-01
Introduction: Here, we evaluated sweeping methods used to estimate the number of immature Aedes aegypti in large containers. Methods: Instar III/IV larvae and pupae, at a 9:1 ratio, were placed in three types of containers, each one with three different water levels. Two sweeping methods were tested: water-surface sweeping and five-sweep netting. The data were analyzed using linear regression. Results: The five-sweep netting technique was more suitable for drums and water tanks, while the water-surface sweeping method provided the best results for swimming pools. Conclusions: Both sweeping methods are useful tools in epidemiological surveillance programs for the control of Aedes aegypti.
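An illustrative sketch of the calibration idea behind such sweeping estimates (entirely synthetic numbers; the study's regressions relate catches to known stocked totals in this spirit):

```python
import numpy as np

# hypothetical calibration data: larvae caught in five sweeps versus the
# known number stocked in the container
sweeps = np.array([12, 25, 40, 55, 70, 88])
stocked = np.array([100, 200, 300, 400, 500, 600])

slope, intercept = np.polyfit(sweeps, stocked, deg=1)   # least-squares line
print(f"estimated total = {slope:.1f} * sweep count + {intercept:.1f}")
print("predicted abundance for 33 caught:", slope * 33 + intercept)
```

Once fitted per container type and water level, such a line converts a quick field sweep count into an abundance estimate.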
International Nuclear Information System (INIS)
Karmarkar, M.G.; Sreeramulu, K.; Kulshreshta, P.K.
2003-01-01
Accelerator multipole magnets are characterized by high field gradients and are powered with relatively high-current excitation coils. Due to space limitations in the magnet core/poles, a compact coil geometry is also necessary. The coils are made of several insulated turns using hollow copper conductor. The high current densities in these require cooling with low-conductivity water. Additionally, during operation they are subjected to thermal fatigue stresses. A large number of coils (650 in total) having different geometries were required for all the multipole magnets, such as quadrupoles (QP) and sextupoles (SP). Improved techniques for winding, insulation and epoxy consolidation were developed in-house at M D Lab and all coils have been successfully made. The improved technology and production techniques adopted for the magnet coils, and their inspection, are briefly discussed in this paper. (author)
International Nuclear Information System (INIS)
Thompson, G.A.; Davies, H.M.; McDonald, N.
1985-01-01
A method termed product-selective blotting has been developed for screening large numbers of samples for enzyme activity. The technique is particularly well suited to detection of enzymes in native electrophoresis gels. The principle of the method was demonstrated by blotting samples from glutaminase or glutamate synthase reactions into an agarose gel embedded with ion-exchange resin under conditions favoring binding of product (glutamate) over substrates and other substances in the reaction mixture. After washes to remove these unbound substances, the product was measured using either fluorometric staining or radiometric techniques. Glutaminase activity in native electrophoresis gels was visualized by a related procedure in which substrates and products from reactions run in the electrophoresis gel were blotted directly into a resin-containing image gel. Considering the selective-binding materials available for use in the image gel, along with the possible detection systems, this method has potentially broad application
McConnell, R; Kolthammer, WS; Richerme, P; Müllers, A; Walz, J; Grzonka, D; Zielinski, M; Fitzakerley, D; George, MC; Hessels, EA; Storry, CH; Weel, M
2016-01-01
Lasers are used to control the production of highly excited positronium atoms (Ps*). The laser light excites Cs atoms to Rydberg states that have a large cross section for resonant charge-exchange collisions with cold trapped positrons. For each trial with 30 million trapped positrons, more than 700 000 of the created Ps* have trajectories near the axis of the apparatus, and are detected using Stark ionization. This number of Ps* is 500 times higher than realized in an earlier proof-of-principle demonstration (2004 Phys. Lett. B 597 257). A second charge exchange of these near-axis Ps* with trapped antiprotons could be used to produce cold antihydrogen, and this antihydrogen production is expected to be increased by a similar factor.
The Bergschrund Hypothesis Revisited
Sanders, J. W.; Cuffey, K. M.; MacGregor, K. R.
2009-12-01
After Willard Johnson descended into the Lyell Glacier bergschrund nearly 140 years ago, he proposed that the presence of the bergschrund modulated daily air temperature fluctuations and enhanced freeze-thaw processes. He posited that glaciers, through their ability to birth bergschrunds, are thus able to induce rapid cirque headwall retreat. In subsequent years, many researchers challenged the bergschrund hypothesis on the grounds that freeze-thaw events did not occur at depth in bergschrunds. We propose a modified version of Johnson's original hypothesis: that bergschrunds maintain subfreezing temperatures at values that encourage rock fracture via ice lensing because they act as a cold air trap in areas that would otherwise be held near zero by temperate glacial ice. In support of this claim we investigated three sections of the bergschrund at the West Washmawapta Glacier, British Columbia, Canada, which sits in an east-facing cirque. During our bergschrund reconnaissance we installed temperature sensors at multiple elevations, installed light sensors at depth in 2 of the 3 locations, and painted two 1 m² sections of the headwall. We first emphasize that bergschrunds are not wanting for ice: verglas covers significant fractions of the headwall and icicles dangle from the base of bödens or overhanging rocks. If temperature, rather than water availability, is the limiting factor governing ice-lensing rates, our temperature records demonstrate that the bergschrund provides a suitable environment for considerable rock fracture. At the three sites (north, west, and south walls), the average temperature at depth from 9/3/2006 to 8/6/2007 was -3.6, -3.6, and -2.0 °C, respectively. During spring, when we observed vast amounts of snow melt trickle into the bergschrund, temperatures averaged -3.7, -3.8, and -2.2 °C, respectively. Winter temperatures are even lower: -8.5, -7.3, and -2.4 °C, respectively. Values during the following year were similar. During the fall, diurnal
Forman, Ruth; Bramhall, Michael; Logunova, Larisa; Svensson-Frej, Marcus; Cruickshank, Sheena M; Else, Kathryn J
2016-05-31
Eosinophils are innate immune cells present in the intestine during steady-state conditions. An intestinal eosinophilia is a hallmark of many infections, and an accumulation of eosinophils is also observed in the intestine during inflammatory disorders. Classically the function of eosinophils has been associated with tissue destruction, due to the release of cytotoxic granule contents. However, recent evidence has demonstrated that the eosinophil plays a more diverse role in the immune system than previously acknowledged, including shaping adaptive immune responses and providing plasma cell survival factors during the steady state. Importantly, it is known that there are regional differences in the underlying immunology of the small and large intestine, but whether there are differences in the context of the intestinal eosinophil in the steady state or inflammation is not known. Our data demonstrate that there are fewer IgA(+) plasma cells in the small intestine of eosinophil-deficient ΔdblGATA-1 mice compared to eosinophil-sufficient wild-type mice, with the difference becoming significant post-infection with Toxoplasma gondii. Remarkably, and in complete contrast, the absence of eosinophils in the inflamed large intestine does not impact IgA(+) cell numbers during the steady state, and is associated with a significant increase in IgA(+) cells post-infection with Trichuris muris compared to wild-type mice. Thus, the intestinal eosinophil appears to be less important in sustaining the IgA(+) cell pool in the large intestine compared to the small intestine, and in fact, our data suggest eosinophils play an inhibitory role. The dichotomy in the influence of the eosinophil over small and large intestinal IgA(+) cells did not depend on differences in plasma cell growth factors, recruitment potential or proliferation within the different regions of the gastrointestinal tract (GIT). We demonstrate for the first time that there are regional differences in the requirement of
Neggers, R.
2017-12-01
Recent advances in supercomputing have introduced a "grey zone" in the representation of cumulus convection in general circulation models, in which this process is partially resolved. Cumulus parameterizations need to be made scale-aware and scale-adaptive to be able to conceptually and practically deal with this situation. A potential way forward are schemes formulated in terms of discretized Cloud Size Densities, or CSDs. Advantages include i) the introduction of scale-awareness at the foundation of the scheme, and ii) the possibility to apply size-filtering of parameterized convective transport and clouds. The CSD is a new variable that requires closure; this concerns its shape and its range, but also variability in cloud number that can appear due to i) subsampling effects and ii) organization in a cloud field. The goal of this study is to gain insight by means of sub-domain analyses of various large-domain LES realizations of cumulus cloud populations. For a series of three-dimensional snapshots, each with a different degree of organization, the cloud size distribution is calculated in all subdomains, for a range of subdomain sizes. The standard deviation of the number of clouds of a certain size is found to decrease with the subdomain size, following a power-law scaling corresponding to an inverse-linear dependence. Cloud number variability also increases with cloud size; this reflects that subsampling affects the largest clouds first, due to their typically larger neighbor spacing. Rewriting this dependence in terms of two dimensionless groups, by dividing by cloud number and cloud size respectively, yields a data collapse. Organization in the cloud field is found to act on top of this primary dependence, by enhancing the cloud number variability at the smaller sizes. This behavior reflects that small clouds start to "live" on top of larger structures such as cold pools, favoring or inhibiting their formation.
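The subsampling part of this scaling is easy to reproduce for a spatially random (unorganized) cloud field, where the relative count variability falls inversely with subdomain size; a hedged toy check (organization would raise variability above this baseline):

```python
import numpy as np

rng = np.random.default_rng(0)
L, n_clouds = 1024, 20000
xy = rng.random((n_clouds, 2)) * L               # random cloud centres in an L x L domain

for sub in (32, 64, 128, 256):
    nbins = L // sub                             # tile the domain with sub x sub boxes
    counts, _, _ = np.histogram2d(xy[:, 0], xy[:, 1],
                                  bins=nbins, range=[[0, L], [0, L]])
    rel_std = counts.std() / counts.mean()
    print(sub, round(rel_std, 3))                # relative variability ~ 1/sub
```

For Poisson-distributed counts the relative standard deviation halves each time the subdomain size doubles, i.e. exactly the inverse-linear dependence found in the LES analysis.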
International Nuclear Information System (INIS)
Li Zheng; Zhang Dongjie; Ma Linwei; West, Logan; Ni Weidou
2011-01-01
CCS is seen as an important and strategic technology option for China to reduce its CO2 emissions, and has received tremendous attention both around the world and in China. Scholars are divided on the role CCS should play, making the future of CCS in China highly uncertain. This paper presents the overall circumstances for CCS development in China, including the threats and opportunities for large-scale deployment of CCS, the initial barriers and advantages that China currently possesses, as well as the current progress of CCS demonstration in China. The paper proposes the implementation of a limited number of larger-scale, fully integrated CCS demonstration projects and explains the potential benefits that could be garnered. The problems with China's current CCS demonstration work are analyzed, and some targeted policies are proposed based on those observations. These policy suggestions can effectively solve these problems, help China gain the benefits of CCS demonstration sooner, and make great contributions to China's large CO2-reduction mission. - Highlights: → We analyze the overall circumstances for CCS development in China in detail. → China can garner multiple benefits by conducting several large, integrated CCS demos. → We present the current progress of CCS demonstration in China in detail. → Some problems exist with China's current CCS demonstration work. → Some focused policies are suggested to improve CCS demonstration in China.
International Nuclear Information System (INIS)
Kun, S.Yu.
1985-01-01
On the basis of the symmetrized Simonius representation of the S matrix, the statistical properties of its fluctuating component in the presence of direct reactions are investigated. The case is considered where the resonance levels are strongly overlapping and there are many open channels, assuming that the compound-nucleus cross sections coupling different channels are equal. It is shown that, using the averaged unitarity condition on the real energy axis, one can eliminate both resonance-resonance and channel-channel correlations from the partial transition amplitudes. As a result, we derive the basic points of the Ericson fluctuation theory of nuclear cross sections independently of the relation between the resonance overlap and the number of open channels, and the validity of the Hauser-Feshbach model is established. If the number of open channels is large, the time of uniform population of compound-nucleus configurations, for an open excited nuclear system, is much smaller than the Poincaré time. The lifetime of the compound nucleus is discussed.
Law of large numbers for the SIR model with random vertex weights on Erdős-Rényi graph
Xue, Xiaofeng
2017-11-01
In this paper we are concerned with the SIR model with random vertex weights on the Erdős-Rényi graph G(n, p). The Erdős-Rényi graph G(n, p) is generated from the complete graph C_n with n vertices by independently deleting each edge with probability (1 - p). We assign i.i.d. copies of a positive random variable ρ to each vertex as the vertex weights. For the SIR model, each vertex is in one of the three states 'susceptible', 'infective' and 'removed'. An infective vertex infects a given susceptible neighbor at a rate proportional to the product of the weights of these two vertices. An infective vertex becomes removed at a constant rate. A removed vertex will never be infected again. We assume that at t = 0 there are no removed vertices and that the number of infective vertices follows a Bernoulli distribution B(n, θ). Our main result is a law of large numbers for the model. We give two deterministic functions H_S(ψ_t), H_V(ψ_t) for t ≥ 0 and show that for any t ≥ 0, H_S(ψ_t) is the limit proportion of susceptible vertices and H_V(ψ_t) is the limit of the mean capability of an infective vertex to infect a given susceptible neighbor at moment t, as n grows to infinity.
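A discrete-time simulation sketch of this weighted SIR dynamic (the rates, weight distribution and graph parameters are arbitrary choices; the paper's result concerns the n → ∞ limit of such runs):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, theta = 1000, 0.01, 0.01
beta, gamma, dt, steps = 1.0, 1.0, 0.005, 2000

upper = np.triu(rng.random((n, n)) < p, 1)       # Erdős-Rényi G(n, p)
adj = (upper | upper.T).astype(float)
rho = rng.exponential(1.0, n)                    # i.i.d. positive vertex weights
state = np.where(rng.random(n) < theta, 1, 0)    # 0 = S, 1 = I, 2 = R; B(n, theta) start

for _ in range(steps):
    infective = (state == 1).astype(float)
    # susceptible j is infected at rate beta * rho_j * sum of rho_i over infective neighbors i
    pressure = adj @ (rho * infective)
    p_inf = 1.0 - np.exp(-beta * rho * pressure * dt)
    new_i = (state == 0) & (rng.random(n) < p_inf)
    new_r = (state == 1) & (rng.random(n) < 1.0 - np.exp(-gamma * dt))
    state[new_i] = 1                             # S -> I
    state[new_r] = 2                             # I -> R at constant rate
print("final S/I/R proportions:", np.bincount(state, minlength=3) / n)
```

Repeating such runs for growing n, the susceptible proportion at each time concentrates around a deterministic curve, which is the content of the law of large numbers stated above.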
International Nuclear Information System (INIS)
Caldirola, P.; Recami, E.
1978-01-01
By assuming covariance of physical laws under (discrete) dilatations, strong and gravitational interactions have been described in a unified way. In terms of the (additional, discrete) "dilatational" degree of freedom, our cosmos as well as hadrons can be considered as different states of the same system, or rather as similar systems. Moreover, a discrete hierarchy of "universes" can be defined, which are governed by force fields with strengths inversely proportional to the "universe" radii. Inside each "universe" an equivalence principle holds, so that its characteristic field can be geometrized there. It is thus easy to derive a whole "numerology", i.e. relations among numbers analogous to the so-called Weyl-Eddington-Dirac "large numbers". For instance, the "Planck mass" happens to be nothing but the (average) magnitude of the strong charge of the hadron quarks. However, our "numerology" connects the (gravitational) macrocosmos with the (strong) microcosmos, rather than with the electromagnetic ones (as, e.g., in Dirac's version). Einstein-type scaled equations (with "cosmological" term) are suggested for the hadron interior, which, incidentally, yield a (classical) quark confinement in a very natural way and are compatible with "asymptotic freedom". Finally, within a "bi-scale" theory, further equations are proposed that provide a priori a classical field theory of strong interactions (between different hadrons). The relevant sections are 5.2, 7 and 8. (author)
Energy Technology Data Exchange (ETDEWEB)
Monty, J.P.; Lien, K.; Chong, M.S. [University of Melbourne, Department of Mechanical Engineering, Parkville, VIC (Australia); Allen, J.J. [New Mexico State University, Department of Mechanical Engineering, Las Cruces, NM (United States)
2011-12-15
A high Reynolds number boundary-layer wind-tunnel facility at New Mexico State University was fitted with a regularly distributed braille surface. The surface was such that braille dots were closely packed in the streamwise direction and sparsely spaced in the spanwise direction. This novel surface had an unexpected influence on the flow: the energy of the very large-scale features of wall turbulence (approximately six times the boundary-layer thickness in length) became significantly attenuated, even into the logarithmic region. To the authors' knowledge, this is the first experimental study to report a modification of 'superstructures' in a rough-wall turbulent boundary layer. The result gives rise to the possibility that flow control through very small, passive surface roughness may be possible at high Reynolds numbers, without the prohibitive drag penalty anticipated heretofore. Evidence was also found for the uninhibited existence of the near-wall cycle, well known to smooth-wall-turbulence researchers, in the spanwise space between roughness elements. (orig.)
Mohammed, Ali Ibrahim Ali
The understanding and treatment of brain disorders, as well as the development of intelligent machines, is hampered by the lack of knowledge of how the brain fundamentally functions. Over the past century, we have learned much about how individual neurons and neural networks behave; however, new tools are critically needed to interrogate how neural networks give rise to complex brain processes and disease conditions. Recent innovations in molecular techniques, such as optogenetics, have given neuroscientists unprecedented precision to excite, inhibit and record defined neurons. The impressive sensitivity of currently available optogenetic sensors and actuators has now enabled the possibility of analyzing a large number of individual neurons in the brains of behaving animals. To promote the use of these optogenetic tools, this thesis integrates cutting-edge ultrasensitive optogenetic molecular sensors for imaging neuronal activity with a custom wide-field optical microscope to analyze a large number of individual neurons in living brains. Wide-field microscopy provides a large field of view and a spatial resolution approaching the Abbe diffraction limit of the fluorescence microscope. To demonstrate the advantages of this optical platform, we imaged a deep brain structure, the hippocampus, and tracked hundreds of neurons over time while the mouse performed a memory task, to investigate how those individual neurons related to behavior. In addition, we tested our optical platform in investigating transient neural network changes upon mechanical perturbation related to blast injuries. In this experiment, all blasted mice showed a consistent change in their neural networks: a small portion of neurons showed a sustained calcium increase for an extended period of time, whereas the majority lost their activity. Finally, using an optogenetic silencer to control selective motor cortex neurons, we examined their contributions to the network pathology of the basal ganglia related to
Hypothesis analysis methods, hypothesis analysis devices, and articles of manufacture
Sanfilippo, Antonio P [Richland, WA; Cowell, Andrew J [Kennewick, WA; Gregory, Michelle L [Richland, WA; Baddeley, Robert L [Richland, WA; Paulson, Patrick R [Pasco, WA; Tratz, Stephen C [Richland, WA; Hohimer, Ryan E [West Richland, WA
2012-03-20
Hypothesis analysis methods, hypothesis analysis devices, and articles of manufacture are described according to some aspects. In one aspect, a hypothesis analysis method includes providing a hypothesis, providing an indicator which at least one of supports and refutes the hypothesis, using the indicator, associating evidence with the hypothesis, weighting the association of the evidence with the hypothesis, and using the weighting, providing information regarding the accuracy of the hypothesis.
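A minimal sketch of the evidence-weighting idea described in this abstract (the data model and scoring rule are our own reading for illustration, not the patented implementation):

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    description: str
    supports: bool      # True = supports the hypothesis, False = refutes it
    weight: float       # strength of the association, in [0, 1]

def hypothesis_score(evidence):
    """Signed, weight-normalized aggregate: +1 = fully supported, -1 = fully refuted."""
    total = sum(e.weight for e in evidence)
    signed = sum(e.weight * (1 if e.supports else -1) for e in evidence)
    return signed / total if total else 0.0

items = [Evidence("sensor reading consistent with model", True, 0.9),
         Evidence("witness account contradicts timeline", False, 0.4),
         Evidence("document corroborates location", True, 0.6)]
print(f"hypothesis accuracy indicator: {hypothesis_score(items):+.2f}")
```

The indicator of the claims corresponds here to the supports/refutes flag, and the weighting step to the per-item weight that scales each piece of evidence's contribution.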
DEFF Research Database (Denmark)
Mikkelsen, Kaare B.; Kidmose, Preben; Hansen, Lars Kai
2017-01-01
We propose and test the keyhole hypothesis that measurements from low-dimensional EEG, such as ear-EEG, reflect a broadly distributed set of neural processes. We formulate the keyhole hypothesis in information theoretical terms. The experimental investigation is based on legacy data consisting of 10 ... simultaneously recorded scalp EEG. A cross-validation procedure was employed to ensure unbiased estimates. We present several pieces of evidence in support of the keyhole hypothesis: there is a high mutual information between data acquired at scalp electrodes and through the ear-EEG "keyhole"; furthermore we...
Huber, Stefan; Nuerk, Hans-Christoph; Reips, Ulf-Dietrich; Soltanlou, Mojtaba
2017-12-23
Symbolic magnitude comparison is one of the most well-studied cognitive processes in research on numerical cognition. However, while the cognitive mechanisms of symbolic magnitude processing have been intensively studied, previous studies have paid less attention to individual differences influencing symbolic magnitude comparison. Employing a two-digit number comparison task in an online setting, we replicated previous effects, including the distance effect, the unit-decade compatibility effect, and the effect of cognitive control on the adaptation to filler items, in a large-scale study of 452 adults. Additionally, we observed that the most influential individual differences were participants' first language, time spent playing computer games and gender, followed by reported alcohol consumption, age and mathematical ability. Participants who used a first language with a left-to-right reading/writing direction were faster than those who read and wrote in the right-to-left direction. Reported playing time for computer games was correlated with faster reaction times. Female participants showed slower reaction times and a larger unit-decade compatibility effect than male participants. Participants who reported never consuming alcohol showed overall slower response times than others. Older participants were slower, but more accurate. Finally, higher grades in mathematics were associated with faster reaction times. We conclude that typical experiments on numerical cognition that employ a keyboard as an input device can also be run in an online setting. Moreover, while individual differences have no influence on domain-specific magnitude processing (apart from age, which increases the decade distance effect), they generally influence performance on a two-digit number comparison task.
Feldmann, Daniel; Bauer, Christian; Wagner, Claus
2018-03-01
We present results from direct numerical simulations (DNS) of turbulent pipe flow at shear Reynolds numbers up to Reτ = 1500 using different computational domains with lengths up to ?. The objectives are to analyse the effect of the finite size of the periodic pipe domain on large flow structures as a function of Reτ and to assess the minimum ? required for relevant turbulent scales to be captured and the minimum Reτ for very large-scale motions (VLSM) to be analysed. Analysing one-point statistics revealed that the mean velocity profile is invariant for ?. The wall-normal location at which deviations occur in shorter domains changes strongly with increasing Reτ, from the near-wall region to the outer layer, where VLSM are believed to live. The root-mean-square velocity profiles exhibit domain-length dependencies for pipes shorter than 14R and 7R, depending on Reτ. For all Reτ, the higher-order statistical moments show only weak dependencies, and only for the shortest domain considered here. However, the analysis of one- and two-dimensional pre-multiplied energy spectra revealed that even for larger ?, not all physically relevant scales are fully captured, even though the aforementioned statistics are in good agreement with the literature. We found ? to be sufficiently large to capture VLSM-relevant turbulent scales in the considered range of Reτ, based on our definition of an integral energy threshold of 10%. The requirement to capture at least 1/10 of the global maximum energy level is justified by a 14% increase of the streamwise turbulence intensity in the outer region between Reτ = 720 and 1500, which can be related to VLSM-relevant length scales. Based on this scaling anomaly, we found Reτ⪆1500 to be a necessary minimum requirement to investigate VLSM-related effects in pipe flow, even though the streamwise energy spectra do not yet indicate sufficient scale separation between the most energetic and the very long motions.
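The VLSM diagnostics discussed here revolve around pre-multiplied spectra; a self-contained sketch of computing k·E(k) for a periodic one-dimensional signal (synthetic signal, arbitrary units; not the study's data):

```python
import numpy as np

def premultiplied_spectrum(u, dx):
    """One-dimensional premultiplied energy spectrum k*E(k) of a periodic
    streamwise signal u(x); a peak at large wavelengths flags VLSM-like content."""
    n = u.size
    uhat = np.fft.rfft(u - u.mean())
    E = (np.abs(uhat) ** 2) / n**2          # one-sided energy density (arbitrary units)
    k = 2 * np.pi * np.fft.rfftfreq(n, d=dx)
    return k[1:], k[1:] * E[1:]             # drop the k = 0 mode

rng = np.random.default_rng(0)
x = np.linspace(0, 100, 4096, endpoint=False)
u = np.sin(2 * np.pi * x / 25) + 0.3 * rng.standard_normal(x.size)  # long wave + noise
k, kE = premultiplied_spectrum(u, x[1] - x[0])
print("peak at wavelength", 2 * np.pi / k[np.argmax(kE)])           # ~25, the imposed scale
```

In domain-size studies, the premultiplied spectrum makes it immediately visible when the energetic long-wavelength peak is cut off by a too-short periodic box.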
Catering for large numbers of tourists: the McDonaldization of casual dining in Kruger National Park
Directory of Open Access Journals (Sweden)
Ferreira Sanette L.A.
2016-09-01
Since 2002 Kruger National Park (KNP) has been subject to a commercialisation strategy. Regarding income generation, SANParks sees KNP as the goose that lays the golden eggs. As part of SANParks' commercialisation strategy, and in response to providing services that are efficient, predictable and calculable for large numbers of tourists, SANParks has allowed well-known branded restaurants to be established in certain rest camps in KNP. This innovation has raised a range of different concerns and opinions among the public. This paper investigates the what and the where of casual dining experiences in KNP; describes how the catering services have evolved over the last 70 years; and evaluates current visitor perceptions of the introduction of franchised restaurants in the park. The main research instrument was a questionnaire survey. Survey findings confirmed that restaurant managers, park managers and visitors recognise franchised restaurants as positive contributors to the unique KNP experience. Park managers appraised the franchised restaurants as mechanisms for funding conservation.
Toye, Francine; Seers, Kate; Allcock, Nick; Briggs, Michelle; Carr, Eloise; Barker, Karen
2014-06-21
Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients' experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team's reflexive statements to illustrate the development of our methods.
International Nuclear Information System (INIS)
Selander, W.N.; Lane, F.E.; Rowat, J.H.
1995-05-01
A groundwater mass transfer calculation is an essential part of the performance assessment for radioactive waste disposal facilities. AECL's IRUS (Intrusion Resistant Underground Structure) facility, which is designed for the near-surface disposal of low-level radioactive waste (LLRW), is to be situated in the sandy overburden at AECL's Chalk River Laboratories. Flow in the sandy aquifers at the proposed IRUS site is relatively homogeneous and advection-dominated (large Peclet numbers). Mass transfer along the mean direction of flow from the IRUS site may be described using the one-dimensional advection-dispersion equation, for which a Green's function representation of downstream radionuclide flux is convenient. This report shows that in advection-dominated aquifers, dispersive attenuation of initial contaminant releases depends principally on two time scales: the source duration and the pulse breakthrough time. Numerical investigation shows further that the maximum downstream flux or concentration depends on these time scales in a simple characteristic way that is minimally sensitive to the shape of the initial source pulse. (author). 11 refs., 2 tabs., 3 figs
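A minimal sketch of the Green's-function superposition described in the report above, for the 1-D advection-dispersion equation with an instantaneous unit release (the mean velocity v, dispersion coefficient D and pulse duration T are illustrative placeholders, not values from the report):

    import numpy as np

    def green(x, t, v, D):
        # Response at (x, t) to an instantaneous unit release at x = 0, t = 0
        return np.exp(-(x - v * t) ** 2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

    def downstream_concentration(x, t, v, D, T, rate=1.0):
        # Rectangular source pulse of duration T, superposed over release times
        tau = np.linspace(0.0, min(t, T), 2000, endpoint=False)
        return rate * np.sum(green(x, t - tau, v, D)) * (tau[1] - tau[0])

    # e.g. concentration at x = 100 and t = 120 for a release of duration 10
    print(downstream_concentration(100.0, 120.0, v=1.0, D=5.0, T=10.0))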
Ooi, Seng-Keat
2005-11-01
Lock-exchange gravity current flows produced by the instantaneous release of a heavy fluid are investigated using well-resolved 3-D large eddy simulations (LES) at Grashof numbers up to 8×10^9. It is found that the 3-D simulations correctly predict a constant front velocity over the initial slumping phase and a front-speed decrease proportional to t^(-1/3) (the time t is measured from the release) over the inviscid phase, in agreement with theory. The evolution of the current in the simulations is found to be similar to that observed experimentally by Hacker et al. (1996). The effect of the dynamic LES model on the solutions is discussed. The energy budget of the current is discussed and the contribution of the turbulent dissipation to the total dissipation is analyzed. The limitations of less expensive 2-D simulations are discussed; in particular, their failure to correctly predict the spatio-temporal distribution of the bed shear stresses, which is important in determining the amount of sediment the gravity current can entrain in the case in which it advances over a loose bed.
Yilmaz, Zeynep; Szatkiewicz, Jin P; Crowley, James J; Ancalade, NaEshia; Brandys, Marek K; van Elburg, Annemarie; de Kovel, Carolien G F; Adan, Roger A H; Hinney, Anke; Hebebrand, Johannes; Gratacos, Monica; Fernandez-Aranda, Fernando; Escaramis, Georgia; Gonzalez, Juan R; Estivill, Xavier; Zeggini, Eleftheria; Sullivan, Patrick F; Bulik, Cynthia M
2017-08-01
Anorexia nervosa (AN) is a serious and heritable psychiatric disorder. To date, studies of copy number variants (CNVs) have been limited and inconclusive because of small sample sizes. We conducted a case-only genome-wide CNV survey in 1983 female AN cases included in the Genetic Consortium for Anorexia Nervosa. Following stringent quality control procedures, we investigated whether pathogenic CNVs in regions previously implicated in psychiatric and neurodevelopmental disorders were present in AN cases. We observed two instances of the well-established pathogenic CNVs in AN cases. In addition, one case had a deletion in the 13q12 region, overlapping with a deletion reported previously in two AN cases. As a secondary aim, we also examined our sample for CNVs over 1 Mbp in size. Out of the 40 instances of such large CNVs that were not implicated previously for AN or neuropsychiatric phenotypes, two of them contained genes with previous neuropsychiatric associations, and only five of them had no associated reports in public CNV databases. Although ours is the largest study of its kind in AN, larger datasets are needed to comprehensively assess the role of CNVs in the etiology of AN.
Lee, Lesley R.; Mason, Sara S.; Babiak-Vazquez, Adriana; Ray, Stacie L.; Van Baalen, Mary
2015-01-01
Since the 2010 NASA authorization to make the Life Sciences Data Archive (LSDA) and Lifetime Surveillance of Astronaut Health (LSAH) data archives more accessible to the research and operational communities, demand for data has greatly increased. Correspondingly, both the number and scope of requests have increased, from 142 requests fulfilled in 2011 to 224 in 2014, with some datasets comprising up to 1 million data points. To meet the demand, the LSAH and LSDA Repositories project was launched, which allows active and retired astronauts to authorize full, partial, or no access to their data for research without individual, study-specific informed consent. A one-on-one personal informed consent briefing is required to fully communicate the implications of the several tiers of consent. Because personal contact is needed to conduct Repositories consent meetings, the rate of consenting has not kept up with demand for individualized, possibly attributable data. As a result, other methods had to be implemented to allow the release of large datasets, such as release of only de-identified data. However, the compilation of large, de-identified data sets places a significant resource burden on LSAH and LSDA and may diminish the scientific usefulness of the dataset. Consequently, LSAH and LSDA worked with the JSC Institutional Review Board Chair, Astronaut Office physicians, and NASA Office of General Counsel personnel to develop a "Remote Consenting" process for retrospective data mining studies. This is particularly useful since the majority of the astronaut cohort is retired from the agency and living outside the Houston area. Originally planned as a method to send informed consent briefing slides and consent forms only by mail, Remote Consenting has evolved into a means to accept crewmember decisions on individual studies via their method of choice: email or paper copy by mail. To date, 100 emails have been sent to request participation in eight HRP
Biostatistics series module 2: Overview of hypothesis testing
Directory of Open Access Journals (Sweden)
Avijit Hazra
2016-01-01
Full Text Available Hypothesis testing (or statistical inference) is one of the major applications of biostatistics. Much of medical research begins with a research question that can be framed as a hypothesis. Inferential statistics begins with a null hypothesis that reflects the conservative position of no change or no difference in comparison to baseline or between groups. Usually, the researcher has reason to believe that there is some effect or some difference, which is the alternative hypothesis. The researcher therefore proceeds to study samples and measure outcomes in the hope of generating evidence strong enough for the statistician to be able to reject the null hypothesis. The concept of the P value is almost universally used in hypothesis testing. It denotes the probability of obtaining by chance a result at least as extreme as that observed, even when the null hypothesis is true and no real difference exists. Usually, if P < 0.05 the null hypothesis is rejected and the sample results are deemed statistically significant. With the increasing availability of computers and access to specialized statistical software, the drudgery involved in statistical calculations is now a thing of the past, once the learning curve of the software has been traversed. The life sciences researcher is therefore free to devote themselves to optimally designing the study, carefully selecting the hypothesis tests to be applied, and taking care in conducting the study well. Unfortunately, selecting the right test seems difficult initially. Thinking of the research hypothesis as addressing one of five generic research questions helps in selection of the right hypothesis test. In addition, it is important to be clear about the nature of the variables (e.g., numerical vs. categorical; parametric vs. nonparametric) and the number of groups or data sets being compared (e.g., two, or more than two, at a time). The same research question may be explored by more than one type of hypothesis test.
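A minimal worked example of the null-hypothesis logic described above (generic SciPy; the data are simulated, not from the module):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    control = rng.normal(120, 15, size=40)   # hypothetical baseline group
    treated = rng.normal(112, 15, size=40)   # hypothetical intervention group

    # Two-sample t-test: H0 = no difference between the group means
    t_stat, p_value = stats.ttest_ind(treated, control)
    print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
    if p_value < 0.05:
        print("Reject H0 at the 5% significance level")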
Bulf, Hermann; de Hevia, Maria Dolores; Macchi Cassia, Viola
2016-01-01
Numbers are represented as ordered magnitudes along a spatially oriented number line. While culture and formal education modulate the direction of this number-space mapping, it is a matter of debate whether its emergence is entirely driven by cultural experience. By registering 8-9-month-old infants' eye movements, this study shows that numerical…
The Qualitative Expectations Hypothesis
DEFF Research Database (Denmark)
Frydman, Roman; Johansen, Søren; Rahbek, Anders
2017-01-01
We introduce the Qualitative Expectations Hypothesis (QEH) as a new approach to modeling macroeconomic and financial outcomes. Building on John Muth's seminal insight underpinning the Rational Expectations Hypothesis (REH), QEH represents the market's forecasts to be consistent with the predictions of an economist's model. However, by assuming that outcomes lie within stochastic intervals, QEH, unlike REH, recognizes the ambiguity faced by an economist and market participants alike. Moreover, QEH leaves the model open to ambiguity by not specifying a mechanism determining specific values that outcomes take...
Revisiting the Dutch hypothesis
Postma, Dirkje S.; Weiss, Scott T.; van den Berge, Maarten; Kerstjens, Huib A. M.; Koppelman, Gerard H.
The Dutch hypothesis was first articulated in 1961, when many novel and advanced scientific techniques were not available, such as genomics techniques for pinpointing genes, gene expression, lipid and protein profiles, and the microbiome. In addition, computed tomographic scans and advanced analysis
I.P. van Staveren (Irene)
2014-01-01
This article explores the Lehman Sisters Hypothesis. It reviews empirical literature about gender differences in behavioral, experimental, and neuro-economics as well as in other fields of behavioral research. It discusses gender differences along three dimensions of
Arencibia-Jorge, R.; Leydesdorff, L.; Chinchilla-Rodríguez, Z.; Rousseau, R.; Paris, S.W.
2009-01-01
The Web of Science interface counts at most 100,000 retrieved items from a single query. If the query results in a dataset containing more than 100,000 items the number of retrieved items is indicated as >100,000. The problem studied here is how to find the exact number of items in a query that
Energy Technology Data Exchange (ETDEWEB)
Andrews, Stephen A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sigeti, David E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-11-15
These are a set of slides about Bayesian hypothesis testing in settings where many hypotheses are tested. The conclusions are the following: the value of the Bayes factor obtained when using the median of the posterior marginal is almost the minimum value of the Bayes factor; the value of τ^2 which minimizes the Bayes factor is a reasonable choice for this parameter; and this allows a likelihood ratio to be computed which is the least favorable to H_0.
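A numerical sketch of the idea of minimizing the Bayes factor over τ^2 (a simple normal model with a N(0, τ^2) prior on the effect under H1 is assumed here for illustration; this is a toy setup, not the slides' actual model):

    import numpy as np
    from scipy import stats

    y, se = 0.8, 0.3      # invented effect estimate and its standard error

    def bayes_factor_01(tau2):
        # BF_01 for H0: theta = 0 vs H1: theta ~ N(0, tau2), y | theta ~ N(theta, se^2)
        m0 = stats.norm.pdf(y, 0.0, se)
        m1 = stats.norm.pdf(y, 0.0, np.sqrt(se**2 + tau2))
        return m0 / m1

    tau2_grid = np.linspace(1e-4, 5.0, 2000)
    bf = np.array([bayes_factor_01(t2) for t2 in tau2_grid])
    i = int(np.argmin(bf))
    # The minimizing tau^2 yields the Bayes factor least favorable to H0
    print(f"min BF_01 = {bf[i]:.3f} at tau^2 = {tau2_grid[i]:.3f}")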
Christensen, Kim; Oomen, Roel; Renò, Roberto
2016-01-01
The Drift Burst Hypothesis postulates the existence of short-lived locally explosive trends in the price paths of financial assets. The recent US equity and Treasury flash crashes can be viewed as two high profile manifestations of such dynamics, but we argue that drift bursts of varying magnitude are an expected and regular occurrence in financial markets that can arise through established mechanisms such as feedback trading. At a theoretical level, we show how to build drift bursts into the...
Whiplash and the compensation hypothesis.
Spearing, Natalie M; Connelly, Luke B
2011-12-01
Review article. To explain why the evidence that compensation-related factors lead to worse health outcomes is not compelling, either in general, or in the specific case of whiplash. There is a common view that compensation-related factors lead to worse health outcomes ("the compensation hypothesis"), despite the presence of important, and unresolved sources of bias. The empirical evidence on this question has ramifications for the design of compensation schemes. Using studies on whiplash, this article outlines the methodological problems that impede attempts to confirm or refute the compensation hypothesis. Compensation studies are prone to measurement bias, reverse causation bias, and selection bias. Errors in measurement are largely due to the latent nature of whiplash injuries and health itself, a lack of clarity over the unit of measurement (specific factors, or "compensation"), and a lack of appreciation for the heterogeneous qualities of compensation-related factors and schemes. There has been a failure to acknowledge and empirically address reverse causation bias, or the likelihood that poor health influences the decision to pursue compensation: it is unclear if compensation is a cause or a consequence of poor health, or both. Finally, unresolved selection bias (and hence, confounding) is evident in longitudinal studies and natural experiments. In both cases, between-group differences have not been addressed convincingly. The nature of the relationship between compensation-related factors and health is unclear. Current approaches to testing the compensation hypothesis are prone to several important sources of bias, which compromise the validity of their results. Methods that explicitly test the hypothesis and establish whether or not a causal relationship exists between compensation factors and prolonged whiplash symptoms are needed in future studies.
Directory of Open Access Journals (Sweden)
Eudaldo Enrique Espinoza Freire
2018-01-01
Full Text Available This work is intended to provide material covering the fundamental content that enables the university professor to formulate hypotheses for the development of an investigation, taking into account the problem to be solved. For its elaboration, a search for information in primary documents was carried out, such as degree theses and reports of research results, selected on the basis of their relevance to the subject analyzed, their currency and their reliability; secondary documents, such as scientific articles published in journals of recognized prestige, were selected using the same criteria. The paper presents an updated conceptualization of the hypothesis, its characterization, and an analysis of the structure of the hypothesis in which the determination of the variables is examined in depth. The involvement of the university professor in the teaching-research process currently faces some difficulties, manifested, among other aspects, in an unstable balance between teaching and research that leads to a separation between the two.
Yoon, Sung Hwan
2017-10-12
According to previous theory, pulsating propagation in a premixed flame only appears when the reduced Lewis number β(Le−1) is larger than a critical value (Sivashinsky criterion: 4(1+√3) ≈ 11), where β represents the Zel'dovich number (for general premixed flames, β ≈ 10), which requires a Lewis number Le > 2.1. However, few experimental observations have been reported because the critical reduced Lewis number for the onset of pulsating instability is beyond what can be reached in experiments. Furthermore, coupling with the unavoidable hydrodynamic instability limits the observation of pure pulsating instabilities in flames. Here, we describe a novel method to observe the pulsating instability. We utilize a thermoacoustic field, caused by the interaction between heat release and acoustic pressure fluctuations of downward-propagating premixed flames in a tube, to enhance conductive heat loss at the tube wall and radiative heat loss at the open end of the tube, owing to the extended flame residence time of the diminished flame surface area, i.e., a flat flame. The thermoacoustic field allowed pure observation of the pulsating motion since the primary acoustic force suppressed the intrinsic hydrodynamic instability resulting from thermal expansion. By employing this method, we have provided new experimental observations of the pulsating instability for premixed flames. The Lewis number (Le ≈ 1.86) was less than the critical value suggested previously.
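The criterion quoted above is easy to evaluate directly (a one-line check; β = 10 is the generic value mentioned in the abstract):

    import math

    def pulsating_unstable(Le, beta=10.0):
        # Sivashinsky criterion: beta * (Le - 1) > 4 * (1 + sqrt(3)) ≈ 10.93
        return beta * (Le - 1.0) > 4.0 * (1.0 + math.sqrt(3.0))

    print(pulsating_unstable(2.2))    # True: above the classical threshold
    print(pulsating_unstable(1.86))   # False, yet pulsations were observed here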
Fiedler, Klaus; Kareev, Yaakov
2006-01-01
Adaptive decision making requires that contingencies between decision options and their relative assets be assessed accurately and quickly. The present research addresses the challenging notion that contingencies may be more visible from small than from large samples of observations. An algorithmic account for such a seemingly paradoxical effect…
Rutkowski, David J.; Prusinski, Ellen L.
2011-01-01
The staff of the Center for Evaluation & Education Policy (CEEP) at Indiana University is often asked about how international large-scale assessments influence U.S. educational policy. This policy brief is designed to provide answers to some of the most frequently asked questions encountered by CEEP researchers concerning the three most popular…
Maekawa, S.; Ankersmit, Bart; Neuhaus, E.; Schellen, H.L.; Beltran, V.; Boersma, F.; Padfield, T.; Borchersen, K.
2007-01-01
Our Lord in the Attic is a historic house museum located in the historic center of Amsterdam, The Netherlands. It is a typical 17th century Dutch canal house, with a hidden Church in the attic. The Church was used regularly until 1887 when the house became a museum. The annual total number of
Rhodes, Jonathan R.; Lunney, Daniel; Callaghan, John; McAlpine, Clive A.
2014-01-01
Roads and vehicular traffic are among the most pervasive of threats to biodiversity because they fragment habitat, increase mortality and open up new areas for the exploitation of natural resources. However, the number of vehicles on roads is increasing rapidly and this is likely to continue into the future, putting increased pressure on wildlife populations. Consequently, a major challenge is the planning of road networks to accommodate increased numbers of vehicles, while minimising impacts on wildlife. Nonetheless, we currently have few principles for guiding decisions on road network planning to reduce impacts on wildlife in real landscapes. We addressed this issue by developing an approach for quantifying the impact on wildlife mortality of two alternative mechanisms for accommodating growth in vehicle numbers: (1) increasing the number of roads, and (2) increasing traffic volumes on existing roads. We applied this approach to a koala (Phascolarctos cinereus) population in eastern Australia and quantified the relative impact of each strategy on mortality. We show that, in most cases, accommodating growth in traffic through increases in volumes on existing roads has a lower impact than building new roads. An exception is where the existing road network has very low road density, but very high traffic volumes on each road. These findings have important implications for how we design road networks to reduce their impacts on biodiversity. PMID:24646891
Czech Academy of Sciences Publication Activity Database
Krahulcová, Anna; Trávníček, Pavel; Krahulec, František; Rejmánek, M.
2017-01-01
Vol. 119, No. 6 (2017), pp. 957-964. ISSN 0305-7364. Institutional support: RVO:67985939. Keywords: Aesculus; chromosome number; genome size; phylogeny; seed mass. Subject RIV: EF - Botanics. OECD field: Plant sciences, botany. Impact factor: 4.041, year: 2016
Directory of Open Access Journals (Sweden)
Shuo Zhang
2017-04-01
Full Text Available Abstract In this paper, we consider a size-dependent renewal risk model with a stopping-time claim-number process. In this model, we do not make any assumption on the dependence structure between claim sizes and inter-arrival times. We study large deviations of the aggregate amount of claims. For the subexponential heavy-tailed case, we obtain a precise large-deviation formula; our method relies substantially on a martingale adapted to the structure of our models.
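For orientation, precise large-deviation results of this kind typically take the following form (a standard statement from the heavy-tailed literature, not quoted from this paper): for i.i.d. subexponential claim sizes X_i with mean μ and tail F̄ = 1 − F, a claim-number process N(t) with mean λt, and aggregate claims S(t) = Σ_{i=1}^{N(t)} X_i,

    \[
      \Pr\bigl( S(t) - \mu \lambda t > x \bigr) \;\sim\; \lambda t \, \overline{F}(x),
      \qquad t \to \infty,
    \]

holding uniformly for x ≥ γλt with any fixed γ > 0.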
Ágg, Bence; Meienberg, Janine; Kopps, Anna M.; Fattorini, Nathalie; Stengl, Roland; Daradics, Noémi; Pólos, Miklós; Bors, András; Radovits, Tamás; Merkely, Béla; De Backer, Julie; Szabolcs, Zoltán; Mátyás, Gábor
2018-01-01
Copy number variations (CNVs) comprise about 10% of reported disease-causing mutations in Mendelian disorders. Nevertheless, pathogenic CNVs may have been under-detected due to the lack or insufficient use of appropriate detection methods. In this report, on the example of the diagnostic odyssey of a patient with Marfan syndrome (MFS) harboring a hitherto unreported 32-kb FBN1 deletion, we highlight the need for and the feasibility of testing for CNVs (>1 kb) in Mendelian disorders in the current next-generation sequencing (NGS) era. PMID:29850152
Kozitskiy, Sergey
2018-05-01
Numerical simulation of nonstationary dissipative structures in 3D double-diffusive convection has been performed by using the previously derived system of complex Ginzburg-Landau type amplitude equations, valid in a neighborhood of Hopf bifurcation points. Simulation has shown that the state of spatiotemporal chaos develops in the system. It has the form of nonstationary structures that depend on the parameters of the system. The shape of structures does not depend on the initial conditions, and a limited number of spectral components participate in their formation.
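As a rough illustration of the type of model involved, a one-dimensional complex Ginzburg-Landau equation integrated by a split-step pseudospectral scheme (a minimal sketch, not the authors' coupled 3-D amplitude system; the parameter values are invented and chosen inside the Benjamin-Feir unstable range 1 + c1*c3 < 0, where spatiotemporal chaos is expected):

    import numpy as np

    # dA/dt = A + (1 + i*c1) * A_xx - (1 + i*c3) * |A|^2 * A
    n, L, dt, c1, c3 = 256, 100.0, 0.05, 1.5, -1.2
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    lin = 1.0 - (1.0 + 1j * c1) * k**2           # linear operator in Fourier space

    rng = np.random.default_rng(0)
    A = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

    for _ in range(20000):
        A -= dt * (1.0 + 1j * c3) * np.abs(A) ** 2 * A       # nonlinear, explicit
        A = np.fft.ifft(np.fft.fft(A) * np.exp(lin * dt))    # linear, exact
    # A now holds a spatiotemporally chaotic amplitude field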
[Dilemma of the null hypothesis in experimental testing of ecological hypotheses].
Li, Ji
2016-06-01
Experimental testing is one of the major methods for testing ecological hypotheses, though there are many arguments about the use of null hypotheses. Quinn and Dunham (1983) analyzed the hypothesis-deduction model of Platt (1964) and concluded that there is no null hypothesis in ecology that can be strictly tested by experiments. Fisher's falsificationism and Neyman-Pearson (N-P) non-decisivity prevent statistical null hypotheses from being strictly tested. Moreover, since the null hypothesis H0 (α=1, β=0) and alternative hypothesis H1' (α'=1, β'=0) in ecological processes differ from those in classical physics, ecological null hypotheses cannot be strictly tested experimentally either. These dilemmas of the null hypothesis can be relieved via reduction of the P value, careful selection of the null hypothesis, non-centralization of the non-null hypothesis, and two-tailed tests. However, statistical null hypothesis significance testing (NHST) should not be equated with the logical test of causality in ecological hypotheses. Hence, findings and conclusions from methodological studies and experimental tests based on NHST are not always logically reliable.
Korchagova, V. N.; Kraposhin, M. V.; Marchevsky, I. K.; Smirnova, E. V.
2017-11-01
A droplet impact on a deep pool can induce macro-scale or micro-scale effects such as a crown splash, a high-speed jet, the formation of secondary droplets, or thin liquid films. Which of these occurs depends on the diameter and velocity of the droplet, the liquid properties, external forces and other factors, which can be accounted for by dimensionless criteria. In the present research, we considered the droplet and the pool to consist of the same viscous incompressible liquid. We took surface tension into account but neglected gravity. We used two open-source codes (OpenFOAM and Gerris) for our computations and review their suitability for simulating the free-surface flows that may follow a droplet impact on a pool. Both codes simulated several modes of droplet impact. We estimated the effect of the liquid properties in terms of the Reynolds number and the Weber number. Numerical simulation enabled us to find the boundaries between different modes of droplet impact on a deep pool and to plot corresponding mode maps. The ratio of the liquid density to that of the surrounding gas induces several changes in the mode maps: increasing this density ratio suppresses the crown splash.
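The dimensionless criteria mentioned above are straightforward to evaluate for a given impact (water-air property values are used as illustrative defaults; the mode boundaries themselves are what the paper's simulations map out):

    def impact_numbers(d, u, rho=1000.0, mu=1.0e-3, sigma=0.072):
        # Reynolds and Weber numbers for a droplet of diameter d [m] at speed u [m/s]
        Re = rho * u * d / mu
        We = rho * u ** 2 * d / sigma
        return Re, We

    print(impact_numbers(2e-3, 3.0))   # a 2 mm water drop at 3 m/s: Re = 6000, We = 250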
Directory of Open Access Journals (Sweden)
Annelies CEULEMANS
2014-03-01
Full Text Available Many studies have tested the association between numerical magnitude processing and mathematical achievement, with conflicting findings reported for individuals with mathematical learning disorders. Some of the inconsistencies might be explained by the number of non-symbolic stimuli or dot collections used in studies. It has been hypothesized that there is an object-file system for ‘small’ and an analogue magnitude system for ‘large’ numbers. This two-system account has been supported by the set size limit of the object-file system (three items). A boundary was defined, accordingly, categorizing numbers below four as ‘small’ and numbers from four upward as ‘large’. However, data on ‘small’ number processing and on the ‘boundary’ between small and large numbers are missing. In this contribution we provide data from infants discriminating between the number sets 4 vs. 8 and 1 vs. 4, both containing the number four, combined with a larger and a smaller number respectively. Participants were 25 and 26 full-term 9-month-olds for 4 vs. 8 and 1 vs. 4 respectively. The stimuli (dots) were controlled for continuous variables. Eye-tracking was combined with the habituation paradigm. The results showed that the infants were successful in discriminating 1 from 4, but failed to discriminate 4 from 8 dots. This finding supports the assumption of the number four as a ‘small’ number and extends the object-file system’s limit. This study might help to explain inconsistencies between studies. Moreover, the information may be useful in answering parents’ questions about challenges that vulnerable children with number-processing problems, such as children with mathematical learning disorders, might encounter. In addition, the study might give some information on the stimuli that can be used to effectively foster children’s magnitude processing skills.
International Nuclear Information System (INIS)
Luner, S.J.
1978-01-01
A double antibody assay for thyroxine using 125 I as label was carried out on 10-μl samples in Microtiter V-plates. After an additional centrifugation to compact the precipitates the plates were placed in contact with x-ray film overnight and the spots were scanned. In the 20 to 160 ng/ml range the average coefficient of variation for thyroxine concentration determined on the basis of film spot optical density was 11 percent compared to 4.8 percent obtained using a standard gamma counter. Eliminating the need for each sample to spend on the order of 1 min in a crystal well detector makes the method convenient for large-scale applications involving more than 3000 samples per day
How to implement a quantum algorithm on a large number of qubits by controlling one central qubit
Zagoskin, Alexander; Ashhab, Sahel; Johansson, J. R.; Nori, Franco
2010-03-01
It is desirable to minimize the number of control parameters needed to perform a quantum algorithm. We show that, under certain conditions, an entire quantum algorithm can be efficiently implemented by controlling a single central qubit in a quantum computer. We also show that the different system parameters do not need to be designed accurately during fabrication. They can be determined through the response of the central qubit to external driving. Our proposal is well suited for hybrid architectures that combine microscopic and macroscopic qubits. More details can be found in: A.M. Zagoskin, S. Ashhab, J.R. Johansson, F. Nori, Quantum two-level systems in Josephson junctions as naturally formed qubits, Phys. Rev. Lett. 97, 077001 (2006); and S. Ashhab, J.R. Johansson, F. Nori, Rabi oscillations in a qubit coupled to a quantum two-level system, New J. Phys. 8, 103 (2006).
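A minimal sketch of the Rabi oscillations referred to in the cited work above (a resonantly driven two-level system in the rotating frame; the Rabi frequency is an invented value, and this illustrates only the basic physics, not the hybrid architecture):

    import numpy as np
    from scipy.linalg import expm

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    Omega = 2 * np.pi * 5e6                  # Rabi frequency (made-up value)
    dt = 1e-9
    U = expm(-1j * (Omega / 2) * sx * dt)    # exact one-step propagator

    psi = np.array([1, 0], dtype=complex)    # start in the ground state
    pop_excited = []
    for _ in range(400):
        psi = U @ psi
        pop_excited.append(abs(psi[1]) ** 2) # oscillates as sin^2(Omega * t / 2)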
International Nuclear Information System (INIS)
Chiang, Yi-Kuan; Gebhardt, Karl; Overzier, Roderik
2014-01-01
To demonstrate the feasibility of studying the epoch of massive galaxy cluster formation in a more systematic manner using current and future galaxy surveys, we report the discovery of a large sample of protocluster candidates in the 1.62 deg² COSMOS/UltraVISTA field traced by optical/infrared selected galaxies using photometric redshifts. By comparing properly smoothed three-dimensional galaxy density maps of the observations and a set of matched simulations incorporating the dominant observational effects (galaxy selection and photometric redshift uncertainties), we first confirm that the observed ∼15 comoving Mpc-scale galaxy clustering is consistent with ΛCDM models. Using further the relation between high-z overdensity and the present-day cluster mass calibrated in these matched simulations, we found 36 candidate structures at 1.6 < z < 3.1, showing overdensities consistent with the progenitors of M_{z=0} ∼ 10^{15} M_☉ clusters. Taking into account the significant upward scattering of lower-mass structures, the probabilities for the candidates to have at least M_{z=0} ∼ 10^{14} M_☉ are ∼70%. For each structure, about 15%-40% of photometric galaxy candidates are expected to be true protocluster members that will merge into a cluster-scale halo by z = 0. With solely photometric redshifts, we successfully rediscover two spectroscopically confirmed structures in this field, suggesting that our algorithm is robust. This work generates a large sample of uniformly selected protocluster candidates, providing rich targets for spectroscopic follow-up and subsequent studies of cluster formation. Meanwhile, it demonstrates the potential for probing early cluster formation with upcoming redshift surveys such as the Hobby-Eberly Telescope Dark Energy Experiment and the Subaru Prime Focus Spectrograph survey
Yano, T.; Nishino, K.; Kawamura, H.; Ueno, I.; Matsumoto, S.
2015-02-01
This paper reports the experimental results on the instability and associated roll structures (RSs) of Marangoni convection in liquid bridges formed under the microgravity environment on the International Space Station. The geometry of interest is high aspect ratio (AR = height/diameter ≥ 1.0) liquid bridges of high Prandtl number fluids (Pr = 67 and 207) suspended between coaxial disks heated differentially. The unsteady flow field and associated RSs were revealed with the three-dimensional particle tracking velocimetry. It is found that the flow field after the onset of instability exhibits oscillations with azimuthal mode number m = 1 and associated RSs traveling in the axial direction. The RSs travel in the same direction as the surface flow (co-flow direction) for 1.00 ≤ AR ≤ 1.25 while they travel in the opposite direction (counter-flow direction) for AR ≥ 1.50, thus showing the change of traveling directions with AR. This traveling direction for AR ≥ 1.50 is reversed to the co-flow direction when the temperature difference between the disks is increased to the condition far beyond the critical one. This change of traveling directions is accompanied by the increase of the oscillation frequency. The characteristics of the RSs for AR ≥ 1.50, such as the azimuthal mode of oscillation, the dimensionless oscillation frequency, and the traveling direction, are in reasonable agreement with those of the previous sounding rocket experiment for AR = 2.50 and those of the linear stability analysis of an infinite liquid bridge.
The Stoichiometric Divisome: A Hypothesis
Directory of Open Access Journals (Sweden)
Waldemar Vollmer
2015-05-01
Full Text Available Dividing Escherichia coli cells simultaneously constrict the inner membrane, peptidoglycan layer and outer membrane to synthesize the new poles of the daughter cells. For this, more than 30 proteins localize to mid-cell where they form a large, ring-like assembly, the divisome, facilitating division. Although the precise function of most divisome proteins is unknown, it became apparent in recent years that dynamic protein-protein interactions are essential for divisome assembly and function. However, little is known about the nature of the interactions involved and the stoichiometry of the proteins within the divisome. A recent study (Li et al., 2014) used ribosome profiling to measure the absolute protein synthesis rates in E. coli. Interestingly, they observed that most proteins which participate in known multiprotein complexes are synthesized proportional to their stoichiometry. Based on this principle we present a hypothesis for the stoichiometry of the core of the divisome, taking into account known protein-protein interactions. From this hypothesis we infer a possible mechanism for PG synthesis during division.
A LARGE NUMBER OF z > 6 GALAXIES AROUND A QSO AT z = 6.43: EVIDENCE FOR A PROTOCLUSTER?
International Nuclear Information System (INIS)
Utsumi, Yousuke; Kashikawa, Nobunari; Miyazaki, Satoshi; Komiyama, Yutaka; Goto, Tomotsugu; Furusawa, Hisanori; Overzier, Roderik
2010-01-01
QSOs have been thought to be important for tracing highly biased regions in the early universe, from which the present-day massive galaxies and galaxy clusters formed. While overdensities of star-forming galaxies have been found around QSOs at 2 < z < 5, whether QSOs at z > 6 reside in similarly overdense environments is less clear. Previous studies with the Hubble Space Telescope (HST) have reported the detection of small excesses of faint dropout galaxies in some QSO fields, but these surveys probed a relatively small region surrounding the QSOs. To overcome this problem, we have observed the most distant QSO at z = 6.4 using the large field of view of the Suprime-Cam (34' x 27'). Newly installed red-sensitive fully depleted CCDs allowed us to select Lyman break galaxies (LBGs) at z ∼ 6.4 more efficiently. We found seven LBGs in the QSO field, whereas only one exists in a comparison field. The significance of this apparent excess is difficult to quantify without spectroscopic confirmation and additional control fields. The Poisson probability to find seven objects when one expects four is ∼10%, while the probability to find seven objects in one field and only one in the other is less than 0.4%, suggesting that the QSO field is significantly overdense relative to the control field. These conclusions are supported by a comparison with a cosmological smoothed particle hydrodynamics simulation which includes the higher order clustering of galaxies. We find some evidence that the LBGs are distributed in a ring-like shape centered on the QSO with a radius of ∼3 Mpc. There are no candidate LBGs within 2 Mpc from the QSO, i.e., galaxies are clustered around the QSO but appear to avoid the very center. These results suggest that the QSO is embedded in an overdense region when defined on a sufficiently large scale (i.e., larger than an HST/ACS pointing). This suggests that the QSO was indeed born in a massive halo. The central deficit of galaxies may indicate that (1) the strong UV radiation from the QSO suppressed galaxy formation in
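The quoted Poisson probabilities can be checked directly (the abstract does not state whether "seven objects" means exactly seven or at least seven, so both conventions are shown):

    from scipy.stats import poisson

    mu = 4                                          # expected count per field
    print(poisson.sf(6, mu))                        # P(N >= 7) ≈ 0.11, the ~10% quoted
    print(poisson.pmf(7, mu) * poisson.pmf(1, mu))  # exactly 7 and exactly 1: ≈ 0.4%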
Whitmore, Stephen A.; Petersen, Brian J.; Scott, David D.
1996-01-01
This paper develops a dynamic model for pressure sensors in continuum and rarefied flows with longitudinal temperature gradients. The model was developed from the unsteady Navier-Stokes momentum, energy, and continuity equations and was linearized using small perturbations. The energy equation was decoupled from momentum and continuity assuming a polytropic flow process. Rarefied flow conditions were accounted for using a slip-flow boundary condition at the tubing wall. The equations were radially averaged and solved assuming gas properties remain constant along a small tubing element. This fundamental solution was used as a building block for arbitrary geometries where fluid properties may also vary longitudinally in the tube. The problem was solved recursively starting at the transducer and working upstream in the tube. Dynamic frequency response tests were performed for continuum flow conditions in the presence of temperature gradients. These tests validated the recursive formulation of the model. Model steady-state behavior was analyzed using the final value theorem. Tests were performed for rarefied flow conditions and compared to the model steady-state response to evaluate the regime of applicability. Model comparisons were excellent for Knudsen numbers up to 0.6. Beyond this point, molecular effects caused model analyses to become inaccurate.
Heping, Wang; Xiaoguang, Li; Duyang, Zang; Rui, Hu; Xingguo, Geng
2017-11-01
This paper presents an exploration of phase separation in a magnetic field using a lattice Boltzmann method (LBM) coupled with magnetohydrodynamics (MHD). The left vertical wall was kept at a constant magnetic field. Simulations were conducted with a strong magnetic field to enhance phase separation and increase the size of the separated phases. The focus was on the effect of magnetic intensity, characterized by the Hartmann number (Ha), on the phase separation properties. The numerical investigation was carried out for different governing parameters, namely Ha and the component ratio of the mixed liquid. The effective morphological evolutions of phase separation in different magnetic fields were demonstrated. The patterns showed that slanted elliptical phases were created by increasing Ha, due to the formation and growth of magnetic torque and force. The growth kinetics of magnetic phase separation were characterized by the spherically averaged structure factor and by the ratio of the separated phases to the total system. The results indicate that increasing Ha can increase the average size of the separated phases and accelerate the spinodal decomposition and domain growth stages. Especially for larger component ratios of the mixed phases, the degree of separation was also significantly improved by increasing the magnetic intensity. These numerical results provide guidance for setting the optimum conditions for phase separation induced by a magnetic field.
Box-particle probability hypothesis density filtering
Schikora, M.; Gning, A.; Mihaylova, L.; Cremers, D.; Koch, W.
2014-01-01
This paper develops a novel approach for multitarget tracking, called box-particle probability hypothesis density filter (box-PHD filter). The approach is able to track multiple targets and estimates the unknown number of targets. Furthermore, it is capable of dealing with three sources of uncertainty: stochastic, set-theoretic, and data association uncertainty. The box-PHD filter reduces the number of particles significantly, which improves the runtime considerably. The small number of box-p...
International Nuclear Information System (INIS)
McJeon, Haewon C.; Clarke, Leon; Kyle, Page; Wise, Marshall; Hackbarth, Andrew; Bryant, Benjamin P.; Lempert, Robert J.
2011-01-01
Advanced low-carbon energy technologies can substantially reduce the cost of stabilizing atmospheric carbon dioxide concentrations. Understanding the interactions between these technologies and their impact on the costs of stabilization can help inform energy policy decisions. Many previous studies have addressed this challenge by exploring a small number of representative scenarios that represent particular combinations of future technology developments. This paper uses a combinatorial approach in which scenarios are created for all combinations of the technology development assumptions that underlie a smaller, representative set of scenarios. We estimate stabilization costs for 768 runs of the Global Change Assessment Model (GCAM), based on 384 different combinations of assumptions about the future performance of technologies and two stabilization goals. Graphical depiction of the distribution of stabilization costs provides first-order insights about the full data set and individual technologies. We apply a formal scenario discovery method to obtain more nuanced insights about the combinations of technology assumptions most strongly associated with high-cost outcomes. Many of the fundamental insights from traditional representative scenario analysis still hold under this comprehensive combinatorial analysis. For example, the importance of carbon capture and storage (CCS) and the substitution effect among supply technologies are consistently demonstrated. The results also provide more clarity regarding insights not easily demonstrated through representative scenario analysis. For example, they show more clearly how certain supply technologies can provide a hedge against high stabilization costs, and that aggregate end-use efficiency improvements deliver relatively consistent stabilization cost reductions. Furthermore, the results indicate that a lack of CCS options combined with lower technological advances in the buildings sector or the transportation sector is
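The combinatorial construction of scenarios described above can be expressed compactly (the dimension names and levels below are invented placeholders, not GCAM's actual technology set):

    from itertools import product

    dims = {
        "ccs":        ["none", "reference", "advanced"],
        "renewables": ["reference", "advanced"],
        "nuclear":    ["none", "reference", "advanced"],
        "buildings":  ["reference", "advanced"],
        "transport":  ["reference", "advanced"],
    }

    scenarios = [dict(zip(dims, combo)) for combo in product(*dims.values())]
    print(len(scenarios))   # full factorial: 3 * 2 * 3 * 2 * 2 = 72 combinations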
Memory in astrocytes: a hypothesis
Directory of Open Access Journals (Sweden)
Caudle Robert M
2006-01-01
Full Text Available Abstract Background: Recent work has indicated an increasingly complex role for astrocytes in the central nervous system. Astrocytes are now known to exchange information with neurons at synaptic junctions and to alter the information processing capabilities of the neurons. As an extension of this trend a hypothesis was proposed that astrocytes function to store information. To explore this idea the ion channels in biological membranes were compared to models known as cellular automata. These comparisons were made to test the hypothesis that ion channels in the membranes of astrocytes form a dynamic information storage device. Results: Two-dimensional cellular automata were found to behave similarly to ion channels in a membrane when they function at the boundary between order and chaos. The length of time information is stored in this class of cellular automata is exponentially related to the number of units. Therefore the length of time biological ion channels store information was plotted versus the estimated number of ion channels in the tissue. This analysis indicates that there is an exponential relationship between memory and the number of ion channels. Extrapolation of this relationship to the estimated number of ion channels in the astrocytes of a human brain indicates that memory can be stored in this system for an entire life span. Interestingly, this information is not affixed to any physical structure, but is stored as an organization of the activity of the ion channels. Further analysis of two-dimensional cellular automata also demonstrates that these systems have both associative and temporal memory capabilities. Conclusion: It is concluded that astrocytes may serve as a dynamic information sink for neurons. The memory in the astrocytes is stored by organizing the activity of ion channels and is not associated with a physical location such as a synapse. In order for this form of memory to be of significant duration it is necessary
2014-01-01
Background: Genomic disorders are caused by copy number changes that may exhibit recurrent breakpoints processed by nonallelic homologous recombination. However, region-specific disease-associated copy number changes have also been observed which exhibit non-recurrent breakpoints. The mechanisms underlying these non-recurrent copy number changes have not yet been fully elucidated. Results: We analyze large NF1 deletions with non-recurrent breakpoints as a model to investigate the full spectrum of causative mechanisms, and observe that they are mediated by various DNA double-strand break repair mechanisms, as well as aberrant replication. Further, two of the 17 NF1 deletions with non-recurrent breakpoints, identified in unrelated patients, occur in association with the concomitant insertion of SINE/variable number of tandem repeats/Alu (SVA) retrotransposons at the deletion breakpoints. The respective breakpoints are refractory to analysis by standard breakpoint-spanning PCRs and are only identified by means of optimized PCR protocols designed to amplify across GC-rich sequences. The SVA elements are integrated within SUZ12P intron 8 in both patients, and were mediated by target-primed reverse transcription of SVA mRNA intermediates derived from retrotranspositionally active source elements. Both SVA insertions occurred during early postzygotic development and are uniquely associated with large deletions of 1 Mb and 867 kb, respectively, at the insertion sites. Conclusions: Since active SVA elements are abundant in the human genome and the retrotranspositional activity of many SVA source elements is high, SVA insertion-associated large genomic deletions encompassing many hundreds of kilobases could constitute a novel and as yet under-appreciated mechanism underlying large-scale copy number changes in the human genome. PMID:24958239
International Nuclear Information System (INIS)
Hasegawa, K.; Lim, C.S.; Ogure, K.
2003-01-01
We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario
Humans have evolved specialized skills of social cognition: the cultural intelligence hypothesis.
Herrmann, Esther; Call, Josep; Hernández-Lloreda, María Victoria; Hare, Brian; Tomasello, Michael
2007-09-07
Humans have many cognitive skills not possessed by their nearest primate relatives. The cultural intelligence hypothesis argues that this is mainly due to a species-specific set of social-cognitive skills, emerging early in ontogeny, for participating and exchanging knowledge in cultural groups. We tested this hypothesis by giving a comprehensive battery of cognitive tests to large numbers of two of humans' closest primate relatives, chimpanzees and orangutans, as well as to 2.5-year-old human children before literacy and schooling. Supporting the cultural intelligence hypothesis and contradicting the hypothesis that humans simply have more "general intelligence," we found that the children and chimpanzees had very similar cognitive skills for dealing with the physical world but that the children had more sophisticated cognitive skills than either of the ape species for dealing with the social world.
Directory of Open Access Journals (Sweden)
Guilherme Mourão
2010-10-01
Full Text Available The jabiru stork, Jabiru mycteria (Lichtenstein, 1819), a large, long-legged wading bird occurring in lowland wetlands from southern Mexico to northern Argentina, is considered endangered in a large portion of its distribution range. We conducted aerial surveys to estimate the number of jabiru active nests in the Brazilian Pantanal (140,000 km²) in September of 1991-1993, 1998, 2000-2002, and 2004. Corrected densities of active nests were regressed against the annual hydrologic index (AHI), an index of flood extension in the Pantanal based on the water level of the Paraguay River. Annual nest density was a non-linear function of the AHI, modeled by the equation 6.5 × 10⁻⁸ · AHI^1.99 (corrected r² = 0.72, n = 7). We applied this model to the AHI between 1900 and 2004. The results indicate that the number of jabiru nests may have varied from about 220 in 1971 to more than 23,000 in the nesting season of 1921, and the estimates for our study period (1991 to 2004) averaged about 12,400 nests. Our model indicates that inter-annual variations in flooding extent can determine dramatic changes in the number of active jabiru nests. Since the jabiru stork responds negatively to drier conditions in the Pantanal, direct human-induced changes in the hydrological patterns, as well as the effects of global climate change, may strongly jeopardize the population in the region.
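The reported regression model can be applied directly (the AHI is in whatever units the authors used; the function merely restates the fitted power law from the abstract):

    def jabiru_nests(ahi):
        # Fitted model from the abstract: nests = 6.5e-8 * AHI**1.99
        return 6.5e-8 * ahi ** 1.99

    # Near-quadratic scaling: doubling the flood index roughly quadruples nests
    for ahi in (1e5, 2e5, 4e5):
        print(f"AHI = {ahi:.0e}: ~{jabiru_nests(ahi):,.0f} nests")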
Flegel, Ashlie B.; Giel, Paul W.; Welch, Gerard E.
2014-01-01
The effects of high inlet turbulence intensity on the aerodynamic performance of a variable-speed power turbine blade are examined over large incidence and Reynolds number ranges. These results are compared to previous measurements made in a low-turbulence environment. Both high- and low-turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The current study extends this to LPT-relevant turbulence levels while perhaps sacrificing transitional flow effects. Assessing the effects of turbulence at these large incidence and Reynolds number variations complements the existing database. Downstream total pressure and exit angle data were acquired for 10 incidence angles ranging from +15.8 deg to -51.0 deg. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12×10^5 to 2.12×10^6 and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hotwire located 0.415 axial-chord upstream of the blade row. The inlet turbulence levels ranged from 8 to 15 percent for the current study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated as needed for computational inlet boundary conditions. Downstream flow field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7 percent axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction surface separation that was evident at many of the low-Tu conditions has been eliminated. At
Meng, Xuhui; Guo, Zhaoli
2015-10-01
A lattice Boltzmann model with a multiple-relaxation-time (MRT) collision operator is proposed for incompressible miscible flow with a large viscosity ratio as well as a high Péclet number in this paper. The equilibria in the present model are motivated by the lattice kinetic scheme previously developed by Inamuro et al. [Philos. Trans. R. Soc. London, Ser. A 360, 477 (2002), 10.1098/rsta.2001.0942]. The fluid viscosity and diffusion coefficient depend on both the corresponding relaxation times and additional adjustable parameters in this model. As a result, the corresponding relaxation times can be adjusted in proper ranges to enhance the performance of the model. Numerical validations of the Poiseuille flow and a diffusion-reaction problem demonstrate that the proposed model has second-order accuracy in space. Thereafter, the model is used to simulate flow through a porous medium, and the results show that the proposed model has the advantage of obtaining a viscosity-independent permeability, which makes it a robust method for simulating flow in porous media. Finally, a set of simulations are conducted on the viscous miscible displacement between two parallel plates. The results reveal that the present model can be used to simulate, to a high level of accuracy, flows with large viscosity ratios and/or high Péclet numbers. Moreover, the present model is shown to provide superior stability in the limit of high kinematic viscosity. In summary, the numerical results indicate that the present lattice Boltzmann model is an ideal numerical tool for simulating flow with a large viscosity ratio and/or a high Péclet number.
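For orientation, the tunability the abstract refers to rests on the standard lattice-Boltzmann link between a relaxation time and a transport coefficient. The sketch below (Python) uses the common relation ν = c_s²(τ − 1/2)Δt in lattice units; it is generic background, not the paper's MRT model.

```python
# Standard lattice-Boltzmann relation between a relaxation time tau and
# the kinematic viscosity: nu = cs^2 * (tau - 0.5) * dt. Lattice units
# (dt = dx = 1) and the D2Q9 sound speed cs^2 = 1/3 are assumed here.

CS2 = 1.0 / 3.0  # squared lattice sound speed for D2Q9

def viscosity_from_tau(tau: float, dt: float = 1.0) -> float:
    return CS2 * (tau - 0.5) * dt

def tau_from_viscosity(nu: float, dt: float = 1.0) -> float:
    return nu / (CS2 * dt) + 0.5

# A large viscosity ratio between two miscible fluids maps to two taus:
print(tau_from_viscosity(0.1), tau_from_viscosity(10.0))  # ratio 100
```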
Directory of Open Access Journals (Sweden)
TRIFINA, L.
2011-02-01
This paper analyzes the influence of the extrinsic information scaling coefficient on a double-iterative decoding algorithm for space-time turbo codes with a large number of antennas. The max-log-APP algorithm is used, scaling both the extrinsic information in the turbo decoder and that used at the input of the interference-canceling block. Scaling coefficients of 0.7 or 0.75 lead to a 0.5 dB coding gain compared to the no-scaling case, when one or more iterations are used to cancel the spatial interference.
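To make the scaling step concrete, here is a minimal sketch (Python, with invented LLR values): the operation studied in the abstract is simply multiplying the extrinsic information by a coefficient such as 0.7-0.75 before it is reused.

```python
# Illustrative damping of extrinsic log-likelihood ratios (LLRs) between
# decoding iterations. The decoder itself is not shown; the LLR values
# below are invented placeholders.
import numpy as np

def scale_extrinsic(llr_extrinsic: np.ndarray, coeff: float = 0.75) -> np.ndarray:
    """Scale extrinsic information before the next decoder/IC stage."""
    return coeff * llr_extrinsic

llr = np.array([2.3, -1.1, 0.4, -3.0])
print(scale_extrinsic(llr, coeff=0.7))
```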
Directory of Open Access Journals (Sweden)
Débora Jardim-Messeder
2017-12-01
Carnivorans are a diverse group of mammals that includes carnivorous, omnivorous and herbivorous, domesticated and wild species, with a large range of brain sizes. Carnivory is one of several factors expected to be cognitively demanding for carnivorans, due to a requirement to outsmart larger prey. On the other hand, large carnivoran species have high hunting costs and unreliable feeding patterns, which, given the high metabolic cost of brain neurons, might put them at risk of metabolic constraints regarding how many brain neurons they can afford, especially in the cerebral cortex. For a given cortical size, do carnivoran species have more cortical neurons than the herbivorous species they prey upon? We find they do not; carnivorans (cat, mongoose, dog, hyena, lion) share with non-primates, including artiodactyls (the typical prey of large carnivorans), roughly the same relationship between cortical mass and number of neurons, which suggests that carnivorans are subject to the same evolutionary scaling rules as other non-primate clades. However, there are a few important exceptions. Carnivorans stand out in that the usual relationship between larger body, larger cortical mass and larger number of cortical neurons only applies to small and medium-sized species, and not beyond dogs: we find that the golden retriever dog has more cortical neurons than the striped hyena, African lion and even brown bear, even though the latter species have up to three times larger cortices than dogs. Remarkably, the brown bear cerebral cortex, the largest examined, only has as many neurons as the ten times smaller cat cerebral cortex, although it does have the expected ten times as many non-neuronal cells in the cerebral cortex compared to the cat. We also find that raccoons have dog-like numbers of neurons in their cat-sized brain, which makes them comparable to primates in neuronal density. Comparison of domestic and wild species suggests that the neuronal
International Nuclear Information System (INIS)
Vengalattore, M.; Conroy, R.S.; Prentiss, M.G.
2004-01-01
The phase space density of dense, cylindrical clouds of atoms in a 2D magneto-optic trap is investigated. For a large number of trapped atoms (>10^8), the density of a spherical cloud is limited by photon reabsorption. However, as the atom cloud is deformed to reduce the radial optical density, the temperature of the atoms decreases due to the suppression of multiple scattering, leading to an increase in the phase space density. A phase space density of 2×10^−4 has been achieved in a magneto-optic trap containing 2×10^8 atoms
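For reference, the phase-space density quoted above is the dimensionless combination of number density and thermal de Broglie wavelength, ρ = nλ_dB³. The sketch below (Python) evaluates it for an invented rubidium-like cloud; the numbers are illustrative, not the experiment's.

```python
# Phase-space density rho = n * lambda_dB**3 of a trapped atomic cloud.
# All input values below are illustrative placeholders.
import math

HBAR = 1.054571817e-34  # J s
KB = 1.380649e-23       # J/K

def phase_space_density(n: float, mass: float, temp: float) -> float:
    """n: number density (m^-3), mass (kg), temp (K)."""
    lambda_db = math.sqrt(2 * math.pi * HBAR**2 / (mass * KB * temp))  # thermal de Broglie wavelength
    return n * lambda_db**3

m_rb87 = 1.443e-25  # kg, assuming a rubidium-87 cloud
print(phase_space_density(n=1e17, mass=m_rb87, temp=50e-6))
```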
Purushothaman, Jasmine; Kharusi, Lubna Al; Mills, Claudia E; Ghielani, Hamed; Marzouki, Mohammad Al
2013-12-11
A bloom of the hydromedusan jellyfish, Timoides agassizii, occurred in February 2011 off the coast of Sohar, Al Batinah, Sultanate of Oman, in the Gulf of Oman. This species was first observed in 1902 in great numbers off Haddummati Atoll in the Maldive Islands in the Indian Ocean and has rarely been seen since. The species appeared briefly in large numbers off Oman in 2011, and subsequent examination of our 2009 zooplankton samples from Sohar revealed that it was also present in low numbers (two collected) in one sample in 2009; these are the first records in the Indian Ocean north of the Maldives. Medusae collected off Oman were almost identical to those recorded previously from the Maldive Islands, Papua New Guinea, the Marshall Islands, Guam, the South China Sea, and Okinawa. T. agassizii is a species that likely lives for several months. It was present in our plankton samples together with large numbers of the oceanic siphonophore Physalia physalis only during a single month's samples, suggesting that the temporary bloom off Oman was likely due to the arrival of mature, open-ocean medusae into nearshore waters. We see no evidence that T. agassizii has established a new population along Oman, since if it had, it would likely have been present in more than one sample period. We are unable to deduce further details of the life cycle of this species from blooms of many mature individuals nearshore, about a century apart. Examination of a single damaged T. agassizii medusa from Guam calls into question the existence of its congener, T. latistyla, known only from a single specimen.
Energy Technology Data Exchange (ETDEWEB)
Chiu, J; Ma, L [Department of Radiation Oncology, University of California San Francisco School of Medicine, San Francisco, CA (United States)
2015-06-15
Purpose: To develop a treatment delivery and planning strategy that increases the number of beams to minimize dose to brain tissue surrounding a target, while maximizing dose coverage to the target. Methods: We analyzed 14 different treatment plans via Leksell PFX and 4C. For standardization, single tumor cases were chosen. Original treatment plans were compared with two optimized plans. The number of beams was increased in treatment plans by varying tilt angles of the patient head, while maintaining the original isocenter, the beam positions in the x-, y- and z-axes, collimator size, and beam blocking. PFX optimized plans increased beam numbers with three pre-set tilt angles (70°, 90°, and 110°), and 4C optimized plans increased beam numbers with tilt angles varying arbitrarily over the range of 30° to 150°. Optimized treatment plans were compared dosimetrically with original treatment plans. Results: Comparing total normal tissue isodose volumes between original and optimized plans, the low-level percentage isodose volumes decreased in all plans. Despite increasing the number of beams by up to a factor of 25, beam-on times for 1 tilt angle versus 3 or more tilt angles were comparable (<1 min difference). In 64% (9/14) of the studied cases, the volume percentage decreased by >5%, with the highest value reaching 19%. The addition of more tilt angles correlates with a greater decrease in normal brain irradiated volume. Selectivity and coverage for original and optimized plans remained comparable. Conclusion: Adding a large number of additional focused beams with variable patient head tilt improves dose fall-off for brain radiosurgery. The study demonstrates the technical feasibility of adding beams to decrease dose to normal tissue surrounding the target volume.
Directory of Open Access Journals (Sweden)
Varala Kranthi
2007-05-01
Background Extensive computational and database tools are available to mine genomic and genetic databases for model organisms, but little genomic data is available for many species of ecological or agricultural significance, especially those with large genomes. Genome surveys using conventional sequencing techniques are powerful, particularly for detecting sequences present in many copies per genome. However these methods are time-consuming and have potential drawbacks. High throughput 454 sequencing provides an alternative method by which much information can be gained quickly and cheaply from high-coverage surveys of genomic DNA. Results We sequenced 78 million base-pairs of randomly sheared soybean DNA that passed our quality criteria. Computational analysis of the survey sequences provided global information on the abundant repetitive sequences in soybean. The sequence was used to determine the copy number across regions of large genomic clones or contigs and to discover higher-order structures within satellite repeats. We have created an annotated, online database of sequences present in multiple copies in the soybean genome. The low bias of pyrosequencing against repeat sequences is demonstrated by the overall composition of the survey data, which matches well with past estimates of repetitive DNA content obtained by DNA re-association kinetics (Cot analysis). Conclusion This approach provides a potential aid to conventional or shotgun genome assembly, by allowing rapid assessment of copy number in any clone or clone-end sequence. In addition, we show that partial sequencing can provide access to partial protein-coding sequences.
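The clone copy-number assessment mentioned in the Conclusion reduces to a depth-of-coverage ratio. A sketch with mock numbers follows; the genome size and counts are our placeholders, not the paper's values.

```python
# Sketch of the read-depth idea behind the abstract's copy-number
# assessment: the survey bases matching a clone, relative to the
# single-copy expectation, estimate its genomic copy number.

def estimated_copy_number(matched_bp: float, clone_bp: float,
                          survey_bp: float, genome_bp: float) -> float:
    expected_single_copy_bp = clone_bp * survey_bp / genome_bp
    return matched_bp / expected_single_copy_bp

# Hypothetical: 14,200 survey bases hit a 100 kb clone, given a 78 Mb
# survey of a ~1.1 Gb genome (expected ~7,100 bp if single copy).
print(f"~{estimated_copy_number(14_200, 100_000, 78e6, 1.1e9):.1f} copies")
```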
Lekala, M. L.; Chakrabarti, B.; Das, T. K.; Rampho, G. J.; Sofianos, S. A.; Adam, R. M.; Haldar, S. K.
2017-05-01
We study the ground state and the low-lying excitations of a trapped Bose gas in an isotropic harmonic potential for very small (~3) to very large (~10^7) particle numbers. We use two-body correlated basis functions and the shape-dependent van der Waals interaction in our many-body calculations. We present an exhaustive study of the effect of inter-atomic correlations and the accuracy of the mean-field equations over a wide range of particle numbers. We calculate the ground-state energy and the one-body density for different values of the van der Waals parameter C6. We compare our results with those of the modified Gross-Pitaevskii equation, the correlated Hartree hypernetted-chain equations (which also utilize the two-body correlated basis functions), as well as with diffusion Monte Carlo results for hard-sphere interactions. We observe the effect of the attractive tail of the van der Waals potential in the calculations of the one-body density, relative to the truly repulsive zero-range potential used in the Gross-Pitaevskii equation, and discuss the finite-size effects. We also present the low-lying collective excitations, which are well described by a hydrodynamic model in the large-particle limit.
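For reference, the mean-field baseline the abstract compares against is the Gross-Pitaevskii equation with a contact interaction; a standard form (not the paper's correlated-basis formulation) is:

```latex
% Standard time-independent Gross-Pitaevskii equation for N trapped bosons
% with s-wave scattering length a; this contact-interaction mean field is
% the baseline against which the correlated-basis results are compared.
\left[ -\frac{\hbar^{2}}{2m}\nabla^{2} + \frac{1}{2} m\,\omega^{2} r^{2}
  + g N \lvert \psi(\mathbf{r}) \rvert^{2} \right] \psi(\mathbf{r})
  = \mu\, \psi(\mathbf{r}),
\qquad g = \frac{4\pi\hbar^{2} a}{m}.
```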
Kolstein, M.; De Lorenzo, G.; Chmeissani, M.
2014-04-01
The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For the Compton camera, especially with a large number of readout channels, image reconstruction presents a big challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources and with a distance between the field of view and the first detector plane equal to 100 mm, which corresponds to a realistic nuclear medicine environment.
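For orientation, the core of LM-OSEM is a list-mode MLEM update applied to subsets of events. The toy sketch below (Python) shows one MLEM iteration with a random stand-in system matrix; it is generic background, not the VIP camera's actual detector model.

```python
# Minimal list-mode MLEM update (the OSEM variant splits events into
# subsets). a[i, j] holds the probability that event i originated in
# voxel j; everything here is a toy stand-in.
import numpy as np

def lm_mlem_iteration(lam: np.ndarray, a: np.ndarray, sens: np.ndarray) -> np.ndarray:
    """lam: current image (J voxels); a: (I events, J); sens: voxel sensitivities."""
    fwd = a @ lam              # expected intensity seen by each event
    back = a.T @ (1.0 / fwd)   # backproject the event ratios
    return lam * back / sens

rng = np.random.default_rng(0)
a = rng.random((500, 64)) * 0.01   # toy system matrix, 500 events x 64 voxels
lam = np.ones(64)                  # uniform initial image
sens = a.sum(axis=0)               # sensitivity normalization
for _ in range(10):
    lam = lm_mlem_iteration(lam, a, sens)
print(lam.sum())
```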
Directory of Open Access Journals (Sweden)
Mohsen Champour
2015-09-01
The objective of this study was to compare the efficacy of tulathromycin (TUL) with a combination of florfenicol (FFC) and long-acting oxytetracycline (LAOTC) in the treatment of naturally occurring undifferentiated respiratory diseases in large numbers of sheep. In this study, seven natural outbreaks of sheep pneumonia in Garmsar, Iran were considered. From these outbreaks, 400 sheep exhibiting signs of respiratory disease were selected and randomly divided into two equal groups. The first group was treated with a single injection of TUL (dosed at 2.5 mg/kg body weight), and the second group was treated with concurrent injections of FFC (dosed at 40 mg/kg bwt) and LAOTC (dosed at 20 mg/kg bwt). In the first group, 186 (93%) sheep were found to be cured 5 days after the injection, and 14 (7%) sheep needed further treatment, of which 6 (3%) were cured and 8 (4%) died. In the second group, 172 (86%) sheep were cured after the injections, but 28 (14%) sheep needed further treatment, of which 10 (5%) were cured and 18 (9%) died. This study revealed that TUL was more efficacious than the combined treatment using FFC and LAOTC. As the first report of its kind, this field trial describes the successful treatment of undifferentiated respiratory diseases in large numbers of sheep. Thus, TUL can be used for the treatment of undifferentiated respiratory diseases in sheep. [J Adv Vet Anim Res 2015; 2(3): 279-284]
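For a feel of the statistics, the first-injection cure proportions reported above (186/200 vs. 172/200) can be compared with a two-proportion z-test; the choice of test is ours, not the paper's.

```python
# Two-proportion z-test on the reported first-injection cure counts.
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                        # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p1 - p2) / se

print(two_proportion_z(186, 200, 172, 200))  # z ~ 2.3, two-sided p ~ 0.02
```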
Taylor, S. R.
1984-01-01
The concept that the Moon was fissioned from the Earth after core separation is the most readily testable hypothesis of lunar origin, since direct comparisons of lunar and terrestrial compositions can be made. Differences found in such comparisons introduce so many ad hoc adjustments to the fission hypothesis that it becomes untestable. Further constraints may be obtained from attempting to date the volatile-refractory element fractionation. The combination of chemical and isotopic problems suggests that the fission hypothesis is no longer viable, and separate terrestrial and lunar accretion from a population of fractionated precursor planetesimals provides a more reasonable explanation.
Makki, Behrooz
2016-03-22
This paper investigates the performance of point-to-point multiple-input multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. Our results are obtained for different fading conditions, and the effect of power amplifier efficiency and feedback error probability on the performance of the MIMO-HARQ systems is analyzed. Then, we use some recent results on the achievable rates of finite block-length codes to analyze the effect of codeword length on the system performance. Moreover, we derive closed-form expressions for the asymptotic performance of the MIMO-HARQ systems when the number of antennas increases. Our analytical and numerical results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 1972-2012 IEEE.
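A Monte Carlo sketch of the underlying question (how many antennas for a target outage probability) under our own simplifying assumptions: open-loop transmission, i.i.d. Rayleigh fading, equal power allocation, and no HARQ. This is illustrative, not the paper's analysis.

```python
# Outage probability of an open-loop MIMO link vs. antenna count,
# estimated by Monte Carlo under i.i.d. Rayleigh fading.
import numpy as np

def outage_probability(nt: int, nr: int, rate_bps_hz: float, snr_db: float,
                       trials: int = 20000, seed: int = 1) -> float:
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    h = (rng.standard_normal((trials, nr, nt)) +
         1j * rng.standard_normal((trials, nr, nt))) / np.sqrt(2)
    gram = np.eye(nr) + (snr / nt) * h @ h.conj().transpose(0, 2, 1)
    capacity = np.log2(np.linalg.det(gram).real)   # bits/s/Hz per trial
    return float(np.mean(capacity < rate_bps_hz))

for n in (2, 4, 8):  # symmetric nt = nr = n
    print(n, outage_probability(n, n, rate_bps_hz=8.0, snr_db=10.0))
```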
Evaluating the Stage Learning Hypothesis.
Thomas, Hoben
1980-01-01
A procedure for evaluating the Genevan stage learning hypothesis is illustrated by analyzing Inhelder, Sinclair, and Bovet's guided learning experiments (in "Learning and the Development of Cognition." Cambridge: Harvard University Press, 1974). (Author/MP)
The Purchasing Power Parity Hypothesis:
African Journals Online (AJOL)
2011-10-02
Oct 2, 2011 ... reject the unit root hypothesis in real exchange rates may simply be due to the shortness ... Violations of Purchasing Power Parity and Their Implications for Efficient ... Official Intervention in the Foreign Exchange Market ...
Directory of Open Access Journals (Sweden)
Karsten Laursen
BACKGROUND: The Baltic/Wadden Sea eider Somateria mollissima flyway population is decreasing, and this trend is also reflected in the large eider colony at Christiansø, situated in the Baltic Sea. This colony showed a 15-fold increase from 1925 until the mid-1990s, followed by a rapid decline in recent years, although the causes of this trend remain unknown. Most birds from the colony winter in the Wadden Sea, for which environmental data and information on the size of the stock of their main prey, the mussel Mytilus edulis, exist. We hypothesised that changes in nutrients and water temperature in the Wadden Sea affected the ecosystem and hence the size of mussel stocks, the principal food item for eiders, thereby influencing the number of breeding eiders in the Christiansø colony. METHODOLOGY/PRINCIPAL FINDINGS: A positive relationship between the amount of fertilizer used by farmers and the concentration of phosphorus in the Wadden Sea (with a time lag of one year) allowed analysis of the predictions concerning effects of nutrients for the period 1925-2010. We found that (1) increasing amounts of fertilizer were used in agriculture, which increased the amount of nutrients in the marine environment and thereby the mussel stocks in the Wadden Sea; (2) the number of eiders at Christiansø increased when the amount of fertilizer increased; and finally (3) the number of eiders in the colony at Christiansø increased with the size of mussel stocks in the Wadden Sea. CONCLUSIONS/SIGNIFICANCE: The trend in the number of eiders at Christiansø is representative of the entire flyway population, and since nutrient reduction in the marine environment occurs in most parts of Northwest Europe, we hypothesize that this environmental candidate parameter is involved in the overall regulation of the Baltic/Wadden Sea eider population during recent decades.
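A minimal sketch of the kind of lag-1 regression the study describes, with entirely synthetic series standing in for the 1925-2010 data:

```python
# Synthetic sketch of the lag-1 relation described above: Wadden Sea
# phosphorus in year t regressed on fertilizer use in year t-1.
# All series are invented placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1925, 2011)
fertilizer = np.linspace(10, 100, years.size) + rng.normal(0, 5, years.size)
phosphorus = np.empty_like(fertilizer)
phosphorus[0] = np.nan                     # no year t-1 value for 1925
phosphorus[1:] = 0.08 * fertilizer[:-1] + rng.normal(0, 1.0, years.size - 1)

# Pair year t-1 fertilizer with year t phosphorus and fit a line:
slope, intercept = np.polyfit(fertilizer[:-1], phosphorus[1:], 1)
print(f"lag-1 slope {slope:.3f}, intercept {intercept:.2f}")
```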
Flegel, Ashlie Brynn; Giel, Paul W.; Welch, Gerard E.
2014-01-01
The effects of inlet turbulence intensity on the aerodynamic performance of a variable speed power turbine blade are examined over large incidence and Reynolds number ranges. Both high and low turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The high turbulence study extends this to LPT-relevant turbulence levels while perhaps sacrificing transitional flow effects. Downstream total pressure and exit angle data were acquired for ten incidence angles ranging from +15.8° to −51.0°. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12×10^5 to 2.12×10^6 and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hotwire located 0.415 axial-chord upstream of the blade row. The inlet turbulence levels ranged from 0.25 to 0.4 percent for the low Tu tests and 8 to 15 percent for the high Tu study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated as needed for computational inlet boundary conditions. Downstream flow field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7 percent axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction surface separation that was evident at many of the low Tu conditions has been eliminated. At the extreme positive and negative incidence angles, the data show substantial differences in the exit flow field. These differences are attributable to both the higher inlet Tu directly and to the thinner inlet endwall
Directory of Open Access Journals (Sweden)
Cigudosa Juan C
2011-05-01
Background Recent observations point towards the existence of a large number of neighborhoods composed of functionally-related gene modules that lie together in the genome. This local component in the distribution of functionality across chromosomes probably affects chromosomal architecture itself by limiting the ways in which genes can be arranged and distributed across the genome. As a direct consequence, it is presumable that diseases such as cancer, harboring DNA copy number alterations (CNAs), will have a symptomatology strongly dependent on modules of functionally-related genes rather than on a unique "important" gene. Methods We carried out a systematic analysis of more than 140,000 observations of CNAs in cancers and searched for enrichment of gene functional modules associated with high frequencies of losses or gains. Results The analysis of CNAs in cancers clearly demonstrates the existence of a significant pattern of loss of gene modules functionally related to cancer initiation and progression, along with the amplification of modules of genes related to unspecific defense against xenobiotics (probably chemotherapeutical agents). By extending this analysis to an Array-CGH dataset (glioblastomas from The Cancer Genome Atlas), we demonstrate the validity of this approach for investigating the functional impact of CNAs. Conclusions The presented results indicate promising clinical and therapeutic implications. Our findings also point directly to the necessity of adopting a function-centric, rather than gene-centric, view in the understanding of phenotypes or diseases harboring CNAs.
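The abstract does not name its enrichment statistic; a common choice for "module enrichment among altered genes" is a hypergeometric tail test, sketched below with invented counts.

```python
# Hypergeometric enrichment sketch: M genes in a functional module out of
# G genes genome-wide; k module genes among the n genes hit by copy-number
# losses. P(X >= k) asks whether the module is over-represented.
from math import comb

def hypergeom_enrichment_p(G: int, M: int, n: int, k: int) -> float:
    """Upper tail P(X >= k) for X ~ Hypergeometric(G, M, n)."""
    return sum(comb(M, i) * comb(G - M, n - i)
               for i in range(k, min(M, n) + 1)) / comb(G, n)

# Mock numbers: expected ~6 module genes among the hits; observed 20.
print(hypergeom_enrichment_p(G=20_000, M=150, n=800, k=20))
```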
Directory of Open Access Journals (Sweden)
Chuanchuan Xie
2017-01-01
The interaction of dielectrophoresis (DEP) particles in an electric field has been observed in many experiments, known as the "particle chains phenomenon". However, studies in 3D models (spherical particles) are rarely reported due to their complexity and significant computational cost. In this paper, we employed the iterative dipole moment (IDM) method to study the 3D interaction of a large number of dense DEP particles randomly distributed on a plane perpendicular to a uniform alternating current (AC) electric field in a bounded or unbounded space. The numerical results indicated that the particles cannot move out of the initial plane. Similar particles (either all positive or all negative DEP particles) always repelled each other, and did not form a chain. Dissimilar particles (a mixture of positive and negative DEP particles) always attracted each other, and formed particle chains consisting of alternately arranged positive and negative DEP particles. The particle chain patterns can be randomly multitudinous, depending on the initial particle distribution, the electric properties of the particles/fluid, the particle sizes and the number of particles. It is also found that the particle chain patterns can be effectively manipulated by tuning the frequency of the AC field, and an almost uniform distribution of particles in a bounded plane chip can be achieved when all of the particles are similar, which may have potential applications in the particle manipulation of microfluidics.
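As background on why positive and negative DEP particles exist at all, the sign of the time-averaged DEP force on a sphere follows the real part of the Clausius-Mossotti factor, a standard ingredient of dipole-based DEP models such as the IDM. The sketch below (Python, with illustrative material properties, not the paper's parameters) shows how tuning the AC frequency flips the sign, consistent with the frequency-based manipulation mentioned above.

```python
# Real part of the Clausius-Mossotti factor K for a sphere in an AC field,
# using complex permittivities eps* = eps - j*sigma/omega. Re[K] > 0 means
# positive DEP; Re[K] < 0 means negative DEP.
import math

def clausius_mossotti_real(eps_p, sig_p, eps_m, sig_m, omega):
    ep = eps_p - 1j * sig_p / omega   # particle complex permittivity
    em = eps_m - 1j * sig_m / omega   # medium complex permittivity
    return ((ep - em) / (ep + 2 * em)).real

EPS0 = 8.854e-12
for f_hz in (1e3, 1e6, 1e8):  # sweep the AC frequency
    k = clausius_mossotti_real(2.5 * EPS0, 1e-2, 78 * EPS0, 1e-3,
                               2 * math.pi * f_hz)
    print(f"{f_hz:.0e} Hz: Re[K] = {k:+.3f}")
```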
Directory of Open Access Journals (Sweden)
Anna Twardosz
2011-04-01
Diffuse large B-cell lymphoma is the commonest histological type of malignant lymphoma, and remains incurable in many cases. Developing more efficient immunotherapy strategies will require better understanding of the disorders of immune responses in cancer patients. NKT (natural killer-like T) cells were originally described as a unique population of T cells with the co-expression of NK cell markers. Apart from their role in protecting against microbial pathogens and controlling autoimmune diseases, NKT cells have recently been revealed as one of the key players in the immune responses against tumors. The objective of this study was to evaluate the frequency of CD3+/CD16+CD56+ cells in the peripheral blood of 28 diffuse large B-cell lymphoma (DLBCL) patients in correlation with clinical and laboratory parameters. Median percentages of CD3+/CD16+CD56+ cells were significantly lower in patients with DLBCL compared to healthy donors (7.37% vs. 9.01%, p = 0.01; 4.60% vs. 5.81%, p = 0.03), although there were no differences in absolute counts. The frequency and the absolute numbers of CD3+/CD16+CD56+ cells were lower in advanced clinical stages than in earlier ones. The median percentage of CD3+/CD16+CD56+ cells in patients in Ann Arbor stages 1–2 was 5.55% vs. 3.15% in stages 3–4 (p = 0.02), with median absolute counts respectively 0.26 G/L vs. 0.41 G/L (p = 0.02). The percentage and absolute numbers of CD3+/CD16+CD56+ cells were significantly higher in DLBCL patients without B-symptoms compared to the patients with B-symptoms (5.51% vs. 2.46%, p = 0.04; 0.21 G/L vs. 0.44 G/L, p = 0.04). The percentage of CD3+/CD16+CD56+ cells correlated inversely with serum lactate dehydrogenase (R = –0.445; p < 0.05), which might influence NKT count. These figures suggest a relationship between higher tumor burden and more aggressive disease and decreased NKT numbers. But it remains to be explained whether low NKT cell counts in the peripheral blood of patients with DLBCL are the result
The atomic hypothesis: physical consequences
International Nuclear Information System (INIS)
Rivas, Martin
2008-01-01
The hypothesis that matter is made of some ultimate and indivisible objects, together with the restricted relativity principle, establishes a constraint on the kind of variables we are allowed to use for the variational description of elementary particles. We consider that the atomic hypothesis not only states the indivisibility of elementary particles, but also that these ultimate objects, if not annihilated, cannot be modified by any interaction so that all allowed states of an elementary particle are only kinematical modifications of any one of them. Therefore, an elementary particle cannot have excited states. In this way, the kinematical group of spacetime symmetries not only defines the symmetries of the system, but also the variables in terms of which the mathematical description of the elementary particles can be expressed in either the classical or the quantum mechanical description. When considering the interaction of two Dirac particles, the atomic hypothesis restricts the interaction Lagrangian to a kind of minimal coupling interaction
Multiple sclerosis: a geographical hypothesis.
Carlyle, I P
1997-12-01
Multiple sclerosis remains a rare neurological disease of unknown aetiology, with a unique distribution, both geographically and historically. Rare in equatorial regions, it becomes increasingly common in higher latitudes; historically, it was first clinically recognized in the early nineteenth century. A hypothesis, based on geographical reasoning, is here proposed: that the disease is the result of a specific vitamin deficiency. Different individuals suffer the deficiency in separate and often unique ways. Evidence to support the hypothesis exists in cultural considerations, in the global distribution of the disease, and in its historical prevalence.
Discussion of the Porter hypothesis
International Nuclear Information System (INIS)
1999-11-01
In reaction to the long-range vision of RMNO, published in 1996, the Dutch government posed the question whether a far-reaching and progressive modernization policy would lead to competitive advantages for high-quality products on partly new markets. Such a question is connected to the so-called Porter hypothesis: 'By stimulating innovation, strict environmental regulations can actually enhance competitiveness', from which it can be concluded that environment and economy can work together quite well. A literature study has been carried out in order to determine under which conditions that hypothesis is endorsed in the scientific literature and policy documents. Recommendations are given for further studies.
The thrifty phenotype hypothesis revisited
DEFF Research Database (Denmark)
Vaag, A A; Grunnet, L G; Arora, G P
2012-01-01
Twenty years ago, Hales and Barker along with their co-workers published some of their pioneering papers proposing the 'thrifty phenotype hypothesis' in Diabetologia (4;35:595-601 and 3;36:62-67). Their postulate that fetal programming could represent an important player in the origin of type 2...... of the underlying molecular mechanisms. Type 2 diabetes is a multiple-organ disease, and developmental programming, with its idea of organ plasticity, is a plausible hypothesis for a common basis for the widespread organ dysfunctions in type 2 diabetes and the metabolic syndrome. Only two among the 45 known type 2...
Reverse hypothesis machine learning a practitioner's perspective
Kulkarni, Parag
2017-01-01
This book introduces a paradigm of reverse hypothesis machines (RHM), focusing on knowledge innovation and machine learning. Knowledge-acquisition-based learning is constrained by large volumes of data and is time consuming, hence knowledge-innovation-based learning is needed. Since under-learning results in cognitive inabilities and over-learning compromises freedom, there is a need for optimal machine learning. All existing learning techniques rely on mapping input and output and establishing mathematical relationships between them. Though methods change, the paradigm remains the same: the forward hypothesis machine paradigm, which tries to minimize uncertainty. The RHM, on the other hand, makes use of uncertainty for creative learning. The approach uses limited data to help identify new and surprising solutions. It focuses on improving learnability, unlike traditional approaches, which focus on accuracy. The book is useful as a reference book for machine learning researchers and professionals as ...
Whitaker, Katherine E.; van Dokkum, Pieter G.; Brammer, Gabriel; Momcheva, Ivelina G.; Skelton, Rosalind; Franx, Marijn; Kriek, Mariska; Labbé, Ivo; Fumagalli, Mattia; Lundgren, Britt F.; Nelson, Erica J.; Patel, Shannon G.; Rix, Hans-Walter
2013-06-01
Quiescent galaxies at z ~ 2 have been identified in large numbers based on rest-frame colors, but only a small number of these galaxies have been spectroscopically confirmed to show that their rest-frame optical spectra show either strong Balmer or metal absorption lines. Here, we median stack the rest-frame optical spectra for 171 photometrically quiescent galaxies at 1.4 < z < 2.2 from the 3D-HST grism survey. In addition to Hβ (λ4861 Å), we unambiguously identify metal absorption lines in the stacked spectrum, including the G band (λ4304 Å), Mg I (λ5175 Å), and Na I (λ5894 Å). This finding demonstrates that galaxies with relatively old stellar populations already existed when the universe was ~3 Gyr old, and that rest-frame color selection techniques can efficiently select them. We find an average age of 1.3 (+0.1, −0.3) Gyr when fitting a simple stellar population to the entire stack. We confirm our previous result from medium-band photometry that the stellar age varies with the colors of quiescent galaxies: the reddest 80% of galaxies are dominated by metal lines and have a relatively old mean age of 1.6 (+0.5, −0.4) Gyr, whereas the bluest (and brightest) galaxies have strong Balmer lines and a spectroscopic age of 0.9 (+0.2, −0.1) Gyr. Although the spectrum is dominated by an evolved stellar population, we also find [O III] and Hβ emission. Interestingly, this emission is more centrally concentrated than the continuum, with L_[O III] = (1.7 ± 0.3) × 10^40 erg s⁻¹, indicating residual central star formation or nuclear activity.
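A minimal sketch of the median-stacking step (Python, synthetic inputs): shift each spectrum to the rest frame by dividing by 1 + z, resample onto a common grid, and take the median across galaxies. The wavelength ranges and redshifts below are placeholders, not the survey's data.

```python
# Median stacking of redshifted spectra onto a common rest-frame grid.
import numpy as np

def median_stack(waves_obs, fluxes, redshifts, grid):
    """waves_obs, fluxes: per-galaxy arrays; grid: rest-frame wavelength grid."""
    resampled = []
    for w, f, z in zip(waves_obs, fluxes, redshifts):
        rest = w / (1.0 + z)                   # observed -> rest frame
        resampled.append(np.interp(grid, rest, f))
    return np.median(np.vstack(resampled), axis=0)

grid = np.linspace(4000, 6000, 500)            # rest-frame grid (Angstrom)
rng = np.random.default_rng(3)
waves = [np.linspace(9000, 18000, 800)] * 5    # mock grism wavelength coverage
zs = [1.5, 1.7, 1.8, 2.0, 2.1]
fluxes = [1 + 0.05 * rng.standard_normal(800) for _ in zs]
print(median_stack(waves, fluxes, zs, grid)[:5])
```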
Lind, Mads V; Savolainen, Otto I; Ross, Alastair B
2016-08-01
Data quality is critical for epidemiology, and as scientific understanding expands, the range of data available for epidemiological studies and the types of tools used for measurement have also expanded. It is essential for the epidemiologist to have a grasp of the issues involved with different measurement tools. One tool that is increasingly being used for measuring biomarkers in epidemiological cohorts is mass spectrometry (MS), because of the high specificity and sensitivity of MS-based methods and the expanding range of biomarkers that can be measured. Further, the ability of MS to quantify many biomarkers simultaneously is an advantage compared to single-biomarker methods. However, as with all methods used to measure biomarkers, there are a number of pitfalls to consider which may have an impact on results when used in epidemiology. In this review we discuss the use of MS for biomarker analyses, focusing on metabolites and their application and potential issues related to large-scale epidemiology studies, the use of MS "omics" approaches for biomarker discovery, and how MS-based results can be used for increasing the biological knowledge gained from epidemiological studies. Better understanding of the possibilities and possible problems related to MS-based measurements will help the epidemiologist in their discussions with analytical chemists and lead to the use of the most appropriate statistical tools for these data.
Stephan, Carl N
2014-03-01
By pooling independent study means (x¯), the T-Tables use the central limit theorem and law of large numbers to average out study-specific sampling bias and instrument errors and, in turn, triangulate upon human population means (μ). Since their first publication in 2008, new data from >2660 adults have been collected (c.30% of the original sample) making a review of the T-Table's robustness timely. Updated grand means show that the new data have negligible impact on the previously published statistics: maximum change = 1.7 mm at gonion; and ≤1 mm at 93% of all landmarks measured. This confirms the utility of the 2008 T-Table as a proxy to soft tissue depth population means and, together with updated sample sizes (8851 individuals at pogonion), earmarks the 2013 T-Table as the premier mean facial soft tissue depth standard for craniofacial identification casework. The utility of the T-Table, in comparison with shorths and 75-shormaxes, is also discussed. © 2013 American Academy of Forensic Sciences.
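The pooling step itself is simple; a sketch with invented study means and sample sizes (not the T-Table's actual entries):

```python
# Grand mean of independent study means weighted by sample size, the
# pooling idea behind the T-Tables. All values below are invented.
import numpy as np

def pooled_mean(means, ns):
    means, ns = np.asarray(means, float), np.asarray(ns, float)
    return float((means * ns).sum() / ns.sum())

# Hypothetical soft tissue depths (mm) at one landmark from four studies:
print(pooled_mean([5.2, 5.8, 5.5, 6.1], [120, 80, 300, 45]))  # ~5.5 mm
```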
Questioning the social intelligence hypothesis.
Holekamp, Kay E
2007-02-01
The social intelligence hypothesis posits that complex cognition and enlarged "executive brains" evolved in response to challenges that are associated with social complexity. This hypothesis has been well supported, but some recent data are inconsistent with its predictions. It is becoming increasingly clear that multiple selective agents, and non-selective constraints, must have acted to shape cognitive abilities in humans and other animals. The task now is to develop a larger theoretical framework that takes into account both inter-specific differences and similarities in cognition. This new framework should facilitate consideration of how selection pressures that are associated with sociality interact with those that are imposed by non-social forms of environmental complexity, and how both types of functional demands interact with phylogenetic and developmental constraints.
Interstellar colonization and the zoo hypothesis
International Nuclear Information System (INIS)
Jones, E.M.
1978-01-01
Michael Hart and others have pointed out that current estimates of the number of technological civilizations that have arisen in the Galaxy since its formation are in fundamental conflict with the expectation that such a civilization could colonize and utilize the entire Galaxy in 10 to 20 million years. This dilemma can be called Hart's paradox. Resolution of the paradox requires that one or more of the following are true: we are the Galaxy's first technical civilization; interstellar travel is immensely impractical or simply impossible; technological civilizations are very short-lived; or we inhabit a wilderness preserve. The latter is the zoo hypothesis
Directory of Open Access Journals (Sweden)
Prokop Pavol
2016-06-01
Rape is a recurrent adaptive problem of female humans and females of a number of non-human animals. Rape has various physiological and reproductive costs to the victim. The costs of rape are furthermore exaggerated by social rejection and blaming of the victim, particularly by men. The negative perception of raped women by men has received little attention from an evolutionary perspective. Across two independent studies, we investigated whether the risk of sexually transmitted diseases (the STD hypothesis; Hypothesis 1) or paternity uncertainty (the cuckoldry hypothesis; Hypothesis 2) influences the negative perception of raped women by men. Raped women received lower attractiveness scores than non-raped women, especially in long-term mate attractiveness. The perceived attractiveness of raped women was not influenced by the presence of experimentally manipulated STD cues on the faces of putative rapists. Women raped by three men received lower attractiveness scores than women raped by one man. These results provide stronger support for the cuckoldry hypothesis (Hypothesis 2) than for the STD hypothesis (Hypothesis 1). Single men perceived raped women as more attractive than men in a committed relationship did (Hypothesis 3), suggesting that mating opportunities mediate men's perception of victims of rape. Overall, our results suggest that the risk of cuckoldry, rather than the fear of disease transmission, underlies the negative perception of victims of rape by men.
Directory of Open Access Journals (Sweden)
Nori Matsunami
Structural variation is thought to play a major etiological role in the development of autism spectrum disorders (ASDs), and numerous studies documenting the relevance of copy number variants (CNVs) in ASD have been published since 2006. To determine if large ASD families harbor high-impact CNVs that may have broader impact in the general ASD population, we used the Affymetrix genome-wide human SNP array 6.0 to identify 153 putative autism-specific CNVs present in 55 individuals with ASD from 9 multiplex ASD pedigrees. To evaluate the actual prevalence of these CNVs as well as 185 CNVs reportedly associated with ASD from published studies, many of which are insufficiently powered, we designed a custom Illumina array and used it to interrogate these CNVs in 3,000 ASD cases and 6,000 controls. Additional single nucleotide variant (SNV) probes on the array identified 25 CNVs that we did not detect in our family studies at the standard SNP array resolution. After molecular validation, our results demonstrated that 15 CNVs identified in high-risk ASD families also were found in two or more ASD cases with odds ratios greater than 2.0, strengthening their support as ASD risk variants. In addition, of the 25 CNVs identified using SNV probes on our custom array, 9 also had odds ratios greater than 2.0, suggesting that these CNVs also are ASD risk variants. Eighteen of the validated CNVs have not been reported previously in individuals with ASD and three have only been observed once. Finally, we confirmed the association of 31 of 185 published ASD-associated CNVs in our dataset with odds ratios greater than 2.0, suggesting they may be of clinical relevance in the evaluation of children with ASDs. Taken together, these data provide strong support for the existence and application of high-impact CNVs in the clinical genetic evaluation of children with ASD.
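A sketch of the case-control comparison (Python): an odds ratio for one CNV carried by a handful of the 3,000 cases and 6,000 controls. The carrier counts are invented, and the Haldane-Anscombe 0.5 correction is our choice to guard against zero cells, not necessarily the paper's.

```python
# Odds ratio for a rare CNV in a case-control design, with a 0.5
# continuity correction applied to every cell.
def odds_ratio(case_carriers, n_cases, control_carriers, n_controls):
    a, b = case_carriers + 0.5, n_cases - case_carriers + 0.5
    c, d = control_carriers + 0.5, n_controls - control_carriers + 0.5
    return (a / b) / (c / d)

print(odds_ratio(9, 3000, 4, 6000))  # OR > 2 would support an ASD risk variant
```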
Testing the stress shadow hypothesis
Felzer, Karen R.; Brodsky, Emily E.
2005-05-01
A fundamental question in earthquake physics is whether aftershocks are predominantly triggered by static stress changes (permanent stress changes associated with fault displacement) or dynamic stresses (temporary stress changes associated with earthquake shaking). Both classes of models provide plausible explanations for earthquake triggering of aftershocks, but only the static stress model predicts stress shadows, or regions in which activity is decreased by a nearby earthquake. To test for whether a main shock has produced a stress shadow, we calculate time ratios, defined as the ratio of the time between the main shock and the first earthquake to follow it and the time between the last earthquake to precede the main shock and the first earthquake to follow it. A single value of the time ratio is calculated for each 10 × 10 km bin within 1.5 fault lengths of the main shock epicenter. Large values of the time ratio indicate a long wait for the first earthquake to follow the main shock and thus a potential stress shadow, whereas small values indicate the presence of aftershocks. Simulations indicate that the time ratio test should have sufficient sensitivity to detect stress shadows if they are produced in accordance with the rate and state friction model. We evaluate the 1989 MW 7.0 Loma Prieta, 1992 MW 7.3 Landers, 1994 MW 6.7 Northridge, and 1999 MW 7.1 Hector Mine main shocks. For each main shock, there is a pronounced concentration of small time ratios, indicating the presence of aftershocks, but the number of large time ratios is less than at other times in the catalog. This suggests that stress shadows are not present. By comparing our results to simulations we estimate that we can be at least 98% confident that the Loma Prieta and Landers main shocks did not produce stress shadows and 91% and 84% confident that stress shadows were not generated by the Hector Mine and Northridge main shocks, respectively. We also investigate the long hypothesized existence
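The time-ratio statistic defined above is easy to state in code; the event times below are arbitrary illustrative values, not catalog data.

```python
# The time-ratio statistic for one 10 x 10 km bin. Times are in days
# relative to the main shock.
def time_ratio(t_last_before: float, t_main: float, t_first_after: float) -> float:
    """(time from main shock to first later event) divided by (time from
    last prior event to first later event). Small values indicate prompt
    aftershocks; values near 1 indicate a potential stress shadow."""
    return (t_first_after - t_main) / (t_first_after - t_last_before)

print(time_ratio(-40.0, 0.0, 2.0))   # ~0.05: prompt aftershock activity
print(time_ratio(-40.0, 0.0, 35.0))  # ~0.47: long post-main-shock quiescence
```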
A Molecular–Structure Hypothesis
Directory of Open Access Journals (Sweden)
Jan C. A. Boeyens
2010-11-01
Full Text Available The self-similar symmetry that occurs between atomic nuclei, biological growth structures, the solar system, globular clusters and spiral galaxies suggests that a similar pattern should characterize atomic and molecular structures. This possibility is explored in terms of the current molecular structure-hypothesis and its extension into four-dimensional space-time. It is concluded that a quantum molecule only has structure in four dimensions and that classical (Newtonian structure, which occurs in three dimensions, cannot be simulated by quantum-chemical computation.
Antiaging therapy: a prospective hypothesis
Directory of Open Access Journals (Sweden)
Shahidi Bonjar MR
2015-01-01
This hypothesis proposes a new prospective approach to slow the aging process in older humans. The hypothesis could lead to developing new treatments for age-related illnesses and help humans to live longer. This hypothesis has no previous documentation in scientific media and has no protocol. Scientists have presented evidence that systemic aging is influenced by peculiar molecules in the blood. Researchers at Albert Einstein College of Medicine, New York, and Harvard University in Cambridge discovered an elevated titer of aging-related molecules (ARMs) in blood, which trigger a cascade of aging processes in mice; they also indicated that the process can be reduced or even reversed. By inhibiting the production of ARMs, they could reduce age-related cognitive and physical declines. The present hypothesis offers a new approach to translate these findings into medical treatment: extracorporeal adjustment of ARMs would lead to slower rates of aging. A prospective "antiaging blood filtration column" (AABFC) is a nanotechnological device that would fulfill the central role in this approach. An AABFC would set a near-youth homeostatic titer of ARMs in the blood. In this regard, the AABFC immobilizes ARMs from the blood while blood passes through the column. The AABFC harbors antibodies against ARMs. ARM antibodies would be conjugated irreversibly to ARMs on contact surfaces of the reaction platforms inside the AABFC until near-youth homeostasis is attained. The treatment is performed with the aid of a blood-circulating pump. Similar to a renal dialysis machine, blood would circulate from the body to the AABFC and from there back to the body in a closed circuit until ARMs were sufficiently depleted from the blood. The
International Nuclear Information System (INIS)
Hey, J D
2014-01-01
As a sequel to an earlier study (Hey 2009 J. Phys. B: At. Mol. Opt. Phys. 42 125701), we consider further the application of the line strength formula derived by Watson (2006 J. Phys. B: At. Mol. Opt. Phys. 39 L291) to transitions arising from states of very high principal quantum number in hydrogenic atoms and ions (Rydberg–Rydberg transitions, n > 1000). It is shown how apparent difficulties associated with the use of recurrence relations, derived (Hey 2006 J. Phys. B: At. Mol. Opt. Phys. 39 2641) by the ladder operator technique of Infeld and Hull (1951 Rev. Mod. Phys. 23 21), may be eliminated by a very simple numerical device, whereby this method may readily be applied up to n ≈ 10 000. Beyond this range, programming of the method may entail greater care and complexity. The use of the numerically efficient McLean–Watson formula for such cases is again illustrated by the determination of radiative lifetimes and comparison of present results with those from an asymptotic formula. The question of the influence on the results of the omission or inclusion of fine structure is considered by comparison with calculations based on the standard Condon–Shortley line strength formula. Interest in this work on the radial matrix elements for large n and n′ is related to measurements of radio recombination lines from tenuous space plasmas, e.g. Stepkin et al (2007 Mon. Not. R. Astron. Soc. 374 852), Bell et al (2011 Astrophys. Space Sci. 333 377), to the calculation of electron impact broadening parameters for such spectra (Watson 2006 J. Phys. B: At. Mol. Opt. Phys. 39 1889) and comparison with other theoretical methods (Peach 2014 Adv. Space Res. in press), to the modelling of physical processes in H II regions (Roshi et al 2012 Astrophys. J. 749 49), and to the evaluation of bound–bound transitions from states of high n during primordial cosmological recombination (Grin and Hirata 2010 Phys. Rev. D 81 083005, Ali-Haïmoud and Hirata 2010 Phys. Rev. D 82 063521
The Fractal Market Hypothesis: Applications to Financial Forecasting
Blackledge, Jonathan
2010-01-01
Most financial modelling systems rely on an underlying hypothesis known as the Efficient Market Hypothesis (EMH), including the famous Black-Scholes formula for pricing an option. However, the EMH has a fundamental flaw: it is based on the assumption that economic processes are normally distributed, and it has long been known that this is not the case. This fundamental assumption leads to a number of shortcomings associated with using the EMH to analyse financial data, which includes failure to ...
Is PMI the Hypothesis or the Null Hypothesis?
Tarone, Aaron M; Sanford, Michelle R
2017-09-01
Over the past several decades, there have been several strident exchanges regarding whether forensic entomologists estimate the postmortem interval (PMI), minimum PMI, or something else. During that time, there has been a proliferation of terminology reflecting this concern regarding "what we do." This has been a frustrating conversation for some in the community because much of this debate appears to be centered on which assumptions are acknowledged directly and which are embedded within a list of assumptions (or ignored altogether) in the literature and in case reports. An additional component of the conversation centers on a concern that moving away from the use of certain terminology like PMI acknowledges limitations and problems that would make the application of entomology appear less useful in court: a problem for lawyers, but one that should not be problematic for scientists in the forensic entomology community, as uncertainty is part of science that should and can be presented effectively in the courtroom (e.g., population genetic concepts in forensics). Unfortunately, a consequence of the way this conversation is conducted is that even as all involved in the debate acknowledge the concerns of their colleagues, parties continue to talk past one another advocating their preferred terminology. Progress will not be made until the community recognizes that all of the terms under consideration take the form of null hypothesis statements and that thinking about "what we do" as a null hypothesis has useful legal and scientific ramifications that transcend arguments over the usage of preferred terminology. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
J. Vogt (Julia); K. Bengesser (Kathrin); K.B.M. Claes (Kathleen B.M.); K. Wimmer (Katharina); V.-F. Mautner (Victor-Felix); R. van Minkelen (Rick); E. Legius (Eric); H. Brems (Hilde); M. Upadhyaya (Meena); J. Högel (Josef); C. Lazaro (Conxi); T. Rosenbaum (Thorsten); S. Bammert (Simone); L. Messiaen (Ludwine); D.N. Cooper (David); H. Kehrer-Sawatzki (Hildegard)
2014-01-01
Background: Genomic disorders are caused by copy number changes that may exhibit recurrent breakpoints processed by nonallelic homologous recombination. However, region-specific disease-associated copy number changes have also been observed which exhibit non-recurrent breakpoints. The
The "Discouraged-Business-Major" Hypothesis: Policy Implications
Marangos, John
2012-01-01
This paper uses a relatively large dataset of the stated academic major preferences of economics majors at a relatively large, not highly selective, public university in the USA to identify the "discouraged-business-majors" (DBMs). The DBM hypothesis addresses the phenomenon where students who are screened out of the business curriculum often…
Weiss, Stephan; Wei, Ping; Ahlers, Guenter
2015-11-01
Turbulent thermal convection under rotation shows a remarkable variety of different flow states. The Nusselt number (Nu) at slow rotation rates (expressed as the dimensionless inverse Rossby number 1/Ro), for example, is not a monotonic function of 1/Ro. Different 1/Ro ranges can be observed with different slopes ∂Nu/∂(1/Ro). Some of these ranges are connected by sharp transitions where ∂Nu/∂(1/Ro) changes discontinuously. We investigate different regimes in cylindrical samples of aspect ratio Γ = 1 by measuring temperatures at the sidewall of the sample for various Prandtl numbers in the range 3 ... Supported by the Deutsche Forschungsgemeinschaft.
Sanacora, Gerard; Treccani, Giulia; Popoli, Maurizio
2012-01-01
Half a century after the first formulation of the monoamine hypothesis, compelling evidence implies that long-term changes in an array of brain areas and circuits mediating complex cognitive-emotional behaviors represent the biological underpinnings of mood/anxiety disorders. A large number of clinical studies suggest that pathophysiology is associated with dysfunction of the predominant glutamatergic system, malfunction in the mechanisms regulating clearance and metabolism of glutamate, and cytoarchitectural/morphological maladaptive changes in a number of brain areas mediating cognitive-emotional behaviors. Concurrently, a wealth of data from animal models have shown that different types of environmental stress enhance glutamate release/transmission in limbic/cortical areas and exert powerful structural effects, inducing dendritic remodeling, reduction of synapses and possibly volumetric reductions resembling those observed in depressed patients. Because a vast majority of neurons and synapses in these areas and circuits use glutamate as neurotransmitter, it would be limiting to maintain that glutamate is in some way 'involved' in mood/anxiety disorders; rather it should be recognized that the glutamatergic system is a primary mediator of psychiatric pathology and, potentially, also a final common pathway for the therapeutic action of antidepressant agents. A paradigm shift from a monoamine hypothesis of depression to a neuroplasticity hypothesis focused on glutamate may represent a substantial advancement in the working hypothesis that drives research for new drugs and therapies. Importantly, despite the availability of multiple classes of drugs with monoamine-based mechanisms of action, there remains a large percentage of patients who fail to achieve a sustained remission of depressive symptoms. The unmet need for improved pharmacotherapies for treatment-resistant depression means there is a large space for the development of new compounds with novel mechanisms
Urbanization and the more-individuals hypothesis.
Chiari, Claudia; Dinetti, Marco; Licciardello, Cinzia; Licitra, Gaetano; Pautasso, Marco
2010-03-01
1. Urbanization is a landscape process affecting biodiversity world-wide. Despite many urban-rural studies of bird assemblages, it is still unclear whether more species-rich communities have more individuals, regardless of the level of urbanization. The more-individuals hypothesis assumes that species-rich communities have larger populations, thus reducing the chance of local extinctions. 2. Using newly collated avian distribution data for 1 km(2) grid cells across Florence, Italy, we show a significantly positive relationship between species richness and assemblage abundance for the whole urban area. This richness-abundance relationship persists for the 1 km(2) grid cells with less than 50% of urbanized territory, as well as for the remaining grid cells, with no significant difference in the slope of the relationship. These results support the more-individuals hypothesis as an explanation of patterns in species richness, also in human modified and fragmented habitats. 3. However, the intercept of the species richness-abundance relationship is significantly lower for highly urbanized grid cells. Our study confirms that urban communities have lower species richness but counters the common notion that assemblages in densely urbanized ecosystems have more individuals. In Florence, highly inhabited areas show fewer species and lower assemblage abundance. 4. Urbanized ecosystems are an ongoing large-scale natural experiment which can be used to test ecological theories empirically.
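A sketch of the comparison described in points 2-3 (Python, entirely synthetic data): fit richness against abundance separately for less and more urbanized cells and compare slopes and intercepts. Per the abstract, we mimic similar slopes but a lower intercept for the highly urbanized cells.

```python
# Synthetic richness-abundance fits for two urbanization classes of
# 1 km^2 grid cells. All numbers are invented placeholders.
import numpy as np

rng = np.random.default_rng(7)

def synth_cells(intercept: float, n: int = 80):
    abundance = rng.uniform(50, 500, n)
    richness = intercept + 0.04 * abundance + rng.normal(0, 2, n)
    return abundance, richness

for label, true_intercept in [("<50% urbanized", 5.0), (">=50% urbanized", 2.0)]:
    x, y = synth_cells(true_intercept)
    slope, intercept = np.polyfit(x, y, 1)
    print(f"{label}: slope {slope:.3f}, intercept {intercept:.2f}")
```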
Gaussian Hypothesis Testing and Quantum Illumination.
Wilde, Mark M; Tomamichel, Marco; Lloyd, Seth; Berta, Mario
2017-09-22
Quantum hypothesis testing is one of the most basic tasks in quantum information theory and has fundamental links with quantum communication and estimation theory. In this paper, we establish a formula that characterizes the decay rate of the minimal type-II error probability in a quantum hypothesis test of two Gaussian states given a fixed constraint on the type-I error probability. This formula is a direct function of the mean vectors and covariance matrices of the quantum Gaussian states in question. We give an application to quantum illumination, which is the task of determining whether there is a low-reflectivity object embedded in a target region with a bright thermal-noise bath. For the asymmetric-error setting, we find that a quantum illumination transmitter can achieve an error probability exponent stronger than a coherent-state transmitter of the same mean photon number, and furthermore, that it requires far fewer trials to do so. This occurs when the background thermal noise is either low or bright, which means that a quantum advantage is even easier to witness than in the symmetric-error setting because it occurs for a larger range of parameters. Going forward from here, we expect our formula to have applications in settings well beyond those considered in this paper, especially to quantum communication tasks involving quantum Gaussian channels.
International Nuclear Information System (INIS)
Thomas, R.E.
1986-04-01
The purpose of this report is to apply methods of statistical hypothesis testing to demonstrate the performance of containers of radioactive waste. The approach involves modeling the failure times of waste containers using Weibull distributions, making strong assumptions about the parameters. A specific objective is to apply methods of statistical hypothesis testing to determine the number of container tests that must be performed in order to control the probability of arriving at the wrong conclusions. An algorithm to determine the required number of containers to be tested with the acceptable number of failures is derived as a function of the distribution parameters, stated probabilities, and the desired waste containment life. Using a set of reference values for the input parameters, sample sizes of containers to be tested are calculated for demonstration purposes. These sample sizes are found to be excessively large, indicating that this hypothesis-testing framework does not provide a feasible approach for demonstrating satisfactory performance of waste packages for exceptionally long time periods
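The spirit of the calculation can be sketched as a binomial acceptance-test plan under a Weibull failure model: test n containers for a period t, accept if at most c fail, and choose n so that a substandard population is rarely accepted. The parameter values below are illustrative assumptions, not the report's reference values, though they reproduce the qualitative finding that the required n becomes excessively large.

    from math import comb, exp

    def fail_prob(t, beta, eta):
        """Weibull probability that a container fails by time t."""
        return 1.0 - exp(-((t / eta) ** beta))

    def accept_prob(n, c, p):
        """Binomial probability of at most c failures among n containers."""
        return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(c + 1))

    def required_n(c, p_bad, consumer_risk):
        """Smallest n such that a 'bad' population is accepted with prob <= risk."""
        n = c + 1
        while accept_prob(n, c, p_bad) > consumer_risk:
            n += 1
        return n

    # Illustrative: a 5-year test, 'bad' population with eta = 600 years, beta = 1.5
    p_bad = fail_prob(5.0, beta=1.5, eta=600.0)
    print(required_n(c=0, p_bad=p_bad, consumer_risk=0.05))   # thousands of containers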
International Nuclear Information System (INIS)
Puntambekar, A.M.; Karmarkar, M.G.
2003-01-01
Superconducting (Sc) spool correctors of different types, namely Sextupole (MCS), Decapole (MCD) and Octupole (MCO), are incorporated in each of the main dipoles of the Large Hadron Collider (LHC). In all, 2464 MCS and 1232 MCDO magnets are required to equip all 1232 dipoles of the LHC. The coils, wound from thin rectangular-section Sc wires, are the heart of the magnet assembly, and its performance in field quality and cold quench training largely depends on the precise and robust construction of these coils. Under the DAE-CERN collaboration, CAT was entrusted with the responsibility of making these magnets for the LHC. Starting with the development of manual fixtures and prototyping using soldering, more advanced special Automatic Coil Winding and Ultrasonic Welding (USW) systems for the production of a large number of coils and magnets were built at CAT. The paper briefly describes the various developments in this area. (author)
Energy Technology Data Exchange (ETDEWEB)
Cooperstein, G; Mosher, D; Stephanakis, S J; Weber, B V; Young, F C [Naval Research Laboratory, Washington, DC (United States); Swanekamp, S B [JAYCOR, Vienna, VA (United States)
1997-12-31
Backscattered electrons from anodes with high-atomic-number substrates cause early-time anode-plasma formation from the surface layer, leading to faster, more intense electron beam pinching and lower diode impedance. A simple derivation of the Child-Langmuir current from a thin hollow cathode shows the same dependence on the diode aspect ratio as the critical current. Using this fact, it is shown that the diode voltage and current follow relativistic Child-Langmuir theory until the anode plasma is formed, and then follow the critical current after the beam pinches. With thin hollow cathodes, electron beam pinching can be suppressed at low voltages (< 800 kV) even for high currents and high-atomic-number anodes. Electron beam pinching can also be suppressed at high voltages for low-atomic-number anodes as long as the electron current densities remain below the plasma turn-on threshold. (author). 8 figs., 2 refs.
DEFF Research Database (Denmark)
Hansen, S K; Gjesing, A P; Rasmussen, S K
2004-01-01
The class III allele of the variable-number-of-tandem-repeats polymorphism located 5' of the insulin gene (INS-VNTR) has been associated with Type 2 diabetes and altered birthweight. It has also been suggested, although inconsistently, that the class III allele plays a role in glucose-induced ins...
Muon capture on Ni isotopes, projected QRPA, and CVC hypothesis
International Nuclear Information System (INIS)
Samana, Arturo R.; Sande, Danilo; Krmpotic, Francisco; Universidad Nacional de La Plata
2011-01-01
In recent years we have developed a novel formalism for the weak interaction processes, obtaining new expressions for the transition rates, which greatly facilitate numerical calculations, for both neutrino-nucleus reactions and muon capture, allowing us to use very large configuration spaces and to evaluate the quasielastic ¹²C(ν, μ⁻)¹²N cross section at energies of the order of 1 GeV, which are measured in the MiniBooNE experiment. Our formulation includes for the first time the consequences of the explicit violation of the conserved vector current (CVC) hypothesis by the Coulomb field. We have also shown that the particle number projection procedure within the quasiparticle random phase approximation (QRPA) is important in describing the exclusive (ground-state) properties of ¹²B and ¹²N as well as the muon capture rate and the neutrino-nucleus cross section in ⁵⁶Fe. In this work, we analyze in a quantitative way the consequences of the CVC violation on the muon capture rates in Ni isotopes (for which experimental data are available) using both the standard QRPA and the projected QRPA (PQRPA). The latter is the only RPA model that treats the Pauli Principle correctly, and we demonstrate that the number projection procedure is important not only for light nuclei but also for the medium-heavy ones studied here. (author)
Walkmeyer, John
Considerations relating to the design of organizational structures for development and control of large scale educational telecommunications systems using satellites are explored. The first part of the document deals with four issues of system-wide concern. The first is user accessibility to the system, including proximity to entry points, ability…
DEFF Research Database (Denmark)
El-Galaly, Tarec Christoffer; Villa, Diego; Michaelsen, Thomas Yssing
2017-01-01
Purpose Development of secondary central nervous system involvement (SCNS) in patients with diffuse large B-cell lymphoma is associated with poor outcomes. The CNS International Prognostic Index (CNS-IPI) has been proposed for identifying patients at greatest risk, but the optimal model is unknown...
Kiers, Henk A.L.; Marchetti, G.M.
1994-01-01
Recently, a number of methods have been proposed for the exploratory analysis of mixtures of qualitative and quantitative variables. In these methods for each variable an object by object similarity matrix is constructed, and these are consequently analyzed by means of three-way methods like
Maeda, Naohiro; Narukawa, Masataka; Ishimaru, Yoshiro; Yamamoto, Kurumi; Misaka, Takumi; Abe, Keiko
2017-05-01
The connections between taste receptor cells (TRCs) and innervating gustatory neurons are formed in a mutually dependent manner during development. To investigate whether a change in the ratio of cell types that compose taste buds influences the number of innervating gustatory neurons, we analyzed the proportion of gustatory neurons that transmit sour taste signals in adult Skn-1a-/- mice in which the number of sour TRCs is greatly increased. We generated polycystic kidney disease 1 like 3-wheat germ agglutinin (pkd1l3-WGA)/Skn-1a+/+ and pkd1l3-WGA/Skn-1a-/- mice by crossing Skn-1a-/- mice and pkd1l3-WGA transgenic mice, in which neural pathways of sour taste signals can be visualized. The number of WGA-positive cells in the circumvallate papillae is 3-fold higher in taste buds of pkd1l3-WGA/Skn-1a-/- mice relative to pkd1l3-WGA/Skn-1a+/+ mice. Intriguingly, the ratio of WGA-positive neurons to P2X2-expressing gustatory neurons in nodose/petrosal ganglia was similar between pkd1l3-WGA/Skn-1a+/+ and pkd1l3-WGA/Skn-1a-/- mice. In conclusion, an alteration in the ratio of cell types that compose taste buds does not influence the number of gustatory neurons that transmit sour taste signals. Copyright © 2017. Published by Elsevier B.V.
Tissue misrepair hypothesis for radiation carcinogenesis
International Nuclear Information System (INIS)
Kondo, Sohei
1991-01-01
Dose-response curves for chronic leukemia in A-bomb survivors and liver tumors in patients given Thorotrast (colloidal thorium dioxide) show large threshold effects. The existence of these threshold effects can be explained by the following hypothesis. A high dose of radiation causes a persistent wound in a cell-renewable tissue. Disorder of the injured cell society partly frees the component cells from territorial restraints on their proliferation, enabling them to continue development of their cellular functions toward advanced autonomy. This progression might be achieved by continued epigenetic and genetic changes as a result of occasional errors in the otherwise concerted healing action of various endogenous factors recruited for tissue repair. Carcinogenesis is not simply a single-cell problem but a cell-society problem. Therefore, it is not warranted to estimate risk at low doses by linear extrapolation from cancer data at high doses without knowledge of the mechanism of radiation carcinogenesis. (author) 57 refs
Extra dimensions hypothesis in high energy physics
Directory of Open Access Journals (Sweden)
Volobuev Igor
2017-01-01
Full Text Available We discuss the history of the extra dimensions hypothesis and the physics and phenomenology of models with large extra dimensions with an emphasis on the Randall-Sundrum (RS) model with two branes. We argue that the Standard Model extension based on the RS model with two branes is phenomenologically acceptable only if the inter-brane distance is stabilized. Within such an extension of the Standard Model, we study the influence of the infinite Kaluza-Klein (KK) towers of the bulk fields on collider processes. In particular, we discuss the modification of the scalar sector of the theory, the Higgs-radion mixing due to the coupling of the Higgs boson to the radion and its KK tower, and the experimental restrictions on the mass of the radion-dominated states.
Alternatives to the linear risk hypothesis
International Nuclear Information System (INIS)
Craig, A.G.
1976-01-01
A theoretical argument is presented which suggests that in using the linear hypothesis for all values of LET the low dose risk is overestimated for low LET but that it is underestimated for very high LET. The argument is based upon the idea that cell lesions which do not lead to cell death may in fact lead to a malignant cell. Expressions for the Surviving Fraction and the Cancer Risk based on this argument are given. An advantage of this very general approach is that it expresses cell survival and cancer risk entirely in terms of the cell lesions and avoids the rather contentious argument as to how the average number of lesions should be related to the dose. (U.K.)
Multiple model cardinalized probability hypothesis density filter
Georgescu, Ramona; Willett, Peter
2011-09-01
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
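As a minimal sketch of what propagating the cardinality distribution means, the toy code below performs a CPHD-style prediction of the target-number pmf, assuming independent target survival and an independent birth process; it is an illustration of the single-model step, not the authors' MMCPHD implementation.

    import numpy as np
    from math import comb

    def predict_cardinality(p_card, p_survive, p_birth):
        """Thin the current cardinality pmf by per-target survival, then
        convolve with the birth cardinality pmf."""
        thinned = np.zeros(len(p_card))
        for l, pl in enumerate(p_card):          # l existing targets...
            for j in range(l + 1):               # ...of which j survive
                thinned[j] += pl * comb(l, j) * p_survive**j * (1 - p_survive) ** (l - j)
        return np.convolve(thinned, p_birth)

    p_card = np.array([0.1, 0.6, 0.3])   # P(0), P(1), P(2) targets now
    p_birth = np.array([0.9, 0.1])       # at most one birth per scan
    print(predict_cardinality(p_card, 0.95, p_birth))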
The venom optimization hypothesis revisited.
Morgenstern, David; King, Glenn F
2013-03-01
Animal venoms are complex chemical mixtures that typically contain hundreds of proteins and non-proteinaceous compounds, resulting in a potent weapon for prey immobilization and predator deterrence. However, because venoms are protein-rich, they come with a high metabolic price tag. The metabolic cost of venom is sufficiently high to result in secondary loss of venom whenever its use becomes non-essential to survival of the animal. The high metabolic cost of venom leads to the prediction that venomous animals may have evolved strategies for minimizing venom expenditure. Indeed, various behaviors have been identified that appear consistent with frugality of venom use. This has led to formulation of the "venom optimization hypothesis" (Wigger et al. (2002) Toxicon 40, 749-752), also known as "venom metering", which postulates that venom is metabolically expensive and therefore used frugally through behavioral control. Here, we review the available data concerning economy of venom use by animals with either ancient or more recently evolved venom systems. We conclude that the convergent nature of the evidence in multiple taxa strongly suggests the existence of evolutionary pressures favoring frugal use of venom. However, there remains an unresolved dichotomy between this economy of venom use and the lavish biochemical complexity of venom, which includes a high degree of functional redundancy. We discuss the evidence for biochemical optimization of venom as a means of resolving this conundrum. Copyright © 2012 Elsevier Ltd. All rights reserved.
Alien abduction: a medical hypothesis.
Forrest, David V
2008-01-01
In response to a new psychological study of persons who believe they have been abducted by space aliens that found that sleep paralysis, a history of being hypnotized, and preoccupation with the paranormal and extraterrestrial were predisposing experiences, I noted that many of the frequently reported particulars of the abduction experience bear more than a passing resemblance to medical-surgical procedures and propose that experience with these may also be contributory. There is the altered state of consciousness, uniformly colored figures with prominent eyes, in a high-tech room under a round bright saucerlike object; there is nakedness, pain and a loss of control while the body's boundaries are being probed; and yet the figures are thought benevolent. No medical-surgical history was apparently taken in the above mentioned study, but psychological laboratory work evaluated false memory formation. I discuss problems in assessing intraoperative awareness and ways in which the medical hypothesis could be elaborated and tested. If physicians are causing this syndrome in a percentage of patients, we should know about it; and persons who feel they have been abducted should be encouraged to inform their surgeons and anesthesiologists without challenging their beliefs.
The oxidative hypothesis of senescence
Directory of Open Access Journals (Sweden)
Gilca M
2007-01-01
Full Text Available The oxidative hypothesis of senescence, since its origin in 1956, has garnered significant evidence and growing support among scientists for the notion that free radicals play an important role in ageing, either as "damaging" molecules or as signaling molecules. Age-increasing oxidative injuries induced by free radicals, higher susceptibility to oxidative stress in short-lived organisms, genetic manipulations that alter both oxidative resistance and longevity, and the anti-ageing effect of caloric restriction and intermittent fasting are a few examples of accepted scientific facts that support the oxidative theory of senescence. Though not completely understood due to the complex "network" of redox regulatory systems, the implication of oxidative stress in the ageing process is now well documented. Moreover, it is compatible with other current ageing theories (e.g., those implicating the mitochondrial damage/mitochondrial-lysosomal axis, stress-induced premature senescence, biological "garbage" accumulation, etc.). This review is intended to summarize and critically discuss the redox mechanisms involved during the ageing process: sources of oxidant agents in ageing (mitochondrial: electron transport chain, nitric oxide synthase reaction; non-mitochondrial: Fenton reaction, microsomal cytochrome P450 enzymes, peroxisomal β-oxidation and the respiratory burst of phagocytic cells), antioxidant changes in ageing (enzymatic: superoxide dismutase, glutathione reductase, glutathione peroxidase, catalase; non-enzymatic: glutathione, ascorbate, urate, bilirubin, melatonin, tocopherols, carotenoids, ubiquinol), alteration of oxidative-damage repair mechanisms, and the role of free radicals as signaling molecules in ageing.
DEFF Research Database (Denmark)
Madarasz, Wendy; Manzardo, Ann; Mortensen, Erik Lykke
2012-01-01
Objective: Psychiatric comorbidities are common among psychiatric patients and typically associated with poorer clinical prognoses. Subjects of a large Danish birth cohort were used to study the relation between mortality and co-occurring psychiatric diagnoses. Method: We searched the Danish Central Psychiatric Research Registry for 8109 birth cohort members aged 45 years. Lifetime psychiatric diagnoses (International Classification of Diseases, Revision 10, group F codes, Mental and Behavioural Disorders, and one Z code) for identified subjects were organized into 14 mutually exclusive...
Masuta, Taisuke; Shimizu, Koichiro; Yokoyama, Akihiko
In Japan, from the viewpoints of global warming countermeasures and energy security, the establishment of a smart grid, a power system into which a large amount of generation from renewable energy sources such as wind power and photovoltaics can be integrated, is expected. Measures for power system stability and reliability are necessary because large-scale integration of these renewable energy sources causes problems in power systems, e.g. frequency fluctuations and distribution voltage rise, and the Battery Energy Storage System (BESS) is one effective solution to these problems. Owing to the high cost of the BESS, our research group has studied the application of controllable loads, such as Heat Pump Water Heaters (HPWH) and Electric Vehicles (EV), to power system control in order to reduce the required BESS capacity. This paper proposes a new coordinated Load Frequency Control (LFC) method for the conventional power plants, the BESS, the HPWHs, and the EVs. The performance of the proposed LFC method is evaluated by numerical simulations conducted on a power system model with a large integration of wind power and photovoltaic generation.
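A toy illustration of the coordination idea, under assumed gains and time constants rather than the paper's model: a PI load frequency controller whose output is split between a slow conventional unit and a fast battery-like resource, with renewable output fluctuation modeled as noise.

    import numpy as np

    def simulate_lfc(T=600.0, dt=0.1, H=5.0, D=1.0, seed=0):
        """Toy single-area LFC loop (all quantities per-unit)."""
        rng = np.random.default_rng(seed)
        f = integ = p_gen = 0.0
        Kp, Ki, Tg = 0.5, 0.1, 8.0               # illustrative controller/turbine constants
        devs = []
        for _ in range(int(T / dt)):
            p_ren = 0.02 * rng.normal()          # renewable fluctuation
            u = -(Kp * f + Ki * integ)           # PI LFC signal
            p_gen += dt / Tg * (0.7 * u - p_gen) # slow unit takes 70% of the signal
            p_bess = 0.3 * u                     # battery share acts instantly
            f += dt / (2 * H) * (p_gen + p_bess + p_ren - D * f)
            integ += dt * f
            devs.append(f)
        return np.std(devs)

    print(simulate_lfc())    # std of frequency deviation under this allocation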
A reformulation of the hygiene hypothesis
DEFF Research Database (Denmark)
Hersoug, Lars-Georg
2006-01-01
Epidemiological studies have shown an inverse relationship between allergic respiratory diseases and the number of siblings. It was hypothesized that the lower prevalence of allergic respiratory diseases in large sibships was due to cross-infections between siblings. According to this hygiene...
Lee, M.; Leiter, K.; Eisner, C.; Breuer, A.; Wang, X.
2017-09-01
In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort against most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev filter subspace (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× fewer matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold) where the wanted spectrum is quite narrow compared to the full spectrum, we observe 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.
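A heavily simplified sketch of the blocked idea (Rayleigh-Ritz over a block subspace plus preconditioned corrections for the whole block at once) is given below; it omits the paper's Jacobi-Davidson inner linear solves, locking, and Fermi-Dirac-based accuracy thresholds, and uses an assumed diagonal preconditioner.

    import numpy as np

    def block_davidson(A, k=4, tol=1e-6, max_iter=50):
        """Toy block method for the k smallest eigenpairs of symmetric A."""
        n = A.shape[0]
        diag = np.diag(A)
        V = np.linalg.qr(np.random.default_rng(0).normal(size=(n, k)))[0]
        for _ in range(max_iter):
            theta, s = np.linalg.eigh(V.T @ A @ V)   # Rayleigh-Ritz in the subspace
            X = V @ s[:, :k]                         # current block of Ritz vectors
            R = A @ X - X * theta[:k]                # block of residuals
            if np.linalg.norm(R) < tol:
                break
            T = R / (diag[:, None] - theta[:k][None, :] + 1e-12)  # Jacobi correction
            V = np.linalg.qr(np.hstack([X, T]))[0]   # restart: Ritz block + corrections
        return theta[:k], X

    rng = np.random.default_rng(1)
    A = np.diag(np.arange(1.0, 101.0)) + 0.01 * rng.normal(size=(100, 100))
    A = (A + A.T) / 2
    print(block_davidson(A)[0])    # approximately the 4 smallest eigenvalues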
The Younger Dryas impact hypothesis: A requiem
Pinter, Nicholas; Scott, Andrew C.; Daulton, Tyrone L.; Podoll, Andrew; Koeberl, Christian; Anderson, R. Scott; Ishman, Scott E.
2011-06-01
The Younger Dryas (YD) impact hypothesis is a recent theory that suggests that a cometary or meteoritic body or bodies hit and/or exploded over North America 12,900 years ago, causing the YD climate episode, extinction of Pleistocene megafauna, demise of the Clovis archeological culture, and a range of other effects. Since gaining widespread attention in 2007, substantial research has focused on testing the 12 main signatures presented as evidence of a catastrophic extraterrestrial event 12,900 years ago. Here we present a review of the impact hypothesis, including its evolution and current variants, and of efforts to test and corroborate the hypothesis. The physical evidence interpreted as signatures of an impact event can be separated into two groups. The first group consists of evidence that has been largely rejected by the scientific community and is no longer in widespread discussion, including: particle tracks in archeological chert; magnetic nodules in Pleistocene bones; impact origin of the Carolina Bays; and elevated concentrations of radioactivity, iridium, and fullerenes enriched in 3He. The second group consists of evidence that has been active in recent research and discussions: carbon spheres and elongates, magnetic grains and magnetic spherules, byproducts of catastrophic wildfire, and nanodiamonds. Over time, however, these signatures have also seen contrary evidence rather than support. Recent studies have shown that carbon spheres and elongates do not represent extraterrestrial carbon nor impact-induced megafires, but are indistinguishable from fungal sclerotia and arthropod fecal material that are a small but common component of many terrestrial deposits. Magnetic grains and spherules are heterogeneously distributed in sediments, but reported measurements of unique peaks in concentrations at the YD onset have yet to be reproduced. The magnetic grains are certainly just iron-rich detrital grains, whereas reported YD magnetic spherules are
International Nuclear Information System (INIS)
Le Quere, P.; Weisman, C.; Paillere, H.; Vierendeels, J.; Dick, E.; Becker, R.; Braack, M.; Locke, J.
2005-01-01
Heat transfer by natural convection and conduction in enclosures occurs in numerous practical situations including the cooling of nuclear reactors. For large temperature differences, the flow becomes compressible, with a strong coupling between the continuity, momentum and energy equations through the equation of state, and its properties (viscosity, heat conductivity) also vary with the temperature, making the Boussinesq flow approximation inappropriate and inaccurate. There are very few reference solutions in the literature on non-Boussinesq natural convection flows. We propose here a test case problem which extends the well-known De Vahl Davis differentially heated square cavity problem to the case of large temperature differences for which the Boussinesq approximation is no longer valid. The paper is split into two parts: in this first part, we propose as yet unpublished reference solutions for cases characterized by a non-dimensional temperature difference of 0.6, Ra = 10^6 (constant-property and variable-property cases) and Ra = 10^7 (variable-property case). These reference solutions were produced after a first international workshop organized by CEA and LIMSI in January 2000, in which the above authors volunteered to produce accurate numerical solutions from which the present reference solutions could be established. (authors)
International Nuclear Information System (INIS)
Husin Wagiran; Wan Mohd Nasir Wan Kadir
1997-01-01
In neutron scattering processes, the effect of multiple scattering is to cause an effective increase in the measured cross-sections due to the increased probability of neutron scattering interactions in the sample. Analysis of how the effective cross-section varies with thickness is very complicated due to complicated sample geometries and the variation of scattering cross-sections with energy. The Monte Carlo method is one possible method for treating multiple scattering processes in an extended sample. In this method many approximations have to be made, and accurate data on microscopic cross-sections are needed at various angles. In the present work, a Monte Carlo simulation programme suitable for a small computer was developed. The programme was capable of predicting the number of neutrons scattered from various thicknesses of aluminium samples at all possible angles between 0° and 360° in 10° increments. In order to keep the programme simple and capable of being run on a microcomputer in reasonable time, the calculations were done in a two-dimensional coordinate system. The number of neutrons predicted by this model shows good agreement with previous experimental results
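In the same spirit, a toy two-dimensional slab version fits in a few lines; isotropic, energy-independent scattering and the mean-free-path value are crude assumptions standing in for the measured angle-dependent cross-sections the programme used.

    import numpy as np

    rng = np.random.default_rng(42)

    def scattered_counts(thickness_cm, mfp_cm=2.0, n_neutrons=20_000, bin_deg=10):
        """Neutrons enter a slab along +x, travel exponentially distributed
        path lengths, scatter isotropically in 2D, and are tallied into
        angular bins when they leave the slab."""
        counts = np.zeros(360 // bin_deg)
        for _ in range(n_neutrons):
            x, angle = 0.0, 0.0
            while True:
                x += rng.exponential(mfp_cm) * np.cos(angle)
                if not 0.0 <= x <= thickness_cm:        # neutron escaped
                    counts[int(np.degrees(angle) % 360) // bin_deg] += 1
                    break
                angle = rng.uniform(0.0, 2 * np.pi)     # isotropic scatter
        return counts

    print(scattered_counts(1.0))    # exit counts per 10-degree bin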
Testing the gravitational instability hypothesis?
Babul, Arif; Weinberg, David H.; Dekel, Avishai; Ostriker, Jeremiah P.
1994-01-01
We challenge a widely accepted assumption of observational cosmology: that successful reconstruction of observed galaxy density fields from measured galaxy velocity fields (or vice versa), using the methods of gravitational instability theory, implies that the observed large-scale structures and large-scale flows were produced by the action of gravity. This assumption is false, in that there exist nongravitational theories that pass the reconstruction tests and gravitational theories with certain forms of biased galaxy formation that fail them. Gravitational instability theory predicts specific correlations between large-scale velocity and mass density fields, but the same correlations arise in any model where (a) structures in the galaxy distribution grow from homogeneous initial conditions in a way that satisfies the continuity equation, and (b) the present-day velocity field is irrotational and proportional to the time-averaged velocity field. We demonstrate these assertions using analytical arguments and N-body simulations. If large-scale structure is formed by gravitational instability, then the ratio of the galaxy density contrast to the divergence of the velocity field yields an estimate of the density parameter Ω (or, more generally, an estimate of β ≡ Ω^0.6/b, where b is an assumed constant of proportionality between galaxy and mass density fluctuations). In nongravitational scenarios, the values of Ω or β estimated in this way may fail to represent the true cosmological values. However, even if nongravitational forces initiate and shape the growth of structure, gravitationally induced accelerations can dominate the velocity field at late times, long after the action of any nongravitational impulses. The estimated β approaches the true value in such cases, and in our numerical simulations the estimated β values are reasonably accurate for both gravitational and nongravitational models. Reconstruction tests
Masuda, Y; Misztal, I; Tsuruta, S; Legarra, A; Aguilar, I; Lourenco, D A L; Fragomeni, B O; Lawlor, T J
2016-03-01
The objectives of this study were to develop and evaluate an efficient implementation of the computation of the inverse of the genomic relationship matrix with the recursion algorithm, called the algorithm for proven and young (APY), in single-step genomic BLUP. We validated genomic predictions for young bulls with more than 500,000 genotyped animals in final score for US Holsteins. Phenotypic data included 11,626,576 final scores on 7,093,380 US Holstein cows, and genotypes were available for 569,404 animals. Daughter deviations for young bulls with no classified daughters in 2009, but at least 30 classified daughters in 2014, were computed using all the phenotypic data. Genomic predictions for the same bulls were calculated with single-step genomic BLUP using phenotypes up to 2009. We calculated the inverse of the genomic relationship matrix, G_APY^(-1), based on a direct inversion of the genomic relationship matrix for a small subset of genotyped animals (core animals) and extended that information to noncore animals by recursion. We tested several sets of core animals including 9,406 bulls with at least 1 classified daughter; 9,406 bulls and 1,052 classified dams of bulls; 9,406 bulls and 7,422 classified cows; and random samples of 5,000 to 30,000 animals. Validation reliability was assessed by the coefficient of determination from regression of daughter deviation on genomic predictions for the predicted young bulls. The reliabilities were 0.39 with 5,000 randomly chosen core animals; 0.45 with the 9,406 bulls and 7,422 cows as core animals; and 0.44 with the remaining sets. With phenotypes truncated in 2009 and the preconditioned conjugate gradient to solve the mixed model equations, the number of rounds to convergence for core animals defined by bulls was 1,343; defined by bulls and cows, 2,066; and defined by 10,000 random animals, at most 1,629. With complete phenotype data, the number of rounds decreased to 858, 1,299, and at most 1,092, respectively. Setting up G_APY^(-1)...
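The recursion behind G_APY^(-1) is compact enough to sketch. The illustrative NumPy code below follows the published APY block formula (direct inversion of the core block, diagonal residuals for noncore animals); it is not the production single-step genomic BLUP software.

    import numpy as np

    def apy_inverse(G, core):
        """APY inverse of a genomic relationship matrix G, given an index
        array of core animals; noncore rows are handled by recursion."""
        n = G.shape[0]
        noncore = np.setdiff1d(np.arange(n), core)
        Gcc_inv = np.linalg.inv(G[np.ix_(core, core)])
        Gnc = G[np.ix_(noncore, core)]
        P = Gnc @ Gcc_inv                                # recursion coefficients
        # Mendelian-sampling-like residual variance of each noncore animal
        m = np.diag(G)[noncore] - np.einsum('ij,ij->i', P, Gnc)
        Minv = np.diag(1.0 / m)
        Ginv = np.zeros_like(G)
        Ginv[np.ix_(core, core)] = Gcc_inv + P.T @ Minv @ P
        Ginv[np.ix_(core, noncore)] = -P.T @ Minv
        Ginv[np.ix_(noncore, core)] = -Minv @ P
        Ginv[np.ix_(noncore, noncore)] = Minv
        return Ginv

    # Sanity check: with every animal in the core, APY equals the exact inverse
    A = np.random.default_rng(0).normal(size=(6, 6))
    G = A @ A.T + 6 * np.eye(6)
    print(np.allclose(apy_inverse(G, np.arange(6)), np.linalg.inv(G)))   # True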
Carlson, H. W.
1979-01-01
A new linearized-theory pressure-coefficient formulation was studied. The new formulation is intended to provide more accurate estimates of detailed pressure loadings for improved stability analysis and for analysis of critical structural design conditions. The approach is based on the use of oblique-shock and Prandtl-Meyer expansion relationships for accurate representation of the variation of pressures with surface slopes in two-dimensional flow and linearized-theory perturbation velocities for evaluation of local three-dimensional aerodynamic interference effects. The applicability and limitations of the modification to linearized theory are illustrated through comparisons with experimental pressure distributions for delta wings covering a Mach number range from 1.45 to 4.60 and angles of attack from 0 to 25 degrees.
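To illustrate the ingredients named above, the sketch below compares first-order linearized theory with an exact Prandtl-Meyer expansion for a surface turned away from a Mach 2 stream. This is a textbook reconstruction under standard assumptions (γ = 1.4), not the program of the study; the oblique-shock compression side would be handled analogously.

    import numpy as np
    from scipy.optimize import brentq

    g = 1.4  # ratio of specific heats

    def nu(M):
        """Prandtl-Meyer function (radians)."""
        a = np.sqrt((g + 1) / (g - 1))
        return a * np.arctan(np.sqrt((M**2 - 1) / a**2)) - np.arctan(np.sqrt(M**2 - 1))

    def cp_expansion(M1, theta):
        """Cp on a surface turned theta radians away from the flow."""
        M2 = brentq(lambda M: nu(M) - nu(M1) - theta, M1, 50.0)
        r = ((1 + 0.5 * (g - 1) * M1**2) / (1 + 0.5 * (g - 1) * M2**2)) ** (g / (g - 1))
        return 2 * (r - 1) / (g * M1**2)     # isentropic pressure ratio -> Cp

    def cp_linear(M1, theta):
        """First-order (linearized) theory for the same expansion turn."""
        return -2 * theta / np.sqrt(M1**2 - 1)

    for deg in (2, 5, 10):
        th = np.radians(deg)
        print(deg, round(cp_expansion(2.0, th), 4), round(cp_linear(2.0, th), 4))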
Kujawski, Joseph T.; Gliese, Ulrik B.; Cao, N. T.; Zeuch, M. A.; White, D.; Chornay, D. J; Lobell, J. V.; Avanov, L. A.; Barrie, A. C.; Mariano, A. J.;
2015-01-01
Each half of the Dual Electron Spectrometer (DES) of the Fast Plasma Investigation (FPI) on NASA's Magnetospheric MultiScale (MMS) mission utilizes a microchannel plate Chevron stack feeding 16 separate detection channels each with a dedicated anode and amplifier/discriminator chip. The desire to detect events on a single channel with a temporal spacing of 100 ns and a fixed dead-time drove our decision to use an amplifier/discriminator with a very fast (GHz class) front end. Since the inherent frequency response of each pulse in the output of the DES microchannel plate system also has frequency components above a GHz, this produced a number of design constraints not normally expected in electronic systems operating at peak speeds of 10 MHz. Additional constraints are imposed by the geometry of the instrument requiring all 16 channels along with each anode and amplifier/discriminator to be packaged in a relatively small space. We developed an electrical model for board level interactions between the detector channels to allow us to design a board topology which gave us the best detection sensitivity and lowest channel to channel crosstalk. The amplifier/discriminator output was designed to prevent the outputs from one channel from producing triggers on the inputs of other channels. A number of Radio Frequency design techniques were then applied to prevent signals from other subsystems (e.g. the high voltage power supply, command and data handling board, and Ultraviolet stimulation for the MCP) from generating false events. These techniques enabled us to operate the board at its highest sensitivity when operated in isolation and at very high sensitivity when placed into the overall system.
Hypothesis test for synchronization: twin surrogates revisited.
Romano, M Carmen; Thiel, Marco; Kurths, Jürgen; Mergenthaler, Konstantin; Engbert, Ralf
2009-03-01
The method of twin surrogates has been introduced to test for phase synchronization of complex systems in the case of passive experiments. In this paper we derive new analytical expressions for the number of twins depending on the size of the neighborhood, as well as on the length of the trajectory. This allows us to determine the optimal parameters for the generation of twin surrogates. Furthermore, we determine the quality of the twin surrogates with respect to several linear and nonlinear statistics depending on the parameters of the method. In the second part of the paper we perform a hypothesis test for phase synchronization in the case of experimental data from fixational eye movements. These miniature eye movements have been shown to play a central role in neural information processing underlying the perception of static visual scenes. The high number of data sets (21 subjects and 30 trials per person) allows us to compare the generated twin surrogates with the "natural" surrogates that correspond to the different trials. We show that the generated twin surrogates reproduce very well all linear and nonlinear characteristics of the underlying experimental system. The synchronization analysis of fixational eye movements by means of twin surrogates reveals that the synchronization between the left and right eye is significant, indicating that either the centers in the brain stem generating fixational eye movements are closely linked, or, alternatively that there is only one center controlling both eyes.
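For readers unfamiliar with the method, a toy twin-surrogate generator might look like the sketch below; it works on a scalar series without delay embedding and with simplified end-of-series handling, so it illustrates the idea only and is not the authors' procedure.

    import numpy as np

    def twin_surrogate(x, eps, seed=0):
        """Build a recurrence matrix, call two time points 'twins' when their
        recurrence columns are identical, and regrow a surrogate trajectory
        that jumps at random between twins before stepping forward."""
        rng = np.random.default_rng(seed)
        R = np.abs(x[:, None] - x[None, :]) < eps
        twins = [np.flatnonzero((R == R[:, [j]]).all(axis=0)) for j in range(len(x))]
        i = int(rng.integers(len(x) - 1))
        out = [x[i]]
        while len(out) < len(x):
            i = int(rng.choice(twins[i]))          # jump to any twin of i
            if i + 1 >= len(x):
                i = int(rng.integers(len(x) - 1))  # simplistic restart at the end
                continue
            i += 1                                 # then follow the dynamics
            out.append(x[i])
        return np.array(out)

    x = np.sin(np.linspace(0, 20 * np.pi, 300))
    print(twin_surrogate(x, eps=0.1)[:5])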
Planned Hypothesis Tests Are Not Necessarily Exempt from Multiplicity Adjustment
Frane, Andrew V.
2015-01-01
Scientific research often involves testing more than one hypothesis at a time, which can inflate the probability that a Type I error (false discovery) will occur. To prevent this Type I error inflation, adjustments can be made to the testing procedure that compensate for the number of tests. Yet many researchers believe that such adjustments are…
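For independent tests the inflation is easy to quantify, as in this small sketch:

    # With m independent tests at level alpha, the familywise error rate is
    # 1 - (1 - alpha)^m; a Bonferroni adjustment restores control.
    alpha, m = 0.05, 5
    print(1 - (1 - alpha) ** m)        # ~0.226 unadjusted
    print(1 - (1 - alpha / m) ** m)    # ~0.049 with Bonferroni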
Alakurtti, Sini; Keto-Timonen, Riikka; Virtanen, Sonja; Martínez, Pilar Ortiz; Laukkanen-Ninios, Riikka; Korkeala, Hannu
2016-06-01
A total of 253 multiple-locus variable-number tandem-repeat analysis (MLVA) types among 634 isolates were discovered while studying the genetic diversity of porcine Yersinia enterocolitica 4/O:3 isolates from eight different European countries. Six variable-number tandem-repeat (VNTR) loci, V2A, V4, V5, V6, V7, and V9, were used to study the isolates from 82 farms in Belgium (n = 93, 7 farms), England (n = 41, 8 farms), Estonia (n = 106, 12 farms), Finland (n = 70, 13 farms), Italy (n = 111, 20 farms), Latvia (n = 66, 3 farms), Russia (n = 60, 10 farms), and Spain (n = 87, 9 farms). Cluster analysis revealed mainly country-specific clusters, and only one MLVA type, consisting of two isolates, was found in two countries: Russia and Italy. Also, farm-specific clusters were discovered, but the same MLVA types could also be found on different farms. Analysis of multiple isolates originating either from the same tonsils (n = 4) or from the same farm, but 6 months apart, revealed both identical and different MLVA types. MLVA showed a very good discriminatory ability, with a Simpson's discriminatory index (DI) of 0.989. DIs for VNTR loci V2A, V4, V5, V6, V7, and V9 were 0.916, 0.791, 0.901, 0.877, 0.912, and 0.785, respectively, when studying all isolates together, but variation was evident between isolates originating from different countries. Locus V4 in the Spanish isolates and locus V9 in the Latvian isolates did not differentiate isolates (DI 0.000), and locus V9 in the English isolates showed very low discriminatory power (DI 0.049). The porcine Y. enterocolitica 4/O:3 isolates were diverse, but the variation in DI demonstrates that the well-discriminating loci V2A, V5, V6, and V7 should be included in the MLVA protocol when maximal discriminatory power is needed.
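Simpson's discriminatory index, used throughout this record, has a one-line definition: the probability that two isolates drawn at random without replacement belong to different types. A sketch with made-up counts:

    def simpsons_di(type_counts):
        """1 - sum n_j(n_j - 1) / (N(N - 1)) over the types."""
        N = sum(type_counts)
        return 1.0 - sum(n * (n - 1) for n in type_counts) / (N * (N - 1))

    print(simpsons_di([4, 3, 2, 1]))   # 10 isolates, 4 MLVA types -> 0.778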
Hemmann, Jethro L.; Saurel, Olivier; Ochsner, Andrea M.; Stodden, Barbara K.; Kiefer, Patrick; Milon, Alain; Vorholt, Julia A.
2016-01-01
Methylobacterium extorquens AM1 uses dedicated cofactors for one-carbon unit conversion. Based on the sequence identities of enzymes and activity determinations, a methanofuran analog was proposed to be involved in formaldehyde oxidation in Alphaproteobacteria. Here, we report the structure of the cofactor, which we termed methylofuran. Using an in vitro enzyme assay and LC-MS, methylofuran was identified in cell extracts and further purified. From the exact mass and MS-MS fragmentation pattern, the structure of the cofactor was determined to consist of a polyglutamic acid side chain linked to a core structure similar to the one present in archaeal methanofuran variants. NMR analyses showed that the core structure contains a furan ring. However, instead of the tyramine moiety that is present in methanofuran cofactors, a tyrosine residue is present in methylofuran, which was further confirmed by MS through the incorporation of a 13C-labeled precursor. Methylofuran was present as a mixture of different species with varying numbers of glutamic acid residues in the side chain ranging from 12 to 24. Notably, the glutamic acid residues were not solely γ-linked, as is the case for all known methanofurans, but were identified by NMR as a mixture of α- and γ-linked amino acids. Considering the unusual peptide chain, the elucidation of the structure presented here sets the basis for further research on this cofactor, which is probably the largest cofactor known so far. PMID:26895963
Bade, Richard; Bijlsma, Lubertus; Miller, Thomas H; Barron, Leon P; Sancho, Juan Vicente; Hernández, Felix
2015-12-15
The recent development of broad-scope high resolution mass spectrometry (HRMS) screening methods has resulted in a much improved capability for new compound identification in environmental samples. However, positive identifications at the ng/L concentration level rely on analytical reference standards for chromatographic retention time (tR) and mass spectral comparisons. Chromatographic tR prediction can play a role in increasing confidence in suspect screening efforts for new compounds in the environment, especially when standards are not available, but reliable methods are lacking. The current work focuses on the development of artificial neural networks (ANNs) for tR prediction in gradient reversed-phase liquid chromatography, applied along with HRMS data to suspect screening of wastewater and environmental surface water samples. Based on a compound tR dataset of >500 compounds, an optimized 4-layer back-propagation multi-layer perceptron model enabled predictions for 85% of all compounds to within 2 min of their measured tR for training (n=344) and verification (n=100) datasets. To evaluate the ANN's ability to generalize to new data, the model was further tested using 100 randomly selected compounds and revealed 95% prediction accuracy within the 2-minute elution interval. Given the increasing concern about the presence of drug metabolites and other transformation products (TPs) in the aquatic environment, the model was applied along with HRMS data for preliminary identification of pharmaceutically-related compounds in real samples. Examples of compounds where reference standards were subsequently acquired and later confirmed are also presented. To our knowledge, this work presents for the first time the successful application of an accurate retention time predictor and HRMS data-mining using the largest number of compounds to preliminarily identify new or emerging contaminants in wastewater and surface waters. Copyright © 2015 Elsevier B.V. All rights reserved.
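A minimal sketch of the retention-time prediction idea follows, assuming molecular descriptors are already computed (random stand-ins here); the authors' specific 4-layer network and >500-compound dataset are not reproduced.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 16))      # 16 descriptors per compound (synthetic)
    tR = 2 + 10 / (1 + np.exp(-X[:, 0])) + rng.normal(0, 0.5, 500)   # synthetic tR (min)

    X_tr, X_te, y_tr, y_te = train_test_split(X, tR, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    model.fit(X_tr, y_tr)

    within_2min = np.mean(np.abs(model.predict(X_te) - y_te) < 2.0)
    print(f"fraction predicted within 2 min: {within_2min:.2f}")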
Ribéreau-Gayon, Agathe; Rando, Carolyn; Schuliar, Yves; Chapenoire, Stéphane; Crema, Enrico R; Claes, Julien; Seret, Bernard; Maleret, Vincent; Morgan, Ruth M
2017-03-01
Accurate determination of the origin and timing of trauma is key in medicolegal investigations when the cause and manner of death are unknown. However, distinction between criminal and accidental perimortem trauma and postmortem modifications can be challenging when facing unidentified trauma. Postmortem examination of the immersed victims of the Yemenia airplane crash (Comoros, 2009) demonstrated the challenges in diagnosing extensive unusual circular lesions found on the corpses. The objective of this study was to identify the origin and timing of occurrence (peri- or postmortem) of the lesions. A retrospective multidisciplinary study using autopsy reports (n = 113) and postmortem digital photos (n = 3,579) was conducted. Of the 113 victims recovered from the crash, 62 (54.9 %) presented unusual lesions (n = 560) with a median number of 7 (IQR 3-13) and a maximum of 27 per corpse. The majority of lesions were elliptic (58 %) and had an area smaller than 10 cm² (82.1 %). Some lesions (6.8 %) also showed clear tooth notches on their edges. These findings identified most of the lesions as consistent with postmortem bite marks from cookiecutter sharks (Isistius spp.). It suggests that cookiecutter sharks were important agents in the degradation of the corpses and thus introduced potential cognitive bias in the investigation of the cause and manner of death. A novel set of evidence-based identification criteria for cookiecutter bite marks on human bodies is developed to facilitate more accurate medicolegal diagnosis of cookiecutter bites.
The Carnivore Connection Hypothesis: Revisited
Directory of Open Access Journals (Sweden)
Jennie C. Brand-Miller
2012-01-01
Full Text Available The “Carnivore Connection” hypothesizes that, during human evolution, a scarcity of dietary carbohydrate in diets with low plant : animal subsistence ratios led to insulin resistance providing a survival and reproductive advantage with selection of genes for insulin resistance. The selection pressure was relaxed at the beginning of the Agricultural Revolution when large quantities of cereals first entered human diets. The “Carnivore Connection” explains the high prevalence of intrinsic insulin resistance and type 2 diabetes in populations that transition rapidly from traditional diets with a low-glycemic load, to high-carbohydrate, high-glycemic index diets that characterize modern diets. Selection pressure has been relaxed longest in European populations, explaining a lower prevalence of insulin resistance and type 2 diabetes, despite recent exposure to famine and food scarcity. Increasing obesity and habitual consumption of high-glycemic-load diets worsens insulin resistance and increases the risk of type 2 diabetes in all populations.
Validity of Linder Hypothesis in BRIC Countries
Directory of Open Access Journals (Sweden)
Rana Atabay
2016-03-01
Full Text Available In this study, the theory of similarity in preferences (Linder hypothesis) has been introduced and trade among BRIC countries has been examined to determine whether it is consistent with this hypothesis. Using data for the period 1996-2010, the study applies panel data analysis in order to provide evidence regarding the empirical validity of the Linder hypothesis for BRIC countries' international trade. Empirical findings show that the trade between BRIC countries is in support of the Linder hypothesis.
Ancellet, Gerard; Daskalakis, Nikos; Raut, Jean Christophe; Quennehen, Boris; Ravetta, Francois; Hair, Jonathan; Tarasick, David; Schlager, Hans; Weinheimer, Andrew J.; Thompson, Anne M.;
2016-01-01
The goals of the paper are to: (1) present tropospheric ozone (O3) climatologies for summer 2008, based on a large amount of measurements made during the International Polar Year, when the Polar Study using Aircraft, Remote Sensing, Surface Measurements, and Models of Climate Chemistry, Aerosols, and Transport (POLARCAT) campaigns were conducted; and (2) investigate the processes that determine O3 concentrations in two different regions (Canada and Greenland) that were thoroughly studied using measurements from 3 aircraft and 7 ozonesonde stations. This paper provides an integrated analysis of these observations, and the discussion of the latitudinal and vertical variability of tropospheric ozone north of 55°N during this period is performed using a regional model (WRF-Chem). Ozone, CO and potential vorticity (PV) distributions are extracted from the simulation at the measurement locations. The model is able to reproduce the O3 latitudinal and vertical variability, but a negative O3 bias of 6-15 ppbv is found in the free troposphere above 4 km, especially over Canada. Ozone average concentrations are of the order of 65 ppbv at altitudes above 4 km both over Canada and Greenland, while they are less than 50 ppbv in the lower troposphere. The relative influence of stratosphere-troposphere exchange (STE) and of ozone production related to the local biomass burning (BB) emissions is discussed using differences between average values of O3, CO and PV for Southern and Northern Canada or Greenland and two vertical ranges in the troposphere: 0-4 km and 4-8 km. For Canada, the model CO distribution and the weak correlation (less than 30%) of O3 and PV suggest that stratosphere-troposphere exchange is not the major contribution to average tropospheric ozone at latitudes below 70°N, due to the fact that local biomass burning emissions were significant during the 2008 summer period. Conversely, over Greenland, significant STE is found according to the better O3 versus PV
A comparator-hypothesis account of biased contingency detection.
Vadillo, Miguel A; Barberia, Itxaso
2018-02-12
Our ability to detect statistical dependencies between different events in the environment is strongly biased by the number of coincidences between them. Even when there is no true covariation between a cue and an outcome, if the marginal probability of either of them is high, people tend to perceive some degree of statistical contingency between both events. The present paper explores the ability of the Comparator Hypothesis to explain the general pattern of results observed in this literature. Our simulations show that this model can account for the biasing effects of the marginal probabilities of cues and outcomes. Furthermore, the overall fit of the Comparator Hypothesis to a sample of experimental conditions from previous studies is comparable to that of the popular Rescorla-Wagner model. These results should encourage researchers to further explore and put to the test the predictions of the Comparator Hypothesis in the domain of biased contingency detection. Copyright © 2018 Elsevier B.V. All rights reserved.
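The abstract benchmarks the Comparator Hypothesis against the Rescorla-Wagner model. As an assumption-laden toy of the phenomenon itself, the following Rescorla-Wagner simulation shows a spurious positive cue-outcome association emerging under zero contingency when outcomes are frequent:

    import numpy as np

    def rw_cue_weight(p_outcome, n_trials=40, lr=0.15, seed=0):
        """Rescorla-Wagner run with zero contingency: the outcome occurs with
        the same probability whether or not the cue is present; returns the
        cue's associative strength after training."""
        rng = np.random.default_rng(seed)
        w_cue = w_ctx = 0.0                      # context is always present
        for _ in range(n_trials):
            cue = rng.random() < 0.5
            outcome = rng.random() < p_outcome   # independent of the cue
            delta = float(outcome) - (w_ctx + (w_cue if cue else 0.0))
            w_ctx += lr * delta
            if cue:
                w_cue += lr * delta
        return w_cue

    for p in (0.2, 0.8):
        print(p, np.mean([rw_cue_weight(p, seed=s) for s in range(200)]))
    # higher outcome density -> larger (spurious) cue strength early in training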
Vascular Gene Expression: A Hypothesis
Directory of Open Access Journals (Sweden)
Angélica Concepción eMartínez-Navarro
2013-07-01
Full Text Available The phloem is the conduit through which photoassimilates are distributed from autotrophic to heterotrophic tissues and is involved in the distribution of signaling molecules that coordinate plant growth and responses to the environment. Phloem function depends on the coordinate expression of a large array of genes. We have previously identified conserved motifs in upstream regions of the Arabidopsis genes, encoding the homologs of pumpkin phloem sap mRNAs, displaying expression in vascular tissues. This tissue-specific expression in Arabidopsis is predicted by the overrepresentation of GA/CT-rich motifs in gene promoters. In this work we have searched for common motifs in upstream regions of the homologous genes from plants considered to possess a primitive vascular tissue (a lycophyte), as well as from others that lack a true vascular tissue (a bryophyte), and finally from chlorophytes. Both the lycophyte and the bryophyte display motifs similar to those found in Arabidopsis with a significantly low E-value, while the chlorophytes showed either a different conserved motif or no conserved motif at all. These results suggest that these same genes are expressed coordinately in non-vascular plants; this coordinate expression may have been one of the prerequisites for the development of conducting tissues in plants. We have also analyzed the phylogeny of conserved proteins that may be involved in phloem function and development. The presence of CmPP16, APL, FT and YDA in chlorophytes suggests the recruitment of ancient regulatory networks for the development of the vascular tissue during evolution, while OPS is a novel protein specific to vascular plants.
Evolutionary hypothesis for Chiari type I malformation.
Fernandes, Yvens Barbosa; Ramina, Ricardo; Campos-Herrera, Cynthia Resende; Borges, Guilherme
2013-10-01
Chiari I malformation (CM-I) is classically defined as a cerebellar tonsillar herniation (≥5 mm) through the foramen magnum. A decreased posterior fossa volume, mainly due to basioccipital hypoplasia and sometimes platybasia, leads to posterior fossa overcrowding and consequently cerebellar herniation. Regardless of radiological findings, embryological genetic hypotheses or any other postulations, the real cause behind this malformation is not yet well elucidated and remains largely unknown. The aim of this paper is to approach CM-I under a broader and new perspective, conjoining anthropology, genetics and neurosurgery, with special focus on the substantial changes that have occurred in the posterior cranial base through human evolution. Important evolutionary allometric changes occurred during brain expansion, and genetic studies of human evolution demonstrated an unexpectedly high rate of gene flow interchange and possibly interbreeding during this process. Based upon this review, we hypothesize that CM-I may be the result of an evolutionary anthropological imprint, caused by evolving species populations that eventually met each other and mingled in the last 1.7 million years. Copyright © 2013 Elsevier Ltd. All rights reserved.
Hypothesis Testing in the Real World
Miller, Jeff
2017-01-01
Critics of null hypothesis significance testing suggest that (a) its basic logic is invalid and (b) it addresses a question that is of no interest. In contrast to (a), I argue that the underlying logic of hypothesis testing is actually extremely straightforward and compelling. To substantiate that, I present examples showing that hypothesis…
Error probabilities in default Bayesian hypothesis testing
Gu, Xin; Hoijtink, Herbert; Mulder, J.
2016-01-01
This paper investigates the classical type I and type II error probabilities of default Bayes factors for a Bayesian t test. Default Bayes factors quantify the relative evidence between the null hypothesis and the unrestricted alternative hypothesis without needing to specify prior distributions for
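The flavor of such an error-probability analysis can be sketched by simulation. Below, a rough BIC-based approximation stands in for the default Bayes factors studied in the paper (an assumption, since the paper's default priors are not reproduced), and the classical type I error rate of the rule 'support H1 when BF10 > 3' is estimated under the null:

    import numpy as np

    def bic_bayes_factor(x):
        """BIC-approximate BF10 for H1: mean != 0 vs H0: mean = 0 in a
        one-sample normal model."""
        n = len(x)
        bic0 = n * np.log(np.sum(x**2) / n)                           # H0: mu fixed at 0
        bic1 = n * np.log(np.sum((x - x.mean())**2) / n) + np.log(n)  # H1: mu free
        return np.exp((bic0 - bic1) / 2)

    rng = np.random.default_rng(0)
    bfs = np.array([bic_bayes_factor(rng.normal(size=50)) for _ in range(5000)])
    print(np.mean(bfs > 3))    # empirical type I error of the BF > 3 rule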
Reassessing the Trade-off Hypothesis
DEFF Research Database (Denmark)
Rosas, Guillermo; Manzetti, Luigi
2015-01-01
Do economic conditions drive voters to punish politicians that tolerate corruption? Previous scholarly work contends that citizens in young democracies support corrupt governments that are capable of promoting good economic outcomes, the so-called trade-off hypothesis. We test this hypothesis based...
Mastery Learning and the Decreasing Variability Hypothesis.
Livingston, Jennifer A.; Gentile, J. Ronald
1996-01-01
This report results from studies that tested two variations of Bloom's decreasing variability hypothesis using performance on successive units of achievement in four graduate classrooms that used mastery learning procedures. Data do not support the decreasing variability hypothesis; rather, they show no change over time. (SM)
Social learning and evolution: the cultural intelligence hypothesis
van Schaik, Carel P.; Burkart, Judith M.
2011-01-01
If social learning is more efficient than independent individual exploration, animals should learn vital cultural skills exclusively, and routine skills faster, through social learning, provided they actually use social learning preferentially. Animals with opportunities for social learning indeed do so. Moreover, more frequent opportunities for social learning should boost an individual's repertoire of learned skills. This prediction is confirmed by comparisons among wild great ape populations and by social deprivation and enculturation experiments. These findings shaped the cultural intelligence hypothesis, which complements the traditional benefit hypotheses for the evolution of intelligence by specifying the conditions in which these benefits can be reaped. The evolutionary version of the hypothesis argues that species with frequent opportunities for social learning should more readily respond to selection for a greater number of learned skills. Because improved social learning also improves asocial learning, the hypothesis predicts a positive interspecific correlation between social-learning performance and individual learning ability. Variation among primates supports this prediction. The hypothesis also predicts that more heavily cultural species should be more intelligent. Preliminary tests involving birds and mammals support this prediction too. The cultural intelligence hypothesis can also account for the unusual cognitive abilities of humans, as well as our unique mechanisms of skill transfer. PMID:21357223
Koninck, Jean-Marie De
2009-01-01
Who would have thought that listing the positive integers along with their most remarkable properties could end up being such an engaging and stimulating adventure? The author uses this approach to explore elementary and advanced topics in classical number theory. A large variety of numbers are contemplated: Fermat numbers, Mersenne primes, powerful numbers, sublime numbers, Wieferich primes, insolite numbers, Sastry numbers, voracious numbers, to name only a few. The author also presents short proofs of miscellaneous results and constantly challenges the reader with a variety of old and new n
Nodding syndrome—a new hypothesis and new direction for research
Directory of Open Access Journals (Sweden)
Robert Colebunders
2014-10-01
Nodding syndrome (NS) is an unexplained neurological illness that mainly affects children aged between 5 and 15 years. NS has so far been reported from South Sudan, northern Uganda, and Tanzania, but in spite of extensive investigations, the aetiology remains unknown. We hypothesize that blackflies (Diptera: Simuliidae) infected with Onchocerca volvulus microfilariae may also transmit another pathogen. This may be a novel neurotropic virus or an endosymbiont of the microfilariae, which causes not only NS, but also epilepsy without nodding. This hypothesis addresses many of the questions about NS that researchers have previously been unable to answer. An argument in favour of the hypothesis is the fact that in Uganda, the number of new NS cases decreased (with no new cases reported since 2013) after ivermectin coverage was increased and with the implementation of a programme of aerial spraying and larviciding of the large rivers where blackflies were breeding. If confirmed, our hypothesis will enable new strategies to control NS outbreaks.
The organisational structure of protein networks: revisiting the centrality-lethality hypothesis.
Raman, Karthik; Damaraju, Nandita; Joshi, Govind Krishna
2014-03-01
Protein networks, describing physical interactions as well as functional associations between proteins, have been unravelled for many organisms in the recent past. Databases such as STRING provide excellent resources for the analysis of such networks. In this contribution, we revisit the organisation of protein networks, particularly the centrality-lethality hypothesis, which hypothesises that nodes with higher centrality in a network are more likely to produce lethal phenotypes on removal, compared to nodes with lower centrality. We consider the protein networks of a diverse set of 20 organisms, with essentiality information available in the Database of Essential Genes, and assess the relationship between centrality measures and lethality. For each of these organisms, we obtained networks of high-confidence interactions from the STRING database, and computed network parameters such as degree, betweenness centrality, closeness centrality and pairwise disconnectivity indices. We observe that the networks considered here are predominantly disassortative. Further, we observe that essential nodes in a network have a significantly higher average degree and betweenness centrality, compared to the network average. Most previous studies have evaluated the centrality-lethality hypothesis for Saccharomyces cerevisiae and Escherichia coli; we here observe that the centrality-lethality hypothesis holds good for a large number of organisms, with certain limitations. Betweenness centrality may also be a useful measure to identify essential nodes, but measures like closeness centrality and pairwise disconnectivity are not significantly higher for essential nodes.
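As an illustration of the kind of computation the abstract describes, here is a hedged sketch using networkx on a synthetic scale-free graph. The "essential" node set is randomly sampled stand-in data, not the Database of Essential Genes, and only the degree, betweenness and assortativity measures from the abstract's list are shown.

```python
import random
import networkx as nx

# Toy stand-in for a STRING-style protein network: a scale-free graph.
G = nx.barabasi_albert_graph(n=500, m=3, seed=1)

# Hypothetical "essential" set; in the study this information comes from
# the Database of Essential Genes, here it is just a random sample.
random.seed(1)
essential = set(random.sample(list(G.nodes), 50))

deg = dict(G.degree())
btw = nx.betweenness_centrality(G)

def mean_over(nodes, score):
    return sum(score[v] for v in nodes) / len(nodes)

print("avg degree      essential vs all:",
      mean_over(essential, deg), mean_over(G.nodes, deg))
print("avg betweenness essential vs all:",
      mean_over(essential, btw), mean_over(G.nodes, btw))
# Disassortativity check mentioned in the abstract:
print("degree assortativity:", nx.degree_assortativity_coefficient(G))
```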
Gaming the Law of Large Numbers
Hoffman, Thomas R.; Snapp, Bart
2012-01-01
Many view mathematics as a rich and wonderfully elaborate game. In turn, games can be used to illustrate mathematical ideas. Fibber's Dice, an adaptation of the game Liar's Dice, is a fast-paced game that rewards gutsy moves and favors the underdog. It also brings to life concepts arising in the study of probability. In particular, Fibber's Dice…
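The mathematical idea behind such dice games is the law of large numbers. A small simulation (plain Python, a fair six-sided die assumed) shows the sample mean settling toward the expected value of 3.5 as rolls accumulate.

```python
import random

random.seed(42)

def running_mean_of_dice(n_rolls=10_000):
    """Watch the sample mean of fair-die rolls approach E[X] = 3.5."""
    total = 0
    for i in range(1, n_rolls + 1):
        total += random.randint(1, 6)       # one roll of a fair die
        if i in (10, 100, 1_000, 10_000):
            print(f"after {i:>6} rolls: mean = {total / i:.4f}")

running_mean_of_dice()
```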
Bennett, Ruth, Ed.; And Others
An introduction to the Hupa number system is provided in this workbook, one in a series of numerous materials developed to promote the use of the Hupa language. The book is written in English with Hupa terms used only for the names of numbers. The opening pages present the numbers from 1-10, giving the numeral, the Hupa word, the English word, and…
Indian Academy of Sciences (India)
Admin
Keywords: triangular number, figurate number, rangoli, Brahmagupta–Pell equation, Jacobi triple product identity. [Figure 1: The first four triangular numbers.] Anuradha S Garge completed her PhD from Pune University in 2008 under the supervision of Prof. S A Katre. Her research interests include K-theory and number theory.
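The keyword pairing of triangular numbers with the Brahmagupta–Pell equation suggests the classical square-triangular-number connection; assuming that is the relation intended by the article, it reads in LaTeX as:

```latex
% Square triangular numbers via the Pell equation (standard identity):
% T_n = m^2  iff  (2n+1)^2 - 2(2m)^2 = 1.
\[
  T_n = \frac{n(n+1)}{2}, \qquad
  T_n = m^2 \;\Longleftrightarrow\; x^2 - 2y^2 = 1,
  \quad x = 2n+1,\; y = 2m .
\]
```

Pell solutions (x, y) = (3, 2), (17, 12), … then recover the square triangular numbers 1, 36, …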
Directory of Open Access Journals (Sweden)
Schwarzweller Christoph
2015-02-01
In this article we introduce Proth numbers and prove two theorems on such numbers being prime [3]. We also give revised versions of Pocklington's theorem and of the Legendre symbol. Finally, we prove Pepin's theorem and that the fifth Fermat number is not prime.
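Proth's theorem, which the article formalizes, yields a fast primality certificate. Below is a sketch in Python; the function names and the 64-trial bound are our choices, not the article's formalization.

```python
import random

def is_proth_number(N):
    """N = k * 2**n + 1 with k odd and k < 2**n."""
    if N < 3 or N % 2 == 0:
        return False
    k, n = N - 1, 0
    while k % 2 == 0:
        k //= 2
        n += 1
    return k < 2**n

def proth_prime(N, trials=64, seed=0):
    """Proth's theorem: a Proth number N is prime iff some base a
    satisfies a**((N-1)//2) == -1 (mod N)."""
    assert is_proth_number(N)
    rng = random.Random(seed)
    for _ in range(trials):
        a = rng.randrange(2, N - 1)
        if pow(a, (N - 1) // 2, N) == N - 1:
            return True            # certificate of primality
    return False                   # composite with overwhelming probability

# The fifth Fermat number F5 = 2**32 + 1 is a Proth number (k=1, n=32);
# the article proves it is not prime.
F5 = 2**32 + 1
print(is_proth_number(F5), proth_prime(F5))   # True False
print(proth_prime(13), proth_prime(17))       # True True
```

For a prime Proth number, half of all bases are witnesses, so the probability that 64 random trials miss is about 2**-64; that is why a "False" answer can be read as composite in practice.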
An experimental test of the habitat-amount hypothesis for saproxylic beetles in a forested region.
Seibold, Sebastian; Bässler, Claus; Brandl, Roland; Fahrig, Lenore; Förster, Bernhard; Heurich, Marco; Hothorn, Torsten; Scheipl, Fabian; Thorn, Simon; Müller, Jörg
2017-06-01
The habitat-amount hypothesis challenges traditional concepts that explain species richness within habitats, such as the habitat-patch hypothesis, where species number is a function of patch size and patch isolation. It posits that effects of patch size and patch isolation are driven by effects of sample area, and thus that the number of species at a site is basically a function of the total habitat amount surrounding this site. We tested the habitat-amount hypothesis for saproxylic beetles and their habitat of dead wood by using an experiment comprising 190 plots with manipulated patch sizes situated in a forested region with a high variation in habitat amount (i.e., density of dead trees in the surrounding landscape). Although dead wood is a spatio-temporally dynamic habitat, saproxylic insects have life cycles shorter than the time needed for habitat turnover and they closely track their resource. Patch size was manipulated by adding various amounts of downed dead wood to the plots (~800 m³ in total); dead trees in the surrounding landscape (~240 km²) were identified using airborne laser scanning (light detection and ranging). Over 3 yr, 477 saproxylic species (101,416 individuals) were recorded. Considering 20-1,000 m radii around the patches, local landscapes were identified as having a radius of 40-120 m. Both patch size and habitat amount in the local landscapes independently affected species numbers without a significant interaction effect, hence refuting the island effect. Species accumulation curves relative to cumulative patch size were not consistent with either the habitat-patch hypothesis or the habitat-amount hypothesis: several small dead-wood patches held more species than a single large patch with an amount of dead wood equal to the sum of that of the small patches. Our results indicate that conservation of saproxylic beetles in forested regions should primarily focus on increasing the overall amount of dead wood without considering its
Mendonça, J. Ricardo G.
2012-01-01
We define a new class of numbers based on the first occurrence of certain patterns of zeros and ones in the expansion of irrational numbers in a given base and call them Sagan numbers, since they were first mentioned, in a special case, by the North American astronomer Carl E. Sagan in his science-fiction novel "Contact." Sagan numbers hold connections with a wealth of mathematical ideas. We describe some properties of the newly defined numbers and indicate directions for further amusement.
Implications of the Bohm-Aharonov hypothesis
International Nuclear Information System (INIS)
Ghirardi, G.C.; Rimini, A.; Weber, T.
1976-01-01
It is proved that the Bohm-Aharonov hypothesis concerning widely separated subsystems of composite quantum systems implies that it is impossible to express the dynamical evolution in terms of the density operator.
Multi-agent sequential hypothesis testing
Kim, Kwang-Ki K.; Shamma, Jeff S.
2014-01-01
incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well
Directory of Open Access Journals (Sweden)
Kopij Grzegorz
2017-12-01
During the years 1994–2009, the number of White Stork pairs breeding in the city of Wrocław (293 km²) fluctuated between 5 pairs in 1999 and 19 pairs in 2004. Most nests were clumped in two sites in the Odra river valley. Two nests were located only ca. 1 km from the city hall. The fluctuations in numbers can be linked to the availability of feeding grounds and weather. In years when grass was mowed in the Odra valley, the number of White Storks was higher than in years when the grass was left unattended. Overall, the mean number of fledglings per successful pair during the years 1995–2009 was slightly higher in the rural than in the urban area. Contrary to expectation, the mean number of fledglings per successful pair was the highest in the year of highest population density. In two rural counties adjacent to Wrocław, the number of breeding pairs was similar to that in the city in 1994/95 (15 vs. 13 pairs). However, in 2004 the number of breeding pairs in the city almost doubled compared to that in the neighboring counties (10 vs. 19 pairs). After a sharp decline between 2004 and 2008, populations in both areas were similar in 2009 (5 vs. 4 pairs), but much lower than in 1994–1995. Wrocław is probably the only large city (>100,000 people) in Poland where the White Stork has developed a sizeable, although fluctuating, breeding population. One of the most powerful roles the city-nesting White Storks may play is their ability to engage citizens directly with nature and thereby facilitate environmental education and awareness.
The (not so) Immortal Strand Hypothesis
Tomasetti, Cristian; Bozic, Ivana
2015-01-01
Background: Non-random segregation of DNA strands during stem cell replication has been proposed as a mechanism to minimize accumulated genetic errors in stem cells of rapidly dividing tissues. According to this hypothesis, an “immortal” DNA strand is passed to the stem cell daughter and not the more differentiated cell, keeping the stem cell lineage replication error-free. After it was introduced, experimental evidence both in favor and against the hypothesis has been presented. Principal...
Diamond, Harold G; Cheung, Man Ping
2016-01-01
"Generalized numbers" is a multiplicative structure introduced by A. Beurling to study how independent prime number theory is from the additivity of the natural numbers. The results and techniques of this theory apply to other systems having the character of prime numbers and integers; for example, it is used in the study of the prime number theorem (PNT) for ideals of algebraic number fields. Using both analytic and elementary methods, this book presents many old and new theorems, including several of the authors' results, and many examples of extremal behavior of g-number systems. Also, the authors give detailed accounts of the L^2 PNT theorem of J. P. Kahane and of the example created with H. L. Montgomery, showing that additive structure is needed for proving the Riemann hypothesis. Other interesting topics discussed are propositions "equivalent" to the PNT, the role of multiplicative convolution and Chebyshev's prime number formula for g-numbers, and how Beurling theory provides an interpretation of the ...
Petersen, T Kyle
2015-01-01
This text presents the Eulerian numbers in the context of modern enumerative, algebraic, and geometric combinatorics. The book first studies Eulerian numbers from a purely combinatorial point of view, then embarks on a tour of how these numbers arise in the study of hyperplane arrangements, polytopes, and simplicial complexes. Some topics include a thorough discussion of gamma-nonnegativity and real-rootedness for Eulerian polynomials, as well as the weak order and the shard intersection order of the symmetric group. The book also includes a parallel story of Catalan combinatorics, wherein the Eulerian numbers are replaced with Narayana numbers. Again there is a progression from combinatorics to geometry, including discussion of the associahedron and the lattice of noncrossing partitions. The final chapters discuss how both the Eulerian and Narayana numbers have analogues in any finite Coxeter group, with many of the same enumerative and geometric properties. There are four supplemental chapters throughout, ...
Indian Academy of Sciences (India)
Transfinite Numbers. What is Infinity? S M Srivastava. In a series of revolutionary articles written during the last quarter of the nineteenth century, the great German mathematician Georg Cantor removed the age-old mistrust of infinity and created an exceptionally beautiful and useful theory of transfinite numbers. This is.
A checklist to facilitate objective hypothesis testing in social psychology research.
Washburn, Anthony N; Morgan, G Scott; Skitka, Linda J
2015-01-01
Social psychology is not a very politically diverse area of inquiry, something that could negatively affect the objectivity of social psychological theory and research, as Duarte et al. argue in the target article. This commentary offers a number of checks to help researchers uncover possible biases and identify when they are engaging in hypothesis confirmation and advocacy instead of hypothesis testing.
Energy Technology Data Exchange (ETDEWEB)
Arkesteijn, L; Van Huis, G; Reckman, E
1987-01-01
The Dutch government aims to realize a wind power capacity in The Netherlands of 1000 MW in the year 2000. Environmental impacts of the erection of a large number of 200 kW and 1 MW wind turbines are studied. Four siting models have been developed in which attention is paid to environmental and economic aspects, the possibilities to introduce the electric power into the national power grid, and the availability and reliability of enough wind. Noise pollution and danger for birds are to be avoided. The choice between constructing wind parks, where a number of wind turbines are concentrated in a small area, and a more dispersed arrangement is difficult if all relevant factors are taken into consideration. Without government interference the target of 1000 MW in the year 2000 will probably not be attained. It is therefore desirable to practise an active energy policy in favor of wind energy, for which many approaches are possible.
Evidence against the energetic cost hypothesis for the short introns in highly expressed genes
Directory of Open Access Journals (Sweden)
Niu Deng-Ke
2008-05-01
Background: In animals, the moss Physcomitrella patens and the pollen of Arabidopsis thaliana, highly expressed genes have shorter introns than weakly expressed genes. A popular explanation for this is selection for transcription efficiency, which includes two sub-hypotheses: to minimize the energetic cost or to minimize the time cost. Results: In an individual human, different organs may differ up to hundreds of times in cell number (for example, a liver versus a hypothalamus). Considered at the individual level, a gene specifically expressed in a large organ is actually transcribed tens or hundreds of times more than a gene with a similar expression level (a measure of mRNA abundance per cell) specifically expressed in a small organ. According to the energetic cost hypothesis, the former should have shorter introns than the latter. However, in humans and mice we have not found significant differences in intron length between large-tissue/organ-specific genes and small-tissue/organ-specific genes with similar expression levels. Qualitative estimation shows that the deleterious effect (that is, the energetic burden) of long introns in highly expressed genes is too negligible to be efficiently selected against in mammals. Conclusion: The short introns in highly expressed genes should not be attributed to energy constraint. We evaluated evidence for the time cost hypothesis and other alternatives.
The potential for increased power from combining P-values testing the same hypothesis.
Ganju, Jitendra; Julie Ma, Guoguang
2017-02-01
The conventional approach to hypothesis testing for formal inference is to prespecify a single test statistic thought to be optimal. However, we usually have more than one test statistic in mind for testing the null hypothesis of no treatment effect but we do not know which one is the most powerful. Rather than relying on a single p-value, combining p-values from prespecified multiple test statistics can be used for inference. Combining functions include Fisher's combination test and the minimum p-value. Using randomization-based tests, the increase in power can be remarkable when compared with a single test and Simes's method. The versatility of the method is that it also applies when the number of covariates exceeds the number of observations. The increase in power is large enough to prefer combined p-values over a single p-value. The limitation is that the method does not provide an unbiased estimator of the treatment effect and does not apply to situations when the model includes treatment by covariate interaction.
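The combining functions named in the abstract are easy to compute. The sketch below uses scipy for Fisher's combination and implements minimum-p (Bonferroni) and Simes directly; the p-values are made up, and the paper's randomization-based calibration is not reproduced.

```python
import numpy as np
from scipy import stats

pvals = np.array([0.041, 0.130, 0.009, 0.380])  # made-up p-values
m = len(pvals)

# Fisher's combination: -2 * sum(log p) is chi-squared with 2m df under H0.
chi2_stat, p_fisher = stats.combine_pvalues(pvals, method="fisher")

# Minimum-p: Bonferroni-adjusted smallest p-value.
p_minp = min(1.0, m * pvals.min())

# Simes' global test: minimum over ordered p-values of m * p_(i) / i.
p_sorted = np.sort(pvals)
p_simes = (m * p_sorted / np.arange(1, m + 1)).min()

print(f"Fisher: {p_fisher:.4f}   min-p: {p_minp:.4f}   Simes: {p_simes:.4f}")
```

Fisher's test favors many weak signals, while min-p and Simes favor one strong signal; prespecifying and combining both, as the paper suggests, hedges against guessing the wrong regime.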
Updating the mild encephalitis hypothesis of schizophrenia.
Bechter, K
2013-04-05
Schizophrenia seems to be a heterogeneous disorder. Emerging evidence indicates that low level neuroinflammation (LLNI) may not occur infrequently. Many infectious agents with low overall pathogenicity are risk factors for psychoses including schizophrenia and for autoimmune disorders. According to the mild encephalitis (ME) hypothesis, LLNI represents the core pathogenetic mechanism in a schizophrenia subgroup that has syndromal overlap with other psychiatric disorders. ME may be triggered by infections, autoimmunity, toxicity, or trauma. A 'late hit' and gene-environment interaction are required to explain major findings about schizophrenia, and both aspects would be consistent with the ME hypothesis. Schizophrenia risk genes stay rather constant within populations despite a resulting low number of progeny; this may result from advantages associated with risk genes, e.g., an improved immune response, which may act protectively within changing environments, although they are associated with the disadvantage of increased susceptibility to psychotic disorders. Specific schizophrenic symptoms may arise with instances of LLNI when certain brain functional systems are involved, in addition to being shaped by pre-existing liability factors. Prodrome phase and the transition to a diseased status may be related to LLNI processes emerging and varying over time. The variability in the course of schizophrenia resembles the varying courses of autoimmune disorders, which result from three required factors: genes, the environment, and the immune system. Preliminary criteria for subgrouping neurodevelopmental, genetic, ME, and other types of schizophrenias are provided. A rare example of ME schizophrenia may be observed in Borna disease virus infection. Neurodevelopmental schizophrenia due to early infections has been estimated by others to explain approximately 30% of cases, but the underlying pathomechanisms of transition to disease remain in question. LLNI (e.g. from
Ji, Caleb; Khovanova, Tanya; Park, Robin; Song, Angela
2015-01-01
In this paper, we consider a game played on a rectangular $m \\times n$ gridded chocolate bar. Each move, a player breaks the bar along a grid line. Each move after that consists of taking any piece of chocolate and breaking it again along existing grid lines, until just $mn$ individual squares remain. This paper enumerates the number of ways to break an $m \\times n$ bar, which we call chocolate numbers, and introduces four new sequences related to these numbers. Using various techniques, we p...
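From the abstract's setup one can reconstruct a natural recursion for chocolate numbers (our reading of the counting problem, not necessarily the paper's notation): the first break splits the bar into two pieces, and the two remaining break sequences interleave, giving a binomial factor.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def chocolate(m, n):
    """Number of ways to fully break an m x n bar along grid lines,
    where each move breaks one existing piece and order matters.
    If the first break leaves pieces needing a and b further breaks,
    their break sequences interleave in comb(a + b, a) ways."""
    if m * n == 1:
        return 1
    total = 0
    for k in range(1, n):      # vertical first breaks: m x k and m x (n-k)
        a, b = m * k - 1, m * (n - k) - 1
        total += comb(a + b, a) * chocolate(m, k) * chocolate(m, n - k)
    for k in range(1, m):      # horizontal first breaks: k x n and (m-k) x n
        a, b = k * n - 1, (m - k) * n - 1
        total += comb(a + b, a) * chocolate(k, n) * chocolate(m - k, n)
    return total

print(chocolate(1, 2), chocolate(2, 2), chocolate(2, 3))  # 1 4 56
```

For a 1 x n bar this reduces to (n-1)!, since the n-1 cut lines can be made in any order.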
Andrews, George E
1994-01-01
Although mathematics majors are usually conversant with number theory by the time they have completed a course in abstract algebra, other undergraduates, especially those in education and the liberal arts, often need a more basic introduction to the topic.In this book the author solves the problem of maintaining the interest of students at both levels by offering a combinatorial approach to elementary number theory. In studying number theory from such a perspective, mathematics majors are spared repetition and provided with new insights, while other students benefit from the consequent simpl
DEFF Research Database (Denmark)
Levin, Bruce R; McCall, Ingrid C.; Perrot, Veronique
2017-01-01
We postulate that the inhibition of growth and low rates of mortality of bacteria exposed to ribosome-binding antibiotics deemed bacteriostatic can be attributed almost uniquely to these drugs reducing the number of ribosomes contributing to protein synthesis, i.e., the number of effective ribosomes. … For ribosome-targeting bacteriostatic antibiotics, the time before these bacteria start to grow again when the drugs are removed, referred to as the post-antibiotic effect (PAE), is markedly greater for constructs with fewer rrn operons than for those with more rrn operons. We interpret the results of these other experiments reported here as support for the hypothesis that the reduction in the effective number of ribosomes due to binding to these structures provides a sufficient explanation for the action of bacteriostatic antibiotics that target these structures.
Multiple hypothesis tracking for the cyber domain
Schwoegler, Stefan; Blackman, Sam; Holsopple, Jared; Hirsch, Michael J.
2011-09-01
This paper discusses how methods used for conventional multiple hypothesis tracking (MHT) can be extended to domain-agnostic tracking of entities from non-kinematic constraints such as those imposed by cyber attacks in a potentially dense false-alarm background. MHT is widely recognized as the premier method to avoid corrupting tracks with spurious data in the kinematic domain, but it has not been extensively applied to other problem domains. The traditional approach is to tightly couple track maintenance (prediction, gating, filtering, probabilistic pruning, and target confirmation) with hypothesis management (clustering, incompatibility maintenance, hypothesis formation, and N-association pruning). However, by separating the domain-specific track maintenance portion from the domain-agnostic hypothesis management piece, we can begin to apply the wealth of knowledge gained from ground and air tracking solutions to the cyber (and other) domains. These realizations led to the creation of Raytheon's Multiple Hypothesis Extensible Tracking Architecture (MHETA). In this paper, we showcase MHETA for the cyber domain, plugging in a well-established method, CUBRC's INFormation Engine for Real-time Decision making (INFERD), for the association portion of the MHT. The result is a CyberMHT. We demonstrate the power of MHETA-INFERD using simulated data. Using metrics from both the tracking and cyber domains, we show that while no tracker is perfect, by applying MHETA-INFERD, advanced non-kinematic tracks can be captured in an automated way, perform better than non-MHT approaches, and decrease analyst response time to cyber threats.
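To make the separation of hypothesis management from track maintenance concrete, here is a toy, hedged sketch: a hypothesis is a (score, assignments) pair, the scorer stands in for the domain-specific part (INFERD in the paper), and pruning keeps the best K. All labels and scores are invented for illustration.

```python
import heapq

# Toy hypothesis management: a hypothesis is a (log-score, assignments)
# pair, where assignments is a tuple of (observation, label) decisions.
# Track maintenance (gating, filtering) is abstracted into score_fn,
# mirroring the abstract's split between domain-specific tracking and
# domain-agnostic hypothesis management.

def update(hypotheses, obs, labels, score_fn, keep=10):
    """Extend every hypothesis with every label for the new observation,
    then prune back to the 'keep' best (probabilistic pruning)."""
    children = [
        (score + score_fn(obs, lab), assign + ((obs, lab),))
        for score, assign in hypotheses
        for lab in labels
    ]
    return heapq.nlargest(keep, children, key=lambda h: h[0])

def score_fn(obs, label):
    # Invented scores; a real system would query an engine such as INFERD.
    return -1.0 if label == "false-alarm" else -0.2 * obs

hyps = [(0.0, ())]
for obs in range(3):
    hyps = update(hyps, obs, ("attack-1", "attack-2", "false-alarm"), score_fn)
print(len(hyps), hyps[0])
```

Without the pruning step the hypothesis set grows as labels**observations, which is exactly the combinatorial explosion MHT systems must manage.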
Aminoglycoside antibiotics and autism: a speculative hypothesis
Directory of Open Access Journals (Sweden)
Manev Hari
2001-10-01
Background: Recently, it has been suspected that there is a relationship between therapy with some antibiotics and the onset of autism; but even more curious, some children benefited transiently from a subsequent treatment with a different antibiotic. Here, we speculate how aminoglycoside antibiotics might be associated with autism. Presentation: We hypothesize that aminoglycoside antibiotics could (a) trigger the autism syndrome in susceptible infants by causing the stop codon readthrough, i.e., a misreading of the genetic code of a hypothetical critical gene, and/or (b) improve autism symptoms by correcting the premature stop codon mutation in a hypothetical polymorphic gene linked to autism. Testing: Investigate, retrospectively, whether a link exists between aminoglycoside use (which is not extensive in children) and the onset of autism symptoms (hypothesis "a"), or between aminoglycoside use and improvement of these symptoms (hypothesis "b"). Whereas a prospective study to test hypothesis "a" is not ethically justifiable, a study could be designed to test hypothesis "b". Implications: It should be stressed that at this stage no direct evidence supports our speculative hypothesis and that its main purpose is to initiate development of new ideas that, eventually, would improve our understanding of the pathobiology of autism.
Barnes, John
2016-01-01
In this intriguing book, John Barnes takes us on a journey through aspects of numbers much as he took us on a geometrical journey in Gems of Geometry. Similarly originating from a series of lectures for adult students at Reading and Oxford University, this book touches a variety of amusing and fascinating topics regarding numbers and their uses both ancient and modern. The author intrigues and challenges his audience with both fundamental number topics such as prime numbers and cryptography, and themes of daily needs and pleasures such as counting one's assets, keeping track of time, and enjoying music. Puzzles and exercises at the end of each lecture offer additional inspiration, and numerous illustrations accompany the reader. Furthermore, a number of appendices provides in-depth insights into diverse topics such as Pascal’s triangle, the Rubik cube, Mersenne’s curious keyboards, and many others. A theme running through is the thought of what is our favourite number. Written in an engaging and witty sty...
Large scale tracking algorithms
Energy Technology Data Exchange (ETDEWEB)
Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Testing competing forms of the Milankovitch hypothesis
DEFF Research Database (Denmark)
Kaufmann, Robert K.; Juselius, Katarina
2016-01-01
We test competing forms of the Milankovitch hypothesis by estimating the coefficients and diagnostic statistics for a cointegrated vector autoregressive model that includes 10 climate variables and four exogenous variables for solar insolation. The estimates are consistent with the physical … ice volume and solar insolation. The estimated adjustment dynamics show that solar insolation affects an array of climate variables other than ice volume, each at a unique rate. This implies that previous efforts to test the strong form of the Milankovitch hypothesis by examining the relationship … that the latter is consistent with a weak form of the Milankovitch hypothesis and that it should be restated as follows: Internal climate dynamics impose perturbations on glacial cycles that are driven by solar insolation. Our results show that these perturbations are likely caused by slow adjustment between land
Rejecting the equilibrium-point hypothesis.
Gottlieb, G L
1998-01-01
The lambda version of the equilibrium-point (EP) hypothesis as developed by Feldman and colleagues has been widely used and cited with insufficient critical understanding. This article offers a small antidote to that lack. First, the hypothesis implicitly, unrealistically assumes identical transformations of lambda into muscle tension for antagonist muscles. Without that assumption, its definitions of command variables R, C, and lambda are incompatible and an EP is not defined exclusively by R nor is it unaffected by C. Second, the model assumes unrealistic and unphysiological parameters for the damping properties of the muscles and reflexes. Finally, the theory lacks rules for two of its three command variables. A theory of movement should offer insight into why we make movements the way we do and why we activate muscles in particular patterns. The EP hypothesis offers no unique ideas that are helpful in addressing either of these questions.
The linear hypothesis and radiation carcinogenesis
International Nuclear Information System (INIS)
Roberts, P.B.
1981-10-01
An assumption central to most estimations of the carcinogenic potential of low levels of ionising radiation is that the risk always increases in direct proportion to the dose received. This assumption (the linear hypothesis) has been both strongly defended and attacked on several counts. It appears unlikely that conclusive, direct evidence on the validity of the hypothesis will be forthcoming. We review the major indirect arguments used in the debate. All of them are subject to objections that can seriously weaken their case. In the present situation, retention of the linear hypothesis as the basis of extrapolations from high to low dose levels can lead to excessive fears, over-regulation and unnecessarily expensive protection measures. To offset these possibilities, support is given to suggestions urging a cut-off dose, probably some fraction of natural background, below which risks can be deemed acceptable
Rayleigh's hypothesis and the geometrical optics limit.
Elfouhaily, Tanos; Hahn, Thomas
2006-09-22
The Rayleigh hypothesis (RH) is often invoked in the theoretical and numerical treatment of rough surface scattering in order to decouple the analytical form of the scattered field. The hypothesis stipulates that the scattered field away from the surface can be extended down onto the rough surface even though it is formed by solely up-going waves. Traditionally this hypothesis is systematically used to derive the Volterra series under the small perturbation method which is equivalent to the low-frequency limit. In this Letter we demonstrate that the RH also carries the high-frequency or the geometrical optics limit, at least to first order. This finding has never been explicitly derived in the literature. Our result comforts the idea that the RH might be an exact solution under some constraints in the general case of random rough surfaces and not only in the case of small-slope deterministic periodic gratings.
The zinc dyshomeostasis hypothesis of Alzheimer's disease.
Craddock, Travis J A; Tuszynski, Jack A; Chopra, Deepak; Casey, Noel; Goldstein, Lee E; Hameroff, Stuart R; Tanzi, Rudolph E
2012-01-01
Alzheimer's disease (AD) is the most common form of dementia in the elderly. Hallmark AD neuropathology includes extracellular amyloid plaques composed largely of the amyloid-β protein (Aβ), intracellular neurofibrillary tangles (NFTs) composed of hyper-phosphorylated microtubule-associated protein tau (MAP-tau), and microtubule destabilization. Early-onset autosomal dominant AD genes are associated with excessive Aβ accumulation, however cognitive impairment best correlates with NFTs and disrupted microtubules. The mechanisms linking Aβ and NFT pathologies in AD are unknown. Here, we propose that sequestration of zinc by Aβ-amyloid deposits (Aβ oligomers and plaques) not only drives Aβ aggregation, but also disrupts zinc homeostasis in zinc-enriched brain regions important for memory and vulnerable to AD pathology, resulting in intra-neuronal zinc levels, which are either too low, or excessively high. To evaluate this hypothesis, we 1) used molecular modeling of zinc binding to the microtubule component protein tubulin, identifying specific, high-affinity zinc binding sites that influence side-to-side tubulin interaction, the sensitive link in microtubule polymerization and stability. We also 2) performed kinetic modeling showing zinc distribution in extra-neuronal Aβ deposits can reduce intra-neuronal zinc binding to microtubules, destabilizing microtubules. Finally, we 3) used metallomic imaging mass spectrometry (MIMS) to show anatomically-localized and age-dependent zinc dyshomeostasis in specific brain regions of Tg2576 transgenic mice, a model for AD. We found excess zinc in brain regions associated with memory processing and NFT pathology. Overall, we present a theoretical framework and support for a new theory of AD linking extra-neuronal Aβ amyloid to intra-neuronal NFTs and cognitive dysfunction. The connection, we propose, is based on β-amyloid-induced alterations in zinc ion concentration inside neurons affecting stability of polymerized
Neutrino number of the universe
International Nuclear Information System (INIS)
Kolb, E.W.
1981-01-01
The influence of grand unified theories on the lepton number of the universe is reviewed. A scenario is presented for the generation of a large (>> 1) lepton number and a small (<< 1) baryon number. 15 references
Number names and number understanding
DEFF Research Database (Denmark)
Ejersbo, Lisser Rye; Misfeldt, Morten
2014-01-01
This paper concerns the results from the first year of a three-year research project involving the relationship between Danish number names and their corresponding digits in the canonical base-10 system. The project aims to develop a system to help the students' understanding of the base-10 system … the Danish number names are more complicated than in other languages. Keywords: research project in grades 0 and 1 in a Danish school; base-10 system; two-digit number names; semiotic and cognitive perspectives.
The estrogen hypothesis of schizophrenia implicates glucose metabolism
DEFF Research Database (Denmark)
Olsen, Line; Hansen, Thomas; Jakobsen, Klaus D
2008-01-01
expression studies have indicated an equally large set of candidate genes that only partially overlap linkage genes. A thorough assessment, beyond the resolution of current GWA studies, of the disease risk conferred by the numerous schizophrenia candidate genes is a daunting and presently not feasible task. … We undertook these challenges by using an established clinical paradigm, the estrogen hypothesis of schizophrenia, as the criterion to select candidates among the numerous genes experimentally implicated in schizophrenia. Bioinformatic tools were used to build and prioritize the signaling networks implicated by the candidate genes resulting from the estrogen selection. We identified ten candidate genes using this approach that are all active in glucose metabolism and particularly in glycolysis. Thus, we tested the hypothesis that variants of the glycolytic genes are associated with schizophrenia
Applicability of Taylor's hypothesis in thermally driven turbulence
Kumar, Abhishek; Verma, Mahendra K.
2018-04-01
In this paper, we show that, in the presence of large-scale circulation (LSC), Taylor's hypothesis can be invoked to deduce the energy spectrum in thermal convection using real-space probes, a popular experimental tool. We perform numerical simulation of turbulent convection in a cube and observe that the velocity field follows Kolmogorov's spectrum (k^(-5/3)). We also record the velocity time series using real-space probes near the lateral walls. The corresponding frequency spectrum exhibits Kolmogorov's spectrum (f^(-5/3)), thus validating Taylor's hypothesis with the steady LSC playing the role of a mean velocity field. The aforementioned findings based on real-space probes provide valuable inputs for experimental measurements used for studying the spectrum of convective turbulence.
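A hedged numerical sketch of the probe analysis described above: we synthesize a time series with a known -5/3 power spectrum (standing in for the simulated probe record) and recover the slope from its frequency spectrum with scipy's Welch estimator. The sampling rate, segment length and fitting band are arbitrary choices, not the paper's settings.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)

# Synthetic stand-in for a probe's velocity time series: random phases
# with Fourier amplitudes shaped so the PSD follows f**(-5/3).
n, fs = 2**16, 1000.0
freqs = np.fft.rfftfreq(n, d=1/fs)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-5/6)            # |amp|**2 ~ f**(-5/3), DC zeroed
spectrum = amp * np.exp(2j * np.pi * rng.random(len(freqs)))
u = np.fft.irfft(spectrum, n)

# Estimate the frequency spectrum as an experimentalist would,
# then fit the log-log slope over a pretend inertial range.
f, pxx = welch(u, fs=fs, nperseg=4096)
band = (f > 1) & (f < 100)
slope = np.polyfit(np.log(f[band]), np.log(pxx[band]), 1)[0]
print(f"fitted spectral slope: {slope:.2f} (Kolmogorov: -1.67)")
```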
An omnibus test for the global null hypothesis.
Futschik, Andreas; Taus, Thomas; Zehetmayer, Sonja
2018-01-01
Global hypothesis tests are a useful tool in the context of clinical trials, genetic studies, or meta-analyses, when researchers are not interested in testing individual hypotheses, but in testing whether none of the hypotheses is false. There are several possibilities how to test the global null hypothesis when the individual null hypotheses are independent. If it is assumed that many of the individual null hypotheses are false, combination tests have been recommended to maximize power. If, however, it is assumed that only one or a few null hypotheses are false, global tests based on individual test statistics are more powerful (e.g. Bonferroni or Simes test). However, usually there is no a priori knowledge on the number of false individual null hypotheses. We therefore propose an omnibus test based on cumulative sums of the transformed p-values. We show that this test yields an impressive overall performance. The proposed method is implemented in an R-package called omnibus.
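The exact statistic of the omnibus R package is not reproduced here. As a hedged sketch of the idea, the following takes cumulative sums of the largest -log-transformed p-values and calibrates the maximum by simulating uniform p-values under the global null; the standardization and simulation settings are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def omnibus_stat(pvals):
    # Cumulative sums of the largest -log(p) values, standardized by
    # sqrt(k); the maximum over k adapts to few or many false nulls.
    s = np.sort(-np.log(pvals))[::-1]
    return np.max(np.cumsum(s) / np.sqrt(np.arange(1, len(s) + 1)))

def omnibus_pvalue(pvals, n_sim=20_000):
    # Monte Carlo calibration under the global null of independent
    # uniform p-values, matching the independence assumption above.
    m, obs = len(pvals), omnibus_stat(pvals)
    null = np.array([omnibus_stat(rng.uniform(size=m)) for _ in range(n_sim)])
    return float(np.mean(null >= obs))

# One strong signal among nulls (min-p territory) and a diffuse signal
# (Fisher territory), both handled by the same omnibus statistic.
print(omnibus_pvalue(np.r_[0.0001, rng.uniform(size=19)]))
print(omnibus_pvalue(np.r_[rng.uniform(0, 0.1, size=10), rng.uniform(size=10)]))
```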
A test of the domain-specific acculturation strategy hypothesis.
Miller, Matthew J; Yang, Minji; Lim, Robert H; Hui, Kayi; Choi, Na-Yeun; Fan, Xiaoyan; Lin, Li-Ling; Grome, Rebekah E; Farrell, Jerome A; Blackmon, Sha'kema
2013-01-01
Acculturation literature has evolved over the past several decades and has highlighted the dynamic ways in which individuals negotiate experiences in multiple cultural contexts. The present study extends this literature by testing M. J. Miller and R. H. Lim's (2010) domain-specific acculturation strategy hypothesis, namely that individuals might use different acculturation strategies (i.e., assimilated, bicultural, separated, and marginalized strategies; J. W. Berry, 2003) across behavioral and values domains, in 3 independent cluster analyses with Asian American participants. Present findings supported the domain-specific acculturation strategy hypothesis, as 67% to 72% of participants from the 3 independent samples used different strategies across behavioral and values domains. Consistent with theory, a number of acculturation strategy cluster group differences emerged across generational status, acculturative stress, mental health symptoms, and attitudes toward seeking professional psychological help. Study limitations and future directions for research are discussed.
Isotopic Resonance Hypothesis: Experimental Verification by Escherichia coli Growth Measurements
Xie, Xueshu; Zubarev, Roman A.
2015-03-01
Isotopic composition of reactants affects the rates of chemical and biochemical reactions. As a rule, enrichment of heavy stable isotopes leads to progressively slower reactions. But the recent isotopic resonance hypothesis suggests that the dependence of the reaction rate upon the enrichment degree is not monotonous. Instead, at some "resonance" isotopic compositions the kinetics increases, while at "off-resonance" compositions the same reactions progress slower. To test the predictions of this hypothesis for the elements C, H, N and O, we designed a precise (standard error +/-0.05%) experiment that measures the parameters of bacterial growth in minimal media with varying isotopic composition. A number of predicted resonance conditions were tested, with significant enhancements in kinetics discovered at these conditions. The combined statistics extremely strongly supports the validity of the isotopic resonance phenomenon (p < …), with potential implications for biotechnology, medicine, chemistry and other areas.
Directory of Open Access Journals (Sweden)
Theodore M. Porter
2012-12-01
The struggle over cure rate measures in nineteenth-century asylums provides an exemplary instance of how, when used for official assessments of institutions, these numbers become sites of contestation. The evasion of goals and corruption of measures tends to make these numbers "funny" in the sense of becoming dishonest, while the mismatch between boring, technical appearances and cunning backstage manipulations supplies dark humor. The dangers are evident in recent efforts to decentralize the functions of governments and corporations using incentives based on quantified targets.
Murty, M Ram
2014-01-01
This book provides an introduction to the topic of transcendental numbers for upper-level undergraduate and graduate students. The text is constructed to support a full course on the subject, including descriptions of both relevant theorems and their applications. While the first part of the book focuses on introducing key concepts, the second part presents more complex material, including applications of Baker’s theorem, Schanuel’s conjecture, and Schneider’s theorem. These later chapters may be of interest to researchers interested in examining the relationship between transcendence and L-functions. Readers of this text should possess basic knowledge of complex analysis and elementary algebraic number theory.
Templates, Numbers & Watercolors.
Clemesha, David J.
1990-01-01
Describes how a second-grade class used large templates to draw and paint five-digit numbers. The lesson integrated artistic knowledge and vocabulary with their mathematics lesson in place value. Students learned how draftspeople use templates, and they studied number paintings by Charles Demuth and Jasper Johns. (KM)
Indian Academy of Sciences (India)
this is a characteristic difference between finite and infinite sets and created an immensely useful branch of mathematics based on this idea which had a great impact on the whole of mathematics. For example, the question of what is a number (finite or infinite) is almost a philosophical one. However Cantor's work turned it ...
On the generalized gravi-magnetic hypothesis
International Nuclear Information System (INIS)
Massa, C.
1989-01-01
According to a generalization of the gravi-magnetic hypothesis (GMH) any neutral mass moving in a curvilinear path with respect to an inertial frame creates a magnetic field, dependent on the curvature radius of the path. A simple astrophysical consequence of the generalized GMH is suggested considering the special cases of binary pulsars and binary neutron stars
Remarks about the hypothesis of limiting fragmentation
International Nuclear Information System (INIS)
Chou, T.T.; Yang, C.N.
1987-01-01
Remarks are made about the hypothesis of limiting fragmentation. In particular, the concept of favored and disfavored fragment distribution is introduced. Also, a sum rule is proved leading to a useful quantity called energy-fragmentation fraction. (author). 11 refs, 1 fig., 2 tabs
Multiple hypothesis clustering in radar plot extraction
Huizing, A.G.; Theil, A.; Dorp, Ph. van; Ligthart, L.P.
1995-01-01
False plots and plots with inaccurate range and Doppler estimates may severely degrade the performance of tracking algorithms in radar systems. This paper describes how a multiple hypothesis clustering technique can be applied to mitigate the problems involved in plot extraction. The measures of
The (not so) immortal strand hypothesis
Directory of Open Access Journals (Sweden)
Cristian Tomasetti
2015-03-01
Significance: Utilizing an approach that is fundamentally different from previous efforts to confirm or refute the immortal strand hypothesis, we provide evidence against non-random segregation of DNA during stem cell replication. Our results strongly suggest that parental DNA is passed randomly to stem cell daughters and provide new insight into the mechanism of DNA replication in stem cells.
A Developmental Study of the Infrahumanization Hypothesis
Martin, John; Bennett, Mark; Murray, Wayne S.
2008-01-01
Intergroup attitudes in children were examined based on Leyens' "infrahumanization hypothesis". This suggests that some uniquely human emotions, such as shame and guilt (secondary emotions), are reserved for the in-group, whilst other emotions that are not uniquely human and are shared with animals, such as anger and pleasure (primary…
Morbidity and Infant Development: A Hypothesis.
Pollitt, Ernesto
1983-01-01
Results of a study conducted in 14 villages of Sui Lin Township, Taiwan, suggest the hypothesis that, under conditions of extreme economic impoverishment and among children within populations where energy protein malnutrition is endemic, there is an inverse relationship between incidence of morbidity in infancy and measures of motor and mental…
Diagnostic Hypothesis Generation and Human Judgment
Thomas, Rick P.; Dougherty, Michael R.; Sprenger, Amber M.; Harbison, J. Isaiah
2008-01-01
Diagnostic hypothesis-generation processes are ubiquitous in human reasoning. For example, clinicians generate disease hypotheses to explain symptoms and help guide treatment, auditors generate hypotheses for identifying sources of accounting errors, and laypeople generate hypotheses to explain patterns of information (i.e., data) in the…
Multi-hypothesis distributed stereo video coding
DEFF Research Database (Denmark)
Salmistraro, Matteo; Zamarin, Marco; Forchhammer, Søren
2013-01-01
for stereo sequences, exploiting an interpolated intra-view SI and two inter-view SIs. The quality of the SI has a major impact on the DVC Rate-Distortion (RD) performance. As the inter-view SIs individually present lower RD performance compared with the intra-view SI, we propose multi-hypothesis decoding...
[Resonance hypothesis of heart rate variability origin].
Sheĭkh-Zade, Iu R; Mukhambetaliev, G Kh; Cherednik, I L
2009-09-01
A hypothesis is advanced that heart rate variability reflects beat-to-beat regulation of cardiac cycle duration, which ensures resonance interaction between respiratory fluctuations and the intrinsic fluctuations of arterial system volume so as to minimize the power expenses of the cardiorespiratory system. Myogenic, parasympathetic and sympathetic mechanisms of heart rate variability are described.
In Defense of Chi's Ontological Incompatibility Hypothesis
Slotta, James D.
2011-01-01
This article responds to an article by A. Gupta, D. Hammer, and E. F. Redish (2010) that asserts that M. T. H. Chi's (1992, 2005) hypothesis of an "ontological commitment" in conceptual development is fundamentally flawed. In this article, I argue that Chi's theoretical perspective is still very much intact and that the critique offered by Gupta…
Vacuum counterexamples to the cosmic censorship hypothesis
International Nuclear Information System (INIS)
Miller, B.D.
1981-01-01
In cylindrically symmetric vacuum spacetimes it is possible to specify nonsingular initial conditions such that timelike singularities will (necessarily) evolve from these conditions. Examples are given; the spacetimes are somewhat analogous to one of the spherically symmetric counterexamples to the cosmic censorship hypothesis
A novel hypothesis splitting method implementation for multi-hypothesis filters
DEFF Research Database (Denmark)
Bayramoglu, Enis; Ravn, Ole; Andersen, Nils Axel
2013-01-01
The paper presents a multi-hypothesis filter library featuring a novel method for splitting Gaussians into ones with smaller variances. The library is written in C++ for high performance and the source code is open and free. The multi-hypothesis filters commonly approximate the distribution tran
Kragten, N.; Rözer, J.
The income inequality hypothesis states that income inequality has a negative effect on individuals' health, partially because it reduces social trust. This article aims to critically assess the income inequality hypothesis by comparing several analytical strategies, namely OLS regression,
Einstein's Revolutionary Light-Quantum Hypothesis
Stuewer, Roger H.
2005-05-01
The paper in which Albert Einstein proposed his light-quantum hypothesis was the only one of his great papers of 1905 that he himself termed "revolutionary." Contrary to widespread belief, Einstein did not propose his light-quantum hypothesis "to explain the photoelectric effect." Instead, he based his argument for light quanta on the statistical interpretation of the second law of thermodynamics, with the photoelectric effect being only one of three phenomena that he offered as possible experimental support for it. I will discuss Einstein's light-quantum hypothesis of 1905 and his introduction of the wave-particle duality in 1909 and then turn to the reception of his work on light quanta by his contemporaries. We will examine the reasons that prominent physicists advanced to reject Einstein's light-quantum hypothesis in succeeding years. Those physicists included Robert A. Millikan, even though he provided convincing experimental proof of the validity of Einstein's equation of the photoelectric effect in 1915. The turning point came after Arthur Holly Compton discovered the Compton effect in late 1922, but even then Compton's discovery was contested both on experimental and on theoretical grounds. Niels Bohr, in particular, had never accepted the reality of light quanta and now, in 1924, proposed a theory, the Bohr-Kramers-Slater theory, which assumed that energy and momentum were conserved only statistically in microscopic interactions. Only after that theory was disproved experimentally in 1925 was Einstein's revolutionary light-quantum hypothesis generally accepted by physicists, a full two decades after Einstein had proposed it.
A Dopamine Hypothesis of Autism Spectrum Disorder.
Pavăl, Denis
2017-01-01
Autism spectrum disorder (ASD) comprises a group of neurodevelopmental disorders characterized by social deficits and stereotyped behaviors. While several theories have emerged, the pathogenesis of ASD remains unknown. Although studies report dopamine signaling abnormalities in autistic patients, a coherent dopamine hypothesis which could link neurobiology to behavior in ASD is currently lacking. In this paper, we present such a hypothesis by proposing that autistic behavior arises from dysfunctions in the midbrain dopaminergic system. We hypothesize that a dysfunction of the mesocorticolimbic circuit leads to social deficits, while a dysfunction of the nigrostriatal circuit leads to stereotyped behaviors. Furthermore, we discuss 2 key predictions of our hypothesis, with emphasis on clinical and therapeutic aspects. First, we argue that dopaminergic dysfunctions in the same circuits should associate with autistic-like behavior in nonautistic subjects. Concerning this, we discuss the case of PANDAS (pediatric autoimmune neuropsychiatric disorder associated with streptococcal infections) which displays behaviors similar to those of ASD, presumed to arise from dopaminergic dysfunctions. Second, we argue that providing dopamine modulators to autistic subjects should lead to a behavioral improvement. Regarding this, we present clinical studies of dopamine antagonists which seem to have improving effects on autistic behavior. Furthermore, we explore the means of testing our hypothesis by using neuroreceptor imaging, which could provide comprehensive evidence for dopamine signaling dysfunctions in autistic subjects. Lastly, we discuss the limitations of our hypothesis. Along these lines, we aim to provide a dopaminergic model of ASD which might lead to a better understanding of the ASD pathogenesis. © 2017 S. Karger AG, Basel.
Bongers, Frans; Poorter, Lourens; Hawthorne, William D; Sheil, Douglas
2009-08-01
The intermediate disturbance hypothesis (IDH) predicts that local species diversity is maximal at an intermediate level of disturbance. Although it was developed to explain species maintenance and diversity patterns in species-rich ecosystems such as tropical forests, tests of the IDH in tropical forests remain scarce, small-scale and contentious. We use an unprecedented large-scale dataset (2504 one-hectare plots and 331,567 trees) to examine whether the IDH explains tree diversity variation within wet, moist and dry tropical forests, and we analyse the underlying mechanism by determining responses within functional species groups. We find that disturbance explains more variation in diversity of dry than wet tropical forests. Pioneer species numbers increase with disturbance, shade-tolerant species decrease and intermediate species are indifferent. While diversity indeed peaks at intermediate disturbance levels, little variation is explained outside dry forests, and disturbance is less important for species richness patterns in wet tropical rain forests than previously thought.
van der Bijl, Wouter; Kolm, Niclas
2016-06-01
A growing number of studies have found that large brains may help animals survive by avoiding predation. These studies provide an alternative explanation for existing correlative evidence for one of the dominant hypotheses regarding the evolution of brain size in animals, the social brain hypothesis (SBH). The SBH proposes that social complexity is a major evolutionary driver of large brains. However, if predation directly selects both for large brains and for higher levels of sociality, correlations between sociality and brain size may be spurious. We argue that tests of the SBH should take direct effects of predation into account, either by explicitly including them in comparative analyses or by pinpointing the brain-behavior-fitness pathway through which the SBH operates. Existing data and theory on social behavior can then be used to identify precise candidate mechanisms and formulate new testable predictions. © 2016 WILEY Periodicals, Inc.
Directory of Open Access Journals (Sweden)
Hao Sun
Full Text Available Tai people are widely distributed in Thailand, Laos and southwestern China and constitute a large population of Southeast Asia. Although most anthropologists and historians agree that modern Tai people come from southwestern China and northern Thailand, the place from which they historically migrated remains controversial. Three popular hypotheses have been proposed: a northern origin, a southern origin, or an indigenous origin. We compared the genetic relationships between the Tai in China and their "siblings" to test these hypotheses by analyzing 10 autosomal microsatellites. The genetic data of 916 samples from 19 populations were analyzed in this survey. The autosomal STR data from 15 of the 19 populations came from our previous study (Lin et al., 2010); 194 samples from four additional populations were genotyped in this study: Han (Yunnan), Dai (Dehong), Dai (Yuxi) and Mongolian. The results of genetic distance comparisons, genetic structure analyses and admixture analyses all indicate that the populations implied by the northern origin hypothesis show large genetic distances from, and are clearly differentiated from, the Tai. The simulation-based ABC analysis also indicates this: the posterior probability of the northern origin hypothesis is just 0.04 [95% CI: 0.01-0.06]. Conversely, genetic relationships were very close between the Tai and the populations implied by the southern origin and indigenous origin hypotheses. Simulation-based ABC analyses were also used to distinguish the southern origin hypothesis from the indigenous origin hypothesis. The results indicate that the posterior probability of the southern origin hypothesis [0.640, 95% CI: 0.524-0.757] is greater than that of the indigenous origin hypothesis [0.324, 95% CI: 0.211-0.438]. We therefore propose that the genetic evidence does not support the hypothesis of a northern origin; rather, it indicates that the southern origin hypothesis has a higher probability than the other two hypotheses.
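The model-choice step described above can be illustrated with a minimal rejection-ABC sketch in Python. Everything here is a hypothetical stand-in (the simulate function, the one-dimensional summary statistic, the tolerance); the study's actual simulations operate on microsatellite data.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(model, n=100):
        # Hypothetical stand-in for the study's population-genetic simulator:
        # each candidate origin just shifts the mean of a 1-D summary statistic.
        shift = {"north": 1.0, "south": 0.1, "indigenous": 0.2}[model]
        return rng.normal(shift, 1.0, n).mean()

    observed = 0.15                      # hypothetical observed summary statistic
    models = ["north", "south", "indigenous"]
    tol, accepted = 0.1, {m: 0 for m in models}

    # Rejection ABC with a uniform prior over models: keep simulations whose
    # summary lands within the tolerance of the observed value.
    for _ in range(20_000):
        m = str(rng.choice(models))
        if abs(simulate(m) - observed) < tol:
            accepted[m] += 1

    total = sum(accepted.values())
    print({m: round(accepted[m] / total, 3) for m in models})

With a uniform prior over models, the accepted fractions approximate posterior model probabilities, which is how the quoted 0.04 / 0.640 / 0.324 figures are to be read.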
A Hypothesis-Driven Approach to Site Investigation
Nowak, W.
2008-12-01
Variability of subsurface formations and the scarcity of data lead to the notion of aquifer parameters as geostatistical random variables. Given an information need and limited resources for field campaigns, site investigation is often put into the context of optimal design. In optimal design, the types, numbers and positions of samples are optimized under case-specific objectives to meet the information needs. Past studies feature optimal data worth (balancing maximum financial profit in an engineering task versus the cost of additional sampling), or aim at a minimum prediction uncertainty of stochastic models for a prescribed investigation budget. Recent studies also account for other sources of uncertainty outside the hydrogeological range, such as uncertain toxicity, ingestion and behavioral parameters of the affected population when predicting the human health risk from groundwater contaminations. The current study looks at optimal site investigation from a new angle. Answering a yes/no question under uncertainty directly requires recasting the original question as a hypothesis test; otherwise, the resulting answer would carry false confidence. A straightforward example is whether a recent contaminant spill will cause contaminant concentrations in excess of a legal limit at a nearby drinking water well. This question can only be answered down to a specified chance of error, i.e., based on the significance level used in hypothesis tests. Optimal design is placed into the hypothesis-driven context by using the chance of providing a false yes/no answer as the new criterion to be minimized. Different configurations apply for one-sided and two-sided hypothesis tests. If a false answer entails financial liability, the hypothesis-driven context can be re-cast in the context of data worth. The remaining difference is that failure is a hard constraint in the data worth context versus a monetary punishment term in the hypothesis-driven context. The basic principle
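The recast yes/no question can be sketched compactly. Below is a minimal Monte Carlo illustration in Python, assuming a hypothetical one-parameter "transport model"; the real setting involves geostatistical random fields rather than a scalar parameter.

    import numpy as np

    rng = np.random.default_rng(1)

    def predicted_concentration(log_k):
        # hypothetical stand-in for a groundwater transport model
        return 80.0 * np.exp(0.5 * log_k)

    legal_limit, alpha = 100.0, 0.05
    params = rng.normal(0.0, 0.4, 10_000)   # uncertain aquifer parameter draws
    conc = predicted_concentration(params)
    p_exceed = np.mean(conc > legal_limit)

    # Declare the well safe only if the exceedance probability stays below
    # the significance level; otherwise no confident "no" can be given.
    verdict = "safe" if p_exceed < alpha else "cannot declare safe"
    print(f"P(exceed limit) = {p_exceed:.3f} -> {verdict}")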
International Nuclear Information System (INIS)
Todorov, T.D.
1980-01-01
The set of asymptotic numbers A as a system of generalized numbers including the system of real numbers R, as well as infinitely small (infinitesimals) and infinitely large numbers, is introduced. The detailed algebraic properties of A, which are unusual as compared with the known algebraic structures, are studied. It is proved that the set of asymptotic numbers A cannot be isomorphically embedded as a subspace in any group, ring or field, but some particular subsets of asymptotic numbers are shown to be groups, rings, and fields. The algebraic operation, additive and multiplicative forms, and the algebraic properties are constructed in an appropriate way. It is shown that the asymptotic numbers give rise to a new type of generalized functions quite analogous to the distributions of Schwartz allowing, however, the operation of multiplication. A possible application of these functions to quantum theory is discussed
Tests of the Giant Impact Hypothesis
Jones, J. H.
1998-01-01
The giant impact hypothesis has gained popularity as a means of explaining a volatile-depleted Moon that still has a chemical affinity to the Earth. As Taylor's Axiom decrees, the best models of lunar origin are testable, but this is difficult with the giant impact model. The energy associated with the impact would be sufficient to totally melt and partially vaporize the Earth. And this means that there should be no geological vestige of earlier times. Accordingly, it is important to devise tests that may be used to evaluate the giant impact hypothesis. Three such tests are discussed here. None of these is supportive of the giant impact model, but neither do they disprove it.
The discovered preference hypothesis - an empirical test
DEFF Research Database (Denmark)
Lundhede, Thomas; Ladenburg, Jacob; Olsen, Søren Bøye
Using stated preference methods for valuation of non-market goods is known to be vulnerable to a range of biases. Some authors claim that these so-called anomalies in effect render the methods useless for the purpose. However, the Discovered Preference Hypothesis, as put forth by Plott [31], offers...... an interpretation and explanation of biases which entails that the stated preference methods need not be completely written off. In this paper we conduct a test of the validity and relevance of the DPH interpretation of biases. In a choice experiment concerning preferences for protection of Danish nature areas...... as respondents evaluate more and more choice sets. This finding supports the Discovered Preference Hypothesis interpretation and explanation of starting point bias....
The Hypothesis-Driven Physical Examination.
Garibaldi, Brian T; Olson, Andrew P J
2018-05-01
The physical examination remains a vital part of the clinical encounter. However, physical examination skills have declined in recent years, in part because of decreased time at the bedside. Many clinicians question the relevance of physical examinations in the age of technology. A hypothesis-driven approach to teaching and practicing the physical examination emphasizes the performance of maneuvers that can alter the likelihood of disease. Likelihood ratios are diagnostic weights that allow clinicians to estimate the post-test probability of disease. This hypothesis-driven approach to the physical examination increases its value and efficiency, while preserving its cultural role in the patient-physician relationship. Copyright © 2017 Elsevier Inc. All rights reserved.
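The likelihood-ratio bookkeeping behind this approach is just Bayes' rule on the odds scale; a minimal sketch (the numbers are illustrative, not from the paper):

    def post_test_probability(pre_test_prob, likelihood_ratio):
        # Bayes' rule on the odds scale: post-odds = pre-odds * LR.
        pre_odds = pre_test_prob / (1.0 - pre_test_prob)
        post_odds = pre_odds * likelihood_ratio
        return post_odds / (1.0 + post_odds)

    # Example: 30% pre-test probability, then a finding with LR+ = 5.
    print(post_test_probability(0.30, 5.0))   # ~0.68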
MOLIERE: Automatic Biomedical Hypothesis Generation System.
Sybrandt, Justin; Shtutman, Michael; Safro, Ilya
2017-08-01
Hypothesis generation is becoming a crucial time-saving technique which allows biomedical researchers to quickly discover implicit connections between important concepts. Typically, these systems operate on domain-specific fractions of public medical data. MOLIERE, in contrast, utilizes information from over 24.5 million documents. At the heart of our approach lies a multi-modal and multi-relational network of biomedical objects extracted from several heterogeneous datasets from the National Center for Biotechnology Information (NCBI). These objects include but are not limited to scientific papers, keywords, genes, proteins, diseases, and diagnoses. We model hypotheses using Latent Dirichlet Allocation applied on abstracts found near shortest paths discovered within this network, and demonstrate the effectiveness of MOLIERE by performing hypothesis generation on historical data. Our network, implementation, and resulting data are all publicly available for the broad scientific community.
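A conceptual sketch of the pipeline (toy graph, placeholder abstracts, and off-the-shelf networkx/scikit-learn calls; not MOLIERE's actual implementation):

    import networkx as nx
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Toy concept network; the real one links papers, keywords, genes,
    # proteins, diseases and diagnoses across NCBI datasets.
    G = nx.Graph()
    G.add_edges_from([("geneA", "proteinX"), ("proteinX", "diseaseY"),
                      ("geneA", "keyword1"), ("keyword1", "diseaseY")])
    path = nx.shortest_path(G, "geneA", "diseaseY")

    # Placeholder "abstracts found near the shortest path".
    abstracts = ["geneA regulates proteinX in tissue",
                 "proteinX levels are altered in diseaseY patients"]

    X = CountVectorizer().fit_transform(abstracts)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    print(path, lda.components_.shape)   # path nodes and topic-term matrix shape

The real system runs this idea over tens of millions of documents and a far richer multi-relational network.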
The Method of Hypothesis in Plato's Philosophy
Directory of Open Access Journals (Sweden)
Malihe Aboie Mehrizi
2016-09-01
Full Text Available The article examines the method of hypothesis in Plato's philosophy. The method is examined in the three dialogues in which it is explicitly indicated: Meno, Phaedo and Republic. The article traces the change in Plato's attitude towards the position and use of the method of hypothesis within his philosophy. In Meno, drawing on geometry, Plato attempts to introduce a method that can be used in the realm of philosophy. But ultimately, in Republic, Plato's special attention to the method and its importance in philosophical investigation leads him to revise it. Here Plato finally introduces the particular method of philosophy, i.e., the dialectic.
Debates—Hypothesis testing in hydrology: Introduction
Blöschl, Günter
2017-03-01
This paper introduces the papers in the "Debates—Hypothesis testing in hydrology" series. The four articles in the series discuss whether and how the process of testing hypotheses leads to progress in hydrology. Repeated experiments with controlled boundary conditions are rarely feasible in hydrology. Research is therefore not easily aligned with the classical scientific method of testing hypotheses. Hypotheses in hydrology are often enshrined in computer models which are tested against observed data. Testability may be limited due to model complexity and data uncertainty. All four articles suggest that hypothesis testing has contributed to progress in hydrology and is needed in the future. However, the procedure is usually not as systematic as the philosophy of science suggests. A greater emphasis on a creative reasoning process on the basis of clues and explorative analyses is therefore needed.
Multi-agent sequential hypothesis testing
Kim, Kwang-Ki K.
2014-12-15
This paper considers multi-agent sequential hypothesis testing and presents a framework for strategic learning in sequential games with explicit consideration of both temporal and spatial coordination. The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to (a) the belief states for the case of conditional independent private noisy measurements that are also assumed to be independent identically distributed over time, and (b) the information states for the case of correlated private noisy measurements. A sequential investment game of strategic coordination and delay is also discussed as an application of the proposed strategic learning rules.
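Wald's single-agent sequential probability ratio test is the building block underneath such schemes; the multi-agent measurement, delay, and disagreement costs of the paper are not modeled in this sketch:

    import numpy as np

    def sprt(stream, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.05, beta=0.05):
        # Wald's SPRT for H0: mean mu0 vs H1: mean mu1, Gaussian noise.
        upper = np.log((1 - beta) / alpha)   # cross -> accept H1
        lower = np.log(beta / (1 - alpha))   # cross -> accept H0
        llr, t = 0.0, 0
        for t, x in enumerate(stream, start=1):
            # log-likelihood-ratio increment for one observation
            llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
            if llr >= upper:
                return "H1", t
            if llr <= lower:
                return "H0", t
        return "undecided", t

    rng = np.random.default_rng(2)
    print(sprt(rng.normal(1.0, 1.0, 200)))   # data generated under H1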
Hypothesis testing of scientific Monte Carlo calculations
Wallerberger, Markus; Gull, Emanuel
2017-11-01
The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
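A minimal example of the idea: treat a Monte Carlo estimate with a known exact answer as a null hypothesis and apply an ordinary two-sided z-test (illustrative, not the authors' test suite):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)

    # Monte Carlo estimate of pi, with the known exact value as the null.
    n = 100_000
    pts = rng.random((n, 2))
    hits = (pts ** 2).sum(axis=1) < 1.0
    est = 4 * hits.mean()
    sem = 4 * hits.std(ddof=1) / np.sqrt(n)

    z = (est - np.pi) / sem
    p_value = 2 * norm.sf(abs(z))   # two-sided test
    print(f"estimate = {est:.4f}, z = {z:.2f}, p = {p_value:.3f}")
    # A consistently tiny p-value across runs would flag a bug or
    # underestimated error bars.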
Exploring heterogeneous market hypothesis using realized volatility
Chin, Wen Cheong; Isa, Zaidi; Mohd Nor, Abu Hassan Shaari
2013-04-01
This study investigates the heterogeneous market hypothesis using high frequency data. The cascaded heterogeneous trading activities with different time durations are modelled by the heterogeneous autoregressive framework. The empirical study indicated the presence of long memory behaviour and predictability elements in the financial time series which supported the heterogeneous market hypothesis. Besides the common sum-of-square intraday realized volatility, we also advocated two power variation realized volatilities in forecast evaluation and risk measurement in order to overcome the possible abrupt jumps during the credit crisis. Finally, the empirical results are used in determining the market risk using the value-at-risk approach. The findings of this study have implications for informational market efficiency analysis, portfolio strategies and risk management.
Quadratic Forms and Semiclassical Eigenfunction Hypothesis for Flat Tori
T. Sardari, Naser
2018-03-01
Let Q(X) be any integral primitive positive definite quadratic form in k variables, where k ≥ 4, with discriminant D. For any integer n, we give an upper bound on the number of integral solutions of Q(X) = n in terms of n, k, and D. As a corollary, we prove a conjecture of Lester and Rudnick on the small scale equidistribution of almost all functions belonging to any orthonormal basis of a given eigenspace of the Laplacian on the flat torus T^d for d ≥ 5. This conjecture is motivated by the work of Berry [2,3] on the semiclassical eigenfunction hypothesis.
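For intuition, the quantity being bounded is the representation number of the form; a brute-force counter for the toy case of the sum of four squares (illustrative only, and far from the analytic bound in the paper):

    from itertools import product

    # Brute-force representation number for the toy form
    # Q(x) = x1^2 + x2^2 + x3^2 + x4^2 (k = 4); the paper bounds such
    # counts analytically in terms of n, k and the discriminant.
    def count_solutions(n, k=4):
        bound = int(n ** 0.5) + 1
        return sum(1 for x in product(range(-bound, bound + 1), repeat=k)
                   if sum(v * v for v in x) == n)

    # 24 solutions: the signed permutations of (2,0,0,0) and (1,1,1,1).
    print(count_solutions(4))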
Supporting hypothesis generation by learners exploring an interactive computer simulation
van Joolingen, Wouter R.; de Jong, Ton
1992-01-01
Computer simulations provide environments enabling exploratory learning. Research has shown that these types of learning environments are promising applications of computer assisted learning but also that they introduce complex learning settings, involving a large number of learning processes. This
Water Taxation and the Double Dividend Hypothesis
Nicholas Kilimani
2014-01-01
The double dividend hypothesis contends that environmental taxes have the potential to yield multiple benefits for the economy. However, empirical evidence of the potential impacts of environmental taxation in developing countries is still limited. This paper seeks to contribute to the literature by exploring the impact of a water tax in a developing country context, with Uganda as a case study. Policy makers in Uganda are exploring ways of raising revenue by taxing environmental goods such a...
[Working memory, phonological awareness and spelling hypothesis].
Gindri, Gigiane; Keske-Soares, Márcia; Mota, Helena Bolli
2007-01-01
To verify the relationship between working memory, phonological awareness and spelling hypothesis in pre-school children and first graders. Participants of this study were 90 students from state schools who presented typical linguistic development. Forty students were preschoolers, with an average age of six, and 50 students were first graders, with an average age of seven. Participants were submitted to an evaluation of working memory abilities based on the Working Memory Model (Baddeley, 2000), involving the phonological loop. The phonological loop was evaluated using the Auditory Sequential Test, subtest 5 of the Illinois Test of Psycholinguistic Abilities (ITPA), Brazilian version (Bogossian & Santos, 1977), and the Meaningless Words Memory Test (Kessler, 1997). Phonological awareness abilities were investigated using the Phonological Awareness: Instrument of Sequential Assessment (CONFIAS - Moojen et al., 2003), involving syllabic and phonemic awareness tasks. Writing was characterized according to Ferreiro & Teberosky (1999). Preschoolers were able to repeat sequences of 4.80 digits and 4.30 syllables on average. Regarding phonological awareness, their performance was 19.68 at the syllabic level and 8.58 at the phonemic level. Most of the preschoolers demonstrated a pre-syllabic writing hypothesis. First graders repeated, on average, sequences of 5.06 digits and 4.56 syllables. These children presented phonological awareness scores of 31.12 at the syllabic level and 16.18 at the phonemic level, and demonstrated an alphabetic writing hypothesis. Performance on working memory, phonological awareness and spelling level are inter-related, as well as being related to chronological age, development and schooling.
Privacy on Hypothesis Testing in Smart Grids
Li, Zuxing; Oechtering, Tobias
2015-01-01
In this paper, we study the problem of privacy information leakage in a smart grid. The privacy risk is assumed to be caused by an unauthorized binary hypothesis test of the consumer's behaviour based on the smart meter readings of energy supplies from the energy provider. Additional energy supplies are produced by an alternative energy source. A controller equipped with an energy storage device manages the energy inflows to satisfy the energy demand of the consumer. We study the optimal ener...
Quantum effects and hypothesis of cosmic censorship
International Nuclear Information System (INIS)
Parnovskij, S.L.
1989-01-01
It is shown that filamentary singularities with linear mass of less than 10^25 g/cm distort space-time only slightly at distances exceeding Planck ones. Their formation does not change the vacuum energy and does not lead to strong quantum radiation. Therefore, the problem of their occurrence can be considered within the framework of classical collapse. Quantum effects can be ignored when considering the problem of the validity of the cosmic censorship hypothesis
The (not so) immortal strand hypothesis.
Tomasetti, Cristian; Bozic, Ivana
2015-03-01
Non-random segregation of DNA strands during stem cell replication has been proposed as a mechanism to minimize accumulated genetic errors in stem cells of rapidly dividing tissues. According to this hypothesis, an "immortal" DNA strand is passed to the stem cell daughter and not the more differentiated cell, keeping the stem cell lineage replication error-free. After it was introduced, experimental evidence both in favor and against the hypothesis has been presented. Using a novel methodology that utilizes cancer sequencing data we are able to estimate the rate of accumulation of mutations in healthy stem cells of the colon, blood and head and neck tissues. We find that in these tissues mutations in stem cells accumulate at rates strikingly similar to those expected without the protection from the immortal strand mechanism. Utilizing an approach that is fundamentally different from previous efforts to confirm or refute the immortal strand hypothesis, we provide evidence against non-random segregation of DNA during stem cell replication. Our results strongly suggest that parental DNA is passed randomly to stem cell daughters and provides new insight into the mechanism of DNA replication in stem cells. Copyright © 2015. Published by Elsevier B.V.
A test of the orthographic recoding hypothesis
Gaygen, Daniel E.
2003-04-01
The Orthographic Recoding Hypothesis [D. E. Gaygen and P. A. Luce, Percept. Psychophys. 60, 465-483 (1998)] was tested. According to this hypothesis, listeners recognize spoken words heard for the first time by mapping them onto stored representations of the orthographic forms of the words. Listeners have a stable orthographic representation of words, but no phonological representation, when those words have been read frequently but never heard or spoken. Such may be the case for low frequency words such as jargon. Three experiments using visually and auditorily presented nonword stimuli tested this hypothesis. The first two experiments were explicit tests of memory (old-new tests) for words presented visually. In the first experiment, the recognition of auditorily presented nonwords was facilitated when they previously appeared on a visually presented list. The second experiment was similar, but included a concurrent articulation task during a visual word list presentation, thus preventing covert rehearsal of the nonwords. The results were similar to the first experiment. The third experiment was an indirect test of memory (auditory lexical decision task) for visually presented nonwords. Auditorily presented nonwords were identified as nonwords significantly more slowly if they had previously appeared on the visually presented list accompanied by a concurrent articulation task.
Consumer health information seeking as hypothesis testing.
Keselman, Alla; Browne, Allen C; Kaufman, David R
2008-01-01
Despite the proliferation of consumer health sites, lay individuals often experience difficulty finding health information online. The present study attempts to understand users' information seeking difficulties by drawing on a hypothesis testing explanatory framework. It also addresses the role of user competencies and their interaction with internet resources. Twenty participants were interviewed about their understanding of a hypothetical scenario about a family member suffering from stable angina and then searched MedlinePlus consumer health information portal for information on the problem presented in the scenario. Participants' understanding of heart disease was analyzed via semantic analysis. Thematic coding was used to describe information seeking trajectories in terms of three key strategies: verification of the primary hypothesis, narrowing search within the general hypothesis area and bottom-up search. Compared to an expert model, participants' understanding of heart disease involved different key concepts, which were also differently grouped and defined. This understanding provided the framework for search-guiding hypotheses and results interpretation. Incorrect or imprecise domain knowledge led individuals to search for information on irrelevant sites, often seeking out data to confirm their incorrect initial hypotheses. Online search skills enhanced search efficiency, but did not eliminate these difficulties. Regardless of their web experience and general search skills, lay individuals may experience difficulty with health information searches. These difficulties may be related to formulating and evaluating hypotheses that are rooted in their domain knowledge. Informatics can provide support at the levels of health information portals, individual websites, and consumer education tools.
Varadhan, S R S
2016-01-01
The theory of large deviations deals with rates at which probabilities of certain events decay as a natural parameter in the problem varies. This book, which is based on a graduate course on large deviations at the Courant Institute, focuses on three concrete sets of examples: (i) diffusions with small noise and the exit problem, (ii) large time behavior of Markov processes and their connection to the Feynman-Kac formula and the related large deviation behavior of the number of distinct sites visited by a random walk, and (iii) interacting particle systems, their scaling limits, and large deviations from their expected limits. For the most part the examples are worked out in detail, and in the process the subject of large deviations is developed. The book will give the reader a flavor of how large deviation theory can help in problems that are not posed directly in terms of large deviations. The reader is assumed to have some familiarity with probability, Markov processes, and interacting particle systems.
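The prototypical rate statement of the subject is Cramér's theorem; a standard formulation of the flavor of result the book develops (a sketch, not a quotation from the text), for i.i.d. X_i with mean m and a > m:

    \[
      \lim_{n\to\infty} \frac{1}{n}\log
        \Pr\!\Big(\frac{1}{n}\sum_{i=1}^{n} X_i \ge a\Big) = -I(a),
      \qquad
      I(a) = \sup_{\theta > 0}\Big(\theta a - \log \mathbb{E}\,e^{\theta X_1}\Big).
    \]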
Do road planners produce more 'honest numbers' than rail planners?
DEFF Research Database (Denmark)
Næss, Petter; Flyvbjerg, Bent; Buhl, Søren L.
2006-01-01
Based on a review of available data from a database on large-scale transport infrastructure projects, this paper investigates the hypothesis that traffic forecasts for road links in Europe are geographically biased with underestimated traffic volumes in metropolitan areas and overestimated traffic...... volumes in remote regions. The present data do not support this hypothesis. Since previous studies have shown a strong tendency to overestimated forecasts of the number of passengers on new rail projects, it could be speculated that road planners are more skilful and/or honest than rail planners. However......, during the period when the investigated projects were planned (up to the late 1980s), there were hardly any strong incentives for road planners to make biased forecasts in order to place their projects in a more flattering light. Future research might uncover whether the change from the ‘predict...
A Review of Multiple Hypothesis Testing in Otolaryngology Literature
Kirkham, Erin M.; Weaver, Edward M.
2018-01-01
Objective: Multiple hypothesis testing (or multiple testing) refers to testing more than one hypothesis within a single analysis, and can inflate the Type I error rate (false positives) within a study. The aim of this review was to quantify multiple testing in recent large clinical studies in the otolaryngology literature and to discuss strategies to address this potential problem. Data sources: Original clinical research articles with >100 subjects published in 2012 in the four general otolaryngology journals with the highest Journal Citation Reports 5-year impact factors. Review methods: Articles were reviewed to determine whether the authors tested more than five hypotheses in at least one family of inferences. For the articles meeting this criterion for multiple testing, Type I error rates were calculated and statistical correction was applied to the reported results. Results: Of the 195 original clinical research articles reviewed, 72% met the criterion for multiple testing. Within these studies, there was a mean 41% chance of a Type I error and, on average, 18% of significant results were likely to be false positives. After the Bonferroni correction was applied, only 57% of significant results reported within the articles remained significant. Conclusion: Multiple testing is common in recent large clinical studies in otolaryngology and deserves closer attention from researchers, reviewers and editors. Strategies for adjusting for multiple testing are discussed. PMID:25111574
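The arithmetic behind the review's headline numbers is the familiar family-wise error rate formula, which assumes independent tests; roughly ten tests at alpha = 0.05 already give the reported ~41% chance of at least one false positive:

    # Family-wise error rate for m independent tests at level alpha:
    #   FWER = 1 - (1 - alpha)^m
    # With m = 10 tests at alpha = 0.05 this is already ~0.40, close to
    # the review's reported mean 41% chance of a Type I error.
    alpha, m = 0.05, 10
    fwer = 1 - (1 - alpha) ** m
    print(f"FWER for {m} tests: {fwer:.2f}")          # ~0.40
    print(f"Bonferroni-adjusted level: {alpha / m}")  # 0.005 per test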
Updating the lamellar hypothesis of hippocampal organization
Directory of Open Access Journals (Sweden)
Robert S Sloviter
2012-12-01
Full Text Available In 1971, Andersen and colleagues proposed that excitatory activity in the entorhinal cortex propagates topographically to the dentate gyrus, and on through a trisynaptic circuit lying within transverse hippocampal slices or lamellae [Andersen, Bliss, and Skrede. 1971. Lamellar organization of hippocampal pathways. Exp Brain Res 13, 222-238]. In this way, a relatively simple structure might mediate complex functions in a manner analogous to the way independent piano keys can produce a nearly infinite variety of unique outputs. The lamellar hypothesis derives primary support from the lamellar distribution of dentate granule cell axons (the mossy fibers), which innervate dentate hilar neurons and area CA3 pyramidal cells and interneurons within the confines of a thin transverse hippocampal segment. Following the initial formulation of the lamellar hypothesis, anatomical studies revealed that unlike granule cells, hilar mossy cells, CA3 pyramidal cells, and Layer II entorhinal cells all form axonal projections that are more divergent along the longitudinal axis than the clearly lamellar mossy fiber pathway. The existence of pathways with translamellar distribution patterns has been interpreted, incorrectly in our view, as justifying outright rejection of the lamellar hypothesis [Amaral and Witter. 1989. The three-dimensional organization of the hippocampal formation: a review of anatomical data. Neuroscience 31, 571-591]. We suggest that the functional implications of longitudinally-projecting axons depend not on whether they exist, but on what they do. The observation that focal granule cell layer discharges normally inhibit, rather than excite, distant granule cells suggests that longitudinal axons in the dentate gyrus may mediate "lateral" inhibition and define lamellar function, rather than undermine it. In this review, we attempt a reconsideration of the evidence that most directly impacts the physiological concept of hippocampal lamellar
Hypothesis Testing as an Act of Rationality
Nearing, Grey
2017-04-01
Statistical hypothesis testing is ad hoc in two ways. First, setting probabilistic rejection criteria is, as Neyman (1957) put it, an act of will rather than an act of rationality. Second, physical theories like conservation laws do not inherently admit probabilistic predictions, and so we must use what are called epistemic bridge principles to connect model predictions with the actual methods of hypothesis testing. In practice, these bridge principles are likelihood functions, error functions, or performance metrics. I propose that the reason we are faced with these problems is because we have historically failed to account for a fundamental component of basic logic - namely the portion of logic that explains how epistemic states evolve in the presence of empirical data. This component of Cox's (1946) calculitic logic is called information theory (Knuth, 2005), and adding information theory to our hypothetico-deductive account of science yields straightforward solutions to both of the above problems. This also yields a straightforward method for dealing with Popper's (1963) problem of verisimilitude by facilitating a quantitative approach to measuring process isomorphism. In practice, this involves data assimilation. Finally, information theory allows us to reliably bound measures of epistemic uncertainty, thereby avoiding the problem of Bayesian incoherency under misspecified priors (Grünwald, 2006). I therefore propose solutions to four of the fundamental problems inherent in both hypothetico-deductive and/or Bayesian hypothesis testing. - Neyman (1957) Inductive Behavior as a Basic Concept of Philosophy of Science. - Cox (1946) Probability, Frequency and Reasonable Expectation. - Knuth (2005) Lattice Duality: The Origin of Probability and Entropy. - Grünwald (2006). Bayesian Inconsistency under Misspecification. - Popper (1963) Conjectures and Refutations: The Growth of Scientific Knowledge.
The conscious access hypothesis: Explaining the consciousness.
Prakash, Ravi
2008-01-01
The phenomenon of conscious awareness or consciousness is complicated but fascinating. Although this concept has intrigued mankind since antiquity, exploration of consciousness from scientific perspectives is not very old. Among the myriad theories regarding the nature, functions and mechanism of consciousness, cognitive theories have of late received wider acceptance. One of the most exciting hypotheses in recent times has been the "conscious access hypothesis" based on the "global workspace model of consciousness". It underscores an important property of consciousness, the global access of information in the cerebral cortex. The present article reviews the "conscious access hypothesis" in terms of its theoretical underpinnings as well as the experimental support it has received.
RANDOM WALK HYPOTHESIS IN FINANCIAL MARKETS
Directory of Open Access Journals (Sweden)
Nicolae-Marius JULA
2017-05-01
Full Text Available The random walk hypothesis states that stock market prices do not follow a predictable trajectory but are simply random. When trying to predict a random set of data, one should first test for randomness, because, despite the power and complexity of the models used, the results cannot otherwise be trusted. There are several methods for testing this hypothesis, and the computational power provided by the R environment makes the researcher's work easier and cost-effective. The increasing power of computing and the continuous development of econometric tests should give potential investors new tools for selecting commodities and investing in efficient markets.
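The abstract points to R's econometric toolkits; the same kind of randomness check is easy to sketch in any language. Here is a self-contained Ljung-Box test in Python (illustrative; R's Box.test provides the equivalent):

    import numpy as np
    from scipy.stats import chi2

    def ljung_box(x, lags=10):
        # Q = n(n+2) * sum_k acf_k^2 / (n-k); H0: no autocorrelation,
        # i.e. the series is consistent with pure randomness.
        x = np.asarray(x, dtype=float) - np.mean(x)
        n = len(x)
        denom = np.sum(x ** 2)
        acf = np.array([np.sum(x[k:] * x[:-k]) / denom
                        for k in range(1, lags + 1)])
        q = n * (n + 2) * np.sum(acf ** 2 / (n - np.arange(1, lags + 1)))
        return q, chi2.sf(q, df=lags)

    rng = np.random.default_rng(4)
    returns = rng.normal(0, 1, 500)   # i.i.d. returns: prices follow a random walk
    print(ljung_box(returns))         # large p-value: randomness not rejected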
Confluence Model or Resource Dilution Hypothesis?
DEFF Research Database (Denmark)
Jæger, Mads
Studies on family background often explain the negative effect of sibship size on educational attainment by one of two theories: the Confluence Model (CM) or the Resource Dilution Hypothesis (RDH). However, as both theories – for substantively different reasons – predict that sibship size should have a negative effect on educational attainment, most studies cannot distinguish empirically between the CM and the RDH. In this paper, I use the different theoretical predictions in the CM and the RDH on the role of cognitive ability as a partial or complete mediator of the sibship size effect......
Set theory and the continuum hypothesis
Cohen, Paul J
2008-01-01
This exploration of a notorious mathematical problem is the work of the man who discovered the solution. The independence of the continuum hypothesis is the focus of this study by Paul J. Cohen. It presents not only an accessible technical explanation of the author's landmark proof but also a fine introduction to mathematical logic. An emeritus professor of mathematics at Stanford University, Dr. Cohen won two of the most prestigious awards in mathematics: in 1964, he was awarded the American Mathematical Society's Bôcher Prize for analysis; and in 1966, he received the Fields Medal for Logic.
Statistical hypothesis testing with SAS and R
Taeger, Dirk
2014-01-01
A comprehensive guide to statistical hypothesis testing with examples in SAS and R. When analyzing datasets the following questions often arise: Is there a short hand procedure for a statistical test available in SAS or R? If so, how do I use it? If not, how do I program the test myself? This book answers these questions and provides an overview of the most common statistical test problems in a comprehensive way, making it easy to find and perform an appropriate statistical test. A general summary of statistical test theory is presented, along with a basic description for each test, including the
Tests of the salt-nuclei hypothesis of rain formation
Energy Technology Data Exchange (ETDEWEB)
Woodcock, A H; Blanchard, D C
1955-01-01
Atmospheric chlorides in sea-salt nuclei and the chlorides dissolved in shower rainwaters were recently measured in Hawaii. A comparison of these measurements reveals the remarkable fact that the weight of chloride present in a certain number of nuclei in a cubic meter of clear air tends to be equal to the weight of chloride dissolved in an equal number of raindrops in a cubic meter of rainy air. This result is explained as an indication that the raindrops grow on the salt nuclei in some manner which prevents a marked change in the distribution of these nuclei during the drop-growth process. The data presented add new evidence in further support of the salt-nuclei raindrop hypothesis previously proposed by the first author.
Independent component analysis in non-hypothesis driven metabolomics
DEFF Research Database (Denmark)
Li, Xiang; Hansen, Jakob; Zhao, Xinjie
2012-01-01
In a non-hypothesis driven metabolomics approach plasma samples collected at six different time points (before, during and after an exercise bout) were analyzed by gas chromatography-time of flight mass spectrometry (GC-TOF MS). Since independent component analysis (ICA) does not need a priori...... information on the investigated process and moreover can separate statistically independent source signals with non-Gaussian distribution, we aimed to elucidate the analytical power of ICA for the metabolic pattern analysis and the identification of key metabolites in this exercise study. A novel approach...... based on descriptive statistics was established to optimize ICA model. In the GC-TOF MS data set the number of principal components after whitening and the number of independent components of ICA were optimized and systematically selected by descriptive statistics. The elucidated dominating independent...
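A minimal sketch of the whitening-then-ICA step with scikit-learn (the matrix sizes and component count are invented; the paper selects the numbers of principal and independent components via its descriptive-statistics procedure):

    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    rng = np.random.default_rng(5)

    # Stand-in for a GC-TOF MS peak table: samples x metabolite features
    # (e.g. subjects x time points stacked along the rows).
    X = rng.normal(size=(36, 200))

    n_comp = 5   # in the paper this is chosen by descriptive statistics
    X_white = PCA(n_components=n_comp, whiten=True).fit_transform(X)
    sources = FastICA(n_components=n_comp, random_state=0).fit_transform(X_white)
    print(sources.shape)   # (36, 5): independent component scores per sample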
Brown, Stephen B R E; van Steenbergen, Henk; Kedar, Tomer; Nieuwenhuis, Sander
2014-01-01
An increasing number of empirical phenomena that were previously interpreted as a result of cognitive control, turn out to reflect (in part) simple associative-learning effects. A prime example is the proportion congruency effect, the finding that interference effects (such as the Stroop effect) decrease as the proportion of incongruent stimuli increases. While this was previously regarded as strong evidence for a global conflict monitoring-cognitive control loop, recent evidence has shown that the proportion congruency effect is largely item-specific and hence must be due to associative learning. The goal of our research was to test a recent hypothesis about the mechanism underlying such associative-learning effects, the conflict-modulated Hebbian-learning hypothesis, which proposes that the effect of conflict on associative learning is mediated by phasic arousal responses. In Experiment 1, we examined in detail the relationship between the item-specific proportion congruency effect and an autonomic measure of phasic arousal: task-evoked pupillary responses. In Experiment 2, we used a task-irrelevant phasic arousal manipulation and examined the effect on item-specific learning of incongruent stimulus-response associations. The results provide little evidence for the conflict-modulated Hebbian-learning hypothesis, which requires additional empirical support to remain tenable.
Hypothesis-driven physical examination curriculum.
Allen, Sharon; Olson, Andrew; Menk, Jeremiah; Nixon, James
2017-12-01
Medical students traditionally learn physical examination skills as a rote list of manoeuvres. Alternatives like hypothesis-driven physical examination (HDPE) may promote students' understanding of the contribution of the physical examination to diagnostic reasoning. We sought to determine whether first-year medical students can effectively learn to perform a physical examination using an HDPE approach, and then tailor the examination to specific clinical scenarios. First-year medical students at the University of Minnesota were taught both traditional and HDPE approaches during a required 17-week clinical skills course in their first semester. The end-of-course evaluation assessed HDPE skills: students were assigned one of two cardiopulmonary cases. Each case included two diagnostic hypotheses. During an interaction with a standardised patient, students were asked to select physical examination manoeuvres in order to make a final diagnosis. Items were weighted and selection order was recorded. First-year students with minimal pathophysiology training performed well. All students selected the correct diagnosis. Importantly, students varied the order in which they selected examination manoeuvres depending on the diagnoses under consideration, demonstrating early clinical decision-making skills. An early introduction to HDPE may reinforce physical examination skills for hypothesis generation and testing, and can foster early clinical decision-making skills. This has important implications for further research in physical examination instruction. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
A default Bayesian hypothesis test for mediation.
Nuijten, Michèle B; Wetzels, Ruud; Matzke, Dora; Dolan, Conor V; Wagenmakers, Eric-Jan
2015-03-01
In order to quantify the relationship between multiple variables, researchers often carry out a mediation analysis. In such an analysis, a mediator (e.g., knowledge of a healthy diet) transmits the effect from an independent variable (e.g., classroom instruction on a healthy diet) to a dependent variable (e.g., consumption of fruits and vegetables). Almost all mediation analyses in psychology use frequentist estimation and hypothesis-testing techniques. A recent exception is Yuan and MacKinnon (Psychological Methods, 14, 301-322, 2009), who outlined a Bayesian parameter estimation procedure for mediation analysis. Here we complete the Bayesian alternative to frequentist mediation analysis by specifying a default Bayesian hypothesis test based on the Jeffreys-Zellner-Siow approach. We further extend this default Bayesian test by allowing a comparison to directional or one-sided alternatives, using Markov chain Monte Carlo techniques implemented in JAGS. All Bayesian tests are implemented in the R package BayesMed (Nuijten, Wetzels, Matzke, Dolan, & Wagenmakers, 2014).
Inoculation stress hypothesis of environmental enrichment.
Crofton, Elizabeth J; Zhang, Yafang; Green, Thomas A
2015-02-01
One hallmark of psychiatric conditions is the vast continuum of individual differences in susceptibility vs. resilience resulting from the interaction of genetic and environmental factors. The environmental enrichment paradigm is an animal model that is useful for studying a range of psychiatric conditions, including protective phenotypes in addiction and depression models. The major question is how environmental enrichment, a non-drug and non-surgical manipulation, can produce such robust individual differences in such a wide range of behaviors. This paper draws from a variety of published sources to outline a coherent hypothesis of inoculation stress as a factor producing the protective enrichment phenotypes. The basic tenet suggests that chronic mild stress from living in a complex environment and interacting non-aggressively with conspecifics can inoculate enriched rats against subsequent stressors and/or drugs of abuse. This paper reviews the enrichment phenotypes, mulls the fundamental nature of environmental enrichment vs. isolation, discusses the most appropriate control for environmental enrichment, and challenges the idea that cortisol/corticosterone equals stress. The intent of the inoculation stress hypothesis of environmental enrichment is to provide a scaffold with which to build testable hypotheses for the elucidation of the molecular mechanisms underlying these protective phenotypes and thus provide new therapeutic targets to treat psychiatric/neurological conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
Athlete's Heart: Is the Morganroth Hypothesis Obsolete?
Haykowsky, Mark J; Samuel, T Jake; Nelson, Michael D; La Gerche, Andre
2018-05-01
In 1975, Morganroth and colleagues reported that the increased left ventricular (LV) mass in highly trained endurance athletes versus nonathletes was primarily due to increased end-diastolic volume while the increased LV mass in resistance trained athletes was solely due to an increased LV wall thickness. Based on the divergent remodelling patterns observed, Morganroth and colleagues hypothesised that the increased "volume" load during endurance exercise may be similar to that which occurs in patients with mitral or aortic regurgitation while the "pressure" load associated with performing a Valsalva manoeuvre (VM) during resistance exercise may mimic the stress imposed on the heart by systemic hypertension or aortic stenosis. Despite widespread acceptance of the four-decade old Morganroth hypothesis in sports cardiology, some investigators have questioned whether such a divergent "athlete's heart" phenotype exists. Given this uncertainty, the purpose of this brief review is to re-evaluate the Morganroth hypothesis regarding: i) the acute effects of resistance exercise performed with a brief VM on LV wall stress, and the patterns of LV remodelling in resistance-trained athletes; ii) the acute effects of endurance exercise on biventricular wall stress, and the time course and pattern of LV and right ventricular (RV) remodelling with endurance training; and iii) the value of comparing "loading" conditions between athletes and patients with cardiac pathology. Copyright © 2018. Published by Elsevier B.V.
The Debt Overhang Hypothesis: Evidence from Pakistan
Directory of Open Access Journals (Sweden)
Shah Muhammad Imran
2016-04-01
Full Text Available This study investigates the debt overhang hypothesis for Pakistan in the period 1960-2007. The study examines empirically the dynamic behaviour of GDP, debt services, the employed labour force and investment using the time series concepts of unit roots, cointegration, error correction and causality. Our findings suggest that debt-servicing has a negative impact on the productivity of both labour and capital, and that this in turn has adversely affected economic growth. By severely constraining the country's ability to service debt, this lends support to the debt-overhang hypothesis for Pakistan. The long-run relation between debt services and economic growth implies that future increases in output will drain away in the form of high debt-service payments to lender countries, as external debt acts like a tax on output. More specifically, foreign creditors will benefit more from the rise in productivity than will domestic producers and labour. This suggests that domestic labour and capital are the ultimate losers from this heavy debt burden.
Roots and Route of the Artification Hypothesis
Directory of Open Access Journals (Sweden)
Ellen Dissanayake
2017-08-01
Full Text Available Over four decades, my ideas about the arts in human evolution have themselves evolved, from an original notion of art as a human behaviour of "making special" to a full-fledged hypothesis of artification. A summary of the gradual developmental path (or route) of the hypothesis, based on ethological principles and concepts, is given, and an argument presented in which artification is described as an exaptation whose roots lie in adaptive features of ancestral mother–infant interaction that contributed to infant survival and maternal reproductive success. I show how the interaction displays features of a ritualised behaviour whose operations (formalization, repetition, exaggeration, and elaboration) can be regarded as characteristic elements of human ritual ceremonies as well as of art (including song, dance, performance, literary language, altered surroundings, and other examples of making ordinary sounds, movement, language, environments, objects, and bodies extraordinary). Participation in these behaviours in ritual practices served adaptive ends in early Homo by coordinating brain and body states, and thereby emotionally bonding members of a group in common cause as well as reducing existential anxiety in individuals. A final section situates artification within contemporary philosophical and popular ideas of art, claiming that artifying is not a synonym for or definition of art but foundational to any evolutionary discussion of artistic/aesthetic behaviour.
Hypothesis: does ochratoxin A cause testicular cancer?
Schwartz, Gary G
2002-02-01
Little is known about the etiology of testicular cancer, which is the most common cancer among young men. Epidemiologic data point to a carcinogenic exposure in early life or in utero, but the nature of the exposure is unknown. We hypothesize that the mycotoxin, ochratoxin A, is a cause of testicular cancer. Ochratoxin A is a naturally occurring contaminant of cereals, pigmeat, and other foods and is a known genotoxic carcinogen in animals. The major features of the descriptive epidemiology of testicular cancer (a high incidence in northern Europe, increasing incidence over time, and associations with high socioeconomic status, and with poor semen quality) are all associated with exposure to ochratoxin A. Exposure of animals to ochratoxin A via the diet or via in utero transfer induces adducts in testicular DNA. We hypothesize that consumption of foods contaminated with ochratoxin A during pregnancy and/or childhood induces lesions in testicular DNA and that puberty promotes these lesions to testicular cancer. We tested the ochratoxin A hypothesis using ecologic data on the per-capita consumption of cereals, coffee, and pigmeat, the principal dietary sources of ochratoxin A. Incidence rates for testicular cancer in 20 countries were significantly correlated with the per-capita consumption of coffee and pigmeat (r = 0.49 and 0.54, p = 0.03 and 0.01). The ochratoxin A hypothesis offers a coherent explanation for much of the descriptive epidemiology of testicular cancer and suggests new avenues for analytic research.
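The reported ecologic correlations can be sanity-checked with the usual t transform for a Pearson r, assuming the stated n = 20 countries:

    import numpy as np
    from scipy.stats import t as t_dist

    # t transform for a Pearson correlation with n - 2 degrees of freedom.
    r, n = 0.54, 20
    t_stat = r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)
    p = 2 * t_dist.sf(abs(t_stat), df=n - 2)
    print(f"t = {t_stat:.2f}, p = {p:.3f}")   # ~0.014, matching the reported 0.01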
Clonal mutations in primary human glial tumors: evidence in support of the mutator hypothesis
Directory of Open Access Journals (Sweden)
Sarkar Chitra
2007-10-01
Full Text Available Background: A verifiable consequence of the mutator hypothesis is that even low grade neoplasms would accumulate a large number of mutations that do not influence the tumor phenotype (clonal mutations). In this study, we have attempted to quantify the number of clonal mutations in primary human gliomas of astrocytic cell origin. These alterations were identified in tumor tissue, microscopically confirmed to have over 70% neoplastic cells. Methods: Random Amplified Polymorphic DNA (RAPD) analysis was performed using a set of fifteen 10-mer primers of arbitrary but definite sequences in 17 WHO grade II astrocytomas (low grade diffuse astrocytoma or DA) and 16 WHO grade IV astrocytomas (Glioblastoma Multiforme or GBM). The RAPD profile of the tumor tissue was compared with that of the leucocyte DNA of the same patient and alteration(s) scored. A quantitative estimate of the overall genomic changes in these tumors was obtained by 2 different modes of calculation. Results: The overall change in the tumors was estimated to be 4.24% in DA and 2.29% in GBM by one method and 11.96% and 6.03% in DA and GBM respectively by the other. The difference between high and lower grade tumors was statistically significant by both methods. Conclusion: This study demonstrates the presence of extensive clonal mutations in gliomas, more in lower grade. This is consistent with our earlier work demonstrating that a technique like RAPD analysis, unbiased for locus, is able to demonstrate more intra-tumor genetic heterogeneity in lower grade gliomas compared to higher grade. The results support the mutator hypothesis proposed by Loeb.
Clonal mutations in primary human glial tumors: evidence in support of the mutator hypothesis
International Nuclear Information System (INIS)
Misra, Anjan; Chattopadhyay, Parthaprasad; Chosdol, Kunzang; Sarkar, Chitra; Mahapatra, Ashok K; Sinha, Subrata
2007-01-01
A verifiable consequence of the mutator hypothesis is that even low grade neoplasms would accumulate a large number of mutations that do not influence the tumor phenotype (clonal mutations). In this study, we have attempted to quantify the number of clonal mutations in primary human gliomas of astrocytic cell origin. These alterations were identified in tumor tissue, microscopically confirmed to have over 70% neoplastic cells. Random Amplified Polymorphic DNA (RAPD) analysis was performed using a set of fifteen 10-mer primers of arbitrary but definite sequences in 17 WHO grade II astrocytomas (low grade diffuse astrocytoma or DA) and 16 WHO grade IV astrocytomas (Glioblastoma Multiforme or GBM). The RAPD profile of the tumor tissue was compared with that of the leucocyte DNA of the same patient and alteration(s) scored. A quantitative estimate of the overall genomic changes in these tumors was obtained by 2 different modes of calculation. The overall change in the tumors was estimated to be 4.24% in DA and 2.29% in GBM by one method and 11.96% and 6.03% in DA and GBM respectively by the other. The difference between high and lower grade tumors was statistically significant by both methods. This study demonstrates the presence of extensive clonal mutations in gliomas, more in lower grade. This is consistent with our earlier work demonstrating that technique like RAPD analysis, unbiased for locus, is able to demonstrate more intra-tumor genetic heterogeneity in lower grade gliomas compared to higher grade. The results support the mutator hypothesis proposed by Loeb
Facio, Flavia M; Sapp, Julie C; Linn, Amy; Biesecker, Leslie G
2012-10-10
Massively-parallel sequencing (MPS) technologies create challenges for informed consent of research participants given the enormous scale of the data and the wide range of potential results. We propose that the consent process in these studies be based on whether they use MPS to test a hypothesis or to generate hypotheses. To demonstrate the differences in these approaches to informed consent, we describe the consent processes for two MPS studies. The purpose of our hypothesis-testing study is to elucidate the etiology of rare phenotypes using MPS. The purpose of our hypothesis-generating study is to test the feasibility of using MPS to generate clinical hypotheses, and to approach the return of results as an experimental manipulation. Issues to consider in both designs include: volume and nature of the potential results, primary versus secondary results, return of individual results, duty to warn, length of interaction, target population, and privacy and confidentiality. The categorization of MPS studies as hypothesis-testing versus hypothesis-generating can help to clarify the issue of so-called incidental or secondary results for the consent process, and aid the communication of the research goals to study participants.
Lee, Jun-Ki; Kwon, Yongju
2012-01-01
Fourteen science high school students participated in this study, which investigated neural-network plasticity associated with hypothesis-generating and hypothesis-understanding in learning. The students were divided into two groups and participated in either hypothesis-generating or hypothesis-understanding type learning programs, which were…
The planet beyond the plume hypothesis
Smith, Alan D.; Lewis, Charles
1999-12-01
Acceptance of the theory of plate tectonics was accompanied by the rise of the mantle plume/hotspot concept, which has come to dominate geodynamics through its use both as an explanation for the origin of intraplate volcanism and as a reference frame for plate motions. However, even with a large degree of flexibility permitted in plume composition, temperature, size, and depth of origin, adoption of any limited number of hotspots means the plume model cannot account for all occurrences of the type of volcanism it was devised to explain. While scientific protocol would normally demand that an alternative explanation be sought, there have been few challenges to "plume theory" on account of a series of intricate controls set up by the plume model which make plumes seem to be an essential feature of the Earth. The hotspot frame acts not only as a reference but also controls plate tectonics. Accommodating plumes relegates mantle convection to a weak, sluggish effect such that basal drag appears as a minor, resisting force, with plates having to move themselves by boundary forces and continents having to be rifted by plumes. Correspondingly, the geochemical evolution of the mantle is controlled by the requirement to isolate subducted crust into plume sources, which limits potential buffers on the composition of the MORB-source to plume- or lower mantle material. Crustal growth and Precambrian tectonics are controlled by interpretations of greenstone belts as oceanic plateaus generated by plumes. Challenges to any aspect of the plume model are thus liable to be dismissed unless a counter explanation is offered across the whole geodynamic spectrum influenced by "plume theory". Nonetheless, an alternative synthesis can be made based on longstanding petrological evidence for derivation of intraplate volcanism from volatile-bearing sources (wetspots) in conjunction with concepts dismissed for being incompatible or superfluous to "plume theory". In the alternative Earth, the sources for
Artistic talent in dyslexia--a hypothesis.
Chakravarty, Ambar
2009-10-01
The present article points to a curious neurocognitive phenomenon: the development of artistic talent in some children with dyslexia. The article also takes note of creativity in the midst of language disability, as observed in the lives of creative people such as Leonardo da Vinci and Albert Einstein, who were most probably affected by developmental learning disorders. It has been hypothesised that a developmental delay in the dominant hemisphere most likely 'disinhibits' the non-dominant parietal lobe to unmask talents, artistic or otherwise, in some such individuals. The present hypothesis follows the phenomenon of paradoxical functional facilitation described earlier. It is suggested that children with learning disorders be encouraged to develop such hidden talents to full capacity, rather than being subjected to an overemphasis on correcting the disturbed coded-symbol operations in remedial training.
Statistical hypothesis tests of some micrometeorological observations
International Nuclear Information System (INIS)
SethuRaman, S.; Tichler, J.
1977-01-01
A chi-square goodness-of-fit test is used to test the hypothesis that the medium scale of turbulence in the atmospheric surface layer is normally distributed. Coefficients of skewness and excess are computed from the data. If the data are not normal, these coefficients are used in Edgeworth's asymptotic expansion of the Gram-Charlier series to determine an alternate probability density function. The observed data are then compared with the modified probability densities and the new chi-square values computed. Seventy percent of the data analyzed were either normal or approximately normal. The coefficient of skewness g1 has a good correlation with the chi-square values. Events with |g1| < 0.43 were approximately normal. Intermittency associated with the formation and breaking of internal gravity waves in surface-based inversions over water is thought to be the reason for the non-normality.
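A compact sketch of the testing pipeline described above, in Python with synthetic data standing in for the turbulence measurements (the Gram-Charlier correction step is omitted; all names and parameters are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=500)            # hypothetical stand-in for turbulence data

g1 = stats.skew(x)                  # coefficient of skewness
g2 = stats.kurtosis(x)              # coefficient of excess

# Chi-square goodness-of-fit against a normal fitted to the data:
# bin the observations and compare observed with expected counts.
edges = np.linspace(x.min(), x.max(), 11)
obs, _ = np.histogram(x, bins=edges)
cdf = stats.norm.cdf(edges, loc=x.mean(), scale=x.std(ddof=1))
exp = len(x) * np.diff(cdf)
chi2 = ((obs - exp) ** 2 / exp).sum()
dof = len(obs) - 1 - 2              # bins minus 1, minus 2 fitted parameters
print(f"g1={g1:.3f}, g2={g2:.3f}, chi2={chi2:.1f}, p={stats.chi2.sf(chi2, dof):.3f}")
```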
The hexagon hypothesis: Six disruptive scenarios.
Burtles, Jim
2015-01-01
This paper aims to bring a simple but effective and comprehensive approach to the development, delivery and monitoring of business continuity solutions. To ensure that the arguments and principles apply across the board, the paper sticks to basic underlying concepts rather than sophisticated interpretations. First, the paper explores what exactly people are defending themselves against. Secondly, the paper looks at how defences should be set up. Disruptive events tend to unfold in phases, each of which invites a particular style of protection, ranging from risk management through to business continuity to insurance cover. Their impact upon any business operation will fall into one of six basic scenarios. The hexagon hypothesis suggests that everyone should be prepared to deal with each of these six disruptive scenarios and it provides them with a useful benchmark for business continuity.
Novae, supernovae, and the island universe hypothesis
International Nuclear Information System (INIS)
Van Den Bergh, S.
1988-01-01
Arguments in Curtis's (1917) paper related to the island universe hypothesis and the existence of novae in spiral nebulae are considered. It is noted that the maximum magnitude versus rate-of-decline relation for novae may be the best tool presently available for the calibration of the extragalactic distance scale. Light curve observations of six novae are used to determine a distance of 18.6 ± 3.5 Mpc to the Virgo cluster. Results suggest that Type Ia supernovae cannot easily be used as standard candles, and that Type II supernovae are unsuitable as distance indicators. Factors other than precursor mass are probably responsible for determining the ultimate fate of evolving stars.
On the immunostimulatory hypothesis of cancer
Directory of Open Access Journals (Sweden)
Juan Bruzzo
2011-12-01
There is a rather generalized belief that the worst possible outcome of applying immunological therapies against cancer is a null effect on tumor growth. However, a significant body of evidence summarized in the immunostimulatory hypothesis of cancer suggests that, under certain circumstances, the growth of incipient and established tumors can be accelerated rather than inhibited by the immune response supposedly mounted to limit tumor growth. In order to provide more compelling evidence for this proposition, we have explored the growth behavior of twelve murine tumors - most of them of spontaneous origin - arising in the colony of our laboratory, in putatively immunized and control mice. Using classical immunization procedures, 8 out of 12 tumors were actually stimulated in "immunized" mice while the remaining 4 were neither inhibited nor stimulated. Further, even these apparently non-antigenic tumors could reveal some antigenicity if immunization procedures more stringent than the classical ones were used. This possibility was suggested by the results obtained with one of these four apparently non-antigenic tumors: the LB lymphoma. In effect, upon these stringent immunization pretreatments, LB was slightly inhibited or stimulated depending on the titer of the immune reaction mounted against the tumor, with higher titers producing inhibition and lower titers producing tumor stimulation. All the above results are consistent with the immunostimulatory hypothesis, which entails two important therapeutic implications - contrary to the orthodoxy: anti-tumor vaccines may run a real risk of doing harm if the vaccine-induced immunity is too weak to move the reaction into the inhibitory part of the immune response curve, and a slight and prolonged immunodepression - rather than an immunostimulation - might interfere with the progression of some tumors and thus be an aid to cytotoxic therapies.
The Stress Acceleration Hypothesis of Nightmares
Directory of Open Access Journals (Sweden)
Tore Nielsen
2017-06-01
Adverse childhood experiences can deleteriously affect future physical and mental health, increasing risk for many illnesses, including psychiatric problems, sleep disorders, and, according to the present hypothesis, idiopathic nightmares. Much like post-traumatic nightmares, which are triggered by trauma and lead to recurrent emotional dreaming about the trauma, idiopathic nightmares are hypothesized to originate in early adverse experiences that lead in later life to the expression of early memories and emotions in dream content. Accordingly, the objectives of this paper are to (1) review existing literature on sleep, dreaming and nightmares in relation to early adverse experiences, drawing upon both empirical studies of dreaming and nightmares and books and chapters by recognized nightmare experts, and (2) propose a new approach to explaining nightmares that is based upon the Stress Acceleration Hypothesis of mental illness. The latter stipulates that susceptibility to mental illness is increased by adversity occurring during a developmentally sensitive window for emotional maturation - the infantile amnesia period - that ends around age 3½. Early adversity accelerates the neural and behavioral maturation of emotional systems governing the expression, learning, and extinction of fear memories and may afford short-term adaptive value. But it also engenders long-term dysfunctional consequences, including an increased risk for nightmares. Two mechanisms are proposed: (1) disruption of infantile amnesia allows normally forgotten early childhood memories to influence later emotions, cognitions and behavior, including the common expression of threats in nightmares; (2) alterations of normal emotion regulation processes of both waking and sleep lead to increased fear sensitivity and less effective fear extinction. These changes influence an affect network previously hypothesized to regulate fear extinction during REM sleep, disruption of which leads to
A hypothesis to explain childhood cancers near nuclear power plants
International Nuclear Information System (INIS)
Fairlie, Ian
2014-01-01
Over 60 epidemiological studies world-wide have examined cancer incidences in children near nuclear power plants (NPPs): most of them indicate leukemia increases. These include the 2008 KiKK study commissioned by the German Government, which found relative risks (RR) of 1.6 for total cancers and 2.2 for leukemias among infants living within 5 km of all German NPPs. The KiKK study has retriggered the debate as to the cause(s) of these increased cancers. A suggested hypothesis is that the increased cancers arise from radiation exposures to pregnant women near NPPs. However, any theory has to account for the >10,000-fold discrepancy between official dose estimates from NPP emissions and the observed increased risks. An explanation may be that doses from spikes in NPP radionuclide emissions are significantly larger than those estimated by official models, which are diluted through the use of annual averages. In addition, risks to embryos/fetuses are greater than those to adults, and haematopoietic tissues appear more radiosensitive in embryos/fetuses than in newborn babies. The product of possible increased doses and possible increased risks per dose may provide an explanation. Highlights:
• Over 60 studies worldwide on increased cancers near nuclear power plants (NPPs).
• German government KiKK study provides very strong evidence.
• Hypothesis proposes cancers arise from radiation exposures to pregnant women near NPPs.
• Nuclide spikes during refuelling could result in increased exposures.
• Explanation offered for discrepancy between small dose estimates and large risks.
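A worked order-of-magnitude illustration of the product argument above (both factors are hypothetical placeholders, not values from the paper):

```latex
\underbrace{\sim 10^{2}}_{\text{spike doses vs. annual-average models}}
\;\times\;
\underbrace{\sim 10^{2}}_{\text{embryo/fetus vs. adult risk per dose}}
\;=\; \sim 10^{4}
```

Any pair of factors whose product reaches about 10^4 would close the stated gap; the abstract itself does not commit to specific magnitudes.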
Harmony as Ideology: Questioning the Diversity-Stability Hypothesis.
Nikisianis, Nikos; Stamou, Georgios P
2016-03-01
The representation of a complex but stable, self-regulated and, finally, harmonious nature penetrates the whole history of Ecology, thus contradicting the core of Darwinian evolution. Originating in pre-Darwinian Natural History, this representation theoretically defined the various schools of early ecology and, in the context of the cybernetic synthesis of the 1950s, it assumed a typical mathematical form as a positive correlation between species diversity and community stability. After 1960, these two aforementioned concepts and their positive correlation were proposed as environmental management tools in the face of the ecological crisis arising at the time. In the early 1970s, and particularly after May's evolutionary arguments, the consensus around this positive correlation collapsed for a while, only to be promptly restored for the purpose of attaching an ecological value to biodiversity. In this paper, we explore the history of the diversity-stability hypothesis and we review the successive terms that have been used to express community stability. We argue that this hypothesis has been motivated by the nodal ideological presuppositions of order and harmony and that the scientific developments in this field largely correspond to external social pressures. We conclude that the conflict about the diversity-stability relationship is in fact an ideological debate, referring mostly to the way we see nature and society rather than to an autonomous scientific question. From this point of view, we may understand why Ecology's concepts and perceptions may decline and return again and again, forming a pluralistic scientific history.
Verifying the Simulation Hypothesis via Infinite Nested Universe Simulacrum Loops
Sharma, Vikrant
2017-01-01
The simulation hypothesis proposes that local reality exists as a simulacrum within a hypothetical computer's dimension. More specifically, Bostrom's trilemma proposes that the number of simulations an advanced 'posthuman' civilization could produce makes the proposition very likely. In this paper a hypothetical method to verify the simulation hypothesis is discussed, using infinite regression applied to a new type of infinite loop. Assign dimension n to any computer in our present reality, where dimension signifies the hierarchical level in nested simulations our reality exists in. A computer simulating known reality would be dimension (n-1), and likewise a computer simulating an artificial reality, such as a video game, would be dimension (n+1). In this method, among others, four key assumptions are made about the nature of the original computer dimension n. Summations show that regressing such a reality infinitely will create convergence, implying that it is feasible, with adequate computing capability, to detect whether local reality is a grand simulation. The action of reaching said convergence point halts the simulation of local reality. Sensitivities to the four assumptions and their implications are discussed.
Analogical reasoning and aging: the processing speed and inhibition hypothesis.
Bugaiska, Aurélia; Thibaut, Jean-Pierre
2015-01-01
This study was designed to investigate the effect of aging on analogical reasoning by manipulating the strength of semantic association (LowAssoc or HighAssoc) and the number of semantic distractors in analogies of the A:B::C:D type, and to determine which factors might be responsible for age-related differences in analogical reasoning by testing two theoretical frameworks: the inhibition hypothesis and the speed mediation hypothesis. We compared young adults and two groups of aging people (old and old-old) with word analogies of the A:B::C:D format. Results indicate an age-related effect on analogical reasoning, this effect being greatest with LowAssoc analogies. It was not associated with the presence of semantic distractors. Moreover, the results show that the age-related variance in the analogy task was mainly explained by processing speed (rather than by inhibition) in old participants, and by both processing speed and inhibition in the old-old group. These results are discussed in relation to current models of aging and their interaction with the processes involved in analogical reasoning.
Intraclutch variation in avian eggshell pigmentation: the anaemia hypothesis.
De Coster, Greet; De Neve, Liesbeth; Lens, Luc
2012-10-01
Many passerine species lay eggs that are speckled with dark protoporphyrin pigmentation. Because protoporphyrin is mainly derived from the blood, we here formulate and test a new hypothesis that links an increase in anaemia along the laying sequence to within-clutch variation in egg pigmentation. More intense pigmentation is expected if pigments accumulate during enhanced red blood cell production in response to anaemia. Reduced pigmentation is expected if pigments are derived from the degradation of red blood cells that circulate in smaller numbers due to blood loss. To test this hypothesis, we manipulated anaemia in great tit (Parus major) females by infesting the nests with hen fleas (Ceratophyllus gallinae) prior to egg laying. Polychromatophil (i.e., immature red blood cells) percentage, as a measure of blood cell production, was positively correlated with parasite load confirming that female great tits experienced stronger anaemia when infested with haematophagous parasites during egg laying. We found a positive relationship between spot darkness and laying order that weakened under high parasite load. This result suggests that anaemia in females due to blood-sucking parasites led to diminished protoporphyrin from disintegrated red blood cells and hence a decreased deposition of protoporphyrin. However, the overall increase in pigment darkness along the laying sequence suggests that pigments also accumulate by enhanced red blood cell production caused by anaemia due to egg production itself.
The Matter-Gravity Entanglement Hypothesis
Kay, Bernard S.
2018-03-01
I outline some of my work and results (some dating back to 1998, some more recent) on my matter-gravity entanglement hypothesis, according to which the entropy of a closed quantum gravitational system is equal to the system's matter-gravity entanglement entropy. The main arguments presented are: (1) that this hypothesis is capable of resolving what I call the second-law puzzle, i.e. the puzzle as to how the entropy increase of a closed system can be reconciled with the assumption of unitary time-evolution; (2) that the black hole information loss puzzle may be regarded as a special case of this second law puzzle and that therefore the same resolution applies to it; (3) that the black hole thermal atmosphere puzzle (which I recall) can be resolved by adopting a radically different-from-usual description of quantum black hole equilibrium states, according to which they are total pure states, entangled between matter and gravity in such a way that the partial states of matter and gravity are each approximately thermal equilibrium states (at the Hawking temperature); (4) that the Susskind-Horowitz-Polchinski string-theoretic understanding of black hole entropy as the logarithm of the degeneracy of a long string (which is the weak string coupling limit of a black hole) cannot be quite correct but should be replaced by a modified understanding according to which it is the entanglement entropy between a long string and its stringy atmosphere, when in a total pure equilibrium state in a suitable box, which (in line with (3)) goes over, at strong coupling, to a black hole in equilibrium with its thermal atmosphere. The modified understanding in (4) is based on a general result, which I also describe, which concerns the likely state of a quantum system when it is weakly coupled to an energy-bath and the total state is a random pure state with a given energy. This result generalizes Goldstein et al.'s 'canonical typicality' result to systems which are not necessarily small.
Persistent Confusions about Hypothesis Testing in the Social Sciences
Directory of Open Access Journals (Sweden)
Christopher Thron
2015-05-01
This paper analyzes common confusions involving basic concepts in statistical hypothesis testing. One-third of the social science statistics textbooks examined in the study contained false statements about significance level and/or p-value. We infer that a large proportion of social scientists are being miseducated about these concepts. We analyze the causes of these persistent misunderstandings, and conclude that the conventional terminology is prone to abuse because it does not clearly represent the conditional nature of probabilities and events involved. We argue that modifications in terminology, as well as the explicit introduction of conditional probability concepts and notation into the statistics curriculum in the social sciences, are necessary to prevent the persistence of these errors.
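A small Python simulation of the conditional probability the paper says is so often misread: the significance level is P(reject | H0 true), not the probability that H0 is true given a rejection (sample sizes and seed are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, trials = 0.05, 30, 10_000

rejections = 0
for _ in range(trials):
    sample = rng.normal(0.0, 1.0, n)          # H0 (mean = 0) is true here
    rejections += stats.ttest_1samp(sample, 0.0).pvalue < alpha

# The long-run rejection rate conditioned on a true null approximates alpha.
print(f"empirical P(reject | H0) = {rejections / trials:.3f} (nominal {alpha})")
```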
Brain Evolution and Human Neuropsychology: The Inferential Brain Hypothesis
Koscik, Timothy R.; Tranel, Daniel
2013-01-01
Collaboration between human neuropsychology and comparative neuroscience has generated invaluable contributions to our understanding of human brain evolution and function. Further cross-talk between these disciplines has the potential to continue to revolutionize these fields. Modern neuroimaging methods could be applied in a comparative context, yielding exciting new data with the potential of providing insight into brain evolution. Conversely, incorporating an evolutionary base into the theoretical perspectives from which we approach human neuropsychology could lead to novel hypotheses and testable predictions. In the spirit of these objectives, we present here a new theoretical proposal, the Inferential Brain Hypothesis, whereby the human brain is thought to be characterized by a shift from perceptual processing to inferential computation, particularly within the social realm. This shift is believed to be a driving force for the evolution of the large human cortex.
Testing the hypothesis that treatment can eliminate HIV
DEFF Research Database (Denmark)
Okano, Justin T; Robbins, Danielle; Palk, Laurence
2016-01-01
BACKGROUND: Worldwide, approximately 35 million individuals are infected with HIV; about 25 million of these live in sub-Saharan Africa. WHO proposes using treatment as prevention (TasP) to eliminate HIV. Treatment suppresses viral load, decreasing the probability that an individual transmits HIV. The elimination threshold is one new HIV infection per 1000 individuals. Here, we test the hypothesis that TasP can substantially reduce epidemics and eliminate HIV. We estimate the impact of TasP, between 1996 and 2013, on the Danish HIV epidemic in men who have sex with men (MSM), an epidemic UNAIDS has identified as a priority for elimination. METHODS: We use a CD4-staged Bayesian back-calculation approach to estimate incidence and the hidden epidemic (the number of HIV-infected undiagnosed MSM). To develop the back-calculation model, we use data from an ongoing nationwide population-based study
Inflammation and the Two-Hit Hypothesis of Schizophrenia
Feigenson, Keith A.; Kusnecov, Alex W.; Silverstein, Steven M.
2014-01-01
The high societal and individual cost of schizophrenia necessitates finding better, more effective treatment, diagnosis, and prevention strategies. One of the obstacles in this endeavor is the diverse set of etiologies that comprises schizophrenia. A substantial body of evidence has grown over the last few decades to suggest that schizophrenia is a heterogeneous syndrome with overlapping symptoms and etiologies. At the same time, an increasing number of clinical, epidemiological, and experimental studies have shown links between schizophrenia and inflammatory conditions. In this review, we analyze the literature on inflammation and schizophrenia, with a particular focus on comorbidity, biomarkers, and environmental insults. We then identify several mechanisms by which inflammation could influence the development of schizophrenia via the two-hit hypothesis. Lastly, we note the relevance of these findings to clinical applications in the diagnosis, prevention, and treatment of schizophrenia.
Planned Hypothesis Tests Are Not Necessarily Exempt From Multiplicity Adjustment
Directory of Open Access Journals (Sweden)
Andrew V. Frane
2015-10-01
Scientific research often involves testing more than one hypothesis at a time, which can inflate the probability that a Type I error (false discovery) will occur. To prevent this Type I error inflation, adjustments can be made to the testing procedure that compensate for the number of tests. Yet many researchers believe that such adjustments are inherently unnecessary if the tests were "planned" (i.e., if the hypotheses were specified before the study began). This longstanding misconception continues to be perpetuated in textbooks and continues to be cited in journal articles to justify disregard for Type I error inflation. I critically evaluate this myth and examine its rationales and variations. To emphasize the myth's prevalence and relevance in current research practice, I provide examples from popular textbooks and from recent literature. I also make recommendations for improving research practice and pedagogy regarding this problem and regarding multiple testing in general.
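A short simulation of the inflation at issue, with and without a Bonferroni adjustment, for m true-null tests (all parameters arbitrary); whether the m tests were planned in advance changes nothing in the arithmetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
m, alpha, trials = 10, 0.05, 5_000

any_raw = any_bonf = 0
for _ in range(trials):
    p = np.array([stats.ttest_1samp(rng.normal(size=20), 0.0).pvalue
                  for _ in range(m)])
    any_raw += (p < alpha).any()        # unadjusted familywise error
    any_bonf += (p < alpha / m).any()   # Bonferroni-adjusted threshold
print(f"unadjusted FWER ~ {any_raw / trials:.3f}; "
      f"Bonferroni FWER ~ {any_bonf / trials:.3f}")   # ~0.40 vs ~0.05
```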
[New hypothesis on the replication of centrioles and basal bodies].
Mignot, J P
1996-12-01
Certain morphological data, obtained in studies of the ultrastructure of centrioles and basal bodies in cells of metazoa and protists, lead us to think that the cartwheel represents one of the most appropriate organizations for a self-reproducing and transmissible centriolar organizer. Centrioles and basal bodies might then not be simply the centres of replication of those organizers, but also reservoirs containing several superposed centriolar organizers, which are released depending on the requirements of the cell. As an isolated cartwheel is extremely unlikely to be detected, either in conventional electron microscopy or in immunocytochemistry, it is thus the reservoir which has so far been under consideration. Such a hypothesis would permit the explanation that biogenesis de novo and biogenesis in proximity to preexisting organelles may differ only in terms of the number of morphogenetic units involved.
DEFF Research Database (Denmark)
Jones, Allan; Sommerlund, Bo
2007-01-01
The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power analysis in estimating the potential error introduced by small and large samples is advocated. Power analysis is not recommended as a replacement for NHST but as an additional source of information about the phenomena under investigation. Moreover, the importance of conceptual analysis in relation to statistical analysis of hypothesis
Marginal contrasts and the Contrastivist Hypothesis
Directory of Open Access Journals (Sweden)
Daniel Currie Hall
2016-12-01
The Contrastivist Hypothesis (CH; Hall 2007; Dresher 2009) holds that the only features that can be phonologically active in any language are those that serve to distinguish phonemes, which presupposes that phonemic status is categorical. Many researchers, however, demonstrate the existence of gradient relations. For instance, Hall (2009) quantifies these using the information-theoretic measure of entropy (unpredictability of distribution) and shows that a pair of sounds may have an entropy between 0 (totally predictable) and 1 (totally unpredictable). We argue that the existence of such intermediate degrees of contrastiveness does not make the CH untenable, but rather offers insight into contrastive hierarchies. The existence of a continuum does not preclude categorical distinctions: a categorical line can be drawn between zero entropy (entirely predictable, and thus by the CH phonologically inactive) and non-zero entropy (at least partially contrastive, and thus potentially phonologically active). But this does not mean that intermediate degrees of surface contrastiveness are entirely irrelevant to the CH; rather, we argue, they can shed light on how deeply ingrained a phonemic distinction is in the phonological system. As an example, we provide a case study from Pulaar [ATR] harmony, which has previously been claimed to be problematic for the CH.
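A minimal Python sketch of the entropy measure described above, which assigns a pair of sounds a value between 0 (fully predictable) and 1 (fully contrastive); the corpus counts are invented:

```python
import math

def pair_entropy(count_a: int, count_b: int) -> float:
    """Entropy in bits of the choice between two sounds in an environment."""
    total = count_a + count_b
    h = 0.0
    for c in (count_a, count_b):
        if c:
            p = c / total
            h -= p * math.log2(p)
    return h

print(pair_entropy(50, 50))    # 1.0   -> fully unpredictable (contrastive)
print(pair_entropy(90, 10))    # ~0.47 -> intermediate, marginal contrast
print(pair_entropy(100, 0))    # 0.0   -> fully predictable (allophonic)
```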
The Stem Cell Hypothesis of Aging
Directory of Open Access Journals (Sweden)
Anna Meiliana
2010-04-01
BACKGROUND: There is probably no single way to age. Indeed, so far there is no single accepted explanation or mechanism of aging (although more than 300 theories have been proposed). There is an overall decline in tissue regenerative potential with age, and the question arises as to whether this is due to the intrinsic aging of stem cells or rather to the impairment of stem cell function in the aged tissue environment. CONTENT: Recent data suggest that we age, in part, because our self-renewing stem cells grow old as a result of heritable intrinsic events, such as DNA damage, as well as extrinsic forces, such as changes in their supporting niches. Mechanisms that suppress the development of cancer, such as senescence and apoptosis, which rely on telomere shortening and the activities of p53 and p16INK4a, may also induce an unwanted consequence: a decline in the replicative function of certain stem cell types with advancing age. This decreased regenerative capacity points to the stem cell hypothesis of aging. SUMMARY: Recent evidence suggests that we grow old partly because our stem cells grow old as a result of mechanisms that suppress the development of cancer over a lifetime. We believe that a further, more precise mechanistic understanding of this process will be required before this knowledge can be translated into human anti-aging therapies. KEYWORDS: stem cells, senescence, telomere, DNA damage, epigenetic, aging.
Confabulation: Developing the 'emotion dysregulation' hypothesis.
Turnbull, Oliver H; Salas, Christian E
2017-02-01
Confabulations offer unique opportunities for establishing the neurobiological basis of delusional thinking. As regards causal factors, a review of the confabulation literature suggests that neither amnesia nor executive impairment can be the sole (or perhaps even the primary) cause of all delusional beliefs - though they may act in concert with other factors. A key perspective in the modern literature is that many delusions have an emotionally positive or 'wishful' element, that may serve to modulate or manage emotional experience. Some authors have referred to this perspective as the 'emotion dysregulation' hypothesis. In this article we review the theoretical underpinnings of this approach, and develop the idea by suggesting that the positive aspects of confabulatory states may have a role in perpetuating the imbalance between cognitive control and emotion. We draw on existing evidence from fields outside neuropsychology, to argue for three main causal factors: that positive emotions are related to more global or schematic forms of cognitive processing; that positive emotions influence the accuracy of memory recollection; and that positive emotions make people more susceptible to false memories. These findings suggest that the emotions that we want to feel (or do not want to feel) can influence the way we reconstruct past experiences and generate a sense of self - a proposition that bears on a unified theory of delusional belief states.
Environmental Kuznets Curve Hypothesis. A Survey
International Nuclear Information System (INIS)
Dinda, Soumyananda
2004-01-01
The Environmental Kuznets Curve (EKC) hypothesis postulates an inverted-U-shaped relationship between different pollutants and per capita income, i.e., environmental pressure increases up to a certain level as income goes up; after that, it decreases. An EKC actually reveals how a technically specified measurement of environmental quality changes as the fortunes of a country change. A sizeable literature on the EKC has grown in recent years. The common point of all the studies is the assertion that environmental quality deteriorates at the early stages of economic development/growth and subsequently improves at the later stages. In other words, environmental pressure increases faster than income at early stages of development and slows down relative to GDP growth at higher income levels. This paper reviews some theoretical developments and empirical studies dealing with the EKC phenomenon. Possible explanations for the EKC are seen in (1) the progress of economic development, from clean agrarian economy to polluting industrial economy to clean service economy; (2) the tendency of people with higher incomes to have a stronger preference for environmental quality, etc. Evidence of the existence of the EKC has been questioned from several corners. Only some air quality indicators, especially local pollutants, show evidence of an EKC. Moreover, even where an EKC is empirically observed, there is still no agreement in the literature on the income level at which environmental degradation starts declining. This paper provides an overview of the EKC literature, background history, conceptual insights, policy implications and the conceptual and methodological critique.
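The inverted-U claim is typically examined with a reduced-form regression of (log) environmental pressure on (log) income and its square; a minimal sketch with synthetic data (all coefficients hypothetical), where an EKC requires a negative coefficient on the squared term:

```python
import numpy as np

rng = np.random.default_rng(3)
ln_y = rng.uniform(6, 11, 300)                     # log per-capita income
ln_e = -20 + 5.0 * ln_y - 0.28 * ln_y**2 \
       + rng.normal(0, 0.3, 300)                   # log environmental pressure

# Reduced-form EKC regression: ln E = a + b ln y + c (ln y)^2
X = np.column_stack([np.ones_like(ln_y), ln_y, ln_y**2])
a, b, c = np.linalg.lstsq(X, ln_e, rcond=None)[0]

# Turning-point income at which degradation starts declining: exp(-b / 2c)
print(f"b={b:.2f}, c={c:.2f}, turning point ~ {np.exp(-b / (2 * c)):.0f}")
```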
DAMPs, ageing, and cancer: The 'DAMP Hypothesis'.
Huang, Jin; Xie, Yangchun; Sun, Xiaofang; Zeh, Herbert J; Kang, Rui; Lotze, Michael T; Tang, Daolin
2015-11-01
Ageing is a complex and multifactorial process characterized by the accumulation of many forms of damage at the molecular, cellular, and tissue level with advancing age. Ageing increases the risk of the onset of chronic inflammation-associated diseases such as cancer, diabetes, stroke, and neurodegenerative disease. In particular, ageing and cancer share some common origins and hallmarks such as genomic instability, epigenetic alteration, aberrant telomeres, inflammation and immune injury, reprogrammed metabolism, and degradation system impairment (including within the ubiquitin-proteasome system and the autophagic machinery). Recent advances indicate that damage-associated molecular pattern molecules (DAMPs) such as high mobility group box 1, histones, S100, and heat shock proteins play location-dependent roles inside and outside the cell. These provide interaction platforms at molecular levels linked to common hallmarks of ageing and cancer. They can act as inducers, sensors, and mediators of stress through individual plasma membrane receptors, intracellular recognition receptors (e.g., advanced glycosylation end product-specific receptors, AIM2-like receptors, RIG-I-like receptors, NOD1-like receptors, and toll-like receptors), or following endocytic uptake. Thus, the DAMP Hypothesis is novel and complements other theories that explain the features of ageing. DAMPs represent ideal biomarkers of ageing and provide an attractive target for interventions in ageing and age-associated diseases.
Identity of Particles and Continuum Hypothesis
Berezin, Alexander A.
2001-04-01
Why are all electrons the same? Unlike other objects, particles and atoms (of the same isotope) are forbidden to have individuality or a personal history (or to reveal their hidden variables, even if they do have them). Or at least, what we commonly call physics has so far been unable to disprove particle sameness (Berezin and Nakhmanson, Physics Essays, 1990). Consider two opposing hypotheses: (A) particles are indeed absolutely the same, or (B) they do have individuality, but it is beyond our capacity to demonstrate it. This dilemma sounds akin to the undecidability of the Continuum Hypothesis on the existence (or not) of intermediate cardinalities between the integers and the reals (P. Cohen): both its yes and its no are true. Thus, the (alleged) sameness of electrons and atoms may be a physical translation (embodiment) of this fundamental Goedelian undecidability. Experiments are unlikely to help: even if we find that all electrons are the same to within 30 decimal digits, could their masses (or charges) still differ in the 100th digit? Within (B), personalized, informationally rich (infinitely rich?) digital tails (starting at, say, the 100th decimal) may carry an individual record of each particle's history. Within (A), the parameters (m, q) are indeed exactly the same in all digits, and their sameness is based on some inherent (meta)physical principle akin to Platonism or Eddington-type numerology.
Dissimilarities of reduced density matrices and eigenstate thermalization hypothesis
He, Song; Lin, Feng-Li; Zhang, Jia-ju
2017-12-01
We calculate various quantities that characterize the dissimilarity of reduced density matrices for a short interval of length ℓ in a two-dimensional (2D) large central charge conformal field theory (CFT). These quantities include the Rényi entropy, entanglement entropy, relative entropy, Jensen-Shannon divergence, as well as the Schatten 2-norm and 4-norm. We adopt the method of operator product expansion of twist operators, and calculate the short interval expansion of these quantities up to order ℓ⁹ for the contributions from the vacuum conformal family. The formal forms of these dissimilarity measures and the derived Fisher information metric from contributions of general operators are also given. As an application of the results, we use these dissimilarity measures to compare the excited and thermal states, and examine the eigenstate thermalization hypothesis (ETH) by showing how they behave in the high temperature limit. This would help in understanding how ETH in 2D CFT can be defined more precisely. We discuss the possibility that all the dissimilarity measures considered here vanish when comparing the reduced density matrices of an excited state and a generalized Gibbs ensemble thermal state. We also discuss ETH for a microcanonical ensemble thermal state in a 2D large central charge CFT, and find that it is approximately satisfied for a small subsystem and violated for a large subsystem.
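For orientation, the textbook definitions of the main dissimilarity measures named above (standard forms, not the paper's short-interval expansions):

```latex
S_n(\rho) = \frac{1}{1-n}\,\log\operatorname{Tr}\rho^{\,n}, \qquad
S(\rho) = -\operatorname{Tr}\rho\log\rho = \lim_{n\to 1} S_n(\rho), \\
S(\rho\,\|\,\sigma) = \operatorname{Tr}(\rho\log\rho) - \operatorname{Tr}(\rho\log\sigma), \qquad
\|\rho-\sigma\|_n = \bigl(\operatorname{Tr}|\rho-\sigma|^{\,n}\bigr)^{1/n}.
```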
A nonparametric hypothesis test via the bootstrap resampling
Temel, Tugrul T.
2001-01-01
This paper adapts an already existing nonparametric hypothesis test to the bootstrap framework. The test utilizes the nonparametric kernel regression method to estimate a measure of distance between the models stated under the null hypothesis. The bootstrapped version of the test allows one to approximate the errors involved in the asymptotic hypothesis test. The paper also develops Mathematica code for the test algorithm.
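A generic bootstrap hypothesis test in Python, with a plain mean-difference statistic standing in for the paper's kernel-regression distance (data and seeds are synthetic):

```python
import numpy as np

def bootstrap_test(x, y, n_boot=10_000, seed=0):
    """Two-sample bootstrap test of equal means: resample from the
    pooled data (imposing H0) and compare to the observed statistic."""
    rng = np.random.default_rng(seed)
    t_obs = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_boot):
        xb = rng.choice(pooled, size=len(x), replace=True)
        yb = rng.choice(pooled, size=len(y), replace=True)
        count += abs(xb.mean() - yb.mean()) >= t_obs
    return count / n_boot               # bootstrap p-value

x = np.random.default_rng(1).normal(0.0, 1.0, 40)
y = np.random.default_rng(2).normal(0.5, 1.0, 40)
print(bootstrap_test(x, y))
```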
[Psychodynamic hypothesis about suicidality in elderly men].
Lindner, Reinhard
2010-08-01
Old men are overrepresented among suicides as a whole. In contrast, only very few elderly men find their way to specialised treatment facilities, and the elderly accept psychotherapy more rarely than younger persons. Accounts of the psychodynamics of suicidality in old men are therefore rare and mostly casuistic. By means of a stepwise reconstructable qualitative case comparison of five randomly chosen elderly suicidal men with ideal types of (younger) suicidal men - concerning biography, suicidal symptoms and transference - psychodynamic hypotheses about suicidality in elderly men are developed. All patients came into psychotherapy in a specialised academic out-patient clinic for the psychodynamic treatment of acute and chronic suicidality. The five elderly suicidal men predominantly lived in long-term, conflict-laden sexual relationships and also had ambivalent relationships with their children. Suicidality in old age refers back to lifelong intrapsychic conflicts concerning (male) identity, self-esteem, and a core conflict between fusion and separation wishes. The body takes a central role in suicidal experience, acting as a defensive instance modified by age and/or physical illness: it brings to consciousness aggressive and envious impulses, but also feelings of emptiness and insecurity, which have to be warded off again by projection into the body. In the transference relationship there is, on the one hand, the regular transference and, on the other, an age-specific reversed transference, each with its own countertransference reactions. The chosen methodological approach serves the systematic generation of hypotheses with a higher degree of evidence than hypotheses generated from single case studies.
Atopic dermatitis and the hygiene hypothesis revisited.
Flohr, Carsten; Yeo, Lindsey
2011-01-01
We published a systematic review on atopic dermatitis (AD) and the hygiene hypothesis in 2005. Since then, the body of literature has grown significantly. We therefore repeated our systematic review to examine the evidence from population-based studies for an association between AD risk and specific infections, childhood immunizations, the use of antibiotics and environmental exposures that lead to a change in microbial burden. Medline was searched from 1966 until June 2010 to identify relevant studies. We found an additional 49 papers suitable for inclusion. There is evidence to support an inverse relationship between AD and endotoxin, early day care, farm animal and dog exposure in early life. Cat exposure in the presence of skin barrier impairment is positively associated with AD. Helminth infection at least partially protects against AD. This is not the case for viral and bacterial infections, but consumption of unpasteurized farm milk seems protective. Routine childhood vaccinations have no effect on AD risk. The positive association between viral infections and AD found in some studies appears confounded by antibiotic prescription, which has been consistently associated with an increase in AD risk. There is convincing evidence for an inverse relationship between helminth infections and AD but no other pathogens. The protective effect seen with early day care, endotoxin, unpasteurized farm milk and animal exposure is likely to be due to a general increase in exposure to non-pathogenic microbes. This would also explain the risk increase associated with the use of broad-spectrum antibiotics. Future studies should assess skin barrier gene mutation carriage and phenotypic skin barrier impairment, as gene-environment interactions are likely to impact on AD risk.
Random number generation and creativity.
Bains, William
2008-01-01
A previous paper suggested that humans can generate genuinely random numbers. I tested this hypothesis by repeating the experiment with a larger number of highly numerate subjects, asking them to call out a sequence of digits selected from 0 through 9. The resulting sequences were substantially non-random, with an excess of sequential pairs of numbers and a deficit of repeats of the same number, in line with previous literature. However, the previous literature suggests that humans generate random numbers with substantial conscious effort, and that distractions which reduce that effort reduce the randomness of the numbers. I reduced my subjects' concentration by asking them to call out in another language, and with alcohol - neither affected the randomness of their responses. This suggests that the ability to generate random numbers is a 'basic' function of the human mind, even if those numbers are not mathematically 'random'. I hypothesise that there is a 'creativity' mechanism which, while not truly random, provides novelty as part of the mind's defence against closed programming loops, and that testing for the effects seen here in people more or less familiar with numbers or with spontaneous creativity could identify more features of this process. It is possible that training to perform better at simple random generation tasks could help to increase creativity, by training people to reduce the conscious mind's suppression of the 'spontaneous', creative response to new questions.
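A small Python sketch (digits invented) of the two sequence diagnostics the abstract reports - an excess of sequential pairs and a deficit of immediate repeats relative to the 1/10 chance rate for each kind of pair:

```python
def sequence_stats(digits):
    """Count immediate repeats (d, d) and ascending pairs (d, d+1 mod 10)."""
    pairs = list(zip(digits, digits[1:]))
    repeats = sum(a == b for a, b in pairs)
    ascents = sum((a + 1) % 10 == b for a, b in pairs)
    return {"pairs": len(pairs), "repeats": repeats,
            "sequential": ascents, "expected_each": len(pairs) / 10}

print(sequence_stats([3, 4, 7, 7, 1, 2, 9, 0, 5, 6, 6, 2]))
```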
Late Pleistocene Megafaunal Extinction Consistent With YDB Impact Hypothesis at Younger Dryas Onset
Kennett, J. P.; Kennett, D. J.
2008-12-01
At least 35 mammal and 19 bird genera became extinct across North America near the end of the Pleistocene. Modern increases in stratigraphic and dating resolution suggest that this extinction occurred relatively rapidly near 12.9 ka (11 radiocarbon kyrs). Within the context of a long-standing debate about its cause, Firestone et al. (2007) proposed that this extinction resulted from an extraterrestrial (ET) impact over North America at 12.9 ka. This hypothesis predicts that the extinction of most of these animals should have occurred abruptly at 12.9 ka. To test this hypothesis, we have critically examined radiocarbon ages and the extinction stratigraphy of these taxa. From a large data pool, we selected only radiocarbon dates with low error margins, with a preference for directly dated biological materials (e.g., bone, dung, etc.) and modern chemical purification techniques. A relatively small number of acceptable dates indicate that at least 16 animal genera and several other species became extinct close to 12.9 ka. These taxa include the most common animals of the late Pleistocene such as horses, camels, and mammoths. Also, the remains of extinct taxa are reportedly found up to, but not above, the base of a widely distributed carbon-rich layer called the black mat. This stratum forms an abrupt, major biostratigraphic boundary at the Younger Dryas onset (12.9 ka), which also contains multiple ET markers comprising the impact layer (the YDB). Surviving animal populations were abruptly reduced at the YDB (e.g., Bison), with major range restrictions and apparent evolutionary bottlenecks. The abruptness of this major extinction is inconsistent with the hypotheses of human overkill and climatic change. We argue that extinction ages older than 12.9 ka for many less common species result from the Signor-Lipps effect, but the impact hypothesis predicts that as new dates are acquired, they will approach ever closer to 12.9 ka. The megafaunal extinction is strongly
Unaware Memory in Hypothesis Generation Tasks
1986-12-01
Human Learning and Memory, 2, 554-565. Lockhart, R. S., Craik, F. I. M., & Jacoby, L. L. (1976). Depth of processing, recognition and recall: Some... hypotheses were primed by study items and that priming was unrelated to recognition performance. Level of processing of the study items influenced... result is consistent with a large body of research that relates processing depth to recognition and other forms of deliberate remembering (e.g., Craik
A gentilion hypothesis for quark colours
International Nuclear Information System (INIS)
Cattani, M.S.D.; Fernandes, N.C.
1984-01-01
Extending Noether's theorem, it is possible to identify the colour quantum numbers with the eigenvalue of an S^(3) algebra invariant. In the gentilion approach, the composition of the coloured S^(3) with the symmetric quark model seems to constitute an exact symmetry of nature. Some general properties related to observationality in Quantum Mechanics are also discussed and the universality of Gentile statistics is asserted.
Introducing the refined gravity hypothesis of extreme sexual size dimorphism
Directory of Open Access Journals (Sweden)
Corcobado Guadalupe
2010-08-01
Background: Explanations for the evolution of female-biased, extreme Sexual Size Dimorphism (SSD), which has puzzled researchers since Darwin, are still controversial. Here we propose an extension of the Gravity Hypothesis (GH), which postulates a climbing advantage for small males and which, in conjunction with the fecundity hypothesis, appears to have the most general power to explain the evolution of SSD in spiders so far. In this "Bridging GH" we propose that bridging locomotion (i.e., walking upside-down under own-made silk bridges) may be behind the evolution of extreme SSD. A biomechanical model shows that there is a physical constraint for large spiders to bridge. This should lead to a trade-off between other traits and dispersal, in which bridging would favor smaller sizes and other selective forces (e.g., fecundity selection in females) would favor larger sizes. If bridging allows faster dispersal, small males would have a selective advantage by enjoying more mating opportunities. We predicted that both large males and large females would show a lower propensity to bridge, and that SSD would be negatively correlated with sexual dimorphism in bridging propensity. To test these hypotheses we experimentally induced bridging in males and females of 13 species of spiders belonging to the two clades in which bridging locomotion has evolved independently and in which most of the cases of extreme SSD in spiders are found. Results: We found that (1) as the degree of SSD increased and females became larger, females tended to bridge less relative to males, and (2) smaller males and females show a higher propensity to bridge. Conclusions: Physical constraints make bridging inefficient for large spiders. Thus, in species where bridging is a very common mode of locomotion, small males, by being more efficient at bridging, will be competitively superior and enjoy more mating opportunities. This "Bridging GH" helps to solve the controversial question of
Lynn White Jr. and the greening-of-religion hypothesis.
Taylor, Bron; Van Wieren, Gretel; Zaleha, Bernard Daley
2016-10-01
Lynn White Jr.'s "The Historical Roots of Our Ecologic Crisis," which was published in Science in 1967, has played a critical role in precipitating interdisciplinary environmental studies. Although White advances a multifaceted argument, most respondents focus on his claim that the Judeo-Christian tradition, especially Christianity, has promoted anthropocentric attitudes and environmentally destructive behaviors. Decades later, some scholars argue contrarily that Christianity in particular and the world's predominant religions in general are becoming more environmentally friendly, known as the greening-of-religion hypothesis. To test these claims, we conducted a comprehensive review of over 700 articles-historical, qualitative, and quantitative-that are pertinent to them. Although definitive conclusions are difficult, we identified many themes and dynamics that hinder environmental understanding and mobilization, including conservative theological orientations and beliefs about the role of divine agency in preventing or promoting natural events, whether the religion is an Abrahamic tradition or originated in Asia. On balance, we found the thrust of White's thesis is supported, whereas the greening-of-religion hypothesis is not. We also found that indigenous traditions often foster proenvironmental perceptions. This finding suggests that indigenous traditions may be more likely to be proenvironmental than other religious systems and that some nature-based cosmologies and value systems function similarly. Although we conclude White's thesis and subsequent claims are largely born out, additional research is needed to better understand under what circumstances and communication strategies religious or other individuals and groups may be more effectively mobilized to respond to contemporary environmental challenges. © 2016 Society for Conservation Biology.
Nonthermal effects of therapeutic ultrasound: the frequency resonance hypothesis.
Johns, Lennart D
2002-07-01
To present the frequency resonance hypothesis, a possible mechanical mechanism by which treatment with nonthermal levels of ultrasound stimulates therapeutic effects. The review encompasses a 4-decade history but focuses on recent reports describing the effects of nonthermal therapeutic levels of ultrasound at the cellular and molecular levels. A search of MEDLINE from 1965 through 2000 using the terms ultrasound and therapeutic ultrasound. The literature provides a number of examples in which exposure of cells to therapeutic ultrasound under nonthermal conditions modified cellular functions. Nonthermal levels of ultrasound are reported to modulate membrane properties, alter cellular proliferation, and produce increases in proteins associated with inflammation and injury repair. Combined, these data suggest that the nonthermal effects of therapeutic ultrasound can modify the inflammatory response. The concept of the absorption of ultrasonic energy by enzymatic proteins leading to changes in the enzyme's activity is not novel. However, recent reports demonstrating that ultrasound affects enzyme activity and possibly gene regulation provide sufficient data to present a probable molecular mechanism of ultrasound's nonthermal therapeutic action. The frequency resonance hypothesis describes 2 possible biological mechanisms that may alter protein function as a result of the absorption of ultrasonic energy. First, absorption of mechanical energy by a protein may produce a transient conformational shift (modifying the 3-dimensional structure) and alter the protein's functional activity. Second, the resonance or shearing properties of the wave (or both) may dissociate a multimolecular complex, thereby disrupting the complex's function. This review focuses on recent studies that have reported cellular and molecular effects of therapeutic ultrasound and presents a mechanical mechanism that may lead to a better understanding of how the nonthermal effects of ultrasound may be
Sparse Representation Based Binary Hypothesis Model for Hyperspectral Image Classification
Directory of Open Access Journals (Sweden)
Yidong Tang
2016-01-01
The sparse representation based classifier (SRC) and its kernel version (KSRC) have been employed for hyperspectral image (HSI) classification. However, the state-of-the-art SRC often aims at extended surface objects with linear mixture in smooth scenes and assumes that the number of classes is given. Considering small targets with complex backgrounds, a sparse representation based binary hypothesis (SRBBH) model is established in this paper. In this model, a query pixel is represented in two ways: by the background dictionary alone and by the union dictionary. The background dictionary is composed of samples selected from the local dual concentric window centered at the query pixel. Thus, for each pixel the classification issue becomes an adaptive multiclass classification problem, where only the number of desired classes is required. Furthermore, the kernel method is employed to improve interclass separability. In kernel space, the coding vector is obtained by using the kernel-based orthogonal matching pursuit (KOMP) algorithm. The query pixel can then be labeled by the characteristics of the coding vectors. Instead of directly using the reconstruction residuals, the different impacts the background dictionary and union dictionary have on reconstruction are used for validation and classification. This enhances discrimination and hence improves performance.
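The binary-hypothesis decision the abstract describes can be sketched with ordinary linear orthogonal matching pursuit standing in for the paper's kernel-based KOMP. Everything below (dictionary sizes, sparsity level, and the residual-ratio threshold) is a hypothetical stand-in, not the SRBBH algorithm's actual settings:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(4)
bands = 100                                # number of spectral bands
D_bg = rng.normal(size=(bands, 40))        # hypothetical background dictionary
D_tgt = rng.normal(size=(bands, 10))       # hypothetical target dictionary
D_union = np.hstack([D_bg, D_tgt])         # union dictionary

query = D_tgt @ rng.random(10)             # a pixel mixing target atoms

def residual(D: np.ndarray, y: np.ndarray, k: int) -> float:
    """Norm of the reconstruction residual after a k-sparse OMP coding."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
    omp.fit(D, y)
    return float(np.linalg.norm(y - D @ omp.coef_))

# H0: background dictionary alone; H1: union dictionary.
r_bg, r_union = residual(D_bg, query, 10), residual(D_union, query, 10)
# Label by how much adding target atoms improves reconstruction
# (the 0.5 threshold is illustrative only).
label = "target" if r_union < 0.5 * r_bg else "background"
print(f"H0 residual = {r_bg:.3f}, H1 residual = {r_union:.3f} -> {label}")
```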
Woods, H. Arthur; Moran, Amy L.; Arango, Claudia P.; Mullen, Lindy; Shields, Chris
2008-01-01
Compared to temperate and tropical relatives, some high-latitude marine species are large-bodied, a phenomenon known as polar gigantism. A leading hypothesis on the physiological basis of gigantism posits that, in polar water, high oxygen availability coupled to low metabolic rates relieves constraints on oxygen transport and allows the evolution of large body size. Here, we test the oxygen hypothesis using Antarctic pycnogonids, which have been evolving in very cold conditions (−1.8 to 0°C) for several million years and contain spectacular examples of gigantism. Pycnogonids from 12 species, spanning three orders of magnitude in body mass, were collected from McMurdo Sound, Antarctica. Individual sea spiders were forced into activity and their performance was measured at different experimental levels of dissolved oxygen (DO). The oxygen hypothesis predicts that, all else being equal, large pycnogonids should perform disproportionately poorly in hypoxia, an outcome that would appear as a statistically significant interaction between body size and oxygen level. In fact, although we found large effects of DO on performance, and substantial interspecific variability in oxygen sensitivity, there was no evidence for size×DO interactions. These data do not support the oxygen hypothesis of Antarctic pycnogonid gigantism and suggest that explanations must be sought in other ecological or evolutionary processes. PMID:19129117
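The statistical test implied here (performance regressed on body size, oxygen level, and their interaction) can be sketched with simulated data; the variable names and effect sizes below are hypothetical, not the study's:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 200
df = pd.DataFrame({
    "log_mass": rng.uniform(-1.0, 2.0, n),        # log body mass (simulated)
    "oxygen": rng.choice([0.3, 0.5, 1.0], n),     # DO as fraction of saturation
})
# Simulate an oxygen effect but no size-by-oxygen interaction, mirroring the finding.
df["performance"] = 2.0 * df["oxygen"] - 0.2 * df["log_mass"] + rng.normal(0.0, 0.3, n)

# The oxygen hypothesis predicts a significant log_mass:oxygen term.
fit = smf.ols("performance ~ log_mass * oxygen", data=df).fit()
print("interaction p-value:", fit.pvalues["log_mass:oxygen"])  # large p here
```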
Praise the Bridge that Carries You Over: Testing the Flattery Citation Hypothesis
DEFF Research Database (Denmark)
Frandsen, Tove Faber; Nicolaisen, Jeppe
2011-01-01
…analysis of the editorial board members entering American Economic Review from 1984 to 2004, using a citation window of 11 years. In order to test the flattery citation hypothesis further, we have conducted a study applying the difference-in-difference estimator. We analyse the number of times the editors…
Testing the fire-sale FDI hypothesis for the European financial crisis
Weitzel, G.U.; Kling, G.; Gerritsen, D.
2014-01-01
Using a panel of corporate transactions in 27 EU countries from 1999 to 2012, we investigate the impact of the financial crisis on the market for corporate assets. In particular, we test the ‘fire-sale FDI’ hypothesis by analyzing the number of cross-border transactions and the price of corporate assets…
The Need for Nuance in the Null Hypothesis Significance Testing Debate
Häggström, Olle
2017-01-01
Null hypothesis significance testing (NHST) provides an important statistical toolbox, but there are a number of ways in which it is often abused and misinterpreted, with bad consequences for the reliability and progress of science. Parts of the contemporary NHST debate, especially in the psychological sciences, are reviewed, and a suggestion is made…
Earthquake number forecasts testing
Kagan, Yan Y.
2017-10-01
We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are required in particular to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for the Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters; the second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of the parameters of both the Poisson and NBD distributions on the catalogue magnitude threshold and on the temporal subdivision of the catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest the Poisson distribution can be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study higher statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values of the skewness and kurtosis increase for smaller magnitude thresholds, and increase even more strongly for small temporal subdivisions of catalogues. The Poisson distribution for large rate values approaches the Gaussian law, and therefore its skewness tends to zero.
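The Poisson-versus-NBD comparison described above is straightforward to reproduce on any vector of per-interval event counts. A minimal sketch with simulated counts standing in for a catalogue, using method-of-moments fits (the paper's own estimation procedure may differ):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-interval earthquake counts (overdispersed on purpose).
counts = rng.negative_binomial(2.0, 0.25, size=400)

# Poisson fit: the single parameter is the mean rate.
lam = counts.mean()

# NBD method-of-moments fit: with mean m and variance v (v > m required),
#   p = m / v  and  n = m**2 / (v - m).
m, v = counts.mean(), counts.var(ddof=1)
p, n = m / v, m * m / (v - m)

# The NBD's extra parameter captures clustering, so its likelihood is higher.
ll_pois = stats.poisson.logpmf(counts, lam).sum()
ll_nbd = stats.nbinom.logpmf(counts, n, p).sum()
print(f"Poisson logL = {ll_pois:.1f}, NBD logL = {ll_nbd:.1f}")

# Empirical higher moments, as compared against theory in the paper.
print("skewness =", stats.skew(counts), "excess kurtosis =", stats.kurtosis(counts))
```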
Directory of Open Access Journals (Sweden)
Chris Cheadle
2007-01-01
Background: Microarray technology has become highly valuable for identifying complex global changes in gene expression patterns. The assignment of functional information to these complex patterns remains a challenging task in effectively interpreting data and correlating results from across experiments, projects, and laboratories. Methods which allow the rapid and robust evaluation of multiple functional hypotheses increase the power of individual researchers to data mine gene expression data more efficiently. Results: We have developed gene set matrix analysis (GSMA) as a useful method for the rapid testing of group-wise up- or downregulation of gene expression simultaneously for multiple lists of genes (gene sets) against entire distributions of gene expression changes (datasets) for single or multiple experiments. The utility of GSMA lies in its flexibility to rapidly poll gene sets, related by known biological function or as designated solely by the end-user, against large numbers of datasets simultaneously. Conclusions: GSMA provides a simple and straightforward method for hypothesis testing in which genes are tested by groups across multiple datasets for patterns of expression enrichment.
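GSMA's group-wise polling of gene sets against a whole distribution of expression changes can be illustrated with a generic rank-based shift test. The Mann-Whitney statistic below is an assumption chosen for the sketch, not necessarily GSMA's own scoring:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical per-gene log2 fold changes and two gene sets (index arrays).
fold_changes = rng.normal(0.0, 1.0, size=10_000)
gene_sets = {
    "setA": rng.choice(10_000, 50, replace=False),
    "setB": rng.choice(10_000, 80, replace=False),
}
fold_changes[gene_sets["setA"]] += 0.8  # spike setA so the test finds something

for name, idx in gene_sets.items():
    in_set = fold_changes[idx]
    rest = np.delete(fold_changes, idx)
    # Is the set's fold-change distribution displaced relative to the dataset?
    u, pval = stats.mannwhitneyu(in_set, rest, alternative="two-sided")
    shift = np.median(in_set) - np.median(rest)
    print(f"{name}: median shift = {shift:+.2f}, p = {pval:.2e}")
```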
Is Dose Deformation–Invariance Hypothesis Verified in Prostate IGRT?
Energy Technology Data Exchange (ETDEWEB)
Simon, Antoine, E-mail: antoine.simon@univ-rennes1.fr [INSERM, U1099, 35000 Rennes (France); Laboratoire Traitement du Signal et de l'Image, Université de Rennes 1, 35000 Rennes (France); Le Maitre, Amandine; Nassef, Mohamed; Rigaud, Bastien [INSERM, U1099, 35000 Rennes (France); Laboratoire Traitement du Signal et de l'Image, Université de Rennes 1, 35000 Rennes (France); Castelli, Joël [INSERM, U1099, 35000 Rennes (France); Laboratoire Traitement du Signal et de l'Image, Université de Rennes 1, 35000 Rennes (France); Department of Radiotherapy, Centre Eugène Marquis, 35000 Rennes (France); Acosta, Oscar; Haigron, Pascal [INSERM, U1099, 35000 Rennes (France); Laboratoire Traitement du Signal et de l'Image, Université de Rennes 1, 35000 Rennes (France); Lafond, Caroline; Crevoisier, Renaud de [INSERM, U1099, 35000 Rennes (France); Laboratoire Traitement du Signal et de l'Image, Université de Rennes 1, 35000 Rennes (France); Department of Radiotherapy, Centre Eugène Marquis, 35000 Rennes (France)
2017-03-15
Purpose: To assess dose uncertainties resulting from the dose deformation–invariance hypothesis in prostate cone beam computed tomography (CT)–based image guided radiation therapy (IGRT), namely, to evaluate whether the rigidly propagated planned dose distribution enables a good estimation of the fraction dose distributions. Methods and Materials: Twenty patients underwent a CT scan for planning intensity modulated radiation therapy–IGRT delivering 80 Gy to the prostate, followed by weekly CT scans. Two methods were used to obtain the dose distributions on the weekly CT scans: (1) recalculating the dose using the original treatment plan; and (2) rigidly propagating the planned dose distribution. The cumulative doses were then estimated in the organs at risk for each dose distribution by deformable image registration. The differences between recalculated and propagated doses were finally calculated for the fraction and the cumulative dose distributions, using per-voxel and dose-volume histogram (DVH) metrics. Results: For the fraction dose, the mean per-voxel absolute dose difference was <1 Gy for 98% and 95% of the fractions for the rectum and bladder, respectively. The maximum dose difference within 1 voxel reached, however, 7.4 Gy in the bladder and 8.0 Gy in the rectum. The mean dose differences were correlated with gas volume for the rectum and with patient external contour variations for the bladder. The mean absolute differences for V_x (the volume receiving a dose greater than or equal to x) of the DVH were between 0.37% and 0.70% for the rectum and between 0.53% and 1.22% for the bladder. For the cumulative dose, the mean differences in the DVH were between 0.23% and 1.11% for the rectum and between 0.55% and 1.66% for the bladder. The largest dose difference was 6.86%, for bladder V_80Gy. The mean dose differences were <1.1 Gy for the rectum and <1 Gy for the bladder. Conclusions: The deformation–invariance hypothesis was…
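For reference, the V_x metric compared above is simply the fraction of an organ's voxels receiving at least dose x. A small sketch on simulated per-voxel doses (all numbers hypothetical):

```python
import numpy as np

def v_x(dose_gy: np.ndarray, x: float) -> float:
    """DVH metric V_x: percentage of voxels receiving a dose >= x Gy."""
    return float(np.mean(dose_gy >= x) * 100.0)

rng = np.random.default_rng(2)
# Hypothetical recalculated vs rigidly propagated bladder doses (Gy per voxel).
recalc = rng.normal(60.0, 12.0, size=50_000).clip(0.0, 82.0)
propagated = recalc + rng.normal(0.0, 1.0, size=recalc.size)

for x in (60.0, 70.0, 80.0):
    diff = abs(v_x(recalc, x) - v_x(propagated, x))
    print(f"|delta V_{x:.0f}Gy| = {diff:.2f} percentage points")
```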
International Nuclear Information System (INIS)
Malcolm J. Andrews
2006-01-01
This project had two major tasks: Task 1, the construction of a new air/helium facility to collect detailed measurements of Rayleigh-Taylor (RT) mixing at high Atwood number, and the distribution of these data to LLNL, LANL, and Alliance members for code validation and design purposes; and Task 2, the collection of initial condition data from the new air/helium facility, for use in validating RT simulation codes at LLNL and LANL. This report describes work done in the last twelve (12) months of the project and also contains a summary of the complete work done over the three (3) year life of the project. As of April 1, 2006, the air/helium facility (Task 1) is complete, and extensive testing and validation of diagnostics has been performed. The initial condition studies (Task 2) are also complete. Detailed experiments with air/helium at Atwood numbers up to 0.1 have been completed, as have experiments at an Atwood number of 0.25. Within the last three (3) months we have been able to run the facility successfully at Atwood numbers of 0.5. The progress matches the project plan, as does the budget. We have finished the initial condition studies using the water channel, and this work has been accepted for publication in the Journal of Fluid Mechanics (the top fluid mechanics journal). Mr. Nick Mueschke and Mr. Wayne Kraft are continuing with their studies to obtain PhDs in the same field, and will also continue their collaboration visits to LANL and LLNL. Over its three (3) year life the project has supported two (2) PhDs and three (3) MSs, and produced nine (9) international journal publications, twenty-four (24) conference publications, and numerous other reports. The highlight of the project has been our close collaboration with LLNL (Dr. Oleg Schilling) and LANL (Drs. Dimonte, Ristorcelli, Gore, and Harlow).
Alternative analysis: the prime numbers theory and an extension of the real numbers set
Sukhotin A.; Zvyagin M.
2016-01-01
Here we consider the theory of prime numbers using a new methodology. The theory of prime numbers is one of the most ancient branches of mathematics. We found an estimate of the sum of all prime numbers using the notions of infinitely large numbers and infinitely small numbers; further, we estimated the value of the maximal prime number. We proved that the Hardy–Littlewood Hypothesis also has a positive resolution. The infinitely small numbers define a new methodology for the application of the well-known function o(x)…
Investigating the Randomness of Numbers
Pendleton, Kenn L.
2009-01-01
The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…
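As a concrete taste of what evaluating randomness involves, here is one of the simplest checks, a chi-square frequency test of digit uniformity; real evaluations chain many such tests (the NIST SP 800-22 suite, for example), and the sample below is a hypothetical stand-in:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
digits = rng.integers(0, 10, size=100_000)  # sample whose randomness we probe

observed = np.bincount(digits, minlength=10)
# Null hypothesis: all ten digits are equally likely.
chi2, pval = stats.chisquare(observed)
print(f"chi2 = {chi2:.2f}, p = {pval:.3f}")  # a small p flags non-uniformity
```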
Number Sense on the Number Line
Woods, Dawn Marie; Ketterlin Geller, Leanne; Basaraba, Deni
2018-01-01
A strong foundation in early number concepts is critical for students' future success in mathematics. Research suggests that visual representations, like a number line, support students' development of number sense by helping them create a mental representation of the order and magnitude of numbers. In addition, explicitly sequencing instruction…
Hatori, Tsuyoshi; Takemura, Kazuhisa; Fujii, Satoshi; Ideno, Takashi
2011-06-01
This paper presents a new model of category judgment. The model hypothesizes that, when more attention is focused on a category, the psychological range of the category gets narrower (the category-focusing hypothesis). We explain this hypothesis by using the metaphor of a "mental-box" model: the more attention that is focused on a mental box (i.e., a category set), the smaller the size of the box becomes (i.e., the cardinal number of the category set). The hypothesis was tested in an experiment (N = 40) in which the focus of attention on prescribed verbal categories was manipulated. The data obtained supported the hypothesis: category-focusing effects were found in three experimental tasks (regarding the categories of "food", "height", and "income"). The validity of the hypothesis is discussed based on these results.
On the validity of Taylor's hypothesis for wall-bounded flows
International Nuclear Information System (INIS)
Piomelli, U.; Balint, J.; Wallace, J.M.
1989-01-01
The results of large eddy simulation (LES) of the Navier–Stokes equations are used to evaluate the validity of Taylor's hypothesis of frozen turbulence, which states that the time derivative of an instantaneous quantity is proportional to its derivative in the streamwise direction, for incompressible plane channel flow. Time derivatives and streamwise space derivatives of the velocity components are, in fact, found to be well correlated. Root-mean-square fluctuations of the terms in Taylor's hypothesis also support the validity of this hypothesis above the buffer layer. The good agreement between LES and experimental results indicates that errors in the evaluation of streamwise derivatives are due mostly to insufficient resolution.
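Taylor's hypothesis is easy to check numerically on a synthetic frozen pattern u(x, t) = f(x − Uc·t), for which the time derivative equals −Uc times the streamwise derivative exactly. A minimal sketch (the convection velocity and signal are arbitrary choices, not the LES data):

```python
import numpy as np

Uc = 10.0                                   # assumed convection velocity
x = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
t = np.linspace(0.0, 1.0, 256, endpoint=False)
X, T = np.meshgrid(x, t, indexing="ij")

# A pattern advected without change of shape: u(x, t) = f(x - Uc * t).
u = np.sin(X - Uc * T) + 0.3 * np.sin(3.0 * (X - Uc * T))

dudx = np.gradient(u, x, axis=0)            # streamwise space derivative
dudt = np.gradient(u, t, axis=1)            # time derivative

# Taylor's hypothesis: du/dt ~ -Uc * du/dx; correlation near 1 when it holds.
r = np.corrcoef(dudt.ravel(), (-Uc * dudx).ravel())[0, 1]
print(f"correlation = {r:.4f}")
```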
Thermal cover needs of large ungulates: a review of hypothesis tests.
John G. Cook; Larry L. Irwin; Larry D. Bryant; Robert A. Riggs; Jack Ward. Thomas
2004-01-01
A great deal of big game research occurred in western North America during the 1960s through the 1980s, and many advances in our knowledge occurred as a result. Timber harvest increased during this period in many localities, and this trend was often perceived to threaten ungulate populations (Hieb 1976). Thus, it is not surprising that appreciable research in this era...
An endocannabinoid hypothesis of drug reward and drug addiction.
Onaivi, Emmanuel S
2008-10-01
Pharmacologic treatment of drug and alcohol dependency has largely been disappointing, and new therapeutic targets and hypotheses are needed. There is accumulating evidence indicating a central role for the previously unknown but ubiquitous endocannabinoid physiological control system (EPCS) in the regulation of the rewarding effects of abused substances. Thus an endocannabinoid hypothesis of drug reward is postulated. Endocannabinoids mediate retrograde signaling in neuronal tissues and are involved in the regulation of synaptic transmission, suppressing neurotransmitter release via the presynaptic cannabinoid receptors (CB-Rs). This powerful modulatory action on synaptic transmission has significant functional implications and interactions with the effects of abused substances. Our data, along with those from other investigators, provide strong new evidence for a role for EPCS modulation in the effects of drugs of abuse, and specifically for the involvement of cannabinoid receptors in the neural basis of addiction. Cannabinoids and endocannabinoids appear to be involved in adding to the rewarding effects of addictive substances, including nicotine, opiates, alcohol, cocaine, and benzodiazepines (BDZs). The results suggest that the EPCS may be an important natural regulatory mechanism for drug reward and a target for the treatment of addictive disorders.