WorldWideScience

Sample records for kolmogorov similarity hypothesis

  1. Kolmogorov's refined similarity hypotheses for turbulence and general stochastic processes

    International Nuclear Information System (INIS)

    Stolovitzky, G.; Sreenivasan, K.R.

    1994-01-01

Kolmogorov's refined similarity hypotheses are shown to hold true for a variety of stochastic processes besides high-Reynolds-number turbulent flows, for which they were originally proposed. In particular, just as hypothesized for turbulence, there exists a variable V whose probability density function attains a universal form. Analytical expressions for the probability density function of V are obtained for Brownian motion as well as for the general case of fractional Brownian motion---the latter under some mild assumptions justified a posteriori. The properties of V for the case of antipersistent fractional Brownian motion with the Hurst exponent of 1/3 are similar in many details to those of high-Reynolds-number turbulence in atmospheric boundary layers a few meters above the ground. The one conspicuous difference between turbulence and the antipersistent fractional Brownian motion is that the latter does not possess the required skewness. Broad implications of these results are discussed.

  2. Kolmogorov similarity hypotheses for scalar fields: sampling intermittent turbulent mixing in the ocean and galaxy

    International Nuclear Information System (INIS)

    Gibson, C.H.

    1991-01-01

Kolmogorov's three universal similarity hypotheses are extrapolated to describe scalar fields like temperature mixed by turbulence. The analogous first and second hypotheses for scalars include the effects of Prandtl number and rate-of-strain mixing. Application of velocity and scalar similarity hypotheses to the ocean must take into account the damping of active turbulence by density stratification and the Earth's rotation to form fossil turbulence. By the analogous Kolmogorov third hypothesis for scalars, temperature dissipation rates χ averaged over lengths r > L_K should be lognormally distributed with intermittency factors σ² that increase with increasing turbulence energy length scales L_O as σ²_ln r ≈ μ_θ ln(L_O/r). Tests of Kolmogorovian velocity and scalar universal similarity hypotheses for very large ranges of turbulence length and timescales are provided by data from the ocean and the galactic interstellar medium. These ranges are from 1 to 9 decades in the ocean, and over 12 decades in the interstellar medium. The universal constant for turbulent mixing intermittency μ_θ is estimated from oceanic data to be 0.44±0.01, which is remarkably close to estimates for Kolmogorov's turbulence intermittency constant μ of 0.45±0.05 from galactic as well as atmospheric data. Extreme intermittency complicates the oceanic sampling problem, and may lead to quantitative and qualitative undersampling errors in estimates of mean oceanic dissipation rates and fluxes. Intermittency of turbulence and mixing in the interstellar medium may be a factor in the formation of stars. (author)
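
    The third-hypothesis scaling quoted in this abstract, σ²_ln r ≈ μ_θ ln(L_O/r) with μ_θ ≈ 0.44, can be evaluated directly. The sketch below is an illustration only (not part of the cited work); the 100 m energy scale and 1 m averaging length are made-up example values.

```python
import math

def intermittency_factor(L_O, r, mu_theta=0.44):
    """Variance of ln(chi) for averaging length r within a turbulence
    energy length scale L_O, per the scalar third hypothesis:
    sigma_lnr^2 = mu_theta * ln(L_O / r)."""
    if not 0 < r <= L_O:
        raise ValueError("requires 0 < r <= L_O")
    return mu_theta * math.log(L_O / r)

# Illustrative: a 100 m energy scale sampled with 1 m averaging windows.
sigma2 = intermittency_factor(100.0, 1.0)  # grows as ln(L_O/r)
```

    As the abstract notes, large σ² implies that short records dominated by quiescent patches can badly undersample the lognormal tail of χ.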

  3. Self-similar formation of the Kolmogorov spectrum in the Leith model of turbulence

    International Nuclear Information System (INIS)

    Nazarenko, S V; Grebenev, V N

    2017-01-01

    The last stage of evolution toward the stationary Kolmogorov spectrum of hydrodynamic turbulence is studied using the Leith model [1]. This evolution is shown to manifest itself as a reflection wave in the wavenumber space propagating from the largest toward the smallest wavenumbers, and is described by a self-similar solution of a new (third) kind. This stage follows the previously studied stage of an initial explosive propagation of the spectral front from the smallest to the largest wavenumbers reaching arbitrarily large wavenumbers in a finite time, and which was described by a self-similar solution of the second kind [2–4]. Nonstationary solutions corresponding to ‘warm cascades’ characterised by a thermalised spectrum at large wavenumbers are also obtained. (paper)

  4. HYPOTHESIS TESTING WITH THE SIMILARITY INDEX

    Science.gov (United States)

Multilocus DNA fingerprinting methods have been used extensively to address genetic issues in wildlife populations. Hypotheses concerning population subdivision and differing levels of diversity can be addressed through the use of the similarity index (S), a band-sharing coeffic...

  5. Kolmogorov in perspective

    CERN Document Server

    2006-01-01

The editorial board for the History of Mathematics series has selected for this volume a series of translations from two Russian publications, Kolmogorov in Remembrance and Mathematics and its Historical Development. This book, Kolmogorov in Perspective, includes articles written by Kolmogorov's students and colleagues and his personal accounts of shared experiences and lifelong mathematical friendships. The articles combine to give an excellent personal and scientific biography of this important mathematician. There is also an extensive bibliography with the complete list of Kolmogorov's works, including the articles written for encyclopedias and newspapers. The book is illustrated with photographs and includes quotations from Kolmogorov's letters and conversations, uniquely reflecting his mathematical tastes and opinions.

  6. Chebyshev splines and Kolmogorov inequalities

    National Research Council Canada - National Science Library

    Bagdasarov, Sergey

    1998-01-01

0.1.2 Cases of the complete solution of the Kolmogorov problem ... 0.2 Kolmogorov-Landau problem in the Sobolev class W~+l(I) ... 0.2.1 Inequalities ...

  7. Topological K-Kolmogorov groups

    International Nuclear Information System (INIS)

    Abd El-Sattar, A. Dabbour.

    1987-07-01

    The idea of the K-groups was used to define K-Kolmogorov homology and cohomology (over pairs of coefficient groups) which are descriptions of certain modifications of the Kolmogorov groups. The present work is devoted to the study of the topological properties of the K-Kolmogorov groups which lie at the root of the group duality based essentially upon Pontrjagin's concept of group multiplication. 14 refs

  8. Can déjà vu result from similarity to a prior experience? Support for the similarity hypothesis of déjà vu.

    Science.gov (United States)

    Cleary, Anne M; Ryals, Anthony J; Nomi, Jason S

    2009-12-01

The strange feeling of having been somewhere or done something before, even though there is evidence to the contrary, is called déjà vu. Although déjà vu is beginning to receive attention among scientists (Brown, 2003, 2004), few studies have empirically investigated the phenomenon. We investigated the hypothesis that déjà vu is related to feelings of familiarity and that it can result from similarity between a novel scene and a scene experienced in one's past. We used a variation of the recognition-without-recall method of studying familiarity (Cleary, 2004) to examine instances in which participants failed to recall a studied scene in response to a configurally similar novel test scene. In such instances, resemblance to a previously viewed scene increased both feelings of familiarity and feelings of déjà vu. Furthermore, in the absence of recall, resemblance of a novel scene to a previously viewed scene increased the probability of a reported déjà vu state for the novel scene, and feelings of familiarity with a novel scene were directly related to feelings of being in a déjà vu state.

  9. A similarity hypothesis for the two-point correlation tensor in a temporally evolving plane wake

    Science.gov (United States)

    Ewing, D. W.; George, W. K.; Moser, R. D.; Rogers, M. M.

    1995-01-01

The analysis demonstrated that the governing equations for the two-point velocity correlation tensor in the temporally evolving wake admit similarity solutions, which include the similarity solutions for the single-point moment as a special case. The resulting equations for the similarity solutions include two constants, β and Re_σ, that are ratios of three characteristic time scales of processes in the flow: a viscous time scale, a time scale characteristic of the spread rate of the flow, and a characteristic time scale of the mean strain rate. The values of these ratios depend on the initial conditions of the flow and are most likely measures of the coherent structures in the initial conditions. The occurrence of these constants in the governing equations for the similarity solutions indicates that the solutions, in general, will only be the same for two flows if the two constants are equal (and hence the coherent structures in the flows are related). The comparisons between the predictions of the similarity hypothesis and the data presented here and elsewhere indicate that the similarity solutions for the two-point correlation tensors provide a good approximation of the measures of those motions that are not significantly affected by the boundary conditions caused by the finite extent of real flows. Thus, the two-point similarity hypothesis provides a useful tool for both numerical and physical experimentalists that can be used to examine how the finite extent of real flows affects the evolution of the different scales of motion in the flow.

  10. Bilateral export trade and income similarity: Does the Linder hypothesis hold for agricultural and food trade?

    OpenAIRE

    Steinbach, Sandro

    2015-01-01

In this paper we investigate the Linder hypothesis for bilateral export trade in agricultural and food products by utilizing the sectoral gravity equation derived in Hallak (2010). Based on a sample of 152 countries, we study the relationship for 737 agricultural and food products at the 6-digit HS code level, using trade data for 1995-2012. We estimate the gravity equation year by year and sector by sector, analyzing the estimates of Linder's term for two specifications of the similarity index. W...

  11. The Similarity Hypothesis and New Analytical Support on the Estimation of Horizontal Infiltration into Sand

    International Nuclear Information System (INIS)

    Prevedello, C.L.; Loyola, J.M.T.

    2010-01-01

    A method based on a specific power-law relationship between the hydraulic head and the Boltzmann variable, presented using a similarity hypothesis, was recently generalized to a range of powers to satisfy the Bruce and Klute equation exactly. Here, considerations are presented on the proposed similarity assumption, and new analytical support is given to estimate the water density flux into and inside the soil, based on the concept of sorptivity and on Buckingham-Darcy's law. Results show that the new analytical solution satisfies both theories in the calculation of water density fluxes and is in agreement with experimental results of water infiltrating horizontally into sand. However, the utility of this analysis still needs to be verified for a variety of different textured soils having a diverse range of initial soil water contents.

  12. Hypothesis: the chaos and complexity theory may help our understanding of fibromyalgia and similar maladies.

    Science.gov (United States)

    Martinez-Lavin, Manuel; Infante, Oscar; Lerma, Claudia

    2008-02-01

    Modern clinicians are often frustrated by their inability to understand fibromyalgia and similar maladies since these illnesses cannot be explained by the prevailing linear-reductionist medical paradigm. This article proposes that new concepts derived from the Complexity Theory may help understand the pathogenesis of fibromyalgia, chronic fatigue syndrome, and Gulf War syndrome. This hypothesis is based on the recent recognition of chaos fractals and complex systems in human physiology. These nonlinear dynamics concepts offer a different perspective to the notion of homeostasis and disease. They propose that the essence of disease is dysfunction and not structural damage. Studies using novel nonlinear instruments have shown that fibromyalgia and similar maladies may be caused by the degraded performance of our main complex adaptive system. This dysfunction explains the multifaceted manifestations of these entities. To understand and alleviate the suffering associated with these complex illnesses, a paradigm shift from reductionism to holism based on the Complexity Theory is suggested. This shift perceives health as resilient adaptation and some chronic illnesses as rigid dysfunction.

  13. Kolmogorov-Arnold-Moser Theorem

    Indian Academy of Sciences (India)

system (not necessarily the 2-body system). Kolmogorov was the first to provide a solution to the above general problem in a theorem formulated in 1954 (see Suggested Reading). However, he provided only an outline of the proof. The actual proof (with all the details) turned out to be quite difficult and was provided by Arnold ...

  14. K-Kolmogorov cohomology groups

    International Nuclear Information System (INIS)

    Abd El-Sattar, A. Dabbour.

    1986-07-01

    In the present work we use the idea of K-groups to give a description of certain modification of the Kolmogorov cohomology groups for the case of a pair (G,G') of discrete coefficient groups. Their induced homomorphisms and coboundary operators are also defined, and then we study the resulting construction from the point of view of Eilenberg-Steenrod axioms. (author)

  15. Kolmogorov's constant and local interactions

    International Nuclear Information System (INIS)

    Kraichnan, R.H.

    1987-01-01

Suppose that all the wave-vector triad interactions that involve no wavenumber ratio exceeding β are removed from the Navier-Stokes equation. Within a class of closures, the paradoxical effect is to enhance energy cascade through the Kolmogorov inertial range for 1 < β < β_c, where β_c may be as large as 8. This may have implications with regard to force-free structures in the true Navier-Stokes dynamics.

  16. Kolmogorov flow in two dimensional strongly coupled dusty plasma

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, Akanksha; Ganesh, R., E-mail: ganesh@ipr.res.in; Joy, Ashwin [Institute for Plasma Research, Bhat Gandhinagar, Gujarat 382 428 (India)

    2014-07-15

Undriven, incompressible Kolmogorov flow in a two-dimensional doubly periodic strongly coupled dusty plasma is modelled using generalised hydrodynamics, in both the linear and nonlinear regimes. A complete stability diagram is obtained for low Reynolds numbers R and for a range of viscoelastic relaxation times τ_m [0 < τ_m < 10]. For the system size considered, using a linear stability analysis, it is found that, as for a Navier-Stokes fluid (τ_m = 0), the Kolmogorov flow becomes unstable beyond a critical Reynolds number R_c. Importantly, it is found that R_c is strongly reduced for increasing values of τ_m. A critical τ_m^c is found above which the Kolmogorov flow is unconditionally unstable and becomes independent of Reynolds number. For R < R_c, the neutral stability regime found in a Navier-Stokes fluid (τ_m = 0) becomes a damped regime in viscoelastic fluids, thus changing the fundamental nature of the transition of Kolmogorov flow as a function of Reynolds number R. A new parallelized nonlinear pseudospectral code has been developed and is benchmarked against eigenvalues for Kolmogorov flow obtained from the linear analysis. Nonlinear states obtained from the pseudospectral code exhibit cyclicity and pattern formation in vorticity, and viscoelastic oscillations in energy.

  17. Self-similar spherical gravitational collapse and the cosmic censorship hypothesis

    Energy Technology Data Exchange (ETDEWEB)

    Ori, A.; Piran, T.

    1988-01-01

The authors show that a self-similar general relativistic spherical collapse of a perfect fluid with an adiabatic equation of state p = (λ - 1)ρ and low enough values of λ results in a naked singularity. The singularity is tangent to an event horizon which surrounds a massive singularity, and the redshift along a null geodesic from the singularity to an external observer is infinite. The authors believe that this is the most serious counterexample to cosmic censorship obtained so far.

  18. Adding Asymmetrically Dominated Alternatives: Violations of Regularity and the Similarity Hypothesis.

    Science.gov (United States)

    1982-02-01

Choice Models; Similarity; Context Effects ... the choices where the decoy was not chosen. In that sample, 63% of the 109 reversals (cells b and d) were to the target and 37% to the competitor ... switching from the target, thus merging the decoy and the competitor groups (cells b, d and c). In that test, 59% switched to the target, while 41...

  19. Music analysis and Kolmogorov complexity

    DEFF Research Database (Denmark)

    Meredith, David

The goal of music analysis is to find the most satisfying explanations for musical works. It is proposed that this can best be achieved by attempting to write computer programs that are as short as possible and that generate representations that are as detailed as possible of the music ... that the program represents. If such an effective measure of analysis quality can be found, it could be used in a system that automatically finds the optimal analysis for any passage of music. Measuring program length in terms of the number of source-code characters is shown to be problematic, and an expression ... is proposed that overcomes some but not all of these problems. It is suggested that the solutions to the remaining problems may lie either in the field of concrete Kolmogorov complexity or in the design of languages specialized for expressing musical structure ...

  20. A Short Introduction to Kolmogorov Complexity

    NARCIS (Netherlands)

    Nannen, Volker

    2010-01-01

This is a short introduction to Kolmogorov complexity and information theory. The interested reader is referred to the literature, especially the textbooks [CT91] and [LV97], which cover the fields of information theory and Kolmogorov complexity in depth and with all the necessary rigor.
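
    Kolmogorov complexity itself is uncomputable, but any lossless compressor yields a computable upper bound, a standard practical proxy (this sketch is an illustration using Python's zlib, not part of the cited text):

```python
import random
import zlib

def complexity_upper_bound(data: bytes) -> int:
    """Length in bytes of a zlib-compressed encoding of `data`.
    This upper-bounds the Kolmogorov complexity of `data` up to an
    additive constant (roughly, the size of the decompressor)."""
    return len(zlib.compress(data, 9))

regular = b"ab" * 500                     # highly regular: compresses well
noisy = random.Random(0).randbytes(1000)  # pseudo-random: near-incompressible
```

    A regular string compresses to a tiny fraction of its length, while the pseudo-random string barely compresses at all, mirroring the complexity gap the theory formalizes.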

  1. Acceptance threshold hypothesis is supported by chemical similarity of cuticular hydrocarbons in a stingless bee, Melipona asilvai.

    Science.gov (United States)

    Nascimento, D L; Nascimento, F S

    2012-11-01

    The ability to discriminate nestmates from non-nestmates in insect societies is essential to protect colonies from conspecific invaders. The acceptance threshold hypothesis predicts that organisms whose recognition systems classify recipients without errors should optimize the balance between acceptance and rejection. In this process, cuticular hydrocarbons play an important role as cues of recognition in social insects. The aims of this study were to determine whether guards exhibit a restrictive level of rejection towards chemically distinct individuals, becoming more permissive during the encounters with either nestmate or non-nestmate individuals bearing chemically similar profiles. The study demonstrates that Melipona asilvai (Hymenoptera: Apidae: Meliponini) guards exhibit a flexible system of nestmate recognition according to the degree of chemical similarity between the incoming forager and its own cuticular hydrocarbons profile. Guards became less restrictive in their acceptance rates when they encounter non-nestmates with highly similar chemical profiles, which they probably mistake for nestmates, hence broadening their acceptance level.

  2. The effects of gravity on human walking: a new test of the dynamic similarity hypothesis using a predictive model.

    Science.gov (United States)

    Raichlen, David A

    2008-09-01

The dynamic similarity hypothesis (DSH) suggests that differences in animal locomotor biomechanics are due mostly to differences in size. According to the DSH, when the ratios of inertial to gravitational forces are equal between two animals that differ in size [e.g. at equal Froude numbers, where Froude = velocity²/(gravity × hip height)], their movements can be made similar by multiplying all time durations by one constant, all forces by a second constant and all linear distances by a third constant. The DSH has been generally supported by numerous comparative studies showing that as inertial forces differ (i.e. differences in the centripetal force acting on the animal due to variation in hip heights), animals walk with dynamic similarity. However, humans walking in simulated reduced gravity do not walk with dynamically similar kinematics. The simulated gravity experiments did not completely account for the effects of gravity on all body segments, and the importance of gravity in the DSH requires further examination. This study uses a kinematic model to predict the effects of gravity on human locomotion, taking into account both the effects of gravitational forces on the upper body and on the limbs. Results show that dynamic similarity is maintained in altered gravitational environments. Thus, the DSH does account for differences in the inertial forces governing locomotion (e.g. differences in hip height) as well as differences in the gravitational forces governing locomotion.
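
    The Froude number defined in this abstract, Fr = v²/(g·h), fixes the DSH-predicted speed across gravity levels. The sketch below is illustrative only; the hip height, walking speed, and lunar gravity value are example numbers, not data from the cited study.

```python
import math

def froude(velocity, hip_height, gravity=9.81):
    """Froude number as defined in the abstract: v^2 / (g * h)."""
    return velocity ** 2 / (gravity * hip_height)

def similar_speed(froude_number, hip_height, gravity):
    """Walking speed that reproduces a given Froude number under a
    different gravity -- the DSH prediction of dynamic similarity."""
    return math.sqrt(froude_number * gravity * hip_height)

# Example: hip height 0.9 m, walking 1.25 m/s on Earth...
fr = froude(1.25, 0.9)
# ...the DSH predicts this slower speed in lunar gravity (1.62 m/s^2):
v_moon = similar_speed(fr, 0.9, 1.62)
```

    Matching Froude numbers in this way is how the comparative studies cited above equate gaits across animals (and gravities) of different size.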

  3. A.N. Kolmogorov's defence of Mendelism

    Directory of Open Access Journals (Sweden)

    Alan Stark

    2011-01-01

In 1939 N.I. Ermolaeva published the results of an experiment which repeated parts of Mendel's classical experiments. On the basis of her experiment she concluded that Mendel's principle that self-pollination of hybrid plants gave rise to segregation proportions 3:1 was false. The great probability theorist A.N. Kolmogorov reviewed Ermolaeva's data using a test, now referred to as Kolmogorov's, or Kolmogorov-Smirnov, test, which he had proposed in 1933. He found, contrary to Ermolaeva, that her results clearly confirmed Mendel's principle. This paper shows that there were methodological flaws in Kolmogorov's statistical analysis and presents a substantially adjusted approach, which confirms his conclusions. Some historical commentary on the Lysenko-era background is given, to illuminate the relationship of the disciplines of genetics and statistics in the struggle against the prevailing politically-correct pseudoscience in the Soviet Union. There is a Brazilian connection through the person of Th. Dobzhansky.
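
    Kolmogorov's approach can be sketched in miniature: standardize each family's deviation from the expected 3:1 ratio and test those deviations against a standard normal with the one-sample Kolmogorov-Smirnov statistic. This is an illustration on simulated data with made-up family counts, not a reconstruction of Ermolaeva's dataset; the normal approximation to the (discrete) binomial is also an assumption.

```python
import math
import random

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic D_n against a
    hypothesized continuous CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f = cdf(x)
        d = max(d, abs(i / n - f), abs(f - (i - 1) / n))
    return d

# Simulate 80 Mendelian families of 120 plants each, dominant with
# probability 3/4, then standardize the deviation from 3:1.
rng = random.Random(42)
n = 120
z = []
for _ in range(80):
    dominants = sum(rng.random() < 0.75 for _ in range(n))
    z.append((dominants - 0.75 * n) / math.sqrt(n * 0.75 * 0.25))

D = ks_statistic(z, normal_cdf)
# sqrt(80) * D is then compared against the Kolmogorov distribution
# (asymptotic 5% critical value approximately 1.36).
```

    Under the 3:1 hypothesis the standardized deviations look normal and D stays small; data too good (or too bad) to be true would push D past the critical value.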

  4. The Kolmogorov-Obukhov Statistical Theory of Turbulence

    Science.gov (United States)

    Birnir, Björn

    2013-08-01

In 1941 Kolmogorov and Obukhov postulated the existence of a statistical theory of turbulence, which allows the computation of statistical quantities that can be simulated and measured in a turbulent system. These are quantities such as the moments, the structure functions and the probability density functions (PDFs) of the turbulent velocity field. In this paper we will outline how to construct this statistical theory from the stochastic Navier-Stokes equation. The additive noise in the stochastic Navier-Stokes equation is generic noise given by the central limit theorem and the large deviation principle. The multiplicative noise consists of jumps multiplying the velocity, modeling jumps in the velocity gradient. We first estimate the structure functions of turbulence and establish the Kolmogorov-Obukhov 1962 scaling hypothesis with the She-Leveque intermittency corrections. Then we compute the invariant measure of turbulence, writing the stochastic Navier-Stokes equation as an infinite-dimensional Ito process, and solving the linear Kolmogorov-Hopf functional differential equation for the invariant measure. Finally we project the invariant measure onto the PDF. The PDFs turn out to be the normalized inverse Gaussian (NIG) distributions of Barndorff-Nielsen, and compare well with PDFs from simulations and experiments.

  5. Kolmogorov Space in Time Series Data

    OpenAIRE

    Kanjamapornkul, K.; Pinčák, R.

    2016-01-01

We provide the proof that the space of time series data is a Kolmogorov space with $T_{0}$-separation axiom using the loop space of time series data. In our approach we define a cyclic coordinate of intrinsic time scale of time series data after empirical mode decomposition. A spinor field of time series data comes from the rotation of data around the price and time axes by defining a new extra dimension to time series data. We show that there exist eight hidden dimensions in Kolmogorov space for ...

  6. Testing the shape-similarity hypothesis between particle-size distribution and water retention for Sicilian soils

    Directory of Open Access Journals (Sweden)

    Chiara Antinoro

    2012-12-01

Application of the Arya and Paris (AP) model to estimate the soil water retention curve requires a detailed description of the particle-size distribution (PSD), but limited experimental PSD data are generally determined by the conventional sieve-hydrometer (SH) method. Detailed PSDs can be obtained by fitting a continuous model to SH data or by performing measurements with the laser diffraction (LD) method. The AP model was applied to 40 Sicilian soils for which the PSD was measured by both the SH and LD methods. The scale factor α was set equal to 1.38 (procedure AP1) or estimated by a logistical model with parameters gathered from the literature (procedure AP2). For both SH and LD data, procedure AP2 allowed a more accurate prediction of the water retention than procedure AP1, confirming that it is not convenient to use a unique value of α for soils that are very different in texture. Despite the differences in PSDs obtained by the SH and LD methods, the water retention predicted by a given procedure (AP1 or AP2) using SH or LD data was characterized by the same level of accuracy. Discrepancies in the estimated water retention from the two PSD measurement methods were attributed to underestimation of the finest diameter frequency obtained by the LD method. Analysis also showed that the soil water retention estimated using the SH method was affected by an estimation bias that could be corrected by an optimization procedure (OPT). Comparison of α-distributions and water retention shape indices obtained by the two methods (SH or LD) indicated that the shape-similarity hypothesis is better verified if the traditional sieve-hydrometer data are used to apply the AP model. The optimization procedure allowed more accurate predictions of the water retention curves than the traditional AP1 and AP2 procedures. Therefore, OPT can be considered a valid alternative to the more complex logistical model for estimating the water retention curve of Sicilian soils.

  7. Is Variability in Mate Choice Similar for Intelligence and Personality Traits? Testing a Hypothesis about the Evolutionary Genetics of Personality

    Science.gov (United States)

    Stone, Emily A.; Shackelford, Todd K.; Buss, David M.

    2012-01-01

    This study tests the hypothesis presented by Penke, Denissen, and Miller (2007a) that condition-dependent traits, including intelligence, attractiveness, and health, are universally and uniformly preferred as characteristics in a mate relative to traits that are less indicative of condition, including personality traits. We analyzed…

  8. Quantum Kolmogorov complexity and bounded quantum memory

    International Nuclear Information System (INIS)

    Miyadera, Takayuki

    2011-01-01

    The effect of bounded quantum memory in a primitive information protocol has been examined using the quantum Kolmogorov complexity as a measure of information. We employed a toy two-party protocol in which Bob, by using a bounded quantum memory and an unbounded classical memory, estimates a message that was encoded in qubits by Alice in one of the bases X or Z. Our theorem gave a nontrivial effect of the memory boundedness. In addition, a generalization of the uncertainty principle in the presence of quantum memory has been obtained.

  9. Modeling experiments using quantum and Kolmogorov probability

    International Nuclear Information System (INIS)

    Hess, Karl

    2008-01-01

    Criteria are presented that permit a straightforward partition of experiments into sets that can be modeled using both quantum probability and the classical probability framework of Kolmogorov. These new criteria concentrate on the operational aspects of the experiments and lead beyond the commonly appreciated partition by relating experiments to commuting and non-commuting quantum operators as well as non-entangled and entangled wavefunctions. In other words the space of experiments that can be understood using classical probability is larger than usually assumed. This knowledge provides advantages for areas such as nanoscience and engineering or quantum computation.

  10. The Kolmogorov-Smirnov test for the CMB

    International Nuclear Information System (INIS)

    Frommert, Mona; Durrer, Ruth; Michaud, Jérôme

    2012-01-01

We investigate the statistics of the cosmic microwave background using the Kolmogorov-Smirnov test. We show that, when we correctly de-correlate the data, the partition function of the Kolmogorov stochasticity parameter is compatible with the Kolmogorov distribution and, contrary to previous claims, the CMB data are compatible with Gaussian fluctuations with the correlation function given by standard ΛCDM. We then use the Kolmogorov-Smirnov test to derive upper bounds on residual point-source power in the CMB, and indicate the promise of this statistic for further datasets, especially Planck, to search for deviations from Gaussianity and to detect point sources and Galactic foregrounds.

  11. Development of pharmacophore similarity-based quantitative activity hypothesis and its applicability domain: applied on a diverse data-set of HIV-1 integrase inhibitors.

    Science.gov (United States)

    Kumar, Sivakumar Prasanth; Jasrai, Yogesh T; Mehta, Vijay P; Pandya, Himanshu A

    2015-01-01

    Quantitative pharmacophore hypothesis combines the 3D spatial arrangement of pharmacophore features with biological activities of the ligand data-set and predicts the activities of geometrically and/or pharmacophoric similar ligands. Most pharmacophore discovery programs face difficulties in conformational flexibility, molecular alignment, pharmacophore features sampling, and feature selection to score models if the data-set constitutes diverse ligands. Towards this focus, we describe a ligand-based computational procedure to introduce flexibility in aligning the small molecules and generating a pharmacophore hypothesis without geometrical constraints to define pharmacophore space, enriched with chemical features necessary to elucidate common pharmacophore hypotheses (CPHs). Maximal common substructure (MCS)-based alignment method was adopted to guide the alignment of carbon molecules, deciphered the MCS atom connectivity to cluster molecules in bins and subsequently, calculated the pharmacophore similarity matrix with the bin-specific reference molecules. After alignment, the carbon molecules were enriched with original atoms in their respective positions and conventional pharmacophore features were perceived. Distance-based pharmacophoric descriptors were enumerated by computing the interdistance between perceived features and MCS-aligned 'centroid' position. The descriptor set and biological activities were used to develop support vector machine models to predict the activities of the external test set. Finally, fitness score was estimated based on pharmacophore similarity with its bin-specific reference molecules to recognize the best and poor alignments and, also with each reference molecule to predict outliers of the quantitative hypothesis model. We applied this procedure to a diverse data-set of 40 HIV-1 integrase inhibitors and discussed its effectiveness with the reported CPH model.
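
    The distance-based descriptor idea in this abstract (interdistances between features and a centroid position) can be sketched in a toy form. Everything below is a simplified, hypothetical stand-in for the paper's scheme: features are reduced to bare 3D points, the reference is the plain geometric centroid rather than an MCS-aligned one, and pharmacophore similarity is scored with a simple cosine measure.

```python
import math

def centroid(points):
    """Geometric centroid of a list of 3D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def distance_descriptor(features):
    """Distances from each pharmacophore feature (a 3D point) to the
    molecule's centroid, sorted so the vector is order-invariant."""
    c = centroid(features)
    return sorted(math.dist(p, c) for p in features)

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length descriptor vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

# Two hypothetical three-feature molecules; mol_b is mol_a translated
# by (1, 1, 1), so their centroid-distance descriptors coincide.
mol_a = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
mol_b = [(1.0, 1.0, 1.0), (3.0, 1.0, 1.0), (1.0, 3.0, 1.0)]
sim = cosine_similarity(distance_descriptor(mol_a), distance_descriptor(mol_b))
```

    Because the descriptor depends only on feature-to-centroid distances, it is invariant to rigid translation, which is one reason such schemes sidestep explicit geometric alignment constraints.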

  12. Optimal control problem for the extended Fisher–Kolmogorov equation

    Indian Academy of Sciences (India)

In this paper, the optimal control problem for the extended Fisher–Kolmogorov equation is studied. The optimal control under boundary conditions is given, the existence of an optimal solution to the equation is proved, and the optimality system is established.

  13. Quantum Kolmogorov complexity and the quantum Turing machine

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, M.

    2007-08-31

The purpose of this thesis is to give a formal definition of quantum Kolmogorov complexity and rigorous mathematical proofs of its basic properties. Classical Kolmogorov complexity is a well-known and useful measure of randomness for binary strings. In recent years, several different quantum generalizations of Kolmogorov complexity have been proposed. The most natural generalization is due to A. Berthiaume et al. (2001), defining the complexity of a quantum bit (qubit) string as the length of the shortest quantum input for a universal quantum computer that outputs the desired string. Except for slight modifications, it is this definition of quantum Kolmogorov complexity that we study in this thesis. We start by analyzing certain aspects of the underlying quantum Turing machine (QTM) model with more formal rigour than was done previously. Afterwards, we apply these results to quantum Kolmogorov complexity. Our first result is a proof of the existence of a universal QTM which simulates every other QTM for an arbitrary number of time steps and then halts with probability one. In addition, we show that every input that makes a QTM almost halt can be modified to make the universal QTM halt entirely, by adding at most a constant number of qubits. It follows that quantum Kolmogorov complexity has the invariance property, i.e. it depends on the choice of the universal QTM only up to an additive constant. Moreover, the quantum complexity of classical strings agrees with classical complexity, again up to an additive constant. The proofs are based on several analytic estimates. Furthermore, we prove several incompressibility theorems for quantum Kolmogorov complexity. Finally, we show that for ergodic quantum information sources, complexity rate and entropy rate coincide with probability one. The thesis is finished with an outlook on a possible application of quantum Kolmogorov complexity in statistical mechanics. (orig.)
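Classical Kolmogorov complexity, which the quantum definition above generalizes, is uncomputable, but any lossless compressor yields a computable upper bound on it, which is how the classical notion is usually approximated in practice. A minimal sketch (the use of `zlib` and the specific strings are illustrative, not taken from the thesis):

```python
import random
import zlib

def complexity_upper_bound(s: bytes) -> int:
    """Length of the zlib-compressed string: a computable upper bound
    (up to an additive constant) on the Kolmogorov complexity of s."""
    return len(zlib.compress(s, 9))

random.seed(0)
structured = b"01" * 4096                                            # highly regular string
incompressible = bytes(random.getrandbits(8) for _ in range(8192))   # random bytes

# A regular string compresses far below its length; a random one does not.
print(complexity_upper_bound(structured), complexity_upper_bound(incompressible))
```

The gap between the two values illustrates the incompressibility of random strings that the incompressibility theorems formalize.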

  14. Fractal Hypothesis of the Pelagic Microbial Ecosystem—Can Simple Ecological Principles Lead to Self-Similar Complexity in the Pelagic Microbial Food Web?

    Science.gov (United States)

    Våge, Selina; Thingstad, T. Frede

    2015-01-01

Trophic interactions are highly complex and modern sequencing techniques reveal enormous biodiversity across multiple scales in marine microbial communities. Within the chemically and physically relatively homogeneous pelagic environment, this calls for an explanation beyond spatial and temporal heterogeneity. Based on observations of simple parasite-host and predator-prey interactions occurring at different trophic levels and levels of phylogenetic resolution, we present a theoretical perspective on this enormous biodiversity, discussing in particular self-similar aspects of pelagic microbial food web organization. Fractal methods have been used to describe a variety of natural phenomena, with studies of habitat structures being an application in ecology. In contrast to mathematical fractals, where the pattern-generating rules are readily known, identifying the mechanisms that lead to natural fractals is not straightforward. Here we put forward the hypothesis that trophic interactions between pelagic microbes may be organized in a fractal-like manner, with the emergent network resembling the structure of the Sierpinski triangle. We discuss a mechanism that could underlie the formation of repeated patterns at different trophic levels and discuss how this may help understand the characteristic biomass size-spectra that hint at scale-invariant properties of the pelagic environment. If the idea of simple underlying principles leading to a fractal-like organization of the pelagic food web could be formalized, this would extend an ecologist's mindset on how biological complexity can be accounted for. It may furthermore benefit ecosystem modeling by facilitating adequate model resolution across multiple scales. PMID:26648929
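The Sierpinski triangle invoked above is itself a good illustration of the paper's core idea, that a very simple iterated rule can generate self-similar structure. A minimal chaos-game sketch (the vertices, starting point, and iteration count are arbitrary choices):

```python
import random

# Chaos game: repeatedly jump halfway toward a randomly chosen vertex of a
# triangle; the visited points converge onto the Sierpinski triangle.
vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
random.seed(1)
x, y = 0.25, 0.25                      # any starting point inside the triangle
points = []
for _ in range(5000):
    vx, vy = random.choice(vertices)
    x, y = (x + vx) / 2, (y + vy) / 2
    points.append((x, y))
```

Plotting `points` reveals the fractal; the rule itself never encodes the triangle explicitly, which is the spirit of the hypothesis about trophic interactions.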

  15. Kolmogorov complexity, pseudorandom generators and statistical models testing

    Czech Academy of Sciences Publication Activity Database

    Šindelář, Jan; Boček, Pavel

    2002-01-01

    Roč. 38, č. 6 (2002), s. 747-759 ISSN 0023-5954 R&D Projects: GA ČR GA102/99/1564 Institutional research plan: CEZ:AV0Z1075907 Keywords : Kolmogorov complexity * pseudorandom generators * statistical models testing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.341, year: 2002

  16. Variants of Learning Algorithm Based on Kolmogorov Theorem

    Czech Academy of Sciences Publication Activity Database

    Neruda, Roman; Štědrý, Arnošt; Drkošová, Jitka

    2002-01-01

    Roč. 12, č. 6 (2002), s. 587-597 ISSN 1210-0552 R&D Projects: GA AV ČR IAB1030006 Institutional research plan: AV0Z1030915 Keywords : Kolmogorov networks * approximation theory * parallel algorithms Subject RIV: BA - General Mathematics

  17. Asymptotical Behaviors of Nonautonomous Discrete Kolmogorov System with Time Lags

    Directory of Open Access Journals (Sweden)

    Liu Shengqiang

    2010-01-01

We discuss a general n-species discrete Kolmogorov system with time lags. We establish new sufficient conditions for permanence, extinction, and balancing survival. Applying these results to some Lotka-Volterra systems, we obtain criteria on harmless delays for permanence as well as profitless delays for balancing survival.

  18. Asymptotical Behaviors of Nonautonomous Discrete Kolmogorov System with Time Lags

    Directory of Open Access Journals (Sweden)

    Shengqiang Liu

    2010-01-01

We discuss a general n-species discrete Kolmogorov system with time lags. We establish new sufficient conditions for permanence, extinction, and balancing survival. Applying these results to some Lotka-Volterra systems, we obtain criteria on harmless delays for permanence as well as profitless delays for balancing survival.
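Permanence of the kind discussed above is easy to probe numerically. A sketch of a two-species discrete Kolmogorov system of Ricker/Lotka-Volterra type with a one-step time lag in the self-limitation term (the specific functional form and all parameter values are illustrative assumptions, not the paper's general n-species system):

```python
import math

# Two-species discrete system with a one-step lag; parameters are illustrative.
r1, r2 = 0.5, 0.4          # intrinsic growth rates
a12, a21 = 0.3, 0.2        # interspecific competition coefficients
tau = 1                    # time lag

x = [0.5, 0.5]             # initial histories for species 1 and 2
y = [0.5, 0.5]
for t in range(1, 400):
    x.append(x[t] * math.exp(r1 * (1.0 - x[t - tau] - a12 * y[t])))
    y.append(y[t] * math.exp(r2 * (1.0 - y[t - tau] - a21 * x[t])))

# Permanence in the sense of the abstract: both populations stay bounded
# away from zero and from infinity.
print(min(x[-100:]), max(x), min(y[-100:]), max(y))
```

With this mild growth rate the delay is "harmless": both trajectories settle near a positive equilibrium instead of going extinct or diverging.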

  19. Kolmogorov-like spectra in decaying three-dimensional supersonic flows

    International Nuclear Information System (INIS)

    Porter, D.H.; Pouquet, A.; Woodward, P.R.

    1994-01-01

A numerical simulation of decaying supersonic turbulence using the piecewise parabolic method (PPM) algorithm on a computational mesh of 512^3 zones indicates that, once the solenoidal part of the velocity field, representing vortical motions, is fully developed and has reached a self-similar regime, a velocity spectrum compatible with that predicted by the classical theory of Kolmogorov develops. It is followed by a domain with a shallower spectrum. A convergence study is presented to support these assertions. The formation, structure, and evolution of slip surfaces and vortex tubes are presented in terms of perspective volume renderings of fields in physical space.

  20. A Kolmogorov-Brutsaert structure function model for evaporation into a turbulent atmosphere

    Science.gov (United States)

    Katul, Gabriel; Liu, Heping

    2017-05-01

In 1965, Brutsaert proposed a model that predicted the mean evaporation rate E¯ from rough surfaces to scale with the 3/4 power of the friction velocity (u∗) and the square root of the molecular diffusivity (Dm) for water vapor. In arriving at these results, a number of assumptions were made regarding the surface renewal rate describing the contact durations between eddies and the evaporating surface, the diffusional mass process from the surface into eddies, and the cascade of turbulent kinetic energy sustaining the eddy renewal process itself. The working hypothesis explored here is that E¯ ∼ √(Dm) u∗^(3/4) is a direct outcome of the Kolmogorov scaling for inertial subrange eddies, modified to include a viscous cutoff, thereby bypassing the need for a surface renewal assumption. It is demonstrated that Brutsaert's model for E¯ may be more general than its original derivation implied.

  1. Diffusion coefficient and Kolmogorov entropy of magnetic field lines

    International Nuclear Information System (INIS)

    Zimbardo, G.; Veltri, P.; Malara, F.

    1984-01-01

    A diffusion equation for magnetic field lines of force in a turbulent magnetic field, which describes both the random walk of a single line and how two nearby lines separate from each other, has been obtained using standard statistical techniques. Starting from such an equation, a closed set of equations for the moments may be obtained, in general, with suitable assumptions. From such a set of equations the Kolmogorov entropy may be explicitly calculated. The results have been applied to the most interesting examples of magnetic field geometries. (author)

  2. On Kolmogorov's superpositions and Boolean functions

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

The paper overviews results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on an explicit numerical (i.e., constructive) algorithm for Kolmogorov's superpositions, it is shown that to obtain minimum-size neural networks for implementing any Boolean function, the activation function of the neurons should be the identity function. Because classical AND-OR implementations, as well as threshold gate implementations, require exponential size in the worst case, it follows that size-optimal solutions for implementing arbitrary Boolean functions require analog circuitry. Conclusions and several comments on the required precision close the paper.

  3. Euclidean distance and Kolmogorov-Smirnov analyses of multi-day auditory event-related potentials: a longitudinal stability study

    Science.gov (United States)

    Durato, M. V.; Albano, A. M.; Rapp, P. E.; Nawang, S. A.

    2015-06-01

The validity of ERPs as indices of stable neurophysiological traits is partially dependent on their stability over time. Previous studies on ERP stability, however, have reported diverse stability estimates despite using the same component scoring methods. The present study explores a novel approach to investigating the longitudinal stability of average ERPs: treating the ERP waveform as a time series and then applying Euclidean distance and Kolmogorov-Smirnov analyses to evaluate the similarity or dissimilarity between the ERP time series of different sessions or run pairs. Nonlinear dynamical analyses show that in the absence of a change in medical condition, the average ERPs of healthy human adults are highly longitudinally stable, as evaluated by both the Euclidean distance and the Kolmogorov-Smirnov test.
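Both similarity measures used in this approach are straightforward to reproduce. A sketch on synthetic "ERP-like" waveforms (the sinusoid-plus-noise signals stand in for real session averages; the two-sample KS statistic is computed directly from empirical CDFs rather than with a statistics library):

```python
import numpy as np

def euclidean_distance(a, b):
    return float(np.sqrt(np.sum((np.asarray(a) - np.asarray(b)) ** 2)))

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic D = sup_x |F_a(x) - F_b(x)|."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
session1 = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(t.size)
session2 = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(t.size)

# A stable waveform: small Euclidean distance and small D between sessions.
print(euclidean_distance(session1, session2), ks_statistic(session1, session2))
```

Large values of either measure between sessions would flag a change in the underlying waveform.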

  4. Euclidean distance and Kolmogorov-Smirnov analyses of multi-day auditory event-related potentials: a longitudinal stability study

    International Nuclear Information System (INIS)

    Durato, M V; Nawang, S A; Albano, A M; Rapp, P E

    2015-01-01

The validity of ERPs as indices of stable neurophysiological traits is partially dependent on their stability over time. Previous studies on ERP stability, however, have reported diverse stability estimates despite using the same component scoring methods. The present study explores a novel approach to investigating the longitudinal stability of average ERPs: treating the ERP waveform as a time series and then applying Euclidean distance and Kolmogorov-Smirnov analyses to evaluate the similarity or dissimilarity between the ERP time series of different sessions or run pairs. Nonlinear dynamical analyses show that in the absence of a change in medical condition, the average ERPs of healthy human adults are highly longitudinally stable, as evaluated by both the Euclidean distance and the Kolmogorov-Smirnov test. (paper)

  5. Cumulative growth of minor hysteresis loops in the Kolmogorov model

    International Nuclear Information System (INIS)

    Meilikhov, E. Z.; Farzetdinova, R. M.

    2013-01-01

    The phenomenon of nonrepeatability of successive remagnetization cycles in Co/M (M = Pt, Pd, Au) multilayer film structures is explained in the framework of the Kolmogorov crystallization model. It is shown that this model of phase transitions can be adapted so as to adequately describe the process of magnetic relaxation in the indicated systems with “memory.” For this purpose, it is necessary to introduce some additional elements into the model, in particular, (i) to take into account the fact that every cycle starts from a state “inherited” from the preceding cycle and (ii) to assume that the rate of growth of a new magnetic phase depends on the cycle number. This modified model provides a quite satisfactory qualitative and quantitative description of all features of successive magnetic relaxation cycles in the system under consideration, including the surprising phenomenon of cumulative growth of minor hysteresis loops.
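The Kolmogorov (Kolmogorov-Johnson-Mehl-Avrami) crystallization model that the paper adapts has a compact closed form for the transformed-phase fraction, X(t) = 1 - exp(-k t^n). A minimal sketch including a cycle-dependent growth rate in the spirit of the paper's modification (ii); the particular 1/c decay of the rate with cycle number is an illustrative assumption:

```python
import math

def kjma_fraction(t, k=1.0, n=3.0):
    """KJMA (Kolmogorov) transformed fraction X(t) = 1 - exp(-k * t**n)."""
    return 1.0 - math.exp(-k * t ** n)

# Modification (ii) of the abstract: let the growth rate depend on the cycle
# number c. The 1/c law used here is an illustrative assumption only.
for c in (1, 2, 3):
    X = kjma_fraction(1.0, k=1.0 / c)
    print(f"cycle {c}: transformed fraction {X:.3f}")
```

Adding modification (i), starting each cycle from the state inherited from the previous one, amounts to initializing X at the previous cycle's final value instead of zero.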

  6. Kolmogorov and Zabih’s Graph Cuts Stereo Matching Algorithm

    Directory of Open Access Journals (Sweden)

    Vladimir Kolmogorov

    2014-10-01

Binocular stereovision estimates the three-dimensional shape of a scene from two photographs taken from different points of view. In rectified epipolar geometry, this is equivalent to a matching problem. This article describes a method proposed by Kolmogorov and Zabih in 2001, which puts forward an energy-based formulation. The aim is to minimize a four-term energy. This energy is not convex and cannot be minimized except among a class of perturbations called expansion moves, in which case an exact minimization can be done with graph-cut techniques. One noteworthy feature of this method is that it handles occlusion: the algorithm detects points that cannot be matched with any point in the other image. In this method displacements are pixel accurate (no subpixel refinement).

  7. A multidimensional version of the Kolmogorov-Smirnov test

    International Nuclear Information System (INIS)

    Fasano, G.; Franceschini, A.

    1987-01-01

A generalization of the classical Kolmogorov-Smirnov test, suitable for analysing random samples defined in two or three dimensions, is discussed. This test provides some improvements with respect to an earlier version proposed by a previous author. In particular: (i) it is faster, by a factor equal to the sample size, n, and thus usable to analyse quite sizeable samples; (ii) it fully takes into account the dependence of the test statistics on the degree of correlation of data points and on the sample size; (iii) it allows for a generalization to the three-dimensional case which is still viable as regards computing time. Supported by a larger number of Monte Carlo simulations, it is ensured that this test is sufficiently distribution-free for any practical purposes. (author)
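The key idea of the two-dimensional generalization (due to Peacock, refined by Fasano and Franceschini) is to replace the one-dimensional CDF difference by the maximum difference of point fractions over the four quadrants around each data point. A brute-force O(n^2) sketch of that statistic (the fast version described in the abstract avoids this quadratic cost):

```python
import numpy as np

def ks2d_statistic(a, b):
    """Fasano-Franceschini-style two-sample 2D KS statistic: the maximum
    difference between the two samples' point fractions over the four
    quadrants centred on every data point (brute force, O(n^2))."""
    d = 0.0
    for x0, y0 in np.vstack([a, b]):
        for sx in (1.0, -1.0):
            for sy in (1.0, -1.0):
                fa = np.mean((sx * (a[:, 0] - x0) > 0) & (sy * (a[:, 1] - y0) > 0))
                fb = np.mean((sx * (b[:, 0] - x0) > 0) & (sy * (b[:, 1] - y0) > 0))
                d = max(d, abs(fa - fb))
    return d

rng = np.random.default_rng(0)
same1 = rng.standard_normal((100, 2))
same2 = rng.standard_normal((100, 2))
shifted = rng.standard_normal((100, 2)) + 2.0   # clearly different distribution

print(ks2d_statistic(same1, same2), ks2d_statistic(same1, shifted))
```

As in one dimension, samples from the same distribution give a small D and samples from different distributions give a large one; calibrating D's null distribution is exactly what the Monte Carlo simulations in the paper address.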

  8. Kolmogorov-Smirnov test for spatially correlated data

    Science.gov (United States)

    Olea, R.A.; Pawlowsky-Glahn, V.

    2009-01-01

The Kolmogorov-Smirnov test is a convenient method for investigating whether two underlying univariate probability distributions can be regarded as undistinguishable from each other or whether an underlying probability distribution differs from a hypothesized distribution. Application of the test requires that the sample be unbiased and the outcomes be independent and identically distributed, conditions that are violated to varying degrees by spatially continuous attributes, such as topographical elevation. A generalized form of the bootstrap method is used here for the purpose of modeling the distribution of the statistic D of the Kolmogorov-Smirnov test. The innovation is in the resampling, which in the traditional formulation of bootstrap is done by drawing from the empirical sample with replacement presuming independence. The generalization consists of preparing resamplings with the same spatial correlation as the empirical sample. This is accomplished by reading the value of unconditional stochastic realizations at the sampling locations, realizations that are generated by simulated annealing. The new approach was tested by two empirical samples taken from an exhaustive sample closely following a lognormal distribution. One sample was a regular, unbiased sample while the other one was a clustered, preferential sample that had to be preprocessed. Our results show that the p-value for the spatially correlated case is always larger than the p-value of the statistic in the absence of spatial correlation, which is in agreement with the fact that the information content of an uncorrelated sample is larger than that of a spatially correlated sample of the same size. © Springer-Verlag 2008.
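The bootstrap scaffolding the paper builds on is simple to sketch. The version below draws iid resamples under the null; the paper's actual contribution is to replace this step with resamples that reproduce the spatial correlation of the data (read off simulated-annealing realizations), which the sketch does not attempt:

```python
import numpy as np

def ks_stat_1samp(sample, cdf):
    """One-sample KS statistic D against a hypothesized CDF."""
    s = np.sort(sample)
    n = s.size
    F = cdf(s)
    return float(max(np.max(np.arange(1, n + 1) / n - F),
                     np.max(F - np.arange(0, n) / n)))

def lognormal_cdf(x):
    # Standard lognormal CDF via the error function (no scipy needed).
    from math import erf
    return np.array([0.5 * (1.0 + erf(np.log(v) / np.sqrt(2.0))) for v in x])

rng = np.random.default_rng(42)
sample = rng.lognormal(0.0, 1.0, 200)
d_obs = ks_stat_1samp(sample, lognormal_cdf)

# Bootstrap the null distribution of D with iid resamples (see the caveat
# above: the paper uses spatially correlated resamples instead).
boot = np.array([ks_stat_1samp(rng.lognormal(0.0, 1.0, 200), lognormal_cdf)
                 for _ in range(200)])
p_value = float(np.mean(boot >= d_obs))
print(d_obs, p_value)
```

Swapping the iid resampling for spatially correlated realizations widens the null distribution of D, which is why the paper finds larger p-values in the correlated case.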

  9. What can be efficiently reduced to the Kolmogorov-random strings?

    Czech Academy of Sciences Publication Activity Database

    Allender, E.; Buhrman, H.; Koucký, Michal

    2006-01-01

    Roč. 138, č. 1 (2006), s. 2-19 ISSN 0168-0072 Institutional research plan: CEZ:AV0Z10190503 Keywords : Kolmogorov complexity * Kolmogorov random strings * completeness Subject RIV: BA - General Mathematics Impact factor: 0.582, year: 2006

  10. On Measuring the Complexity of Networks: Kolmogorov Complexity versus Entropy

    Directory of Open Access Journals (Sweden)

    Mikołaj Morzy

    2017-01-01

One of the most popular methods of estimating the complexity of networks is to measure the entropy of network invariants, such as adjacency matrices or degree sequences. Unfortunately, entropy and all entropy-based information-theoretic measures have several vulnerabilities. These measures are neither independent of a particular representation of the network nor able to capture the properties of the generative process which produces the network. Instead, we advocate the use of the algorithmic entropy as the basis for a complexity definition for networks. Algorithmic entropy (also known as Kolmogorov complexity, or K-complexity for short) evaluates the complexity of the description required for a lossless recreation of the network. This measure is not affected by a particular choice of network features and it does not depend on the method of network representation. We perform experiments on Shannon entropy and K-complexity for gradually evolving networks. The results of these experiments point to K-complexity as the more robust and reliable measure of network complexity. The original contribution of the paper includes the introduction of several new entropy-deceiving networks and the empirical comparison of entropy and K-complexity as fundamental quantities for constructing complexity measures for networks.
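The contrast between the two quantities is easy to demonstrate. A sketch comparing the Shannon entropy of the degree distribution with a compression-based proxy for K-complexity on two graphs (the choice of compressor, graph sizes, and the specific graphs are illustrative, not the paper's experiments):

```python
import zlib
import numpy as np

def degree_entropy(adj):
    """Shannon entropy (bits) of the graph's degree distribution."""
    _, counts = np.unique(adj.sum(axis=1), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def k_complexity(adj):
    """Compression-based upper bound on the K-complexity of the adjacency matrix."""
    return len(zlib.compress(np.packbits(adj).tobytes(), 9))

n = 128
rng = np.random.default_rng(0)

ring = np.zeros((n, n), dtype=np.uint8)          # regular ring lattice
for i in range(n):
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1

upper = np.triu(rng.integers(0, 2, (n, n), dtype=np.uint8), 1)
er = upper + upper.T                             # Erdos-Renyi G(n, 1/2)

# The regular graph has zero degree entropy and a short description; the
# random graph has positive entropy and a nearly incompressible description.
print(degree_entropy(ring), degree_entropy(er))
print(k_complexity(ring), k_complexity(er))
```

An "entropy-deceiving" network in the paper's sense is one where an invariant-based entropy and the description length disagree about which graph is more complex.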

  11. Orbital-angular-momentum photons for optical communication in non-Kolmogorov atmospheric turbulence

    Science.gov (United States)

    Wei, Mei-Song; Wang, Jicheng; Zhang, Yixin; Hu, Zheng-Da

    2018-06-01

We investigate the effects of non-Kolmogorov atmospheric turbulence on the transmission of orbital-angular-momentum single photons for different turbulence aberrations in optical communication, via the channel capacity. For the non-Kolmogorov model, the characteristics of atmospheric turbulence may be determined by several factors, including increasing altitude, a varying index-of-refraction structure constant, and the power-law exponent of the non-Kolmogorov spectrum. It is found that the influences of low-order aberrations, including Z-tilt, defocus, astigmatism, and coma aberrations, are different, and the turbulence Z-tilt aberration plays a more important role in the decay of the signal.

  12. Applicability of Taylor's hypothesis in thermally driven turbulence

    Science.gov (United States)

    Kumar, Abhishek; Verma, Mahendra K.

    2018-04-01

    In this paper, we show that, in the presence of large-scale circulation (LSC), Taylor's hypothesis can be invoked to deduce the energy spectrum in thermal convection using real-space probes, a popular experimental tool. We perform numerical simulation of turbulent convection in a cube and observe that the velocity field follows Kolmogorov's spectrum (k-5/3). We also record the velocity time series using real-space probes near the lateral walls. The corresponding frequency spectrum exhibits Kolmogorov's spectrum (f-5/3), thus validating Taylor's hypothesis with the steady LSC playing the role of a mean velocity field. The aforementioned findings based on real-space probes provide valuable inputs for experimental measurements used for studying the spectrum of convective turbulence.
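This frequency-domain reading of Taylor's hypothesis can be reproduced on synthetic data: generate a "frozen" spatial field with a k^(-5/3) energy spectrum and sweep it past a fixed probe at a constant mean speed; the probe's time series then shows the same power law in f. A sketch (the field synthesis, unit advection speed, and fitting band are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4096
k = np.arange(1, N // 2)                       # spatial wavenumbers

# Synthesize a frozen field with E(k) ~ |u_k|^2 ~ k^(-5/3): amplitudes ~ k^(-5/6).
spec = np.zeros(N // 2 + 1, dtype=complex)
spec[1:N // 2] = k ** (-5.0 / 6.0) * np.exp(1j * rng.uniform(0, 2 * np.pi, k.size))
u = np.fft.irfft(spec)                         # u(x) on a periodic domain

# Taylor's hypothesis with unit mean speed U: the probe's time series u(t) is
# the spatial signal traversed at rate U, so f = U * k / (2*pi) and the
# frequency spectrum inherits the -5/3 law.
power = np.abs(np.fft.rfft(u)[1:N // 2]) ** 2

band = (k > 8) & (k < 256)                     # "inertial" band used for the fit
slope = np.polyfit(np.log(k[band]), np.log(power[band]), 1)[0]
print(slope)   # close to -5/3
```

In a real convection experiment it is the steady LSC that supplies the mean sweeping velocity U, which is exactly the point of the paper.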

  13. Simulating non-Kolmogorov turbulence phase screens based on equivalent structure constant and its influence on simulations of beam propagation

    Directory of Open Access Journals (Sweden)

    Ming Chen

A Gaussian distribution is used to describe the power law along the propagation path, and a phase-screen model of non-Kolmogorov turbulence is proposed based on equivalent refractive-index structure constants. Various simulations of Gaussian-beam propagation in Kolmogorov and non-Kolmogorov turbulence are used to distinguish isotropic from anisotropic turbulence. The results imply that non-Kolmogorov turbulence strongly influences the simulations through the spectral power law and the number of phase screens, and that this influence is mainly reflected in the light intensity and the beam drift. The statistics suggest that when a Gaussian beam propagates through a single non-Kolmogorov phase screen, the maximum and uniformity of the light intensity first increase and then decrease with the power law, while the beam drift first increases and then stabilizes. When a Gaussian beam propagates through multiple phase screens, the relative errors of the beam drift decrease with the number of phase screens. The scintillation indices in non-Kolmogorov turbulence are larger than those in Kolmogorov turbulence when the number of screens is small; when the number is large, the scintillation indices in non-Kolmogorov turbulence are smaller than those in Kolmogorov turbulence. The results shown in this paper demonstrate the effect of non-Kolmogorov turbulence on laser atmospheric transmission and suggest a possible direction for improving the accuracy of laser transmission over long atmospheric paths.
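Phase screens of this kind are conventionally synthesized by filtering complex white Gaussian noise with the square root of the phase power spectral density and inverse-FFT-ing. A sketch for the Kolmogorov case, Phi(k) = 0.023 r0^(-5/3) k^(-11/3), where changing the exponent gives a non-Kolmogorov screen (grid size, r0, sampling, and the normalization convention are illustrative assumptions):

```python
import numpy as np

def phase_screen(n=256, r0=0.1, delta=0.01, exponent=-11.0 / 3.0, seed=0):
    """FFT-based turbulent phase screen. exponent = -11/3 gives Kolmogorov
    statistics; other values give non-Kolmogorov power laws.
    Units: r0 (Fried parameter) and delta (grid spacing) in metres."""
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(n, delta)                        # cycles / m
    kx, ky = np.meshgrid(2 * np.pi * f, 2 * np.pi * f)  # angular frequencies
    kk = np.hypot(kx, ky)
    kk[0, 0] = 1.0                                      # dummy, avoids 0**negative
    psd = 0.023 * r0 ** (-5.0 / 3.0) * kk ** exponent   # phase PSD
    psd[0, 0] = 0.0                                     # remove the piston term
    dk = 2 * np.pi / (n * delta)
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return np.real(np.fft.ifft2(noise * np.sqrt(psd / 2.0) * dk)) * n * n

screen = phase_screen()                                 # Kolmogorov screen
screen_ng = phase_screen(exponent=-10.0 / 3.0)          # a non-Kolmogorov power law
```

Stacking several such screens along the path, as the paper does, approximates volume turbulence; the abstract's findings concern how results depend on the exponent and on the number of screens.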

  14. Kolmogorov Behavior of Near-Wall Turbulence and Its Application in Turbulence Modeling

    Science.gov (United States)

    Shih, Tsan-Hsing; Lumley, John L.

    1992-01-01

    The near-wall behavior of turbulence is re-examined in a way different from that proposed by Hanjalic and Launder and followers. It is shown that at a certain distance from the wall, all energetic large eddies will reduce to Kolmogorov eddies (the smallest eddies in turbulence). All the important wall parameters, such as friction velocity, viscous length scale, and mean strain rate at the wall, are characterized by Kolmogorov microscales. According to this Kolmogorov behavior of near-wall turbulence, the turbulence quantities, such as turbulent kinetic energy, dissipation rate, etc. at the location where the large eddies become Kolmogorov eddies, can be estimated by using both direct numerical simulation (DNS) data and asymptotic analysis of near-wall turbulence. This information will provide useful boundary conditions for the turbulent transport equations. As an example, the concept is incorporated in the standard k-epsilon model which is then applied to channel and boundary flows. Using appropriate boundary conditions (based on Kolmogorov behavior of near-wall turbulence), there is no need for any wall-modification to the k-epsilon equations (including model constants). Results compare very well with the DNS and experimental data.

  15. A generalized self-similar spectrum for decaying homogeneous and isotropic turbulence

    Science.gov (United States)

    Yang, Pingfan; Pumir, Alain; Xu, Haitao

    2017-11-01

The spectrum of turbulence in the dissipative and inertial ranges can be described by the celebrated Kolmogorov theory. However, there is no general solution for the spectrum at large scales, especially for statistically unsteady turbulent flows. Here we propose a generalized self-similar form that contains two length scales, the integral scale and the Kolmogorov scale, for decaying homogeneous and isotropic turbulence. With the help of the local spectral energy transfer hypothesis of Pao (Phys. Fluids, 1965), we derive and solve for the explicit form of the energy spectrum and the energy transfer function, from which the second- and third-order velocity structure functions can also be obtained. We check and verify our assumptions by direct numerical simulations (DNS), and our solutions for the velocity structure functions compare well with hot-wire measurements of high-Reynolds-number wind-tunnel turbulence. Financial support from NSFC under Grant Number 11672157, from the Alexander von Humboldt Foundation, and from the MPG is gratefully acknowledged.
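Pao's local spectral energy transfer hypothesis, invoked above, yields a closed-form one-scale spectrum matching the inertial-range k^(-5/3) law with a cutoff at the Kolmogorov scale eta; the paper generalizes this to a two-scale form. A sketch evaluating the classical Pao spectrum (the values of Ck, epsilon, and nu are illustrative):

```python
import numpy as np

def pao_spectrum(k, eps=1.0, nu=1e-4, ck=1.6):
    """Pao (1965) spectrum E(k) = Ck eps^(2/3) k^(-5/3) exp(-1.5 Ck (k eta)^(4/3)),
    with eta = (nu^3 / eps)^(1/4) the Kolmogorov length scale."""
    eta = (nu ** 3 / eps) ** 0.25
    return (ck * eps ** (2.0 / 3.0) * k ** (-5.0 / 3.0)
            * np.exp(-1.5 * ck * (k * eta) ** (4.0 / 3.0)))

k = np.logspace(0, 5, 200)                 # here 1/eta = 1e3
E = pao_spectrum(k)
compensated = E * k ** (5.0 / 3.0)         # ~ Ck in the inertial range, then drops
```

The compensated spectrum plateaus at Ck in the inertial range and falls off beyond k ~ 1/eta; the two-scale form of the paper additionally shapes the spectrum near the integral scale.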

  16. Multifractal scaling at the Kolmogorov microscale in fully developed compressible turbulence

    International Nuclear Information System (INIS)

    Shivamoggi, B.K.

    1995-01-01

    In this paper, some aspects of multifractal scaling at the Kolmogorov microscale in fully developed compressible turbulence are considered. These considerations, on the one hand, provide an insight into the mechanism of compressible turbulence, and on the other hand enable one to determine the robustness of some known results in incompressible turbulence. copyright 1995 Academic Press, Inc

  17. A Short Introduction to Model Selection, Kolmogorov Complexity and Minimum Description Length (MDL)

    NARCIS (Netherlands)

    Nannen, Volker

    2010-01-01

The concept of overfitting in model selection is explained and demonstrated. After providing some background information on information theory and Kolmogorov complexity, we provide a short explanation of Minimum Description Length and error minimization. We conclude with a discussion of the typical
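The MDL idea against overfitting can be shown in a few lines: trade goodness of fit against a code-length penalty for the parameters. A crude two-part-code sketch for choosing a polynomial degree (the BIC-style code length and the synthetic data are illustrative simplifications of MDL, not the tutorial's formulation):

```python
import numpy as np

def description_length(x, y, degree):
    """Crude two-part MDL: n/2 * log(RSS/n) nats for the data given the model,
    plus (k/2) * log(n) nats for encoding the k model parameters."""
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    n, kparams = x.size, degree + 1
    return 0.5 * n * np.log(rss / n) + 0.5 * kparams * np.log(n)

rng = np.random.default_rng(3)
x = np.linspace(-1.0, 1.0, 100)
y = 3.0 * x ** 2 - 2.0 * x + 1.0 + 0.1 * rng.standard_normal(x.size)  # true degree: 2

best = min(range(9), key=lambda d: description_length(x, y, d))
print(best)   # high-degree fits shave a little RSS but pay a larger parameter cost
```

Pure error minimization would always prefer the highest degree; the parameter cost is what makes the overfitted models lose.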

  18. On the Existence of the Kolmogorov Inertial Range in the Terrestrial Magnetosheath Turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Huang, S. Y.; Yuan, Z. G. [School of Electronic Information, Wuhan University, Wuhan (China); Hadid, L. Z.; Sahraoui, F. [Laboratoire de Physique des Plasmas, CNRS-Ecole Polytechnique-UPMC, Palaiseau (France); Deng, X. H., E-mail: shiyonghuang@whu.edu.cn [Institute of Space Science and Technology, Nanchang University, Nanchang (China)

    2017-02-10

In the solar wind, the power spectral density (PSD) of the magnetic field fluctuations generally follows the so-called Kolmogorov spectrum f^(−5/3) in the inertial range, where the dynamics is thought to be dominated by nonlinear interactions between counter-propagating incompressible Alfvén wave packets. These features are thought to be ubiquitous in space plasmas. The present study gives a new and more complex picture of magnetohydrodynamic (MHD) turbulence as observed in the terrestrial magnetosheath. The study uses three years of in situ data from the Cluster mission to explore the nature of the magnetic fluctuations at MHD scales in different locations within the magnetosheath, including the flanks and subsolar regions. It is found that the magnetic field fluctuations at MHD scales generally have a PSD close to f^(−1) (shallower than the Kolmogorov one, f^(−5/3)) down to the ion characteristic scale, which recalls the energy-containing scales of solar wind turbulence. The Kolmogorov spectrum is observed only away from the bow shock toward the flank and magnetopause regions, in 17% of the analyzed time intervals. Measuring the magnetic compressibility, it is shown that only a fraction (35%) of the observed Kolmogorov spectra was populated by shear Alfvénic fluctuations, whereas the majority of the events (65%) was found to be dominated by compressible magnetosonic-like fluctuations, which contrasts with well-known turbulence properties in the solar wind. This study gives a first comprehensive view of the origin of the f^(−1) spectrum and the transition to the Kolmogorov inertial range; both questions remain controversial in solar wind turbulence.

  19. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin

    2013-05-24

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  20. Modeling Stochastic Complexity in Complex Adaptive Systems: Non-Kolmogorov Probability and the Process Algebra Approach.

    Science.gov (United States)

    Sulis, William H

    2017-10-01

Walter Freeman III pioneered the application of nonlinear dynamical systems theories and methodologies in his work on mesoscopic brain dynamics. Sadly, mainstream psychology and psychiatry still cling to linear, correlation-based data analysis techniques, which threaten to subvert the process of experimentation and theory building. In order to progress, it is necessary to develop tools capable of managing the stochastic complexity of complex biopsychosocial systems, which includes multilevel feedback relationships, nonlinear interactions, chaotic dynamics and adaptability. In addition, however, these systems exhibit intrinsic randomness, non-Gaussian probability distributions, non-stationarity, contextuality, and non-Kolmogorov probabilities, as well as the absence of mean and/or variance and conditional probabilities. These properties and their implications for statistical analysis are discussed. An alternative approach, the Process Algebra approach, is described. It is a generative model, capable of generating non-Kolmogorov probabilities. It has proven useful in addressing fundamental problems in quantum mechanics and in the modeling of developing psychosocial systems.

  1. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin; Genton, Marc G.

    2013-01-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  2. Randomness Representation of Turbulence in Canopy Flows Using Kolmogorov Complexity Measures

    Directory of Open Access Journals (Sweden)

    Dragutin Mihailović

    2017-09-01

    Turbulence is often described in terms of either irregular or random fluid flows, without quantification. In this paper, a methodology to evaluate the randomness of turbulence using measures based on Kolmogorov complexity (KC) is proposed. This methodology is applied to experimental data from a turbulent flow developing in a laboratory channel with canopies of three different densities. The methodology is also compared with the traditional approach based on classical turbulence statistics.
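
Since Kolmogorov complexity itself is uncomputable, KC-based measures are estimated in practice with Lempel-Ziv-type parsers applied to a binarized signal (e.g., velocity thresholded at its median). A toy sketch of that idea, using an LZ78-style phrase count rather than the paper's exact measures:

```python
import math

def lz_phrase_count(s):
    """LZ78-style parsing: count distinct phrases, where each new phrase
    is the shortest prefix of the remaining input not seen before. The
    count grows slowly for regular strings, fast for random-looking ones."""
    phrases, phrase, count = set(), "", 0
    for ch in s:
        phrase += ch
        if phrase not in phrases:
            phrases.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

def normalized_complexity(s):
    # Rough normalization so values are comparable across record lengths.
    n = len(s)
    return lz_phrase_count(s) * math.log2(n) / n

# Binarizing a velocity record around its median would look like:
# bits = "".join("1" if u > median else "0" for u in series)
```

A constant string parses into fewer phrases than an irregular one of the same length, which is the contrast such randomness measures exploit.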

  3. Experimental Data Does Not Violate Bell's Inequality for "Right Kolmogorov Space"

    DEFF Research Database (Denmark)

    Fischer, Paul; Avis, David; Hilbert, Astrid

    2008-01-01

    of polarization beam splitters (PBSs). In fact, such data consists of some conditional probabilities which only partially define a probability space. Ignoring this conditioning leads to apparent contradictions in the classical probabilistic model (due to Kolmogorov). We show how to make a completely consistent...... probabilistic model by taking into account the probabilities of selecting the settings of the PBSs. Our model both matches the experimental data and is consistent with classical probability theory....

  4. Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains

    Science.gov (United States)

    Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.

    2018-01-01

    We establish a link between the maximization of Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since the maximization of the KSE is analytical and in general easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum-mixing-time dynamics. It could be of interest in computer science and statistical physics, for computations that use random walks on graphs that can be represented as Markov chains.
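
For a finite-state chain, the quantity being maximized has a closed form, h = -sum_i pi_i sum_j P_ij log P_ij, with pi the stationary distribution of the transition matrix P. A minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution pi of a row-stochastic matrix P, from the
    left eigenvector associated with eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

def ks_entropy(P):
    """Kolmogorov-Sinai entropy rate of a Markov chain:
    h = -sum_i pi_i sum_j P_ij log P_ij (0 log 0 taken as 0)."""
    pi = stationary_distribution(P)
    logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    return float(-np.sum(pi[:, None] * P * logP))
```

A fair coin-flip chain attains h = log 2, while a deterministic two-cycle has h = 0, the two extremes of the entropy-rate range for two states.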

  5. Propagation of coherently combined truncated laser beam arrays with beam distortions in non-Kolmogorov turbulence.

    Science.gov (United States)

    Tao, Rumao; Si, Lei; Ma, Yanxing; Zhou, Pu; Liu, Zejin

    2012-08-10

    The propagation properties of coherently combined truncated laser beam arrays with beam distortions through non-Kolmogorov turbulence are studied in detail both analytically and numerically. The analytical expressions for the average intensity and the beam width of coherently combined truncated laser beam arrays with beam distortions propagating through turbulence are derived based on the combination of statistical optics methods and the extended Huygens-Fresnel principle. The effect of beam distortions, such as amplitude modulation and phase fluctuation, is studied by numerical examples. The numerical results reveal that phase fluctuations have significant influence on the spreading of coherently combined truncated laser beam arrays in non-Kolmogorov turbulence, and the effects of the phase fluctuations can be negligible as long as the phase fluctuations are controlled under a certain level, i.e., a>0.05 for the situation considered in the paper. Furthermore, large phase fluctuations can convert the beam distribution rapidly to a Gaussian form, vary the spreading, weaken the optimum truncation effects, and suppress the dependence of spreading on the parameters of the non-Kolmogorov turbulence.

  6. Application of the Fokker-Plank-Kolmogorov equation for affluence forecast to hydropower reservoirs (Betania Case)

    International Nuclear Information System (INIS)

    Dominguez Calle, Efrain Antonio

    2004-01-01

    This paper presents a modeling technique to forecast probability density curves for the flows that represent the monthly inflow to hydropower reservoirs. It briefly reviews the factors that require inflow forecasts in terms of probabilities, the range of existing forecast methods, and the contradiction between those techniques and the actual requirements of decision-making procedures. That contradiction is resolved by applying the Fokker-Planck-Kolmogorov equation, which describes the time evolution of a stochastic process that can be considered Markovian. We show the numerical scheme for this equation, its initial and boundary conditions, and the results of its application to the case of the Betania reservoir.
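
As a rough illustration of such a numerical scheme (not the authors'; the drift, diffusion, grid, and step sizes below are made-up): an explicit central-difference step for the one-dimensional FPK equation dp/dt = -d(A p)/dx + 0.5 d^2(B p)/dx^2.

```python
import numpy as np

def fpk_step(p, x, A, B, dt):
    """One explicit central-difference step of the 1-D Fokker-Planck-
    Kolmogorov equation  dp/dt = -d(A p)/dx + 0.5 d^2(B p)/dx^2.
    Boundary values are left unchanged (~0 on a wide enough domain)."""
    dx = x[1] - x[0]
    Ap, Bp = A(x) * p, B(x) * p
    q = p.copy()
    q[1:-1] = (p[1:-1]
               - dt * (Ap[2:] - Ap[:-2]) / (2.0 * dx)
               + 0.5 * dt * (Bp[2:] - 2.0 * Bp[1:-1] + Bp[:-2]) / dx**2)
    return q

# Illustrative run: constant drift 0.3 and diffusion 0.2 acting on an
# initially standard-normal density. Probability mass stays ~1 and the
# mean drifts at the rate A, as the FPK equation predicts.
x = np.linspace(-10.0, 10.0, 401)
p = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)
for _ in range(200):
    p = fpk_step(p, x, lambda z: 0.3 * np.ones_like(z),
                 lambda z: 0.2 * np.ones_like(z), dt=1e-3)
```

The step sizes are chosen so the explicit scheme is stable (0.5 B dt/dx^2 = 0.04 and A dt/dx = 0.006, both well inside the stability limits).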

  7. Approximate solution to the Kolmogorov equation for a fission chain-reacting system

    International Nuclear Information System (INIS)

    Ruby, L.; McSwine, T.L.

    1986-01-01

    An approximate solution has been obtained for the Kolmogorov equation describing a fission chain-reacting system. The method considers the population of neutrons, delayed-neutron precursors, and detector counts. The effect of the detector is separated from the statistics of the chain reaction by a weak coupling assumption that predicts that the detector responds to the average rather than to the instantaneous neutron population. An approximate solution to the remaining equation, involving the populations of neutrons and precursors, predicts a negative-binomial behaviour for the neutron probability distribution

  8. The pervasive reach of resource-bounded Kolmogorov complexity in computational complexity theory

    Czech Academy of Sciences Publication Activity Database

    Allender, E.; Koucký, Michal; Ronneburger, D.; Roy, S.

    2011-01-01

    Roč. 77, č. 1 (2011), s. 14-40 ISSN 0022-0000 R&D Projects: GA ČR GAP202/10/0854; GA MŠk(CZ) 1M0545; GA AV ČR IAA100190902 Institutional research plan: CEZ:AV0Z10190503 Keywords : Circuit complexity * Distinguishing complexity * FewEXP * Formula size * Kolmogorov complexity Subject RIV: BA - General Mathematics Impact factor: 1.157, year: 2011 http://www.sciencedirect.com/science/article/pii/S0022000010000887

  9. KARHUNEN-LOÈVE Basis Functions of Kolmogorov Turbulence in the Sphere

    Science.gov (United States)

    Mathar, Richard J.

    In support of modeling atmospheric turbulence, the statistically independent Karhunen-Loève modes of refractive indices with isotropic Kolmogorov spectrum of the covariance are calculated inside a sphere of fixed radius, rendered as a series of 3D Zernike functions. Many of the symmetry arguments of the well-known associated 2D problem for the circular input pupil remain valid. The technique of efficient diagonalization of the eigenvalue problem in wavenumber space is founded on the Fourier representation of the 3D Zernike basis, and is extensible to the von Kármán power spectrum.

  10. Dynamics, integrability and topology for some classes of Kolmogorov Hamiltonian systems in R+4

    Science.gov (United States)

    Llibre, Jaume; Xiao, Dongmei

    2017-02-01

    In this paper we first give the sufficient and necessary conditions for two classes of polynomial Kolmogorov systems in R+4 to be Hamiltonian systems. Then we study the integrability of these Hamiltonian systems in the Liouville sense. Finally, we investigate the global dynamics of the completely integrable Lotka-Volterra Hamiltonian systems in R+4. As an application of the invariant subsets of these systems, we obtain topological classifications of the 3-submanifolds in R+4 defined by the hypersurfaces axy + bzw + cx^2y + dxy^2 + ez^2w + fzw^2 = h, where a, b, c, d, e, f, and h are real constants.

  11. LT^2C^2: A language of thought with Turing-computable Kolmogorov complexity

    Directory of Open Access Journals (Sweden)

    Santiago Figueira

    2013-03-01

    In this paper, we present a theoretical effort to connect the theory of program size to psychology by implementing a concrete language of thought with Turing-computable Kolmogorov complexity (LT^2C^2) satisfying the following requirements: (1) to be simple enough so that the complexity of any given finite binary sequence can be computed; (2) to be based on tangible operations of human reasoning (printing, repeating, ...); (3) to be sufficiently powerful to generate all possible sequences but not too powerful as to identify regularities which would be invisible to humans. We first formalize LT^2C^2, giving its syntax and semantics, and defining an adequate notion of program size. Our setting leads to a Kolmogorov complexity function relative to LT^2C^2 which is computable in polynomial time, and it also induces a prediction algorithm in the spirit of Solomonoff's inductive inference theory. We then prove the efficacy of this language by investigating regularities in strings produced by participants attempting to generate random strings. Participants had a profound understanding of randomness and hence avoided typical misconceptions such as exaggerating the number of alternations. We reasoned that remaining regularities would express the algorithmic nature of human thoughts, revealed in the form of specific patterns. Kolmogorov complexity relative to LT^2C^2 passed the three expected tests examined here: (1) human sequences were less complex than control PRNG sequences; (2) human sequences were not stationary, showing decreasing values of complexity resulting from fatigue; (3) each individual showed traces of algorithmic stability, since fitting of partial data was more effective to predict subsequent data than average fits. This work extends previous efforts to combine notions of Kolmogorov complexity theory and algorithmic information theory with psychology, by explicitly proposing a language which may describe the patterns of human thoughts.

  12. Generalized Kolmogorov--von Karman relation and some further implications on the magnitude of the constants

    International Nuclear Information System (INIS)

    Frenzen, P.

    1975-01-01

    The relation between the Kolmogorov and von Karman constants in the atmospheric surface boundary layer, appropriate to the special conditions of neutrally stratified and locally dissipating flow, is essentially a straightforward combination of three ingredients: the logarithmic wind profile, the one-dimensional spectral relation for turbulent energy density in the inertial subrange, and a reduced turbulent energy equation that balances the dissipation rate with the mechanical production term alone. The effects on the complete energy equation of the stability-dependent dimensionless wind shear and of the diabatic wind profile (an integral of the former) are discussed.
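
The combination the abstract describes can be written compactly. A sketch under neutral stratification (u_* the friction velocity, kappa the von Karman constant, alpha_1 the one-dimensional Kolmogorov constant; this is the standard textbook route, not necessarily Frenzen's exact algebra):

```latex
% Logarithmic wind profile and production-dissipation balance:
\frac{dU}{dz} = \frac{u_*}{\kappa z},
\qquad
\varepsilon = u_*^2\,\frac{dU}{dz} = \frac{u_*^3}{\kappa z}.
% Inertial-subrange spectrum, with the balance substituted in:
E_1(k) = \alpha_1\,\varepsilon^{2/3}\,k^{-5/3}
       = \alpha_1\,\kappa^{-2/3}\,u_*^2\,z^{-2/3}\,k^{-5/3},
\qquad\text{i.e.}\qquad
\frac{k\,E_1(k)}{u_*^2} = \alpha_1\,\kappa^{-2/3}\,(kz)^{-2/3}.
```

Measured surface-layer spectra at height z therefore constrain only the product alpha_1 kappa^{-2/3}, which is why estimates of the two constants are tied together.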

  13. Representation of the Kolmogorov model having all distinguishing features of quantum probabilistic model

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2003-01-01

    The contextual approach to the Kolmogorov probability model gives the possibility to represent this conventional model as a quantum structure, i.e., by using complex amplitudes of probabilities (or, in the abstract approach, in a Hilbert space). Classical (Kolmogorovian) random variables are represented by, in general, noncommutative operators in the Hilbert space. The existence of such a contextual representation of the Kolmogorovian model looks very surprising in view of the orthodox quantum tradition. However, our model can peacefully coexist with various 'no-go' theorems (e.g., von Neumann, Kochen and Specker, Bell, ...)

  14. A Kolmogorov Complexity View of Analogy: From Logical Modeling to Experimentations

    Science.gov (United States)

    Bayoudh, Meriam; Prade, Henri; Richard, Gilles

    Analogical reasoning is considered one of the main mechanisms underlying human intelligence and creativity, allowing the paradigm shift essential to a creative process. More specific is the notion of analogical proportion, like "2 is to 4 as 5 is to 10" or "read is to reader as lecture is to lecturer": such statements can be precisely described within an algebraic framework. When the proportion holds between concepts, as in "engine is to car as heart is to human" or "wine is to France as beer is to England", applying an algebraic framework is less straightforward, and a new way to understand analogical proportions on the basis of Kolmogorov complexity theory may seem more appropriate. This viewpoint has been used to develop a classifier detecting analogies in natural language. Despite their apparent difference, it is quite clear that the two viewpoints should be strongly related. In this paper, we investigate the link between a purely abstract view of analogical proportions and a definition based on Kolmogorov complexity theory. This theory is used as a backbone to experiment with a classifier of natural language analogies whose results are consistent with the abstract setting.
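
A common computable stand-in for Kolmogorov-complexity comparisons of this kind is the Normalized Compression Distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), with C the length of a real compressor's output. A sketch of that proxy (the paper's actual classifier may differ):

```python
import zlib

def C(b):
    # Compressed length as a crude, computable stand-in for
    # Kolmogorov complexity (which is itself uncomputable).
    return len(zlib.compress(b, 9))

def ncd(x, y):
    """Normalized Compression Distance between two strings:
    near 0 for closely related inputs, larger for unrelated ones."""
    x, y = x.encode(), y.encode()
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)
```

A string is much "closer" to itself than to unrelated text, since compressing the concatenation of two copies costs little more than compressing one.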

  15. A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic

    Science.gov (United States)

    Qi, Jin-Peng; Qi, Jie; Zhang, Qing

    2016-01-01

    Change-point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt changes in large-scale bioelectric signals. Currently, most of the existing methods, like the Kolmogorov-Smirnov (KS) statistic and so forth, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed BSTcA and BSTcD, are constructed by multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; last, an optimal search path is detected from the root to the leaf nodes of the two BSTs. The studies on both the synthetic time series samples and the real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt change more quickly and efficiently than the KS, t-statistic (t), and Singular-Spectrum Analysis (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy of the four methods. This study suggests that the proposed BSTKS is very helpful for useful information inspection on all kinds of bioelectric time series signals. PMID:27413364
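
The BSTKS construction itself (Haar-transform trees plus search criteria) is more involved; for reference, the classical two-sample KS statistic it modifies is simply the largest gap between the two empirical CDFs. A minimal sketch:

```python
def ks_two_sample(x, y):
    """Two-sample Kolmogorov-Smirnov statistic
    D = sup_t |F_x(t) - F_y(t)|, evaluated over the pooled sample points."""
    xs, ys = sorted(x), sorted(y)
    nx, ny = len(xs), len(ys)
    d, i, j = 0.0, 0, 0
    for t in sorted(set(xs + ys)):
        while i < nx and xs[i] <= t:
            i += 1
        while j < ny and ys[j] <= t:
            j += 1
        d = max(d, abs(i / nx - j / ny))
    return d
```

Identical samples give D = 0, fully separated samples give D = 1, and change-point detectors of this family flag a split point when D exceeds a critical value.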

  17. Influence of wind speed on free space optical communication performance for Gaussian beam propagation through non Kolmogorov strong turbulence

    International Nuclear Information System (INIS)

    Deng Peng; Yuan Xiuhua; Zeng Yanan; Zhao Ming; Luo Hanjun

    2011-01-01

    In free-space optical communication links, atmospheric turbulence causes fluctuations in both the intensity and the phase of the received signal, affecting link performance. Most theoretical treatments have been described by Kolmogorov's power spectral density model through weak turbulence with constant wind speed. However, several experiments showed that Kolmogorov theory is sometimes incomplete for describing atmospheric turbulence properly, especially through strong turbulence with variable wind speed, which is known to contribute significantly to the turbulence in the atmosphere. We present an optical turbulence model that incorporates variable wind speed instead of a constant value, a non-Kolmogorov power spectrum that uses a generalized exponent instead of the standard constant exponent value 11/3, and a generalized amplitude factor instead of the constant value 0.033. The scintillation index, mean signal-to-noise ratio, and mean bit error rate of free-space optical communication for a Gaussian beam wave have been derived by extended Rytov theory in non-Kolmogorov strong turbulence. The influence of wind speed variations on free-space optical communication performance has then been analyzed under different atmospheric turbulence intensities. The results suggest that the effects of wind speed variation through non-Kolmogorov turbulence on communication performance are more severe in many situations and need to be taken into account in free-space optical communication. It is anticipated that this work will be helpful to investigations of free-space optical communication performance under severe weather conditions in strong atmospheric turbulence.

  18. THE FRACTAL MARKET HYPOTHESIS

    OpenAIRE

    FELICIA RAMONA BIRAU

    2012-01-01

    In this article, the concept of capital market is analysed using the Fractal Market Hypothesis, which is a modern, complex, and unconventional alternative to classical finance methods. The Fractal Market Hypothesis is in sharp opposition to the Efficient Market Hypothesis and explores the application of chaos theory and fractal geometry to finance. The Fractal Market Hypothesis is based on certain assumptions. Thus, it is emphasized that investors do not react immediately to the information they receive and...

  19. Variability: A Pernicious Hypothesis.

    Science.gov (United States)

    Noddings, Nel

    1992-01-01

    The hypothesis of greater male variability in test results is discussed in its historical context, and reasons feminists have objected to the hypothesis are considered. The hypothesis acquires political importance if it is considered that variability results from biological, rather than cultural, differences. (SLD)

  20. THE FRACTAL MARKET HYPOTHESIS

    Directory of Open Access Journals (Sweden)

    FELICIA RAMONA BIRAU

    2012-05-01

    In this article, the concept of capital market is analysed using the Fractal Market Hypothesis, which is a modern, complex, and unconventional alternative to classical finance methods. The Fractal Market Hypothesis is in sharp opposition to the Efficient Market Hypothesis and explores the application of chaos theory and fractal geometry to finance. The Fractal Market Hypothesis is based on certain assumptions. Thus, it is emphasized that investors do not react immediately to the information they receive, and, of course, the manner in which they interpret that information may differ. Also, the Fractal Market Hypothesis refers to the way that liquidity and investment horizons influence the behaviour of financial investors.

  1. Subcritical transition to turbulence in three-dimensional Kolmogorov flow

    Energy Technology Data Exchange (ETDEWEB)

    Veen, Lennaert van [University of Ontario Institute of Technology, 2000 Simcoe Street North, L1H 7K4 Oshawa, Ontario (Canada); Goto, Susumu, E-mail: lennaert.vanveen@uoit.ca [Graduate School of Engineering Science, Osaka University 1–3 Machikaneyama, Toyonaka, Osaka, 560-8531 Japan (Japan)

    2016-12-15

    We study Kolmogorov flow on a three-dimensional periodic domain with aspect ratios fixed to unity. Using an energy method, we give a concise proof of the linear stability of the laminar flow profile. Since turbulent motion is observed for high enough Reynolds numbers, we expect the domain of attraction of the laminar flow to be bounded by the stable manifolds of simple invariant solutions. We show one such edge state to be an equilibrium with a spatial structure reminiscent of that found in plane Couette flow, with streamwise rolls on the largest spatial scales. When tracking the edge state, we find two branches of solutions that join in a saddle-node bifurcation at a finite Reynolds number.

  2. On the construction of the Kolmogorov normal form for the Trojan asteroids

    CERN Document Server

    Gabern, F; Locatelli, U

    2004-01-01

    In this paper we focus on the stability of the Trojan asteroids for the planar Restricted Three-Body Problem (RTBP), by extending the usual techniques for the neighbourhood of an elliptic point to derive results in a larger vicinity. Our approach is based on the numerical determination of the frequencies of the asteroid and the effective computation of the Kolmogorov normal form for the corresponding torus. This procedure has been applied to the first 34 Trojan asteroids of the IAU Asteroid Catalog, and it has worked successfully for 23 of them. The construction of this normal form allows for computer-assisted proofs of stability. To show it, we have implemented a proof of existence of families of invariant tori close to a given asteroid, for a high order expansion of the Hamiltonian. This proof has been successfully applied to three Trojan asteroids.

  3. Flat-topped beam transmittance in anisotropic non-Kolmogorov turbulent marine atmosphere

    Science.gov (United States)

    Ata, Yalçın; Baykal, Yahya

    2017-10-01

    Turbulence affects optical propagation, and, as a result, the intensity is attenuated along the path of propagation. The attenuation becomes significant when the turbulence becomes stronger. Transmittance is a measure indicating how much power is collected at the receiver after the optical wave propagates in the turbulent medium. The on-axis transmittance is formulated when a flat-topped optical beam propagates in a marine atmosphere experiencing anisotropic non-Kolmogorov turbulence. Variations in the transmittance are evaluated versus the beam source size, beam number, link distance, power law exponent, anisotropy factor, and structure constant. It is found that larger beam source sizes and beam numbers yield higher transmittance values; however, as the link distance, power law exponent, anisotropy factor, or structure constant increase, transmittance values are lowered. Our results will help in the performance evaluations of optical wireless communication and optical imaging systems operating in a marine atmosphere.

  4. Kolmogorov spectra of long wavelength ion-drift waves in dusty plasmas

    International Nuclear Information System (INIS)

    Onishchenko, O.G.; Pokhotelov, O.A.; Sagdeev, R.Z.; Pavlenko, V.P.; Stenflo, L.; Shukla, P.K.; Zolotukhin, V.V.

    2002-01-01

    Weakly turbulent Kolmogorov spectra of ion-drift waves in dusty plasmas with an arbitrary ratio between the ion-drift and the Shukla-Varma frequencies are investigated. It is shown that in the long wavelength limit, when the contribution to the wave dispersion associated with the inhomogeneity of the dust component is larger than that related to the plasma inhomogeneity, the wave dispersion and the matrix interaction element coincide, up to inessential numerical coefficients, with those for the Rossby or the electron-drift waves described by the Charney or Hasegawa-Mima equations. It is found that the weakly turbulent spectra related to the conservation of the wave energy are local, and thus the energy flux is directed towards smaller spatial scales.

  5. Microstructure development in Kolmogorov, Johnson-Mehl, and Avrami nucleation and growth kinetics

    Science.gov (United States)

    Pineda, Eloi; Crespo, Daniel

    1999-08-01

    A statistical model with the ability to evaluate the microstructure developed in nucleation and growth kinetics is built in the framework of the Kolmogorov, Johnson-Mehl, and Avrami (KJMA) theory. A population-based approach is used to compute the observed grain-size distribution. The impingement process which delays grain growth is analyzed, and the effective growth rate of each population is estimated considering the previous grain history. The proposed model is integrated for a wide range of nucleation and growth protocols, including constant nucleation, pre-existing nuclei, and intermittent nucleation with interface- or diffusion-controlled grain growth. The results are compared with Monte Carlo simulations, giving quantitative agreement even in cases where previous models fail.
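
The classical KJMA impingement correction underlying such models converts an "extended" (overlap-ignoring) transformed volume fraction into the actual fraction via X = 1 - exp(-X_ext). A toy sketch for a constant nucleation rate and interface-controlled 3-D growth (the symbols I and G, and all values, are illustrative):

```python
import math

def extended_volume(t, I, G):
    # Extended (unimpinged) transformed fraction for constant nucleation
    # rate I and constant interface velocity G in three dimensions:
    # X_ext = (pi/3) * I * G^3 * t^4, i.e. the classic Avrami exponent n = 4.
    return math.pi / 3.0 * I * G**3 * t**4

def jmak_fraction(t, I, G):
    """KJMA correction for impingement: X = 1 - exp(-X_ext)."""
    return 1.0 - math.exp(-extended_volume(t, I, G))
```

The correction keeps the fraction between 0 and 1 even as the extended volume grows without bound, which is exactly the role impingement plays in delaying grain growth.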

  6. Physiopathological Hypothesis of Cellulite

    Science.gov (United States)

    de Godoy, José Maria Pereira; de Godoy, Maria de Fátima Guerreiro

    2009-01-01

    A series of questions is raised concerning this condition, including its name, the consensus about the histopathological findings, the physiological hypothesis, and the treatment of the disease. We established a hypothesis for cellulite and confirmed that the clinical response is compatible with this hypothesis. Hence this novel approach brings a modern physiological concept with a physiopathologic basis and clinical proof of the hypothesis. We emphasize that the choice of patient, the correct diagnosis of cellulite, and the technique employed are fundamental to success. PMID:19756187

  7. Life Origination Hydrate Hypothesis (LOH-Hypothesis

    Directory of Open Access Journals (Sweden)

    Victor Ostrovskii

    2012-01-01

    The paper develops the Life Origination Hydrate Hypothesis (LOH-hypothesis), according to which living-matter simplest elements (LMSEs), which are N-bases, riboses, nucleosides, nucleotides, DNA- and RNA-like molecules, amino-acids, and proto-cells, repeatedly originated on the basis of thermodynamically controlled, natural, and inevitable processes governed by universal physical and chemical laws from CH4, niters, and phosphates under the Earth's surface or seabed within the crystal cavities of the honeycomb methane-hydrate structure at low temperatures; the chemical processes passed slowly through all successive chemical steps in the direction that is determined by a gradual decrease in the Gibbs free energy of the reacting systems. The hypothesis formulation method is based on the thermodynamic directedness of natural movement and consists of an attempt to mentally backtrack on the progression of nature and thus reveal principal milestones along its route. The changes in Gibbs free energy are estimated for different steps of the living-matter origination process; special attention is paid to the processes of proto-cell formation. Just the occurrence of the gas-hydrate periodic honeycomb matrix, filled with LMSEs almost completely in its final state, accounts for the size limitation in the DNA functional groups and the nonrandom location of N-bases in the DNA chains. The slowness of the low-temperature chemical transformations and their “thermodynamic front” guide the gross process of living matter origination and its successive steps. It is shown that the hypothesis is thermodynamically justified and testable and that many observed natural phenomena count in its favor.

  8. The Bergschrund Hypothesis Revisited

    Science.gov (United States)

    Sanders, J. W.; Cuffey, K. M.; MacGregor, K. R.

    2009-12-01

    After Willard Johnson descended into the Lyell Glacier bergschrund nearly 140 years ago, he proposed that the presence of the bergschrund modulated daily air temperature fluctuations and enhanced freeze-thaw processes. He posited that glaciers, through their ability to birth bergschrunds, are thus able to induce rapid cirque headwall retreat. In subsequent years, many researchers challenged the bergschrund hypothesis on grounds that freeze-thaw events did not occur at depth in bergschrunds. We propose a modified version of Johnson's original hypothesis: that bergschrunds maintain subfreezing temperatures at values that encourage rock fracture via ice lensing because they act as a cold air trap in areas that would otherwise be held near zero by temperate glacial ice. In support of this claim we investigated three sections of the bergschrund at the West Washmawapta Glacier, British Columbia, Canada, which sits in an east-facing cirque. During our bergschrund reconnaissance we installed temperature sensors at multiple elevations, light sensors at depth in 2 of the 3 locations, and painted two 1 m2 sections of the headwall. We first emphasize bergschrunds are not wanting for ice: verglas covers significant fractions of the headwall and icicles dangle from the base of bödens or overhanging rocks. If temperature, rather than water availability, is the limiting factor governing ice-lensing rates, our temperature records demonstrate that the bergschrund provides a suitable environment for considerable rock fracture. At the three sites (north, west, and south walls), the average temperature at depth from 9/3/2006 to 8/6/2007 was -3.6, -3.6, and -2.0 °C, respectively. During spring, when we observed vast amounts of snow melt trickle into the bergschrund, temperatures averaged -3.7, -3.8, and -2.2 °C, respectively. Winter temperatures are even lower: -8.5, -7.3, and -2.4 °C, respectively. Values during the following year were similar. During the fall, diurnal

  9. Hypothesis analysis methods, hypothesis analysis devices, and articles of manufacture

    Science.gov (United States)

    Sanfilippo, Antonio P [Richland, WA; Cowell, Andrew J [Kennewick, WA; Gregory, Michelle L [Richland, WA; Baddeley, Robert L [Richland, WA; Paulson, Patrick R [Pasco, WA; Tratz, Stephen C [Richland, WA; Hohimer, Ryan E [West Richland, WA

    2012-03-20

    Hypothesis analysis methods, hypothesis analysis devices, and articles of manufacture are described according to some aspects. In one aspect, a hypothesis analysis method includes providing a hypothesis, providing an indicator which at least one of supports and refutes the hypothesis, using the indicator, associating evidence with the hypothesis, weighting the association of the evidence with the hypothesis, and using the weighting, providing information regarding the accuracy of the hypothesis.

  10. A Molecular–Structure Hypothesis

    Directory of Open Access Journals (Sweden)

    Jan C. A. Boeyens

    2010-11-01

    The self-similar symmetry that occurs between atomic nuclei, biological growth structures, the solar system, globular clusters, and spiral galaxies suggests that a similar pattern should characterize atomic and molecular structures. This possibility is explored in terms of the current molecular-structure hypothesis and its extension into four-dimensional space-time. It is concluded that a quantum molecule only has structure in four dimensions and that classical (Newtonian) structure, which occurs in three dimensions, cannot be simulated by quantum-chemical computation.

  11. On the Keyhole Hypothesis

    DEFF Research Database (Denmark)

    Mikkelsen, Kaare B.; Kidmose, Preben; Hansen, Lars Kai

    2017-01-01

    We propose and test the keyhole hypothesis that measurements from low-dimensional EEG, such as ear-EEG, reflect a broadly distributed set of neural processes. We formulate the keyhole hypothesis in information-theoretical terms. The experimental investigation is based on legacy data consisting of 10 ... simultaneously recorded scalp EEG. A cross-validation procedure was employed to ensure unbiased estimates. We present several pieces of evidence in support of the keyhole hypothesis: there is a high mutual information between data acquired at scalp electrodes and through the ear-EEG "keyhole," furthermore we......

  12. Sequential detection of influenza epidemics by the Kolmogorov-Smirnov test

    Directory of Open Access Journals (Sweden)

    Closas Pau

    2012-10-01

    Full Text Available Abstract Background Influenza is a well known and common human respiratory infection, causing significant morbidity and mortality every year. Despite Influenza variability, fast and reliable outbreak detection is required for health resource planning. Clinical health records, as published by the Diagnosticat database in Catalonia, host useful data for probabilistic detection of influenza outbreaks. Methods This paper proposes a statistical method to detect influenza epidemic activity. Non-epidemic incidence rates are modeled against the exponential distribution, and the maximum likelihood estimate for the decaying factor λ is calculated. The sequential detection algorithm updates the parameter as new data become available. Binary epidemic detection of weekly incidence rates is assessed by a Kolmogorov-Smirnov test on the absolute difference between the empirical cumulative distribution function and that of the estimated exponential distribution, with significance level 0 ≤ α ≤ 1. Results The main advantage with respect to other approaches is the adoption of a statistically meaningful test, which provides an indicator of epidemic activity with an associated probability. The detection algorithm was initiated with parameter λ0 = 3.8617 estimated from the training sequence (corresponding to non-epidemic incidence rates of the 2008-2009 influenza season) and sequentially updated. The Kolmogorov-Smirnov test detected the following weeks as epidemic for each influenza season: weeks 50−10 (2008-2009 season), weeks 38−50 (2009-2010 season), weeks 50−9 (2010-2011 season) and weeks 3−12 for the current 2011-2012 season. Conclusions Real medical data was used to assess the validity of the approach, as well as to construct a realistic statistical model of weekly influenza incidence rates in non-epidemic periods. For the tested data, the results confirmed the ability of the algorithm to detect the start and the end of epidemic periods. In general, the proposed test could
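    The detection scheme in this record, an exponential fit to non-epidemic rates followed by a Kolmogorov-Smirnov goodness-of-fit decision, can be sketched as follows; the function names and the synthetic data are illustrative, not taken from the paper:

```python
import numpy as np
from scipy import stats

def fit_rate(rates):
    """MLE of the exponential rate (the "decaying factor") from non-epidemic rates."""
    return 1.0 / np.mean(rates)

def is_epidemic_week(window, lam, alpha=0.05):
    """Flag a window as epidemic if a K-S test rejects the fitted exponential."""
    statistic, pvalue = stats.kstest(window, "expon", args=(0, 1.0 / lam))
    return pvalue < alpha

rng = np.random.default_rng(0)
baseline = rng.exponential(scale=1 / 3.86, size=52)    # synthetic non-epidemic season
lam = fit_rate(baseline)                               # plays the role of lambda_0
surge = rng.exponential(scale=1 / 3.86, size=8) + 2.0  # shifted rates mimic an outbreak
print(is_epidemic_week(surge, lam))  # prints True
```

    A window whose empirical distribution departs from the fitted exponential (small K-S p-value) is flagged as epidemic; the rate estimate can be refreshed as new non-epidemic data arrive, mirroring the sequential update in the paper.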

  13. Validity of Linder Hypothesis in Bric Countries

    Directory of Open Access Journals (Sweden)

    Rana Atabay

    2016-03-01

    Full Text Available In this study, the theory of similarity in preferences (the Linder hypothesis) is introduced, and trade among the BRIC countries is examined to determine whether it is consistent with this hypothesis. Using data for the period 1996–2010, the study applies panel data analysis in order to provide evidence regarding the empirical validity of the Linder hypothesis for BRIC countries' international trade. Empirical findings show that the trade between BRIC countries supports the Linder hypothesis.

  14. On the Link Between Kolmogorov Microscales and Friction in Wall-Bounded Flow of Viscoplastic Fluids

    Science.gov (United States)

    Ramos, Fabio; Anbarlooei, Hamid; Cruz, Daniel; Silva Freire, Atila; Santos, Cecilia M.

    2017-11-01

    Most discussions in the literature on the friction coefficient of turbulent flows of fluids with complex rheology are empirical. As a rule, theoretical frameworks are not available even for some relatively simple constitutive models. In this work, we present a new family of formulas for the evaluation of the friction coefficient of turbulent flows of a large family of viscoplastic fluids. The developments combine a unified analysis for the description of the Kolmogorov microscales with the phenomenological turbulence model of Gioia and Chakraborty. The resulting Blasius-type friction equation has only Blasius' constant as a parameter, and tests against experimental data show excellent agreement over a significant range of Hedstrom and Reynolds numbers. The limits of the proposed model are also discussed. We also comment on the role of the new formula as a possible benchmark test for the convergence of DNS simulations of viscoplastic flows. The friction formula also provides limits for the Maximum Drag Reduction (MDR) for viscoplastic flows, which resemble the MDR asymptote for viscoelastic flows.

  15. Modification of Kolmogorov-Smirnov test for DNA content data analysis through distribution alignment.

    Science.gov (United States)

    Huang, Shuguang; Yeo, Adeline A; Li, Shuyu Dan

    2007-10-01

    The Kolmogorov-Smirnov (K-S) test is a statistical method often used for comparing two distributions. In high-throughput screening (HTS) studies, such distributions usually arise from the phenotypes of independent cell populations. However, the K-S test has been criticized for being overly sensitive in applications, often detecting statistically significant differences that are not biologically meaningful. One major reason is that systematic drift commonly exists among the distributions in HTS studies, due to factors such as instrument variation, plate edge effects, accidental differences in sample handling, etc. In particular, in high-content cellular imaging experiments, the location shift can be dramatic since some compounds are themselves fluorescent. This oversensitivity of the K-S test is particularly pronounced in cellular assays, where the sample sizes are very large (usually several thousand). In this paper, a modified K-S test is proposed to deal with the nonspecific location-shift problem in HTS studies. Specifically, we propose that the distributions be "normalized" by density curve alignment before the K-S test is conducted. In applications to simulated data and real experimental data, the results show that the proposed method has improved specificity.
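    A minimal sketch of the alignment idea, using a median shift as a stand-in for the paper's density-curve alignment (the helper name `aligned_ks` is ours):

```python
import numpy as np
from scipy import stats

def aligned_ks(x, y):
    """Two-sample K-S after removing the nonspecific location shift.

    The paper aligns estimated density curves; subtracting each sample's
    median is used here as a simple surrogate for that alignment step.
    """
    return stats.ks_2samp(x - np.median(x), y - np.median(y))

rng = np.random.default_rng(1)
control = rng.normal(0.0, 1.0, 5000)
treated = rng.normal(0.8, 1.0, 5000)  # same shape, systematic drift only
print(stats.ks_2samp(control, treated).pvalue)  # raw test: the drift dominates
print(aligned_ks(control, treated).pvalue)      # aligned test: drift removed
```

    With several thousand cells per population, the raw K-S test rejects on the drift alone, while the aligned test only reacts to genuine shape differences.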

  16. A Kolmogorov-Smirnov Based Test for Comparing the Predictive Accuracy of Two Sets of Forecasts

    Directory of Open Access Journals (Sweden)

    Hossein Hassani

    2015-08-01

    Full Text Available This paper introduces a complementary statistical test for distinguishing between the predictive accuracy of two sets of forecasts. We propose a non-parametric test founded upon the principles of the Kolmogorov-Smirnov (KS) test, referred to as the KS Predictive Accuracy (KSPA) test. The KSPA test serves two distinct purposes. First, it determines whether there exists a statistically significant difference between the distributions of forecast errors; second, it exploits the principles of stochastic dominance to determine whether the forecasts with the lower error also report a stochastically smaller error than forecasts from a competing model, thereby distinguishing between the predictive accuracy of forecasts. We perform a simulation study for the size and power of the proposed test and report the results for different noise distributions, sample sizes and forecasting horizons. The simulation results indicate that the KSPA test is correctly sized and robust in the face of varying forecasting horizons and sample sizes, with significant accuracy gains reported especially in the case of small sample sizes. Real world applications are also considered to illustrate the applicability of the proposed KSPA test in practice.
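    The two-stage logic can be sketched with SciPy's two-sample K-S tests; `kspa` is our name for the sketch, and the one-sided call relies on SciPy's convention that `alternative="greater"` tests whether the first sample's CDF lies above the second's (i.e. its errors are stochastically smaller):

```python
import numpy as np
from scipy import stats

def kspa(err_a, err_b, alpha=0.05):
    """Two-stage KSPA-style check on absolute forecast errors (name is ours)."""
    ea, eb = np.abs(np.asarray(err_a)), np.abs(np.asarray(err_b))
    # Stage 1: do the error distributions differ at all?
    p_diff = stats.ks_2samp(ea, eb).pvalue
    # Stage 2: does A's error CDF lie above B's, i.e. are A's errors
    # stochastically smaller? (SciPy: alternative="greater" tests F > G.)
    p_dom = stats.ks_2samp(ea, eb, alternative="greater").pvalue
    return p_diff < alpha, p_dom < alpha

rng = np.random.default_rng(2)
errs_a = rng.normal(0, 0.5, 400)  # tighter forecast errors (model A)
errs_b = rng.normal(0, 1.5, 400)  # wider errors from a competing model B
print(kspa(errs_a, errs_b))       # → (True, True)
```

    Rejecting in both stages supports the conclusion that model A is the more accurate forecaster, not merely a different one.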

  17. Calculating Kolmogorov Complexity from the Output Frequency Distributions of Small Turing Machines

    Science.gov (United States)

    Delahaye, Jean-Paul; Gauvrit, Nicolas

    2014-01-01

    Drawing on various notions from theoretical computer science, we present a novel numerical approach, motivated by the notion of algorithmic probability, to the problem of approximating the Kolmogorov-Chaitin complexity of short strings. The method is an alternative to the traditional lossless compression algorithms, which it may complement, the two being serviceable for different string lengths. We provide a thorough analysis for all Σ_{n=1}^{11} 2^n binary strings of length n < 12 and for most strings of length 12 ≤ n ≤ 16 by running all ~2.5 × 10^13 Turing machines with 5 states and 2 symbols (8 × 22^9 with reduction techniques), using the most standard formalism of Turing machines, used for example in the Busy Beaver problem. We address the question of stability and error estimation, the sensitivity of the continued application of the method for wider coverage and better accuracy, and provide statistical evidence suggesting robustness. As with compression algorithms, this work promises to deliver a range of applications, and to provide insight into the question of complexity calculation of finite (and short) strings. Additional material can be found at the Algorithmic Nature Group website at http://www.algorithmicnature.org. An Online Algorithmic Complexity Calculator implementing this technique and making the data available to the research community is accessible at http://www.complexitycalculator.com. PMID:24809449

  18. Calculating Kolmogorov complexity from the output frequency distributions of small Turing machines.

    Directory of Open Access Journals (Sweden)

    Fernando Soler-Toscano

    Full Text Available Drawing on various notions from theoretical computer science, we present a novel numerical approach, motivated by the notion of algorithmic probability, to the problem of approximating the Kolmogorov-Chaitin complexity of short strings. The method is an alternative to the traditional lossless compression algorithms, which it may complement, the two being serviceable for different string lengths. We provide a thorough analysis for all Σ_{n=1}^{11} 2^n binary strings of length n < 12 and for most strings of length 12 ≤ n ≤ 16 by running all ~2.5 × 10^13 Turing machines with 5 states and 2 symbols (8 × 22^9 with reduction techniques), using the most standard formalism of Turing machines, used for example in the Busy Beaver problem. We address the question of stability and error estimation, the sensitivity of the continued application of the method for wider coverage and better accuracy, and provide statistical evidence suggesting robustness. As with compression algorithms, this work promises to deliver a range of applications, and to provide insight into the question of complexity calculation of finite (and short) strings. Additional material can be found at the Algorithmic Nature Group website at http://www.algorithmicnature.org. An Online Algorithmic Complexity Calculator implementing this technique and making the data available to the research community is accessible at http://www.complexitycalculator.com.
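    For contrast with the coding-theorem method described in these two records, the traditional compression-based approximation is easy to sketch, and it shows exactly why short strings are problematic: the compressor's fixed overhead swamps any regularity. The helper `pseudo_random` is an illustrative incompressible-byte generator, not part of the paper:

```python
import hashlib
import zlib

def compressed_size(s: bytes) -> int:
    """Crude upper bound on Kolmogorov complexity: DEFLATE-compressed length."""
    return len(zlib.compress(s, 9))

def pseudo_random(n: int) -> bytes:
    """Deterministic, effectively incompressible bytes (illustrative helper)."""
    out, block = b"", b"seed"
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

short = b"0101"
repetitive = b"01" * 500
noise = pseudo_random(1000)

# For a 4-character string the compressed form is *longer* than the string:
# header overhead hides all structure, which is the regime the paper targets.
print(compressed_size(short), len(short))
# For long strings, compression does rank regularity correctly.
print(compressed_size(repetitive) < compressed_size(noise))  # → True
```

    This is the gap the output-frequency (algorithmic probability) method is designed to fill for strings of length up to 16.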

  19. Linear growth of the entanglement entropy and the Kolmogorov-Sinai rate

    Science.gov (United States)

    Bianchi, Eugenio; Hackl, Lucas; Yokomizo, Nelson

    2018-03-01

    The rate of entropy production in a classical dynamical system is characterized by the Kolmogorov-Sinai entropy rate h_KS, given by the sum of all positive Lyapunov exponents of the system. We prove a quantum version of this result valid for bosonic systems with an unstable quadratic Hamiltonian. The derivation takes into account the case of time-dependent Hamiltonians with Floquet instabilities. We show that the entanglement entropy S_A of a Gaussian state grows linearly for large times in unstable systems, with a rate Λ_A ≤ h_KS determined by the Lyapunov exponents and the choice of the subsystem A. We apply our results to the analysis of entanglement production in unstable quadratic potentials and due to periodic quantum quenches in many-body quantum systems. Our results are relevant for quantum field theory, for which we present three applications: a scalar field in a symmetry-breaking potential, parametric resonance during post-inflationary reheating and cosmological perturbations during inflation. Finally, we conjecture that the same rate Λ_A appears in the entanglement growth of chaotic quantum systems prepared in a semiclassical state.
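    The quantitative statements in this record can be summarized as (our transcription of the abstract's claims, not a quotation from the paper):

```latex
h_{\mathrm{KS}} = \sum_{\lambda_i > 0} \lambda_i, \qquad
S_A(t) \sim \Lambda_A\, t \;\; (t \to \infty), \qquad
\Lambda_A \le h_{\mathrm{KS}},
```

    i.e. the asymptotic growth rate of the entanglement entropy is bounded by the classical Kolmogorov-Sinai rate, with the bound depending on the choice of subsystem A.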

  20. Gender similarities and differences.

    Science.gov (United States)

    Hyde, Janet Shibley

    2014-01-01

    Whether men and women are fundamentally different or similar has been debated for more than a century. This review summarizes major theories designed to explain gender differences: evolutionary theories, cognitive social learning theory, sociocultural theory, and expectancy-value theory. The gender similarities hypothesis raises the possibility of theorizing gender similarities. Statistical methods for the analysis of gender differences and similarities are reviewed, including effect sizes, meta-analysis, taxometric analysis, and equivalence testing. Then, relying mainly on evidence from meta-analyses, gender differences are reviewed in cognitive performance (e.g., math performance), personality and social behaviors (e.g., temperament, emotions, aggression, and leadership), and psychological well-being. The evidence on gender differences in variance is summarized. The final sections explore applications of intersectionality and directions for future research.

  1. Questioning the social intelligence hypothesis.

    Science.gov (United States)

    Holekamp, Kay E

    2007-02-01

    The social intelligence hypothesis posits that complex cognition and enlarged "executive brains" evolved in response to challenges that are associated with social complexity. This hypothesis has been well supported, but some recent data are inconsistent with its predictions. It is becoming increasingly clear that multiple selective agents, and non-selective constraints, must have acted to shape cognitive abilities in humans and other animals. The task now is to develop a larger theoretical framework that takes into account both inter-specific differences and similarities in cognition. This new framework should facilitate consideration of how selection pressures that are associated with sociality interact with those that are imposed by non-social forms of environmental complexity, and how both types of functional demands interact with phylogenetic and developmental constraints.

  2. A revisited Johnson-Mehl-Avrami-Kolmogorov model and the evolution of grain-size distributions in steel

    OpenAIRE

    Hömberg, D.; Patacchini, F. S.; Sakamoto, K.; Zimmer, J.

    2016-01-01

    The classical Johnson-Mehl-Avrami-Kolmogorov approach for nucleation and growth models of diffusive phase transitions is revisited and applied to model the growth of ferrite in multiphase steels. For the prediction of mechanical properties of such steels, a deeper knowledge of the grain structure is essential. To this end, a Fokker-Planck evolution law for the volume distribution of ferrite grains is developed and shown to exhibit a log-normally distributed solution. Numerical parameter studi...

  3. Modification of the Kolmogorov-Johnson-Mehl-Avrami rate equation for non-isothermal experiments and its analytical solution

    OpenAIRE

    Farjas, Jordi; Roura, Pere

    2008-01-01

    Avrami's model describes the kinetics of phase transformation under the assumption of spatially random nucleation. In this paper we provide a quasi-exact analytical solution of Avrami's model when the transformation takes place under continuous heating. This solution has been obtained with different activation energies for both nucleation and growth rates. The relation obtained is also a solution of the so-called Kolmogorov-Johnson-Mehl-Avrami transformation rate equation. The corresponding n...

  4. The Qualitative Expectations Hypothesis

    DEFF Research Database (Denmark)

    Frydman, Roman; Johansen, Søren; Rahbek, Anders

    2017-01-01

    We introduce the Qualitative Expectations Hypothesis (QEH) as a new approach to modeling macroeconomic and financial outcomes. Building on John Muth's seminal insight underpinning the Rational Expectations Hypothesis (REH), QEH represents the market's forecasts to be consistent with the predictions...... of an economist's model. However, by assuming that outcomes lie within stochastic intervals, QEH, unlike REH, recognizes the ambiguity faced by an economist and market participants alike. Moreover, QEH leaves the model open to ambiguity by not specifying a mechanism determining specific values that outcomes take...

  5. The Qualitative Expectations Hypothesis

    DEFF Research Database (Denmark)

    Frydman, Roman; Johansen, Søren; Rahbek, Anders

    We introduce the Qualitative Expectations Hypothesis (QEH) as a new approach to modeling macroeconomic and financial outcomes. Building on John Muth's seminal insight underpinning the Rational Expectations Hypothesis (REH), QEH represents the market's forecasts to be consistent with the predictions...... of an economist's model. However, by assuming that outcomes lie within stochastic intervals, QEH, unlike REH, recognizes the ambiguity faced by an economist and market participants alike. Moreover, QEH leaves the model open to ambiguity by not specifying a mechanism determining specific values that outcomes take...

  6. Revisiting the Dutch hypothesis

    NARCIS (Netherlands)

    Postma, Dirkje S.; Weiss, Scott T.; van den Berge, Maarten; Kerstjens, Huib A. M.; Koppelman, Gerard H.

    The Dutch hypothesis was first articulated in 1961, when many novel and advanced scientific techniques were not available, such as genomics techniques for pinpointing genes, gene expression, lipid and protein profiles, and the microbiome. In addition, computed tomographic scans and advanced analysis

  7. The Lehman Sisters Hypothesis

    NARCIS (Netherlands)

    I.P. van Staveren (Irene)

    2014-01-01

    markdownabstract__Abstract__ This article explores the Lehman Sisters Hypothesis. It reviews empirical literature about gender differences in behavioral, experimental, and neuro-economics as well as in other fields of behavioral research. It discusses gender differences along three dimensions of

  8. Bayesian Hypothesis Testing

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, Stephen A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sigeti, David E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-11-15

    These are a set of slides about Bayesian hypothesis testing, where many hypotheses are tested. The conclusions are the following: The value of the Bayes factor obtained when using the median of the posterior marginal is almost the minimum value of the Bayes factor. The value of τ² which minimizes the Bayes factor is a reasonable choice for this parameter. This allows a likelihood ratio to be computed which is the least favorable to H0.

  9. The Drift Burst Hypothesis

    OpenAIRE

    Christensen, Kim; Oomen, Roel; Renò, Roberto

    2016-01-01

    The Drift Burst Hypothesis postulates the existence of short-lived locally explosive trends in the price paths of financial assets. The recent US equity and Treasury flash crashes can be viewed as two high profile manifestations of such dynamics, but we argue that drift bursts of varying magnitude are an expected and regular occurrence in financial markets that can arise through established mechanisms such as feedback trading. At a theoretical level, we show how to build drift bursts into the...

  10. Hypothesis in research

    Directory of Open Access Journals (Sweden)

    Eudaldo Enrique Espinoza Freire

    2018-01-01

    Full Text Available This work aims to provide material covering the fundamental contents that enable the university professor to formulate hypotheses for the development of an investigation, taking into account the problem to be solved. For its elaboration, information was sought in primary documents, such as degree theses and reports of research results, selected on the basis of their relevance to the analyzed subject, currency, and reliability, and in secondary documents, such as scientific articles published in journals of recognized prestige, selected with the same criteria. The work presents an updated conceptualization of the hypothesis, its characterization, and an analysis of the structure of the hypothesis, deepening the determination of the variables. The involvement of the university professor in the teaching-research process currently faces some difficulties, manifested, among other aspects, in an unstable balance between teaching and research, which leads to a separation between them.

  11. Testing Self-Similarity Through Lamperti Transformations

    KAUST Repository

    Lee, Myoungji; Genton, Marc G.; Jun, Mikyoung

    2016-01-01

    extensively, while statistical tests for self-similarity are scarce and limited to processes indexed in one dimension. This paper proposes a statistical hypothesis test procedure for self-similarity of a stochastic process indexed in one dimension and multi

  12. Antiaging therapy: a prospective hypothesis

    Directory of Open Access Journals (Sweden)

    Shahidi Bonjar MR

    2015-01-01

    Full Text Available Mohammad Rashid Shahidi Bonjar,1 Leyla Shahidi Bonjar2 1School of Dentistry, Kerman University of Medical Sciences, Kerman, Iran; 2Department of Pharmacology, College of Pharmacy, Kerman University of Medical Sciences, Kerman, Iran Abstract: This hypothesis proposes a new prospective approach to slow the aging process in older humans. The hypothesis could lead to developing new treatments for age-related illnesses and help humans to live longer. This hypothesis has no previous documentation in scientific media and has no protocol. Scientists have presented evidence that systemic aging is influenced by peculiar molecules in the blood. Researchers at Albert Einstein College of Medicine, New York, and Harvard University in Cambridge discovered elevated titers of aging-related molecules (ARMs) in blood, which trigger a cascade of aging processes in mice; they also indicated that the process can be reduced or even reversed. By inhibiting the production of ARMs, they could reduce age-related cognitive and physical declines. The present hypothesis offers a new approach to translate these findings into medical treatment: extracorporeal adjustment of ARMs would lead to slower rates of aging. A prospective "antiaging blood filtration column" (AABFC) is a nanotechnological device that would fulfill the central role in this approach. An AABFC would set a near-youth homeostatic titer of ARMs in the blood. In this regard, the AABFC immobilizes ARMs from the blood while blood passes through the column. The AABFC harbors antibodies against ARMs. ARM antibodies would be conjugated irreversibly to ARMs on contact surfaces of the reaction platforms inside the AABFC until near-youth homeostasis is attained. The treatment is performed with the aid of a blood-circulating pump. Similar to a renal dialysis machine, blood would circulate from the body to the AABFC and from there back to the body in a closed circuit until ARMs were sufficiently depleted from the blood. The

  13. [Dilemma of null hypothesis in ecological hypothesis's experiment test.

    Science.gov (United States)

    Li, Ji

    2016-06-01

    Experimental testing is one of the major test methods for ecological hypotheses, though there are many arguments about the null hypothesis. Quinn and Dunham (1983) analyzed the hypothesis deduction model from Platt (1964) and thus stated that there is no null hypothesis in ecology that can be strictly tested by experiments. Fisher's falsificationism and Neyman-Pearson (N-P)'s non-decisivity prevent the statistical null hypothesis from being strictly tested. Moreover, since the null hypothesis H0 (α=1, β=0) and the alternative hypothesis H1' (α'=1, β'=0) in ecological processes are different from those of classical physics, the ecological null hypothesis cannot be strictly tested experimentally either. These dilemmas of the null hypothesis could be relieved via the reduction of the P value, careful selection of the null hypothesis, non-centralization of the non-null hypothesis, and two-tailed testing. However, statistical null hypothesis significance testing (NHST) should not be equated with the causal logical test of an ecological hypothesis. Hence, findings and conclusions about methodological studies and experimental tests based on NHST are not always logically reliable.

  14. The Kolmogorov-Smirnov test for three redshift distributions of long gamma-ray bursts in the Swift Era

    International Nuclear Information System (INIS)

    Dong Yunming; Lu Tan

    2009-01-01

    We investigate redshift distributions of three long burst samples, with the first sample containing 131 long bursts with observed redshifts, the second including 220 long bursts with pseudo-redshifts calculated by the variability-luminosity relation, and the third including 1194 long bursts with pseudo-redshifts calculated by the lag-luminosity relation. In the redshift range 0-1 the Kolmogorov-Smirnov probability of the observed redshift distribution and that of the variability-luminosity relation is large. In the redshift ranges 1-2, 2-3, 3-6.3 and 0-37, the Kolmogorov-Smirnov probabilities of the redshift distribution from the lag-luminosity relation and the observed redshift distribution are also large. For the GRBs which appear in both pseudo-redshift burst samples, the KS probability of the pseudo-redshift distribution from the lag-luminosity relation and the observed redshift distribution is 0.447, which is very large. Based on these results, some conclusions are drawn: i) the V-L_iso relation might be more believable than the τ-L_iso relation in low redshift ranges, and the τ-L_iso relation might be more realistic than the V-L_iso relation in high redshift ranges; ii) if we do not consider the redshift ranges, the τ-L_iso relation might be more physical and intrinsic than the V-L_iso relation. (research papers)

  15. Algorithm for computing significance levels using the Kolmogorov-Smirnov statistic and valid for both large and small samples

    Energy Technology Data Exchange (ETDEWEB)

    Kurtz, S.E.; Fields, D.E.

    1983-10-01

    The KSTEST code presented here is designed to perform the Kolmogorov-Smirnov one-sample test. The code may be used as a stand-alone program or the principal subroutines may be excerpted and used to service other programs. The Kolmogorov-Smirnov one-sample test is a nonparametric goodness-of-fit test. A number of codes to perform this test are in existence, but they suffer from the inability to provide meaningful results in the case of small sample sizes (number of values less than or equal to 80). The KSTEST code overcomes this inadequacy by using two distinct algorithms. If the sample size is greater than 80, an asymptotic series developed by Smirnov is evaluated. If the sample size is 80 or less, a table of values generated by Birnbaum is referenced. Valid results can be obtained from KSTEST when the sample contains from 3 to 300 data points. The program was developed on a Digital Equipment Corporation PDP-10 computer using the FORTRAN-10 language. The code size is approximately 450 card images and the typical CPU execution time is 0.19 s.
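    KSTEST's two-regime design (Birnbaum's exact small-sample values for n ≤ 80, Smirnov's asymptotic series otherwise) survives in modern libraries, which make the same distinction internally; a minimal sketch of the one-sample test with SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
small = rng.normal(size=20)   # small-sample regime (n <= 80, table-based in KSTEST)
large = rng.normal(size=500)  # large-sample regime (Smirnov asymptotic series)

# SciPy's one-sample K-S test likewise switches between an exact
# small-sample distribution and the asymptotic approximation internally.
for sample in (small, large):
    res = stats.kstest(sample, "norm")
    print(f"n={sample.size:4d}  D={res.statistic:.4f}  p={res.pvalue:.4f}")
```

    Both regimes return a statistic D in [0, 1] and a p-value for the goodness-of-fit decision, matching the range of sample sizes (3 to 300) that KSTEST was designed to handle.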

  16. Channel correlation of free space optical communication systems with receiver diversity in non-Kolmogorov atmospheric turbulence

    Science.gov (United States)

    Ma, Jing; Fu, Yulong; Tan, Liying; Yu, Siyuan; Xie, Xiaolong

    2018-05-01

    Spatial diversity is an effective technique to mitigate turbulence fading and has been widely utilized in free space optical (FSO) communication systems. The received signals, however, will suffer from channel correlation due to insufficient spacing between component antennas. In this paper, new expressions for the channel correlation coefficient and specifically its components (the large- and small-scale channel correlation coefficients) for a plane wave with aperture effects are derived for a horizontal link in moderate-to-strong turbulence, using a non-Kolmogorov spectrum that has a generalized power law in the range of 3-4 instead of the fixed classical Kolmogorov power law of 11/3. The influence of power law variations on the channel correlation coefficient and its components is then analysed. The numerical results indicate that various values of the power law lead to varying effects on the channel correlation coefficient and its components. This work will help with further investigation of the fading correlation in spatial diversity systems.

  17. Effect of current sheets on the solar wind magnetic field power spectrum from the Ulysses observation: from Kraichnan to Kolmogorov scaling.

    Science.gov (United States)

    Li, G; Miao, B; Hu, Q; Qin, G

    2011-03-25

    The MHD turbulence theory developed by Iroshnikov and Kraichnan predicts a k^(-1.5) power spectrum. Solar wind observations, however, often show a k^(-5/3) Kolmogorov scaling. Based on 3 years' worth of Ulysses magnetic field data in which over 28,000 current sheets are identified, we propose that the current sheet is the cause of the Kolmogorov scaling. We show that for the 5 longest current-sheet-free periods the magnetic field power spectra are all described by the Iroshnikov-Kraichnan scaling. In comparison, for the 5 periods that contain the largest number of current sheets, the power spectra all exhibit Kolmogorov scaling. The implication of our results is discussed.
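    Distinguishing the k^(-3/2) Iroshnikov-Kraichnan scaling from the k^(-5/3) Kolmogorov scaling amounts to estimating the log-log slope of the power spectrum over an inertial band. A self-contained sketch on synthetic data (the function name and band limits are ours):

```python
import numpy as np

def spectral_slope(signal, kmin=5, kmax=200):
    """Least-squares slope of the log-log power spectrum over an index band."""
    psd = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size)
    band = slice(kmin, kmax)
    slope, _ = np.polyfit(np.log(freqs[band]), np.log(psd[band]), 1)
    return slope

# Shape white noise so its power spectrum follows k^(-5/3), then recover the slope.
rng = np.random.default_rng(4)
n = 2 ** 14
spectrum = np.fft.rfft(rng.normal(size=n))
k = np.fft.rfftfreq(n)
spectrum *= np.where(k > 0, k, np.inf) ** (-5 / 6)  # amplitude ~ k^(-5/6) => PSD ~ k^(-5/3)
signal = np.fft.irfft(spectrum)
print(spectral_slope(signal))  # roughly -5/3, within sampling scatter
```

    On real spacecraft data one would average periodograms over sub-intervals before fitting, since a single-realization periodogram has large scatter; the 5/3 vs 3/2 distinction is only about a 0.17 difference in slope.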

  18. Further analysis of scintillation index for a laser beam propagating through moderate-to-strong non-Kolmogorov turbulence based on generalized effective atmospheric spectral model

    Science.gov (United States)

    Ma, Jing; Fu, Yu-Long; Yu, Si-Yuan; Xie, Xiao-Long; Tan, Li-Ying

    2018-03-01

    A new expression of the scintillation index (SI) for a Gaussian-beam wave propagating through moderate-to-strong non-Kolmogorov turbulence is derived, using a generalized effective atmospheric spectrum and the extended Rytov approximation theory. Finite inner and outer scale parameters and high wave number “bump” are considered in the spectrum with a generalized spectral power law in the range of 3–4, instead of the fixed classical Kolmogorov power law of 11/3. The obtained SI expression is then used to analyze the effects of the spectral power law and the inner scale and outer scale on SI under various non-Kolmogorov fluctuation conditions. These results will be useful in future investigations of optical wave propagation through atmospheric turbulence.

  19. Scintillation index and performance analysis of wireless optical links over non-Kolmogorov weak turbulence based on generalized atmospheric spectral model.

    Science.gov (United States)

    Cang, Ji; Liu, Xu

    2011-09-26

    Based on the generalized spectral model for non-Kolmogorov atmospheric turbulence, analytic expressions for the scintillation index (SI) are derived for plane and spherical optical waves and a partially coherent Gaussian beam propagating horizontally through non-Kolmogorov turbulence in the weak fluctuation regime. The new expressions relate the SI to the finite turbulence inner and outer scales, the spatial coherence of the source and the spectral power law, and are then used to analyze the effects of atmospheric conditions and link length on the performance of wireless optical communication links. © 2011 Optical Society of America
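    The generalized power-law spectrum these records refer to is commonly written in the non-Kolmogorov literature as (not quoted from the papers themselves; inner- and outer-scale factors omitted):

```latex
\Phi_n(\kappa) = A(\alpha)\,\tilde{C}_n^{2}\,\kappa^{-\alpha},
\qquad 3 < \alpha < 4, \qquad
A(\alpha) = \frac{\Gamma(\alpha - 1)}{4\pi^{2}}
\cos\!\left(\frac{\alpha\pi}{2}\right),
```

    where setting α = 11/3 recovers the classical Kolmogorov value A ≈ 0.033, so the Kolmogorov spectrum appears as a special case of the generalized model.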

  20. Electron acceleration by an obliquely propagating electromagnetic wave in the regime of validity of the Fokker-Planck-Kolmogorov approach

    Science.gov (United States)

    Hizanidis, Kyriakos; Vlahos, L.; Polymilis, C.

    1989-01-01

    The relativistic motion of an ensemble of electrons in an intense monochromatic electromagnetic wave propagating obliquely in a uniform external magnetic field is studied. The problem is formulated from the viewpoint of Hamiltonian theory and the Fokker-Planck-Kolmogorov approach analyzed by Hizanidis (1989), leading to a one-dimensional diffusive acceleration along paths of constant zeroth-order generalized Hamiltonian. For values of the wave amplitude and the propagation angle inside the analytically predicted stochastic region, the numerical results suggest that the diffusion process proceeds in stages. In the first stage, the electrons are accelerated to relatively high energies by sampling the first few overlapping resonances one by one. During that stage, the ensemble-averaged square deviations of the variables involved scale quadratically with time. During the second stage, they scale linearly with time. For much longer times, deviation from linear scaling slowly sets in.

  1. A Kolmogorov-type competition model with multiple coexistence states and its applications to plant competition for sunlight

    Science.gov (United States)

    Just, Winfried; Nevai, Andrew L.

    2008-12-01

    It is demonstrated that a Kolmogorov-type competition model featuring species allocation and gain functions can possess multiple coexistence states. Two examples are constructed: one in which the two competing species possess rectangular allocation functions but distinct gain functions, and the other in which one species has a rectangular allocation function, the second species has a bi-rectangular allocation function, and the two species share a common gain function. In both examples, it is shown that the species nullclines may intersect multiple times within the interior of the first quadrant, thus creating both locally stable and unstable equilibrium points. These results have important applications in the study of plant competition for sunlight, in which the allocation functions describe the vertical placement of leaves for two competing species, and the gain functions represent rates of photosynthesis performed by leaves at different heights when shaded by overlying leaves belonging to either species.

  2. The analysis and application of a new hybrid pollutants forecasting model using modified Kolmogorov-Zurbenko filter.

    Science.gov (United States)

    Li, Peizhi; Wang, Yong; Dong, Qingli

    2017-04-01

    Cities in China suffer from severe smog and haze, and a forecasting system with high accuracy is of great importance for foreseeing the concentrations of airborne particles. Compared with chemical transport models, the growing family of artificial intelligence models can capture nonlinearities and interactive relationships, yielding more accurate results. In this paper, the Kolmogorov-Zurbenko (KZ) filter is modified and applied for the first time to construct such a model with an artificial intelligence method. The concentrations of inhalable particles and fine particulate matter in Dalian are used to analyze the filtered components and test the forecasting accuracy. In addition, an extended experiment implements a comprehensive comparison and a stability test using data from three other cities in China. The results confirm the excellent performance of the developed hybrid models, which can be used to better understand the temporal features of pollutants and to achieve better air pollution control and management. Copyright © 2017 Elsevier B.V. All rights reserved.
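
The standard Kolmogorov-Zurbenko filter underlying the hybrid model above is an iterated moving average, KZ(m, k): k passes of a centered m-point moving average. The paper's specific modification is not reproduced here; the following is a minimal sketch of the unmodified filter (function names are illustrative):

```python
import numpy as np

def moving_average(x, m):
    """Centered m-point moving average; near the edges, average only the
    points that exist (the usual KZ convention) rather than zero-padding."""
    h = m // 2
    return np.array([x[max(0, i - h):i + h + 1].mean() for i in range(len(x))])

def kz_filter(x, m, k):
    """Kolmogorov-Zurbenko filter KZ(m, k): k iterations of a centered
    m-point moving average (m odd)."""
    y = np.asarray(x, dtype=float)
    for _ in range(k):
        y = moving_average(y, m)
    return y
```

Iterating the window makes the effective impulse response approach a Gaussian, which gives KZ(m, k) much stronger attenuation of high-frequency noise than a single moving average of the same width.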

  3. Perceptions of ideal and former partner's personality and similarity

    NARCIS (Netherlands)

    Dijkstra, Pieternel; Barelds, Dick P.H.

    2010-01-01

    The present study aimed to test predictions based on both the 'similarity-attraction' hypothesis and the 'attraction-similarity' hypothesis, by studying perceptions of ideal and former partners. Based on the 'similarity-attraction' hypothesis, we expected individuals to desire ideal partners who are

  4. On self-similarity of crack layer

    Science.gov (United States)

    Botsis, J.; Kunin, B.

    1987-01-01

    The crack layer (CL) theory of Chudnovsky (1986), based on principles of thermodynamics of irreversible processes, employs a crucial hypothesis of self-similarity. The self-similarity hypothesis states that the value of the damage density at a point x of the active zone at a time t coincides with that at the corresponding point in the initial (t = 0) configuration of the active zone, the correspondence being given by a time-dependent affine transformation of the space variables. In this paper, the implications of the self-similarity hypothesis for quasi-static CL propagation are investigated using polystyrene as a model material and examining the evolution of the damage distribution along the trailing edge, which is approximated by a straight segment perpendicular to the crack path. The results support the self-similarity hypothesis adopted by the CL theory.

  5. Is the Aluminum Hypothesis Dead?

    Science.gov (United States)

    2014-01-01

    The Aluminum Hypothesis, the idea that aluminum exposure is involved in the etiology of Alzheimer disease, dates back to a 1965 demonstration that aluminum causes neurofibrillary tangles in the brains of rabbits. Initially the focus of intensive research, the Aluminum Hypothesis has gradually been abandoned by most researchers. Yet, despite this current indifference, the Aluminum Hypothesis continues to attract the attention of a small group of scientists and aluminum continues to be viewed with concern by some of the public. This review article discusses reasons that mainstream science has largely abandoned the Aluminum Hypothesis and explores a possible reason for some in the general public continuing to view aluminum with mistrust. PMID:24806729

  6. Memory in astrocytes: a hypothesis

    Directory of Open Access Journals (Sweden)

    Caudle Robert M

    2006-01-01

    Background: Recent work has indicated an increasingly complex role for astrocytes in the central nervous system. Astrocytes are now known to exchange information with neurons at synaptic junctions and to alter the information processing capabilities of the neurons. As an extension of this trend, a hypothesis was proposed that astrocytes function to store information. To explore this idea, the ion channels in biological membranes were compared to models known as cellular automata. These comparisons were made to test the hypothesis that ion channels in the membranes of astrocytes form a dynamic information storage device. Results: Two-dimensional cellular automata were found to behave similarly to ion channels in a membrane when they function at the boundary between order and chaos. The length of time information is stored in this class of cellular automata is exponentially related to the number of units. Therefore the length of time biological ion channels store information was plotted versus the estimated number of ion channels in the tissue. This analysis indicates that there is an exponential relationship between memory and the number of ion channels. Extrapolation of this relationship to the estimated number of ion channels in the astrocytes of a human brain indicates that memory can be stored in this system for an entire life span. Interestingly, this information is not affixed to any physical structure, but is stored as an organization of the activity of the ion channels. Further analysis of two-dimensional cellular automata also demonstrates that these systems have both associative and temporal memory capabilities. Conclusion: It is concluded that astrocytes may serve as a dynamic information sink for neurons. The memory in the astrocytes is stored by organizing the activity of ion channels and is not associated with a physical location such as a synapse. In order for this form of memory to be of significant duration it is necessary

  7. Hypothesis Designs for Three-Hypothesis Test Problems

    OpenAIRE

    Yan Li; Xiaolong Pu

    2010-01-01

    As a helpful guide for applications, the alternative hypotheses of the three-hypothesis test problems are designed under the required error probabilities and average sample number in this paper. The asymptotic formulas and the proposed numerical quadrature formulas are adopted, respectively, to obtain the hypothesis designs and the corresponding sequential test schemes under the Koopman-Darmois distributions. The example of the normal mean test shows that our methods are qu...

  8. Tests of the lunar hypothesis

    Science.gov (United States)

    Taylor, S. R.

    1984-01-01

    The concept that the Moon was fissioned from the Earth after core separation is the most readily testable hypothesis of lunar origin, since direct comparisons of lunar and terrestrial compositions can be made. Differences found in such comparisons introduce so many ad hoc adjustments to the fission hypothesis that it becomes untestable. Further constraints may be obtained from attempting to date the volatile-refractory element fractionation. The combination of chemical and isotopic problems suggests that the fission hypothesis is no longer viable, and separate terrestrial and lunar accretion from a population of fractionated precursor planetesimals provides a more reasonable explanation.

  9. Evaluating the Stage Learning Hypothesis.

    Science.gov (United States)

    Thomas, Hoben

    1980-01-01

    A procedure for evaluating the Genevan stage learning hypothesis is illustrated by analyzing Inhelder, Sinclair, and Bovet's guided learning experiments (in "Learning and the Development of Cognition." Cambridge: Harvard University Press, 1974). (Author/MP)

  10. The Purchasing Power Parity Hypothesis:

    African Journals Online (AJOL)

    2011-10-02

    Oct 2, 2011 ... reject the unit root hypothesis in real exchange rates may simply be due to the shortness ..... Violations of Purchasing Power Parity and Their Implications for Efficient ... Official Intervention in the Foreign Exchange Market:.

  11. A test of the orthographic recoding hypothesis

    Science.gov (United States)

    Gaygen, Daniel E.

    2003-04-01

    The Orthographic Recoding Hypothesis [D. E. Gaygen and P. A. Luce, Percept. Psychophys. 60, 465-483 (1998)] was tested. According to this hypothesis, listeners recognize spoken words heard for the first time by mapping them onto stored representations of the orthographic forms of the words. Listeners have a stable orthographic representation of words, but no phonological representation, when those words have been read frequently but never heard or spoken. Such may be the case for low frequency words such as jargon. Three experiments using visually and auditorily presented nonword stimuli tested this hypothesis. The first two experiments were explicit tests of memory (old-new tests) for words presented visually. In the first experiment, the recognition of auditorily presented nonwords was facilitated when they previously appeared on a visually presented list. The second experiment was similar, but included a concurrent articulation task during a visual word list presentation, thus preventing covert rehearsal of the nonwords. The results were similar to the first experiment. The third experiment was an indirect test of memory (auditory lexical decision task) for visually presented nonwords. Auditorily presented nonwords were identified as nonwords significantly more slowly if they had previously appeared on the visually presented list accompanied by a concurrent articulation task.

  12. A Dopamine Hypothesis of Autism Spectrum Disorder.

    Science.gov (United States)

    Pavăl, Denis

    2017-01-01

    Autism spectrum disorder (ASD) comprises a group of neurodevelopmental disorders characterized by social deficits and stereotyped behaviors. While several theories have emerged, the pathogenesis of ASD remains unknown. Although studies report dopamine signaling abnormalities in autistic patients, a coherent dopamine hypothesis which could link neurobiology to behavior in ASD is currently lacking. In this paper, we present such a hypothesis by proposing that autistic behavior arises from dysfunctions in the midbrain dopaminergic system. We hypothesize that a dysfunction of the mesocorticolimbic circuit leads to social deficits, while a dysfunction of the nigrostriatal circuit leads to stereotyped behaviors. Furthermore, we discuss 2 key predictions of our hypothesis, with emphasis on clinical and therapeutic aspects. First, we argue that dopaminergic dysfunctions in the same circuits should associate with autistic-like behavior in nonautistic subjects. Concerning this, we discuss the case of PANDAS (pediatric autoimmune neuropsychiatric disorder associated with streptococcal infections) which displays behaviors similar to those of ASD, presumed to arise from dopaminergic dysfunctions. Second, we argue that providing dopamine modulators to autistic subjects should lead to a behavioral improvement. Regarding this, we present clinical studies of dopamine antagonists which seem to have improving effects on autistic behavior. Furthermore, we explore the means of testing our hypothesis by using neuroreceptor imaging, which could provide comprehensive evidence for dopamine signaling dysfunctions in autistic subjects. Lastly, we discuss the limitations of our hypothesis. Along these lines, we aim to provide a dopaminergic model of ASD which might lead to a better understanding of the ASD pathogenesis. © 2017 S. Karger AG, Basel.

  13. The atomic hypothesis: physical consequences

    International Nuclear Information System (INIS)

    Rivas, Martin

    2008-01-01

    The hypothesis that matter is made of some ultimate and indivisible objects, together with the restricted relativity principle, establishes a constraint on the kind of variables we are allowed to use for the variational description of elementary particles. We consider that the atomic hypothesis not only states the indivisibility of elementary particles, but also that these ultimate objects, if not annihilated, cannot be modified by any interaction so that all allowed states of an elementary particle are only kinematical modifications of any one of them. Therefore, an elementary particle cannot have excited states. In this way, the kinematical group of spacetime symmetries not only defines the symmetries of the system, but also the variables in terms of which the mathematical description of the elementary particles can be expressed in either the classical or the quantum mechanical description. When considering the interaction of two Dirac particles, the atomic hypothesis restricts the interaction Lagrangian to a kind of minimal coupling interaction

  14. Multiple sclerosis: a geographical hypothesis.

    Science.gov (United States)

    Carlyle, I P

    1997-12-01

    Multiple sclerosis remains a rare neurological disease of unknown aetiology, with a unique distribution, both geographically and historically. Rare in equatorial regions, it becomes increasingly common in higher latitudes; historically, it was first clinically recognized in the early nineteenth century. A hypothesis, based on geographical reasoning, is here proposed: that the disease is the result of a specific vitamin deficiency. Different individuals suffer the deficiency in separate and often unique ways. Evidence to support the hypothesis exists in cultural considerations, in the global distribution of the disease, and in its historical prevalence.

  15. Discussion of the Porter hypothesis

    International Nuclear Information System (INIS)

    1999-11-01

    In reaction to the long-range vision of RMNO, published in 1996, the Dutch government posed the question whether a far-reaching and progressive modernization policy would lead to competitive advantages for high-quality products on partly new markets. This question is connected to the so-called Porter hypothesis: 'By stimulating innovation, strict environmental regulations can actually enhance competitiveness', from which it can be concluded that environment and economy can work together quite well. A literature study has been carried out to determine under which conditions that hypothesis is endorsed in the scientific literature and in policy documents. Recommendations are given for further studies.

  16. The thrifty phenotype hypothesis revisited

    DEFF Research Database (Denmark)

    Vaag, A A; Grunnet, L G; Arora, G P

    2012-01-01

    Twenty years ago, Hales and Barker along with their co-workers published some of their pioneering papers proposing the 'thrifty phenotype hypothesis' in Diabetologia (4;35:595-601 and 3;36:62-67). Their postulate that fetal programming could represent an important player in the origin of type 2...... of the underlying molecular mechanisms. Type 2 diabetes is a multiple-organ disease, and developmental programming, with its idea of organ plasticity, is a plausible hypothesis for a common basis for the widespread organ dysfunctions in type 2 diabetes and the metabolic syndrome. Only two among the 45 known type 2...

  17. A Kolmogorov-Brutsaert Structure Function Model for Evaporation from a Rough Surface into a Turbulent Atmosphere

    Science.gov (United States)

    Katul, Gabriel; Liu, Heping

    2017-04-01

    In his 1881 acceptance letter of the Rumford Medal, Gibbs declared that "One of the principal objects of theoretical research is to find the point of view from which the subject appears in the greatest simplicity". Guided by this quotation, the subject of evaporation into the atmosphere from rough surfaces by turbulence offered in a 1965 study by Brutsaert is re-examined. Brutsaert proposed a model that predicted the mean evaporation rate E from rough surfaces to scale with the 3/4 power-law of the friction velocity (u∗) and the square-root of molecular diffusivity (Dm) for water vapor. This result was supported by a large corpus of experiments and spawned a number of studies on interfacial transfer of scalars, evaporation from porous media at single and multiple pore scales, bulk evaporation from bare soil surfaces, as well as isotopic fractionation in hydrological applications. It also correctly foreshadowed the much discussed 1/4 'universal' scaling of liquid transfer coefficients of sparingly soluble gases in air-sea exchange studies. In arriving at these results, a number of assumptions were made regarding the surface renewal rate describing the contact durations between eddies and the evaporating surface, the diffusional mass transfer from the surface into eddies, and the cascade of turbulent kinetic energy sustaining the eddy renewal process itself. The ansatz explored here is that E ~ √Dm u∗^(3/4) is a direct outcome of the Kolmogorov scaling for inertial subrange eddies modified to include a viscous cutoff, thereby bypassing the need for a surface renewal assumption. It is demonstrated that Brutsaert's model for E may be more general than its original derivation assumed. Extensions to canopy surfaces as well as other scalars with different molecular Schmidt numbers are also featured.
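
The quoted scaling can be recovered from a standard penetration-theory plus Kolmogorov-phenomenology argument (a reconstruction for the reader, not the paper's own derivation): the interfacial flux scales as E ∝ (Dm s)^(1/2) for renewal rate s, the renewal rate is set by the smallest eddies, and the dissipation rate in the surface layer scales with the friction velocity:

```latex
E \;\propto\; \sqrt{D_m\, s}, \qquad
s \;\sim\; \left(\frac{\varepsilon}{\nu}\right)^{1/2}, \qquad
\varepsilon \;\sim\; \frac{u_*^{3}}{\kappa z}
\quad\Longrightarrow\quad
E \;\propto\; \sqrt{D_m}\left(\frac{u_*^{3}}{\kappa z\,\nu}\right)^{1/4}
\;\propto\; \sqrt{D_m}\; u_*^{3/4}.
```

Here ν is the kinematic viscosity, κ the von Kármán constant, and z a reference height; only the dependence on Dm and u∗ is retained in the final step.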

  18. Andrei Nikolaevich Kolmogorov

    Indian Academy of Sciences (India)

    His life was one of intense mathematical creativity spanning six and a half decades and ... around which the entire edifice of statistical theory and computation is ... applications in areas like the theory of dams and collective risk theory. In 1931 ...

  19. The (not so) immortal strand hypothesis.

    Science.gov (United States)

    Tomasetti, Cristian; Bozic, Ivana

    2015-03-01

    Non-random segregation of DNA strands during stem cell replication has been proposed as a mechanism to minimize accumulated genetic errors in stem cells of rapidly dividing tissues. According to this hypothesis, an "immortal" DNA strand is passed to the stem cell daughter and not the more differentiated cell, keeping the stem cell lineage replication error-free. After it was introduced, experimental evidence both in favor and against the hypothesis has been presented. Using a novel methodology that utilizes cancer sequencing data we are able to estimate the rate of accumulation of mutations in healthy stem cells of the colon, blood and head and neck tissues. We find that in these tissues mutations in stem cells accumulate at rates strikingly similar to those expected without the protection from the immortal strand mechanism. Utilizing an approach that is fundamentally different from previous efforts to confirm or refute the immortal strand hypothesis, we provide evidence against non-random segregation of DNA during stem cell replication. Our results strongly suggest that parental DNA is passed randomly to stem cell daughters and provides new insight into the mechanism of DNA replication in stem cells. Copyright © 2015. Published by Elsevier B.V.

  20. Whiplash and the compensation hypothesis.

    Science.gov (United States)

    Spearing, Natalie M; Connelly, Luke B

    2011-12-01

    Review article. To explain why the evidence that compensation-related factors lead to worse health outcomes is not compelling, either in general, or in the specific case of whiplash. There is a common view that compensation-related factors lead to worse health outcomes ("the compensation hypothesis"), despite the presence of important, and unresolved sources of bias. The empirical evidence on this question has ramifications for the design of compensation schemes. Using studies on whiplash, this article outlines the methodological problems that impede attempts to confirm or refute the compensation hypothesis. Compensation studies are prone to measurement bias, reverse causation bias, and selection bias. Errors in measurement are largely due to the latent nature of whiplash injuries and health itself, a lack of clarity over the unit of measurement (specific factors, or "compensation"), and a lack of appreciation for the heterogeneous qualities of compensation-related factors and schemes. There has been a failure to acknowledge and empirically address reverse causation bias, or the likelihood that poor health influences the decision to pursue compensation: it is unclear if compensation is a cause or a consequence of poor health, or both. Finally, unresolved selection bias (and hence, confounding) is evident in longitudinal studies and natural experiments. In both cases, between-group differences have not been addressed convincingly. The nature of the relationship between compensation-related factors and health is unclear. Current approaches to testing the compensation hypothesis are prone to several important sources of bias, which compromise the validity of their results. Methods that explicitly test the hypothesis and establish whether or not a causal relationship exists between compensation factors and prolonged whiplash symptoms are needed in future studies.

  1. New Similarity Functions

    DEFF Research Database (Denmark)

    Yazdani, Hossein; Ortiz-Arroyo, Daniel; Kwasnicka, Halina

    2016-01-01

    spaces, in addition to their similarity in the vector space. Prioritized Weighted Feature Distance (PWFD) works similarly to WFD, but provides the ability to give priorities to desirable features. The accuracy of the proposed functions is compared with that of other similarity functions on several data sets. Our results show that the proposed functions work better than other methods proposed in the literature.

  2. Phoneme Similarity and Confusability

    Science.gov (United States)

    Bailey, T.M.; Hahn, U.

    2005-01-01

    Similarity between component speech sounds influences language processing in numerous ways. Explanation and detailed prediction of linguistic performance consequently requires an understanding of these basic similarities. The research reported in this paper contrasts two broad classes of approach to the issue of phoneme similarity-theoretically…

  3. Bilateral Trade Flows and Income Distribution Similarity

    Science.gov (United States)

    2016-01-01

    Current models of bilateral trade neglect the effects of income distribution. This paper addresses the issue by accounting for non-homothetic consumer preferences and hence investigating the role of income distribution in the context of the gravity model of trade. A theoretically justified gravity model is estimated for disaggregated trade data (dollar volume is used as the dependent variable) using a sample of 104 exporters and 108 importers for 1980–2003 to achieve two main goals. We define and calculate new measures of income-distribution similarity and empirically confirm that greater similarity of income distribution between countries implies more trade. Using distribution-based measures as a proxy for demand similarities in gravity models, we find consistent and robust support for the hypothesis that countries with more similar income distributions trade more with each other. The hypothesis is also confirmed at the disaggregated level for differentiated product categories. PMID:27137462
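
The estimating equation referenced above takes, in a generic textbook form (the paper's exact covariates and coefficient names are not reproduced here), a log-linear gravity specification augmented with a similarity regressor:

```latex
\ln X_{ij} \;=\; \beta_0 + \beta_1 \ln Y_i + \beta_2 \ln Y_j
             + \beta_3 \ln D_{ij} + \beta_4 S_{ij} + \epsilon_{ij},
```

where X_{ij} is the dollar volume of exports from country i to country j, Y_i and Y_j are incomes, D_{ij} is bilateral distance, and S_{ij} is the income-distribution similarity measure; the hypothesis that similar income distributions imply more trade corresponds to β₄ > 0.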

  4. Cell-size distribution and scaling in a one-dimensional Kolmogorov-Johnson-Mehl-Avrami lattice model with continuous nucleation

    Science.gov (United States)

    Néda, Zoltán; Járai-Szabó, Ferenc; Boda, Szilárd

    2017-10-01

    The Kolmogorov-Johnson-Mehl-Avrami (KJMA) growth model is considered on a one-dimensional (1D) lattice. Cells can grow with constant speed and continuously nucleate on the empty sites. We offer an alternative mean-field-like approach for describing theoretically the dynamics and derive an analytical cell-size distribution function. Our method reproduces the same scaling laws as the KJMA theory and has the advantage that it leads to a simple closed form for the cell-size distribution function. It is shown that a Weibull distribution is appropriate for describing the final cell-size distribution. The results are discussed in comparison with Monte Carlo simulation data.
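
A minimal discrete sketch of the 1D KJMA setting described above: continuous nucleation on empty sites and constant growth of one lattice site per step until collision. Function and parameter names are illustrative, and this lattice caricature stands in for, rather than reproduces, the authors' mean-field treatment:

```python
import random

def kjma_1d(length, p_nuc, seed=0):
    """1D KJMA sketch: empty sites nucleate with probability p_nuc per step;
    each cell grows one site per step in both directions until it meets a
    neighbor. Returns the list of final cell sizes (run lengths of labels)."""
    rng = random.Random(seed)
    lattice = [0] * length          # 0 = empty, otherwise a cell id
    next_id = 1
    while 0 in lattice:
        # growth: every occupied site claims adjacent sites that were empty
        # at the start of the step (first claimant wins at collisions)
        snapshot = lattice[:]
        for i, c in enumerate(snapshot):
            if c:
                for j in (i - 1, i + 1):
                    if 0 <= j < length and snapshot[j] == 0 and lattice[j] == 0:
                        lattice[j] = c
        # continuous nucleation on the still-empty sites
        for i in range(length):
            if lattice[i] == 0 and rng.random() < p_nuc:
                lattice[i] = next_id
                next_id += 1
    # final cell sizes = run lengths of equal labels
    sizes, run = [], 1
    for a, b in zip(lattice, lattice[1:]):
        if a == b:
            run += 1
        else:
            sizes.append(run)
            run = 1
    sizes.append(run)
    return sizes
```

Histogramming the returned sizes over many seeds gives an empirical cell-size distribution that can be compared against a Weibull fit.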

  5. Application of Kolmogorov chain process theory to the case of reactors of several coupled zones (reflex reactors, reactors with two multiplying zones)

    International Nuclear Information System (INIS)

    Chikouche, M.; Haldy, P.A.

    1976-01-01

    The general theory of chain processes of Kolmogorov and Dmitriev can be used to obtain the expression for the generating function f of the probability distribution of the number of events recorded in a given sequence of disjoint time intervals. From this f it is possible to extract a theoretical formulation for most methods of temporal analysis of neutron noise ('variance-to-mean', Rossi-alpha I and II, interval distribution, etc.). This theory is extended to the case of multiple coupled zones. (F.Q.)

  6. Is PMI the Hypothesis or the Null Hypothesis?

    Science.gov (United States)

    Tarone, Aaron M; Sanford, Michelle R

    2017-09-01

    Over the past several decades, there have been several strident exchanges regarding whether forensic entomologists estimate the postmortem interval (PMI), minimum PMI, or something else. During that time, there has been a proliferation of terminology reflecting this concern regarding "what we do." This has been a frustrating conversation for some in the community because much of this debate appears to be centered on what assumptions are acknowledged directly and which are embedded within a list of assumptions (or ignored altogether) in the literature and in case reports. An additional component of the conversation centers on a concern that moving away from the use of certain terminology like PMI acknowledges limitations and problems that would make the application of entomology appear less useful in court, a problem for lawyers, but one that should not be problematic for scientists in the forensic entomology community, as uncertainty is part of science that should and can be presented effectively in the courtroom (e.g., population genetic concepts in forensics). Unfortunately, a consequence of the way this conversation is conducted is that even as all involved in the debate acknowledge the concerns of their colleagues, parties continue to talk past one another advocating their preferred terminology. Progress will not be made until the community recognizes that all of the terms under consideration take the form of null hypothesis statements and that thinking about "what we do" as a null hypothesis has useful legal and scientific ramifications that transcend arguments over the usage of preferred terminology. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. The Stoichiometric Divisome: A Hypothesis

    Directory of Open Access Journals (Sweden)

    Waldemar eVollmer

    2015-05-01

    Dividing Escherichia coli cells simultaneously constrict the inner membrane, peptidoglycan layer and outer membrane to synthesize the new poles of the daughter cells. For this, more than 30 proteins localize to mid-cell where they form a large, ring-like assembly, the divisome, facilitating division. Although the precise function of most divisome proteins is unknown, it became apparent in recent years that dynamic protein-protein interactions are essential for divisome assembly and function. However, little is known about the nature of the interactions involved and the stoichiometry of the proteins within the divisome. A recent study (Li et al., 2014) used ribosome profiling to measure the absolute protein synthesis rates in E. coli. Interestingly, they observed that most proteins which participate in known multiprotein complexes are synthesized proportionally to their stoichiometry. Based on this principle we present a hypothesis for the stoichiometry of the core of the divisome, taking into account known protein-protein interactions. From this hypothesis we infer a possible mechanism for PG synthesis during division.

  8. Molecular similarity measures.

    Science.gov (United States)

    Maggiora, Gerald M; Shanmugasundaram, Veerabahu

    2011-01-01

    Molecular similarity is a pervasive concept in chemistry. It is essential to many aspects of chemical reasoning and analysis and is perhaps the fundamental assumption underlying medicinal chemistry. Dissimilarity, the complement of similarity, also plays a major role in a growing number of applications of molecular diversity in combinatorial chemistry, high-throughput screening, and related fields. How molecular information is represented, called the representation problem, is important to the type of molecular similarity analysis (MSA) that can be carried out in any given situation. In this work, four types of mathematical structure are used to represent molecular information: sets, graphs, vectors, and functions. Molecular similarity is a pairwise relationship that induces structure into sets of molecules, giving rise to the concept of chemical space. Although all three concepts - molecular similarity, molecular representation, and chemical space - are treated in this chapter, the emphasis is on molecular similarity measures. Similarity measures, also called similarity coefficients or indices, are functions that map pairs of compatible molecular representations that are of the same mathematical form into real numbers usually, but not always, lying on the unit interval. This chapter presents a somewhat pedagogical discussion of many types of molecular similarity measures, their strengths and limitations, and their relationship to one another. An expanded account of the material on chemical spaces presented in the first edition of this book is also provided. It includes a discussion of the topography of activity landscapes and the role that activity cliffs in these landscapes play in structure-activity studies.
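
As a concrete instance of the similarity coefficients discussed above, the Tanimoto (Jaccard) coefficient maps a pair of set-based molecular representations (e.g., fingerprint bit sets) onto the unit interval. A minimal sketch, not tied to any particular fingerprint software:

```python
def tanimoto(a, b):
    """Tanimoto (Jaccard) coefficient for set representations:
    |A ∩ B| / |A ∪ B|, ranging from 0 (disjoint) to 1 (identical)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention: two empty fingerprints count as identical
    return len(a & b) / len(a | b)
```

For example, fingerprints {1, 2, 3} and {2, 3, 4} share two of four distinct features, giving a similarity of 0.5; the corresponding dissimilarity is 1 − 0.5 = 0.5.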

  9. Similarity Measure of Graphs

    Directory of Open Access Journals (Sweden)

    Amine Labriji

    2017-07-01

    Identifying the similarity of graphs is an important research topic in the semantic Web, artificial intelligence, shape recognition, and information retrieval. One of the fundamental problems of graph databases is finding graphs similar to a query graph. Existing approaches to this problem are usually based only on the nodes and arcs of the two graphs, disregarding ancestral semantic links: a common ancestor concept contributes nothing to the similarity of two graphs that share no concepts directly under measures based on the union of the two graphs, on the maximum common subgraph, or on graph edit distance. This is inadequate in the context of information retrieval. To overcome this problem, we propose a new measure of similarity between graphs, based on the similarity measure of Wu and Palmer. We show that this new measure satisfies the properties of a similarity measure and apply it to examples. The results show that our measure runs faster than existing approaches, and a comparison of the similarity values obtained shows that the new graph measure is advantageous and contributes to solving the problem mentioned above.
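
The Wu-Palmer measure on which the proposed graph similarity is based scores two concepts by the depth of their least common subsumer (LCS) in a taxonomy: sim(a, b) = 2·depth(LCS) / (depth(a) + depth(b)). A minimal sketch on a child-to-parent map (the taxonomy and names are illustrative; the paper's extension to whole graphs is not reproduced here):

```python
def wu_palmer(a, b, parent):
    """Wu-Palmer concept similarity on a taxonomy given as a child->parent
    map: 2*depth(lcs) / (depth(a) + depth(b)), with the root at depth 1."""
    def path_to_root(n):
        path = [n]
        while n in parent:
            n = parent[n]
            path.append(n)
        return path  # node ... root

    pa, pb = path_to_root(a), path_to_root(b)
    ancestors_a = set(pa)
    lcs = next(n for n in pb if n in ancestors_a)  # deepest common ancestor
    depth = {n: len(path_to_root(n)) for n in (a, b, lcs)}
    return 2 * depth[lcs] / (depth[a] + depth[b])
```

On a toy taxonomy animal > mammal > {feline > cat, canine > dog}, the LCS of cat and dog is mammal (depth 2) while cat and dog each sit at depth 4, giving a similarity of 2·2/(4+4) = 0.5.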

  10. Processes of Similarity Judgment

    Science.gov (United States)

    Larkey, Levi B.; Markman, Arthur B.

    2005-01-01

    Similarity underlies fundamental cognitive capabilities such as memory, categorization, decision making, problem solving, and reasoning. Although recent approaches to similarity appreciate the structure of mental representations, they differ in the processes posited to operate over these representations. We present an experiment that…

  11. Judgments of brand similarity

    NARCIS (Netherlands)

    Bijmolt, THA; Wedel, M; Pieters, RGM; DeSarbo, WS

    This paper provides empirical insight into the way consumers make pairwise similarity judgments between brands, and how familiarity with the brands, serial position of the pair in a sequence, and the presentation format affect these judgments. Within the similarity judgment process both the

  12. Athlete's Heart: Is the Morganroth Hypothesis Obsolete?

    Science.gov (United States)

    Haykowsky, Mark J; Samuel, T Jake; Nelson, Michael D; La Gerche, Andre

    2018-05-01

    In 1975, Morganroth and colleagues reported that the increased left ventricular (LV) mass in highly trained endurance athletes versus nonathletes was primarily due to increased end-diastolic volume while the increased LV mass in resistance trained athletes was solely due to an increased LV wall thickness. Based on the divergent remodelling patterns observed, Morganroth and colleagues hypothesised that the increased "volume" load during endurance exercise may be similar to that which occurs in patients with mitral or aortic regurgitation while the "pressure" load associated with performing a Valsalva manoeuvre (VM) during resistance exercise may mimic the stress imposed on the heart by systemic hypertension or aortic stenosis. Despite widespread acceptance of the four-decade old Morganroth hypothesis in sports cardiology, some investigators have questioned whether such a divergent "athlete's heart" phenotype exists. Given this uncertainty, the purpose of this brief review is to re-evaluate the Morganroth hypothesis regarding: i) the acute effects of resistance exercise performed with a brief VM on LV wall stress, and the patterns of LV remodelling in resistance-trained athletes; ii) the acute effects of endurance exercise on biventricular wall stress, and the time course and pattern of LV and right ventricular (RV) remodelling with endurance training; and iii) the value of comparing "loading" conditions between athletes and patients with cardiac pathology. Copyright © 2018. Published by Elsevier B.V.

  13. The semantic similarity ensemble

    Directory of Open Access Journals (Sweden)

    Andrea Ballatore

    2013-12-01

    Full Text Available Computational measures of semantic similarity between geographic terms provide valuable support across geographic information retrieval, data mining, and information integration. To date, a wide variety of approaches to geo-semantic similarity have been devised. A judgment of similarity is not intrinsically right or wrong, but obtains a certain degree of cognitive plausibility, depending on how closely it mimics human behavior. Thus selecting the most appropriate measure for a specific task is a significant challenge. To address this issue, we make an analogy between computational similarity measures and soliciting domain expert opinions, which incorporate a subjective set of beliefs, perceptions, hypotheses, and epistemic biases. Following this analogy, we define the semantic similarity ensemble (SSE as a composition of different similarity measures, acting as a panel of experts having to reach a decision on the semantic similarity of a set of geographic terms. The approach is evaluated in comparison to human judgments, and results indicate that an SSE performs better than the average of its parts. Although the best member tends to outperform the ensemble, all ensembles outperform the average performance of each ensemble's member. Hence, in contexts where the best measure is unknown, the ensemble provides a more cognitively plausible approach.
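The panel-of-experts idea can be sketched as a plain average of member scores. The two member measures below (character-set Jaccard and normalized common-prefix length) are simple stand-ins for real geo-semantic measures, not the ones evaluated in the paper:

```python
def char_jaccard(t1: str, t2: str) -> float:
    """Overlap of the two terms' character sets, in [0, 1]."""
    a, b = set(t1), set(t2)
    return len(a & b) / len(a | b) if a | b else 1.0

def prefix_sim(t1: str, t2: str) -> float:
    """Length of the common prefix, normalized by the longer term."""
    n = 0
    for x, y in zip(t1, t2):
        if x != y:
            break
        n += 1
    return n / max(len(t1), len(t2), 1)

def ensemble_similarity(t1: str, t2: str, measures) -> float:
    """Semantic similarity ensemble: average the panel's scores."""
    return sum(m(t1, t2) for m in measures) / len(measures)

panel = [char_jaccard, prefix_sim]
print(ensemble_similarity("lake", "lagoon", panel))  # (2/7 + 2/6) / 2 ≈ 0.31
```

Averaging is the simplest aggregation rule; weighted votes or rank aggregation would fit the same interface.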

  14. Robust and distributed hypothesis testing

    CERN Document Server

    Gül, Gökhan

    2017-01-01

This book generalizes and extends the available theory in robust and decentralized hypothesis testing. In particular, it presents a robust test for modeling errors which is independent of the assumptions that a sufficiently large number of samples is available and that the distance is the KL-divergence. Here, the distance can be chosen from a much more general model, which includes the KL-divergence as a very special case. This is then extended by various means. A minimax robust test that is robust against both outliers as well as modeling errors is presented. Minimax robustness properties of the given tests are also explicitly proven for fixed sample size and sequential probability ratio tests. The theory of robust detection is extended to robust estimation, and the theory of robust distributed detection is extended to classes of distributions which are not necessarily stochastically bounded. It is shown that the quantization functions for the decision rules can also be chosen as non-monotone. Finally, the boo...
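One of the sequential procedures mentioned above, Wald's sequential probability ratio test, can be sketched for the simplest Gaussian mean-shift setting with known sigma, using Wald's classical threshold approximations. This is an illustrative sketch, not the book's robust construction:

```python
import math

def sprt(samples, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: mean=mu0 vs H1: mean=mu1, Gaussian data, known sigma.
    Returns (decision, number of samples consumed)."""
    lo = math.log(beta / (1 - alpha))   # cross below: accept H0
    hi = math.log((1 - beta) / alpha)   # cross above: accept H1
    llr = 0.0                           # running log-likelihood ratio
    for n, x in enumerate(samples, start=1):
        # Gaussian log-likelihood ratio increment for one observation
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2
        if llr <= lo:
            return "H0", n
        if llr >= hi:
            return "H1", n
    return "continue", len(samples)

print(sprt([1.0] * 10, mu0=0.0, mu1=1.0, sigma=1.0))  # ('H1', 6)
```

The robust variants discussed in the book replace the nominal likelihood ratio with a least-favorable one, but the sequential stopping structure is the same.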

  15. The venom optimization hypothesis revisited.

    Science.gov (United States)

    Morgenstern, David; King, Glenn F

    2013-03-01

    Animal venoms are complex chemical mixtures that typically contain hundreds of proteins and non-proteinaceous compounds, resulting in a potent weapon for prey immobilization and predator deterrence. However, because venoms are protein-rich, they come with a high metabolic price tag. The metabolic cost of venom is sufficiently high to result in secondary loss of venom whenever its use becomes non-essential to survival of the animal. The high metabolic cost of venom leads to the prediction that venomous animals may have evolved strategies for minimizing venom expenditure. Indeed, various behaviors have been identified that appear consistent with frugality of venom use. This has led to formulation of the "venom optimization hypothesis" (Wigger et al. (2002) Toxicon 40, 749-752), also known as "venom metering", which postulates that venom is metabolically expensive and therefore used frugally through behavioral control. Here, we review the available data concerning economy of venom use by animals with either ancient or more recently evolved venom systems. We conclude that the convergent nature of the evidence in multiple taxa strongly suggests the existence of evolutionary pressures favoring frugal use of venom. However, there remains an unresolved dichotomy between this economy of venom use and the lavish biochemical complexity of venom, which includes a high degree of functional redundancy. We discuss the evidence for biochemical optimization of venom as a means of resolving this conundrum. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Alien abduction: a medical hypothesis.

    Science.gov (United States)

    Forrest, David V

    2008-01-01

In response to a new psychological study of persons who believe they have been abducted by space aliens, which found that sleep paralysis, a history of being hypnotized, and preoccupation with the paranormal and extraterrestrial were predisposing experiences, I note that many of the frequently reported particulars of the abduction experience bear more than a passing resemblance to medical-surgical procedures, and I propose that experience with these may also be contributory. There is the altered state of consciousness, uniformly colored figures with prominent eyes, in a high-tech room under a round bright saucerlike object; there is nakedness, pain and a loss of control while the body's boundaries are being probed; and yet the figures are thought benevolent. No medical-surgical history was apparently taken in the above-mentioned study, but psychological laboratory work evaluated false memory formation. I discuss problems in assessing intraoperative awareness and ways in which the medical hypothesis could be elaborated and tested. If physicians are causing this syndrome in a percentage of patients, we should know about it; and persons who feel they have been abducted should be encouraged to inform their surgeons and anesthesiologists without challenging their beliefs.

  17. The oxidative hypothesis of senescence

    Directory of Open Access Journals (Sweden)

    Gilca M

    2007-01-01

Full Text Available The oxidative hypothesis of senescence, since its origin in 1956, has garnered significant evidence and growing support among scientists for the notion that free radicals play an important role in ageing, either as "damaging" molecules or as signaling molecules. Age-increasing oxidative injuries induced by free radicals, higher susceptibility to oxidative stress in short-lived organisms, genetic manipulations that alter both oxidative resistance and longevity, and the anti-ageing effect of caloric restriction and intermittent fasting are a few examples of accepted scientific facts that support the oxidative theory of senescence. Though not completely understood, due to the complex "network" of redox regulatory systems, the implication of oxidative stress in the ageing process is now well documented. Moreover, it is compatible with other current ageing theories (e.g., those implicating mitochondrial damage and the mitochondrial-lysosomal axis, stress-induced premature senescence, biological "garbage" accumulation, etc.). This review is intended to summarize and critically discuss the redox mechanisms involved in the ageing process: sources of oxidant agents in ageing (mitochondrial: the electron transport chain and the nitric oxide synthase reaction; non-mitochondrial: the Fenton reaction, microsomal cytochrome P450 enzymes, peroxisomal β-oxidation, and the respiratory burst of phagocytic cells), antioxidant changes in ageing (enzymatic: superoxide dismutase, glutathione reductase, glutathione peroxidase, catalase; non-enzymatic: glutathione, ascorbate, urate, bilirubin, melatonin, tocopherols, carotenoids, ubiquinol), alteration of oxidative damage repair mechanisms, and the role of free radicals as signaling molecules in ageing.

  18. Evaluating gender similarities and differences using metasynthesis.

    Science.gov (United States)

    Zell, Ethan; Krizan, Zlatan; Teeter, Sabrina R

    2015-01-01

    Despite the common lay assumption that males and females are profoundly different, Hyde (2005) used data from 46 meta-analyses to demonstrate that males and females are highly similar. Nonetheless, the gender similarities hypothesis has remained controversial. Since Hyde's provocative report, there has been an explosion of meta-analytic interest in psychological gender differences. We utilized this enormous collection of 106 meta-analyses and 386 individual meta-analytic effects to reevaluate the gender similarities hypothesis. Furthermore, we employed a novel data-analytic approach called metasynthesis (Zell & Krizan, 2014) to estimate the average difference between males and females and to explore moderators of gender differences. The average, absolute difference between males and females across domains was relatively small (d = 0.21, SD = 0.14), with the majority of effects being either small (46%) or very small (39%). Magnitude of differences fluctuated somewhat as a function of the psychological domain (e.g., cognitive variables, social and personality variables, well-being), but remained largely constant across age, culture, and generations. These findings provide compelling support for the gender similarities hypothesis, but also underscore conditions under which gender differences are most pronounced. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  19. Similarity or difference?

    DEFF Research Database (Denmark)

    Villadsen, Anders Ryom

    2013-01-01

While the organizational structures and strategies of public organizations have attracted substantial research attention among public management scholars, little research has explored how these organizational core dimensions are interconnected and influenced by pressures for similarity... In this paper I address this topic by exploring the relation between expenditure strategy isomorphism and structure isomorphism in Danish municipalities. Different literatures suggest that organizations exist in concurrent pressures for being similar to and different from other organizations in their field... -shaped relation exists between expenditure strategy isomorphism and structure isomorphism in a longitudinal quantitative study of Danish municipalities.

  20. Comparing Harmonic Similarity Measures

    NARCIS (Netherlands)

    de Haas, W.B.; Robine, M.; Hanna, P.; Veltkamp, R.C.; Wiering, F.

    2010-01-01

    We present an overview of the most recent developments in polyphonic music retrieval and an experiment in which we compare two harmonic similarity measures. In contrast to earlier work, in this paper we specifically focus on the symbolic chord description as the primary musical representation and

  1. Perceptions of Ideal and Former Partners’ Personality and Similarity

    Directory of Open Access Journals (Sweden)

    Pieternel Dijkstra

    2010-12-01

Full Text Available The present study aimed to test predictions based on both the 'similarity-attraction' hypothesis and the 'attraction-similarity' hypothesis, by studying perceptions of ideal and former partners. Based on the 'similarity-attraction' hypothesis, we expected individuals to desire ideal partners who are similar to the self in personality. In addition, based on the 'attraction-similarity' hypothesis, we expected individuals to perceive former partners as dissimilar to them in terms of personality. Findings showed that, whereas the ideal partner was seen as similar to and more positive than the self, the former partner was seen as dissimilar to and more negative than the self. In addition, our study showed that individuals did not rate similarity in personality as very important when seeking a mate. Our findings may help understand why so many relationships end in divorce due to mismatches in personality.

  2. Testing Self-Similarity Through Lamperti Transformations

    KAUST Repository

    Lee, Myoungji

    2016-07-14

    Self-similar processes have been widely used in modeling real-world phenomena occurring in environmetrics, network traffic, image processing, and stock pricing, to name but a few. The estimation of the degree of self-similarity has been studied extensively, while statistical tests for self-similarity are scarce and limited to processes indexed in one dimension. This paper proposes a statistical hypothesis test procedure for self-similarity of a stochastic process indexed in one dimension and multi-self-similarity for a random field indexed in higher dimensions. If self-similarity is not rejected, our test provides a set of estimated self-similarity indexes. The key is to test stationarity of the inverse Lamperti transformations of the process. The inverse Lamperti transformation of a self-similar process is a strongly stationary process, revealing a theoretical connection between the two processes. To demonstrate the capability of our test, we test self-similarity of fractional Brownian motions and sheets, their time deformations and mixtures with Gaussian white noise, and the generalized Cauchy family. We also apply the self-similarity test to real data: annual minimum water levels of the Nile River, network traffic records, and surface heights of food wrappings. © 2016, International Biometric Society.
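The theoretical connection the test exploits can be checked directly: X is H-self-similar exactly when its inverse Lamperti transform Y(t) = exp(-Ht) X(exp(t)) is stationary. A minimal numerical sketch, using the deterministic scaling function u**H (which scales exactly like an H-self-similar process, so its transform is identically constant) rather than a simulated fractional Brownian motion:

```python
import math

def inverse_lamperti(X, t_grid, H):
    """Inverse Lamperti transform: Y(t) = exp(-H*t) * X(exp(t)).
    Y is strictly stationary if and only if X is H-self-similar."""
    return [math.exp(-H * t) * X(math.exp(t)) for t in t_grid]

# Deterministic check: X(u) = u**H gives a transform that is identically 1.
H = 0.5
t_grid = [0.1 * k for k in range(31)]
Y = inverse_lamperti(lambda u: u ** H, t_grid, H)
print(max(Y) - min(Y))  # ~0: constant up to floating-point rounding
```

For a random path one would instead apply a stationarity test to the transformed series, which is the core of the proposed procedure.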

  3. PAF receptor structure: a hypothesis.

    Science.gov (United States)

    Godfroid, J J; Dive, G; Lamotte-Brasseur, J; Batt, J P; Heymans, F

    1991-12-01

Different hypotheses of the structure of the platelet-activating factor (PAF) receptor based on structure-activity relationships of agonists and antagonists are reviewed. For an agonistic effect, strong hydrophobic interactions and an ether function are required in position-1 of the glycerol backbone; chain length limitations and steric hindrance demand a small group in position-2. The unusual structural properties of non-PAF-like antagonists required 3-D electrostatic potential calculations. This method applied to seven potent antagonists suggests a strong "Cache-oreilles" (ear-muff) effect, i.e., two strong electronegative wells (isocontour at -10 Kcal/mole) are located at 180 degrees to each other and at a relatively constant distance. Initial consideration of the "Cache-oreilles" effect implied the structure of a bipolarized cylinder of 10-12 A diameter for the receptor. However, very recent results on studies with agonists and antagonists structurally similar to PAF suggest that the receptor may in fact be a multi-polarized cylinder.

  4. Vascular Gene Expression: A Hypothesis

    Directory of Open Access Journals (Sweden)

    Angélica Concepción eMartínez-Navarro

    2013-07-01

Full Text Available The phloem is the conduit through which photoassimilates are distributed from autotrophic to heterotrophic tissues and is involved in the distribution of signaling molecules that coordinate plant growth and responses to the environment. Phloem function depends on the coordinate expression of a large array of genes. We have previously identified conserved motifs in upstream regions of the Arabidopsis genes, encoding the homologs of pumpkin phloem sap mRNAs, displaying expression in vascular tissues. This tissue-specific expression in Arabidopsis is predicted by the overrepresentation of GA/CT-rich motifs in gene promoters. In this work we have searched for common motifs in upstream regions of the homologous genes from plants considered to possess a primitive vascular tissue (a lycophyte, as well as from others that lack a true vascular tissue (a bryophyte, and finally from chlorophytes. Both lycophyte and bryophyte display motifs similar to those found in Arabidopsis with a significantly low E-value, while the chlorophytes showed either a different conserved motif or no conserved motif at all. These results suggest that these same genes are expressed coordinately in non-vascular plants; this coordinate expression may have been one of the prerequisites for the development of conducting tissues in plants. We have also analyzed the phylogeny of conserved proteins that may be involved in phloem function and development. The presence of CmPP16, APL, FT and YDA in chlorophytes suggests the recruitment of ancient regulatory networks for the development of the vascular tissue during evolution while OPS is a novel protein specific to vascular plants.

  5. Similar or different?

    DEFF Research Database (Denmark)

    Cornér, Solveig; Pyhältö, Kirsi; Peltonen, Jouni

    2018-01-01

Previous research has identified researcher community and supervisory support as key determinants of the doctoral journey contributing to students' persistence and robustness. However, we still know little about cross-cultural variation in the researcher community and supervisory support experienced by PhD students within the same discipline. This study explores the support experiences of 381 PhD students within the humanities and social sciences from three research-intensive universities in Denmark (n=145) and Finland (n=236). A mixed methods design was utilized. The data were collected... The results indicated that the only form of support in which the students expressed more matched than mismatched support was informational support. Further investigation showed that the Danish students reported a higher level of mismatch in emotional support than their Finnish counterparts, whereas the Finnish students perceived lower levels of instrumental support than the Danish students. The findings imply that seemingly similar contexts hold valid differences in experienced social support and educational strategies at the PhD level.

  6. Hypothesis Testing in the Real World

    Science.gov (United States)

    Miller, Jeff

    2017-01-01

    Critics of null hypothesis significance testing suggest that (a) its basic logic is invalid and (b) it addresses a question that is of no interest. In contrast to (a), I argue that the underlying logic of hypothesis testing is actually extremely straightforward and compelling. To substantiate that, I present examples showing that hypothesis…

  7. Error probabilities in default Bayesian hypothesis testing

    NARCIS (Netherlands)

    Gu, Xin; Hoijtink, Herbert; Mulder, J,

    2016-01-01

    This paper investigates the classical type I and type II error probabilities of default Bayes factors for a Bayesian t test. Default Bayes factors quantify the relative evidence between the null hypothesis and the unrestricted alternative hypothesis without needing to specify prior distributions for

  8. Reassessing the Trade-off Hypothesis

    DEFF Research Database (Denmark)

    Rosas, Guillermo; Manzetti, Luigi

    2015-01-01

    Do economic conditions drive voters to punish politicians that tolerate corruption? Previous scholarly work contends that citizens in young democracies support corrupt governments that are capable of promoting good economic outcomes, the so-called trade-off hypothesis. We test this hypothesis based...

  9. Mastery Learning and the Decreasing Variability Hypothesis.

    Science.gov (United States)

    Livingston, Jennifer A.; Gentile, J. Ronald

    1996-01-01

    This report results from studies that tested two variations of Bloom's decreasing variability hypothesis using performance on successive units of achievement in four graduate classrooms that used mastery learning procedures. Data do not support the decreasing variability hypothesis; rather, they show no change over time. (SM)

  10. Test of a hypothesis of realism in quantum theory using a Bayesian approach

    Science.gov (United States)

    Nikitin, N.; Toms, K.

    2017-05-01

In this paper we propose a time-independent equality and time-dependent inequality, suitable for an experimental test of the hypothesis of realism. The derivation of these relations is based on the concept of conditional probability and on Bayes' theorem in the framework of Kolmogorov's axiomatics of probability theory. The equality obtained is intrinsically different from the well-known Greenberger-Horne-Zeilinger (GHZ) equality and its variants, because violation of the proposed equality might be tested in experiments with only two microsystems in a maximally entangled Bell state |Ψ⁻⟩, while a test of the GHZ equality requires at least three quantum systems in a special state |Ψ_GHZ⟩. The obtained inequality differs from Bell's, Wigner's, and Leggett-Garg inequalities, because it deals with spin s = 1/2 projections onto only two nonparallel directions at two different moments of time, while a test of the Bell and Wigner inequalities requires at least three nonparallel directions, and a test of the Leggett-Garg inequalities requires at least three distinct moments of time. Hence, the proposed inequality seems to open an additional experimental possibility to avoid the "contextuality loophole." Violation of the proposed equality and inequality is illustrated with the behavior of a pair of anticorrelated spins in an external magnetic field and also with the oscillations of flavor-entangled pairs of neutral pseudoscalar mesons.

  11. Kolmogorov-Smirnov statistical test for analysis of ZAP-70 expression in B-CLL, compared with quantitative PCR and IgV(H) mutation status.

    Science.gov (United States)

    Van Bockstaele, Femke; Janssens, Ann; Piette, Anne; Callewaert, Filip; Pede, Valerie; Offner, Fritz; Verhasselt, Bruno; Philippé, Jan

    2006-07-15

ZAP-70 has been proposed as a surrogate marker for immunoglobulin heavy-chain variable region (IgV(H)) mutation status, which is known as a prognostic marker in B-cell chronic lymphocytic leukemia (CLL). The flow cytometric analysis of ZAP-70 suffers from difficulties in standardization and interpretation. We applied the Kolmogorov-Smirnov (KS) statistical test to make analysis more straightforward. We examined ZAP-70 expression by flow cytometry in 53 patients with CLL. Analysis was performed as initially described by Crespo et al. (New England J Med 2003; 348:1764-1775) and alternatively by application of the KS statistical test comparing T cells with B cells. Receiver-operating-characteristic (ROC) curve analyses were performed to determine the optimal cut-off values for ZAP-70 measured by the two approaches. ZAP-70 protein expression was compared with ZAP-70 mRNA expression measured by quantitative PCR (qPCR) and with the IgV(H) mutation status. Both flow cytometric analyses correlated well with the molecular technique and proved to be of equal value in predicting the IgV(H) mutation status. Applying the KS test is reproducible, simple, and straightforward, and overcomes a number of difficulties encountered in the Crespo method. The KS statistical test is an essential part of the software delivered with modern routine analytical flow cytometers and is well suited for analysis of ZAP-70 expression in CLL. (c) 2006 International Society for Analytical Cytology.
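The two-sample KS statistic underlying this approach is simply the largest gap between the two empirical distribution functions (here, T-cell versus B-cell fluorescence). A self-contained sketch with made-up intensity values, not patient data:

```python
from bisect import bisect_right

def ks_statistic(sample1, sample2):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical CDFs."""
    s1, s2 = sorted(sample1), sorted(sample2)
    n1, n2 = len(s1), len(s2)
    # The maximum gap is attained at an observed value, so scan all of them.
    return max(abs(bisect_right(s1, x) / n1 - bisect_right(s2, x) / n2)
               for x in s1 + s2)

# Hypothetical ZAP-70 fluorescence intensities (arbitrary units)
t_cells = [3.1, 3.4, 3.9, 4.2, 4.8, 5.0]
b_cells = [1.0, 1.2, 1.9, 2.3, 2.8, 3.2]
print(ks_statistic(t_cells, b_cells))  # 5/6: the two distributions barely overlap
```

A large statistic indicates well-separated T- and B-cell distributions, i.e., clearly discordant ZAP-70 expression between the two populations.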

  12. Implications of the Bohm-Aharonov hypothesis

    International Nuclear Information System (INIS)

    Ghirardi, G.C.; Rimini, A.; Weber, T.

    1976-01-01

It is proved that the Bohm-Aharonov hypothesis concerning largely separated subsystems of composite quantum systems implies that it is impossible to express the dynamical evolution in terms of the density operator

  13. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.; Shamma, Jeff S.

    2014-01-01

    incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well

  14. The (not so) Immortal Strand Hypothesis

    OpenAIRE

    Tomasetti, Cristian; Bozic, Ivana

    2015-01-01

    Background: Non-random segregation of DNA strands during stem cell replication has been proposed as a mechanism to minimize accumulated genetic errors in stem cells of rapidly dividing tissues. According to this hypothesis, an “immortal” DNA strand is passed to the stem cell daughter and not the more differentiated cell, keeping the stem cell lineage replication error-free. After it was introduced, experimental evidence both in favor and against the hypothesis has been presented. Principal...

  15. Similarity, trust in institutions, affect, and populism

    DEFF Research Database (Denmark)

    Scholderer, Joachim; Finucane, Melissa L.

Affect-based evaluations are fundamental to human information processing; they can contribute significantly to other judgments (such as the risk, cost-effectiveness, or trustworthiness) of the same stimulus object. Although deliberation and analysis are certainly important in some decision-making circumstances, reliance on affect is a quicker, easier, and more efficient way of navigating a complex and uncertain world. Hence, many theorists give affect a direct and primary role in motivating behavior. Taken together, the results provide uncannily strong support for the value-similarity hypothesis, strengthening... types of information about gene technology. The materials were attributed to different institutions. The results indicated that participants' trust in an institution was a function of the similarity between the position advocated in the materials and participants' own attitudes towards gene technology...

  16. Multiple hypothesis tracking for the cyber domain

    Science.gov (United States)

    Schwoegler, Stefan; Blackman, Sam; Holsopple, Jared; Hirsch, Michael J.

    2011-09-01

This paper discusses how methods used for conventional multiple hypothesis tracking (MHT) can be extended to domain-agnostic tracking of entities from non-kinematic constraints, such as those imposed by cyber attacks, in a potentially dense false-alarm background. MHT is widely recognized as the premier method to avoid corrupting tracks with spurious data in the kinematic domain, but it has not been extensively applied to other problem domains. The traditional approach is to tightly couple track maintenance (prediction, gating, filtering, probabilistic pruning, and target confirmation) with hypothesis management (clustering, incompatibility maintenance, hypothesis formation, and N-association pruning). However, by separating the domain-specific track maintenance portion from the domain-agnostic hypothesis management piece, we can begin to apply the wealth of knowledge gained from ground and air tracking solutions to the cyber (and other) domains. These realizations led to the creation of Raytheon's Multiple Hypothesis Extensible Tracking Architecture (MHETA). In this paper, we showcase MHETA for the cyber domain, plugging in a well-established method, CUBRC's INFormation Engine for Real-time Decision making (INFERD), for the association portion of the MHT. The result is a CyberMHT. We demonstrate the power of MHETA-INFERD using simulated data. Using metrics from both the tracking and cyber domains, we show that while no tracker is perfect, applying MHETA-INFERD captures advanced non-kinematic tracks in an automated way, performs better than non-MHT approaches, and decreases analyst response time to cyber threats.

  17. Aminoglycoside antibiotics and autism: a speculative hypothesis

    Directory of Open Access Journals (Sweden)

    Manev Hari

    2001-10-01

Full Text Available Abstract Background Recently, it has been suspected that there is a relationship between therapy with some antibiotics and the onset of autism; but even more curious, some children benefited transiently from a subsequent treatment with a different antibiotic. Here, we speculate how aminoglycoside antibiotics might be associated with autism. Presentation We hypothesize that aminoglycoside antibiotics could (a) trigger the autism syndrome in susceptible infants by causing stop codon readthrough, i.e., a misreading of the genetic code of a hypothetical critical gene, and/or (b) improve autism symptoms by correcting a premature stop codon mutation in a hypothetical polymorphic gene linked to autism. Testing Investigate, retrospectively, whether a link exists between aminoglycoside use (which is not extensive in children) and the onset of autism symptoms (hypothesis "a"), or between aminoglycoside use and improvement of these symptoms (hypothesis "b"). Whereas a prospective study to test hypothesis "a" is not ethically justifiable, a study could be designed to test hypothesis "b". Implications It should be stressed that at this stage no direct evidence supports our speculative hypothesis and that its main purpose is to initiate the development of new ideas that, eventually, would improve our understanding of the pathobiology of autism.

  18. Testing competing forms of the Milankovitch hypothesis

    DEFF Research Database (Denmark)

    Kaufmann, Robert K.; Juselius, Katarina

    2016-01-01

We test competing forms of the Milankovitch hypothesis by estimating the coefficients and diagnostic statistics for a cointegrated vector autoregressive model that includes 10 climate variables and four exogenous variables for solar insolation. The estimates are consistent with the physical... ice volume and solar insolation. The estimated adjustment dynamics show that solar insolation affects an array of climate variables other than ice volume, each at a unique rate. This implies that previous efforts to test the strong form of the Milankovitch hypothesis by examining the relationship... that the latter is consistent with a weak form of the Milankovitch hypothesis and that it should be restated as follows: Internal climate dynamics impose perturbations on glacial cycles that are driven by solar insolation. Our results show that these perturbations are likely caused by slow adjustment between land...

  19. Rejecting the equilibrium-point hypothesis.

    Science.gov (United States)

    Gottlieb, G L

    1998-01-01

    The lambda version of the equilibrium-point (EP) hypothesis as developed by Feldman and colleagues has been widely used and cited with insufficient critical understanding. This article offers a small antidote to that lack. First, the hypothesis implicitly, unrealistically assumes identical transformations of lambda into muscle tension for antagonist muscles. Without that assumption, its definitions of command variables R, C, and lambda are incompatible and an EP is not defined exclusively by R nor is it unaffected by C. Second, the model assumes unrealistic and unphysiological parameters for the damping properties of the muscles and reflexes. Finally, the theory lacks rules for two of its three command variables. A theory of movement should offer insight into why we make movements the way we do and why we activate muscles in particular patterns. The EP hypothesis offers no unique ideas that are helpful in addressing either of these questions.

  20. The linear hypothesis and radiation carcinogenesis

    International Nuclear Information System (INIS)

    Roberts, P.B.

    1981-10-01

    An assumption central to most estimations of the carcinogenic potential of low levels of ionising radiation is that the risk always increases in direct proportion to the dose received. This assumption (the linear hypothesis) has been both strongly defended and attacked on several counts. It appears unlikely that conclusive, direct evidence on the validity of the hypothesis will be forthcoming. We review the major indirect arguments used in the debate. All of them are subject to objections that can seriously weaken their case. In the present situation, retention of the linear hypothesis as the basis of extrapolations from high to low dose levels can lead to excessive fears, over-regulation and unnecessarily expensive protection measures. To offset these possibilities, support is given to suggestions urging a cut-off dose, probably some fraction of natural background, below which risks can be deemed acceptable

  1. Rayleigh's hypothesis and the geometrical optics limit.

    Science.gov (United States)

    Elfouhaily, Tanos; Hahn, Thomas

    2006-09-22

    The Rayleigh hypothesis (RH) is often invoked in the theoretical and numerical treatment of rough surface scattering in order to decouple the analytical form of the scattered field. The hypothesis stipulates that the scattered field away from the surface can be extended down onto the rough surface even though it is formed by solely up-going waves. Traditionally this hypothesis is systematically used to derive the Volterra series under the small perturbation method, which is equivalent to the low-frequency limit. In this Letter we demonstrate that the RH also carries the high-frequency, or geometrical optics, limit, at least to first order. This finding has never been explicitly derived in the literature. Our result supports the idea that the RH might be an exact solution under some constraints in the general case of random rough surfaces and not only in the case of small-slope deterministic periodic gratings.

  2. P value and the theory of hypothesis testing: an explanation for new researchers.

    Science.gov (United States)

    Biau, David Jean; Jolles, Brigitte M; Porcher, Raphaël

    2010-03-01

    In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the difference observed, or one more extreme, given that the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
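
    As a minimal illustration of the two frameworks above, the following sketch (illustrative numbers; a normal model with known sigma is a simplifying assumption) computes a Fisher-style two-sided p value and then applies a Neyman-Pearson-style fixed-alpha decision:

```python
import math

def z_test_p_value(xbar, mu0, sigma, n):
    """Two-sided p value for H0: mu = mu0, under a normal model with known sigma."""
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    # P(|Z| >= |z|) under H0, via the complementary error function
    return math.erfc(abs(z) / math.sqrt(2))

# Fisher's view: the p value measures the strength of evidence against H0.
p = z_test_p_value(xbar=10.4, mu0=10.0, sigma=1.0, n=25)   # z = 2.0, p ~ 0.046

# Neyman-Pearson view: fix the Type I error level alpha in advance, then decide.
alpha = 0.05
reject_h0 = p < alpha
```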

  3. Bell's inequalities and Kolmogorov's axioms

    Indian Academy of Sciences (India)

    will be called random events. ... The conditional probability of event A, given event B, is defined by ... Moreover, if A and B are independent, P(A|B) = P(A). ... if a and b are the settings, and one or other of the four exclusive proclivities.

  4. Bell's inequalities and Kolmogorov's axioms

    Indian Academy of Sciences (India)

    Abstract. After recalling proofs of the Bell inequality based on the assumptions of separability and of noncontextuality, the most general noncontextual contrapositive conditional probabilities consistent with the Aspect experiment are constructed. In general these probabilities are not all positive.

  5. On the generalized gravi-magnetic hypothesis

    International Nuclear Information System (INIS)

    Massa, C.

    1989-01-01

    According to a generalization of the gravi-magnetic hypothesis (GMH) any neutral mass moving in a curvilinear path with respect to an inertial frame creates a magnetic field, dependent on the curvature radius of the path. A simple astrophysical consequence of the generalized GMH is suggested considering the special cases of binary pulsars and binary neutron stars

  6. Remarks about the hypothesis of limiting fragmentation

    International Nuclear Information System (INIS)

    Chou, T.T.; Yang, C.N.

    1987-01-01

    Remarks are made about the hypothesis of limiting fragmentation. In particular, the concept of favored and disfavored fragment distribution is introduced. Also, a sum rule is proved leading to a useful quantity called energy-fragmentation fraction. (author). 11 refs, 1 fig., 2 tabs

  7. Multiple hypothesis clustering in radar plot extraction

    NARCIS (Netherlands)

    Huizing, A.G.; Theil, A.; Dorp, Ph. van; Ligthart, L.P.

    1995-01-01

    False plots and plots with inaccurate range and Doppler estimates may severely degrade the performance of tracking algorithms in radar systems. This paper describes how a multiple hypothesis clustering technique can be applied to mitigate the problems involved in plot extraction. The measures of

  8. The (not so) immortal strand hypothesis

    Directory of Open Access Journals (Sweden)

    Cristian Tomasetti

    2015-03-01

    Significance: Utilizing an approach that is fundamentally different from previous efforts to confirm or refute the immortal strand hypothesis, we provide evidence against non-random segregation of DNA during stem cell replication. Our results strongly suggest that parental DNA is passed randomly to stem cell daughters and provides new insight into the mechanism of DNA replication in stem cells.

  9. A Developmental Study of the Infrahumanization Hypothesis

    Science.gov (United States)

    Martin, John; Bennett, Mark; Murray, Wayne S.

    2008-01-01

    Intergroup attitudes in children were examined based on Leyen's "infrahumanization hypothesis". This suggests that some uniquely human emotions, such as shame and guilt (secondary emotions), are reserved for the in-group, whilst other emotions that are not uniquely human and shared with animals, such as anger and pleasure (primary…

  10. Morbidity and Infant Development: A Hypothesis.

    Science.gov (United States)

    Pollitt, Ernesto

    1983-01-01

    Results of a study conducted in 14 villages of Sui Lin Township, Taiwan, suggest the hypothesis that, under conditions of extreme economic impoverishment and among children within populations where energy protein malnutrition is endemic, there is an inverse relationship between incidence of morbidity in infancy and measures of motor and mental…

  11. Diagnostic Hypothesis Generation and Human Judgment

    Science.gov (United States)

    Thomas, Rick P.; Dougherty, Michael R.; Sprenger, Amber M.; Harbison, J. Isaiah

    2008-01-01

    Diagnostic hypothesis-generation processes are ubiquitous in human reasoning. For example, clinicians generate disease hypotheses to explain symptoms and help guide treatment, auditors generate hypotheses for identifying sources of accounting errors, and laypeople generate hypotheses to explain patterns of information (i.e., data) in the…

  12. Multi-hypothesis distributed stereo video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Zamarin, Marco; Forchhammer, Søren

    2013-01-01

    for stereo sequences, exploiting an interpolated intra-view SI and two inter-view SIs. The quality of the SI has a major impact on the DVC Rate-Distortion (RD) performance. As the inter-view SIs individually present lower RD performance compared with the intra-view SI, we propose multi-hypothesis decoding...

  13. [Resonance hypothesis of heart rate variability origin].

    Science.gov (United States)

    Sheĭkh-Zade, Iu R; Mukhambetaliev, G Kh; Cherednik, I L

    2009-09-01

    A hypothesis is advanced that heart rate variability arises from beat-to-beat regulation of cardiac cycle duration, which ensures resonance interaction between the respiratory and the natural fluctuations of arterial system volume so as to minimize the energy expenditure of the cardiorespiratory system. Myogenic, parasympathetic and sympathetic mechanisms of heart rate variability are described.

  14. In Defense of Chi's Ontological Incompatibility Hypothesis

    Science.gov (United States)

    Slotta, James D.

    2011-01-01

    This article responds to an article by A. Gupta, D. Hammer, and E. F. Redish (2010) that asserts that M. T. H. Chi's (1992, 2005) hypothesis of an "ontological commitment" in conceptual development is fundamentally flawed. In this article, I argue that Chi's theoretical perspective is still very much intact and that the critique offered by Gupta…

  15. Vacuum counterexamples to the cosmic censorship hypothesis

    International Nuclear Information System (INIS)

    Miller, B.D.

    1981-01-01

    In cylindrically symmetric vacuum spacetimes it is possible to specify nonsingular initial conditions such that timelike singularities will (necessarily) evolve from these conditions. Examples are given; the spacetimes are somewhat analogous to one of the spherically symmetric counterexamples to the cosmic censorship hypothesis

  16. A novel hypothesis splitting method implementation for multi-hypothesis filters

    DEFF Research Database (Denmark)

    Bayramoglu, Enis; Ravn, Ole; Andersen, Nils Axel

    2013-01-01

    The paper presents a multi-hypothesis filter library featuring a novel method for splitting Gaussians into ones with smaller variances. The library is written in C++ for high performance, and the source code is open and free. The multi-hypothesis filters commonly approximate the distribution tran...
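
    The abstract does not give the library's actual splitting formula, so the following is only a hedged sketch of one common moment-matching scheme: a 1-D Gaussian is replaced by an equal-weight three-component mixture of narrower Gaussians whose overall mean and variance match the original.

```python
import math

def split_gaussian(mu, sigma, sigma_small):
    """Split N(mu, sigma^2) into an equal-weight 3-component mixture of
    narrower Gaussians, matching the original mean and variance."""
    # mixture variance = sigma_small^2 + (2/3) * d^2, so solve for the offset d
    d = math.sqrt(1.5 * (sigma**2 - sigma_small**2))
    weights = [1 / 3, 1 / 3, 1 / 3]
    means = [mu - d, mu, mu + d]
    sigmas = [sigma_small] * 3
    return weights, means, sigmas

w, m, s = split_gaussian(mu=0.0, sigma=2.0, sigma_small=1.0)
mix_mean = sum(wi * mi for wi, mi in zip(w, m))
mix_var = sum(wi * (si**2 + (mi - mix_mean)**2) for wi, mi, si in zip(w, m, s))
# mix_mean is 0.0 and mix_var is 4.0, matching the original N(0, 2^2)
```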

  17. The Income Inequality Hypothesis Revisited : Assessing the Hypothesis Using Four Methodological Approaches

    NARCIS (Netherlands)

    Kragten, N.; Rözer, J.

    The income inequality hypothesis states that income inequality has a negative effect on individual’s health, partially because it reduces social trust. This article aims to critically assess the income inequality hypothesis by comparing several analytical strategies, namely OLS regression,

  18. Einstein's Revolutionary Light-Quantum Hypothesis

    Science.gov (United States)

    Stuewer, Roger H.

    2005-05-01

    The paper in which Albert Einstein proposed his light-quantum hypothesis was the only one of his great papers of 1905 that he himself termed "revolutionary." Contrary to widespread belief, Einstein did not propose his light-quantum hypothesis "to explain the photoelectric effect." Instead, he based his argument for light quanta on the statistical interpretation of the second law of thermodynamics, with the photoelectric effect being only one of three phenomena that he offered as possible experimental support for it. I will discuss Einstein's light-quantum hypothesis of 1905 and his introduction of the wave-particle duality in 1909 and then turn to the reception of his work on light quanta by his contemporaries. We will examine the reasons that prominent physicists advanced to reject Einstein's light-quantum hypothesis in succeeding years. Those physicists included Robert A. Millikan, even though he provided convincing experimental proof of the validity of Einstein's equation of the photoelectric effect in 1915. The turning point came after Arthur Holly Compton discovered the Compton effect in late 1922, but even then Compton's discovery was contested both on experimental and on theoretical grounds. Niels Bohr, in particular, had never accepted the reality of light quanta and now, in 1924, proposed a theory, the Bohr-Kramers-Slater theory, which assumed that energy and momentum were conserved only statistically in microscopic interactions. Only after that theory was disproved experimentally in 1925 was Einstein's revolutionary light-quantum hypothesis generally accepted by physicists, a full two decades after Einstein had proposed it.

  19. The Literal Translation Hypothesis in ESP Teaching/Learning Environments

    Directory of Open Access Journals (Sweden)

    Pedro A. Fuertes-Olivera

    2015-11-01

    Full Text Available Research on the characteristics of specialized vocabulary usually replicates studies that deal with general words, e.g. they typically describe frequent terms and focus on their linguistic characteristics to aid in the learning and acquisition of the terms. We dispute this practice, as we believe that the basic characteristic of terms is that they are coined to restrict meaning, i.e. to be as precise and as specific as possible in a particular context. For instance, around 70% of English and Spanish accounting terms are multi-word terms, most of which contain more than three orthographic words that syntactically behave in a way that is very different from the syntactic behaviour of the node on which they are formed (Fuertes-Olivera and Tarp, forthcoming). This has prompted us to propose a research framework that investigates whether or not the literal translation hypothesis, which has been addressed in several areas of translation studies, can also be applied in ESP teaching/learning environments. If plausible, the assumptions on which this hypothesis is based can shed light on how learners disambiguate terms they encounter. Within this framework, this paper presents evidence that the literal translation hypothesis is possible in ESP; it offers the results of a pilot study that sheds light on how this hypothesis may work, and also discusses its usability in the context of ESP learning. In particular, this paper presents strategies for teaching multi-word terms that are different from those currently based on corpus data. We believe that exercises such as "cloze", "fill in" and similar "guessing" exercises must be abandoned in ESP teaching/learning environments. Instead, we propose exercises that reproduce L1 teaching and learning activities, i.e., exercises that are typically used when acquiring specialised knowledge and skills in any domain, e.g. taking part in meetings and giving presentations in a business context.

  20. Why do urban communities with similar conditions of social ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Urban marginal territories with similar levels of social exclusion might present different degrees of violence because of differences in the capacity of the community to act to confront the phenomenon of violence. Research questions. Hypothesis. The research was carried out in. Outputs. Methodology. Household survey.

  1. A COMPARISON OF SEMANTIC SIMILARITY MODELS IN EVALUATING CONCEPT SIMILARITY

    Directory of Open Access Journals (Sweden)

    Q. X. Xu

    2012-08-01

    Full Text Available Semantic similarities are important in concept definition, recognition, categorization, interpretation, and integration. Many semantic similarity models have been established to evaluate the semantic similarities of objects or/and concepts. To find out the suitability and performance of different models in evaluating concept similarities, we compare four main types of models in this paper: the geometric model, the feature model, the network model, and the transformational model. Fundamental principles and main characteristics of these models are introduced and compared first. Land use and land cover concepts of NLCD92 are employed as examples in the case study. The results demonstrate that correlations between these models are very high, a possible reason being that all these models are designed to simulate the similarity judgment of the human mind.
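
    As a minimal illustration of two of the four model families compared above (the concept vectors and feature sets below are invented for the example, not taken from NLCD92):

```python
import math

def cosine_similarity(a, b):
    """Geometric model: similarity derived from directions in a feature space."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def tversky_similarity(a, b, alpha=0.5, beta=0.5):
    """Feature model: Tversky's contrast of shared vs. distinctive features."""
    a, b = set(a), set(b)
    common = len(a & b)
    return common / (common + alpha * len(a - b) + beta * len(b - a))

# hypothetical land-cover concepts described by attribute vectors / feature sets
geo = cosine_similarity([0.9, 0.8, 0.1], [0.7, 0.9, 0.4])
feat = tversky_similarity({"vegetated", "natural", "dry"},
                          {"vegetated", "natural", "wet"})  # 2 / (2 + 0.5 + 0.5)
```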

  2. Tests of the Giant Impact Hypothesis

    Science.gov (United States)

    Jones, J. H.

    1998-01-01

    The giant impact hypothesis has gained popularity as a means of explaining a volatile-depleted Moon that still has a chemical affinity to the Earth. As Taylor's Axiom decrees, the best models of lunar origin are testable, but this is difficult with the giant impact model. The energy associated with the impact would be sufficient to totally melt and partially vaporize the Earth, and this means that there should be no geological vestige of earlier times. Accordingly, it is important to devise tests that may be used to evaluate the giant impact hypothesis. Three such tests are discussed here. None of these is supportive of the giant impact model, but neither do they disprove it.

  3. The discovered preference hypothesis - an empirical test

    DEFF Research Database (Denmark)

    Lundhede, Thomas; Ladenburg, Jacob; Olsen, Søren Bøye

    Using stated preference methods for valuation of non-market goods is known to be vulnerable to a range of biases. Some authors claim that these so-called anomalies in effect render the methods useless for the purpose. However, the Discovered Preference Hypothesis, as put forth by Plott [31], offers...... an interpretation and explanation of biases which entails that the stated preference methods need not be completely written off. In this paper we conduct a test for the validity and relevance of the DPH interpretation of biases. In a choice experiment concerning preferences for protection of Danish nature areas...... as respondents evaluate more and more choice sets. This finding supports the Discovered Preference Hypothesis interpretation and explanation of starting point bias....

  4. The Hypothesis-Driven Physical Examination.

    Science.gov (United States)

    Garibaldi, Brian T; Olson, Andrew P J

    2018-05-01

    The physical examination remains a vital part of the clinical encounter. However, physical examination skills have declined in recent years, in part because of decreased time at the bedside. Many clinicians question the relevance of physical examinations in the age of technology. A hypothesis-driven approach to teaching and practicing the physical examination emphasizes the performance of maneuvers that can alter the likelihood of disease. Likelihood ratios are diagnostic weights that allow clinicians to estimate the post-test probability of disease. This hypothesis-driven approach to the physical examination increases its value and efficiency, while preserving its cultural role in the patient-physician relationship. Copyright © 2017 Elsevier Inc. All rights reserved.
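
    The likelihood-ratio update described above can be sketched as follows (the pretest probability and LR value are illustrative, not taken from the article):

```python
def post_test_probability(pretest_p, likelihood_ratio):
    """Convert probability to odds, multiply by the LR, convert back."""
    pre_odds = pretest_p / (1.0 - pretest_p)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# e.g. a 20% pretest probability and an exam finding with LR+ = 8
p_post = post_test_probability(0.20, 8.0)  # pre-odds 0.25 -> post-odds 2.0 -> ~0.67
```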

  5. MOLIERE: Automatic Biomedical Hypothesis Generation System.

    Science.gov (United States)

    Sybrandt, Justin; Shtutman, Michael; Safro, Ilya

    2017-08-01

    Hypothesis generation is becoming a crucial time-saving technique which allows biomedical researchers to quickly discover implicit connections between important concepts. Typically, these systems operate on domain-specific fractions of public medical data. MOLIERE, in contrast, utilizes information from over 24.5 million documents. At the heart of our approach lies a multi-modal and multi-relational network of biomedical objects extracted from several heterogeneous datasets from the National Center for Biotechnology Information (NCBI). These objects include but are not limited to scientific papers, keywords, genes, proteins, diseases, and diagnoses. We model hypotheses using Latent Dirichlet Allocation applied on abstracts found near shortest paths discovered within this network, and demonstrate the effectiveness of MOLIERE by performing hypothesis generation on historical data. Our network, implementation, and resulting data are all publicly available for the broad scientific community.

  6. The Method of Hypothesis in Plato's Philosophy

    Directory of Open Access Journals (Sweden)

    Malihe Aboie Mehrizi

    2016-09-01

    Full Text Available The article examines the method of hypothesis in Plato's philosophy. The method will be examined in three dialogues, Meno, Phaedon and Republic, in which it is explicitly indicated. It will be shown how Plato's attitude towards the position and usage of the method of hypothesis changed within his philosophy. In Meno, drawing on geometry, Plato attempts to introduce a method that can be used in the realm of philosophy. But ultimately, in Republic, Plato's special attention to the method and its importance in philosophical investigations leads him to revise it. Here, finally, Plato introduces the particular method of philosophy, i.e., the dialectic.

  7. Debates—Hypothesis testing in hydrology: Introduction

    Science.gov (United States)

    Blöschl, Günter

    2017-03-01

    This paper introduces the papers in the "Debates—Hypothesis testing in hydrology" series. The four articles in the series discuss whether and how the process of testing hypotheses leads to progress in hydrology. Repeated experiments with controlled boundary conditions are rarely feasible in hydrology. Research is therefore not easily aligned with the classical scientific method of testing hypotheses. Hypotheses in hydrology are often enshrined in computer models which are tested against observed data. Testability may be limited due to model complexity and data uncertainty. All four articles suggest that hypothesis testing has contributed to progress in hydrology and is needed in the future. However, the procedure is usually not as systematic as the philosophy of science suggests. A greater emphasis on a creative reasoning process on the basis of clues and explorative analyses is therefore needed.

  8. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.

    2014-12-15

    This paper considers multi-agent sequential hypothesis testing and presents a framework for strategic learning in sequential games with explicit consideration of both temporal and spatial coordination. The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to (a) the belief states for the case of conditional independent private noisy measurements that are also assumed to be independent identically distributed over time, and (b) the information states for the case of correlated private noisy measurements. A sequential investment game of strategic coordination and delay is also discussed as an application of the proposed strategic learning rules.
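
    The paper's multi-agent Bayes-risk formulation is not reproduced here; as background, the classical single-agent building block it generalizes is Wald's sequential probability ratio test, sketched below for Bernoulli observations with illustrative parameters:

```python
import math

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: p = p0 vs. H1: p = p1 on Bernoulli samples.
    Returns the decision and the number of samples consumed."""
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr <= lower:
            return "H0", n
        if llr >= upper:
            return "H1", n
    return "undecided", len(samples)

decision, n_used = sprt([1, 1, 1, 1, 1, 1, 1, 1], p0=0.2, p1=0.8)
# three successes in a row already push the log-likelihood ratio past the H1 threshold
```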

  9. Hypothesis testing of scientific Monte Carlo calculations

    Science.gov (United States)

    Wallerberger, Markus; Gull, Emanuel

    2017-11-01

    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
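
    A minimal example of the idea (not the authors' implementation): estimate pi/4 by Monte Carlo and z-test the hit rate against its known expectation, so that a buggy sampler would reveal itself through systematically tiny p values:

```python
import math
import random

def mc_pi_p_value(n_samples, seed=42):
    """p value for H0: the dart-throwing estimator of pi/4 is unbiased."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 < 1.0)
    p_hat = hits / n_samples                    # estimates pi/4
    se = math.sqrt(p_hat * (1.0 - p_hat) / n_samples)
    z = (p_hat - math.pi / 4.0) / se
    return math.erfc(abs(z) / math.sqrt(2))     # two-sided normal p value

p = mc_pi_p_value(100_000)  # a correct sampler yields a uniformly distributed p
```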

  10. Reverse hypothesis machine learning a practitioner's perspective

    CERN Document Server

    Kulkarni, Parag

    2017-01-01

    This book introduces a paradigm of reverse hypothesis machines (RHM), focusing on knowledge innovation and machine learning. Knowledge-acquisition-based learning is constrained by large volumes of data and is time consuming; hence knowledge-innovation-based learning is the need of the hour. Since under-learning results in cognitive inabilities and over-learning compromises freedom, there is a need for optimal machine learning. All existing learning techniques rely on mapping input and output and establishing mathematical relationships between them. Though methods change, the paradigm remains the same—the forward hypothesis machine paradigm, which tries to minimize uncertainty. The RHM, on the other hand, makes use of uncertainty for creative learning. The approach uses limited data to help identify new and surprising solutions. It focuses on improving learnability, unlike traditional approaches, which focus on accuracy. The book is useful as a reference book for machine learning researchers and professionals as ...

  11. Exploring heterogeneous market hypothesis using realized volatility

    Science.gov (United States)

    Chin, Wen Cheong; Isa, Zaidi; Mohd Nor, Abu Hassan Shaari

    2013-04-01

    This study investigates the heterogeneous market hypothesis using high frequency data. The cascaded heterogeneous trading activities with different time durations are modelled by the heterogeneous autoregressive framework. The empirical study indicated the presence of long memory behaviour and predictability elements in the financial time series, which supported the heterogeneous market hypothesis. Besides the common sum-of-square intraday realized volatility, we also advocate two power-variation realized volatilities in forecast evaluation and risk measurement in order to overcome the possible abrupt jumps during the credit crisis. Finally, the empirical results are used in determining the market risk using the value-at-risk approach. The findings of this study have implications for informational market efficiency analysis, portfolio strategies and risk management.
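
    The common sum-of-squares realized volatility mentioned above can be sketched as follows (hypothetical intraday prices; the power-variation variants are omitted):

```python
import math

def realized_volatility(prices):
    """Square root of the sum of squared intraday log-returns."""
    logs = [math.log(p) for p in prices]
    returns = [b - a for a, b in zip(logs, logs[1:])]
    return math.sqrt(sum(r * r for r in returns))

intraday = [100.0, 100.5, 99.8, 100.2, 100.9]  # hypothetical 5-minute prices
rv = realized_volatility(intraday)             # ~0.0118 for this toy series
```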

  12. Renewing the Respect for Similarity

    Directory of Open Access Journals (Sweden)

    Shimon eEdelman

    2012-07-01

    Full Text Available In psychology, the concept of similarity has traditionally evoked a mixture of respect, stemming from its ubiquity and intuitive appeal, and concern, due to its dependence on the framing of the problem at hand and on its context. We argue for a renewed focus on similarity as an explanatory concept, by surveying established results and new developments in the theory and methods of similarity-preserving associative lookup and dimensionality reduction — critical components of many cognitive functions, as well as of intelligent data management in computer vision. We focus in particular on the growing family of algorithms that support associative memory by performing hashing that respects local similarity, and on the uses of similarity in representing structured objects and scenes. Insofar as these similarity-based ideas and methods are useful in cognitive modeling and in AI applications, they should be included in the core conceptual toolkit of computational neuroscience.
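
    One widely used member of the locality-sensitive hashing family alluded to above is random-hyperplane hashing (SimHash) for cosine similarity; a minimal sketch with invented vectors:

```python
import random

def simhash_signature(vec, planes):
    """One bit per random hyperplane: which side of the plane the vector falls on.
    Vectors at a small angle tend to receive similar bit signatures."""
    return tuple(int(sum(v * p for v, p in zip(vec, plane)) >= 0.0)
                 for plane in planes)

def hamming(s, t):
    return sum(x != y for x, y in zip(s, t))

rng = random.Random(0)
planes = [[rng.gauss(0, 1) for _ in range(3)] for _ in range(64)]

a = [1.0, 0.9, 0.1]
b = [1.0, 1.0, 0.0]    # small angle to a
c = [-1.0, 0.2, 5.0]   # nearly orthogonal to a

d_ab = hamming(simhash_signature(a, planes), simhash_signature(b, planes))
d_ac = hamming(simhash_signature(a, planes), simhash_signature(c, planes))
# d_ab should come out much smaller than d_ac: the hash respects angular similarity
```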

  13. Water Taxation and the Double Dividend Hypothesis

    OpenAIRE

    Nicholas Kilimani

    2014-01-01

    The double dividend hypothesis contends that environmental taxes have the potential to yield multiple benefits for the economy. However, empirical evidence of the potential impacts of environmental taxation in developing countries is still limited. This paper seeks to contribute to the literature by exploring the impact of a water tax in a developing country context, with Uganda as a case study. Policy makers in Uganda are exploring ways of raising revenue by taxing environmental goods such a...

  14. [Working memory, phonological awareness and spelling hypothesis].

    Science.gov (United States)

    Gindri, Gigiane; Keske-Soares, Márcia; Mota, Helena Bolli

    2007-01-01

    To verify the relationship between working memory, phonological awareness and spelling hypothesis in pre-school children and first graders. Participants of this study were 90 students belonging to state schools who presented typical linguistic development: 40 were preschoolers, with an average age of six, and 50 were first graders, with an average age of seven. Participants were submitted to an evaluation of working memory abilities based on the Working Memory Model (Baddeley, 2000), involving the phonological loop. The phonological loop was evaluated using the Auditory Sequential Test, subtest 5 of the Illinois Test of Psycholinguistic Abilities (ITPA), Brazilian version (Bogossian & Santos, 1977), and the Meaningless Words Memory Test (Kessler, 1997). Phonological awareness abilities were investigated using the Phonological Awareness: Instrument of Sequential Assessment (CONFIAS - Moojen et al., 2003), involving syllabic and phonemic awareness tasks. Writing was characterized according to Ferreiro & Teberosky (1999). Preschoolers were able to repeat sequences of 4.80 digits and 4.30 syllables on average; regarding phonological awareness, their performance was 19.68 at the syllabic level and 8.58 at the phonemic level, and most demonstrated a pre-syllabic writing hypothesis. First graders repeated, on average, sequences of 5.06 digits and 4.56 syllables; they presented phonological awareness scores of 31.12 at the syllabic level and 16.18 at the phonemic level, and demonstrated an alphabetic writing hypothesis. Performance on working memory, phonological awareness and spelling level are inter-related, as well as being related to chronological age, development and schooling.

  15. Privacy on Hypothesis Testing in Smart Grids

    OpenAIRE

    Li, Zuxing; Oechtering, Tobias

    2015-01-01

    In this paper, we study the problem of privacy information leakage in a smart grid. The privacy risk is assumed to be caused by an unauthorized binary hypothesis testing of the consumer's behaviour based on the smart meter readings of energy supplies from the energy provider. Additional energy supplies are produced by an alternative energy source. A controller equipped with an energy storage device manages the energy inflows to satisfy the energy demand of the consumer. We study the optimal ener...

  16. Box-particle probability hypothesis density filtering

    OpenAIRE

    Schikora, M.; Gning, A.; Mihaylova, L.; Cremers, D.; Koch, W.

    2014-01-01

    This paper develops a novel approach for multitarget tracking, called box-particle probability hypothesis density filter (box-PHD filter). The approach is able to track multiple targets and estimates the unknown number of targets. Furthermore, it is capable of dealing with three sources of uncertainty: stochastic, set-theoretic, and data association uncertainty. The box-PHD filter reduces the number of particles significantly, which improves the runtime considerably. The small number of box-p...

  17. Quantum effects and hypothesis of cosmic censorship

    International Nuclear Information System (INIS)

    Parnovskij, S.L.

    1989-01-01

    It is shown that filamentary structures with a linear mass of less than 10^25 g/cm only slightly distort space-time at distances exceeding the Planck length. Their formation does not change the vacuum energy and does not lead to strong quantum radiation. Therefore, the problem of their occurrence can be considered within the framework of classical collapse. Quantum effects can be ignored when considering the validity of the cosmic censorship hypothesis.

  18. Consumer health information seeking as hypothesis testing.

    Science.gov (United States)

    Keselman, Alla; Browne, Allen C; Kaufman, David R

    2008-01-01

    Despite the proliferation of consumer health sites, lay individuals often experience difficulty finding health information online. The present study attempts to understand users' information seeking difficulties by drawing on a hypothesis testing explanatory framework. It also addresses the role of user competencies and their interaction with internet resources. Twenty participants were interviewed about their understanding of a hypothetical scenario about a family member suffering from stable angina and then searched the MedlinePlus consumer health information portal for information on the problem presented in the scenario. Participants' understanding of heart disease was analyzed via semantic analysis. Thematic coding was used to describe information seeking trajectories in terms of three key strategies: verification of the primary hypothesis, narrowing the search within the general hypothesis area, and bottom-up search. Compared to an expert model, participants' understanding of heart disease involved different key concepts, which were also differently grouped and defined. This understanding provided the framework for search-guiding hypotheses and results interpretation. Incorrect or imprecise domain knowledge led individuals to search for information on irrelevant sites, often seeking out data to confirm their incorrect initial hypotheses. Online search skills enhanced search efficiency, but did not eliminate these difficulties. Regardless of their web experience and general search skills, lay individuals may experience difficulty with health information searches. These difficulties may be related to formulating and evaluating hypotheses that are rooted in their domain knowledge. Informatics can provide support at the levels of health information portals, individual websites, and consumer education tools.

  19. Self-similar cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Chao, W Z [Cambridge Univ. (UK). Dept. of Applied Mathematics and Theoretical Physics

    1981-07-01

    The kinematics and dynamics of self-similar cosmological models are discussed. The degrees of freedom of the solutions of Einstein's equations for different types of models are listed. The relation between kinematic quantities and the classifications of the self-similarity group is examined. All dust models with local rotational symmetry have been found.

  20. Self-similar factor approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.; Sornette, D.

    2003-01-01

    The problem of reconstructing functions from their asymptotic expansions in powers of a small variable is addressed by deriving an improved type of approximants. The derivation is based on the self-similar approximation theory, which presents the passage from one approximant to another as the motion realized by a dynamical system with the property of group self-similarity. The derived approximants, because of their form, are called self-similar factor approximants. These complement the previously obtained self-similar exponential approximants and self-similar root approximants. The specific feature of self-similar factor approximants is that their control functions, providing convergence of the computational algorithm, are completely defined from the accuracy-through-order conditions. These approximants contain the Padé approximants as a particular case, and in some limit they can be reduced to the self-similar exponential approximants previously introduced by two of us. It is proved that the self-similar factor approximants are able to reproduce exactly a wide class of functions, which includes a variety of nonalgebraic functions. For other functions, not pertaining to this exactly reproducible class, the factor approximants provide very accurate approximations, whose accuracy surpasses significantly that of the most accurate Padé approximants. This is illustrated by a number of examples showing the generality and accuracy of the factor approximants even when conventional techniques meet serious difficulties.

  1. Dynamic similarity in erosional processes

    Science.gov (United States)

    Scheidegger, A.E.

    1963-01-01

    A study is made of the dynamic similarity conditions obtaining in a variety of erosional processes. The pertinent equations for each type of process are written in dimensionless form; the similarity conditions can then easily be deduced. The processes treated are: raindrop action, slope evolution and river erosion. ?? 1963 Istituto Geofisico Italiano.

  2. Personalized recommendation with corrected similarity

    International Nuclear Information System (INIS)

    Zhu, Xuzhen; Tian, Hui; Cai, Shimin

    2014-01-01

    Personalized recommendation has attracted a surge of interdisciplinary research. In particular, similarity-based methods in real recommendation systems have achieved great success. However, computed similarities are often overestimated or underestimated, largely because of the defective strategy of unidirectional similarity estimation. In this paper, we address this drawback by leveraging mutual correction of forward and backward similarity estimations, and propose a new personalized recommendation index, i.e., corrected similarity based inference (CSI). Extensive experiments on four benchmark datasets show that CSI clearly improves on mainstream baselines. A detailed analysis is presented to unveil the origin of the difference between CSI and mainstream indices. (paper)
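
    The mutual-correction idea the abstract describes (combining forward and backward directed similarity estimates) can be sketched in a few lines. The toy data, function names, and geometric-mean combination below are illustrative assumptions, not the paper's exact CSI formula:

```python
import math

# Toy item -> users data (invented): the set of users who collected each item
items = {
    "A": {1, 2, 3, 4},
    "B": {2, 3},
    "C": {1, 2, 3, 4, 5, 6},
}

def directed(i, j):
    """Directed (unidirectional) similarity of item i toward item j:
    shared users normalized by i's own degree."""
    return len(items[i] & items[j]) / len(items[i])

def corrected(i, j):
    """Mutual correction of the forward and backward estimates via their
    geometric mean (an illustrative stand-in for the CSI index)."""
    return math.sqrt(directed(i, j) * directed(j, i))

f = directed("B", "A")   # forward view: B looks identical to A (overestimate)
b = directed("A", "B")   # backward view disagrees (underestimate)
c = corrected("A", "B")  # symmetrized estimate lies between the two
```

    The two directed estimates disagree whenever the items' degrees differ; the correction removes the dependence on which direction was chosen.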

  3. Towards Personalized Medicine: Leveraging Patient Similarity and Drug Similarity Analytics

    Science.gov (United States)

    Zhang, Ping; Wang, Fei; Hu, Jianying; Sorrentino, Robert

    2014-01-01

    The rapid adoption of electronic health records (EHR) provides a comprehensive source for exploratory and predictive analytics to support clinical decision-making. In this paper, we investigate how to utilize EHR to tailor treatments to individual patients based on their likelihood to respond to a therapy. We construct a heterogeneous graph which includes two domains (patients and drugs) and encodes three relationships (patient similarity, drug similarity, and patient-drug prior associations). We describe a novel approach for performing a label propagation procedure to spread the label information representing the effectiveness of different drugs for different patients over this heterogeneous graph. The proposed method has been applied to a real-world EHR dataset to help identify personalized treatments for hypercholesterolemia. The experimental results demonstrate the effectiveness of the approach and suggest that the combination of appropriate patient similarity and drug similarity analytics could lead to actionable insights for personalized medicine. Particularly, by leveraging drug similarity in combination with patient similarity, our method could perform well even on new or rarely used drugs for which there are few records of known past performance. PMID:25717413

  4. Similar tests and the standardized log likelihood ratio statistic

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1986-01-01

    When testing an affine hypothesis in an exponential family the 'ideal' procedure is to calculate the exact similar test, or an approximation to this, based on the conditional distribution given the minimal sufficient statistic under the null hypothesis. In contrast to this there is a 'primitive......' approach in which the marginal distribution of a test statistic is considered and any nuisance parameter appearing in the test statistic is replaced by an estimate. We show here that when using standardized likelihood ratio statistics the 'primitive' procedure is in fact an 'ideal' procedure to order O(n -3...

  5. Updating the lamellar hypothesis of hippocampal organization

    Directory of Open Access Journals (Sweden)

    Robert S Sloviter

    2012-12-01

    Full Text Available In 1971, Andersen and colleagues proposed that excitatory activity in the entorhinal cortex propagates topographically to the dentate gyrus, and on through a trisynaptic circuit lying within transverse hippocampal slices or lamellae [Andersen, Bliss, and Skrede. 1971. Lamellar organization of hippocampal pathways. Exp Brain Res 13, 222-238]. In this way, a relatively simple structure might mediate complex functions in a manner analogous to the way independent piano keys can produce a nearly infinite variety of unique outputs. The lamellar hypothesis derives primary support from the lamellar distribution of dentate granule cell axons (the mossy fibers), which innervate dentate hilar neurons and area CA3 pyramidal cells and interneurons within the confines of a thin transverse hippocampal segment. Following the initial formulation of the lamellar hypothesis, anatomical studies revealed that unlike granule cells, hilar mossy cells, CA3 pyramidal cells, and Layer II entorhinal cells all form axonal projections that are more divergent along the longitudinal axis than the clearly lamellar mossy fiber pathway. The existence of pathways with translamellar distribution patterns has been interpreted, incorrectly in our view, as justifying outright rejection of the lamellar hypothesis [Amaral and Witter. 1989. The three-dimensional organization of the hippocampal formation: a review of anatomical data. Neuroscience 31, 571-591]. We suggest that the functional implications of longitudinally-projecting axons depend not on whether they exist, but on what they do. The observation that focal granule cell layer discharges normally inhibit, rather than excite, distant granule cells suggests that longitudinal axons in the dentate gyrus may mediate "lateral" inhibition and define lamellar function, rather than undermine it. In this review, we attempt a reconsideration of the evidence that most directly impacts the physiological concept of hippocampal lamellar

  6. Hypothesis Testing as an Act of Rationality

    Science.gov (United States)

    Nearing, Grey

    2017-04-01

    Statistical hypothesis testing is ad hoc in two ways. First, setting probabilistic rejection criteria is, as Neyman (1957) put it, an act of will rather than an act of rationality. Second, physical theories like conservation laws do not inherently admit probabilistic predictions, and so we must use what are called epistemic bridge principles to connect model predictions with the actual methods of hypothesis testing. In practice, these bridge principles are likelihood functions, error functions, or performance metrics. I propose that the reason we are faced with these problems is because we have historically failed to account for a fundamental component of basic logic - namely the portion of logic that explains how epistemic states evolve in the presence of empirical data. This component of Cox's (1946) calculitic logic is called information theory (Knuth, 2005), and adding information theory to our hypothetico-deductive account of science yields straightforward solutions to both of the above problems. This also yields a straightforward method for dealing with Popper's (1963) problem of verisimilitude by facilitating a quantitative approach to measuring process isomorphism. In practice, this involves data assimilation. Finally, information theory allows us to reliably bound measures of epistemic uncertainty, thereby avoiding the problem of Bayesian incoherency under misspecified priors (Grünwald, 2006). I therefore propose solutions to four of the fundamental problems inherent in both hypothetico-deductive and/or Bayesian hypothesis testing. - Neyman (1957) Inductive Behavior as a Basic Concept of Philosophy of Science. - Cox (1946) Probability, Frequency and Reasonable Expectation. - Knuth (2005) Lattice Duality: The Origin of Probability and Entropy. - Grünwald (2006). Bayesian Inconsistency under Misspecification. - Popper (1963) Conjectures and Refutations: The Growth of Scientific Knowledge.

  7. The conscious access hypothesis: Explaining the consciousness.

    Science.gov (United States)

    Prakash, Ravi

    2008-01-01

    The phenomenon of conscious awareness or consciousness is complicated but fascinating. Although this concept has intrigued mankind since antiquity, the scientific exploration of consciousness is relatively recent. Among the myriad theories regarding the nature, functions and mechanism of consciousness, cognitive theories have of late received wider acceptance. One of the most exciting hypotheses in recent times has been the "conscious access hypothesis", based on the "global workspace model of consciousness". It underscores an important property of consciousness: the global access of information in the cerebral cortex. The present article reviews the "conscious access hypothesis" in terms of its theoretical underpinnings as well as the experimental support it has received.

  8. Interstellar colonization and the zoo hypothesis

    International Nuclear Information System (INIS)

    Jones, E.M.

    1978-01-01

    Michael Hart and others have pointed out that current estimates of the number of technological civilizations that have arisen in the Galaxy since its formation are in fundamental conflict with the expectation that such a civilization could colonize and utilize the entire Galaxy in 10 to 20 million years. This dilemma can be called Hart's paradox. Resolution of the paradox requires that one or more of the following are true: we are the Galaxy's first technical civilization; interstellar travel is immensely impractical or simply impossible; technological civilizations are very short-lived; or we inhabit a wilderness preserve. The latter is the zoo hypothesis.

  9. RANDOM WALK HYPOTHESIS IN FINANCIAL MARKETS

    Directory of Open Access Journals (Sweden)

    Nicolae-Marius JULA

    2017-05-01

    Full Text Available The random walk hypothesis states that stock market prices do not follow a predictable trajectory but are simply random. Before trying to predict a seemingly random set of data, one should test for randomness, because, despite the power and complexity of the models used, the results cannot otherwise be trusted. There are several methods for testing this hypothesis, and the computational power provided by the R environment makes the researcher's work easier and more cost-effective. The increasing power of computing and the continuous development of econometric tests should give potential investors new tools for selecting commodities and investing in efficient markets.
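
    A minimal illustration of such a randomness check is the Wald-Wolfowitz runs test on the signs of a return series. The sketch below uses Python rather than R, and the simulated data and function name are invented for illustration:

```python
import math
import random

def runs_test(series):
    """Wald-Wolfowitz runs test on the signs of a series.
    Returns an approximately standard-normal z statistic under the
    null hypothesis that the signs appear in random order."""
    signs = [x >= 0 for x in series]
    n_pos = sum(signs)
    n_neg = len(signs) - n_pos
    n = n_pos + n_neg
    # Number of runs: maximal blocks of consecutive equal signs
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    exp_runs = 2.0 * n_pos * n_neg / n + 1
    var_runs = (2.0 * n_pos * n_neg * (2 * n_pos * n_neg - n)) / (n * n * (n - 1))
    return (runs - exp_runs) / math.sqrt(var_runs)

# A simulated i.i.d. Gaussian return series should not reject randomness
random.seed(0)
z = runs_test([random.gauss(0.0, 1.0) for _ in range(500)])
```

    A |z| above roughly 1.96 would reject randomness at the 5% level; trending price series typically produce too few runs and strongly negative z values.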

  10. Confluence Model or Resource Dilution Hypothesis?

    DEFF Research Database (Denmark)

    Jæger, Mads

    Studies on family background often explain the negative effect of sibship size on educational attainment by one of two theories: the Confluence Model (CM) or the Resource Dilution Hypothesis (RDH). However, as both theories – for substantively different reasons – predict that sibship size should have a negative effect on educational attainment, most studies cannot distinguish empirically between the CM and the RDH. In this paper, I use the different theoretical predictions in the CM and the RDH on the role of cognitive ability as a partial or complete mediator of the sibship size effect...

  11. Set theory and the continuum hypothesis

    CERN Document Server

    Cohen, Paul J

    2008-01-01

    This exploration of a notorious mathematical problem is the work of the man who discovered the solution. The independence of the continuum hypothesis is the focus of this study by Paul J. Cohen. It presents not only an accessible technical explanation of the author's landmark proof but also a fine introduction to mathematical logic. An emeritus professor of mathematics at Stanford University, Dr. Cohen won two of the most prestigious awards in mathematics: in 1964, he was awarded the American Mathematical Society's Bôcher Prize for analysis; and in 1966, he received the Fields Medal for Logic.

  12. Statistical hypothesis testing with SAS and R

    CERN Document Server

    Taeger, Dirk

    2014-01-01

    A comprehensive guide to statistical hypothesis testing with examples in SAS and R When analyzing datasets the following questions often arise: Is there a shorthand procedure for a statistical test available in SAS or R? If so, how do I use it? If not, how do I program the test myself? This book answers these questions and provides an overview of the most common statistical test problems in a comprehensive way, making it easy to find and perform an appropriate statistical test. A general summary of statistical test theory is presented, along with a basic description for each test, including the
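
    As a flavour of what programming a test oneself looks like (the book's own examples are in SAS and R; this sketch uses Python and invented numbers), Welch's two-sample t statistic can be computed by hand:

```python
import math
from statistics import mean, variance

def welch_t(x, y):
    """Welch's two-sample t statistic with Welch-Satterthwaite
    degrees of freedom (unequal variances assumed)."""
    vx, vy = variance(x) / len(x), variance(y) / len(y)  # squared standard errors
    t = (mean(x) - mean(y)) / math.sqrt(vx + vy)
    df = (vx + vy) ** 2 / (vx ** 2 / (len(x) - 1) + vy ** 2 / (len(y) - 1))
    return t, df

# Hypothetical measurements from two groups
t, df = welch_t([5.1, 4.9, 5.6, 5.2], [4.2, 4.4, 4.0, 4.3])
```

    The statistic is then compared against the t distribution with df degrees of freedom; in R the same computation is `t.test(x, y)`, which applies the Welch correction by default.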

  13. Phonological similarity effect in complex span task.

    Science.gov (United States)

    Camos, Valérie; Mora, Gérôme; Barrouillet, Pierre

    2013-01-01

    The aim of our study was to test the hypothesis that two systems are involved in verbal working memory; one is specifically dedicated to the maintenance of phonological representations through verbal rehearsal while the other would maintain multimodal representations through attentional refreshing. This theoretical framework predicts that phonologically related phenomena such as the phonological similarity effect (PSE) should occur when the domain-specific system is involved in maintenance, but should disappear when concurrent articulation hinders its use. Impeding maintenance in the domain-general system by a concurrent attentional demand should impair recall performance without affecting PSE. In three experiments, we manipulated the concurrent articulation and the attentional demand induced by the processing component of complex span tasks in which participants had to maintain lists of either similar or dissimilar words. Confirming our predictions, PSE affected recall performance in complex span tasks. Although both the attentional demand and the articulatory requirement of the concurrent task impaired recall, only the induction of an articulatory suppression during maintenance made the PSE disappear. These results suggest a duality in the systems devoted to verbal maintenance in the short term, constraining models of working memory.

  14. Hypothesis-driven physical examination curriculum.

    Science.gov (United States)

    Allen, Sharon; Olson, Andrew; Menk, Jeremiah; Nixon, James

    2017-12-01

    Medical students traditionally learn physical examination skills as a rote list of manoeuvres. Alternatives like hypothesis-driven physical examination (HDPE) may promote students' understanding of the contribution of physical examination to diagnostic reasoning. We sought to determine whether first-year medical students can effectively learn to perform a physical examination using an HDPE approach, and then tailor the examination to specific clinical scenarios. First-year medical students at the University of Minnesota were taught both traditional and HDPE approaches during a required 17-week clinical skills course in their first semester. The end-of-course evaluation assessed HDPE skills: students were assigned one of two cardiopulmonary cases. Each case included two diagnostic hypotheses. During an interaction with a standardised patient, students were asked to select physical examination manoeuvres in order to make a final diagnosis. Items were weighted and selection order was recorded. First-year students with minimal pathophysiology performed well. All students selected the correct diagnosis. Importantly, students varied the order when selecting examination manoeuvres depending on the diagnoses under consideration, demonstrating early clinical decision-making skills. An early introduction to HDPE may reinforce physical examination skills for hypothesis generation and testing, and can foster early clinical decision-making skills. This has important implications for further research in physical examination instruction. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  15. A default Bayesian hypothesis test for mediation.

    Science.gov (United States)

    Nuijten, Michèle B; Wetzels, Ruud; Matzke, Dora; Dolan, Conor V; Wagenmakers, Eric-Jan

    2015-03-01

    In order to quantify the relationship between multiple variables, researchers often carry out a mediation analysis. In such an analysis, a mediator (e.g., knowledge of a healthy diet) transmits the effect from an independent variable (e.g., classroom instruction on a healthy diet) to a dependent variable (e.g., consumption of fruits and vegetables). Almost all mediation analyses in psychology use frequentist estimation and hypothesis-testing techniques. A recent exception is Yuan and MacKinnon (Psychological Methods, 14, 301-322, 2009), who outlined a Bayesian parameter estimation procedure for mediation analysis. Here we complete the Bayesian alternative to frequentist mediation analysis by specifying a default Bayesian hypothesis test based on the Jeffreys-Zellner-Siow approach. We further extend this default Bayesian test by allowing a comparison to directional or one-sided alternatives, using Markov chain Monte Carlo techniques implemented in JAGS. All Bayesian tests are implemented in the R package BayesMed (Nuijten, Wetzels, Matzke, Dolan, & Wagenmakers, 2014).
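
    The mediation structure referred to here (an independent variable X acting on a dependent variable Y through a mediator M, with indirect effect a*b) can be illustrated with a bare-bones point estimate. The Bayesian test itself requires MCMC (the authors provide the R package BayesMed); the sketch below only computes a frequentist indirect-effect estimate on invented toy data:

```python
def ols_slope(x, y):
    """Least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

# Hypothetical toy data: X = instruction, M = diet knowledge, Y = consumption
X = [0, 0, 1, 1, 0, 1, 1, 0]
M = [1.0, 1.2, 2.1, 1.9, 0.9, 2.3, 2.0, 1.1]
Y = [2.0, 2.2, 3.4, 3.1, 1.9, 3.6, 3.2, 2.1]

a = ols_slope(X, M)   # path a: effect of X on the mediator M
b = ols_slope(M, Y)   # path b (sketch only: a full analysis regresses Y on M and X jointly)
indirect = a * b      # point estimate of the mediated (indirect) effect
```

    A Bayesian version would place priors on a and b and report a Bayes factor for the hypothesis a*b = 0 rather than a single point estimate.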

  16. Gaussian Hypothesis Testing and Quantum Illumination.

    Science.gov (United States)

    Wilde, Mark M; Tomamichel, Marco; Lloyd, Seth; Berta, Mario

    2017-09-22

    Quantum hypothesis testing is one of the most basic tasks in quantum information theory and has fundamental links with quantum communication and estimation theory. In this paper, we establish a formula that characterizes the decay rate of the minimal type-II error probability in a quantum hypothesis test of two Gaussian states given a fixed constraint on the type-I error probability. This formula is a direct function of the mean vectors and covariance matrices of the quantum Gaussian states in question. We give an application to quantum illumination, which is the task of determining whether there is a low-reflectivity object embedded in a target region with a bright thermal-noise bath. For the asymmetric-error setting, we find that a quantum illumination transmitter can achieve an error probability exponent stronger than a coherent-state transmitter of the same mean photon number, and furthermore, that it requires far fewer trials to do so. This occurs when the background thermal noise is either low or bright, which means that a quantum advantage is even easier to witness than in the symmetric-error setting because it occurs for a larger range of parameters. Going forward from here, we expect our formula to have applications in settings well beyond those considered in this paper, especially to quantum communication tasks involving quantum Gaussian channels.

  17. Inoculation stress hypothesis of environmental enrichment.

    Science.gov (United States)

    Crofton, Elizabeth J; Zhang, Yafang; Green, Thomas A

    2015-02-01

    One hallmark of psychiatric conditions is the vast continuum of individual differences in susceptibility vs. resilience resulting from the interaction of genetic and environmental factors. The environmental enrichment paradigm is an animal model that is useful for studying a range of psychiatric conditions, including protective phenotypes in addiction and depression models. The major question is how environmental enrichment, a non-drug and non-surgical manipulation, can produce such robust individual differences in such a wide range of behaviors. This paper draws from a variety of published sources to outline a coherent hypothesis of inoculation stress as a factor producing the protective enrichment phenotypes. The basic tenet suggests that chronic mild stress from living in a complex environment and interacting non-aggressively with conspecifics can inoculate enriched rats against subsequent stressors and/or drugs of abuse. This paper reviews the enrichment phenotypes, mulls the fundamental nature of environmental enrichment vs. isolation, discusses the most appropriate control for environmental enrichment, and challenges the idea that cortisol/corticosterone equals stress. The intent of the inoculation stress hypothesis of environmental enrichment is to provide a scaffold with which to build testable hypotheses for the elucidation of the molecular mechanisms underlying these protective phenotypes and thus provide new therapeutic targets to treat psychiatric/neurological conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. The Debt Overhang Hypothesis: Evidence from Pakistan

    Directory of Open Access Journals (Sweden)

    Shah Muhammad Imran

    2016-04-01

    Full Text Available This study investigates the debt overhang hypothesis for Pakistan in the period 1960-2007. The study examines empirically the dynamic behaviour of GDP, debt services, the employed labour force and investment using the time series concepts of unit roots, cointegration, error correction and causality. Our findings suggest that debt-servicing has a negative impact on the productivity of both labour and capital, which in turn has adversely affected economic growth. By severely constraining the country's capacity for growth, this lends support to the debt-overhang hypothesis in Pakistan. The long run relation between debt services and economic growth implies that future increases in output will drain away in the form of high debt-service payments to lender countries, as external debt acts like a tax on output. More specifically, foreign creditors will benefit more from the rise in productivity than will domestic producers and labour. This suggests that domestic labour and capital are the ultimate losers from this heavy debt burden.

  19. Roots and Route of the Artification Hypothesis

    Directory of Open Access Journals (Sweden)

    Ellen Dissanayake

    2017-08-01

    Full Text Available Over four decades, my ideas about the arts in human evolution have themselves evolved, from an original notion of art as a human behaviour of “making special” to a full-fledged hypothesis of artification. A summary of the gradual developmental path (or route) of the hypothesis, based on ethological principles and concepts, is given, and an argument presented in which artification is described as an exaptation whose roots lie in adaptive features of ancestral mother–infant interaction that contributed to infant survival and maternal reproductive success. I show how the interaction displays features of a ritualised behaviour whose operations (formalization, repetition, exaggeration, and elaboration) can be regarded as characteristic elements of human ritual ceremonies as well as of art (including song, dance, performance, literary language, altered surroundings, and other examples of making ordinary sounds, movement, language, environments, objects, and bodies extraordinary). Participation in these behaviours in ritual practices served adaptive ends in early Homo by coordinating brain and body states, and thereby emotionally bonding members of a group in common cause as well as reducing existential anxiety in individuals. A final section situates artification within contemporary philosophical and popular ideas of art, claiming that artifying is not a synonym for or definition of art but foundational to any evolutionary discussion of artistic/aesthetic behaviour.

  20. Hypothesis: does ochratoxin A cause testicular cancer?

    Science.gov (United States)

    Schwartz, Gary G

    2002-02-01

    Little is known about the etiology of testicular cancer, which is the most common cancer among young men. Epidemiologic data point to a carcinogenic exposure in early life or in utero, but the nature of the exposure is unknown. We hypothesize that the mycotoxin, ochratoxin A, is a cause of testicular cancer. Ochratoxin A is a naturally occurring contaminant of cereals, pigmeat, and other foods and is a known genotoxic carcinogen in animals. The major features of the descriptive epidemiology of testicular cancer (a high incidence in northern Europe, increasing incidence over time, and associations with high socioeconomic status, and with poor semen quality) are all associated with exposure to ochratoxin A. Exposure of animals to ochratoxin A via the diet or via in utero transfer induces adducts in testicular DNA. We hypothesize that consumption of foods contaminated with ochratoxin A during pregnancy and/or childhood induces lesions in testicular DNA and that puberty promotes these lesions to testicular cancer. We tested the ochratoxin A hypothesis using ecologic data on the per-capita consumption of cereals, coffee, and pigmeat, the principal dietary sources of ochratoxin A. Incidence rates for testicular cancer in 20 countries were significantly correlated with the per-capita consumption of coffee and pigmeat (r = 0.49 and 0.54, p = 0.03 and 0.01). The ochratoxin A hypothesis offers a coherent explanation for much of the descriptive epidemiology of testicular cancer and suggests new avenues for analytic research.

  1. Urbanization and the more-individuals hypothesis.

    Science.gov (United States)

    Chiari, Claudia; Dinetti, Marco; Licciardello, Cinzia; Licitra, Gaetano; Pautasso, Marco

    2010-03-01

    1. Urbanization is a landscape process affecting biodiversity world-wide. Despite many urban-rural studies of bird assemblages, it is still unclear whether more species-rich communities have more individuals, regardless of the level of urbanization. The more-individuals hypothesis assumes that species-rich communities have larger populations, thus reducing the chance of local extinctions. 2. Using newly collated avian distribution data for 1 km(2) grid cells across Florence, Italy, we show a significantly positive relationship between species richness and assemblage abundance for the whole urban area. This richness-abundance relationship persists for the 1 km(2) grid cells with less than 50% of urbanized territory, as well as for the remaining grid cells, with no significant difference in the slope of the relationship. These results support the more-individuals hypothesis as an explanation of patterns in species richness, also in human modified and fragmented habitats. 3. However, the intercept of the species richness-abundance relationship is significantly lower for highly urbanized grid cells. Our study confirms that urban communities have lower species richness but counters the common notion that assemblages in densely urbanized ecosystems have more individuals. In Florence, highly inhabited areas show fewer species and lower assemblage abundance. 4. Urbanized ecosystems are an ongoing large-scale natural experiment which can be used to test ecological theories empirically.

  2. Pythoscape: a framework for generation of large protein similarity networks.

    Science.gov (United States)

    Barber, Alan E; Babbitt, Patricia C

    2012-11-01

    Pythoscape is a framework implemented in Python for processing large protein similarity networks for visualization in other software packages. Protein similarity networks are graphical representations of sequence, structural and other similarities among proteins for which pairwise all-by-all similarity connections have been calculated. Mapping of biological and other information to network nodes or edges enables hypothesis creation about sequence-structure-function relationships across sets of related proteins. Pythoscape provides several options to calculate pairwise similarities for input sequences or structures, applies filters to network edges, defines sets of similar nodes and their associated data as single nodes (termed representative nodes) to compress network information, and outputs data or formatted files for visualization.
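    The edge-filtering and representative-node steps described above can be sketched with a toy union-find grouping in plain Python; the node names and score threshold are hypothetical, and this is not Pythoscape's actual API.

```python
def filter_edges(edges, threshold):
    """Keep only edges whose similarity score meets the threshold."""
    return {pair: s for pair, s in edges.items() if s >= threshold}

def representative_groups(nodes, edges):
    """Union-find: collapse connected components into representative sets."""
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n
    for a, b in edges:
        parent[find(a)] = find(b)
    groups = {}
    for n in nodes:
        groups.setdefault(find(n), set()).add(n)
    return list(groups.values())

# Hypothetical pairwise similarity scores (e.g. -log10 of BLAST E-values)
nodes = ["p1", "p2", "p3", "p4"]
scores = {("p1", "p2"): 50.0, ("p2", "p3"): 45.0, ("p3", "p4"): 5.0}
kept = filter_edges(scores, 20.0)      # drops the weak p3-p4 edge
groups = representative_groups(nodes, kept)
```

Each resulting group would then be drawn as a single representative node, compressing the network.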

  3. The Younger Dryas impact hypothesis: A requiem

    Science.gov (United States)

    Pinter, Nicholas; Scott, Andrew C.; Daulton, Tyrone L.; Podoll, Andrew; Koeberl, Christian; Anderson, R. Scott; Ishman, Scott E.

    2011-06-01

    The Younger Dryas (YD) impact hypothesis is a recent theory that suggests that a cometary or meteoritic body or bodies hit and/or exploded over North America 12,900 years ago, causing the YD climate episode, extinction of Pleistocene megafauna, demise of the Clovis archeological culture, and a range of other effects. Since gaining widespread attention in 2007, substantial research has focused on testing the 12 main signatures presented as evidence of a catastrophic extraterrestrial event 12,900 years ago. Here we present a review of the impact hypothesis, including its evolution and current variants, and of efforts to test and corroborate the hypothesis. The physical evidence interpreted as signatures of an impact event can be separated into two groups. The first group consists of evidence that has been largely rejected by the scientific community and is no longer in widespread discussion, including: particle tracks in archeological chert; magnetic nodules in Pleistocene bones; impact origin of the Carolina Bays; and elevated concentrations of radioactivity, iridium, and fullerenes enriched in 3He. The second group consists of evidence that has been active in recent research and discussions: carbon spheres and elongates, magnetic grains and magnetic spherules, byproducts of catastrophic wildfire, and nanodiamonds. Over time, however, these signatures have also seen contrary evidence rather than support. Recent studies have shown that carbon spheres and elongates do not represent extraterrestrial carbon nor impact-induced megafires, but are indistinguishable from fungal sclerotia and arthropod fecal material that are a small but common component of many terrestrial deposits. Magnetic grains and spherules are heterogeneously distributed in sediments, but reported measurements of unique peaks in concentrations at the YD onset have yet to be reproduced. The magnetic grains are certainly just iron-rich detrital grains, whereas reported YD magnetic spherules are

  4. Why Does REM Sleep Occur? A Wake-up Hypothesis

    Directory of Open Access Journals (Sweden)

    W. R. Klemm

    2011-09-01

    Brain activity differs in the various sleep stages and in conscious wakefulness. Awakening from sleep requires restoration of the complex nerve impulse patterns in neuronal network assemblies necessary to re-create and sustain conscious wakefulness. Herein I propose that the brain uses REM to help wake itself up after it has had a sufficient amount of sleep. Evidence suggesting this hypothesis includes the facts that, (1) when first going to sleep, the brain plunges into Stage N3 (formerly called Stage IV), a deep abyss of sleep, and, as the night progresses, the sleep is punctuated by episodes of REM that become longer and more frequent toward morning, (2) conscious-like dreams are a reliable component of the REM state in which the dreamer is an active mental observer or agent in the dream, (3) the last awakening during a night’s sleep usually occurs in a REM episode during or at the end of a dream, (4) both REM and awake consciousness seem to arise out of a similar brainstem ascending arousal system, (5) N3 is a functionally perturbed state that eventually must be corrected so that the embodied brain can direct adaptive behavior, and (6) corticofugal projections to brainstem arousal areas provide a way to trigger increased cortical activity in REM to progressively raise the sleeping brain to the threshold required for wakefulness. This paper shows how the hypothesis conforms to common experience and has substantial predictive and explanatory power regarding the phenomenology of sleep in terms of ontogeny, aging, phylogeny, abnormal/disease states, cognition, and behavioral physiology. That broad range of consistency is not matched by competing theories, which are summarized herein. Specific ways to test this wake-up hypothesis are suggested. Such research could lead to a better understanding of awake consciousness.

  5. Why does rem sleep occur? A wake-up hypothesis.

    Science.gov (United States)

    Klemm, W R

    2011-01-01

    Brain activity differs in the various sleep stages and in conscious wakefulness. Awakening from sleep requires restoration of the complex nerve impulse patterns in neuronal network assemblies necessary to re-create and sustain conscious wakefulness. Herein I propose that the brain uses rapid eye movement (REM) to help wake itself up after it has had a sufficient amount of sleep. Evidence suggesting this hypothesis includes the facts that, (1) when first going to sleep, the brain plunges into Stage N3 (formerly called Stage IV), a deep abyss of sleep, and, as the night progresses, the sleep is punctuated by episodes of REM that become longer and more frequent toward morning, (2) conscious-like dreams are a reliable component of the REM state in which the dreamer is an active mental observer or agent in the dream, (3) the last awakening during a night's sleep usually occurs in a REM episode during or at the end of a dream, (4) both REM and awake consciousness seem to arise out of a similar brainstem ascending arousal system, (5) N3 is a functionally perturbed state that eventually must be corrected so that the embodied brain can direct adaptive behavior, and (6) cortico-fugal projections to brainstem arousal areas provide a way to trigger increased cortical activity in REM to progressively raise the sleeping brain to the threshold required for wakefulness. This paper shows how the hypothesis conforms to common experience and has substantial predictive and explanatory power regarding the phenomenology of sleep in terms of ontogeny, aging, phylogeny, abnormal/disease states, cognition, and behavioral physiology. That broad range of consistency is not matched by competing theories, which are summarized herein. Specific ways to test this wake-up hypothesis are suggested. Such research could lead to a better understanding of awake consciousness.

  6. Domain similarity based orthology detection.

    Science.gov (United States)

    Bitard-Feildel, Tristan; Kemena, Carsten; Greenwood, Jenny M; Bornberg-Bauer, Erich

    2015-05-13

    Orthologous protein detection software mostly uses pairwise comparisons of amino-acid sequences to assert whether two proteins are orthologous or not. Accordingly, when the number of sequences for comparison increases, the number of comparisons to compute grows quadratically. A current challenge of bioinformatic research, especially given the increasing number of sequenced organisms available, is to make this ever-growing number of comparisons computationally feasible in a reasonable amount of time. We propose to speed up the detection of orthologous proteins by using strings of domains to characterize the proteins. We present two new protein similarity measures, a cosine score and a maximal weight matching score based on domain content similarity, and new software, named porthoDom. The qualities of the cosine and the maximal weight matching similarity measures are compared against curated datasets. The measures show that domain content similarities are able to correctly group proteins into their families. Accordingly, the cosine similarity measure is used inside porthoDom, the wrapper developed for proteinortho. porthoDom makes use of domain content similarity measures to group proteins together before searching for orthologs. By using domains instead of amino acid sequences, the reduction of the search space decreases the computational complexity of an all-against-all sequence comparison. We demonstrate that representing and comparing proteins as strings of discrete domains, i.e. as a concatenation of their unique identifiers, allows a drastic simplification of the search space. porthoDom has the advantage of speeding up orthology detection while maintaining a degree of accuracy similar to proteinortho. porthoDom is implemented in Python and C++ and is available under the GNU GPL licence 3 at http://www.bornberglab.org/pages/porthoda .
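    A cosine measure over domain-content vectors, as described above, can be sketched as follows; the Pfam-style identifiers are made up, and this is a generic sketch, not porthoDom's implementation.

```python
import math
from collections import Counter

def domain_cosine(domains_a, domains_b):
    """Cosine similarity between two proteins' domain-content count vectors."""
    ca, cb = Counter(domains_a), Counter(domains_b)
    dot = sum(ca[d] * cb[d] for d in set(ca) & set(cb))
    norm_a = math.sqrt(sum(v * v for v in ca.values()))
    norm_b = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (norm_a * norm_b)

# Hypothetical proteins represented as strings of domain identifiers
protein_a = ["PF00001", "PF00002", "PF00002"]
protein_b = ["PF00001", "PF00002"]
sim = domain_cosine(protein_a, protein_b)
```

Because the domain alphabet is far smaller than the space of full sequences, comparisons like this are cheap and can pre-group proteins before a full all-against-all sequence comparison.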

  7. Approaches to informed consent for hypothesis-testing and hypothesis-generating clinical genomics research.

    Science.gov (United States)

    Facio, Flavia M; Sapp, Julie C; Linn, Amy; Biesecker, Leslie G

    2012-10-10

    Massively-parallel sequencing (MPS) technologies create challenges for informed consent of research participants given the enormous scale of the data and the wide range of potential results. We propose that the consent process in these studies be based on whether they use MPS to test a hypothesis or to generate hypotheses. To demonstrate the differences in these approaches to informed consent, we describe the consent processes for two MPS studies. The purpose of our hypothesis-testing study is to elucidate the etiology of rare phenotypes using MPS. The purpose of our hypothesis-generating study is to test the feasibility of using MPS to generate clinical hypotheses, and to approach the return of results as an experimental manipulation. Issues to consider in both designs include: volume and nature of the potential results, primary versus secondary results, return of individual results, duty to warn, length of interaction, target population, and privacy and confidentiality. The categorization of MPS studies as hypothesis-testing versus hypothesis-generating can help to clarify the issue of so-called incidental or secondary results for the consent process, and aid the communication of the research goals to study participants.

  8. Learning-Related Changes in Adolescents' Neural Networks during Hypothesis-Generating and Hypothesis-Understanding Training

    Science.gov (United States)

    Lee, Jun-Ki; Kwon, Yongju

    2012-01-01

    Fourteen science high school students participated in this study, which investigated neural-network plasticity associated with hypothesis-generating and hypothesis-understanding in learning. The students were divided into two groups and participated in either hypothesis-generating or hypothesis-understanding type learning programs, which were…

  9. The delphic oracle and the ethylene-intoxication hypothesis.

    Science.gov (United States)

    Foster, J; Lehoux, D

    2007-01-01

    An interdisciplinary team of scientists--including an archeologist, a geologist, a chemist, and a toxicologist--has argued that ethylene intoxication was the probable cause of the High Priestess of Delphi's divinatory (mantic) trances. The claim that the High Priestess of Delphi entered a mantic state because of ethylene intoxication enjoyed widespread reception in specialist academic journals, science magazines, and newspapers. This article uses a similar interdisciplinary approach to show that this hypothesis is implausible since it is based on problematic scientific and textual evidence, as well as a fallacious argument. The main issue raised by this counterargument is not that a particular scientific hypothesis or conjecture turned out to be false. (This is expected in scientific investigation.) Rather, the main issue is that it was a positivist disposition that originally led readers to associate the evidence presented in such a way that it seemed to point to the conclusion, even when the evidence did not support the conclusion. We conclude by observing that positivist dispositions can lead to the acceptance of claims because they have a scientific form, not because they are grounded in robust evidence and sound argument.

  10. Similarity measures for face recognition

    CERN Document Server

    Vezzetti, Enrico

    2015-01-01

    Face recognition has several applications, including security (authentication and identification of device users and criminal suspects) and medicine (corrective surgery and diagnosis). Facial recognition programs rely on algorithms that can compare and compute the similarity between two sets of images. This eBook explains some of the similarity measures used in facial recognition systems in a single volume. Readers will learn about various measures, including Minkowski distances, Mahalanobis distances, Hausdorff distances, and cosine-based distances, among other methods. The book also summarizes errors that may occur in face recognition methods. Computer scientists "facing face" and looking to select and test different methods of computing similarities will benefit from this book. The book is also a useful tool for students undertaking computer vision courses.
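    Two of the measures the book covers, Minkowski and cosine-based distances, can be sketched as follows; the embedding vectors are hypothetical, and Mahalanobis distance is omitted since it additionally requires a covariance matrix.

```python
import math

def minkowski(u, v, p=2):
    """Minkowski distance; p=1 is Manhattan, p=2 is Euclidean."""
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1.0 / p)

def cosine_distance(u, v):
    """1 minus the cosine similarity of two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

# Hypothetical face-embedding vectors: small distance suggests the same face
f1 = [0.20, 0.50, 0.10]
f2 = [0.25, 0.45, 0.15]
d_euclid = minkowski(f1, f2)
d_cosine = cosine_distance(f1, f2)
```

In a recognition pipeline, the chosen distance is thresholded to decide whether two images show the same person.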

  11. From heresy to dogma in accounts of opposition to Howard Temin's DNA provirus hypothesis.

    Science.gov (United States)

    Marcum, James A

    2002-01-01

    In 1964 the Wisconsin virologist Howard Temin proposed the DNA provirus hypothesis to explain the mechanism by which a cancer-producing virus containing only RNA infects and transforms cells. His hypothesis reversed the flow of genetic information, as ordained by the central dogma of molecular biology. Although there was initial opposition to his hypothesis it was widely accepted, after the discovery of reverse transcriptase in 1970. Most accounts of Temin's hypothesis after the discovery portray the hypothesis as heretical, because it challenged the central dogma. Temin himself in his Nobel Prize speech of 1975 narrates a similar story about its reception. But are these accounts warranted? I argue that members of the virology community opposed Temin's provirus hypothesis not simply because it was a counterexample to the central dogma, but more importantly because his experimental evidence for supporting it was inconclusive. Furthermore, I propose that these accounts of opposition to the DNA provirus hypothesis as heretical, written by Temin and others after the discovery of reverse transcriptase, played a significant role in establishing retrovirology as a specialized field.

  12. Alternatives to the linear risk hypothesis

    International Nuclear Information System (INIS)

    Craig, A.G.

    1976-01-01

    A theoretical argument is presented which suggests that in using the linear hypothesis for all values of LET, the low dose risk is overestimated for low LET but underestimated for very high LET. The argument is based upon the idea that cell lesions which do not lead to cell death may in fact lead to a malignant cell. Expressions for the Surviving Fraction and the Cancer Risk based on this argument are given. An advantage of this very general approach is that it expresses cell survival and cancer risk entirely in terms of the cell lesions and avoids the rather contentious argument as to how the average number of lesions should be related to the dose. (U.K.)

  13. Large numbers hypothesis. II - Electromagnetic radiation

    Science.gov (United States)

    Adams, P. J.

    1983-01-01

    This paper develops the theory of electromagnetic radiation in the units covariant formalism incorporating Dirac's large numbers hypothesis (LNH). A direct field-to-particle technique is used to obtain the photon propagation equation, which explicitly involves the photon replication rate. This replication rate is fixed uniquely by requiring that the form of a free-photon distribution function be preserved, as required by the 2.7 K cosmic radiation. One finds that with this particular photon replication rate the units covariant formalism developed in Paper I actually predicts that the ratio of photon number to proton number in the universe varies as t to the 1/4, precisely in accord with LNH. The cosmological red-shift law is also derived and is shown to differ considerably from the standard form νR = const.

  14. Artistic talent in dyslexia--a hypothesis.

    Science.gov (United States)

    Chakravarty, Ambar

    2009-10-01

    The present article hints at a curious neurocognitive phenomenon: the development of artistic talents in some children with dyslexia. The article also takes note of the phenomenon of creativity in the midst of language disability, as observed in the lives of creative people like Leonardo da Vinci and Albert Einstein, who were most probably affected by developmental learning disorders. It has been hypothesised that a developmental delay in the dominant hemisphere most likely 'disinhibits' the non-dominant parietal lobe to unmask talents, artistic or otherwise, in some such individuals. The present hypothesis follows the phenomenon of paradoxical functional facilitation described earlier. It has been suggested that children with learning disorders be encouraged to develop such hidden talents to full capacity, rather than being subjected to an overemphasis on correcting the disturbed coded symbol operations in remedial training.

  15. Tissue misrepair hypothesis for radiation carcinogenesis

    International Nuclear Information System (INIS)

    Kondo, Sohei

    1991-01-01

    Dose-response curves for chronic leukemia in A-bomb survivors and liver tumors in patients given Thorotrast (colloidal thorium dioxide) show large threshold effects. The existence of these threshold effects can be explained by the following hypothesis. A high dose of radiation causes a persistent wound in a cell-renewable tissue. Disorder of the injured cell society partly frees the component cells from territorial restraints on their proliferation, enabling them to continue development of their cellular functions toward advanced autonomy. This progression might be achieved by continued epigenetic and genetic changes as a result of occasional errors in the otherwise concerted healing action of various endogenous factors recruited for tissue repair. Carcinogenesis is not simply a single-cell problem but a cell-society problem. Therefore, it is not warranted to estimate risk at low doses by linear extrapolation from cancer data at high doses without knowledge of the mechanism of radiation carcinogenesis. (author) 57 refs

  16. Statistical hypothesis tests of some micrometeorological observations

    International Nuclear Information System (INIS)

    SethuRaman, S.; Tichler, J.

    1977-01-01

    Chi-square goodness-of-fit is used to test the hypothesis that the medium scale of turbulence in the atmospheric surface layer is normally distributed. Coefficients of skewness and excess are computed from the data. If the data are not normal, these coefficients are used in Edgeworth's asymptotic expansion of the Gram-Charlier series to determine an alternate probability density function. The observed data are then compared with the modified probability densities and the new chi-square values computed. Seventy percent of the data analyzed was either normal or approximately normal. The coefficient of skewness g1 has a good correlation with the chi-square values. Events with |g1| < 0.43 were approximately normal. Intermittency associated with the formation and breaking of internal gravity waves in surface-based inversions over water is thought to be the reason for the non-normality
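    The skewness screening step can be sketched as follows, using the biased moment estimator of g1; the samples are illustrative, not the micrometeorological data, and the |g1| < 0.43 cutoff is the threshold quoted above.

```python
def skewness_g1(xs):
    """Sample skewness g1 = m3 / m2**1.5, from the second and third central moments."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def roughly_normal(xs, cutoff=0.43):
    """Screen a sample as approximately normal when |g1| is below the cutoff."""
    return abs(skewness_g1(xs)) < cutoff

# A symmetric sample has g1 near zero; a right-skewed one has g1 > 0
symmetric = [1, 2, 3, 4, 5]
skewed = [1, 1, 1, 2, 10]
```

A full analysis would pair this screen with the chi-square goodness-of-fit test and the excess (kurtosis) coefficient.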

  17. The hexagon hypothesis: Six disruptive scenarios.

    Science.gov (United States)

    Burtles, Jim

    2015-01-01

    This paper aims to bring a simple but effective and comprehensive approach to the development, delivery and monitoring of business continuity solutions. To ensure that the arguments and principles apply across the board, the paper sticks to basic underlying concepts rather than sophisticated interpretations. First, the paper explores what exactly people are defending themselves against. Secondly, the paper looks at how defences should be set up. Disruptive events tend to unfold in phases, each of which invites a particular style of protection, ranging from risk management through to business continuity to insurance cover. Their impact upon any business operation will fall into one of six basic scenarios. The hexagon hypothesis suggests that everyone should be prepared to deal with each of these six disruptive scenarios and it provides them with a useful benchmark for business continuity.

  18. Novae, supernovae, and the island universe hypothesis

    International Nuclear Information System (INIS)

    Van Den Bergh, S.

    1988-01-01

    Arguments in Curtis's (1917) paper related to the island universe hypothesis and the existence of novae in spiral nebulae are considered. It is noted that the maximum magnitude versus rate-of-decline relation for novae may be the best tool presently available for the calibration of the extragalactic distance scale. Light curve observations of six novae are used to determine a distance of 18.6 ± 3.5 Mpc to the Virgo cluster. Results suggest that Type Ia supernovae cannot easily be used as standard candles, and that Type II supernovae are unsuitable as distance indicators. Factors other than precursor mass are probably responsible for determining the ultimate fate of evolving stars. 83 references
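    Once the maximum magnitude versus rate-of-decline relation supplies a nova's absolute magnitude, the distance follows from the standard distance modulus. The magnitude values below are hypothetical, chosen only to give a distance of the same order as the quoted Virgo result.

```python
def distance_mpc(apparent_mag, absolute_mag):
    """Distance from the distance modulus m - M = 5 * log10(d / 10 pc)."""
    d_pc = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return d_pc / 1e6  # convert parsecs to megaparsecs

# Hypothetical nova: calibrated peak M = -8.0, observed peak m = 23.3
d = distance_mpc(23.3, -8.0)
```

Averaging such estimates over several novae, with their calibration scatter, yields a cluster distance with an uncertainty like the ±3.5 Mpc quoted above.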

  19. Extra dimensions hypothesis in high energy physics

    Directory of Open Access Journals (Sweden)

    Volobuev Igor

    2017-01-01

    We discuss the history of the extra dimensions hypothesis and the physics and phenomenology of models with large extra dimensions, with an emphasis on the Randall-Sundrum (RS) model with two branes. We argue that the Standard Model extension based on the RS model with two branes is phenomenologically acceptable only if the inter-brane distance is stabilized. Within such an extension of the Standard Model, we study the influence of the infinite Kaluza-Klein (KK) towers of the bulk fields on collider processes. In particular, we discuss the modification of the scalar sector of the theory, the Higgs-radion mixing due to the coupling of the Higgs boson to the radion and its KK tower, and the experimental restrictions on the mass of the radion-dominated states.

  20. Multiple model cardinalized probability hypothesis density filter

    Science.gov (United States)

    Georgescu, Ramona; Willett, Peter

    2011-09-01

    The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.

  1. On the immunostimulatory hypothesis of cancer

    Directory of Open Access Journals (Sweden)

    Juan Bruzzo

    2011-12-01

    There is a rather generalized belief that the worst possible outcome for the application of immunological therapies against cancer is a null effect on tumor growth. However, a significant body of evidence summarized in the immunostimulatory hypothesis of cancer suggests that, under certain circumstances, the growth of incipient and established tumors can be accelerated rather than inhibited by the immune response supposedly mounted to limit tumor growth. In order to provide more compelling evidence for this proposition, we have explored the growth behavior of twelve murine tumors (most of them of spontaneous origin) that arose in the colony of our laboratory, in putatively immunized and control mice. Using classical immunization procedures, 8 out of 12 tumors were actually stimulated in "immunized" mice while the remaining 4 were neither inhibited nor stimulated. Further, even these apparently non-antigenic tumors could reveal some antigenicity if immunization procedures more stringent than the classical ones were used. This possibility was suggested by the results obtained with one of these four apparently non-antigenic tumors: the LB lymphoma. In effect, upon these stringent immunization pretreatments, LB was slightly inhibited or stimulated, depending on the titer of the immune reaction mounted against the tumor, with higher titers rendering inhibition and lower titers rendering tumor stimulation. All the above results are consistent with the immunostimulatory hypothesis, which entails the important therapeutic implications (contrary to the orthodoxy) that anti-tumor vaccines may run a real risk of doing harm if the vaccine-induced immunity is too weak to move the reaction into the inhibitory part of the immune response curve, and that a slight and prolonged immunodepression (rather than an immunostimulation) might interfere with the progression of some tumors and thus be an aid to cytotoxic therapies.

  2. The Stress Acceleration Hypothesis of Nightmares

    Directory of Open Access Journals (Sweden)

    Tore Nielsen

    2017-06-01

    Adverse childhood experiences can deleteriously affect future physical and mental health, increasing risk for many illnesses, including psychiatric problems, sleep disorders, and, according to the present hypothesis, idiopathic nightmares. Much like post-traumatic nightmares, which are triggered by trauma and lead to recurrent emotional dreaming about the trauma, idiopathic nightmares are hypothesized to originate in early adverse experiences that lead in later life to the expression of early memories and emotions in dream content. Accordingly, the objectives of this paper are to (1) review existing literature on sleep, dreaming and nightmares in relation to early adverse experiences, drawing upon both empirical studies of dreaming and nightmares and books and chapters by recognized nightmare experts, and (2) propose a new approach to explaining nightmares that is based upon the Stress Acceleration Hypothesis of mental illness. The latter stipulates that susceptibility to mental illness is increased by adversity occurring during a developmentally sensitive window for emotional maturation—the infantile amnesia period—that ends around age 3½. Early adversity accelerates the neural and behavioral maturation of emotional systems governing the expression, learning, and extinction of fear memories and may afford short-term adaptive value. But it also engenders long-term dysfunctional consequences, including an increased risk for nightmares. Two mechanisms are proposed: (1) disruption of infantile amnesia allows normally forgotten early childhood memories to influence later emotions, cognitions and behavior, including the common expression of threats in nightmares; (2) alterations of normal emotion regulation processes of both waking and sleep lead to increased fear sensitivity and less effective fear extinction. These changes influence an affect network previously hypothesized to regulate fear extinction during REM sleep, disruption of which leads to

  3. Reliability Evaluation of Concentric Butterfly Valve Using Statistical Hypothesis Test

    International Nuclear Information System (INIS)

    Chang, Mu Seong; Choi, Jong Sik; Choi, Byung Oh; Kim, Do Sik

    2015-01-01

    A butterfly valve is a type of flow-control device typically used to regulate a fluid flow. This paper presents an estimation of the shape parameter of the Weibull distribution, characteristic life, and B10 life for a concentric butterfly valve based on a statistical analysis of the reliability test data taken before and after the valve improvement. The difference in the shape and scale parameters between the existing and improved valves is reviewed using a statistical hypothesis test. The test results indicate that the shape parameter of the improved valve is similar to that of the existing valve, and that the scale parameter of the improved valve is found to have increased. These analysis results are particularly useful for a reliability qualification test and the determination of the service life cycles

  4. Reliability Evaluation of Concentric Butterfly Valve Using Statistical Hypothesis Test

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Mu Seong; Choi, Jong Sik; Choi, Byung Oh; Kim, Do Sik [Korea Institute of Machinery and Materials, Daejeon (Korea, Republic of)

    2015-12-15

    A butterfly valve is a type of flow-control device typically used to regulate a fluid flow. This paper presents an estimation of the shape parameter of the Weibull distribution, characteristic life, and B10 life for a concentric butterfly valve based on a statistical analysis of the reliability test data taken before and after the valve improvement. The difference in the shape and scale parameters between the existing and improved valves is reviewed using a statistical hypothesis test. The test results indicate that the shape parameter of the improved valve is similar to that of the existing valve, and that the scale parameter of the improved valve is found to have increased. These analysis results are particularly useful for a reliability qualification test and the determination of the service life cycles.
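    The B10 life quoted in the two records above follows directly from the Weibull CDF once the shape and scale parameters are estimated. The parameter values below are invented for illustration and are not the paper's fitted values.

```python
import math

def weibull_b10(shape, scale):
    """B10 life: the time by which 10% of units are expected to fail.
    From the Weibull CDF F(t) = 1 - exp(-(t/scale)**shape),
    solving F(t) = 0.10 gives t = scale * (-ln(0.9))**(1/shape)."""
    return scale * (-math.log(0.9)) ** (1.0 / shape)

# Hypothetical cycle-life parameters for the existing and improved valves;
# same shape (similar failure mode), larger scale for the improved valve
b10_existing = weibull_b10(shape=2.0, scale=100000)
b10_improved = weibull_b10(shape=2.0, scale=150000)
```

With an unchanged shape parameter, a larger scale parameter translates directly into a proportionally longer B10 life, matching the paper's qualitative conclusion.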

  5. Revisiting Inter-Genre Similarity

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Gouyon, Fabien

    2013-01-01

    We revisit the idea of "inter-genre similarity" (IGS) for machine learning in general, and music genre recognition in particular. We show analytically that the probability of error for IGS is higher than naive Bayes classification with zero-one loss (NB). We show empirically that IGS does not perform well, even for data that satisfies all its assumptions.

  6. Fast business process similarity search

    NARCIS (Netherlands)

    Yan, Z.; Dijkman, R.M.; Grefen, P.W.P.J.

    2012-01-01

    Nowadays, it is common for organizations to maintain collections of hundreds or even thousands of business processes. Techniques exist to search through such a collection, for business process models that are similar to a given query model. However, those techniques compare the query model to each

  7. Glove boxes and similar containments

    International Nuclear Information System (INIS)

    Anon.

    1975-01-01

    According to the present invention a glove box or similar containment is provided with an exhaust system including a vortex amplifier venting into the system, the vortex amplifier also having its main inlet in fluid flow connection with the containment and a control inlet in fluid flow connection with the atmosphere outside the containment. (U.S.)

  8. Humans have evolved specialized skills of social cognition: the cultural intelligence hypothesis.

    Science.gov (United States)

    Herrmann, Esther; Call, Josep; Hernàndez-Lloreda, Maráa Victoria; Hare, Brian; Tomasello, Michael

    2007-09-07

    Humans have many cognitive skills not possessed by their nearest primate relatives. The cultural intelligence hypothesis argues that this is mainly due to a species-specific set of social-cognitive skills, emerging early in ontogeny, for participating and exchanging knowledge in cultural groups. We tested this hypothesis by giving a comprehensive battery of cognitive tests to large numbers of two of humans' closest primate relatives, chimpanzees and orangutans, as well as to 2.5-year-old human children before literacy and schooling. Supporting the cultural intelligence hypothesis and contradicting the hypothesis that humans simply have more "general intelligence," we found that the children and chimpanzees had very similar cognitive skills for dealing with the physical world but that the children had more sophisticated cognitive skills than either of the ape species for dealing with the social world.

  9. An Alfven eigenmode similarity experiment

    International Nuclear Information System (INIS)

    Heidbrink, W W; Fredrickson, E; Gorelenkov, N N; Hyatt, A W; Kramer, G; Luo, Y

    2003-01-01

    The major radius dependence of Alfven mode stability is studied by creating plasmas with similar minor radius, shape, magnetic field (0.5 T), density (n_e ≅ 3×10^19 m^-3), electron temperature (1.0 keV) and beam ion population (near-tangential 80 keV deuterium injection) on both NSTX and DIII-D. The major radius of NSTX is half the major radius of DIII-D. The super-Alfvenic beam ions that drive the modes have overlapping values of v_f/v_A in the two devices. Observed beam-driven instabilities include toroidicity-induced Alfven eigenmodes (TAE). The stability threshold for the TAE is similar in the two devices. As expected theoretically, the most unstable toroidal mode number n is larger in DIII-D.

  10. Log-periodic self-similarity: an emerging financial law?

    OpenAIRE

    S. Drozdz; F. Grummer; F. Ruf; J. Speth

    2002-01-01

    A hypothesis that the financial log-periodicity, cascading self-similarity through various time scales, carries signatures of a law is pursued. It is shown that the most significant historical financial events can be classified amazingly well using a single and unique value of the preferred scaling factor lambda=2, which indicates that its real value should be close to this number. This applies even to a declining decelerating log-periodic phase. Crucial in this connection is identification o...

  11. Naked singularities in self-similar spherical gravitational collapse

    International Nuclear Information System (INIS)

    Ori, A.; Piran, T.

    1987-01-01

    We present general-relativistic solutions of self-similar spherical collapse of an adiabatic perfect fluid. We show that if the equation of state is soft enough (Γ − 1 ≪ 1), a naked singularity forms. The singularity resembles the shell-focusing naked singularities that arise in dust collapse. This solution significantly increases the range of matter fields that must be ruled out in order for the cosmic-censorship hypothesis to hold.

  12. Compressional Alfven Eigenmode Similarity Study

    Science.gov (United States)

    Heidbrink, W. W.; Fredrickson, E. D.; Gorelenkov, N. N.; Rhodes, T. L.

    2004-11-01

    NSTX and DIII-D are nearly ideal for Alfven eigenmode (AE) similarity experiments, having similar neutral beams, fast-ion to Alfven speed v_f/v_A, fast-ion pressure, and shape of the plasma, but with a factor of 2 difference in the major radius. Toroidicity-induced AE with ~100 kHz frequencies were compared in an earlier study [1]; this paper focuses on higher frequency AE with f ~ 1 MHz. Compressional AE (CAE) on NSTX have a polarization, dependence on the fast-ion distribution function, frequency scaling, and low-frequency limit that are qualitatively consistent with CAE theory [2]. Global AE (GAE) are also observed. On DIII-D, coherent modes in this frequency range are observed during low-field (0.6 T) similarity experiments. Experiments will compare the CAE stability limits on DIII-D with the NSTX stability limits, with the aim of determining whether CAE will be excited by alphas in a reactor. Predicted differences in the frequency splitting Δf between excited modes will also be used. [1] W.W. Heidbrink, et al., Plasma Phys. Control. Fusion 45, 983 (2003). [2] E.D. Fredrickson, et al., Princeton Plasma Physics Laboratory Report PPPL-3955 (2004).

  13. The Matter-Gravity Entanglement Hypothesis

    Science.gov (United States)

    Kay, Bernard S.

    2018-03-01

    I outline some of my work and results (some dating back to 1998, some more recent) on my matter-gravity entanglement hypothesis, according to which the entropy of a closed quantum gravitational system is equal to the system's matter-gravity entanglement entropy. The main arguments presented are: (1) that this hypothesis is capable of resolving what I call the second-law puzzle, i.e. the puzzle as to how the entropy increase of a closed system can be reconciled with the assumption of unitary time-evolution; (2) that the black hole information loss puzzle may be regarded as a special case of this second law puzzle and that therefore the same resolution applies to it; (3) that the black hole thermal atmosphere puzzle (which I recall) can be resolved by adopting a radically different-from-usual description of quantum black hole equilibrium states, according to which they are total pure states, entangled between matter and gravity in such a way that the partial states of matter and gravity are each approximately thermal equilibrium states (at the Hawking temperature); (4) that the Susskind-Horowitz-Polchinski string-theoretic understanding of black hole entropy as the logarithm of the degeneracy of a long string (which is the weak string coupling limit of a black hole) cannot be quite correct but should be replaced by a modified understanding according to which it is the entanglement entropy between a long string and its stringy atmosphere, when in a total pure equilibrium state in a suitable box, which (in line with (3)) goes over, at strong-coupling, to a black hole in equilibrium with its thermal atmosphere. The modified understanding in (4) is based on a general result, which I also describe, which concerns the likely state of a quantum system when it is weakly coupled to an energy-bath and the total state is a random pure state with a given energy. This result generalizes Goldstein et al.'s `canonical typicality' result to systems which are not necessarily small.

  15. Hypothesis test for synchronization: twin surrogates revisited.

    Science.gov (United States)

    Romano, M Carmen; Thiel, Marco; Kurths, Jürgen; Mergenthaler, Konstantin; Engbert, Ralf

    2009-03-01

    The method of twin surrogates has been introduced to test for phase synchronization of complex systems in the case of passive experiments. In this paper we derive new analytical expressions for the number of twins depending on the size of the neighborhood, as well as on the length of the trajectory. This allows us to determine the optimal parameters for the generation of twin surrogates. Furthermore, we determine the quality of the twin surrogates with respect to several linear and nonlinear statistics depending on the parameters of the method. In the second part of the paper we perform a hypothesis test for phase synchronization in the case of experimental data from fixational eye movements. These miniature eye movements have been shown to play a central role in neural information processing underlying the perception of static visual scenes. The high number of data sets (21 subjects and 30 trials per person) allows us to compare the generated twin surrogates with the "natural" surrogates that correspond to the different trials. We show that the generated twin surrogates reproduce very well all linear and nonlinear characteristics of the underlying experimental system. The synchronization analysis of fixational eye movements by means of twin surrogates reveals that the synchronization between the left and right eye is significant, indicating that either the centers in the brain stem generating fixational eye movements are closely linked, or, alternatively that there is only one center controlling both eyes.

  16. Marginal contrasts and the Contrastivist Hypothesis

    Directory of Open Access Journals (Sweden)

    Daniel Currie Hall

    2016-12-01

    The Contrastivist Hypothesis (CH; Hall 2007; Dresher 2009) holds that the only features that can be phonologically active in any language are those that serve to distinguish phonemes, which presupposes that phonemic status is categorical. Many researchers, however, demonstrate the existence of gradient relations. For instance, Hall (2009) quantifies these using the information-theoretic measure of entropy (unpredictability of distribution) and shows that a pair of sounds may have an entropy between 0 (totally predictable) and 1 (totally unpredictable). We argue that the existence of such intermediate degrees of contrastiveness does not make the CH untenable, but rather offers insight into contrastive hierarchies. The existence of a continuum does not preclude categorical distinctions: a categorical line can be drawn between zero entropy (entirely predictable, and thus by the CH phonologically inactive) and non-zero entropy (at least partially contrastive, and thus potentially phonologically active). But this does not mean that intermediate degrees of surface contrastiveness are entirely irrelevant to the CH; rather, we argue, they can shed light on how deeply ingrained a phonemic distinction is in the phonological system. As an example, we provide a case study from Pulaar [ATR] harmony, which has previously been claimed to be problematic for the CH.
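The entropy measure attributed to Hall (2009) above is, for a pair of sounds, the standard two-outcome (binary) entropy of their distribution. A minimal sketch (the function name is ours) showing how it spans the continuum from 0 to 1:

```python
import math

def pair_entropy(p):
    """Entropy in bits of a two-way choice made with probability p vs. 1 - p:
    0 = totally predictable (non-contrastive), 1 = totally unpredictable."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(pair_entropy(0.5))  # 1.0: totally unpredictable, fully contrastive
print(pair_entropy(0.0))  # 0.0: totally predictable, by the CH inactive
print(pair_entropy(0.9))  # ~0.47: intermediate degree of contrastiveness
```

Any non-zero value falls on the "at least partially contrastive" side of the categorical line the authors draw, however small it is.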

  17. The Stem Cell Hypothesis of Aging

    Directory of Open Access Journals (Sweden)

    Anna Meiliana

    2010-04-01

    BACKGROUND: There is probably no single way to age. Indeed, so far there is no single accepted explanation or mechanism of aging (although more than 300 theories have been proposed). There is an overall decline in tissue regenerative potential with age, and the question arises as to whether this is due to the intrinsic aging of stem cells or rather to the impairment of stem cell function in the aged tissue environment. CONTENT: Recent data suggest that we age, in part, because our self-renewing stem cells grow old as a result of heritable intrinsic events, such as DNA damage, as well as extrinsic forces, such as changes in their supporting niches. Mechanisms that suppress the development of cancer, such as senescence and apoptosis, which rely on telomere shortening and the activities of p53 and p16INK4a, may also induce an unwanted consequence: a decline in the replicative function of certain stem cell types with advancing age. This decreased regenerative capacity points to the stem cell hypothesis of aging. SUMMARY: Recent evidence suggests that we grow old partly because our stem cells grow old as a result of mechanisms that suppress the development of cancer over a lifetime. We believe that a further, more precise mechanistic understanding of this process will be required before this knowledge can be translated into human anti-aging therapies. KEYWORDS: stem cells, senescence, telomere, DNA damage, epigenetic, aging.

  18. Confabulation: Developing the 'emotion dysregulation' hypothesis.

    Science.gov (United States)

    Turnbull, Oliver H; Salas, Christian E

    2017-02-01

    Confabulations offer unique opportunities for establishing the neurobiological basis of delusional thinking. As regards causal factors, a review of the confabulation literature suggests that neither amnesia nor executive impairment can be the sole (or perhaps even the primary) cause of all delusional beliefs - though they may act in concert with other factors. A key perspective in the modern literature is that many delusions have an emotionally positive or 'wishful' element, that may serve to modulate or manage emotional experience. Some authors have referred to this perspective as the 'emotion dysregulation' hypothesis. In this article we review the theoretical underpinnings of this approach, and develop the idea by suggesting that the positive aspects of confabulatory states may have a role in perpetuating the imbalance between cognitive control and emotion. We draw on existing evidence from fields outside neuropsychology, to argue for three main causal factors: that positive emotions are related to more global or schematic forms of cognitive processing; that positive emotions influence the accuracy of memory recollection; and that positive emotions make people more susceptible to false memories. These findings suggest that the emotions that we want to feel (or do not want to feel) can influence the way we reconstruct past experiences and generate a sense of self - a proposition that bears on a unified theory of delusional belief states. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  19. Evolutionary hypothesis for Chiari type I malformation.

    Science.gov (United States)

    Fernandes, Yvens Barbosa; Ramina, Ricardo; Campos-Herrera, Cynthia Resende; Borges, Guilherme

    2013-10-01

    Chiari I malformation (CM-I) is classically defined as a cerebellar tonsillar herniation (≥5 mm) through the foramen magnum. A decreased posterior fossa volume, mainly due to basioccipital hypoplasia and sometimes platybasia, leads to posterior fossa overcrowding and consequently to cerebellar herniation. Regardless of radiological findings, embryological or genetic hypotheses, or any other postulations, the real cause of this malformation is not yet well elucidated and remains largely unknown. The aim of this paper is to approach CM-I from a broader and new perspective, conjoining anthropology, genetics and neurosurgery, with special focus on the substantial changes that have occurred in the posterior cranial base through human evolution. Important evolutionary allometric changes occurred during brain expansion, and genetic studies of human evolution have demonstrated an unexpectedly high rate of gene flow interchange and possibly interbreeding during this process. Based upon this review we hypothesize that CM-I may be the result of an evolutionary anthropological imprint, caused by evolving species populations that eventually met each other and mingled in the last 1.7 million years. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Environmental Kuznets Curve Hypothesis. A Survey

    International Nuclear Information System (INIS)

    Dinda, Soumyananda

    2004-01-01

    The Environmental Kuznets Curve (EKC) hypothesis postulates an inverted-U-shaped relationship between different pollutants and per capita income, i.e., environmental pressure increases up to a certain level as income goes up; after that, it decreases. An EKC actually reveals how a technically specified measurement of environmental quality changes as the fortunes of a country change. A sizeable literature on the EKC has grown in the recent period. The common point of all the studies is the assertion that environmental quality deteriorates at the early stages of economic development/growth and subsequently improves at the later stages. In other words, environmental pressure increases faster than income at early stages of development and slows down relative to GDP growth at higher income levels. This paper reviews some theoretical developments and empirical studies dealing with the EKC phenomenon. Possible explanations for the EKC are seen in (1) the progress of economic development, from clean agrarian economy to polluting industrial economy to clean service economy; and (2) the tendency of people with higher income to have a higher preference for environmental quality. Evidence for the existence of the EKC has been questioned from several corners. Only some air quality indicators, especially local pollutants, show evidence of an EKC. However, even where an EKC is empirically observed, there is still no agreement in the literature on the income level at which environmental degradation starts declining. This paper provides an overview of the EKC literature, background history, conceptual insights, policy implications, and the conceptual and methodological critique.

  1. DAMPs, ageing, and cancer: The 'DAMP Hypothesis'.

    Science.gov (United States)

    Huang, Jin; Xie, Yangchun; Sun, Xiaofang; Zeh, Herbert J; Kang, Rui; Lotze, Michael T; Tang, Daolin

    2015-11-01

    Ageing is a complex and multifactorial process characterized by the accumulation of many forms of damage at the molecular, cellular, and tissue level with advancing age. Ageing increases the risk of the onset of chronic inflammation-associated diseases such as cancer, diabetes, stroke, and neurodegenerative disease. In particular, ageing and cancer share some common origins and hallmarks such as genomic instability, epigenetic alteration, aberrant telomeres, inflammation and immune injury, reprogrammed metabolism, and degradation system impairment (including within the ubiquitin-proteasome system and the autophagic machinery). Recent advances indicate that damage-associated molecular pattern molecules (DAMPs) such as high mobility group box 1, histones, S100, and heat shock proteins play location-dependent roles inside and outside the cell. These provide interaction platforms at molecular levels linked to common hallmarks of ageing and cancer. They can act as inducers, sensors, and mediators of stress through individual plasma membrane receptors, intracellular recognition receptors (e.g., advanced glycosylation end product-specific receptors, AIM2-like receptors, RIG-I-like receptors, and NOD1-like receptors, and toll-like receptors), or following endocytic uptake. Thus, the DAMP Hypothesis is novel and complements other theories that explain the features of ageing. DAMPs represent ideal biomarkers of ageing and provide an attractive target for interventions in ageing and age-associated diseases. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Identity of Particles and Continuum Hypothesis

    Science.gov (United States)

    Berezin, Alexander A.

    2001-04-01

    Why are all electrons the same? Unlike other objects, particles and atoms (same isotopes) are forbidden to have individuality or a personal history (or to reveal their hidden variables, even if they do have them). Or at least, what we commonly call physics has so far been unable to disprove particle sameness (Berezin and Nakhmanson, Physics Essays, 1990). Consider two opposing hypotheses: (A) particles are indeed absolutely the same, or (B) they do have individuality, but it is beyond our capacity to demonstrate it. This dilemma sounds akin to the undecidability of the Continuum Hypothesis on the existence (or not) of intermediate cardinalities between the integers and the reals (P. Cohen): both its yes and its no are consistent. Thus, the (alleged) sameness of electrons and atoms may be a physical translation (embodiment) of this fundamental Goedelian undecidability. Experiments are unlikely to help: even if we find that all electrons are the same to within 30 decimal digits, could their masses (or charges) still differ in the 100th digit? Within (B), personalized, informationally rich (infinitely rich?) digital tails (starting at, say, the 100th decimal) may carry an individual record of each particle's history. Within (A), the parameters (m, q) are indeed exactly the same in all digits, and their sameness is based on some inherent (meta)physical principle akin to Platonism or Eddington-type numerology.

  3. Environmental Kuznets Curve Hypothesis. A Survey

    Energy Technology Data Exchange (ETDEWEB)

    Dinda, Soumyananda [Economic Research Unit, Indian Statistical Institute, 203, B.T. Road, Kolkata-108 (India)

    2004-08-01

    The Environmental Kuznets Curve (EKC) hypothesis postulates an inverted-U-shaped relationship between different pollutants and per capita income, i.e., environmental pressure increases up to a certain level as income goes up; after that, it decreases. An EKC actually reveals how a technically specified measurement of environmental quality changes as the fortunes of a country change. A sizeable literature on the EKC has grown in the recent period. The common point of all the studies is the assertion that environmental quality deteriorates at the early stages of economic development/growth and subsequently improves at the later stages. In other words, environmental pressure increases faster than income at early stages of development and slows down relative to GDP growth at higher income levels. This paper reviews some theoretical developments and empirical studies dealing with the EKC phenomenon. Possible explanations for the EKC are seen in (1) the progress of economic development, from clean agrarian economy to polluting industrial economy to clean service economy; and (2) the tendency of people with higher income to have a higher preference for environmental quality. Evidence for the existence of the EKC has been questioned from several corners. Only some air quality indicators, especially local pollutants, show evidence of an EKC. However, even where an EKC is empirically observed, there is still no agreement in the literature on the income level at which environmental degradation starts declining. This paper provides an overview of the EKC literature, background history, conceptual insights, policy implications, and the conceptual and methodological critique.

  4. Similarity analysis between quantum images

    Science.gov (United States)

    Zhou, Ri-Gui; Liu, XingAo; Zhu, Changming; Wei, Lai; Zhang, Xiafen; Ian, Hou

    2018-06-01

    Similarity analysis between quantum images is so essential in quantum image processing that it provides a foundation for other fields, such as quantum image matching and quantum pattern recognition. In this paper, a quantum scheme based on a novel quantum image representation and the quantum amplitude amplification algorithm is proposed. At the end of the paper, three examples and simulation experiments show that the measurement result must be 0 when two images are the same, and has a high probability of being 1 when two images are different.

  5. Similarity flows in relativistic hydrodynamics

    International Nuclear Information System (INIS)

    Blaizot, J.P.; Ollitrault, J.Y.

    1986-01-01

    In ultra-relativistic heavy ion collisions, one expects in particular to observe a deconfinement transition leading to the formation of a quark-gluon plasma. In the framework of the hydrodynamic model, experimental signatures of such a plasma may be sought as observable consequences of a first-order transition on the evolution of the system. In most of the possible scenarios, the phase transition is accompanied by discontinuities in the hydrodynamic flow, such as shock waves. The method presented in this paper has been developed to treat such discontinuous flows without too much numerical effort. It relies heavily on the use of similarity solutions of the hydrodynamic equations.

  6. What Constitutes Science and Scientific Evidence: Roles of Null Hypothesis Testing

    Science.gov (United States)

    Chang, Mark

    2017-01-01

    We briefly discuss the philosophical basis of science, causality, and scientific evidence, by introducing the hidden but most fundamental principle of science: the similarity principle. The principle's use in scientific discovery is illustrated with Simpson's paradox and other examples. In discussing the value of null hypothesis statistical…

  7. A NONPARAMETRIC HYPOTHESIS TEST VIA THE BOOTSTRAP RESAMPLING

    OpenAIRE

    Temel, Tugrul T.

    2001-01-01

    This paper adapts an already existing nonparametric hypothesis test to the bootstrap framework. The test uses the nonparametric kernel regression method to estimate a measure of distance between the models stated under the null hypothesis. The bootstrapped version of the test makes it possible to approximate the errors involved in the asymptotic hypothesis test. The paper also develops Mathematica code for the test algorithm.
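The resampling idea described here can be sketched generically: pool the data, resample under the null hypothesis, and count how often the resampled distance reaches the observed one. This toy version substitutes a simple mean-difference statistic for the paper's kernel-regression distance measure:

```python
import random

def bootstrap_pvalue(sample_a, sample_b, stat, n_boot=2000, seed=0):
    """Approximate p-value for H0: both samples share one distribution.
    `stat` maps (a, b) to a nonnegative distance; larger = more different."""
    rng = random.Random(seed)
    observed = stat(sample_a, sample_b)
    pooled = list(sample_a) + list(sample_b)
    n_a = len(sample_a)
    exceed = 0
    for _ in range(n_boot):
        # Resample the pooled data with replacement (null: no group difference).
        resample = [rng.choice(pooled) for _ in pooled]
        if stat(resample[:n_a], resample[n_a:]) >= observed:
            exceed += 1
    return exceed / n_boot

mean_diff = lambda a, b: abs(sum(a) / len(a) - sum(b) / len(b))
a = [0.10, 0.20, 0.15, 0.30, 0.25]
b = [1.10, 1.30, 1.20, 1.40, 1.00]
print(bootstrap_pvalue(a, b, mean_diff))  # near 0: reject H0
```

Replacing `mean_diff` with a kernel-regression-based distance would recover the spirit of the test adapted in the paper; the bootstrap loop itself is unchanged.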

  8. Self-similar gravitational clustering

    International Nuclear Information System (INIS)

    Efstathiou, G.; Fall, S.M.; Hogan, C.

    1979-01-01

    The evolution of gravitational clustering is considered and several new scaling relations are derived for the multiplicity function. These include generalizations of the Press-Schechter theory to different densities and cosmological parameters. The theory is then tested against multiplicity function and correlation function estimates for a series of 1000-body experiments. The results are consistent with the theory and show some dependence on initial conditions and cosmological density parameter. The statistical significance of the results, however, is fairly low because of several small number effects in the experiments. There is no evidence for a non-linear bootstrap effect or a dependence of the multiplicity function on the internal dynamics of condensed groups. Empirical estimates of the multiplicity function by Gott and Turner have a feature near the characteristic luminosity predicted by the theory. The scaling relations allow the inference from estimates of the galaxy luminosity function that galaxies must have suffered considerable dissipation if they originally formed from a self-similar hierarchy. A method is also developed for relating the multiplicity function to similar measures of clustering, such as those of Bhavsar, for the distribution of galaxies on the sky. These are shown to depend on the luminosity function in a complicated way. (author)

  9. Updating the mild encephalitis hypothesis of schizophrenia.

    Science.gov (United States)

    Bechter, K

    2013-04-05

    Schizophrenia seems to be a heterogeneous disorder. Emerging evidence indicates that low level neuroinflammation (LLNI) may not occur infrequently. Many infectious agents with low overall pathogenicity are risk factors for psychoses including schizophrenia and for autoimmune disorders. According to the mild encephalitis (ME) hypothesis, LLNI represents the core pathogenetic mechanism in a schizophrenia subgroup that has syndromal overlap with other psychiatric disorders. ME may be triggered by infections, autoimmunity, toxicity, or trauma. A 'late hit' and gene-environment interaction are required to explain major findings about schizophrenia, and both aspects would be consistent with the ME hypothesis. Schizophrenia risk genes stay rather constant within populations despite a resulting low number of progeny; this may result from advantages associated with risk genes, e.g., an improved immune response, which may act protectively within changing environments, although they are associated with the disadvantage of increased susceptibility to psychotic disorders. Specific schizophrenic symptoms may arise with instances of LLNI when certain brain functional systems are involved, in addition to being shaped by pre-existing liability factors. Prodrome phase and the transition to a diseased status may be related to LLNI processes emerging and varying over time. The variability in the course of schizophrenia resembles the varying courses of autoimmune disorders, which result from three required factors: genes, the environment, and the immune system. Preliminary criteria for subgrouping neurodevelopmental, genetic, ME, and other types of schizophrenias are provided. A rare example of ME schizophrenia may be observed in Borna disease virus infection. Neurodevelopmental schizophrenia due to early infections has been estimated by others to explain approximately 30% of cases, but the underlying pathomechanisms of transition to disease remain in question. LLNI (e.g. from

  10. [Psychodynamic hypothesis about suicidality in elderly men].

    Science.gov (United States)

    Lindner, Reinhard

    2010-08-01

    Old men are overrepresented among suicides as a whole. In contrast, only very few elderly men find their way to specialised treatment facilities, and the elderly accept psychotherapy more rarely than younger persons. Presentations on the psychodynamics of suicidality in old men are therefore rare and mostly casuistic. By means of a stepwise reconstructable qualitative case comparison of five randomly chosen elderly suicidal men with ideal types of suicidal (younger) men concerning biography, suicidal symptoms and transference, psychodynamic hypotheses about suicidality in elderly men are developed. All patients came into psychotherapy in a specialised academic out-patient clinic for the psychodynamic treatment of acute and chronic suicidality. The five elderly suicidal men predominantly lived in long-term, conflict-laden sexual relationships and also had ambivalent relationships with their children. Suicidality in old age refers to lifelong intrapsychic conflicts concerning (male) identity, self-esteem, and a core conflict between wishes for fusion and for separation. The body takes on a central role in suicidal experience, being a defensive instance modified by age and/or physical illness, which brings aggressive and envious impulses to consciousness, but also feelings of emptiness and insecurity, which in turn have to be warded off by projection into the body. In the transference relationship there is, on the one hand, the regular transference and, on the other, an age-specific reversed transference, each with its counter-transference reactions. The chosen methodological approach serves the systematic generation of hypotheses with a higher degree of evidence than hypotheses generated from single case studies. Georg Thieme Verlag KG Stuttgart - New York.

  11. Atopic dermatitis and the hygiene hypothesis revisited.

    Science.gov (United States)

    Flohr, Carsten; Yeo, Lindsey

    2011-01-01

    We published a systematic review on atopic dermatitis (AD) and the hygiene hypothesis in 2005. Since then, the body of literature has grown significantly. We therefore repeated our systematic review to examine the evidence from population-based studies for an association between AD risk and specific infections, childhood immunizations, the use of antibiotics and environmental exposures that lead to a change in microbial burden. Medline was searched from 1966 until June 2010 to identify relevant studies. We found an additional 49 papers suitable for inclusion. There is evidence to support an inverse relationship between AD and endotoxin, early day care, farm animal and dog exposure in early life. Cat exposure in the presence of skin barrier impairment is positively associated with AD. Helminth infection at least partially protects against AD. This is not the case for viral and bacterial infections, but consumption of unpasteurized farm milk seems protective. Routine childhood vaccinations have no effect on AD risk. The positive association between viral infections and AD found in some studies appears confounded by antibiotic prescription, which has been consistently associated with an increase in AD risk. There is convincing evidence for an inverse relationship between helminth infections and AD, but not for other pathogens. The protective effect seen with early day care, endotoxin, unpasteurized farm milk and animal exposure is likely to be due to a general increase in exposure to non-pathogenic microbes. This would also explain the risk increase associated with the use of broad-spectrum antibiotics. Future studies should assess skin barrier gene mutation carriage and phenotypic skin barrier impairment, as gene-environment interactions are likely to impact on AD risk. Copyright © S. Karger AG, Basel.

  12. Proform-Antecedent Linking in Individuals with Agrammatic Aphasia: A Test of the Intervener Hypothesis.

    Science.gov (United States)

    Engel, Samantha; Shapiro, Lewis P; Love, Tracy

    2018-02-01

    To evaluate processing and comprehension of pronouns and reflexives in individuals with agrammatic (Broca's) aphasia and age-matched control participants. Specifically, we evaluate processing and comprehension patterns in terms of a specific hypothesis, the Intervener Hypothesis, which posits that the difficulty of individuals with agrammatic (Broca's) aphasia results from similarity-based interference caused by the presence of an intervening NP between two elements of a dependency chain. We used an eye tracking-while-listening paradigm to investigate real-time processing (Experiment 1) and a sentence-picture matching task to investigate final interpretive comprehension (Experiment 2) of sentences containing proforms in complement phrase and subject relative constructions. Individuals with agrammatic aphasia demonstrated a greater proportion of gazes to the correct referent of reflexives relative to pronouns and significantly greater comprehension accuracy of reflexives relative to pronouns. These results provide support for the Intervener Hypothesis, previous support for which comes from studies of Wh- questions and unaccusative verbs, and we argue that this account provides an explanation for the deficits of individuals with agrammatic aphasia across a growing set of sentence constructions. The current study extends this hypothesis beyond filler-gap dependencies to referential dependencies and allows us to refine the hypothesis in terms of the structural constraints that meet the description of the Intervener Hypothesis.

  13. Cosmological Constant and the Final Anthropic Hypothesis

    OpenAIRE

    Cirkovic, Milan M.; Bostrom, Nick

    1999-01-01

    The influence of recent detections of a finite vacuum energy ("cosmological constant") on our formulation of anthropic conjectures, particularly the so-called Final Anthropic Principle is investigated. It is shown that non-zero vacuum energy implies the onset of a quasi-exponential expansion of our causally connected domain ("the universe") at some point in the future, a stage similar to the inflationary expansion at the very beginning of time. The transition to this future inflationary phase...

  14. Seniority bosons from similarity transformations

    International Nuclear Information System (INIS)

    Geyer, H.B.

    1986-01-01

    The requirement of associating in the boson space seniority with twice the number of non-s bosons defines a similarity transformation which re-expresses the Dyson pair boson images in terms of seniority bosons. In particular the fermion S-pair creation operator is mapped onto an operator which, unlike the pair boson image, does not change the number of non-s bosons. The original results of Otsuka, Arima and Iachello are recovered by this procedure while at the same time they are generalized to include g-bosons or even bosons with J>4 as well as any higher order boson terms. Furthermore the seniority boson images are valid for an arbitrary number of d- or g-bosons - a result which is not readily obtainable within the framework of the usual Marumori- or OAI-method

  15. Lynn White Jr. and the greening-of-religion hypothesis.

    Science.gov (United States)

    Taylor, Bron; Van Wieren, Gretel; Zaleha, Bernard Daley

    2016-10-01

    Lynn White Jr.'s "The Historical Roots of Our Ecologic Crisis," which was published in Science in 1967, has played a critical role in precipitating interdisciplinary environmental studies. Although White advances a multifaceted argument, most respondents focus on his claim that the Judeo-Christian tradition, especially Christianity, has promoted anthropocentric attitudes and environmentally destructive behaviors. Decades later, some scholars argue contrarily that Christianity in particular and the world's predominant religions in general are becoming more environmentally friendly, a claim known as the greening-of-religion hypothesis. To test these claims, we conducted a comprehensive review of over 700 articles (historical, qualitative, and quantitative) that are pertinent to them. Although definitive conclusions are difficult, we identified many themes and dynamics that hinder environmental understanding and mobilization, including conservative theological orientations and beliefs about the role of divine agency in preventing or promoting natural events, whether the religion is an Abrahamic tradition or originated in Asia. On balance, we found the thrust of White's thesis is supported, whereas the greening-of-religion hypothesis is not. We also found that indigenous traditions often foster proenvironmental perceptions. This finding suggests that indigenous traditions may be more likely to be proenvironmental than other religious systems and that some nature-based cosmologies and value systems function similarly. Although we conclude White's thesis and subsequent claims are largely borne out, additional research is needed to better understand under what circumstances and communication strategies religious or other individuals and groups may be more effectively mobilized to respond to contemporary environmental challenges. © 2016 Society for Conservation Biology.

  16. A hypothesis of coevolution between cooperation and responses to inequity

    Directory of Open Access Journals (Sweden)

    Sarah F Brosnan

    2011-04-01

    Recent evidence demonstrates that humans are not the only species to respond negatively to inequitable outcomes which are to their disadvantage. Several species respond negatively if they subsequently receive a less good reward than a social partner for completing the same task. While these studies suggest that the negative response to inequity is not a uniquely human behavior, they do not provide a functional explanation for the emergence of these responses due to similar characteristics among these species. Emerging data support the hypothesis that an aversion to inequity is a mechanism to promote successful long-term cooperative relationships amongst non-kin. In this paper, I discuss several converging lines of evidence which illustrate the need to further evaluate this relationship. First, cooperation can survive modest inequity; in explicitly cooperative interactions, individuals are willing to continue to cooperate despite inequitable outcomes as long as the partner's overall behavior is equitable. Second, the context of inequity affects reactions to it in ways which support the idea that joint efforts lead to an expectation of joint payoffs. Finally, comparative studies indicate a link between the degree and extent of cooperation between unrelated individuals in a species and that species' response to inequitable outcomes. This latter line of evidence indicates that this behavior evolved in conjunction with cooperation and may represent an adaptation to increase the payoffs associated with cooperative interactions. Together these data inform a testable working hypothesis for understanding decision-making in the context of inequity and provide a new, comparative framework for evaluating decision-making behavior.

  17. The zinc dyshomeostasis hypothesis of Alzheimer's disease.

    Directory of Open Access Journals (Sweden)

    Travis J A Craddock

    Full Text Available Alzheimer's disease (AD is the most common form of dementia in the elderly. Hallmark AD neuropathology includes extracellular amyloid plaques composed largely of the amyloid-β protein (Aβ, intracellular neurofibrillary tangles (NFTs composed of hyper-phosphorylated microtubule-associated protein tau (MAP-tau, and microtubule destabilization. Early-onset autosomal dominant AD genes are associated with excessive Aβ accumulation, however cognitive impairment best correlates with NFTs and disrupted microtubules. The mechanisms linking Aβ and NFT pathologies in AD are unknown. Here, we propose that sequestration of zinc by Aβ-amyloid deposits (Aβ oligomers and plaques not only drives Aβ aggregation, but also disrupts zinc homeostasis in zinc-enriched brain regions important for memory and vulnerable to AD pathology, resulting in intra-neuronal zinc levels, which are either too low, or excessively high. To evaluate this hypothesis, we 1 used molecular modeling of zinc binding to the microtubule component protein tubulin, identifying specific, high-affinity zinc binding sites that influence side-to-side tubulin interaction, the sensitive link in microtubule polymerization and stability. We also 2 performed kinetic modeling showing zinc distribution in extra-neuronal Aβ deposits can reduce intra-neuronal zinc binding to microtubules, destabilizing microtubules. Finally, we 3 used metallomic imaging mass spectrometry (MIMS to show anatomically-localized and age-dependent zinc dyshomeostasis in specific brain regions of Tg2576 transgenic, mice, a model for AD. We found excess zinc in brain regions associated with memory processing and NFT pathology. Overall, we present a theoretical framework and support for a new theory of AD linking extra-neuronal Aβ amyloid to intra-neuronal NFTs and cognitive dysfunction. The connection, we propose, is based on β-amyloid-induced alterations in zinc ion concentration inside neurons affecting stability of

  18. The zinc dyshomeostasis hypothesis of Alzheimer's disease.

    Science.gov (United States)

    Craddock, Travis J A; Tuszynski, Jack A; Chopra, Deepak; Casey, Noel; Goldstein, Lee E; Hameroff, Stuart R; Tanzi, Rudolph E

    2012-01-01

    Alzheimer's disease (AD) is the most common form of dementia in the elderly. Hallmark AD neuropathology includes extracellular amyloid plaques composed largely of the amyloid-β protein (Aβ), intracellular neurofibrillary tangles (NFTs) composed of hyper-phosphorylated microtubule-associated protein tau (MAP-tau), and microtubule destabilization. Early-onset autosomal dominant AD genes are associated with excessive Aβ accumulation, however cognitive impairment best correlates with NFTs and disrupted microtubules. The mechanisms linking Aβ and NFT pathologies in AD are unknown. Here, we propose that sequestration of zinc by Aβ-amyloid deposits (Aβ oligomers and plaques) not only drives Aβ aggregation, but also disrupts zinc homeostasis in zinc-enriched brain regions important for memory and vulnerable to AD pathology, resulting in intra-neuronal zinc levels, which are either too low, or excessively high. To evaluate this hypothesis, we 1) used molecular modeling of zinc binding to the microtubule component protein tubulin, identifying specific, high-affinity zinc binding sites that influence side-to-side tubulin interaction, the sensitive link in microtubule polymerization and stability. We also 2) performed kinetic modeling showing zinc distribution in extra-neuronal Aβ deposits can reduce intra-neuronal zinc binding to microtubules, destabilizing microtubules. Finally, we 3) used metallomic imaging mass spectrometry (MIMS) to show anatomically-localized and age-dependent zinc dyshomeostasis in specific brain regions of Tg2576 transgenic mice, a model for AD. We found excess zinc in brain regions associated with memory processing and NFT pathology. Overall, we present a theoretical framework and support for a new theory of AD linking extra-neuronal Aβ amyloid to intra-neuronal NFTs and cognitive dysfunction. The connection, we propose, is based on β-amyloid-induced alterations in zinc ion concentration inside neurons affecting stability of polymerized

  19. Alaska, Gulf spills share similarities

    International Nuclear Information System (INIS)

    Usher, D.

    1991-01-01

    The accidental Exxon Valdez oil spill in Alaska and the deliberate dumping of crude oil into the Persian Gulf as a tactic of war contain both glaring differences and surprising similarities. Public reaction and response were much greater to the Exxon Valdez spill in pristine Prince William Sound than to the war-related tragedy in the Persian Gulf. More than 12,000 workers helped in the Alaskan cleanup; only 350 have been involved in Kuwait. But in both instances, environmental damages appear to be less than anticipated. Nature's highly effective self-cleansing action is primarily responsible for minimizing the damages. One positive action growing out of the two incidents is increased international cooperation and participation in oil-spill clean-up efforts. In 1990, in the aftermath of the Exxon Valdez spill, 94 nations signed an international accord on cooperation in future spills. The spills can be historic environmental landmarks leading to creation of more sophisticated response systems worldwide

  20. Personality traits across countries: Support for similarities rather than differences.

    Science.gov (United States)

    Kajonius, Petri; Mac Giolla, Erik

    2017-01-01

    In the current climate of migration and globalization, personality characteristics of individuals from different countries have received a growing interest. Previous research has established reliable differences in personality traits across countries. The present study extends this research by examining 30 personality traits in 22 countries, based on an online survey in English with large national samples (NTotal = 130,602). The instrument used was a comprehensive, open-source measure of the Five Factor Model (FFM) (IPIP-NEO-120). We postulated that differences in personality traits between countries would be small, labeling this a Similarities Hypothesis. We found support for this in three stages. First, similarities across countries were observed for model fits for each of the five personality trait structures. Second, within-country sex differences for the five personality traits showed similar patterns across countries. Finally, the overall contribution of country to personality traits was less than 2%. In other words, the relationship between a country and an individual's personality traits, however interesting, is small. We conclude that the most parsimonious explanation for the current and past findings is a cross-country personality Similarities Hypothesis.

  1. von Neumann's hypothesis concerning coherent states

    International Nuclear Information System (INIS)

    Zak, J

    2003-01-01

    An orthonormal basis of modified coherent states is constructed. Each member of the basis is an infinite sum of coherent states on a von Neumann lattice. A single state is assigned to each unit cell of area h (Planck constant) in the phase plane. The uncertainties of the coordinate x and the square of the momentum p² for these states are shown to be similar to those for the usual coherent states. Expansions in the newly established set are discussed and it is shown that any function in the kq-representation can be written as a sum of two fixed kq-functions. Approximate commuting operators for x and p² are defined on a lattice in the phase plane according to von Neumann's prescription. (letter to the editor)

  2. Equilibrium-point control hypothesis examined by measured arm stiffness during multijoint movement.

    Science.gov (United States)

    Gomi, H; Kawato

    1996-04-05

    For the last 20 years, it has been hypothesized that well-coordinated, multijoint movements are executed without complex computation by the brain, with the use of springlike muscle properties and peripheral neural feedback loops. However, it has been technically and conceptually difficult to examine this "equilibrium-point control" hypothesis directly in physiological or behavioral experiments. A high-performance manipulandum was developed and used here to measure human arm stiffness, the magnitude of which during multijoint movement is important for this hypothesis. Here, the equilibrium-point trajectory was estimated from the measured stiffness, the actual trajectory, and the generated torque. Its velocity profile differed from that of the actual trajectory. These results argue against the hypothesis that the brain sends as a motor command only an equilibrium-point trajectory similar to the actual trajectory.
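
    The estimation step described above can be sketched numerically. Under the springlike muscle model assumed by the equilibrium-point framework, torque is proportional to the gap between the equilibrium posture and the actual posture, τ = K(x_eq − x), so the equilibrium trajectory follows from the measured stiffness K, the actual trajectory x, and the generated torque τ. The numbers below are illustrative placeholders, not data from the study.

```python
import numpy as np

def equilibrium_trajectory(x, tau, K):
    """Estimate the equilibrium-point trajectory x_eq from the actual
    trajectory x (T x joints), generated torque tau (T x joints), and a
    constant stiffness matrix K, assuming tau = K @ (x_eq - x)."""
    K_inv = np.linalg.inv(K)
    # Row-wise: x_eq[t] = x[t] + K^{-1} @ tau[t]; K_inv.T handles the batch.
    return x + tau @ K_inv.T

# Illustrative two-joint example (stiffness in N*m/rad, torque in N*m).
K = np.array([[15.0, 5.0],
              [5.0, 10.0]])
x = np.array([[0.0, 0.0], [0.1, 0.05], [0.2, 0.1]])    # joint angles (rad)
tau = np.array([[1.0, 0.5], [2.0, 1.0], [1.5, 0.75]])  # joint torques

x_eq = equilibrium_trajectory(x, tau, K)
print(x_eq)
```

    Comparing the velocity profile of `x_eq` against that of `x` is exactly the comparison the study reports: if they differ, the brain cannot be sending only an equilibrium trajectory shaped like the actual movement.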

  3. Hypothesis of linear relaxation and ion mobility in neutral gases

    International Nuclear Information System (INIS)

    Naudy, Michel

    1980-01-01

    The objective of this research thesis is to propose a theory of ion mobility in neutral gases, based on the hypothesis of linear relaxation, in order to obtain simple formulas in good agreement with experiment. The author first presents some generalities on ion mobility, such as its history and the quantities of interest, along with some notions about how experimental results are obtained, and then reviews the theories proposed from 1903 to 1976. He reports two tests. The first, based on the Boltzmann equation and a method of moments, requires the use of a computer but does not give results in good agreement with experiment. For the second test, the author therefore used a kinetic equation similar to one used in the study of neutral gas viscosity. Applying this kinetic equation to ion mobility in neutral gases, the author shows that, with a Sutherland potential, a simple formula can be obtained whose results can be computed with a pocket calculator. Moreover, these results agree with experimental values over a portion of the experimental range. To reach agreement over the whole experimental range, one possibility was to use, in some cases, a more realistic interaction potential; however, a computer was then necessary

  4. A Blind Test of the Younger Dryas Impact Hypothesis.

    Directory of Open Access Journals (Sweden)

    Vance Holliday

    The Younger Dryas Impact Hypothesis (YDIH) states that North America was devastated by some sort of extraterrestrial event ~12,800 calendar years before present. Two fundamental questions persist in the debate over the YDIH: Can the results of analyses for purported impact indicators be reproduced? And are the indicators unique to the lower YD boundary (YDB), i.e., ~12.8k cal yrs BP? A test reported here presents the results of analyses that address these questions. Two different labs analyzed identical splits of samples collected at, above, and below the ~12.8ka zone at the Lubbock Lake archaeological site (LL) in northwest Texas. Both labs reported similar variation in levels of magnetic micrograins (>300 mg/kg >12.8ka and <11.5ka, but <150 mg/kg 12.8ka to 11.5ka). Analysis for magnetic microspheres in one split, reported elsewhere, produced very low to nonexistent levels throughout the section. In the other split, reported here, the levels of magnetic microspherules and nanodiamonds are low or nonexistent at, below, and above the YDB, with the notable exception of a sample <11,500 cal years old. In that sample the claimed impact proxies were recovered at abundances two to four orders of magnitude above those from the other samples. Reproducibility of at least some analyses is problematic. In particular, no standard criteria exist for identification of magnetic spheres. Moreover, the purported impact proxies are not unique to the YDB.

  5. The recent similarity hypotheses to describe water infiltration into homogeneous soils

    OpenAIRE

    Reichardt,Klaus; Timm,Luís Carlos; Dourado-Neto,Durval

    2016-01-01

    A similarity hypothesis recently presented to describe horizontal infiltration into homogeneous soils, developed for coarse-textured soils like sieved marine sand, implies that the soil water retention function θ(h) is the mirror image of an extended Boltzmann transform function θ(λ²). A second hypothesis applicable to vertical infiltration suggests that the soil water retention function θ(h) is also the mirror image of the soil water profile θ(z). Using prev...

  6. Men’s Perception of Raped Women: Test of the Sexually Transmitted Disease Hypothesis and the Cuckoldry Hypothesis

    Directory of Open Access Journals (Sweden)

    Prokop Pavol

    2016-06-01

    Rape is a recurrent adaptive problem of female humans and of females of a number of non-human animals, and it has various physiological and reproductive costs to the victim. The costs of rape are furthermore exaggerated by social rejection and blaming of the victim, particularly by men. The negative perception of raped women by men has received little attention from an evolutionary perspective. Across two independent studies, we investigated whether the risk of sexually transmitted diseases (the STD hypothesis, Hypothesis 1) or paternity uncertainty (the cuckoldry hypothesis, Hypothesis 2) influences the negative perception of raped women by men. Raped women received lower attractiveness scores than non-raped women, especially for long-term mate attractiveness. The perceived attractiveness of raped women was not influenced by the presence of experimentally manipulated STD cues on the faces of putative rapists. Women raped by three men received lower attractiveness scores than women raped by one man. These results provide stronger support for the cuckoldry hypothesis (Hypothesis 2) than for the STD hypothesis (Hypothesis 1). Single men perceived raped women as more attractive than men in a committed relationship did (Hypothesis 3), suggesting that mating opportunities mediate men's perception of victims of rape. Overall, our results suggest that the negative perception of victims of rape by men is underlain by the risk of cuckoldry rather than by the fear of disease transmission.

  7. New Hypothesis for SOFC Ceramic Oxygen Electrode Mechanisms

    DEFF Research Database (Denmark)

    Mogensen, Mogens Bjerg; Chatzichristodoulou, Christodoulos; Graves, Christopher R.

    2016-01-01

    A new hypothesis for the electrochemical reaction mechanism in solid oxide cell ceramic oxygen electrodes is proposed based on literature including our own results. The hypothesis postulates that the observed thin layers of SrO-La2O3 on top of ceramic perovskite and other Ruddlesden-Popper...

  8. Assess the Critical Period Hypothesis in Second Language Acquisition

    Science.gov (United States)

    Du, Lihong

    2010-01-01

    The Critical Period Hypothesis aims to investigate the reason for significant difference between first language acquisition and second language acquisition. Over the past few decades, researchers carried out a series of studies to test the validity of the hypothesis. Although there were certain limitations in these studies, most of their results…

  9. Dynamical agents' strategies and the fractal market hypothesis

    Czech Academy of Sciences Publication Activity Database

    Vácha, Lukáš; Vošvrda, Miloslav

    2005-01-01

    Roč. 14, č. 2 (2005), s. 172-179 ISSN 1210-0455 Grant - others:GA UK(CZ) 454/2004/A EK/FSV Institutional research plan: CEZ:AV0Z10750506 Keywords : efficient market hypothesis * fractal market hypothesis * agent's investment horizons Subject RIV: AH - Economics

  10. An Exercise for Illustrating the Logic of Hypothesis Testing

    Science.gov (United States)

    Lawton, Leigh

    2009-01-01

    Hypothesis testing is one of the more difficult concepts for students to master in a basic, undergraduate statistics course. Students often are puzzled as to why statisticians simply don't calculate the probability that a hypothesis is true. This article presents an exercise that forces students to lay out on their own a procedure for testing a…
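
    The logic such an exercise builds up (fix a null hypothesis, derive the sampling distribution of a statistic under it, then ask how surprising the observed value is) can be illustrated with a short simulation. The coin-flip setup below is a generic illustration, not the exercise from the article.

```python
import random

random.seed(42)

# Null hypothesis H0: the coin is fair (p = 0.5).
# Observation: 60 heads in 100 flips. How surprising is that under H0?
n_flips, observed_heads = 100, 60

def heads_in_sample(n, p=0.5):
    """Flip a coin with P(heads) = p a total of n times; count heads."""
    return sum(random.random() < p for _ in range(n))

# Simulate the sampling distribution of the head count under H0.
null_counts = [heads_in_sample(n_flips) for _ in range(10_000)]

# p-value: probability under H0 of a result at least as extreme (one-sided).
p_value = sum(c >= observed_heads for c in null_counts) / len(null_counts)
print(f"one-sided p-value ~ {p_value:.3f}")
```

    The p-value answers "how likely is data this extreme if H0 is true?", not "how likely is H0 true?", which is precisely the distinction the article says puzzles students.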

  11. A default Bayesian hypothesis test for ANOVA designs

    NARCIS (Netherlands)

    Wetzels, R.; Grasman, R.P.P.P.; Wagenmakers, E.J.

    2012-01-01

    This article presents a Bayesian hypothesis test for analysis of variance (ANOVA) designs. The test is an application of standard Bayesian methods for variable selection in regression models. We illustrate the effect of various g-priors on the ANOVA hypothesis test. The Bayesian test for ANOVA

  12. Adaptation hypothesis of biological efficiency of ionizing radiation

    International Nuclear Information System (INIS)

    Kudritskij, Yu.K.; Georgievskij, A.B.; Karpov, V.I.

    1992-01-01

    The adaptation hypothesis of the biological efficiency of ionizing radiation is based on acknowledging that the fundamental laws and principles of biology (the unity of biota and their media, evolution, and adaptation) remain invariant for radiobiology. The basic arguments for the validity of the adaptation hypothesis, and its correspondence to the requirements imposed on scientific hypotheses, are presented

  13. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper discusses the method of testing a nonlinear hypothesis using an iterative Nonlinear Least Squares (NLLS) estimator, as explained by Takeshi Amemiya [1]. In the present research paper, however, a modified Wald test statistic due to Engle, Robert [6] is proposed to test the nonlinear hypothesis using the iterative NLLS estimator. An alternative method for testing a nonlinear hypothesis, using an iterative NLLS estimator based on nonlinear studentized residuals, has been proposed. In this research article an innovative method of testing a nonlinear hypothesis using an iterative restricted NLLS estimator is also derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator proposed by Jenrich [8]. The main purpose of this paper is to provide innovative methods of testing a nonlinear hypothesis using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors and also studied the problem of heteroscedasticity with reference to nonlinear regression models, with a suitable illustration. William Grene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
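
    A generic sketch of the underlying idea (not the paper's own derivation or test statistic): fit a nonlinear model by least squares, then test a nonlinear restriction g(θ) = 0 with the Wald statistic W = g(θ̂)ᵀ [G V Gᵀ]⁻¹ g(θ̂), where G is the Jacobian of g at θ̂ and V the estimated covariance of θ̂. The model, restriction, and data here are invented for illustration; SciPy's `curve_fit` stands in for the iterative NLLS step.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Nonlinear model: y = a * exp(b * x) + noise.
def model(x, a, b):
    return a * np.exp(b * x)

x = np.linspace(0.0, 2.0, 200)
y = model(x, 2.0, 0.5) + rng.normal(0.0, 0.1, x.size)

# Iterative NLLS fit: returns the estimate theta and its covariance V.
theta, V = curve_fit(model, x, y, p0=[1.0, 0.1])
a_hat, b_hat = theta

# Nonlinear hypothesis H0: g(theta) = a*b - 1 = 0 (true here: 2.0 * 0.5 = 1).
g = np.array([a_hat * b_hat - 1.0])
G = np.array([[b_hat, a_hat]])                 # Jacobian of g w.r.t. (a, b)
W = float(g @ np.linalg.inv(G @ V @ G.T) @ g)  # Wald statistic
p_value = chi2.sf(W, df=1)                     # W ~ chi2(1) under H0
print(f"W = {W:.3f}, p = {p_value:.3f}")
```

    Because the restriction g is nonlinear in θ, the delta method supplies the variance of g(θ̂) via the Jacobian G; this is the step that distinguishes a nonlinear Wald test from the familiar linear one.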

  14. The Younger Dryas impact hypothesis: A critical review

    NARCIS (Netherlands)

    van Hoesel, A.; Hoek, W.Z.; Pennock, G.M.; Drury, Martyn

    2014-01-01

    The Younger Dryas impact hypothesis suggests that multiple extraterrestrial airbursts or impacts resulted in the Younger Dryas cooling, extensive wildfires, megafaunal extinctions and changes in human population. After the hypothesis was first published in 2007, it gained much criticism, as the

  15. Development of similarity theory for control systems

    Science.gov (United States)

    Myshlyaev, L. P.; Evtushenko, V. F.; Ivushkin, K. A.; Makarov, G. V.

    2018-05-01

    The area of effective application of the traditional similarity theory and the necessity of its development for control systems are discussed. The main statements underlying the similarity theory of control systems are given. The conditions for the similarity of control systems and the need for similarity control are formulated. Methods and algorithms for estimating and controlling the similarity of control systems are presented, together with the results of research on control systems based on their similarity. Similarity control of systems includes the current evaluation of the degree of similarity of control systems, the development of actions controlling similarity, and the corresponding targeted change in the state of any element of the control systems.

  16. Marriage Matters: Spousal Similarity in Life Satisfaction

    OpenAIRE

    Ulrich Schimmack; Richard Lucas

    2006-01-01

    Examined the concurrent and cross-lagged spousal similarity in life satisfaction over a 21-year period. Analyses were based on married couples (N = 847) in the German Socio-Economic Panel (SOEP). Concurrent spousal similarity was considerably higher than one-year retest similarity, revealing spousal similarity in the variable component of life satisfaction. Spousal similarity systematically decreased with length of retest interval, revealing similarity in the changing component of life sati...

  17. Environmental policy without costs? A review of the Porter hypothesis

    Energy Technology Data Exchange (ETDEWEB)

    Braennlund, Runar; Lundgren, Tommy. e-mail: runar.brannlund@econ.umu.se

    2009-03-15

    This paper reviews the theoretical and empirical literature connected to the so-called Porter Hypothesis, that is, the literature on the relation between environmental policy and competitiveness. According to conventional wisdom, environmental policy aiming to improve the environment through, for example, emission reductions does imply costs, since scarce resources must be diverted from somewhere else. However, this conventional wisdom has recently been challenged and questioned through what has been denoted the 'Porter hypothesis'. Those in the forefront of the Porter hypothesis challenge the conventional wisdom basically on the ground that resources are used inefficiently in the absence of the right kind of environmental regulations, and that the conventional neo-classical view is too static to take inefficiencies into account. The conclusions that can be made from this review are (1) that the theoretical literature can identify the circumstances and mechanisms that must exist for a Porter effect to occur, (2) that these circumstances are rather non-general, hence rejecting the Porter hypothesis in general, and (3) that the empirical literature gives no general support for the Porter hypothesis. Furthermore, a closer look at the 'Swedish case' reveals no support for the Porter hypothesis, in spite of the fact that Swedish environmental policy over the last 15-20 years seems to be in line with the prerequisites concerning environmental policy stated by the Porter hypothesis

  18. The linear hypothesis - an idea whose time has passed

    International Nuclear Information System (INIS)

    Tschaeche, A.N.

    1995-01-01

    The linear no-threshold hypothesis is the basis for radiation protection standards in the United States. In the words of the National Council on Radiation Protection and Measurements (NCRP), the hypothesis is: "In the interest of estimating effects in humans conservatively, it is not unreasonable to follow the assumption of a linear relationship between dose and effect in the low dose regions for which direct observational data are not available." The International Commission on Radiological Protection (ICRP) stated the hypothesis in a slightly different manner: "One such basic assumption ... is that ... there is ... a linear relationship without threshold between dose and the probability of an effect." The hypothesis was necessary 50 yr ago when it was first enunciated because the dose-effect curve for ionizing radiation for effects in humans was not known. The ICRP and NCRP needed a model to extrapolate high-dose effects to low-dose effects. So the linear no-threshold hypothesis was born. Certain details of the history of the development and use of the linear hypothesis are presented. In particular, use of the hypothesis by the U.S. regulatory agencies is examined. Over time, the sense of the hypothesis has been corrupted. The corruption of the hypothesis into the current paradigm of "a little radiation, no matter how small, can and will harm you" is presented. The reasons the corruption occurred are proposed. The effects of the corruption are enumerated, specifically, the use of the corruption by the antinuclear forces in the United States and some of the huge costs to U.S. taxpayers due to the corruption. An alternative basis for radiation protection standards to assure public safety, based on the weight of scientific evidence on radiation health effects, is proposed.

  19. Biostatistics series module 2: Overview of hypothesis testing

    Directory of Open Access Journals (Sweden)

    Avijit Hazra

    2016-01-01

    Hypothesis testing (or statistical inference) is one of the major applications of biostatistics. Much of medical research begins with a research question that can be framed as a hypothesis. Inferential statistics begins with a null hypothesis that reflects the conservative position of no change or no difference in comparison to baseline or between groups. Usually, the researcher has reason to believe that there is some effect or some difference, which is the alternative hypothesis. The researcher therefore proceeds to study samples and measure outcomes in the hope of generating evidence strong enough for the statistician to be able to reject the null hypothesis. The concept of the P value is almost universally used in hypothesis testing. It denotes the probability of obtaining by chance a result at least as extreme as that observed, even when the null hypothesis is true and no real difference exists. Usually, if P is < 0.05 the null hypothesis is rejected and sample results are deemed statistically significant. With the increasing availability of computers and access to specialized statistical software, the drudgery involved in statistical calculations is now a thing of the past, once the learning curve of the software has been traversed. The life sciences researcher is therefore free to devote oneself to optimally designing the study, carefully selecting the hypothesis tests to be applied, and taking care in conducting the study well. Unfortunately, selecting the right test seems difficult initially. Thinking of the research hypothesis as addressing one of five generic research questions helps in selection of the right hypothesis test. In addition, it is important to be clear about the nature of the variables (e.g., numerical vs. categorical; parametric vs. nonparametric) and the number of groups or data sets being compared (e.g., two or more than two at a time). The same research question may be explored by more than one type of hypothesis test.
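
    The module's definition of the P value, the probability of obtaining by chance a result at least as extreme as that observed when the null hypothesis is true, can be made concrete with an exact permutation test. A minimal sketch in Python (the group values below are made up for illustration, not taken from the article):

```python
import itertools
import statistics

# Hypothetical data (not from the article): outcome scores in two groups.
control = [12.1, 11.8, 12.5, 12.0]
treatment = [13.0, 13.4, 12.9, 13.1]

observed = statistics.mean(treatment) - statistics.mean(control)
pooled = control + treatment
n = len(control)

# Exact permutation test: under the null hypothesis of "no difference",
# every relabelling of the 8 observations into two groups of 4 is equally
# likely, so the P value is the fraction of relabellings whose mean
# difference is at least as extreme as the one observed.
extreme = 0
total = 0
for idx in itertools.combinations(range(len(pooled)), n):
    group_a = [pooled[i] for i in idx]
    group_b = [pooled[i] for i in range(len(pooled)) if i not in idx]
    total += 1
    if abs(statistics.mean(group_b) - statistics.mean(group_a)) >= abs(observed) - 1e-12:
        extreme += 1

p_value = extreme / total   # 2 of 70 relabellings are as extreme
reject_null = p_value < 0.05
```

    With these data the observed split is the most extreme of all 70 relabellings, giving P = 2/70, so the conventional P < 0.05 rule rejects the null hypothesis.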

  20. Critiques of the seismic hypothesis and the vegetation stabilization hypothesis for the formation of Mima mounds along the western coast of the U.S.

    Science.gov (United States)

    Gabet, Emmanuel J.; Burnham, Jennifer L. Horwath; Perron, J. Taylor

    2016-09-01

    A recent paper published in Geomorphology by Gabet et al. (2014) presents the results of a numerical model supporting the hypothesis that burrowing mammals build Mima mounds - small, densely packed hillocks found primarily in the western United States. The model is based on field observations and produces realistic-looking mounds with spatial distributions similar to real moundfields. Alternative explanations have been proposed for these Mima mounds, including formation by seismic shaking and vegetation-controlled erosion and deposition. In this short communication, we present observations from moundfields in the coastal states of the western U.S. that are incompatible with these alternative theories.

  1. Null but not void: considerations for hypothesis testing.

    Science.gov (United States)

    Shaw, Pamela A; Proschan, Michael A

    2013-01-30

    Standard statistical theory teaches us that once the null and alternative hypotheses have been defined for a parameter, the choice of the statistical test is clear. Standard theory does not teach us how to choose the null or alternative hypothesis appropriate to the scientific question of interest. Neither does it tell us that in some cases, depending on which alternatives are realistic, we may want to define our null hypothesis differently. Problems in statistical practice are frequently not as pristinely summarized as the classic theory in our textbooks. In this article, we present examples in statistical hypothesis testing in which seemingly simple choices are in fact rich with nuance that, when given full consideration, make the choice of the right hypothesis test much less straightforward. Published 2012. This article is a US Government work and is in the public domain in the USA.

  2. Alzheimer's disease: the amyloid hypothesis and the Inverse Warburg effect

    KAUST Repository

    Demetrius, Lloyd A.; Magistretti, Pierre J.; Pellerin, Luc

    2015-01-01

    Epidemiological and biochemical studies show that the sporadic forms of Alzheimer's disease (AD) are characterized by the following hallmarks: (a) An exponential increase with age; (b) Selective neuronal vulnerability; (c) Inverse cancer comorbidity. The present article appeals to these hallmarks to evaluate and contrast two competing models of AD: the amyloid hypothesis (a neuron-centric mechanism) and the Inverse Warburg hypothesis (a neuron-astrocytic mechanism). We show that these three hallmarks of AD conflict with the amyloid hypothesis, but are consistent with the Inverse Warburg hypothesis, a bioenergetic model which postulates that AD is the result of a cascade of three events—mitochondrial dysregulation, metabolic reprogramming (the Inverse Warburg effect), and natural selection. We also provide an explanation for the failures of the clinical trials based on amyloid immunization, and we propose a new class of therapeutic strategies consistent with the neuroenergetic selection model.

  3. Cross-system log file analysis for hypothesis testing

    NARCIS (Netherlands)

    Glahn, Christian

    2008-01-01

    Glahn, C. (2008). Cross-system log file analysis for hypothesis testing. Presented at Empowering Learners for Lifelong Competence Development: pedagogical, organisational and technological issues. 4th TENCompetence Open Workshop. April, 10, 2008, Madrid, Spain.

  4. Hypothesis Testing Using the Films of the Three Stooges

    Science.gov (United States)

    Gardner, Robert; Davidson, Robert

    2010-01-01

    The use of The Three Stooges' films as a source of data in an introductory statistics class is described. The Stooges' films are separated into three populations. Using these populations, students may conduct hypothesis tests with data they collect.

  5. Incidence of allergy and atopic disorders and hygiene hypothesis.

    Czech Academy of Sciences Publication Activity Database

    Bencko, V.; Šíma, Petr

    2017-01-01

    Vol. 2, 6 March (2017), Article No. 1244. ISSN 2474-1663 Institutional support: RVO:61388971 Keywords: allergy disorders * atopic disorders * hygiene hypothesis Subject RIV: EE - Microbiology, Virology OBOR OECD: Microbiology

  6. Alzheimer's disease: the amyloid hypothesis and the Inverse Warburg effect

    KAUST Repository

    Demetrius, Lloyd A.

    2015-01-14

    Epidemiological and biochemical studies show that the sporadic forms of Alzheimer's disease (AD) are characterized by the following hallmarks: (a) An exponential increase with age; (b) Selective neuronal vulnerability; (c) Inverse cancer comorbidity. The present article appeals to these hallmarks to evaluate and contrast two competing models of AD: the amyloid hypothesis (a neuron-centric mechanism) and the Inverse Warburg hypothesis (a neuron-astrocytic mechanism). We show that these three hallmarks of AD conflict with the amyloid hypothesis, but are consistent with the Inverse Warburg hypothesis, a bioenergetic model which postulates that AD is the result of a cascade of three events—mitochondrial dysregulation, metabolic reprogramming (the Inverse Warburg effect), and natural selection. We also provide an explanation for the failures of the clinical trials based on amyloid immunization, and we propose a new class of therapeutic strategies consistent with the neuroenergetic selection model.

  7. The Double-Deficit Hypothesis in Spanish Developmental Dyslexia

    Science.gov (United States)

    Jimenez, Juan E.; Hernandez-Valle, Isabel; Rodriguez, Cristina; Guzman, Remedios; Diaz, Alicia; Ortiz, Rosario

    2008-01-01

    The double-deficit hypothesis (DDH) of developmental dyslexia was investigated in seven to twelve year old Spanish children. It was observed that the double deficit (DD) group had the greatest difficulty with reading.

  8. On different forms of self similarity

    International Nuclear Information System (INIS)

    Aswathy, R.K.; Mathew, Sunil

    2016-01-01

    Fractal geometry is mainly based on the idea of self-similar forms. To be self-similar, a shape must be able to be divided into parts that are smaller copies, which are more or less similar to the whole. There are different forms of self similarity in nature and mathematics. In this paper, some of the topological properties of super self similar sets are discussed. It is proved that in a complete metric space with two or more elements, the set of all non super self similar sets is dense in the set of all non-empty compact subsets. It is also proved that the product of self similar sets is super self similar in product metric spaces and that super self similarity is preserved under isometry. A characterization of super self similar sets using contracting sub self similarity is also presented. Some relevant counterexamples are provided. The concepts of exact super and sub self similarity are introduced and a necessary and sufficient condition for a set to be exact super self similar in terms of condensation iterated function systems (condensation IFSs) is obtained. A method to generate exact sub self similar sets using condensation IFSs and the denseness of exact super self similar sets are also discussed.
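
    The iterated function systems underlying this abstract can be illustrated with the simplest classical example (not one of the paper's condensation IFSs): the middle-thirds Cantor set as the attractor of two contractions. A minimal sketch:

```python
# The Cantor set is the attractor of the IFS {f1, f2} on [0, 1], where
# f1(x) = x/3 and f2(x) = x/3 + 2/3.  Applying the IFS to a finite point
# set doubles it; iterating from the endpoints of [0, 1] produces
# successive endpoint approximations of the self-similar attractor.
def ifs_step(points):
    images = {round(x / 3, 12) for x in points}
    images |= {round(x / 3 + 2 / 3, 12) for x in points}
    return sorted(images)

level = [0.0, 1.0]
for _ in range(3):
    level = ifs_step(level)   # 4, 8, then 16 approximation points
```

    Each point of the resulting set is a scaled copy of the whole under one of the two maps, which is exactly the self-similarity the abstract describes.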

  9. Reasoning heuristics across the psychosis continuum: the contribution of hypersalient evidence-hypothesis matches.

    Science.gov (United States)

    Balzan, Ryan; Delfabbro, Paul; Galletly, Cherrie; Woodward, Todd

    2012-01-01

    Hypersalience of evidence-hypothesis matches has recently been proposed as the cognitive mechanism responsible for the cognitive biases which, in turn, may contribute to the formation and maintenance of delusions. However, the construct lacks empirical support. The current paper investigates the possibility that individuals with delusions are hypersalient to evidence-hypothesis matches using a series of cognitive tasks designed to elicit the representativeness and availability reasoning heuristics. It was hypothesised that hypersalience of evidence-hypothesis matches may increase a person's propensity to rely on judgements of representativeness (i.e., when the probability of an outcome is based on its similarity with its parent population) and availability (i.e., estimates of frequency based on the ease with which relevant events come to mind). A total of 75 participants (25 diagnosed with schizophrenia with a history of delusions; 25 nonclinical delusion-prone; 25 nondelusion-prone controls) completed four heuristics tasks based on the original Tversky and Kahneman experiments. These included two representativeness tasks ("coin-toss" random sequence task; "lawyer-engineer" base-rates task) and two availability tasks ("famous-names" and "letter-frequency" tasks). The results across these four heuristics tasks showed that participants with schizophrenia were more susceptible than nonclinical groups to both the representativeness and availability reasoning heuristics. These results suggest that delusional ideation is linked to a hypersalience of evidence-hypothesis matches. The theoretical implications of this cognitive mechanism on the formation and maintenance of delusions are discussed.

  10. The Random-Walk Hypothesis on the Indian Stock Market

    OpenAIRE

    Ankita Mishra; Vinod Mishra; Russell Smyth

    2014-01-01

    This study tests the random walk hypothesis for the Indian stock market. Using 19 years of monthly data on six indices from the National Stock Exchange (NSE) and the Bombay Stock Exchange (BSE), this study applies three different unit root tests with two structural breaks to analyse the random walk hypothesis. We find that unit root tests that allow for two structural breaks alone are not able to reject the unit root null; however, a recently developed unit root test that simultaneously accou...
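
    The intuition behind the unit root tests used in such studies can be sketched in a few lines (this is the basic autoregressive check underlying tests like the ADF test, not the authors' two-structural-break procedure): a random walk y_t = y_{t-1} + e_t has an autoregressive coefficient of one, so the test estimates rho in y_t = rho*y_{t-1} + e_t and asks whether it falls significantly below 1.

```python
import random

random.seed(42)

# Simulate a pure random walk, the canonical unit-root process.
y = [0.0]
for _ in range(5000):
    y.append(y[-1] + random.gauss(0.0, 1.0))

# OLS estimate of rho in the regression y_t = rho * y_{t-1} + e_t.
# Under the random walk hypothesis rho = 1; a unit root test asks
# whether the estimate is significantly below 1.
num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
rho = num / den
```

    For a simulated random walk of this length the estimate lands very close to 1; rejecting the random walk hypothesis requires an estimate far enough below 1 relative to the Dickey-Fuller critical values.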

  11. The Fractal Market Hypothesis: Applications to Financial Forecasting

    OpenAIRE

    Blackledge, Jonathan

    2010-01-01

    Most financial modelling systems rely on an underlying hypothesis known as the Efficient Market Hypothesis (EMH) including the famous Black-Scholes formula for placing an option. However, the EMH has a fundamental flaw: it is based on the assumption that economic processes are normally distributed and it has long been known that this is not the case. This fundamental assumption leads to a number of shortcomings associated with using the EMH to analyse financial data which includes failure to ...

  12. Dopamine and Reward: The Anhedonia Hypothesis 30 years on

    OpenAIRE

    Wise, Roy A.

    2008-01-01

    The anhedonia hypothesis – that brain dopamine plays a critical role in the subjective pleasure associated with positive rewards – was intended to draw the attention of psychiatrists to the growing evidence that dopamine plays a critical role in the objective reinforcement and incentive motivation associated with food and water, brain stimulation reward, and psychomotor stimulant and opiate reward. The hypothesis called to attention the apparent paradox that neuroleptics, drugs used to treat ...

  13. Testing the null hypothesis: the forgotten legacy of Karl Popper?

    Science.gov (United States)

    Wilkinson, Mick

    2013-01-01

    Testing of the null hypothesis is a fundamental aspect of the scientific method and has its basis in the falsification theory of Karl Popper. Null hypothesis testing makes use of deductive reasoning to ensure that the truth of conclusions is irrefutable. In contrast, attempting to demonstrate the new facts on the basis of testing the experimental or research hypothesis makes use of inductive reasoning and is prone to the problem of the Uniformity of Nature assumption described by David Hume in the eighteenth century. Despite this issue and the well documented solution provided by Popper's falsification theory, the majority of publications are still written such that they suggest the research hypothesis is being tested. This is contrary to accepted scientific convention and possibly highlights a poor understanding of the application of conventional significance-based data analysis approaches. Our work should remain driven by conjecture and attempted falsification such that it is always the null hypothesis that is tested. The write up of our studies should make it clear that we are indeed testing the null hypothesis and conforming to the established and accepted philosophical conventions of the scientific method.

  14. Neural pattern similarity underlies the mnemonic advantages for living words.

    Science.gov (United States)

    Xiao, Xiaoqian; Dong, Qi; Chen, Chuansheng; Xue, Gui

    2016-06-01

    It has been consistently shown that words representing living things are better remembered than words representing nonliving things, yet the underlying cognitive and neural mechanisms have not been clearly elucidated. The present study used both univariate and multivariate pattern analyses to examine the hypotheses that living words are better remembered because (1) they draw more attention and/or (2) they share more overlapping semantic features. Subjects were asked to study a list of living and nonliving words during a semantic judgment task. An unexpected recognition test was administered 30 min later. We found that subjects recognized significantly more living words than nonliving words. Results supported the overlapping semantic feature hypothesis by showing that (a) semantic ratings showed greater semantic similarity for living words than for nonliving words, (b) there was also significantly greater neural global pattern similarity (nGPS) for living words than for nonliving words in the posterior portion of left parahippocampus (LpPHG), (c) the nGPS in the LpPHG reflected the rated semantic similarity, and also mediated the memory differences between two semantic categories, and (d) greater univariate activation was found for living words than for nonliving words in the left hippocampus (LHIP), which mediated the better memory performance for living words and might reflect greater semantic context binding. In contrast, although living words were processed faster and elicited a stronger activity in the dorsal attention network, these differences did not mediate the animacy effect in memory. Taken together, our results provide strong support to the overlapping semantic features hypothesis, and emphasize the important role of semantic organization in episodic memory encoding. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Large margin classification with indefinite similarities

    KAUST Repository

    Alabdulmohsin, Ibrahim; Cisse, Moustapha; Gao, Xin; Zhang, Xiangliang

    2016-01-01

    Classification with indefinite similarities has attracted attention in the machine learning community. This is partly due to the fact that many similarity functions that arise in practice are not symmetric positive semidefinite, i.e. the Mercer

  16. Personality similarity and life satisfaction in couples

    OpenAIRE

    Furler Katrin; Gomez Veronica; Grob Alexander

    2013-01-01

    The present study examined the association between personality similarity and life satisfaction in a large nationally representative sample of 1608 romantic couples. Similarity effects were computed for the Big Five personality traits as well as for personality profiles with global and differentiated indices of similarity. Results showed substantial actor and partner effects indicating that both partners' personality traits were related to both partners' life satisfaction. Personality similar...

  17. Increased sex ratio in Russia and Cuba after Chernobyl: a radiological hypothesis

    Science.gov (United States)

    2013-01-01

    Background The ratio of male to female offspring at birth may be a simple and non-invasive way to monitor the reproductive health of a population. Except in societies where selective abortion skews the sex ratio, approximately 105 boys are born for every 100 girls. Generally, the human sex ratio at birth is remarkably constant in large populations. After the Chernobyl nuclear power plant accident in April 1986, a long lasting significant elevation in the sex ratio has been found in Russia, i.e. more boys or fewer girls compared to expectation were born. Recently, also for Cuba an escalated sex ratio from 1987 onward has been documented and discussed in the scientific literature. Presentation of the hypothesis By the end of the eighties of the last century in Cuba as much as about 60% of the food imports were provided by the former Soviet Union. Due to its difficult economic situation, Cuba had neither the necessary insight nor the political strength to circumvent the detrimental genetic effects of imported radioactively contaminated foodstuffs after Chernobyl. We propose that the long term stable sex ratio increase in Cuba is essentially due to ionizing radiation. Testing of the hypothesis A synoptic trend analysis of Russian and Cuban annual sex ratios discloses upward jumps in 1987. The estimated jump height from 1986 to 1987 in Russia measures 0.51% with a 95% confidence interval (0.28, 0.75), p value < 0.0001. In Cuba the estimated jump height measures 2.99% (2.39, 3.60), p value < 0.0001. The hypothesis may be tested by reconstruction of imports from the world markets to Cuba and by radiological analyses of remains in Cuba for Cs-137 and Sr-90. Implications of the hypothesis If the evidence for the hypothesis is strengthened, there is potential to learn about genetic radiation risks and to prevent similar effects in present and future exposure situations. PMID:23947741
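
    The reported jump heights can be read as the percent change of the post-break mean over the pre-break mean of the annual sex ratio. A sketch with synthetic numbers (illustrative only, not the paper's data):

```python
# Synthetic annual sex ratios (boys per girl); the real analysis uses
# Russian and Cuban birth registries, not these made-up values.
pre_break = [1.049, 1.051, 1.050, 1.048, 1.050]    # years before 1987
post_break = [1.055, 1.056, 1.054, 1.057, 1.055]   # years from 1987 on

mean_pre = sum(pre_break) / len(pre_break)
mean_post = sum(post_break) / len(post_break)

# Estimated jump height in percent, analogous in form to the reported
# 0.51% (Russia) and 2.99% (Cuba) estimates.
jump_percent = 100.0 * (mean_post / mean_pre - 1.0)
```

    The actual study additionally attaches confidence intervals and p values to the jump, which requires modelling the year-to-year variability rather than just comparing means.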

  18. A new hypothesis on the nature of quark and gluon confinement

    International Nuclear Information System (INIS)

    Gribov, V.N.

    1986-06-01

    A new hypothesis of quark confinement is proposed: the formation of superbound states in the light-quark vacuum, analogous to electron states in the field of nuclei with Z ≈ 180. The vacuum polarization effect of a heavy antiquark produces a light quark-antiquark pair. If the effective charge of the central particle is greater than a critical value, the new antiquark escapes to infinity and the bound quark screens the color field of the central charge. Thus, colored states are unstable, and vacuum polarization produces colorless bound states. A similar effect of gluon instability can explain the structure of mesons with light quarks and antiquarks: the central self-bound unstable gluon state polarizes and binds a light quark-antiquark pair. Consequences of the new hypothesis for the measurability of color charge, for the structure of baryons, and for possible experimental verification of the existence of the proposed structures are discussed. (D.Gy.)

  19. Mind as a force field: comments on a new interactionistic hypothesis.

    Science.gov (United States)

    Lindahl, B I; Arhem, P

    1994-11-07

    The survival and development of consciousness in biological evolution call for an explanation. An interactionistic mind-brain theory seems to have the greatest explanatory value in this context. An interpretation of an interactionistic hypothesis, recently proposed by Karl Popper, is discussed both theoretically and based on recent experimental data. In the interpretation, the distinction between the conscious mind and the brain is seen as a division into what is subjective and what is objective, and not as an ontological distinction between something immaterial and something material. The interactionistic hypothesis is based on similarities between minds and physical forces. The conscious mind is understood to interact with randomly spontaneous spatio-temporal patterns of action potentials through an electromagnetic field. Consequences and suggestions for future studies are discussed.

  20. Neuroticism, intelligence, and intra-individual variability in elementary cognitive tasks: testing the mental noise hypothesis.

    Science.gov (United States)

    Colom, Roberto; Quiroga, Ma Angeles

    2009-08-01

    Some studies show positive correlations between intraindividual variability in elementary speed measures (reflecting processing efficiency) and individual differences in neuroticism (reflecting instability in behaviour). The so-called mental noise hypothesis assumes that higher levels of noise are related both to smaller indices of processing efficiency and to greater levels of neuroticism. Here, we test this hypothesis by measuring mental speed with three elementary cognitive tasks that tap similar basic processes but systematically vary their content (verbal, numerical, and spatial). Neuroticism and intelligence are also measured. The sample comprised 196 undergraduate psychology students. The results show that (1) processing efficiency is generally unrelated to individual differences in neuroticism, (2) processing speed and efficiency correlate with intelligence, and (3) only the efficiency index is genuinely related to intelligence when the colinearity between speed and efficiency is controlled.

  1. A large scale test of the gaming-enhancement hypothesis

    Directory of Open Access Journals (Sweden)

    Andrew K. Przybylski

    2016-11-01

    A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people’s gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses were compared. Results provided no substantive evidence supporting the idea that having preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support for the null hypothesis over what was predicted. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work.

  2. A large scale test of the gaming-enhancement hypothesis.

    Science.gov (United States)

    Przybylski, Andrew K; Wang, John C

    2016-01-01

    A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis , has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people's gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses were compared. Results provided no substantive evidence supporting the idea that having preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support for the null hypothesis over what was predicted. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work.
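
    The Bayes factor at the heart of the Bayesian hypothesis testing approach used here is the ratio of the probability of the observed data under one hypothesis to that under another. A toy sketch with two point hypotheses about a success rate (the numbers are illustrative, not the study's, which compared regression models):

```python
from math import comb

def bayes_factor(k, n, theta0=0.5, theta1=0.7):
    """Bayes factor for H1 (success rate theta1) over H0 (rate theta0),
    given k successes in n Bernoulli trials."""
    def likelihood(theta):
        return comb(n, k) * theta ** k * (1 - theta) ** (n - k)
    return likelihood(theta1) / likelihood(theta0)

# 14 successes in 20 trials: the data are about 5 times more probable
# under theta = 0.7 than under theta = 0.5.
bf = bayes_factor(k=14, n=20)
```

    Unlike a P value, a Bayes factor can quantify support for the null as well as against it, which is how the study could report evidence ranging from equivocal to very strong in favour of the null hypothesis.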

  3. Investigating the environmental Kuznets curve hypothesis in Vietnam

    International Nuclear Information System (INIS)

    Al-Mulali, Usama; Saboori, Behnaz; Ozturk, Ilhan

    2015-01-01

    This study investigates the existence of the environmental Kuznets curve (EKC) hypothesis in Vietnam during the period 1981–2011. To realize the goals of this study, a pollution model was established applying the Autoregressive Distributed Lag (ARDL) methodology. The results revealed that the pollution haven hypothesis does exist in Vietnam because capital increases pollution. In addition, imports also increase pollution which indicates that most of Vietnam's imported products are energy intensive and highly polluted. However, exports have no effect on pollution which indicates that the level of exports is not significant enough to affect pollution. Moreover, fossil fuel energy consumption increases pollution while renewable energy consumption has no significant effect in reducing pollution. Furthermore, labor force reduces pollution since most of Vietnam's labor force is in the agricultural and services sectors which are less energy intensive than the industrial sector. Based on the obtained results, the EKC hypothesis does not exist because the relationship between GDP and pollution is positive in both the short and long run. - Highlights: • The environmental Kuznets curve (EKC) hypothesis in Vietnam is investigated. • The Autoregressive Distributed Lag (ARDL) methodology was utilized. • The EKC hypothesis does not exist

  4. Social learning and evolution: the cultural intelligence hypothesis

    Science.gov (United States)

    van Schaik, Carel P.; Burkart, Judith M.

    2011-01-01

    If social learning is more efficient than independent individual exploration, animals should learn vital cultural skills exclusively, and routine skills faster, through social learning, provided they actually use social learning preferentially. Animals with opportunities for social learning indeed do so. Moreover, more frequent opportunities for social learning should boost an individual's repertoire of learned skills. This prediction is confirmed by comparisons among wild great ape populations and by social deprivation and enculturation experiments. These findings shaped the cultural intelligence hypothesis, which complements the traditional benefit hypotheses for the evolution of intelligence by specifying the conditions in which these benefits can be reaped. The evolutionary version of the hypothesis argues that species with frequent opportunities for social learning should more readily respond to selection for a greater number of learned skills. Because improved social learning also improves asocial learning, the hypothesis predicts a positive interspecific correlation between social-learning performance and individual learning ability. Variation among primates supports this prediction. The hypothesis also predicts that more heavily cultural species should be more intelligent. Preliminary tests involving birds and mammals support this prediction too. The cultural intelligence hypothesis can also account for the unusual cognitive abilities of humans, as well as our unique mechanisms of skill transfer. PMID:21357223

  5. Dopamine and reward: the anhedonia hypothesis 30 years on.

    Science.gov (United States)

    Wise, Roy A

    2008-10-01

    The anhedonia hypothesis--that brain dopamine plays a critical role in the subjective pleasure associated with positive rewards--was intended to draw the attention of psychiatrists to the growing evidence that dopamine plays a critical role in the objective reinforcement and incentive motivation associated with food and water, brain stimulation reward, and psychomotor stimulant and opiate reward. The hypothesis called to attention the apparent paradox that neuroleptics, drugs used to treat a condition involving anhedonia (schizophrenia), attenuated in laboratory animals the positive reinforcement that we normally associate with pleasure. The hypothesis held only brief interest for psychiatrists, who pointed out that the animal studies reflected acute actions of neuroleptics whereas the treatment of schizophrenia appears to result from neuroadaptations to chronic neuroleptic administration, and that it is the positive symptoms of schizophrenia that neuroleptics alleviate, rather than the negative symptoms that include anhedonia. Perhaps for these reasons, the hypothesis has had minimal impact in the psychiatric literature. Despite its limited heuristic value for the understanding of schizophrenia, however, the anhedonia hypothesis has had major impact on biological theories of reinforcement, motivation, and addiction. Brain dopamine plays a very important role in reinforcement of response habits, conditioned preferences, and synaptic plasticity in cellular models of learning and memory. The notion that dopamine plays a dominant role in reinforcement is fundamental to the psychomotor stimulant theory of addiction, to most neuroadaptation theories of addiction, and to current theories of conditioned reinforcement and reward prediction. Properly understood, it is also fundamental to recent theories of incentive motivation.

  6. Social learning and evolution: the cultural intelligence hypothesis.

    Science.gov (United States)

    van Schaik, Carel P; Burkart, Judith M

    2011-04-12

    If social learning is more efficient than independent individual exploration, animals should learn vital cultural skills exclusively, and routine skills faster, through social learning, provided they actually use social learning preferentially. Animals with opportunities for social learning indeed do so. Moreover, more frequent opportunities for social learning should boost an individual's repertoire of learned skills. This prediction is confirmed by comparisons among wild great ape populations and by social deprivation and enculturation experiments. These findings shaped the cultural intelligence hypothesis, which complements the traditional benefit hypotheses for the evolution of intelligence by specifying the conditions in which these benefits can be reaped. The evolutionary version of the hypothesis argues that species with frequent opportunities for social learning should more readily respond to selection for a greater number of learned skills. Because improved social learning also improves asocial learning, the hypothesis predicts a positive interspecific correlation between social-learning performance and individual learning ability. Variation among primates supports this prediction. The hypothesis also predicts that more heavily cultural species should be more intelligent. Preliminary tests involving birds and mammals support this prediction too. The cultural intelligence hypothesis can also account for the unusual cognitive abilities of humans, as well as our unique mechanisms of skill transfer.

  7. A comparator-hypothesis account of biased contingency detection.

    Science.gov (United States)

    Vadillo, Miguel A; Barberia, Itxaso

    2018-02-12

    Our ability to detect statistical dependencies between different events in the environment is strongly biased by the number of coincidences between them. Even when there is no true covariation between a cue and an outcome, if the marginal probability of either of them is high, people tend to perceive some degree of statistical contingency between both events. The present paper explores the ability of the Comparator Hypothesis to explain the general pattern of results observed in this literature. Our simulations show that this model can account for the biasing effects of the marginal probabilities of cues and outcomes. Furthermore, the overall fit of the Comparator Hypothesis to a sample of experimental conditions from previous studies is comparable to that of the popular Rescorla-Wagner model. These results should encourage researchers to further explore and put to the test the predictions of the Comparator Hypothesis in the domain of biased contingency detection. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. The large numbers hypothesis and a relativistic theory of gravitation

    International Nuclear Information System (INIS)

    Lau, Y.K.; Prokhovnik, S.J.

    1986-01-01

    A way to reconcile Dirac's large numbers hypothesis and Einstein's theory of gravitation was recently suggested by Lau (1985). It is characterized by the conjecture of a time-dependent cosmological term and gravitational term in Einstein's field equations. Motivated by this conjecture and the large numbers hypothesis, we formulate here a scalar-tensor theory in terms of an action principle. The cosmological term is required to be spatially dependent as well as time dependent in general. The theory developed is applied to a cosmological model compatible with the large numbers hypothesis. The time-dependent form of the cosmological term and the scalar potential are then deduced. A possible explanation of the smallness of the cosmological term is also given, and the possible significance of the scalar field is speculated upon.

  9. Universality hypothesis breakdown at one-loop order

    Science.gov (United States)

    Carvalho, P. R. S.

    2018-05-01

    We probe the universality hypothesis by analytically computing, through at least two-loop order, the corrections to the critical exponents for q-deformed O(N) self-interacting λϕ⁴ scalar field theories, using six distinct and independent field-theoretic renormalization group methods and ε-expansion techniques. We show that the effect of q deformation on the one-loop corrections to the critical exponents is null, so the universality hypothesis breaks down at this loop order. The effect emerges only at the two-loop and higher levels, where the validity of the universality hypothesis is restored. The q-deformed critical exponents obtained through the six methods are the same and, furthermore, reduce to their nondeformed values in the appropriate limit.
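The abstract notes that the q-deformed exponents reduce to their nondeformed values in the appropriate limit. As a point of reference, here is a minimal sketch of the standard (nondeformed) one-loop ε-expansion estimates for the O(N) model in d = 4 - ε dimensions; these are the textbook Wilson-Fisher formulas, not results taken from this paper, and η is omitted because it first appears at order ε².

```python
def one_loop_exponents(N, eps):
    """First-order epsilon-expansion estimates for the O(N) model in
    d = 4 - eps dimensions: 1/nu = 2 - (N+2)/(N+8)*eps + O(eps^2),
    gamma = 1 + (N+2)/(2(N+8))*eps + O(eps^2)."""
    nu = 1.0 / (2.0 - (N + 2) / (N + 8) * eps)
    gamma = 1.0 + (N + 2) / (2 * (N + 8)) * eps
    return nu, gamma

# Ising universality class: N = 1, eps = 1 (i.e., d = 3)
nu, gamma = one_loop_exponents(1, 1.0)
```

Even at first order the estimates land close to the accepted three-dimensional Ising values (ν ≈ 0.63, γ ≈ 1.24), which is why the ε expansion remains a standard benchmark for methods like those compared in this paper.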

  10. Unicorns do exist: a tutorial on "proving" the null hypothesis.

    Science.gov (United States)

    Streiner, David L

    2003-12-01

    Introductory statistics classes teach us that we can never prove the null hypothesis; all we can do is reject or fail to reject it. However, there are times when it is necessary to try to prove the nonexistence of a difference between groups. This most often happens within the context of comparing a new treatment against an established one and showing that the new intervention is not inferior to the standard. This article first outlines the logic of "noninferiority" testing by differentiating between the null hypothesis (that which we are trying to nullify) and the "nil" hypothesis (there is no difference), reversing the role of the null and alternate hypotheses, and defining an interval within which groups are said to be equivalent. We then work through an example and show how to calculate sample sizes for noninferiority studies.
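The reversed-hypotheses logic described above can be sketched in code. The following is a simplified illustration, not the tutorial's own example: it uses a one-sided z-test with a known common standard deviation (a real study would typically use a t-based or confidence-interval approach), and all numbers are made up for illustration.

```python
from statistics import NormalDist

def noninferiority_z(mean_new, mean_ref, sd, n_new, n_ref, margin, alpha=0.025):
    """One-sided z-test of H0: mu_new - mu_ref <= -margin (new is inferior).
    Rejecting H0 supports noninferiority within the prespecified margin.
    Assumes a known common standard deviation `sd` for simplicity."""
    se = sd * (1 / n_new + 1 / n_ref) ** 0.5
    z = (mean_new - mean_ref + margin) / se
    z_crit = NormalDist().inv_cdf(1 - alpha)
    return z, z > z_crit

# New treatment scores slightly worse, but within a margin of 1.0
z, noninferior = noninferiority_z(9.8, 10.0, sd=2.0, n_new=100, n_ref=100,
                                  margin=1.0)
```

Note how the roles are reversed: here rejecting the null is evidence that the groups are (practically) equivalent, whereas in ordinary testing rejection is evidence that they differ.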

  11. Almost-Quantum Correlations Violate the No-Restriction Hypothesis.

    Science.gov (United States)

    Sainz, Ana Belén; Guryanova, Yelena; Acín, Antonio; Navascués, Miguel

    2018-05-18

    To identify which principles characterize quantum correlations, it is essential to understand in which sense this set of correlations differs from that of almost-quantum correlations. We solve this problem by invoking the so-called no-restriction hypothesis, an explicit and natural axiom in many reconstructions of quantum theory stating that the set of possible measurements is the dual of the set of states. We prove that, contrary to quantum correlations, no generalized probabilistic theory satisfying the no-restriction hypothesis is able to reproduce the set of almost-quantum correlations. Therefore, any theory whose correlations are exactly, or very close to, the almost-quantum correlations necessarily requires a rule limiting the possible measurements. Our results suggest that the no-restriction hypothesis may play a fundamental role in singling out the set of quantum correlations among other nonsignaling ones.

  12. Motor synergies and the equilibrium-point hypothesis.

    Science.gov (United States)

    Latash, Mark L

    2010-07-01

    The article offers a way to unite three recent developments in the field of motor control and coordination: (1) The notion of synergies is introduced based on the principle of motor abundance; (2) The uncontrolled manifold hypothesis is described as offering a computational framework to identify and quantify synergies; and (3) The equilibrium-point hypothesis is described for a single muscle, single joint, and multijoint systems. Merging these concepts into a single coherent scheme requires focusing on control variables rather than performance variables. The principle of minimal final action is formulated as the guiding principle within the referent configuration hypothesis. Motor actions are associated with setting two types of variables by a controller, those that ultimately define average performance patterns and those that define associated synergies. Predictions of the suggested scheme are reviewed, such as the phenomenon of anticipatory synergy adjustments, quick actions without changes in synergies, atypical synergies, and changes in synergies with practice. A few models are briefly reviewed.

  13. On Using Taylor's Hypothesis for Three-Dimensional Mixing Layers

    Science.gov (United States)

    LeBoeuf, Richard L.; Mehta, Rabindra D.

    1995-01-01

    In the present study, errors in using Taylor's hypothesis to transform measurements obtained in a temporal (or phase) frame onto a spatial one were evaluated. For the first time, phase-averaged ('real') spanwise and streamwise vorticity data measured on a three-dimensional grid were compared directly to those obtained using Taylor's hypothesis. The results show that even the qualitative features of the spanwise and streamwise vorticity distributions given by the two techniques can be very different. This is particularly true in the region of the spanwise roller pairing. The phase-averaged spanwise and streamwise peak vorticity levels given by Taylor's hypothesis are typically lower (by up to 40%) compared to the real measurements.

  14. The frequentist implications of optional stopping on Bayesian hypothesis tests.

    Science.gov (United States)

    Sanborn, Adam N; Hills, Thomas T

    2014-04-01

    Null hypothesis significance testing (NHST) is the most commonly used statistical methodology in psychology. The probability of achieving a value as extreme or more extreme than the statistic obtained from the data is evaluated, and if it is low enough, the null hypothesis is rejected. However, because common experimental practice often clashes with the assumptions underlying NHST, these calculated probabilities are often incorrect. Most commonly, experimenters use tests that assume that sample sizes are fixed in advance of data collection but then use the data to determine when to stop; in the limit, experimenters can use data monitoring to guarantee that the null hypothesis will be rejected. Bayesian hypothesis testing (BHT) provides a solution to these ills because the stopping rule used is irrelevant to the calculation of a Bayes factor. In addition, there are strong mathematical guarantees on the frequentist properties of BHT that are comforting for researchers concerned that stopping rules could influence the Bayes factors produced. Here, we show that these guaranteed bounds have limited scope and often do not apply in psychological research. Specifically, we quantitatively demonstrate the impact of optional stopping on the resulting Bayes factors in two common situations: (1) when the truth is a combination of the hypotheses, such as in a heterogeneous population, and (2) when a hypothesis is composite (taking multiple parameter values), such as the alternative hypothesis in a t-test. We found that, for these situations, while the Bayesian interpretation remains correct regardless of the stopping rule used, the choice of stopping rule can, in some situations, greatly increase the chance of experimenters finding evidence in the direction they desire. We suggest ways to control these frequentist implications of stopping rules on BHT.
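The "data monitoring" problem described above is easy to demonstrate by simulation. A minimal sketch (not from the paper; parameter values are illustrative): generate data under a true null, run a known-variance z-test after every observation, and stop at the first p < .05. The realized false positive rate climbs far above the nominal 5% level.

```python
import random
from statistics import NormalDist

def peeking_false_positive_rate(n_sims=500, max_n=50, alpha=0.05, seed=1):
    """Simulate optional stopping under H0 (true mean 0, sd 1): test after
    every observation and stop at the first p < alpha. Returns the fraction
    of simulated experiments that ever reject H0."""
    rng = random.Random(seed)
    norm = NormalDist()
    hits = 0
    for _ in range(n_sims):
        total = 0.0
        for n in range(1, max_n + 1):
            total += rng.gauss(0, 1)
            z = (total / n) * n ** 0.5  # known-variance z statistic
            p = 2 * (1 - norm.cdf(abs(z)))
            if p < alpha:
                hits += 1
                break
    return hits / n_sims

rate = peeking_false_positive_rate()
```

With up to 50 peeks the rejection rate is several times the nominal 5%, which is exactly the mismatch between practice and NHST assumptions that motivates the paper's analysis of how stopping rules interact with Bayes factors.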

  15. Similarity increases altruistic punishment in humans.

    Science.gov (United States)

    Mussweiler, Thomas; Ockenfels, Axel

    2013-11-26

    Humans are attracted to similar others. As a consequence, social networks are homogeneous in sociodemographic, intrapersonal, and other characteristics--a principle called homophily. Despite abundant evidence showing the importance of interpersonal similarity and homophily for human relationships, their behavioral correlates and cognitive foundations are poorly understood. Here, we show that perceived similarity substantially increases altruistic punishment, a key mechanism underlying human cooperation. We induced (dis)similarity perception by manipulating basic cognitive mechanisms in an economic cooperation game that included a punishment phase. We found that similarity-focused participants were more willing to punish others' uncooperative behavior. This influence of similarity is not explained by group identity, which has the opposite effect on altruistic punishment. Our findings demonstrate that pure similarity promotes reciprocity in ways known to encourage cooperation. At the same time, the increased willingness to punish norm violations among similarity-focused participants provides a rationale for why similar people are more likely to build stable social relationships. Finally, our findings show that altruistic punishment is differentially involved in encouraging cooperation under pure similarity vs. in-group conditions.

  16. Thermalization without eigenstate thermalization hypothesis after a quantum quench.

    Science.gov (United States)

    Mori, Takashi; Shiraishi, Naoto

    2017-08-01

    Nonequilibrium dynamics of a nonintegrable system without the eigenstate thermalization hypothesis is studied. It is shown that, in the thermodynamic limit, this model thermalizes after an arbitrary quantum quench at finite temperature, although it does not satisfy the eigenstate thermalization hypothesis. In contrast, when the system size is finite and the temperature is low enough, the system may not thermalize. In this case, the steady state is well described by the generalized Gibbs ensemble constructed by using highly nonlocal conserved quantities. We also show that this model exhibits prethermalization, in which the prethermalized state is characterized by nonthermal energy eigenstates.

  17. Tunguska, 1908: the gas pouch and soil fluidization hypothesis

    Science.gov (United States)

    Nistor, I.

    2012-01-01

    The Siberian taiga explosion of 30 June 1908 remains one of the great mysteries of the 20th century: millions of trees put down over an area of 2200 km2 without trace of a crater or meteorite fragments. A hundred years of failed searches have followed, resulting in as many flawed hypotheses which could not offer satisfactory explanations: meteorite, comet, UFO, etc. In the author's opinion, the cause is that the energy the explorers looked for was simply not there! The author's hypothesis is that a meteoroid encountered a gas pouch in the atmosphere, producing a devastating explosion, its effects being amplified by soil fluidization.

  18. The equilibrium-point hypothesis--past, present and future.

    Science.gov (United States)

    Feldman, Anatol G; Levin, Mindy F

    2009-01-01

    This chapter is a brief account of fundamentals of the equilibrium-point hypothesis, more adequately called the threshold control theory (TCT). It also compares the TCT with other approaches to motor control. The basic notions of the TCT are reviewed with a major focus on solutions to the problems of multi-muscle and multi-degrees of freedom redundancy. The TCT incorporates cognitive aspects by explaining how neurons recognize that internal (neural) and external (environmental) events match each other. These aspects, as well as how motor learning occurs, are subjects of further development of the TCT hypothesis.

  19. The cosmic censorship hypothesis and the positive energy conjecture

    International Nuclear Information System (INIS)

    Jang, P.S.; Wald, R.W.

    1979-01-01

    The position so far is summarized. Penrose derived an inequality; if a data set were found to violate it, then one of the assumptions used to derive the inequality must be false. In that case it could provide a counterexample to the cosmic censorship hypothesis. The authors have shown elsewhere that a positive energy argument of Geroch can be modified to rule out a violation of Penrose's inequality with any time-symmetric initial data set whose apparent horizon consists of a single component. This increases confidence in the hypothesis and also indicates that there may be a close relationship between this conjecture and the positive energy conjecture. (UK)

  20. Eat dirt and avoid atopy: the hygiene hypothesis revisited.

    Science.gov (United States)

    Patki, Anil

    2007-01-01

    The explosive rise in the incidence of atopic diseases in the Western developed countries can be explained on the basis of the so-called "hygiene hypothesis". In short, it attributes the rising incidence of atopic dermatitis to reduced exposure to various childhood infections and bacterial endotoxins. Reduced exposure to dirt in the clean environment results in a skewed development of the immune system which results in an abnormal allergic response to various environmental allergens which are otherwise innocuous. This article reviews the historical aspects, epidemiological and immunological basis of the hygiene hypothesis and implications for Indian conditions.

  1. Non Kolmogorov Probability Models Outside Quantum Mechanics

    Science.gov (United States)

    Accardi, Luigi

    2009-03-01

    This paper is devoted to an analysis of the main conceptual problems in the interpretation of QM: reality, locality, determinism, physical state, Heisenberg principle, "deterministic" and "exact" theories, laws of chance, notion of event, statistical invariants, adaptive realism, EPR correlations and, finally, the EPR-chameleon experiment.

  2. Morphological similarity and ecological overlap in two rotifer species.

    Science.gov (United States)

    Gabaldón, Carmen; Montero-Pau, Javier; Serra, Manuel; Carmona, María José

    2013-01-01

    Co-occurrence of cryptic species raises theoretically relevant questions regarding their coexistence and ecological similarity. Given their great morphological similitude and close phylogenetic relationship (i.e., niche retention), these species will have similar ecological requirements and are expected to have strong competitive interactions. This raises the problem of finding the mechanisms that may explain the coexistence of cryptic species and challenges the conventional view of coexistence based on niche differentiation. The cryptic species complex of the rotifer Brachionus plicatilis is an excellent model to study these questions and to test hypotheses regarding ecological differentiation. Rotifer species within this complex are filtering zooplankters commonly found inhabiting the same ponds across the Iberian Peninsula and exhibit an extremely similar morphology-some of them being even virtually identical. Here, we explore whether subtle differences in body size and morphology translate into ecological differentiation by comparing two extremely morphologically similar species belonging to this complex: B. plicatilis and B. manjavacas. We focus on three key ecological features related to body size: (1) functional response, expressed by clearance rates; (2) tolerance to starvation, measured by growth and reproduction; and (3) vulnerability to copepod predation, measured by the number of preyed upon neonates. No major differences between B. plicatilis and B. manjavacas were found in the response to these features. Our results demonstrate the existence of a substantial niche overlap, suggesting that the subtle size differences between these two cryptic species are not sufficient to explain their coexistence. This lack of evidence for ecological differentiation in the studied biotic niche features is in agreement with the phylogenetic limiting similarity hypothesis but requires a mechanistic explanation of the coexistence of these species not based on

  3. Morphological similarity and ecological overlap in two rotifer species.

    Directory of Open Access Journals (Sweden)

    Carmen Gabaldón

    Full Text Available Co-occurrence of cryptic species raises theoretically relevant questions regarding their coexistence and ecological similarity. Given their great morphological similitude and close phylogenetic relationship (i.e., niche retention), these species will have similar ecological requirements and are expected to have strong competitive interactions. This raises the problem of finding the mechanisms that may explain the coexistence of cryptic species and challenges the conventional view of coexistence based on niche differentiation. The cryptic species complex of the rotifer Brachionus plicatilis is an excellent model to study these questions and to test hypotheses regarding ecological differentiation. Rotifer species within this complex are filtering zooplankters commonly found inhabiting the same ponds across the Iberian Peninsula and exhibit an extremely similar morphology-some of them being even virtually identical. Here, we explore whether subtle differences in body size and morphology translate into ecological differentiation by comparing two extremely morphologically similar species belonging to this complex: B. plicatilis and B. manjavacas. We focus on three key ecological features related to body size: (1) functional response, expressed by clearance rates; (2) tolerance to starvation, measured by growth and reproduction; and (3) vulnerability to copepod predation, measured by the number of preyed upon neonates. No major differences between B. plicatilis and B. manjavacas were found in the response to these features. Our results demonstrate the existence of a substantial niche overlap, suggesting that the subtle size differences between these two cryptic species are not sufficient to explain their coexistence. This lack of evidence for ecological differentiation in the studied biotic niche features is in agreement with the phylogenetic limiting similarity hypothesis but requires a mechanistic explanation of the coexistence of these species not

  4. Notions of similarity for systems biology models.

    Science.gov (United States)

    Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knüpfer, Christian; Liebermeister, Wolfram; Waltemath, Dagmar

    2018-01-01

    Systems biology models are rapidly increasing in complexity, size and numbers. When building large models, researchers rely on software tools for the retrieval, comparison, combination and merging of models, as well as for version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of 'similarity' may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here we survey existing methods for the comparison of models, introduce quantitative measures for model similarity, and discuss potential applications of combined similarity measures. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on a combination of different model aspects. The six aspects that we define as potentially relevant for similarity are underlying encoding, references to biological entities, quantitative behaviour, qualitative behaviour, mathematical equations and parameters and network structure. We argue that future similarity measures will benefit from combining these model aspects in flexible, problem-specific ways to mimic users' intuition about model similarity, and to support complex model searches in databases. © The Author 2016. Published by Oxford University Press.

  5. Similar speaker recognition using nonlinear analysis

    International Nuclear Information System (INIS)

    Seo, J.P.; Kim, M.S.; Baek, I.C.; Kwon, Y.H.; Lee, K.S.; Chang, S.W.; Yang, S.I.

    2004-01-01

    Speech features in conventional speaker identification systems are usually obtained by linear methods in spectral space. However, these methods have the drawback that speakers with similar voices cannot be distinguished, because the characteristics of their voices are also similar in spectral space. To overcome this difficulty with linear methods, we propose the correlation exponent in nonlinear space as a new feature vector for identification among speakers with similar voices. We show that our proposed method markedly reduces the error rate of a speaker identification system for speakers with similar voices.

  6. New Hypothesis and Theory about Functions of Sleep and Dreams

    Directory of Open Access Journals (Sweden)

    Nikola N. Ilanković

    2014-03-01

    Conclusion: IEP-P1 could be a new biological marker for distinguishing sleep organization in different psychotic states and other states of altered consciousness. The developed statistical models could form the basis for new hypotheses and theories about the functions of sleep and dreams.

  7. A test of the reward-value hypothesis.

    Science.gov (United States)

    Smith, Alexandra E; Dalecki, Stefan J; Crystal, Jonathon D

    2017-03-01

    Rats retain source memory (memory for the origin of information) over a retention interval of at least 1 week, whereas their spatial working memory (radial maze locations) decays within approximately 1 day. We have argued that different forgetting functions dissociate memory systems. However, the two tasks, in our previous work, used different reward values. The source memory task used multiple pellets of a preferred food flavor (chocolate), whereas the spatial working memory task provided access to a single pellet of standard chow-flavored food at each location. Thus, according to the reward-value hypothesis, enhanced performance in the source memory task stems from enhanced encoding/memory of a preferred reward. We tested the reward-value hypothesis by using a standard 8-arm radial maze task to compare spatial working memory accuracy of rats rewarded with either multiple chocolate or chow pellets at each location using a between-subjects design. The reward-value hypothesis predicts superior accuracy for high-valued rewards. We documented equivalent spatial memory accuracy for high- and low-value rewards. Importantly, a 24-h retention interval produced equivalent spatial working memory accuracy for both flavors. These data are inconsistent with the reward-value hypothesis and suggest that reward value does not explain our earlier findings that source memory survives unusually long retention intervals.

  8. Revisiting Hudson’s (1992) OO = O2 hypothesis

    DEFF Research Database (Denmark)

    Shibuya, Yoshikata; Jensen, Kim Ebensgaard

    2018-01-01

    In an important paper on the English “double-object”, or ditransitive, construction, Richard Hudson proposes a hypothesis that conflates the ditransitive direct object, or O2, and the monotransitive direct object, or OO, into the same syntactic functional category. While making important departures...

  9. Hypothesis, Prediction, and Conclusion: Using Nature of Science Terminology Correctly

    Science.gov (United States)

    Eastwell, Peter

    2012-01-01

    This paper defines the terms "hypothesis," "prediction," and "conclusion" and shows how to use the terms correctly in scientific investigations in both the school and science education research contexts. The scientific method, or hypothetico-deductive (HD) approach, is described and it is argued that an understanding of the scientific method,…

  10. A sequential hypothesis test based on a generalized Azuma inequality

    NARCIS (Netherlands)

    Reijsbergen, D.P.; Scheinhardt, Willem R.W.; de Boer, Pieter-Tjerk

    We present a new power-one sequential hypothesis test based on a bound for the probability that a bounded zero-mean martingale ever crosses a curve of the form $a(n+k)^b$. The proof of the bound is of independent interest.

  11. Planned Hypothesis Tests Are Not Necessarily Exempt from Multiplicity Adjustment

    Science.gov (United States)

    Frane, Andrew V.

    2015-01-01

    Scientific research often involves testing more than one hypothesis at a time, which can inflate the probability that a Type I error (false discovery) will occur. To prevent this Type I error inflation, adjustments can be made to the testing procedure that compensate for the number of tests. Yet many researchers believe that such adjustments are…
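The Type I error inflation described above follows directly from the arithmetic of independent tests, and the classic Bonferroni adjustment is the simplest compensation. A minimal sketch (standard textbook formulas, not taken from this paper):

```python
def familywise_error_rate(m, alpha=0.05):
    """Probability of at least one false discovery across m independent
    tests, each run at level alpha: 1 - (1 - alpha)^m."""
    return 1 - (1 - alpha) ** m

def bonferroni_alpha(m, alpha=0.05):
    """Bonferroni per-test level that keeps the familywise rate <= alpha."""
    return alpha / m

fwer = familywise_error_rate(10)                       # ~0.40 unadjusted
corrected = familywise_error_rate(10, bonferroni_alpha(10))  # back under 0.05
```

With ten tests at the conventional .05 level, the chance of at least one false discovery is roughly 40%, which is why the question of whether planned tests are exempt from adjustment matters.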

  12. A default Bayesian hypothesis test for correlations and partial correlations

    NARCIS (Netherlands)

    Wetzels, R.; Wagenmakers, E.J.

    2012-01-01

    We propose a default Bayesian hypothesis test for the presence of a correlation or a partial correlation. The test is a direct application of Bayesian techniques for variable selection in regression models. The test is easy to apply and yields practical advantages that the standard frequentist tests

  13. Semiparametric Power Envelopes for Tests of the Unit Root Hypothesis

    DEFF Research Database (Denmark)

    Jansson, Michael

    This paper derives asymptotic power envelopes for tests of the unit root hypothesis in a zero-mean AR(1) model. The power envelopes are derived using the limits of experiments approach and are semiparametric in the sense that the underlying error distribution is treated as an unknown...

  14. Using Employer Hiring Behavior to Test the Educational Signaling Hypothesis

    NARCIS (Netherlands)

    Albrecht, J.W.; van Ours, J.C.

    2001-01-01

    This paper presents a test of the educational signaling hypothesis. If employers use education as a signal in the hiring process, they will rely more on education when less is otherwise known about applicants. We find that employers are more likely to lower educational standards when an informal, more

  15. Explorations in Statistics: Hypothesis Tests and P Values

    Science.gov (United States)

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of "Explorations in Statistics" delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what…
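In the spirit of active exploration that this installment encourages, the idea of a P value (how often would chance alone produce a test statistic this extreme?) can be made concrete with a permutation test. This sketch is not from the article; the data are made up, and the test statistic is simply the absolute difference in group means.

```python
import random

def permutation_p_value(a, b, n_perm=2000, seed=0):
    """Two-sided permutation test: how often does shuffling the group
    labels produce a mean difference at least as extreme as observed?"""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:len(a)]) / len(a)
                   - sum(pooled[len(a):]) / len(b))
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction keeps p > 0

p = permutation_p_value([5.1, 5.3, 5.7, 5.4], [4.2, 4.0, 4.4, 4.1])
```

The shuffled datasets embody the null hypothesis of no group difference, so the returned proportion is a direct, simulation-based P value.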

  16. THE BARKER HYPOTHESIS: IMPLICATIONS FOR FUTURE DIRECTIONS IN TOXICOLOGY RESEARCH

    Science.gov (United States)

    This review covers the past year’s papers germane to the Barker hypothesis. While much of the literature has centered on maternal and developmental nutrition, new findings have emerged on the ability of toxic exposures during development to impact fetal/developmental programming....

  17. Tax Evasion, Information Reporting, and the Regressive Bias Hypothesis

    DEFF Research Database (Denmark)

    Boserup, Simon Halphen; Pinje, Jori Veng

    A robust prediction from the tax evasion literature is that optimal auditing induces a regressive bias in effective tax rates compared to statutory rates. If correct, this will have important distributional consequences. Nevertheless, the regressive bias hypothesis has never been tested empirically...

  18. Developmental Dyslexia: The Visual Attention Span Deficit Hypothesis

    Science.gov (United States)

    Bosse, Marie-Line; Tainturier, Marie Josephe; Valdois, Sylviane

    2007-01-01

    The visual attention (VA) span is defined as the amount of distinct visual elements which can be processed in parallel in a multi-element array. Both recent empirical data and theoretical accounts suggest that a VA span deficit might contribute to developmental dyslexia, independently of a phonological disorder. In this study, this hypothesis was…

  19. The "Discouraged-Business-Major" Hypothesis: Policy Implications

    Science.gov (United States)

    Marangos, John

    2012-01-01

    This paper uses a relatively large dataset of the stated academic major preferences of economics majors at a relatively large, not highly selective, public university in the USA to identify the "discouraged-business-majors" (DBMs). The DBM hypothesis addresses the phenomenon where students who are screened out of the business curriculum often…

  20. The Twin Deficits Hypothesis: An Empirical Analysis for Tanzania

    Directory of Open Access Journals (Sweden)

    Manamba Epaphra

    2017-09-01

    Full Text Available This paper examines the relationship between current account and government budget deficits in Tanzania. The paper tests the validity of the twin deficits hypothesis, using annual time series data for the 1966-2015 period. The paper is thought to be significant because the concept of the twin deficit hypothesis is fraught with controversy. Some studies support the hypothesis that there is a positive relationship between current account deficits and fiscal deficits in the economy while others do not. In this paper, the empirical tests fail to reject the twin deficits hypothesis, indicating that rising budget deficits put more strain on the current account deficits in Tanzania. Specifically, the Vector Error Correction Model results support the conventional theory of a positive relationship between fiscal and external balances, with a relatively high speed of adjustment toward the equilibrium position. This evidence is consistent with a small open economy. To address the problem that may result from this kind of relationship, appropriate policy variables for reducing budget deficits, such as reducing non-development expenditure, enhancing domestic revenue collection, and actively fighting corruption and tax evasion, should be adopted. The government should also target export oriented firms and encourage an import substitution industry by creating favorable business environments.

  1. Etiology of common childhood acute lymphoblastic leukemia: the adrenal hypothesis

    DEFF Research Database (Denmark)

    Schmiegelow, K.; Vestergaard, T.; Nielsen, S.M.

    2008-01-01

    The pattern of infections in the first years of life modulates our immune system, and a low incidence of infections has been linked to an increased risk of common childhood acute lymphoblastic leukemia (ALL). We here present a new interpretation of these observations--the adrenal hypothesis...

  2. Cross-system log file analysis for hypothesis testing

    NARCIS (Netherlands)

    Glahn, Christian; Specht, Marcus; Schoonenboom, Judith; Sligte, Henk; Moghnieh, Ayman; Hernández-Leo, Davinia; Stefanov, Krassen; Lemmers, Ruud; Koper, Rob

    2008-01-01

    Glahn, C., Specht, M., Schoonenboom, J., Sligte, H., Moghnieh, A., Hernández-Leo, D., Stefanov, K., Lemmers, R., & Koper, R. (2008). Cross-system log file analysis for hypothesis testing. In H. Sligte & R. Koper (Eds.), Proceedings of the 4th TENCompetence Open Workshop. Empowering Learners for

  3. The catecholaminergic-cholinergic balance hypothesis of bipolar disorder revisited

    Science.gov (United States)

    van Enkhuizen, Jordy; Janowsky, David S; Olivier, Berend; Minassian, Arpi; Perry, William; Young, Jared W; Geyer, Mark A

    2014-01-01

    Bipolar disorder is a unique illness characterized by fluctuations between mood states of depression and mania. Originally, an adrenergic-cholinergic balance hypothesis was postulated to underlie these different affective states. In this review, we update this hypothesis with recent findings from human and animal studies, suggesting that a catecholaminergic-cholinergic hypothesis may be more relevant. Evidence from neuroimaging studies, neuropharmacological interventions, and genetic associations supports the notion that increased cholinergic functioning underlies depression, whereas increased activation of the catecholamines (dopamine and norepinephrine) underlies mania. Elevated functional acetylcholine during depression may affect both muscarinic and nicotinic acetylcholine receptors in a compensatory fashion. Increased functional dopamine and norepinephrine during mania, on the other hand, may affect receptor expression and the functioning of dopamine reuptake transporters. Despite increasing evidence supporting this hypothesis, a relationship between these two neurotransmitter systems that could explain cycling between states of depression and mania is missing. Future studies should focus on the influence of environmental stimuli and genetic susceptibilities that may affect the catecholaminergic-cholinergic balance underlying cycling between the affective states. Overall, observations from recent studies add important data to this revised balance theory of bipolar disorder, renewing interest in this field of research. PMID:25107282

  4. A "Projective" Test of the Golden Section Hypothesis.

    Science.gov (United States)

    Lee, Chris; Adams-Webber, Jack

    1987-01-01

    In a projective test of the golden section hypothesis, 24 high school students rated themselves and 10 comic strip characters on basis of 12 bipolar constructs. Overall proportion of cartoon figures which subjects assigned to positive poles of constructs was very close to golden section. (Author/NB)

  5. The Lehman Sisters Hypothesis: an exploration of literature and bankers

    NARCIS (Netherlands)

    I.P. van Staveren (Irene)

    2012-01-01

    This article tests the Lehman Sisters Hypothesis in two complementary, although incomplete, ways. It reviews the diverse empirical literature in behavioral, experimental, and neuroeconomics as well as related fields of behavioral research. And it presents the findings from an

  7. A Life-Course Perspective on the "Gateway Hypothesis"

    Science.gov (United States)

    Van Gundy, Karen; Rebellon, Cesar J.

    2010-01-01

    Drawing on stress and life-course perspectives and using panel data from 1,286 south Florida young adults, we assess three critical questions regarding the role of marijuana in the "gateway hypothesis." First, does teen marijuana use independently (causally) affect subsequent use of more dangerous substances? Second, if so, does that…

  8. Shaping Up the Practice of Null Hypothesis Significance Testing.

    Science.gov (United States)

    Wainer, Howard; Robinson, Daniel H.

    2003-01-01

    Discusses criticisms of null hypothesis significance testing (NHST), suggesting that historical use of NHST was reasonable, and current users should read Sir Ronald Fisher's applied work. Notes that modifications to NHST and interpretations of its outcomes might better suit the needs of modern science. Concludes that NHST is most often useful as…

  9. Rational Variability in Children's Causal Inferences: The Sampling Hypothesis

    Science.gov (United States)

    Denison, Stephanie; Bonawitz, Elizabeth; Gopnik, Alison; Griffiths, Thomas L.

    2013-01-01

    We present a proposal--"The Sampling Hypothesis"--suggesting that the variability in young children's responses may be part of a rational strategy for inductive inference. In particular, we argue that young learners may be randomly sampling from the set of possible hypotheses that explain the observed data, producing different hypotheses with…

  10. Assessing Threat Detection Scenarios through Hypothesis Generation and Testing

    Science.gov (United States)

    2015-12-01

    ...subsequent F statistics are reported using the Huynh-Feldt correction (Greenhouse-Geisser Epsilon > .775). Experienced and inexperienced... change in hypothesis using experience and initial confidence as predictors. In the Dog Day scenario, the regression was not statistically

  11. Fecundity of trees and the colonization-competition hypothesis

    Science.gov (United States)

    James S. Clark; Shannon LaDeau; Ines Ibanez

    2004-01-01

    Colonization-competition trade-offs represent a stabilizing mechanism that is thought to maintain diversity of forest trees. If so, then early-successional species should benefit from high capacity to colonize new sites, and late-successional species should be good competitors. Tests of this hypothesis in forests have been precluded by an inability to estimate...

  12. Scalar fields and cosmic censorship hypothesis in general relativity

    International Nuclear Information System (INIS)

    Parnovs'kij, S.L.; Gajdamaka, O.Z.

    2004-01-01

    We discuss the influence of nonstandard scalar fields in the vicinity of a naked time-like singularity on the type and properties of this singularity. The main goal is to study the validity of Penrose's cosmic censorship hypothesis in general relativity

  13. Hypothesis Sampling Systems among Preoperational and Concrete Operational Kindergarten Children

    Science.gov (United States)

    Gholson, Barry; And Others

    1976-01-01

    Preoperational and concrete operational kindergarten children received stimulus differentiation training, either with or without feedback, and then a series of discrimination learning problems in which a blank trial probe was used to detect a child's hypothesis after each feedback trial. Piagetian stage theory requires elaboration to account…

  14. Bayesian Approaches to Imputation, Hypothesis Testing, and Parameter Estimation

    Science.gov (United States)

    Ross, Steven J.; Mackey, Beth

    2015-01-01

    This chapter introduces three applications of Bayesian inference to common and novel issues in second language research. After a review of the critiques of conventional hypothesis testing, our focus centers on ways Bayesian inference can be used for dealing with missing data, for testing theory-driven substantive hypotheses without a default null…

  15. Animal Models for Testing the DOHaD Hypothesis

    Science.gov (United States)

    Since the seminal work in human populations by David Barker and colleagues, several species of animals have been used in the laboratory to test the Developmental Origins of Health and Disease (DOHaD) hypothesis. Rats, mice, guinea pigs, sheep, pigs and non-human primates have bee...

  16. An investigation of the competitiveness hypothesis of the resource curse

    NARCIS (Netherlands)

    L.A. Serino (Leandro)

    2008-01-01

    In this paper I investigate the competitiveness explanation of the resource curse: to what extent slow growth in primary-producer countries is related to the properties of this pattern of trade specialization. To address this hypothesis, which has not been adequately explored in the

  17. Regional heterogeneity in consumption due to current income shocks: New evidence from the Permanent Income Hypothesis

    DEFF Research Database (Denmark)

    Mitze, Timo

    In the light of new theoretical and empirical work on the Permanent Income Hypothesis we tackle earlier findings for German data, which reject its validity given a large fraction of liquidity constrained consumers. Starting from a standard short run approach we do not find evidence for excess...... borrow from the literature on Poolability tests and search for macro regional clusters with similar adjustment paths. The findings show that for the sample of West German states between 1970 and 2006 both for short and long run parameters the assumption of poolability of the data cannot be rejected...

  18. On self-similar Tolman models

    International Nuclear Information System (INIS)

    Maharaj, S.D.

    1988-01-01

    The self-similar spherically symmetric solutions of the Einstein field equation for the case of dust are identified. These form a subclass of the Tolman models. These self-similar models contain the solution recently presented by Chi [J. Math. Phys. 28, 1539 (1987)], thereby refuting the claim of having found a new solution to the Einstein field equations

  19. Mining Diagnostic Assessment Data for Concept Similarity

    Science.gov (United States)

    Madhyastha, Tara; Hunt, Earl

    2009-01-01

    This paper introduces a method for mining multiple-choice assessment data for similarity of the concepts represented by the multiple choice responses. The resulting similarity matrix can be used to visualize the distance between concepts in a lower-dimensional space. This gives an instructor a visualization of the relative difficulty of concepts…

  20. Similarity indices I: what do they measure

    International Nuclear Information System (INIS)

    Johnston, J.W.

    1976-11-01

    A method for estimating the effects of environmental effusions on ecosystems is described. The characteristics of 25 similarity indices used in studies of ecological communities were investigated. The type of data structure, to which these indices are frequently applied, was described as consisting of vectors of measurements on attributes (species) observed in a set of samples. A general similarity index was characterized as the result of a two-step process defined on a pair of vectors. In the first step an attribute similarity score is obtained for each attribute by comparing the attribute values observed in the pair of vectors. The result is a vector of attribute similarity scores. These are combined in the second step to arrive at the similarity index. The operation in the first step was characterized as a function, g, defined on pairs of attribute values. The second operation was characterized as a function, F, defined on the vector of attribute similarity scores from the first step. Usually, F was a simple sum or weighted sum of the attribute similarity scores. It is concluded that similarity indices should not be used as the test statistic to discriminate between two ecological communities
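
    The two-step construction described above (an attribute-level comparison g, then an aggregation F) can be sketched directly. In this minimal Python sketch, the particular g and the choice of F = sum are illustrative stand-ins, not the paper's definitions:

```python
def general_similarity(x, y, g, F):
    """General similarity index: apply g to each attribute pair, then aggregate with F."""
    attribute_scores = [g(a, b) for a, b in zip(x, y)]  # step 1: vector of attribute similarity scores
    return F(attribute_scores)                          # step 2: combine scores into one index

def g_minmax(a, b):
    """Illustrative attribute comparison: shared fraction of abundance (0 when both absent)."""
    return min(a, b) / max(a, b) if max(a, b) > 0 else 0.0

# Hypothetical species-abundance vectors for two samples.
sample1 = [3, 0, 5, 2]
sample2 = [3, 1, 0, 4]
index = general_similarity(sample1, sample2, g_minmax, sum)  # F as a simple sum, which the paper notes is common
```

    Swapping in a presence/absence g and a normalized F yields Jaccard-style indices, which is how many of the 25 surveyed indices fit this one template.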

  1. Measuring transferring similarity via local information

    Science.gov (United States)

    Yin, Likang; Deng, Yong

    2018-05-01

    Recommender systems have developed along with web science, and how to measure the similarity between users is crucial for collaborative filtering recommendation. Many efficient models have been proposed (e.g., the Pearson coefficient) to measure the direct correlation. However, direct correlation measures are greatly affected by the sparsity of the dataset. In other words, direct correlation measures would report an inauthentic similarity if two users have very few commonly selected objects. Transferring similarity overcomes this drawback by considering their common neighbors (i.e., the intermediates). Yet transferring similarity also has its own drawback, since it can only provide an interval of similarity. To overcome these limitations, we propose the Belief Transferring Similarity (BTS) model. The contributions of the BTS model are: (1) it addresses the sparsity of the dataset by considering high-order similarity; (2) it transforms an uncertain interval into a certain state based on fuzzy systems theory; (3) it combines the transferring similarity of different intermediates using an information fusion method. Finally, we compare the BTS model with nine different link prediction methods in nine different networks, and we also illustrate the convergence property and efficiency of the BTS model.
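
    A minimal sketch of the direct-correlation baseline mentioned here (Pearson similarity over commonly selected objects); the ratings-dictionary shape is an assumption for illustration. With only two co-rated objects the coefficient saturates at ±1 regardless of true taste overlap, which is exactly the sparsity artifact that transferring-similarity approaches target:

```python
from math import sqrt

def pearson_user_similarity(ratings_u, ratings_v):
    """Pearson correlation computed over the objects both users rated."""
    common = sorted(set(ratings_u) & set(ratings_v))
    if len(common) < 2:
        return 0.0  # not enough overlap for a meaningful correlation
    xs = [ratings_u[o] for o in common]
    ys = [ratings_v[o] for o in common]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# Hypothetical sparse ratings: only two objects in common.
alice = {"film_a": 5, "film_b": 1}
bob = {"film_a": 4, "film_b": 2, "film_c": 5}
similarity = pearson_user_similarity(alice, bob)  # saturates at 1.0 from just two common objects
```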

  2. On distributional assumptions and whitened cosine similarities

    DEFF Research Database (Denmark)

    Loog, Marco

    2008-01-01

    Recently, an interpretation of the whitened cosine similarity measure as a Bayes decision rule was proposed (C. Liu, "The Bayes Decision Rule Induced Similarity Measures," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1086-1090, June 2007). This communication makes th...

  3. Self-Similar Traffic In Wireless Networks

    OpenAIRE

    Jerjomins, R.; Petersons, E.

    2005-01-01

    Many studies have shown that traffic in Ethernet and other wired networks is self-similar. This paper reveals that wireless network traffic is also self-similar and long-range dependent by analyzing a large amount of data captured from a wireless router.
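
    The long-range-dependence claim is usually quantified with the Hurst exponent H (self-similar traffic has H > 0.5). As background, a minimal variance-time estimator is sketched below on synthetic data; this illustrates the standard technique, not the authors' analysis of their captured traces:

```python
import math
import random

def aggregated_variance_hurst(series, block_sizes):
    """Variance-time estimate of the Hurst exponent: Var of the m-aggregated series ~ m^(2H - 2)."""
    mean = sum(series) / len(series)
    pts = []
    for m in block_sizes:
        block_means = [sum(series[i:i + m]) / m for i in range(0, len(series) - m + 1, m)]
        var = sum((b - mean) ** 2 for b in block_means) / len(block_means)
        pts.append((math.log(m), math.log(var)))
    # least-squares slope of log Var against log m; H = 1 + slope / 2
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return 1 + slope / 2

# Sanity check on short-range (i.i.d.) traffic, which should sit near H = 0.5;
# genuinely self-similar traces would push the estimate toward 1.
random.seed(2)
iid_traffic = [random.gauss(0, 1) for _ in range(20000)]
h = aggregated_variance_hurst(iid_traffic, [1, 2, 4, 8, 16, 32])
```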

  4. Similarity Structure of Wave-Collapse

    DEFF Research Database (Denmark)

    Rypdal, Kristoffer; Juul Rasmussen, Jens; Thomsen, Kenneth

    1985-01-01

    Similarity transformations of the cubic Schrödinger equation (CSE) are investigated. The transformations are used to remove the explicit time variation in the CSE and reduce it to differential equations in the spatial variables only. Two different methods for similarity reduction are employed and...

  6. The Effectiveness of the Comprehension Hypothesis: A Review on the Current Research on Incidental Vocabulary Acquisition

    Science.gov (United States)

    Ponniah, Joseph

    2011-01-01

    The Comprehension Hypothesis (CH) is the most powerful hypothesis in the field of Second Language Acquisition, despite the presence of rivals: the skill-building hypothesis, the output hypothesis, and the interaction hypothesis. The competing hypotheses state that consciously learned linguistic knowledge is a necessary step for the development…

  7. Simultaneity modeling analysis of the environmental Kuznets curve hypothesis

    International Nuclear Information System (INIS)

    Ben Youssef, Adel; Hammoudeh, Shawkat; Omri, Anis

    2016-01-01

    The environmental Kuznets curve (EKC) hypothesis has been recognized in the environmental economics literature since the 1990s. Various statistical tests have been used on time series, cross-section and panel data related to single countries and groups of countries to validate this hypothesis. In the literature, the validation has always been conducted by using a single equation. However, since both the environment and income variables are endogenous, the estimation of a single-equation model when simultaneity exists produces inconsistent and biased estimates. Therefore, we formulate simultaneous two-equation models to investigate the EKC hypothesis for fifty-six countries, using annual panel data from 1990 to 2012, with the end year determined by data availability for the panel. To make the panel data analysis more homogeneous, we investigate this issue for three income-based panels (namely, high-, middle-, and low-income panels) given several explanatory variables. Our results indicate that there exists a bidirectional causality between economic growth and pollution emissions in the overall panels. We also find that the relationship is nonlinear and has an inverted U-shape for all the considered panels. Policy implications are provided. - Highlights: • We take a new look at the validity of the EKC hypothesis. • We formulate two simultaneous-equation models to validate this hypothesis for fifty-six countries. • We find a bidirectional causality between economic growth and pollution emissions. • We also discover an inverted U-shape between environmental degradation and economic growth. • This relationship varies at different stages of economic development.
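
    As background for the inverted-U finding, the EKC is conventionally tested with a quadratic reduced form (a textbook specification, not necessarily the authors' exact system):

```latex
\ln E_{it} = \beta_0 + \beta_1 \ln Y_{it} + \beta_2 (\ln Y_{it})^2 + \varepsilon_{it}
```

    An inverted U requires $\beta_1 > 0$ and $\beta_2 < 0$, with the turning point at $\ln Y^* = -\beta_1 / (2\beta_2)$; the paper's contribution is to embed income and emissions in a simultaneous two-equation system instead of estimating one such equation alone.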

  8. A critique of statistical hypothesis testing in clinical research

    Directory of Open Access Journals (Sweden)

    Somik Raha

    2011-01-01

    Full Text Available Many have documented the difficulty of using the current paradigm of Randomized Controlled Trials (RCTs) to test and validate the effectiveness of alternative medical systems such as Ayurveda. This paper critiques the applicability of RCTs for all clinical knowledge-seeking endeavors, of which Ayurveda research is a part. This is done by examining statistical hypothesis testing, the underlying foundation of RCTs, from a practical and philosophical perspective. In the philosophical critique, the two main worldviews of probability are the Bayesian and the frequentist. The frequentist worldview is a special case of the Bayesian worldview, requiring the unrealistic assumptions of knowing nothing about the universe and believing that all observations are unrelated to each other. Many have claimed that the first belief is necessary for science, and this claim is debunked by comparing variations in learning with different prior beliefs. Moving beyond the Bayesian and frequentist worldviews, the notion of hypothesis testing itself is challenged on the grounds that a hypothesis is an unclear distinction, and assigning a probability to an unclear distinction is an exercise that does not lead to clarity of action. This critique is of the theory itself and not any particular application of statistical hypothesis testing. A decision-making frame is proposed as a way of both addressing this critique and transcending ideological debates on probability. An example of a Bayesian decision-making approach is shown as an alternative to statistical hypothesis testing, utilizing data from a past clinical trial that studied the effect of aspirin on heart attacks in a sample population of doctors. Since a major reason for the prevalence of RCTs in academia is legislation requiring them, the ethics of legislating the use of statistical methods for clinical research is also examined.
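
    The Bayesian decision-making alternative described here can be illustrated with a beta-binomial comparison of two event rates. The counts below are invented for illustration, not the actual aspirin-trial data, and the uniform Beta(1, 1) priors are an assumption:

```python
import random

def posterior_prob_treatment_better(events_t, n_t, events_c, n_c, draws=20000):
    """Monte Carlo estimate of P(treatment event rate < control event rate) under Beta(1, 1) priors."""
    wins = 0
    for _ in range(draws):
        p_t = random.betavariate(1 + events_t, 1 + n_t - events_t)  # posterior draw, treatment arm
        p_c = random.betavariate(1 + events_c, 1 + n_c - events_c)  # posterior draw, control arm
        if p_t < p_c:
            wins += 1
    return wins / draws

# Hypothetical counts: 10 heart attacks among 1000 treated vs. 20 among 1000 controls.
random.seed(0)
prob = posterior_prob_treatment_better(10, 1000, 20, 1000)
```

    The output is a direct probability that the treatment lowers the event rate, which can feed a decision analysis, rather than a p-value about a null hypothesis.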

  9. Mechanisms of eyewitness suggestibility: tests of the explanatory role hypothesis.

    Science.gov (United States)

    Rindal, Eric J; Chrobak, Quin M; Zaragoza, Maria S; Weihing, Caitlin A

    2017-10-01

    In a recent paper, Chrobak and Zaragoza (Journal of Experimental Psychology: General, 142(3), 827-844, 2013) proposed the explanatory role hypothesis, which posits that the likelihood of developing false memories for post-event suggestions is a function of the explanatory function the suggestion serves. In support of this hypothesis, they provided evidence that participant-witnesses were especially likely to develop false memories for their forced fabrications when their fabrications helped to explain outcomes they had witnessed. In three experiments, we test the generality of the explanatory role hypothesis as a mechanism of eyewitness suggestibility by assessing whether this hypothesis can predict suggestibility errors in (a) situations where the post-event suggestions are provided by the experimenter (as opposed to fabricated by the participant), and (b) across a variety of memory measures and measures of recollective experience. In support of the explanatory role hypothesis, participants were more likely to subsequently freely report (E1) and recollect the suggestions as part of the witnessed event (E2, source test) when the post-event suggestion helped to provide a causal explanation for a witnessed outcome than when it did not serve this explanatory role. Participants were also less likely to recollect the suggestions as part of the witnessed event (on measures of subjective experience) when their explanatory strength had been reduced by the presence of an alternative explanation that could explain the same outcome (E3, source test + warning). Collectively, the results provide strong evidence that the search for explanatory coherence influences people's tendency to misremember witnessing events that were only suggested to them.

  10. A robust null hypothesis for the potential causes of megadrought in western North America

    Science.gov (United States)

    Ault, T.; St George, S.; Smerdon, J. E.; Coats, S.; Mankin, J. S.; Cruz, C. C.; Cook, B.; Stevenson, S.

    2017-12-01

    The western United States was affected by several megadroughts during the last 1200 years, most prominently during the Medieval Climate Anomaly (MCA: 800 to 1300 CE). A null hypothesis is developed to test the possibility that, given a sufficiently long period of time, these events are inevitable and occur purely as a consequence of internal climate variability. The null distribution of this hypothesis is populated by a linear inverse model (LIM) constructed from global sea-surface temperature anomalies and self-calibrated Palmer Drought Severity Index data for North America. Despite being trained only on seasonal data from the late 20th century, the LIM produces megadroughts that are comparable in duration, spatial scale, and magnitude to the most severe events of the last 12 centuries. The null hypothesis therefore cannot be rejected with much confidence when considering these features of megadrought, meaning that similar events are possible today, even without any changes to boundary conditions. In contrast, the observed clustering of megadroughts in the MCA, as well as the change in mean hydroclimate between the MCA and the 1500-2000 period, are more likely to have been caused by either external forcing or by internal climate variability not well sampled during the latter half of the 20th century. Finally, the results demonstrate that the LIM is a viable tool for determining whether events in paleoclimate reconstructions should be ascribed to external forcings or "out of sample" climate mechanisms, or whether they are consistent with the variability observed during the recent period.
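
    A one-variable caricature of this procedure shows how a LIM populates a null distribution: fit a linear propagator and noise level from a short record, then generate long surrogate series and measure drought runs in them. Everything below (the synthetic "record", the threshold, the run-length statistic) is an illustrative assumption, not the study's actual multivariate LIM:

```python
import random

def lag_cov(series, lag):
    """Sample covariance between the series and itself shifted by `lag`."""
    n = len(series) - lag
    m = sum(series) / len(series)
    return sum((series[t] - m) * (series[t + lag] - m) for t in range(n)) / n

def longest_drought(series, threshold=-0.5):
    """Longest consecutive run below the drought threshold."""
    longest = run = 0
    for v in series:
        run = run + 1 if v < threshold else 0
        longest = max(longest, run)
    return longest

random.seed(1)

# Short synthetic "training" record standing in for an observed drought index.
record, x = [], 0.0
for _ in range(600):
    x = 0.9 * x + random.gauss(0.0, 1.0)
    record.append(x)

# One-variable LIM: the propagator is the lag-1 / lag-0 covariance ratio.
g = lag_cov(record, 1) / lag_cov(record, 0)
noise_var = (1.0 - g * g) * lag_cov(record, 0)

# Populate the null distribution with a surrogate far longer than the training record.
surrogate, x = [], 0.0
for _ in range(12000):
    x = g * x + random.gauss(0.0, noise_var ** 0.5)
    surrogate.append(x)
null_megadrought = longest_drought(surrogate)  # longest "megadrought" internal variability alone produces
```

    Repeating the surrogate step many times yields the null distribution against which an observed megadrought's duration can be compared.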

  11. Sex, Sport, IGF-1 and the Community Effect in Height Hypothesis

    Directory of Open Access Journals (Sweden)

    Barry Bogin

    2015-05-01

    Full Text Available We test the hypothesis that differences in social status between groups of people within a population may induce variation in insulin-like growth factor-1 (IGF-1) levels and, by extension, growth in height. This is called the community effect in height hypothesis. The relationship between IGF-1, assessed via finger-prick dried blood spots, and elite-level sport competition outcomes was analysed for a sample of 116 undergraduate men and women. There was a statistically significant difference between winners and losers of a competition. Winners, as a group, had higher average pre-game and post-game IGF-1 levels than losers. We propose this type of difference as a proxy for social dominance. We found no evidence that winners increased in IGF-1 levels over losers or that members of the same team were more similar in IGF-1 levels than they were to players from other teams. These findings provide limited support for the community effect in height hypothesis. The findings are discussed in relation to the action of the growth hormone/IGF-1 axis as a transducer of multiple bio-social influences into a coherent signal which allows the growing human to adjust and adapt to local ecological conditions.

  12. Core-satellite species hypothesis and native versus exotic species in secondary succession

    Science.gov (United States)

    Martinez, Kelsey A.; Gibson, David J.; Middleton, Beth A.

    2015-01-01

    A number of hypotheses exist to explain species' distributions in a landscape, but these hypotheses are not frequently used to explain the differences between native and exotic species distributions. The core-satellite species (CSS) hypothesis predicts species occupancy will be bimodally distributed, i.e., many species will be common and many species will be rare, but does not explicitly consider exotic species distributions. The parallel dynamics (PD) hypothesis predicts that regional occurrence patterns of exotic species will be similar to those of native species. Together, the CSS and PD hypotheses may increase our understanding of exotic species' distributions relative to natives. We selected an old field undergoing secondary succession to study the CSS and PD hypotheses in conjunction with each other. The ratio of exotic to native species (richness and abundance) was observed through 17 years of secondary succession. We predicted species would be bimodally distributed and that exotic:native species ratios would remain steady or decrease through time under frequent disturbance. In contrast to the CSS and PD hypotheses, native species occupancies were not bimodally distributed at the site, but exotic species occupancies were. The exotic:native species ratios for both richness (E:N richness) and abundance (E:N cover) generally decreased or remained constant throughout, supporting the PD hypothesis. Our results suggest exotic species exhibit metapopulation structure in old-field landscapes, but that metapopulation structures of native species are disrupted, perhaps because these species are dispersal-limited in the fragmented landscape.

  13. Disability Prevalence According to a Class, Race, and Sex (CRS) Hypothesis.

    Science.gov (United States)

    Siordia, Carlos

    2015-09-01

    Disability has been shown to be related in definite ways to social class. In modern industrial societies, disability is influenced by and has the potential to contribute to the production and reproduction of social inequality. However, markers of social stratification processes are sometimes ignored as determinants of health. A Class, Race, Sex (CRS) hypothesis is presented to argue that a "low-education disadvantage," a "racial-minority disadvantage," and a "female disadvantage" compound to affect the risk of being disabled. In particular, the CRS hypothesis posits that class is more important than race, and the latter more than sex, when predicting the presence or severity of disability. The cross-sectional study of community-dwelling adults between the ages of 45 and 64 uses data from the American Community Survey (ACS) Public Use Microdata Sample (PUMS) 2008-2012 file. Using 3,429,523 individuals, weighted to represent 61,726,420, the results of the study suggest the CRS hypothesis applies to both Non-Latino-Blacks and Non-Latino-Whites. There is a "male disadvantage" exception for Non-Latino-Whites. Decreasing between-group differences in health may be achieved by making the age-health association at lower socioeconomic strata similar to that of the upper socioeconomic strata.

  14. Method of constructing a fundamental equation of state based on a scaling hypothesis

    Science.gov (United States)

    Rykov, V. A.; Rykov, S. V.; Kudryavtseva, I. V.; Sverdlov, A. V.

    2017-11-01

    The work studies the issues associated with constructing an equation of state (EOS) that takes due account of substance behavior in the critical region, in connection with the scaling theory of critical phenomena (ST). The authors have developed a new version of the scaling hypothesis; this approach uses the following: a) a substance equation of state in the form of the Schofield-Litster-Ho linear model (LM), and b) the Benedek hypothesis. The Benedek hypothesis posits a similar behavior for a number of properties (isochoric and isobaric heat capacities, the isothermal compressibility coefficient) along the critical and near-critical isochores in the vicinity of the critical point. A method is proposed to build a fundamental equation of state (FEOS) which satisfies the ST power laws. The FEOS-building method is verified by constructing an equation of state for argon within the state-parameter range up to 1000 MPa in pressure and from 83.056 K to 13000 K in temperature. A comparison with the fundamental equations of state of Stewart-Jacobsen (1989), Kozlov et al (1996), and Tegeler-Span-Wagner (1999) has shown that the FEOS describes the known experimental data with a substantially lower error.
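
    For reference, the ST power laws that such an FEOS must reproduce near the critical point (standard scaling-theory asymptotics, quoted here as background rather than taken from the paper) are, with $\tau = (T - T_c)/T_c$:

```latex
C_v \sim |\tau|^{-\alpha}, \qquad
K_T \sim |\tau|^{-\gamma}, \qquad
\Delta\rho \sim |\tau|^{\beta} \quad (\tau < 0, \ \text{coexistence curve})
```

    Here $\alpha$, $\beta$, and $\gamma$ are the critical exponents for the isochoric heat capacity, the coexistence-curve density difference, and the isothermal compressibility, respectively.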

  15. Information filtering based on transferring similarity.

    Science.gov (United States)

    Sun, Duo; Zhou, Tao; Liu, Jian-Guo; Liu, Run-Ran; Jia, Chun-Xiao; Wang, Bing-Hong

    2009-07-01

    In this Brief Report, we propose an index of user similarity, namely, the transferring similarity, which involves all high-order similarities between users. Accordingly, we design a modified collaborative filtering algorithm, which provides remarkably more accurate predictions than standard collaborative filtering. More interestingly, we find that the algorithmic performance approaches its optimal value when the parameter contained in the definition of transferring similarity gets close to its critical value, before which the series expansion of transferring similarity is convergent and after which it is divergent. Our study is complementary to the one reported in [E. A. Leicht, P. Holme, and M. E. J. Newman, Phys. Rev. E 73, 026120 (2006)], and is relevant to the missing link prediction problem.
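
    The convergent series expansion mentioned here can be sketched as a truncated sum of powers of a weighted direct-similarity matrix. The matrix, the weight alpha, and the truncation depth below are all illustrative assumptions; alpha is kept below the critical value (the inverse spectral radius of the matrix), so the series converges:

```python
def mat_mul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def transferring_similarity(S, alpha, terms=50):
    """Truncated series sum_{k>=1} (alpha * S)^k, accumulating all high-order similarity paths."""
    n = len(S)
    aS = [[alpha * S[i][j] for j in range(n)] for i in range(n)]
    total = [row[:] for row in aS]
    power = [row[:] for row in aS]
    for _ in range(terms - 1):
        power = mat_mul(power, aS)
        for i in range(n):
            for j in range(n):
                total[i][j] += power[i][j]
    return total

# Users 0 and 2 share no direct similarity but are linked through user 1.
S = [[0.0, 0.8, 0.0],
     [0.8, 0.0, 0.6],
     [0.0, 0.6, 0.0]]
T = transferring_similarity(S, alpha=0.5)  # T[0][2] > 0 purely via the intermediate user
```

    Past the critical alpha the matrix powers stop shrinking and the partial sums diverge, which is the convergence/divergence boundary the report refers to.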

  16. Self-similar continued root approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.

    2012-01-01

    A novel method of summing asymptotic series is advanced. Such series repeatedly arise when employing perturbation theory in powers of a small parameter for complicated problems of condensed matter physics, statistical physics, and various applied problems. The method is based on the self-similar approximation theory involving self-similar root approximants. The constructed self-similar continued roots extrapolate asymptotic series to finite values of the expansion parameter. The self-similar continued roots contain, as a particular case, continued fractions and Padé approximants. A theorem on the convergence of the self-similar continued roots is proved. The method is illustrated by several examples from condensed-matter physics.
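
The nested ("continued") root structure underlying such approximants can be illustrated with a textbook nested radical; the recursion below, a depth-limited evaluation of sqrt(2 + sqrt(2 + …)) converging to its fixed point x = sqrt(2 + x) = 2, is a toy stand-in, not Gluzman and Yukalov's actual approximants:

```python
def continued_root(terms, power=0.5):
    """Evaluate (t0 + (t1 + (t2 + ...)^power)^power)^power from inside out."""
    val = 0.0
    for t in reversed(terms):
        val = (t + val) ** power
    return val

# Depth-n truncations approach the fixed point x = sqrt(2 + x), i.e. x = 2.
approx = [continued_root([2.0] * n) for n in (1, 3, 6, 12)]
```

In the self-similar continued roots of the paper the coefficients and powers are fixed by matching the asymptotic series term by term; here they are constant purely for illustration.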

  17. Correlation between social proximity and mobility similarity.

    Science.gov (United States)

    Fan, Chao; Liu, Yiding; Huang, Junming; Rong, Zhihai; Zhou, Tao

    2017-09-20

    Human behaviors exhibit ubiquitous correlations in many aspects, such as individual and collective levels, temporal and spatial dimensions, content, social and geographical layers. With rich Internet data on online behaviors becoming available, exploring human mobility similarity from the perspective of social network proximity has attracted academic interest. Existing analyses show a strong correlation between online social proximity and offline mobility similarity, namely, mobile records between friends are significantly more similar than those between strangers, and those between friends with common neighbors are even more similar. We argue for the importance of both the number and the diversity of common friends, with a counterintuitive finding that the number of common friends has no positive impact on mobility similarity while the diversity plays a key role, disagreeing with previous studies. Our analysis provides a novel view for better understanding the coupling between human online and offline behaviors, and will help model and predict human behaviors based on social proximity.

  18. Scalar Similarity for Relaxed Eddy Accumulation Methods

    Science.gov (United States)

    Ruppert, Johannes; Thomas, Christoph; Foken, Thomas

    2006-07-01

    The relaxed eddy accumulation (REA) method allows the measurement of trace gas fluxes when no fast sensors are available for eddy covariance measurements. The flux parameterisation used in REA is based on the assumption of scalar similarity, i.e., similarity of the turbulent exchange of two scalar quantities. In this study changes in scalar similarity between carbon dioxide, sonic temperature and water vapour were assessed using scalar correlation coefficients and spectral analysis. The influence on REA measurements was assessed by simulation. The evaluation is based on observations over grassland, an irrigated cotton plantation and spruce forest. Scalar similarity between carbon dioxide, sonic temperature and water vapour showed a distinct diurnal pattern and changed within the day. Poor scalar similarity was found to be linked to dissimilarities in the energy contained in the low frequency part of the turbulent spectra.

  19. Surf similarity and solitary wave runup

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Madsen, Per A.

    2008-01-01

    The notion of surf similarity in the runup of solitary waves is revisited. We show that the surf similarity parameter for solitary waves may be effectively reduced to the beach slope divided by the offshore wave height to depth ratio. This clarifies its physical interpretation relative to a previous parameterization, which was not given in an explicit form. Good coherency with experimental (breaking) runup data is preserved with this simpler parameter. A recasting of analytical (nonbreaking) runup expressions for sinusoidal and solitary waves additionally shows that they contain identical functional dependence on their respective surf similarity parameters. Important equivalencies in the runup of sinusoidal and solitary waves are thus revealed.

  20. Similarity in Bilateral Isolated Internal Orbital Fractures.

    Science.gov (United States)

    Chen, Hung-Chang; Cox, Jacob T; Sanyal, Abanti; Mahoney, Nicholas R

    2018-04-13

    In evaluating patients sustaining bilateral isolated internal orbital fractures, the authors have observed both similar fracture locations and also similar expansion of orbital volumes. In this study, we aim to investigate whether there is a propensity for the 2 orbits to fracture in symmetrically similar patterns when sustaining similar trauma. A retrospective chart review was performed studying all cases at our institution of bilateral isolated internal orbital fractures involving the medial wall and/or the floor at the time of presentation. The similarity of the bilateral fracture locations was evaluated using Fisher's exact test. The bilateral expanded orbital volumes were analyzed using the Wilcoxon signed-rank test to assess for orbital volume similarity. Twenty-four patients with bilateral internal orbital fractures were analyzed for fracture location similarity. Seventeen patients (70.8%) had 100% concordance in the orbital subregion fractured, and the association between the right and the left orbital fracture subregion locations was statistically significant (P < 0.0001). Fifteen patients were analyzed for orbital volume similarity. The average orbital cavity volume was 31.2 ± 3.8 cm³ on the right and 32.0 ± 3.7 cm³ on the left. There was a statistically significant difference between right and left orbital cavity volumes (P = 0.0026). The data from this study suggest that an individual who suffers isolated bilateral internal orbital fractures has a statistically significant similarity in the location of their orbital fractures. However, there does not appear to be statistically significant similarity in the expansion of the orbital volumes in these patients.

  1. Measure of Node Similarity in Multilayer Networks.

    Directory of Open Access Journals (Sweden)

    Anders Mollgaard

    Full Text Available The weight of links in a network is often related to the similarity of the nodes. Here, we introduce a simple tunable measure for analysing the similarity of nodes across different link weights. In particular, we use the measure to analyze homophily in a group of 659 freshman students at a large university. Our analysis is based on data obtained using smartphones equipped with custom data collection software, complemented by questionnaire-based data. The network of social contacts is represented as a weighted multilayer network constructed from different channels of telecommunication as well as data on face-to-face contacts. We find that even strongly connected individuals are not more similar with respect to basic personality traits than randomly chosen pairs of individuals. In contrast, several socio-demographic variables have a significant degree of similarity. We further observe that similarity might be present in one layer of the multilayer network and simultaneously be absent in the other layers. For a variable such as gender, our measure reveals a transition from similarity between nodes connected with links of relatively low weight to dissimilarity for the nodes connected by the strongest links. We finally analyze the overlap between layers in the network for different levels of acquaintanceship.

  2. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar

    2016-03-21

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  3. Trajectory similarity join in spatial networks

    KAUST Repository

    Shang, Shuo

    2017-09-07

    The matching of similar pairs of objects, called similarity join, is fundamental functionality in data management. We consider the case of trajectory similarity join (TS-Join), where the objects are trajectories of vehicles moving in road networks. Thus, given two sets of trajectories and a threshold θ, the TS-Join returns all pairs of trajectories from the two sets with similarity above θ. This join targets applications such as trajectory near-duplicate detection, data cleaning, ridesharing recommendation, and traffic congestion prediction. With these applications in mind, we provide a purposeful definition of similarity. To enable efficient TS-Join processing on large sets of trajectories, we develop search space pruning techniques and take into account the parallel processing capabilities of modern processors. Specifically, we present a two-phase divide-and-conquer algorithm. For each trajectory, the algorithm first finds similar trajectories. Then it merges the results to achieve a final result. The algorithm exploits an upper bound on the spatiotemporal similarity and a heuristic scheduling strategy for search space pruning. The algorithm's per-trajectory searches are independent of each other and can be performed in parallel, and the merging has constant cost. An empirical study with real data offers insight in the performance of the algorithm and demonstrates that it is capable of outperforming a well-designed baseline algorithm by an order of magnitude.
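
The threshold join with upper-bound pruning can be sketched as follows. The similarity function (reciprocal of mean nearest-point distance) and the bounding-box bound are illustrative stand-ins for the paper's spatiotemporal similarity and its bounds, and the sketch is serial; the per-trajectory loop is what the paper parallelizes:

```python
from itertools import product
from math import hypot

def sim(t1, t2):
    """Symmetric similarity in (0, 1]: 1 / (1 + mean nearest-point distance)."""
    def one_way(a, b):
        return sum(min(hypot(px - qx, py - qy) for qx, qy in b)
                   for px, py in a) / len(a)
    return 1.0 / (1.0 + 0.5 * (one_way(t1, t2) + one_way(t2, t1)))

def bbox(t):
    xs, ys = zip(*t)
    return min(xs), min(ys), max(xs), max(ys)

def bbox_gap(b1, b2):
    """Minimum distance between two bounding boxes (0 if they overlap)."""
    dx = max(b1[0] - b2[2], b2[0] - b1[2], 0.0)
    dy = max(b1[1] - b2[3], b2[1] - b1[3], 0.0)
    return hypot(dx, dy)

def ts_join(P, Q, theta):
    """Return (i, j, sim) for all pairs with similarity >= theta."""
    bp, bq = [bbox(t) for t in P], [bbox(t) for t in Q]
    out = []
    for i, j in product(range(len(P)), range(len(Q))):
        # The box gap lower-bounds every point distance, so it upper-bounds sim.
        if 1.0 / (1.0 + bbox_gap(bp[i], bq[j])) < theta:
            continue  # cannot reach theta; pruned without the full computation
        s = sim(P[i], Q[j])
        if s >= theta:
            out.append((i, j, s))
    return out
```

The pruning is safe because mean nearest-point distance can never be smaller than the gap between the trajectories' bounding boxes.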

  4. The baryonic self similarity of dark matter

    International Nuclear Information System (INIS)

    Alard, C.

    2014-01-01

    Cosmological simulations indicate that dark matter halos have specific self-similar properties. However, halo similarity is affected by baryonic feedback. By using momentum-driven winds as a model to represent the baryon feedback, an equilibrium condition is derived which directly implies the emergence of a new type of similarity. The new self-similar solution has constant acceleration at a reference radius for both dark matter and baryons. This model receives strong support from observations of galaxies. The new self-similar properties imply that the total acceleration at larger distances is scale-free, the transition between the dark-matter- and baryon-dominated regimes occurs at a constant acceleration, and the maximum amplitude of the velocity curve at larger distances is proportional to M^(1/4). These results demonstrate that this self-similar model is consistent with the basics of modified Newtonian dynamics (MOND) phenomenology. In agreement with the observations, the coincidence between the self-similar model and MOND breaks down at the scale of clusters of galaxies. Numerical experiments show that the behavior of the density near the origin is closely approximated by an Einasto profile.

  5. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar; Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knuepfer, Christian; Liebermeister, Wolfram

    2016-01-01

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  6. Neural Global Pattern Similarity Underlies True and False Memories.

    Science.gov (United States)

    Ye, Zhifang; Zhu, Bi; Zhuang, Liping; Lu, Zhonglin; Chen, Chuansheng; Xue, Gui

    2016-06-22

    The neural processes giving rise to human memory strength signals remain poorly understood. Inspired by formal computational models that posit a central role of global matching in memory strength, we tested a novel hypothesis that the strengths of both true and false memories arise from the global similarity of an item's neural activation pattern during retrieval to that of all the studied items during encoding (i.e., the encoding-retrieval neural global pattern similarity [ER-nGPS]). We revealed multiple ER-nGPS signals that carried distinct information and contributed differentially to true and false memories: Whereas ER-nGPS in the parietal regions reflected semantic similarity and scaled with the recognition strengths of both true and false memories, ER-nGPS in the visual cortex contributed solely to true memory. Moreover, ER-nGPS differences between the parietal and visual cortices were correlated with frontal monitoring processes. By combining computational and neuroimaging approaches, our results advance a mechanistic understanding of memory strength in recognition. What neural processes give rise to memory strength signals and lead to our conscious feelings of familiarity? Using fMRI, we found that the memory strength of a given item depends not only on how it was encoded during learning, but also on the similarity of its neural representation with other studied items. The global neural matching signal, mainly in the parietal lobule, could account for the memory strengths of both studied and unstudied items. Interestingly, a different global matching signal, originating from the visual cortex, could distinguish true from false memories. The findings reveal multiple neural mechanisms underlying the memory strengths of events registered in the brain.
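
The ER-nGPS measure as defined in the abstract, one item's retrieval pattern correlated against every studied item's encoding pattern and averaged, can be sketched with toy vectors; the implementation details below (Pearson correlation, plain lists) are illustrative, not the study's fMRI pipeline:

```python
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation of two equal-length, non-constant vectors."""
    mx, my = mean(x), mean(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (pstdev(x) * pstdev(y))

def er_ngps(retrieval_pattern, encoding_patterns):
    """Mean similarity of one retrieval pattern to ALL encoding patterns."""
    return mean(pearson(retrieval_pattern, e) for e in encoding_patterns)
```

A retrieval pattern that resembles many encoding patterns scores high even if the item itself was never studied, which is how a global-matching signal can support false memories.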

  7. Functional diversity supports the physiological tolerance hypothesis for plant species richness along climatic gradients

    Science.gov (United States)

    Spasojevic, Marko J.; Grace, James B.; Harrison, Susan; Damschen, Ellen Ingman

    2013-01-01

    1. The physiological tolerance hypothesis proposes that plant species richness is highest in warm and/or wet climates because a wider range of functional strategies can persist under such conditions. Functional diversity metrics, combined with statistical modeling, offer new ways to test whether diversity-environment relationships are consistent with this hypothesis. 2. In a classic study by R. H. Whittaker (1960), herb species richness declined from mesic (cool, moist, northerly) slopes to xeric (hot, dry, southerly) slopes. Building on this dataset, we measured four plant functional traits (plant height, specific leaf area, leaf water content and foliar C:N) and used them to calculate three functional diversity metrics (functional richness, evenness, and dispersion). We then used a structural equation model to ask if ‘functional diversity’ (modeled as the joint responses of richness, evenness, and dispersion) could explain the observed relationship of topographic climate gradients to species richness. We then repeated our model examining the functional diversity of each of the four traits individually. 3. Consistent with the physiological tolerance hypothesis, we found that functional diversity was higher in more favorable climatic conditions (mesic slopes), and that multivariate functional diversity mediated the relationship of the topographic climate gradient to plant species richness. We found similar patterns for models focusing on individual trait functional diversity of leaf water content and foliar C:N. 4. Synthesis. Our results provide trait-based support for the physiological tolerance hypothesis, suggesting that benign climates support more species because they allow for a wider range of functional strategies.

  8. Testing the assumptions of the pyrodiversity begets biodiversity hypothesis for termites in semi-arid Australia.

    Science.gov (United States)

    Davis, Hayley; Ritchie, Euan G; Avitabile, Sarah; Doherty, Tim; Nimmo, Dale G

    2018-04-01

    Fire shapes the composition and functioning of ecosystems globally. In many regions, fire is actively managed to create diverse patch mosaics of fire-ages under the assumption that a diversity of post-fire-age classes will provide a greater variety of habitats, thereby enabling species with differing habitat requirements to coexist, and enhancing species diversity (the pyrodiversity begets biodiversity hypothesis). However, studies provide mixed support for this hypothesis. Here, using termite communities in a semi-arid region of southeast Australia, we test four key assumptions of the pyrodiversity begets biodiversity hypothesis: (i) that fire shapes vegetation structure over sufficient time frames to influence species' occurrence, (ii) that animal species are linked to resources that are themselves shaped by fire and that peak at different times since fire, (iii) that species' probability of occurrence or abundance peaks at varying times since fire and (iv) that providing a diversity of fire-ages increases species diversity at the landscape scale. Termite species and habitat elements were sampled in 100 sites across a range of fire-ages, nested within 20 landscapes chosen to represent a gradient of low to high pyrodiversity. We used regression modelling to explore relationships between termites, habitat and fire. Fire affected two habitat elements (coarse woody debris and the cover of woody vegetation) that were associated with the probability of occurrence of three termite species and overall species richness, thus supporting the first two assumptions of the pyrodiversity hypothesis. However, this did not result in those species or species richness being affected by fire history per se. Consequently, landscapes with a low diversity of fire histories had similar numbers of termite species as landscapes with high pyrodiversity. Our work suggests that encouraging a diversity of fire-ages to enhance termite species richness in this study region is not necessary.

  9. A Similarity Search Using Molecular Topological Graphs

    Directory of Open Access Journals (Sweden)

    Yoshifumi Fukunishi

    2009-01-01

    Full Text Available A molecular similarity measure has been developed using molecular topological graphs and atomic partial charges. Two kinds of topological graphs were used. One is the ordinary adjacency matrix and the other is a matrix which represents the minimum path length between two atoms of the molecule. The ordinary adjacency matrix is suitable to compare the local structures of molecules such as functional groups, and the other matrix is suitable to compare the global structures of molecules. The combination of these two matrices gave a similarity measure. This method was applied to in silico drug screening, and the results showed that it was effective as a similarity measure.
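
The two topological descriptors in the abstract, the adjacency matrix for local structure and the minimum-path-length matrix for global structure, can be sketched as follows. The overlap score and its equal-weight combination are illustrative choices (and assume two molecules with the same number of atoms and connected graphs), not the paper's actual measure:

```python
INF = float("inf")

def min_path_matrix(adj):
    """Floyd-Warshall all-pairs shortest path lengths on an unweighted graph."""
    n = len(adj)
    d = [[0 if i == j else (1 if adj[i][j] else INF) for j in range(n)]
         for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def matrix_overlap(M1, M2):
    """Crude similarity of two same-size matrices in [0, 1]."""
    num = sum(min(a, b) for r1, r2 in zip(M1, M2) for a, b in zip(r1, r2))
    den = sum(max(a, b) for r1, r2 in zip(M1, M2) for a, b in zip(r1, r2))
    return num / den if den else 1.0

def topo_similarity(adj1, adj2):
    """Equal-weight blend of local (adjacency) and global (path) overlap."""
    return 0.5 * (matrix_overlap(adj1, adj2) +
                  matrix_overlap(min_path_matrix(adj1), min_path_matrix(adj2)))
```

For example, a 3-atom chain and a 3-atom ring share all bonds of the chain but differ in the long-range path structure, so their score falls strictly between 0 and 1.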

  10. Similarity-based pattern analysis and recognition

    CERN Document Server

    Pelillo, Marcello

    2013-01-01

    This accessible text/reference presents a coherent overview of the emerging field of non-Euclidean similarity learning. The book presents a broad range of perspectives on similarity-based pattern analysis and recognition methods, from purely theoretical challenges to practical, real-world applications. The coverage includes both supervised and unsupervised learning paradigms, as well as generative and discriminative models. Topics and features: explores the origination and causes of non-Euclidean (dis)similarity measures, and how they influence the performance of traditional classification algorithms.

  11. A Hypothesis-Driven Approach to Site Investigation

    Science.gov (United States)

    Nowak, W.

    2008-12-01

    Variability of subsurface formations and the scarcity of data lead to the notion of aquifer parameters as geostatistical random variables. Given an information need and limited resources for field campaigns, site investigation is often put into the context of optimal design. In optimal design, the types, numbers and positions of samples are optimized under case-specific objectives to meet the information needs. Past studies feature optimal data worth (balancing maximum financial profit in an engineering task versus the cost of additional sampling), or aim at a minimum prediction uncertainty of stochastic models for a prescribed investigation budget. Recent studies also account for other sources of uncertainty outside the hydrogeological range, such as uncertain toxicity, ingestion and behavioral parameters of the affected population when predicting the human health risk from groundwater contaminations. The current study looks at optimal site investigation from a new angle. Answering a yes/no question under uncertainty directly requires recasting the original question as a hypothesis test. Otherwise, false confidence in the resulting answer would be pretended. A straightforward example is whether a recent contaminant spill will cause contaminant concentrations in excess of a legal limit at a nearby drinking water well. This question can only be answered down to a specified chance of error, i.e., based on the significance level used in hypothesis tests. Optimal design is placed into the hypothesis-driven context by using the chance of providing a false yes/no answer as new criterion to be minimized. Different configurations apply for one-sided and two-sided hypothesis tests. If a false answer entails financial liability, the hypothesis-driven context can be re-cast in the context of data worth. The remaining difference is that failure is a hard constraint in the data worth context versus a monetary punishment term in the hypothesis-driven context. 
The basic principle

  12. Prospective detection of large prediction errors: a hypothesis testing approach

    International Nuclear Information System (INIS)

    Ruan, Dan

    2010-01-01

    Real-time motion management is important in radiotherapy. In addition to effective monitoring schemes, prediction is required to compensate for system latency, so that treatment can be synchronized with tumor motion. However, it is difficult to predict tumor motion at all times, and it is critical to determine when large prediction errors may occur. Such information can be used to pause the treatment beam or adjust monitoring/prediction schemes. In this study, we propose a hypothesis testing approach for detecting instants corresponding to potentially large prediction errors in real time. We treat the future tumor location as a random variable, and obtain its empirical probability distribution with the kernel density estimation-based method. Under the null hypothesis, the model probability is assumed to be a concentrated Gaussian centered at the prediction output. Under the alternative hypothesis, the model distribution is assumed to be non-informative uniform, which reflects the situation that the future position cannot be inferred reliably. We derive the likelihood ratio test (LRT) for this hypothesis testing problem and show that with the method of moments for estimating the null hypothesis Gaussian parameters, the LRT reduces to a simple test on the empirical variance of the predictive random variable. This conforms to the intuition to expect a (potentially) large prediction error when the estimate is associated with high uncertainty, and to expect an accurate prediction when the uncertainty level is low. We tested the proposed method on patient-derived respiratory traces. The 'ground-truth' prediction error was evaluated by comparing the prediction values with retrospective observations, and the large prediction regions were subsequently delineated by thresholding the prediction errors. The receiver operating characteristic curve was used to describe the performance of the proposed hypothesis testing method. Clinical implication was represented by miss
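
The reduced form of the test described above, the LRT collapsing to a check on the empirical variance of the predictive random variable, can be sketched directly; the threshold value and function names are illustrative stand-ins for the ones the LRT derivation would fix:

```python
from statistics import pvariance

def flag_large_error(predictive_samples, var_threshold):
    """Reject the null (confident, concentrated Gaussian) when the sampled
    future positions are too spread out, i.e. prediction is unreliable."""
    return pvariance(predictive_samples) > var_threshold

# Tight samples -> keep predicting; spread samples -> pause/adjust treatment.
```

This captures the stated intuition: high uncertainty in the predictive distribution flags a potentially large prediction error, low uncertainty does not.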

  13. Mismatch or cumulative stress : Toward an integrated hypothesis of programming effects

    NARCIS (Netherlands)

    Nederhof, Esther; Schmidt, Mathias V.

    2012-01-01

    This paper integrates the cumulative stress hypothesis with the mismatch hypothesis, taking into account individual differences in sensitivity to programming. According to the cumulative stress hypothesis, individuals are more likely to suffer from disease as adversity accumulates. According to the

  14. Discovering Music Structure via Similarity Fusion

    DEFF Research Database (Denmark)

    Automatic methods for music navigation and music recommendation exploit the structure in the music to carry out a meaningful exploration of the “song space”. To get a satisfactory performance from such systems, one should incorporate as much information about songs similarity as possible; however … semantics”, in such a way that all observed similarities can be satisfactorily explained using the latent semantics. Therefore, one can think of these semantics as the real structure in music, in the sense that they can explain the observed similarities among songs. The suitability of the PLSA model for representing music structure is studied in a simplified scenario consisting of 4412 songs and two similarity measures among them. The results suggest that the PLSA model is a useful framework to combine different sources of information, and provides a reasonable space for song representation.

  15. Abundance estimation of spectrally similar minerals

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-07-01

    Full Text Available This paper evaluates a spectral unmixing method for estimating the partial abundance of spectrally similar minerals in complex mixtures. The method requires formulation of a linear function of individual spectra of individual minerals. The first...

  16. Lagrangian-similarity diffusion-deposition model

    International Nuclear Information System (INIS)

    Horst, T.W.

    1979-01-01

    A Lagrangian-similarity diffusion model has been incorporated into the surface-depletion deposition model. This model predicts vertical concentration profiles far downwind of the source that agree with those of a one-dimensional gradient-transfer model

  17. Discovering Music Structure via Similarity Fusion

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Parrado-Hernandez, Emilio; Meng, Anders

    Automatic methods for music navigation and music recommendation exploit the structure in the music to carry out a meaningful exploration of the “song space”. To get a satisfactory performance from such systems, one should incorporate as much information about songs similarity as possible; however … semantics”, in such a way that all observed similarities can be satisfactorily explained using the latent semantics. Therefore, one can think of these semantics as the real structure in music, in the sense that they can explain the observed similarities among songs. The suitability of the PLSA model for representing music structure is studied in a simplified scenario consisting of 4412 songs and two similarity measures among them. The results suggest that the PLSA model is a useful framework to combine different sources of information, and provides a reasonable space for song representation.

  18. Outsourced similarity search on metric data assets

    KAUST Repository

    Yiu, Man Lung

    2012-02-01

    This paper considers a cloud computing setting in which similarity querying of metric data is outsourced to a service provider. The data is to be revealed only to trusted users, not to the service provider or anyone else. Users query the server for the most similar data objects to a query example. Outsourcing offers the data owner scalability and a low initial investment. The need for privacy may be due to the data being sensitive (e.g., in medicine), valuable (e.g., in astronomy), or otherwise confidential. Given this setting, the paper presents techniques that transform the data prior to supplying it to the service provider for similarity queries on the transformed data. Our techniques provide interesting trade-offs between query cost and accuracy. They are then further extended to offer an intuitive privacy guarantee. Empirical studies with real data demonstrate that the techniques are capable of offering privacy while enabling efficient and accurate processing of similarity queries.

  19. Protein structural similarity search by Ramachandran codes

    Directory of Open Access Journals (Sweden)

    Chang Chih-Hung

    2007-08-01

    Full Text Available Abstract Background Protein structural data has increased exponentially, such that fast and accurate tools are necessary to access structure similarity search. To improve the search speed, several methods have been designed to reduce three-dimensional protein structures to one-dimensional text strings that are then analyzed by traditional sequence alignment methods; however, the accuracy is usually sacrificed and the speed is still unable to match sequence similarity search tools. Here, we aimed to improve the linear encoding methodology and develop efficient search tools that can rapidly retrieve structural homologs from large protein databases. Results We propose a new linear encoding method, SARST (Structural similarity search Aided by Ramachandran Sequential Transformation). SARST transforms protein structures into text strings through a Ramachandran map organized by nearest-neighbor clustering and uses a regenerative approach to produce substitution matrices. Then, classical sequence similarity search methods can be applied to the structural similarity search. Its accuracy is similar to Combinatorial Extension (CE) and it works over 243,000 times faster, searching 34,000 proteins in 0.34 sec with a 3.2-GHz CPU. SARST provides statistically meaningful expectation values to assess the retrieved information. It has been implemented into a web service and a stand-alone Java program that is able to run on many different platforms. Conclusion As a database search method, SARST can rapidly distinguish high from low similarities and efficiently retrieve homologous structures. It demonstrates that the easily accessible linear encoding methodology has the potential to serve as a foundation for efficient protein structural similarity search tools. These search tools should be applicable to automated and high-throughput functional annotations or predictions for the ever-increasing number of published protein structures in this post-genomic era.
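
The linear-encoding idea, each residue's Ramachandran (phi, psi) pair mapped to the nearest cluster letter so structures can be compared as strings, can be sketched as follows. The three cluster centers and the use of difflib's ratio are illustrative assumptions; SARST derives its clusters and substitution matrix from data:

```python
from difflib import SequenceMatcher

# Hypothetical cluster centers (degrees): alpha-helix, beta-sheet, left-handed.
CENTERS = {"H": (-60.0, -45.0), "E": (-120.0, 130.0), "G": (60.0, 45.0)}

def encode(dihedrals):
    """Map each (phi, psi) pair to the letter of the nearest cluster center."""
    def nearest(phi, psi):
        return min(CENTERS, key=lambda c: (CENTERS[c][0] - phi) ** 2 +
                                          (CENTERS[c][1] - psi) ** 2)
    return "".join(nearest(phi, psi) for phi, psi in dihedrals)

def structural_similarity(d1, d2):
    """Compare two structures via their one-dimensional encodings."""
    return SequenceMatcher(None, encode(d1), encode(d2)).ratio()
```

Once structures are strings, any sequence-search machinery (here a simple ratio, in SARST a BLAST-style search) applies unchanged, which is the source of the speedup.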

  20. Similarity search processing. Parallelization and indexing technologies.

    Directory of Open Access Journals (Sweden)

    Eder Dos Santos

    2015-08-01

    This Scientific-Technical Report addresses similarity search and the implementation of metric structures in parallel environments. It also presents the state of the art in similarity search on metric structures and in parallelism technologies. Comparative analyses are also proposed, seeking to identify the behavior of a set of metric spaces and metric structures on multicore-based and GPU-based processing platforms.
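The kind of metric-structure search benchmarked in such reports can be illustrated with pivot-based pruning: precomputed pivot distances and the triangle inequality discard candidates without computing the full distance, and independent queries parallelize naturally. This is a generic sketch under simplified assumptions, not the report's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Minimal pivot-based metric-space range search. The triangle
# inequality |d(q,p) - d(x,p)| <= d(q,x) turns precomputed pivot
# distances into a cheap lower bound for pruning.

def dist(a, b):
    """Euclidean distance (any metric works here)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def build_index(objects, pivots):
    """Precompute d(x, p) for every object x and pivot p."""
    return [[dist(x, p) for p in pivots] for x in objects]

def range_query(q, objects, pivots, index, radius):
    """Return objects within `radius` of q, pruning via pivots."""
    qd = [dist(q, p) for p in pivots]
    hits = []
    for x, xd in zip(objects, index):
        # Lower bound on d(q, x); if it exceeds the radius, skip x.
        if max(abs(a - b) for a, b in zip(qd, xd)) > radius:
            continue
        if dist(q, x) <= radius:
            hits.append(x)
    return hits

objects = [(0, 0), (1, 1), (5, 5), (6, 5), (9, 9)]
pivots = [(0, 0), (9, 9)]
index = build_index(objects, pivots)

# Independent queries can run in parallel (threads here; the report's
# comparison targets multicore and GPU platforms):
with ThreadPoolExecutor() as pool:
    results = list(pool.map(
        lambda q: range_query(q, objects, pivots, index, 1.5),
        [(0, 0), (5, 5)],
    ))
print(results)  # prints [[(0, 0), (1, 1)], [(5, 5), (6, 5)]]
```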

  1. Parallel trajectory similarity joins in spatial networks

    KAUST Repository

    Shang, Shuo

    2018-04-04

    The matching of similar pairs of objects, called similarity join, is a fundamental operation in data management. We consider two cases of trajectory similarity joins (TS-Joins): a threshold-based join (Tb-TS-Join) and a top-k TS-Join (k-TS-Join), where the objects are trajectories of vehicles moving in road networks. Given two sets of trajectories and a threshold θ, the Tb-TS-Join returns all pairs of trajectories from the two sets with similarity above θ. In contrast, the k-TS-Join does not take a threshold as a parameter; it returns the top-k most similar trajectory pairs from the two sets. The TS-Joins target diverse applications such as trajectory near-duplicate detection, data cleaning, ridesharing recommendation, and traffic congestion prediction. With these applications in mind, we provide purposeful definitions of similarity. To enable efficient processing of the TS-Joins on large sets of trajectories, we develop search space pruning techniques and exploit the parallel processing capabilities of modern processors. Specifically, we present a two-phase divide-and-conquer search framework that lays the foundation for the algorithms for the Tb-TS-Join and the k-TS-Join, which rely on different pruning techniques to achieve efficiency. For each trajectory, the algorithms first find similar trajectories; then they merge the results to obtain the final result. The algorithms for the two joins exploit different upper and lower bounds on the spatiotemporal trajectory similarity and different heuristic scheduling strategies for search space pruning. Their per-trajectory searches are independent of each other and can be performed in parallel, and the merge steps have constant cost. An empirical study with real data offers insight into the performance of the algorithms and demonstrates that they are capable of outperforming well-designed baseline algorithms by an order of magnitude.
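The two-phase framework (independent per-trajectory searches, then a merge) can be sketched as follows. The similarity measure, data, and threshold here are toy assumptions; the paper's spatiotemporal bounds and scheduling heuristics are not reproduced.

```python
from concurrent.futures import ThreadPoolExecutor

# A minimal sketch of a threshold-based trajectory similarity join
# (Tb-TS-Join). Trajectories are simplified to 2D point sequences and
# the similarity measure is a toy one.

def similarity(t1, t2):
    """Toy similarity: 1 / (1 + average point-to-point distance)."""
    n = min(len(t1), len(t2))
    d = sum(
        ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(t1[:n], t2[:n])
    ) / n
    return 1.0 / (1.0 + d)

def tb_ts_join(set_p, set_q, theta):
    """Return all index pairs (i, j) with similarity above theta.

    Phase 1: each trajectory in set_p is searched independently
    (parallelizable). Phase 2: per-trajectory results are merged,
    each merge step at constant cost.
    """
    def search_one(i):
        return [
            (i, j)
            for j, q in enumerate(set_q)
            if similarity(set_p[i], q) > theta
        ]

    with ThreadPoolExecutor() as pool:
        partial = pool.map(search_one, range(len(set_p)))
    result = []
    for chunk in partial:   # merge phase
        result.extend(chunk)
    return result

P = [[(0, 0), (1, 0)], [(5, 5), (6, 5)]]
Q = [[(0, 0.1), (1, 0.1)], [(9, 9), (9, 8)]]
print(tb_ts_join(P, Q, theta=0.5))  # prints [(0, 0)]
```

A k-TS-Join variant would instead keep a global top-k heap during the merge phase rather than filtering by a fixed θ.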

  2. Parallel trajectory similarity joins in spatial networks

    KAUST Repository

    Shang, Shuo; Chen, Lisi; Wei, Zhewei; Jensen, Christian S.; Zheng, Kai; Kalnis, Panos

    2018-01-01

    The matching of similar pairs of objects, called similarity join, is a fundamental operation in data management. We consider two cases of trajectory similarity joins (TS-Joins): a threshold-based join (Tb-TS-Join) and a top-k TS-Join (k-TS-Join), where the objects are trajectories of vehicles moving in road networks. Given two sets of trajectories and a threshold θ, the Tb-TS-Join returns all pairs of trajectories from the two sets with similarity above θ. In contrast, the k-TS-Join does not take a threshold as a parameter; it returns the top-k most similar trajectory pairs from the two sets. The TS-Joins target diverse applications such as trajectory near-duplicate detection, data cleaning, ridesharing recommendation, and traffic congestion prediction. With these applications in mind, we provide purposeful definitions of similarity. To enable efficient processing of the TS-Joins on large sets of trajectories, we develop search space pruning techniques and exploit the parallel processing capabilities of modern processors. Specifically, we present a two-phase divide-and-conquer search framework that lays the foundation for the algorithms for the Tb-TS-Join and the k-TS-Join, which rely on different pruning techniques to achieve efficiency. For each trajectory, the algorithms first find similar trajectories; then they merge the results to obtain the final result. The algorithms for the two joins exploit different upper and lower bounds on the spatiotemporal trajectory similarity and different heuristic scheduling strategies for search space pruning. Their per-trajectory searches are independent of each other and can be performed in parallel, and the merge steps have constant cost. An empirical study with real data offers insight into the performance of the algorithms and demonstrates that they are capable of outperforming well-designed baseline algorithms by an order of magnitude.

  3. Are calanco landforms similar to river basins?

    Science.gov (United States)

    Caraballo-Arias, N A; Ferro, V

    2017-12-15

    In the past, badlands have often been considered ideal field laboratories for studying landscape evolution because of their geometrical similarity to larger fluvial systems. For a given hydrological process, however, no scientific proof exists that badlands can be considered models of river basin prototypes. In this paper the measurements carried out on 45 Sicilian calanchi, a type of badland that appears as a small-scale hydrographic unit, are used to establish their morphological similarity with river systems whose data are available in the literature. At first the geomorphological similarity is studied by identifying the dimensionless groups, representing drainage basin shape, stream network, and relief properties, which can assume the same value or a scaled one in a fixed ratio. Then, for each property, the dimensionless groups are calculated for the investigated calanchi and the river basins, and their corresponding scale ratio is evaluated. The applicability of Hack's, Horton's, and Melton's laws for establishing similarity criteria is also tested. The developed analysis allows us to conclude that a quantitative morphological similarity between calanco landforms and river basins can be established using commonly applied dimensionless groups. In particular, the analysis showed that i) calanchi and river basins have geometrically similar shapes with respect to the parameters Rf and Re, with a scale factor close to 1; ii) calanchi and river basins are similar with respect to the bifurcation and length ratios (λ=1); iii) for the investigated calanchi the Melton number assumes values less than that (0.694) corresponding to the river case, and a scale ratio ranging from 0.52 to 0.78 can be used; iv) calanchi and river basins have similar mean relief ratio values (λ=1.13); and v) calanchi present active geomorphic processes and therefore fall in a more juvenile stage with respect to river basins. Copyright © 2017 Elsevier B.V. All rights reserved.
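The commonly applied dimensionless groups mentioned above are straightforward to compute from basic basin measurements. The definitions below follow the standard Horton, Schumm, and Melton formulations; the sample values are illustrative, not the paper's measured data.

```python
import math

# Dimensionless morphometric groups used to compare calanchi with
# river basins. Illustrative inputs, not measured data.

def form_factor(area, basin_length):
    """Horton's form factor Rf = A / L^2."""
    return area / basin_length**2

def elongation_ratio(area, basin_length):
    """Schumm's elongation ratio Re: diameter of the circle with the
    same area as the basin, divided by the basin length."""
    return 2.0 * math.sqrt(area / math.pi) / basin_length

def melton_number(relief, area):
    """Melton's ruggedness number: relief / sqrt(area)."""
    return relief / math.sqrt(area)

def bifurcation_ratios(stream_counts):
    """Horton's bifurcation ratios N_u / N_(u+1) for successive
    stream orders, given counts ordered from lowest order up."""
    return [
        stream_counts[u] / stream_counts[u + 1]
        for u in range(len(stream_counts) - 1)
    ]

# Illustrative basin: area 2.0 km^2, length 2.5 km, relief 0.12 km,
# stream counts by order [28, 7, 2, 1].
print(round(form_factor(2.0, 2.5), 3))       # prints 0.32
print(round(elongation_ratio(2.0, 2.5), 3))  # prints 0.638
print(round(melton_number(0.12, 2.0), 3))    # prints 0.085
print(bifurcation_ratios([28, 7, 2, 1]))     # prints [4.0, 3.5, 2.0]
```

Because each group is dimensionless, values computed for a small calanco and a large river basin can be compared directly, which is what makes scale ratios such as λ=1 meaningful.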

  4. COMPOSITIONAL SIMILARITIES AND DISTINCTIONS BETWEEN TITAN’S EVAPORITIC TERRAINS

    Energy Technology Data Exchange (ETDEWEB)

    MacKenzie, S. M.; Barnes, Jason W., E-mail: mack3108@vandals.uidaho.edu [Department of Physics, University of Idaho, Moscow, ID 83844-0903 (United States)

    2016-04-10

    We document the similarities in composition between the equatorial basins Tui Regio, Hotei Regio, and other 5-μm-bright materials, notably the north polar evaporites, by investigating the presence and extent of an absorption feature at 4.92 μm. In most observations, Woytchugga Lacuna, Ontario Lacus, MacKay Lacus, deposits near Fensal, some of the lakes and dry lake beds south of Ligeia, and the southern shores of Kraken Mare share the absorption feature at 4.92 μm observed in the spectra of Tui and Hotei. Apart from Woytchugga and the deposits near Fensal, these 5-μm-bright deposits are geomorphologically substantiated evaporites. Thus, the similarity in composition strengthens the hypothesis that Tui and Hotei once contained liquid. Other evaporite deposits, however, do not show the 4.92 μm absorption, notably Muggel Lacus and the shores of Ligeia Mare at the north pole. This difference in composition suggests that there is more than one kind of soluble material in Titan’s lakes that can create evaporite and/or that the surface properties at the Visual and Infrared Mapping Spectrometer wavelength scale (crystal size, abundance, etc.) are not uniform between the different deposits. Our results indicate that the surface structure, composition, and formation history of Titan’s evaporites may be at least as dynamic and complex as those of their Earth counterparts.

  5. The Value Capture of Brand Leaders by Own Brands. An Exploratory Study on Packaging Similarity

    Directory of Open Access Journals (Sweden)

    Lívia Rufino Bambuy

    2014-12-01

    Full Text Available Retail own brands are growing, even with little investment in promotion. In this context, packaging can be a fundamental element in communicating these offers to the consumer. Starting from the hypothesis that own brands adopt packaging strategies similar to those of the leading brands, aiming to capture brand equity, an offer set of 13 categories gathered from 3 retail chains that offer own brands was compared and assigned similarity scores. Content analysis identified a high degree of similarity when comparing own-brand packaging to the leading brands of each category.

  6. An omnibus test for the global null hypothesis.

    Science.gov (United States)

    Futschik, Andreas; Taus, Thomas; Zehetmayer, Sonja

    2018-01-01

    Global hypothesis tests are a useful tool in the context of clinical trials, genetic studies, or meta-analyses, when researchers are not interested in testing individual hypotheses but in testing whether none of the hypotheses is false. There are several ways to test the global null hypothesis when the individual null hypotheses are independent. If it is assumed that many of the individual null hypotheses are false, combination tests have been recommended to maximize power. If, however, it is assumed that only one or a few null hypotheses are false, global tests based on individual test statistics are more powerful (e.g., the Bonferroni or Simes test). However, usually there is no a priori knowledge of the number of false individual null hypotheses. We therefore propose an omnibus test based on cumulative sums of the transformed p-values. We show that this test yields an impressive overall performance. The proposed method is implemented in an R-package called omnibus.

  7. Discrete causal theory emergent spacetime and the causal metric hypothesis

    CERN Document Server

    Dribus, Benjamin F

    2017-01-01

    This book evaluates and suggests potentially critical improvements to causal set theory, one of the best-motivated approaches to the outstanding problems of fundamental physics. Spacetime structure is of central importance to physics beyond general relativity and the standard model. The causal metric hypothesis treats causal relations as the basis of this structure. The book develops the consequences of this hypothesis under the assumption of a fundamental scale, with smooth spacetime geometry viewed as emergent. This approach resembles causal set theory, but differs in important ways; for example, the relative viewpoint, emphasizing relations between pairs of events, and relationships between pairs of histories, is central. The book culminates in a dynamical law for quantum spacetime, derived via generalized path summation.

  8. A test of the domain-specific acculturation strategy hypothesis.

    Science.gov (United States)

    Miller, Matthew J; Yang, Minji; Lim, Robert H; Hui, Kayi; Choi, Na-Yeun; Fan, Xiaoyan; Lin, Li-Ling; Grome, Rebekah E; Farrell, Jerome A; Blackmon, Sha'kema

    2013-01-01

    Acculturation literature has evolved over the past several decades and has highlighted the dynamic ways in which individuals negotiate experiences in multiple cultural contexts. The present study extends this literature by testing M. J. Miller and R. H. Lim's (2010) domain-specific acculturation strategy hypothesis--that individuals might use different acculturation strategies (i.e., assimilated, bicultural, separated, and marginalized strategies; J. W. Berry, 2003) across behavioral and values domains--in 3 independent cluster analyses with Asian American participants. Present findings supported the domain-specific acculturation strategy hypothesis, as 67% to 72% of participants from the 3 independent samples used different strategies across behavioral and values domains. Consistent with theory, a number of acculturation strategy cluster group differences emerged across generational status, acculturative stress, mental health symptoms, and attitudes toward seeking professional psychological help. Study limitations and future directions for research are discussed.

  9. Teen Fertility and Gender Inequality in Education: A Contextual Hypothesis

    Directory of Open Access Journals (Sweden)

    C. Shannon Stokes

    2004-12-01

    Full Text Available Previous studies in developed countries have found a micro-level association between teenage fertility and girls' educational attainment but researchers still debate the policy implications of these associations. First, are these associations causal? Second, are they substantively important enough, at the macro-level, to warrant policy attention? In other words, how much would policy efforts to reduce unintended pregnancy among teens pay off in terms of narrowing national gender gaps in educational attainment? Third, under what contexts are these payoffs likely to be important? This paper focuses on the latter two questions. We begin by proposing a contextual hypothesis to explain cross-national variation in the gender-equity payoffs from reducing unintended teen fertility. We then test this hypothesis, using DHS data from 38 countries.

  10. The functional matrix hypothesis revisited. 3. The genomic thesis.

    Science.gov (United States)

    Moss, M L

    1997-09-01

    Although the initial versions of the functional matrix hypothesis (FMH) theoretically posited the ontogenetic primacy of "function," it is only in recent years that advances in the morphogenetic, engineering, and computer sciences provided an integrated experimental and numerical database that permitted recent significant revisions of the FMH--revisions that strongly support the primary role of function in craniofacial growth and development. Acknowledging that the currently dominant scientific paradigm suggests that genomic, rather than epigenetic (functional), factors regulate (cause, control) such growth, an analysis of this continuing controversy was deemed useful. Accordingly, the method of dialectical analysis is employed, stating a thesis, an antithesis, and a resolving synthesis based primarily on an extensive review of the pertinent current literature. This article extensively reviews the genomic hypothesis and offers a critique intended to remove some of the unintentional conceptual obscurantism that has recently come to surround it.

  11. Amyloid cascade hypothesis: Pathogenesis and therapeutic strategies in Alzheimer's disease.

    Science.gov (United States)

    Barage, Sagar H; Sonawane, Kailas D

    2015-08-01

    Alzheimer's disease is an irreversible, progressive neurodegenerative disorder. Various therapeutic approaches are being used to improve cholinergic neurotransmission, but their role in AD pathogenesis is still unknown. Although an increase in tau protein concentration in CSF has been described in AD, several issues remain unclear. Extensive and accurate analysis of CSF could be helpful to define the presence of tau proteins under physiological conditions, or those released during the progression of neurodegenerative disease. The amyloid cascade hypothesis postulates that the neurodegeneration in AD is caused by abnormal accumulation of amyloid beta (Aβ) plaques in various areas of the brain. The amyloid hypothesis has continued to gain support over the last two decades, particularly from genetic studies. Therefore, current research progress in several areas of therapy should provide an effective treatment to cure this devastating disease. This review critically evaluates general biochemical and physiological functions of Aβ-directed therapeutics and their relevance. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Neuromuscular deficits after peripheral joint injury: a neurophysiological hypothesis.

    Science.gov (United States)

    Ward, Sarah; Pearce, Alan J; Pietrosimone, Brian; Bennell, Kim; Clark, Ross; Bryant, Adam L

    2015-03-01

    In addition to biomechanical disturbances, peripheral joint injuries (PJIs) can also result in chronic neuromuscular alterations due in part to loss of mechanoreceptor-mediated afferent feedback. An emerging perspective is that PJI should be viewed as a neurophysiological dysfunction, not simply a local injury. Neurophysiological and neuroimaging studies have provided some evidence for central nervous system (CNS) reorganization at both the cortical and spinal levels after PJI. The novel hypothesis proposed is that CNS reorganization is the underlying mechanism for persisting neuromuscular deficits after injury, particularly muscle weakness. There is a lack of direct evidence to support this hypothesis, but future studies utilizing force-matching tasks with superimposed transcranial magnetic stimulation may help clarify this notion. © 2014 Wiley Periodicals, Inc.

  13. The estrogen hypothesis of schizophrenia implicates glucose metabolism

    DEFF Research Database (Denmark)

    Olsen, Line; Hansen, Thomas; Jakobsen, Klaus D

    2008-01-01

    expression studies have indicated an equally large set of candidate genes that only partially overlap linkage genes. A thorough assessment, beyond the resolution of current GWA studies, of the disease risk conferred by the numerous schizophrenia candidate genes is a daunting and presently not feasible task. … We undertook these challenges by using an established clinical paradigm, the estrogen hypothesis of schizophrenia, as the criterion to select candidates among the numerous genes experimentally implicated in schizophrenia. Bioinformatic tools were used to build and prioritize the signaling networks … implicated by the candidate genes resulting from the estrogen selection. We identified ten candidate genes using this approach that are all active in glucose metabolism and particularly in glycolysis. Thus, we tested the hypothesis that variants of the glycolytic genes are associated with schizophrenia …

  14. Energy prices, multiple structural breaks, and efficient market hypothesis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chien-Chiang; Lee, Jun-De [Department of Applied Economics, National Chung Hsing University, Taichung (China)

    2009-04-15

    This paper investigates the efficient market hypothesis using the total energy price and four disaggregated energy prices - coal, oil, gas, and electricity - for OECD countries over the period 1978-2006. We employ the highly flexible panel data stationarity test of Carrion-i-Silvestre et al. [Carrion-i-Silvestre JL, Del Barrio-Castro T, Lopez-Bazo E. Breaking the panels: an application to GDP per capita. J Econometrics 2005;8:159-75], which incorporates multiple shifts in level and slope and controls for cross-sectional dependence through bootstrap methods. Overwhelming evidence in favor of the broken stationarity hypothesis is found, implying that energy prices are not characterized by an efficient market. Thus, it shows the presence of profitable arbitrage opportunities among energy prices. The estimated breaks are meaningful and coincide with the most critical events that affected the energy prices. (author)

  15. Energy prices, multiple structural breaks, and efficient market hypothesis

    International Nuclear Information System (INIS)

    Lee, Chien-Chiang; Lee, Jun-De

    2009-01-01

    This paper investigates the efficient market hypothesis using the total energy price and four disaggregated energy prices - coal, oil, gas, and electricity - for OECD countries over the period 1978-2006. We employ the highly flexible panel data stationarity test of Carrion-i-Silvestre et al. [Carrion-i-Silvestre JL, Del Barrio-Castro T, Lopez-Bazo E. Breaking the panels: an application to GDP per capita. J Econometrics 2005;8:159-75], which incorporates multiple shifts in level and slope and controls for cross-sectional dependence through bootstrap methods. Overwhelming evidence in favor of the broken stationarity hypothesis is found, implying that energy prices are not characterized by an efficient market. Thus, it shows the presence of profitable arbitrage opportunities among energy prices. The estimated breaks are meaningful and coincide with the most critical events that affected the energy prices. (author)

  16. The hubris hypothesis: The downside of comparative optimism displays.

    Science.gov (United States)

    Hoorens, Vera; Van Damme, Carolien; Helweg-Larsen, Marie; Sedikides, Constantine

    2017-04-01

    According to the hubris hypothesis, observers respond more unfavorably to individuals who express their positive self-views comparatively than to those who express their positive self-views non-comparatively, because observers infer that the former hold a more disparaging view of others and particularly of observers. Two experiments extended the hubris hypothesis in the domain of optimism. Observers attributed less warmth (but not less competence) to, and showed less interest in affiliating with, an individual displaying comparative optimism (the belief that one's future will be better than others' future) than with an individual displaying absolute optimism (the belief that one's future will be good). Observers responded differently to individuals displaying comparative versus absolute optimism, because they inferred that the former held a gloomier view of the observers' future. Consistent with previous research, observers still attributed more positive traits to a comparative or absolute optimist than to a comparative or absolute pessimist. Copyright © 2016. Published by Elsevier Inc.

  17. Towards the proof of the cosmic censorship hypothesis

    Energy Technology Data Exchange (ETDEWEB)

    Krolak, Andrzej

    1986-05-01

    An attempt is made to formulate the cosmic censorship hypothesis put forward by Penrose (1969, Riv. Nuovo Cimento Ser. 1 Num. Spec. 1 252) as a theorem which could be subject to mathematical proof. It is proved that a weakly asymptotically simple and empty spacetime must be future asymptotically predictable if the energy and the strong causality conditions hold and either all singularities are of Tipler's strong curvature type and, once a singularity occurs, there exists a marginally outgoing null geodesic, or each singularity is preceded by the occurrence of a closed trapped surface. The marginally outgoing null geodesics may not be admitted by general naked singularities. However, it is shown that they occur if on the Cauchy horizon the global hyperbolicity is violated in such a way that causal simplicity does not hold. This means that a wide class of nakedly singular spacetimes is considered. This result gives some support to the validity of Penrose's hypothesis.

  18. Hyman Minsky's financial instability hypothesis and the Greek debt crisis

    Directory of Open Access Journals (Sweden)

    Sergey Beshenov

    2015-12-01

    Full Text Available This article attempts to analyze the current debt crisis in Greece based on the financial instability hypothesis developed by Hyman Minsky. This article shows that the hypothesis provides an understanding of how an economy endogenously becomes “financially fragile” and thus prone to crises. The authors analyze how public and private sector behavior in the Greek economy led to the country's debt crisis. In particular, based on a sample of 36 Greek companies, the authors show that between 2001 and 2014, the majority of those companies had switched to fragile financial structures. Special attention is devoted to the negative consequences of applying the neoclassical doctrine of “austerity measures” in Greece as the principal “anti-crisis” concept of mainstream economic science.

  19. Isotopic Resonance Hypothesis: Experimental Verification by Escherichia coli Growth Measurements

    Science.gov (United States)

    Xie, Xueshu; Zubarev, Roman A.

    2015-03-01

    Isotopic composition of reactants affects the rates of chemical and biochemical reactions. As a rule, enrichment of heavy stable isotopes leads to progressively slower reactions. But the recent isotopic resonance hypothesis suggests that the dependence of the reaction rate upon the enrichment degree is not monotonic. Instead, at some "resonance" isotopic compositions the kinetics is enhanced, while at "off-resonance" compositions the same reactions progress more slowly. To test the predictions of this hypothesis for the elements C, H, N, and O, we designed a precise (standard error ±0.05%) experiment that measures the parameters of bacterial growth in minimal media with varying isotopic composition. A number of predicted resonance conditions were tested, with significant enhancements in kinetics discovered at these conditions. The combined statistics very strongly supports the validity of the isotopic resonance phenomenon, with potential implications for biotechnology, medicine, chemistry, and other areas.

  20. Towards the proof of the cosmic censorship hypothesis

    International Nuclear Information System (INIS)

    Krolak, Andrzej

    1986-01-01

    An attempt is made to formulate the cosmic censorship hypothesis put forward by Penrose [1969, Riv. Nuovo Cimento Ser. 1 Num. Spec. 1 252] as a theorem which could be subject to mathematical proof. It is proved that a weakly asymptotically simple and empty spacetime must be future asymptotically predictable if the energy and the strong causality conditions hold and either all singularities are of Tipler's strong curvature type and, once a singularity occurs, there exists a marginally outgoing null geodesic, or each singularity is preceded by the occurrence of a closed trapped surface. The marginally outgoing null geodesics may not be admitted by general naked singularities. However, it is shown that they occur if on the Cauchy horizon the global hyperbolicity is violated in such a way that causal simplicity does not hold. This means that a wide class of nakedly singular spacetimes is considered. This result gives some support to the validity of Penrose's hypothesis. (author)

  1. Hypothesis testing in students: Sequences, stages, and instructional strategies

    Science.gov (United States)

    Moshman, David; Thompson, Pat A.

    Six sequences in the development of hypothesis-testing conceptions are proposed, involving (a) interpretation of the hypothesis; (b) the distinction between using theories and testing theories; (c) the consideration of multiple possibilities; (d) the relation of theory and data; (e) the nature of verification and falsification; and (f) the relation of truth and falsity. An alternative account is then provided involving three global stages: concrete operations, formal operations, and a postformal metaconstructive stage. Relative advantages and difficulties of the stage and sequence conceptualizations are discussed. Finally, three families of teaching strategy are distinguished, which emphasize, respectively: (a) social transmission of knowledge; (b) carefully sequenced empirical experience by the student; and (c) self-regulated cognitive activity of the student. It is argued on the basis of Piaget's theory that the last of these plays a crucial role in the construction of such logical reasoning strategies as those involved in testing hypotheses.

  2. Why is muscularity sexy? Tests of the fitness indicator hypothesis.

    Science.gov (United States)

    Frederick, David A; Haselton, Martie G

    2007-08-01

    Evolutionary scientists propose that exaggerated secondary sexual characteristics are cues of genes that increase offspring viability or reproductive success. In six studies the hypothesis that muscularity is one such cue is tested. As predicted, women rate muscular men as sexier, more physically dominant and volatile, and less committed to their mates than nonmuscular men. Consistent with the inverted-U hypothesis of masculine traits, men with moderate muscularity are rated most attractive. Consistent with past research on fitness cues, across two measures, women indicate that their most recent short-term sex partners were more muscular than their other sex partners (ds = .36, .47). Across three studies, when controlling for other characteristics (e.g., body fat), muscular men rate their bodies as sexier to women (partial rs = .49-.62) and report more lifetime sex partners (partial rs = .20-.27), short-term partners (partial rs = .25-.28), and more affairs with mated women (partial r = .28).

  3. Hypothesis driven development of new adjuvants: short peptides as immunomodulators.

    Science.gov (United States)

    Dong, Jessica C; Kobinger, Gary P

    2013-04-01

    To date, vaccinations have been one of the key strategies in the prevention of and protection against infectious pathogens. Traditional vaccines have well-known limitations, such as safety and efficacy issues, which make them inappropriate for particular populations, and they may not be an effective strategy against all pathogens. This evidence highlights the need to develop more efficacious vaccination regimens. Higher levels of protection can be achieved by the addition of immunostimulating adjuvants. Many adjuvants elicit strong, undefined inflammation, which produces increased immunogenicity but may also lead to undesirable effects. Hypothesis driven development of adjuvants is needed to achieve the more specific and directed immune response required for optimal and safe vaccine-induced immune protection. An example of such hypothesis driven development is the use of short immunomodulating peptides as adjuvants. These peptides have the ability to influence the immune response and can be extrapolated for adjuvant use, but they require further investigation.

  4. Glaucoma--diabetes of the brain: a radical hypothesis about its nature and pathogenesis.

    Science.gov (United States)

    Faiq, Muneeb A; Dada, Rima; Saluja, Daman; Dada, Tanuj

    2014-05-01

    Glaucoma is the leading cause of irreversible blindness, characterized by irremediable loss of retinal ganglion cells. Its risk increases with progressing age and elevated intraocular pressure. Studies have established that glaucoma is a neurodegenerative disorder in which the damage involves many brain tissues from the retina to the lateral geniculate nucleus. Despite much research, the complete pathomechanism of glaucoma is not known, and there is no treatment available except modification of intraocular pressure pharmacologically and/or surgically. We here present a hypothesis, inspired by studies across many areas of the molecular and clinical sciences in an integrative manner, that leads to a uniquely unconventional understanding of this disorder. Our hypothesis postulates that glaucoma may possibly be the diabetes of the brain. Based on the remarkable similarities between glaucoma and diabetes, we propose that glaucoma is also a type of diabetes. Glaucoma and diabetes share many aspects, from various molecular mechanisms to the involvement of insulin and the possible use of antidiabetics in glaucoma therapy. Additionally, Alzheimer's disease has already been proposed to be diabetes type-3. We show that Alzheimer's disease is cerebral glaucoma and diabetes at the same time, which, by the transitive property of similarities, again leads to our hypothesis that glaucoma is diabetes of the brain. Our proposition may lead to appreciation of certain important facets of glaucoma which have previously not been given due consideration. It may also lead to an alternative classification of diabetes as pancreatic and brain diabetes, thereby widening the vision arena of the understanding of both these disorders. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Testing the Binary Hypothesis: Pulsar Timing Constraints on Supermassive Black Hole Binary Candidates

    Science.gov (United States)

    Sesana, Alberto; Haiman, Zoltán; Kocsis, Bence; Kelley, Luke Zoltan

    2018-03-01

    The advent of time-domain astronomy is revolutionizing our understanding of the universe. Programs such as the Catalina Real-time Transient Survey (CRTS) or the Palomar Transient Factory (PTF) surveyed millions of objects for several years, allowing variability studies on large statistical samples. The inspection of ≈250k quasars in CRTS resulted in a catalog of 111 potentially periodic sources, put forward as supermassive black hole binary (SMBHB) candidates. A similar investigation of PTF data yielded 33 candidates from a sample of ≈35k quasars. Working under the SMBHB hypothesis, we compute the implied SMBHB merger rate and use it to construct the expected gravitational wave background (GWB) at nanohertz frequencies, probed by pulsar timing arrays (PTAs). After correcting for incompleteness and assuming virial mass estimates, we find that the GWB implied by the CRTS sample exceeds the current most stringent PTA upper limits by almost an order of magnitude. After further correcting for the implicit bias in virial mass measurements, the implied GWB drops significantly but is still in tension with the most stringent PTA upper limits. Similar results hold for the PTF sample. Bayesian model selection shows that the null hypothesis (whereby the candidates are false positives) is preferred over the binary hypothesis at about 2.3σ and 3.6σ for the CRTS and PTF samples, respectively. Although not decisive, our analysis highlights the potential of PTAs as astrophysical probes of individual SMBHB candidates and indicates that the CRTS and PTF samples are likely contaminated by several false positives.

  6. SU(7) GUT and evasion of the survival hypothesis

    International Nuclear Information System (INIS)

    Umemura, I.; Yamamoto, K.

    1981-01-01

    Characteristic features of an SU(7) GUT are discussed, in which the fundamental representation 7 consists of SU(5)'s 5 and its two singlets with charge q = ±1/2. The so-called survival hypothesis for fermions is naturally evaded by a kind of electric charge conservation due to q = ±1/2, and a brief comment on the suppression of the ν_e mass is also given. (orig.)

  7. Statefinder diagnostic for cosmology with the abnormally weighting energy hypothesis

    International Nuclear Information System (INIS)

    Liu Daojun; Liu Weizhong

    2008-01-01

    In this paper, we apply the statefinder diagnostic to cosmology with the abnormally weighting energy hypothesis (AWE cosmology), in which dark energy in the observational (ordinary matter) frame results from the violation of the weak equivalence principle by pressureless matter. It is found that there exist closed loops in the statefinder plane, which is an interesting characteristic of the evolution trajectories of the statefinder parameters and can be used to distinguish AWE cosmology from other cosmological models.

  8. Milton Friedman and the Emergence of the Permanent Income Hypothesis

    OpenAIRE

    Hsiang-Ke Chao

    2001-01-01

    The purpose of this paper is to investigate the evolution of Milton Friedman's permanent income hypothesis from the 1940s to 1960s, and how it became the paradigm of modern consumption theory. Modelling unobservables, such as permanent income and permanent consumption, is a long-standing issue in economics and econometrics. While the conventional approach has been to set an empirical model to make "permanent income" measurable, the historical change in the meaning of that theoretical construct is al...

  9. Aquagenic keratoderma. Two new case reports and a new hypothesis

    Directory of Open Access Journals (Sweden)

    Georgi Tchernev

    2014-01-01

    Aquagenic keratoderma has been described as a transient condition affecting predominantly young females and defined clinically by the appearance of palmar hyper-wrinkling accentuated after immersion in water. We present two new cases of aquagenic palmoplantar acrokeratoderma: a child and a young male. A significant clinical improvement was achieved after topical treatment with aluminum salts. Aquagenic palmar keratoderma may be a clue to cystic fibrosis in adolescents and young adults. We developed a new hypothesis on its pathogenesis.

  10. TESTING THE EFFICIENT MARKET HYPOTHESIS ON THE ROMANIAN CAPITAL MARKET

    OpenAIRE

    Daniel Stefan ARMEANU; Sorin-Iulian CIOACA

    2014-01-01

    The Efficient Market Hypothesis (EMH) is one of the leading financial concepts that has dominated economic research over the last 50 years, being one of the pillars of modern economic science. This theory, developed by Eugene Fama in the '70s, was a landmark in the development of theoretical concepts and models trying to explain the price evolution of financial assets (considering the common assumptions of the main developed theories) and also for the development of some branches in the f...

  11. Visual perception and imagery: a new molecular hypothesis.

    Science.gov (United States)

    Bókkon, I

    2009-05-01

    Here, we put forward a redox molecular hypothesis about the natural biophysical substrate of visual perception and visual imagery. This hypothesis is based on the redox and bioluminescent processes of neuronal cells in retinotopically organized, cytochrome oxidase-rich visual areas. Our hypothesis is in line with the functional roles of reactive oxygen and nitrogen species in living cells, which are not part of a haphazard process but rather of strictly regulated signaling pathways. We point out that there is a direct relationship between neuronal activity and the biophoton emission process in the brain. Electrical and biochemical processes in the brain represent sensory information from the external world. During encoding or retrieval of information, electrical signals of neurons can be converted into synchronized biophoton signals by bioluminescent radical and non-radical processes. Therefore, information in the brain appears not only as an electrical (chemical) signal but also as a regulated biophoton (weak optical) signal inside neurons. During visual perception, the topological distribution of photon stimuli on the retina is represented by electrical neuronal activity in retinotopically organized visual areas. These retinotopic electrical signals in visual neurons can be converted into synchronized biophoton signals by radical and non-radical processes in retinotopically organized mitochondria-rich areas. As a result, regulated bioluminescent biophotons can create intrinsic pictures (depictive representations) in retinotopically organized cytochrome oxidase-rich visual areas during visual imagery and visual perception. Long-term visual memory is interpreted as epigenetic information regulated by free radicals and redox processes. This hypothesis does not claim to solve the secret of consciousness, but proposes that the evolution of higher levels of complexity made the intrinsic picture representation of the external visual world possible by regulated

  12. TECHNICAL ANALYSIS OF EFFICIENT MARKET HYPOTHESIS IN A FRONTIER MARKET

    OpenAIRE

    MOBEEN Ur Rehman; WAQAS Bin Khidmat

    2013-01-01

    This paper focuses on identifying the major financial indicators or ratios that play a crucial role in determining the prices of securities. Examining the volatility of security prices in light of companies' previous performance will also help in understanding the applicability of the efficient market hypothesis in our emerging financial market. The scope of this paper is to investigate the weak form of market efficiency in the Karachi Stock Exchange. This paper will help the investo...

  13. Mirror neurons, birdsong and human language: a hypothesis

    OpenAIRE

    Florence Levy

    2012-01-01

    The Mirror System Hypothesis (MSH) and investigations of birdsong are reviewed in relation to their significance for the development of human symbolic and language capacity, in terms of three fundamental forms of cognitive reference: iconic, indexical, and symbolic. Mirror systems are initially iconic but can progress to indexical reference when produced without the need for concurrent stimuli. Developmental stages in birdsong are also explored with reference to juvenile subsong vs comple...

  14. Yawning, fatigue and cortisol: expanding the Thompson Cortisol Hypothesis.

    OpenAIRE

    Thompson, Simon

    2014-01-01

    Yawning and its involvement in neurological disorders has become a new scientific conundrum. Cortisol levels are known to rise during stress and fatigue; yawning may occur when we are under stress or tired. However, the link between yawning, fatigue, and cortisol is not fully understood. Expansion of the Thompson Cortisol Hypothesis proposes that the stress hormone cortisol is responsible for yawning and fatigue, especially in people with incomplete innervation such as multiple sclero...

  15. Strongly trapped points and the cosmic censorship hypothesis

    International Nuclear Information System (INIS)

    Krolak, A.

    1987-01-01

    It is shown that singularities predicted by one of the theorems of Hawking cannot be naked. This result supports the validity of the cosmic censorship hypothesis put forward by Penrose. The condition that only singularities predicted by Hawking's singularity theorem occur in space-time is shown to be related to the condition that all singularities in space-time should be of Tipler's strong-curvature type.

  16. Why Does REM Sleep Occur? A Wake-up Hypothesis

    OpenAIRE

    Dr. W. R. Klemm

    2011-01-01

    Brain activity differs in the various sleep stages and in conscious wakefulness. Awakening from sleep requires restoration of the complex nerve impulse patterns in neuronal network assemblies necessary to re-create and sustain conscious wakefulness. Herein I propose that the brain uses REM to help wake itself up after it has had a sufficient amount of sleep. Evidence suggesting this hypothesis includes the facts that, 1) when first going to sleep, the brain plunges into Stage N3 (formerly ca...

  17. Biological fingerprint of high LET radiation. Brenner hypothesis

    International Nuclear Information System (INIS)

    Kodama, Yoshiaki; Awa, Akio; Nakamura, Nori

    1997-01-01

    The hypothesis of Brenner et al. (1994), that among radiation-induced chromosome aberrations in human peripheral lymphocytes the F value (the ratio of dicentrics to rings) depends on the LET and can therefore serve as a biomarker of high-LET radiation such as neutrons and α-rays, is reviewed and evaluated. Radiation and chromosome aberrations: this section describes unstable aberrations such as dicentrics and rings (r) and stable ones such as translocations and pericentric inversions. Further sections cover the F value; the Brenner hypothesis; Bauchinger's refutation; the F value determined by FISH (fluorescence in situ hybridization); F values in studies by the authors' Radiation Effects Research facility; the frequency of chromosome aberrations in A-bomb survivors and ESR (electron spin resonance); and the causes of fluctuation in F values. The Brenner hypothesis was not supported by the authors' studies, suggesting that the rates of inter-chromosomal and intra-chromosomal exchange aberrations cannot be distinguished by radiation LET. The discrepancy may derive from differences in the technology for detecting rings rather than in LET. (K.H.)

  18. A test of the substitution-habitat hypothesis in amphibians.

    Science.gov (United States)

    Martínez-Abraín, Alejandro; Galán, Pedro

    2017-12-08

    Most examples that support the substitution-habitat hypothesis (human-made habitats act as substitutes for original habitat) deal with birds and mammals. We tested this hypothesis in 14 amphibians by using percentage occupancy as a proxy of habitat quality (i.e., higher occupancy percentages indicate higher quality). We classified water body types as original habitat (no or little human influence), substitution habitat, or refuge habitat, depending on the anatomical, behavioral, or physiological adaptations of each amphibian species. Ten species had relatively high probabilities (0.16-0.28) of occurrence in original habitat, moderate probability of occurrence in substitution habitats (0.11-0.14), and low probability of occurrence in refuge habitats (0.05-0.08). Thus, the substitution-habitat hypothesis only partially applies to amphibians because the low occupancy of refuges could be due to the negligible human persecution of this group (indicating good conservation status). However, low occupancy of refuges could also be due to low tolerance of refuge conditions, which could have led to selective extinction or colonization problems due to poor dispersal capabilities. That original habitats had the highest probabilities of occupancy suggests amphibians have a good conservation status in the region. They also appeared highly adaptable to anthropogenic substitution habitats. © 2017 Society for Conservation Biology.

  19. Does Portuguese economy support crude oil conservation hypothesis?

    International Nuclear Information System (INIS)

    Bashiri Behmiri, Niaz; Pires Manso, José R.

    2012-01-01

    This paper examines cointegration relationships and the Granger causality nexus in a trivariate framework among oil consumption, economic growth, and the international oil price in Portugal. For this purpose, we employ two Granger causality approaches: the Johansen cointegration test with a vector error correction model (VECM), and the Toda–Yamamoto approach. The cointegration test confirms the existence of a long-run equilibrium relationship among these variables, and the VECM and Toda–Yamamoto Granger causality tests indicate that there is bidirectional causality between crude oil consumption and economic growth (the feedback hypothesis). Therefore, the Portuguese economy does not support the crude oil conservation hypothesis. Consequently, policymakers should consider that implementing oil conservation and environmental policies may negatively impact Portuguese economic growth. - Highlights: ► We examine Granger causality among oil consumption, GDP and oil price in Portugal. ► VECM and Toda–Yamamoto tests found bidirectional causality between oil and GDP. ► Portuguese economy does not support the crude oil conservation hypothesis.

  20. A test of the reward-contrast hypothesis.

    Science.gov (United States)

    Dalecki, Stefan J; Panoz-Brown, Danielle E; Crystal, Jonathon D

    2017-12-01

    Source memory, a facet of episodic memory, is the memory of the origin of information. Whereas source memory in rats is sustained for at least a week, spatial memory degrades after approximately a day. Different forgetting functions may suggest that two memory systems (source memory and spatial memory) are dissociated. However, in previous work, the two tasks used baiting conditions consisting of chocolate and chow flavors; notably, the source memory task used the relatively better flavor. Thus, according to the reward-contrast hypothesis, when chocolate and chow were presented within the same context (i.e., within a single radial maze trial), the chocolate location was more memorable than the chow location because of contrast. We tested the reward-contrast hypothesis using baiting configurations designed to produce reward contrast. The reward-contrast hypothesis predicts that under these conditions, spatial memory will survive a 24-h retention interval. We documented elimination of spatial memory performance after a 24-h retention interval using a reward-contrast baiting pattern. These data suggest that reward contrast does not explain our earlier findings that source memory survives unusually long retention intervals. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Vocabulary Acquisition and Task Effectiveness in Involvement Load Hypothesis: A case in Iran

    Directory of Open Access Journals (Sweden)

    Hassan Soleimani

    2015-09-01

    The involvement load hypothesis, a cognitive construct, states that tasks with higher involvement loads yield better vocabulary retention. This comparison-group design study examined the immediate and delayed effects of tasks with different involvement loads under the involvement load hypothesis (Laufer & Hulstijn, 2001). Applying a version of the Nelson Proficiency Test as a homogenizing exclusion criterion, 33 low-proficiency Iranian EFL learners were randomly assigned to three experimental groups: blank-filling, sentence making, and reading comprehension. The results of ANOVA and Kruskal-Wallis tests supported task-induced involvement in the immediate posttest, since the sentence making task (M=5.72) yielded better results than the blank-filling (M=5.45) and reading comprehension (M=3.18) tasks. Nevertheless, the sentence making and blank-filling tasks, whose involvement loads were similar, did not differ significantly from each other. It is inferred that tasks with closer involvement loads yield similar results in vocabulary acquisition.
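
    The group comparison above rests on a one-way ANOVA. As a minimal illustration of how the F statistic behind such a comparison is computed (the retention scores below are invented for the sketch, not the study's data):

```python
def one_way_anova_f(groups):
    """Compute the one-way ANOVA F statistic for a list of samples."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: weighted spread of group means
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of scores around their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical retention scores for the three task groups (invented numbers)
sentence_making = [6, 5, 7, 6, 5]
blank_filling = [5, 6, 5, 6, 5]
reading_comp = [3, 4, 3, 3, 4]
f_stat = one_way_anova_f([sentence_making, blank_filling, reading_comp])
```

A large F indicates that between-group variance dominates within-group variance; in practice the statistic is then compared against an F distribution with (k-1, n-k) degrees of freedom to obtain a p value.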

  2. Evidence against the energetic cost hypothesis for the short introns in highly expressed genes

    Directory of Open Access Journals (Sweden)

    Niu Deng-Ke

    2008-05-01

    Background: In animals, the moss Physcomitrella patens, and the pollen of Arabidopsis thaliana, highly expressed genes have shorter introns than weakly expressed genes. A popular explanation for this is selection for transcription efficiency, which includes two sub-hypotheses: minimizing the energetic cost or minimizing the time cost. Results: In an individual human, different organs may differ up to hundreds of times in cell number (for example, a liver versus a hypothalamus). Considered at the individual level, a gene specifically expressed in a large organ is actually transcribed tens or hundreds of times more than a gene with a similar expression level (a measure of mRNA abundance per cell) that is specifically expressed in a small organ. According to the energetic cost hypothesis, the former should have shorter introns than the latter. However, in humans and mice we have not found significant differences in intron length between large-tissue/organ-specific genes and small-tissue/organ-specific genes with similar expression levels. Qualitative estimation shows that the deleterious effect (that is, the energetic burden) of long introns in highly expressed genes is too negligible to be efficiently selected against in mammals. Conclusion: The short introns in highly expressed genes should not be attributed to energy constraints. We evaluated evidence for the time cost hypothesis and other alternatives.

  3. Social phobia and avoidant personality disorder: similar but different?

    Science.gov (United States)

    Lampe, Lisa; Sunderland, Matthew

    2015-02-01

    Avoidant personality disorder (AvPD) is regarded as a severe variant of social phobia (SP), consistent with a dimensional model. However, these conclusions are largely drawn from studies based on individuals with SP, with or without comorbid AvPD. The present study hypothesized that there are qualitative differences between AvPD and SP that are obscured by limiting research to participants with SP. The authors sought to test this hypothesis by comparing three groups (SP only, AvPD only, and SP+AvPD) using data extracted from an epidemiological sample of 10,641 adults aged 18 years and over. Screening questions were used in the epidemiological survey to identify ICD-10 personality disorders; from this the authors developed a proxy measure for DSM-IV AvPD. Axis I diagnoses, including DSM-IV SP, were identified using the Composite International Diagnostic Interview (CIDI). In this sample, the majority of those with AvPD did not also have SP: the authors found 116 persons with AvPD only, 196 with SP only, and 69 with SP+AvPD. There was little difference between any of the groups on sex, marital status, employment, education, or impairment variables. The SP+AvPD group reported more distress and comorbidity than the SP only and AvPD only groups, which did not differentiate from each other. More feared social situations were endorsed in the SP only group compared to the AvPD only group. Although the finding of few differences between SP only and AvPD only groups among the variables measured in this epidemiological survey fails to provide support for the hypothesis of qualitative differences, the finding that the AvPD only group appears more similar to the SP only group than to the SP+AvPD group also fails to provide support for the alternative continuity hypothesis. The greater distress and additional comorbidity with depression associated with SP+AvPD may be due to the additional symptom load of a second disorder rather than simply representing a more severe variant of

  4. Identifying mechanistic similarities in drug responses

    KAUST Repository

    Zhao, C.

    2012-05-15

    Motivation: In early drug development, it would be beneficial to be able to identify those dynamic patterns of gene response that indicate that drugs targeting a particular gene will be likely or not to elicit the desired response. One approach would be to quantitate the degree of similarity between the responses that cells show when exposed to drugs, so that consistencies in the regulation of cellular response processes that produce success or failure can be more readily identified. Results: We track drug response using fluorescent proteins as transcription activity reporters. Our basic assumption is that drugs inducing very similar alterations in transcriptional regulation will produce similar temporal trajectories on many of the reporter proteins and hence be identified as having similarities in their mechanisms of action (MOA). The main body of this work is devoted to characterizing similarity in temporal trajectories/signals. To do so, we must first identify the key points that determine mechanistic similarity between two drug responses. Directly comparing points on the two signals is unrealistic, as it cannot handle delays and speed variations on the time axis. Hence, to capture the similarities between reporter responses, we develop an alignment algorithm that is robust to noise and time delays and is able to find all the contiguous parts of signals centered about a core alignment (reflecting a core mechanism in drug response). Applying the proposed algorithm to a range of real drug experiments shows that the results agree well with prior knowledge of drug MOA. © The Author 2012. Published by Oxford University Press. All rights reserved.
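
    The alignment problem described here (comparing temporal trajectories while tolerating delays and speed variations on the time axis) is the problem that dynamic time warping (DTW) addresses. The sketch below is a generic DTW distance shown as a stand-in for the paper's more specialized core-alignment algorithm, not a reimplementation of it; the toy trajectories are invented:

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences,
    robust to time delays and local speed variations."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = minimal cumulative cost aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Allow match, or stretching either sequence by one step
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Two reporter trajectories with the same shape but a time shift (toy data):
slow = [0, 0, 1, 2, 3, 2, 1, 0]
fast = [0, 1, 2, 3, 2, 1, 0, 0]
```

Because DTW may stretch segments of either signal, the shifted pair above aligns with zero cost, whereas a pointwise (Euclidean) comparison would report a large difference.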

  5. Semantic similarity between ontologies at different scales

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Qingpeng; Haglin, David J.

    2016-04-01

    In the past decade, existing and new knowledge and datasets have been encoded in different ontologies for semantic web and biomedical research. The size of ontologies is often very large in terms of the number of concepts and relationships, which makes the analysis of ontologies and the represented knowledge graph computationally expensive and time-consuming. As the ontologies of various semantic web and biomedical applications usually show explicit hierarchical structures, it is interesting to explore the trade-offs between ontological scales and the preservation/precision of results when analyzing ontologies. This paper presents a first effort to examine this idea by studying the relationship between scaling biomedical ontologies at different levels and the resulting semantic similarity values. We evaluate the semantic similarity between three Gene Ontology slims (Plant, Yeast, and Candida, of which the latter two belong to the same kingdom, Fungi) using four popular measures commonly applied to biomedical ontologies (Resnik, Lin, Jiang-Conrath, and SimRel). The results of this study demonstrate that with proper selection of scaling levels and similarity measures, we can significantly reduce the size of ontologies without losing substantial detail. In particular, the performance of Jiang-Conrath and Lin is more reliable and stable than that of the other two in this experiment, as shown by (a) consistently indicating that Yeast and Candida are more similar to each other than to Plant at different scales, and (b) small deviations of the similarity values after excluding a majority of nodes from several lower scales. This study provides a deeper understanding of the application of semantic similarity to biomedical ontologies, and sheds light on how to choose appropriate semantic similarity measures for biomedical engineering.
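
    Two of the measures named above, Lin and Jiang-Conrath, are simple functions of information content (IC) once the lowest common subsumer (LCS) of two terms is known. A minimal sketch with invented IC values (in practice IC(c) = -log p(c) estimated from an annotation corpus; the term names are illustrative, not taken from the GO slims studied):

```python
# Toy information-content values for a tiny ontology fragment (invented)
ic = {
    "biological_process": 0.0,   # root: carries no information
    "metabolic_process": 2.1,
    "glycolysis": 5.8,
    "gluconeogenesis": 5.5,
}

def lin(ic_a, ic_b, ic_lcs):
    """Lin similarity: shared information relative to each term's own IC."""
    return 2 * ic_lcs / (ic_a + ic_b) if ic_a + ic_b else 1.0

def jiang_conrath_dist(ic_a, ic_b, ic_lcs):
    """Jiang-Conrath distance: information the two terms do NOT share."""
    return ic_a + ic_b - 2 * ic_lcs

# Assume the LCS of the two leaf terms is metabolic_process
a, b, lcs = ic["glycolysis"], ic["gluconeogenesis"], ic["metabolic_process"]
```

Resnik's measure would simply return `ic_lcs` itself; Lin normalizes it by the terms' own IC, which is one reason it behaves more stably across scales.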

  6. The Chinese-born immigrant infant feeding and growth hypothesis

    Directory of Open Access Journals (Sweden)

    Kristy A. Bolton

    2016-10-01

    Background: Rapid growth in the first six months of life is a well-established risk factor for childhood obesity, and child feeding practices (supplementation or substitution of breast milk with formula and early introduction of solids) have been reported to predict it. The third largest immigrant group in Australia originates from China. Case studies reported by Victorian Maternal and Child Health nurses suggest that rapid growth trajectories in the infants of Chinese parents are commonplace. Furthermore, these nurses report that this client group places high value on rapid growth and a fatter child, that rates of breastfeeding are low, and that overfeeding of infant formula is high. There are currently no studies describing infant growth or its correlates in this immigrant group. Presentation of the hypothesis: We postulate that in Australia, Chinese-born immigrant mothers will have different infant feeding practices compared to non-immigrant mothers, resulting in different growth trajectories and risk of overweight. We present the Chinese-born immigrant infant feeding and growth hypothesis: that less breastfeeding, high formula feeding, and early introduction of solids in infants of Chinese-born immigrant mothers living in Australia will result in a high protein intake and a subsequent rapid growth trajectory and increased risk of overweight and obesity. Testing the hypothesis: Three related studies will be conducted to investigate the hypothesis: two quantitative studies (one cross-sectional, one longitudinal) and a qualitative study. The quantitative studies will investigate differences in feeding practices between Chinese-born immigrant and non-immigrant mothers and infants, and the growth trajectories over the first 3.5 years of life. The qualitative study will provide a more in-depth understanding of the factors influencing feeding practices in Chinese-born immigrant mothers. Implications of the

  7. Review series on helminths, immune modulation and the hygiene hypothesis: the broader implications of the hygiene hypothesis.

    Science.gov (United States)

    Rook, Graham A W

    2009-01-01

    Man has moved rapidly from the hunter-gatherer environment to the living conditions of the rich industrialized countries. The hygiene hypothesis suggests that the resulting changed and reduced pattern of exposure to microorganisms has led to disordered regulation of the immune system, and hence to increases in certain inflammatory disorders. The concept began with the allergic disorders, but there are now good reasons for extending it to autoimmunity, inflammatory bowel disease, neuroinflammatory disorders, atherosclerosis, depression associated with raised inflammatory cytokines, and some cancers. This review discusses these possibilities in the context of Darwinian medicine, which uses knowledge of evolution to cast light on human diseases. The Darwinian approach enables one to correctly identify some of the organisms that are important for the 'Hygiene' or 'Old Friends' hypothesis, and to point to the potential exploitation of these organisms or their components in novel types of prophylaxis with applications in several branches of medicine.

  8. Measure of Node Similarity in Multilayer Networks

    DEFF Research Database (Denmark)

    Møllgaard, Anders; Zettler, Ingo; Dammeyer, Jesper

    2016-01-01

    The weight of links in a network is often related to the similarity of the nodes. Here, we introduce a simple tunable measure for analysing the similarity of nodes across different link weights. In particular, we use the measure to analyze homophily in a group of 659 freshman students at a large ... university. Our analysis is based on data obtained using smartphones equipped with custom data collection software, complemented by questionnaire-based data. The network of social contacts is represented as a weighted multilayer network constructed from different channels of telecommunication as well as data ... might be present in one layer of the multilayer network and simultaneously be absent in the other layers. For a variable such as gender, our measure reveals a transition from similarity between nodes connected with links of relatively low weight to dissimilarity for the nodes connected by the strongest ...

  9. A Novel Hybrid Similarity Calculation Model

    Directory of Open Access Journals (Sweden)

    Xiaoping Fan

    2017-01-01

    This paper addresses the problems of similarity calculation in traditional nearest-neighbor collaborative filtering recommendation algorithms, in particular their failure to describe dynamic user preferences. To address the problem of user interest drift, a new hybrid similarity calculation model is proposed in this paper. This model consists of two parts: on the one hand, the model uses function fitting to describe users' rating behaviors and rating preferences; on the other hand, it employs the Random Forest algorithm to take user attribute features into account. The paper then combines the two parts to build a new hybrid similarity calculation model for user recommendation. Experimental results show that, for data sets of different sizes, the model's prediction precision is higher than that of the traditional recommendation algorithms.
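
    The two-part structure described (a rating-behavior component plus an attribute component, blended into one score) can be sketched with simpler stand-ins: cosine similarity over co-rated items for the behavior part and Jaccard overlap for the attribute part, in place of the paper's function fitting and Random Forest. The blending weight `alpha` and all data below are assumptions for illustration:

```python
def cosine(u, v):
    """Cosine similarity over the items both users rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = sum(u[i] ** 2 for i in common) ** 0.5
    nv = sum(v[i] ** 2 for i in common) ** 0.5
    return dot / (nu * nv)

def jaccard(a, b):
    """Attribute overlap between two users (sets of attribute tags)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def hybrid_similarity(ratings_u, ratings_v, attrs_u, attrs_v, alpha=0.7):
    """Weighted blend of rating-based and attribute-based similarity."""
    return alpha * cosine(ratings_u, ratings_v) + (1 - alpha) * jaccard(attrs_u, attrs_v)

# Invented example users: item->rating dicts plus attribute tag sets
u = {"film1": 5, "film2": 3}
v = {"film1": 4, "film2": 4, "film3": 2}
s = hybrid_similarity(u, v, {"student", "urban"}, {"student", "rural"}, alpha=0.7)
```

The attribute term gives the model a signal even for user pairs with few co-rated items, which is the motivation for hybrid schemes of this kind.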

  10. Universal self-similarity of propagating populations.

    Science.gov (United States)

    Eliazar, Iddo; Klafter, Joseph

    2010-07-01

    This paper explores the universal self-similarity of propagating populations. The following general propagation model is considered: particles are randomly emitted from the origin of a d-dimensional Euclidean space and propagate randomly and independently of each other in space; all particles share a statistically common, yet arbitrary, motion pattern; each particle has its own random propagation parameters: emission epoch, motion frequency, and motion amplitude. The universally self-similar statistics of the particles' displacements and first passage times (FPTs) are analyzed: statistics which are invariant with respect to the details of the displacement and FPT measurements and with respect to the particles' underlying motion pattern. The analysis concludes that the universally self-similar statistics are governed by Poisson processes with power-law intensities and by the Fréchet and Weibull extreme-value laws.

  12. Trajectory similarity join in spatial networks

    KAUST Repository

    Shang, Shuo; Chen, Lisi; Wei, Zhewei; Jensen, Christian S.; Zheng, Kai; Kalnis, Panos

    2017-01-01

    With these applications in mind, we provide a purposeful definition of similarity. To enable efficient TS-Join processing on large sets of trajectories, we develop search space pruning techniques and take into account the parallel processing capabilities of modern processors. Specifically, we present a two-phase divide-and-conquer algorithm. For each trajectory, the algorithm first finds similar trajectories. Then it merges the results to achieve a final result. The algorithm exploits an upper bound on the spatiotemporal similarity and a heuristic scheduling strategy for search space pruning. The algorithm's per-trajectory searches are independent of each other and can be performed in parallel, and the merging has constant cost. An empirical study with real data offers insight into the performance of the algorithm and demonstrates that it is capable of outperforming a well-designed baseline algorithm by an order of magnitude.
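
    The two-phase structure described above can be illustrated with a toy trajectory join. The similarity function and the bounding-box upper bound below are our own illustrative stand-ins, not the paper's definitions:

```python
import math

def point_sim(p, q):
    # Toy pointwise similarity: decays with Euclidean distance.
    return math.exp(-math.dist(p, q))

def traj_sim(t1, t2):
    # Toy trajectory similarity: mean of best-match pointwise similarities.
    return sum(max(point_sim(p, q) for q in t2) for p in t1) / len(t1)

def bbox(t):
    xs = [p[0] for p in t]
    ys = [p[1] for p in t]
    return min(xs), min(ys), max(xs), max(ys)

def bbox_gap(b1, b2):
    # Minimum possible distance between two axis-aligned bounding boxes.
    dx = max(b1[0] - b2[2], b2[0] - b1[2], 0.0)
    dy = max(b1[1] - b2[3], b2[1] - b1[3], 0.0)
    return math.hypot(dx, dy)

def ts_join(trajs, threshold):
    boxes = [bbox(t) for t in trajs]
    results = []
    # Phase 1: per-trajectory searches (independent, so parallelizable).
    for i, t in enumerate(trajs):
        for j in range(i + 1, len(trajs)):
            # Upper bound: no point pair can be closer than the box gap,
            # so similarity cannot exceed exp(-gap); prune if below threshold.
            if math.exp(-bbox_gap(boxes[i], boxes[j])) < threshold:
                continue
            s = (traj_sim(t, trajs[j]) + traj_sim(trajs[j], t)) / 2
            if s >= threshold:
                results.append((i, j, s))
    # Phase 2: merge the per-trajectory results (trivial here).
    return results
```

    The pruning step is sound for this toy similarity because every pointwise distance is at least the bounding-box gap, so exp(-gap) is a true upper bound on the pair's similarity.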

  13. Phonological similarity in working memory span tasks.

    Science.gov (United States)

    Chow, Michael; Macnamara, Brooke N; Conway, Andrew R A

    2016-08-01

    In a series of four experiments, we explored what conditions are sufficient to produce a phonological similarity facilitation effect in working memory span tasks. By using the same set of memoranda but varying the secondary-task requirements across experiments, we showed that a phonological similarity facilitation effect depends on the semantic relationship between the memoranda and the secondary-task stimuli, and is robust to changes in the representation, ordering, and pool size of the secondary-task stimuli. These findings are consistent with interference accounts of memory (Brown, Neath, & Chater, Psychological Review, 114, 539-576, 2007; Oberauer, Lewandowsky, Farrell, Jarrold, & Greaves, Psychonomic Bulletin & Review, 19, 779-819, 2012), whereby rhyming stimuli provide a form of categorical similarity that allows distractors to be excluded from retrieval at recall.

  14. Unveiling Music Structure Via PLSA Similarity Fusion

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Meng, Anders; Petersen, Kaare Brandt

    2007-01-01

    Nowadays there is an increasing interest in developing methods for building music recommendation systems. In order to get a satisfactory performance from such a system, one needs to incorporate as much information about song similarity as possible; however, how to do so is not obvious. ... observed similarities can be satisfactorily explained using the latent semantics. Additionally, this approach significantly simplifies the song retrieval phase, leading to a more practical system implementation. The suitability of the PLSA model for representing music structure is studied in a simplified ...
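
    The latent-semantics idea above rests on PLSA. As a rough illustration of how PLSA factorizes a co-occurrence matrix via EM, here is a minimal sketch on a made-up song-by-tag count matrix (this is not the paper's similarity-fusion procedure):

```python
import random

def plsa(counts, n_topics, n_iter=50, seed=0):
    """Minimal PLSA: factor p(song, tag) ~ sum_z p(z) p(song|z) p(tag|z) by EM."""
    rng = random.Random(seed)
    n_s, n_t = len(counts), len(counts[0])

    def norm(v):
        s = sum(v)
        return [x / s for x in v]

    pz = [1.0 / n_topics] * n_topics
    ps_z = [norm([rng.random() for _ in range(n_s)]) for _ in range(n_topics)]
    pt_z = [norm([rng.random() for _ in range(n_t)]) for _ in range(n_topics)]
    for _ in range(n_iter):
        nz = [0.0] * n_topics
        ns = [[0.0] * n_s for _ in range(n_topics)]
        nt = [[0.0] * n_t for _ in range(n_topics)]
        for s in range(n_s):
            for t in range(n_t):
                if counts[s][t] == 0:
                    continue
                # E-step: posterior p(z | song, tag) for this cell.
                post = [pz[z] * ps_z[z][s] * pt_z[z][t] for z in range(n_topics)]
                tot = sum(post)
                for z in range(n_topics):
                    w = counts[s][t] * post[z] / tot
                    nz[z] += w
                    ns[z][s] += w
                    nt[z][t] += w
        # M-step: re-normalize the expected counts.
        pz = norm(nz)
        ps_z = [norm(row) for row in ns]
        pt_z = [norm(row) for row in nt]
    return pz, ps_z, pt_z
```

    After training, p(song|z) gives the low-dimensional latent representation in which song similarities can be compared.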

  15. Large margin classification with indefinite similarities

    KAUST Repository

    Alabdulmohsin, Ibrahim

    2016-01-07

    Classification with indefinite similarities has attracted attention in the machine learning community. This is partly due to the fact that many similarity functions that arise in practice are not symmetric positive semidefinite, i.e. the Mercer condition is not satisfied, or the Mercer condition is difficult to verify. Examples of such indefinite similarities in machine learning applications are ample, including, for instance, the BLAST similarity score between protein sequences, human-judged similarities between concepts and words, and the tangent distance or the shape matching distance in computer vision. Nevertheless, previous works on classification with indefinite similarities are not fully satisfactory. They have either introduced sources of inconsistency in handling past and future examples using kernel approximation, settled for local-minimum solutions using non-convex optimization, or produced non-sparse solutions by learning in Krein spaces. Despite the large volume of research devoted to this subject lately, we demonstrate in this paper how an old idea, namely the 1-norm support vector machine (SVM) proposed more than 15 years ago, has several advantages over more recent work. In particular, the 1-norm SVM method is conceptually simpler, which makes it easier to implement and maintain. It is competitive with, if not superior to, all other methods in terms of predictive accuracy. Moreover, it produces solutions that are often sparser than more recent methods by several orders of magnitude. In addition, we provide various theoretical justifications by relating 1-norm SVM to well-established learning algorithms such as neural networks, SVM, and nearest neighbor classifiers. Finally, we conduct a thorough experimental evaluation, which reveals that the evidence in favor of 1-norm SVM is statistically significant.
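
    Treating a (possibly indefinite) similarity matrix as plain features, the 1-norm SVM amounts to L1-regularized hinge-loss minimization, so no Mercer condition is needed. A toy subgradient-descent sketch in pure Python (the data, the similarity function, and the step sizes are our illustrative choices):

```python
def one_norm_svm(S, y, lam=0.01, lr=0.05, iters=2000):
    """Train f(i) = sum_j w[j]*S[i][j] + b by subgradient descent on
    hinge loss + lam * ||w||_1.  S may be indefinite: it is used as a
    plain feature matrix, so no positive semidefiniteness is required."""
    n = len(y)
    w = [0.0] * n
    b = 0.0
    for _ in range(iters):
        # L1 subgradient, plus hinge subgradient for margin violators.
        gw = [lam * (1 if wj > 0 else -1 if wj < 0 else 0) for wj in w]
        gb = 0.0
        for i in range(n):
            f = sum(w[j] * S[i][j] for j in range(n)) + b
            if y[i] * f < 1:
                for j in range(n):
                    gw[j] -= y[i] * S[i][j] / n
                gb -= y[i] / n
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

# Toy indefinite similarity: negative distance (not positive semidefinite).
xs = [-2.0, -1.0, 1.0, 2.0]
ys = [-1, -1, 1, 1]
S = [[-abs(a - c) for c in xs] for a in xs]
w, b = one_norm_svm(S, ys)
preds = [1 if sum(w[j] * S[i][j] for j in range(4)) + b > 0 else -1
         for i in range(4)]
```

    The paper's actual formulation is a linear program; subgradient descent is used here only to keep the sketch dependency-free.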

  16. Eddy diffusion coefficients and their upper limits based on application of the similarity theory

    Directory of Open Access Journals (Sweden)

    M. N. Vlasov

    2015-07-01

    The equation for the diffusion velocity in the mesosphere and the lower thermosphere (MLT) includes the terms for molecular and eddy diffusion. These terms are very similar. For the first time, we show that, by using the similarity theory, the same formula can be obtained for the eddy diffusion coefficient as the commonly used formula derived by Weinstock (1981). The latter was obtained by taking, as a basis, the integral function for diffusion derived by Taylor (1921) and the three-dimensional Kolmogorov kinetic energy spectrum. The exact identity of both formulas means that the eddy diffusion and heat transport coefficients used in the equations, both for diffusion and for thermal conductivity, must meet a criterion that restricts the outer eddy scale to being much less than the scale height of the atmosphere. This requirement is the same as the requirement that the free path of molecules must be much smaller than the scale height of the atmosphere. A further result of this criterion is that the eddy diffusion coefficients Ked, inferred from measurements of energy dissipation rates, cannot exceed the maximum value of 3.2 × 10⁶ cm² s⁻¹ for the maximum value of the energy dissipation rate of 2 W kg⁻¹ measured in the MLT. This means that eddy diffusion coefficients larger than this maximum correspond to eddies with outer scales so large that it is impossible to use these coefficients in eddy diffusion and eddy heat transport equations. Applying this criterion to different experimental data shows that some reported eddy diffusion coefficients do not meet it. For example, the large values of these coefficients (1 × 10⁷ cm² s⁻¹) estimated in the Turbulent Oxygen Mixing Experiment (TOMEX) do not correspond to this criterion. The Ked values inferred at high latitudes by Lübken (1997) meet this criterion for summer and winter polar data, but the Ked values for summer at low latitudes
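
    As a numerical illustration: Weinstock's formula has the form K_ed ≈ 0.8 ε / ω_B², with ε the energy dissipation rate and ω_B the buoyancy frequency. Plugging in the abstract's maximum ε = 2 W kg⁻¹ together with an assumed ω_B² = 5 × 10⁻³ s⁻² (our choice for illustration, not a value from the paper) reproduces the quoted ceiling:

```python
def eddy_diffusion_coefficient(eps_w_per_kg, omega_b_sq):
    """Weinstock-type estimate K_ed ~ 0.8 * eps / omega_B^2, returned in
    cm^2/s.  eps is converted from W/kg (= m^2 s^-3) to cm^2 s^-3."""
    eps_cgs = eps_w_per_kg * 1.0e4   # 1 m^2 = 1e4 cm^2
    return 0.8 * eps_cgs / omega_b_sq

# eps = 2 W/kg (maximum MLT value quoted in the abstract),
# omega_B^2 = 5e-3 s^-2 (assumed for illustration)
k_ed = eddy_diffusion_coefficient(2.0, 5.0e-3)   # 3.2e6 cm^2/s
```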

  17. Similarity joins in relational database systems

    CERN Document Server

    Augsten, Nikolaus

    2013-01-01

    State-of-the-art database systems manage and process a variety of complex objects, including strings and trees. For such objects, equality comparisons are often not meaningful and must be replaced by similarity comparisons. This book describes the concepts and techniques needed to incorporate similarity into database systems. We start out by discussing the properties of strings and trees, and identify the edit distance as the de facto standard for comparing complex objects. Since the edit distance is computationally expensive, token-based distances have been introduced to speed up edit distance computation ...
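
    The edit distance referred to above is standardly computed by dynamic programming; a compact sketch:

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming, O(len(a) * len(b))."""
    prev = list(range(len(b) + 1))   # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]
```

    The quadratic cost of this computation is exactly what motivates the token-based filters mentioned in the abstract.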

  18. Outsourced Similarity Search on Metric Data Assets

    DEFF Research Database (Denmark)

    Yiu, Man Lung; Assent, Ira; Jensen, Christian S.

    2012-01-01

    This paper considers a cloud computing setting in which similarity querying of metric data is outsourced to a service provider. The data is to be revealed only to trusted users, not to the service provider or anyone else. Users query the server for the most similar data objects to a query example ... Outsourcing offers the data owner scalability and a low initial investment. The need for privacy may be due to the data being sensitive (e.g., in medicine), valuable (e.g., in astronomy), or otherwise confidential. Given this setting, the paper presents techniques that transform the data prior to supplying ...
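
    The paper's concrete transformations are not reproduced here; as a generic illustration of querying transformed metric data, the sketch below stores only distances to a few pivot objects and lets the server prune candidates using the reverse triangle inequality (all names and the pivot scheme are our own illustrative choices):

```python
import math

def dist(a, b):
    return math.dist(a, b)

def transform(objects, pivots):
    # Owner-side: each object is replaced by its distances to the pivots,
    # so the server never sees the raw coordinates.
    return [[dist(o, p) for p in pivots] for o in objects]

def server_candidates(table, q_dists, radius):
    """Server-side range filtering on pivot distances only.  By the reverse
    triangle inequality, |d(q,p) - d(o,p)| <= d(q,o); if the left side
    exceeds radius for some pivot, o cannot be within radius of q."""
    out = []
    for idx, row in enumerate(table):
        if all(abs(qd - od) <= radius for qd, od in zip(q_dists, row)):
            out.append(idx)
    return out
```

    The server returns a candidate set that the trusted client then refines against the true data.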

  19. Measure of Node Similarity in Multilayer Networks

    DEFF Research Database (Denmark)

    Møllgaard, Anders; Zettler, Ingo; Dammeyer, Jesper

    2016-01-01

    university. Our analysis is based on data obtained using smartphones equipped with custom data collection software, complemented by questionnaire-based data. The network of social contacts is represented as a weighted multilayer network constructed from different channels of telecommunication as well as data ... might be present in one layer of the multilayer network and simultaneously be absent in the other layers. For a variable such as gender, our measure reveals a transition from similarity between nodes connected with links of relatively low weight to dis-similarity for the nodes connected by the strongest ...
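
    As an illustration of one simple way to quantify node similarity per layer (not necessarily the measure used in the paper), one can compare the weighted neighbourhood vectors of two nodes layer by layer:

```python
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def node_similarity(layers, i, j):
    """Average, over layers, of the cosine similarity between the weighted
    neighbourhood vectors of nodes i and j (the pair itself is excluded)."""
    sims = []
    for adj in layers:   # adj: weighted adjacency matrix of one layer
        n = len(adj)
        u = [adj[i][k] for k in range(n) if k not in (i, j)]
        v = [adj[j][k] for k in range(n) if k not in (i, j)]
        sims.append(cosine(u, v))
    return sum(sims) / len(sims)
```

    Because each layer contributes its own cosine score, two nodes can score high in one communication channel and low in another, which is the multilayer effect the abstract describes.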

  20. Cultural similarity and adjustment of expatriate academics

    DEFF Research Database (Denmark)

    Selmer, Jan; Lauring, Jakob

    2009-01-01

    The findings of a number of recent empirical studies of business expatriates, using different samples and methodologies, seem to support the counter-intuitive proposition that cultural similarity may be as difficult to adjust to as cultural dissimilarity. However, it is not obvious ... and non-EU countries. Results showed that although the perceived cultural similarity between host and home country for the two groups of investigated respondents was different, there was neither any difference in their adjustment nor in the time it took for them to become proficient. Implications ...