WorldWideScience

Sample records for hypothesis testing methods

  1. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper discusses methods of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator. Takeshi Amemiya [1] explained this method. In the present paper, however, a modified Wald test statistic due to Engle, Robert [6] is proposed for testing nonlinear hypotheses using the iterative NLLS estimator. An alternative method based on nonlinear studentized residuals is also proposed, and an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses, and this paper uses the asymptotic properties of the nonlinear least squares estimator proposed by Jenrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus the nonlinear regression model with heteroscedastic errors, and also studied the problem of heteroscedasticity with reference to nonlinear regression models with suitable illustration. William Grene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.

  2. [Dilemma of null hypothesis in ecological hypothesis's experiment test].

    Science.gov (United States)

    Li, Ji

    2016-06-01

    Experimental testing is one of the major methods for testing ecological hypotheses, though there are many arguments concerning the null hypothesis. Quinn and Dunham (1983) analyzed the hypothesis-deduction model of Platt (1964) and concluded that there is no null hypothesis in ecology that can be strictly tested by experiments. Fisher's falsificationism and Neyman-Pearson (N-P) non-decisivity prevent a statistical null hypothesis from being strictly tested. Moreover, since the null hypothesis H0 (α=1, β=0) and alternative hypothesis H1′ (α′=1, β′=0) in ecological processes differ from those in classical physics, the ecological null hypothesis cannot be strictly tested experimentally either. These dilemmas of the null hypothesis can be relieved via the reduction of the P value, careful selection of the null hypothesis, non-centralization of the non-null hypothesis, and two-tailed tests. However, statistical null hypothesis significance testing (NHST) should not be equated with the logical test of causality in an ecological hypothesis. Hence, findings and conclusions of methodological studies and experimental tests based on NHST are not always logically reliable.

  3. Hypothesis Designs for Three-Hypothesis Test Problems

    OpenAIRE

    Yan Li; Xiaolong Pu

    2010-01-01

    As a helpful guide for applications, the alternative hypotheses of the three-hypothesis test problems are designed under the required error probabilities and average sample number in this paper. The asymptotic formulas and the proposed numerical quadrature formulas are adopted, respectively, to obtain the hypothesis designs and the corresponding sequential test schemes under the Koopman-Darmois distributions. The example of the normal mean test shows that our methods are qu...

  4. A NONPARAMETRIC HYPOTHESIS TEST VIA THE BOOTSTRAP RESAMPLING

    OpenAIRE

    Temel, Tugrul T.

    2001-01-01

    This paper adapts an existing nonparametric hypothesis test to the bootstrap framework. The test uses nonparametric kernel regression to estimate a measure of distance between the models stated under the null hypothesis. The bootstrapped version of the test allows one to approximate the errors involved in the asymptotic hypothesis test. The paper also develops Mathematica code for the test algorithm.
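
The bootstrap idea in this record can be illustrated with a minimal sketch. The statistic below (a simple deviation of the sample mean from a hypothesized value, not the paper's kernel-regression distance) is a stand-in chosen for brevity; the null distribution is approximated by resampling data recentered to satisfy H0:

```python
import numpy as np

def bootstrap_pvalue(x, mu0=0.0, n_boot=2000, seed=0):
    """Two-sided bootstrap test of H0: E[X] = mu0."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    t_obs = abs(x.mean() - mu0)
    x_null = x - x.mean() + mu0  # recenter so the resampling population obeys H0
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x_null, size=x.size, replace=True)
        t_boot[b] = abs(xb.mean() - mu0)
    # +1 correction keeps the p-value away from exactly zero
    return (1 + np.sum(t_boot >= t_obs)) / (n_boot + 1)

rng = np.random.default_rng(1)
p_null = bootstrap_pvalue(rng.normal(0.0, 1.0, 100))  # data consistent with H0
p_alt = bootstrap_pvalue(rng.normal(1.0, 1.0, 100))   # clear violation of H0
```

The bootstrap p-value is the fraction of resampled statistics at least as extreme as the observed one, which approximates the error probabilities that the asymptotic test only gives in the limit.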

  5. A default Bayesian hypothesis test for ANOVA designs

    NARCIS (Netherlands)

    Wetzels, R.; Grasman, R.P.P.P.; Wagenmakers, E.J.

    2012-01-01

    This article presents a Bayesian hypothesis test for analysis of variance (ANOVA) designs. The test is an application of standard Bayesian methods for variable selection in regression models. We illustrate the effect of various g-priors on the ANOVA hypothesis test. The Bayesian test for ANOVA

  6. Hypothesis analysis methods, hypothesis analysis devices, and articles of manufacture

    Science.gov (United States)

    Sanfilippo, Antonio P [Richland, WA; Cowell, Andrew J [Kennewick, WA; Gregory, Michelle L [Richland, WA; Baddeley, Robert L [Richland, WA; Paulson, Patrick R [Pasco, WA; Tratz, Stephen C [Richland, WA; Hohimer, Ryan E [West Richland, WA

    2012-03-20

    Hypothesis analysis methods, hypothesis analysis devices, and articles of manufacture are described according to some aspects. In one aspect, a hypothesis analysis method includes providing a hypothesis, providing an indicator which at least one of supports and refutes the hypothesis, using the indicator, associating evidence with the hypothesis, weighting the association of the evidence with the hypothesis, and using the weighting, providing information regarding the accuracy of the hypothesis.
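
The patent abstract describes associating weighted evidence with a hypothesis via indicators that support or refute it. A toy sketch of that idea (the scoring rule, weights, and encoding are all hypothetical illustrations, not the patented method):

```python
def hypothesis_score(evidence):
    """Weighted support score in [-1, 1]: each indicator is a pair
    (weight, direction) with direction +1 (supports) or -1 (refutes)."""
    total = sum(w for w, _ in evidence)
    if total == 0:
        return 0.0
    return sum(w * d for w, d in evidence) / total

# two supporting indicators and one refuting indicator (hypothetical weights)
score = hypothesis_score([(0.9, +1), (0.4, +1), (0.3, -1)])
```

A score near +1 indicates evidence predominantly supporting the hypothesis, near -1 predominantly refuting it.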

  7. Debates—Hypothesis testing in hydrology: Introduction

    Science.gov (United States)

    Blöschl, Günter

    2017-03-01

    This paper introduces the papers in the "Debates—Hypothesis testing in hydrology" series. The four articles in the series discuss whether and how the process of testing hypotheses leads to progress in hydrology. Repeated experiments with controlled boundary conditions are rarely feasible in hydrology. Research is therefore not easily aligned with the classical scientific method of testing hypotheses. Hypotheses in hydrology are often enshrined in computer models which are tested against observed data. Testability may be limited due to model complexity and data uncertainty. All four articles suggest that hypothesis testing has contributed to progress in hydrology and is needed in the future. However, the procedure is usually not as systematic as the philosophy of science suggests. A greater emphasis on a creative reasoning process on the basis of clues and explorative analyses is therefore needed.

  8. Testing the null hypothesis: the forgotten legacy of Karl Popper?

    Science.gov (United States)

    Wilkinson, Mick

    2013-01-01

    Testing of the null hypothesis is a fundamental aspect of the scientific method and has its basis in the falsification theory of Karl Popper. Null hypothesis testing makes use of deductive reasoning to ensure that the truth of conclusions is irrefutable. In contrast, attempting to demonstrate new facts by testing the experimental or research hypothesis makes use of inductive reasoning and is prone to the problem of the Uniformity of Nature assumption described by David Hume in the eighteenth century. Despite this issue and the well-documented solution provided by Popper's falsification theory, the majority of publications are still written as if the research hypothesis were being tested. This is contrary to accepted scientific convention and possibly highlights a poor understanding of the application of conventional significance-based data analysis approaches. Our work should remain driven by conjecture and attempted falsification, such that it is always the null hypothesis that is tested. The write-up of our studies should make it clear that we are indeed testing the null hypothesis and conforming to the established and accepted philosophical conventions of the scientific method.

  9. Hypothesis testing of scientific Monte Carlo calculations

    Science.gov (United States)

    Wallerberger, Markus; Gull, Emanuel

    2017-11-01

    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
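
The approach described here can be illustrated with a small example: test a stochastic simulation against a known exact answer, so that a failing test signals a bug rather than ordinary Monte Carlo noise. This sketch uses a z-test of a Monte Carlo estimate of π against its standard error (the estimator and tolerances are illustrative choices):

```python
import math
import numpy as np

def mc_pi(n, rng):
    """Monte Carlo estimate of pi with its standard-error estimate."""
    hits = rng.random(n) ** 2 + rng.random(n) ** 2 < 1.0
    p_hat = hits.mean()
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return 4 * p_hat, 4 * se

def z_test(estimate, se, truth):
    """Two-sided z-test that the stochastic estimator targets `truth`."""
    z = (estimate - truth) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

rng = np.random.default_rng(42)
est, se = mc_pi(200_000, rng)
p_ok = z_test(est, se, math.pi)          # correct target: p should be unexceptional
p_bug = z_test(est, se, math.pi * 1.01)  # simulated "bug": a 1% bias is flagged
```

An automated test suite would reject the run when the p-value falls below a chosen level, with the level set so that false alarms are rare across the whole suite.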

  10. The discovered preference hypothesis - an empirical test

    DEFF Research Database (Denmark)

    Lundhede, Thomas; Ladenburg, Jacob; Olsen, Søren Bøye

    Using stated preference methods for valuation of non-market goods is known to be vulnerable to a range of biases. Some authors claim that these so-called anomalies in effect render the methods useless for the purpose. However, the Discovered Preference Hypothesis, as put forth by Plott [31], offers... an interpretation and explanation of biases which entails that stated preference methods need not be completely written off. In this paper we conduct a test of the validity and relevance of the DPH interpretation of biases. In a choice experiment concerning preferences for protection of Danish nature areas... as respondents evaluate more and more choice sets. This finding supports the Discovered Preference Hypothesis interpretation and explanation of starting point bias...

  11. Prospective detection of large prediction errors: a hypothesis testing approach

    International Nuclear Information System (INIS)

    Ruan, Dan

    2010-01-01

    Real-time motion management is important in radiotherapy. In addition to effective monitoring schemes, prediction is required to compensate for system latency, so that treatment can be synchronized with tumor motion. However, it is difficult to predict tumor motion at all times, and it is critical to determine when large prediction errors may occur. Such information can be used to pause the treatment beam or adjust monitoring/prediction schemes. In this study, we propose a hypothesis testing approach for detecting instants corresponding to potentially large prediction errors in real time. We treat the future tumor location as a random variable, and obtain its empirical probability distribution with the kernel density estimation-based method. Under the null hypothesis, the model probability is assumed to be a concentrated Gaussian centered at the prediction output. Under the alternative hypothesis, the model distribution is assumed to be non-informative uniform, which reflects the situation that the future position cannot be inferred reliably. We derive the likelihood ratio test (LRT) for this hypothesis testing problem and show that with the method of moments for estimating the null hypothesis Gaussian parameters, the LRT reduces to a simple test on the empirical variance of the predictive random variable. This conforms to the intuition to expect a (potentially) large prediction error when the estimate is associated with high uncertainty, and to expect an accurate prediction when the uncertainty level is low. We tested the proposed method on patient-derived respiratory traces. The 'ground-truth' prediction error was evaluated by comparing the prediction values with retrospective observations, and the large prediction regions were subsequently delineated by thresholding the prediction errors. The receiver operating characteristic curve was used to describe the performance of the proposed hypothesis testing method. Clinical implication was represented by miss
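
The abstract's key simplification, that the LRT reduces to a test on the empirical variance of the predictive random variable, can be sketched as a threshold rule (the threshold and the sample-based predictive distributions below are illustrative, not the clinical parameters):

```python
import numpy as np

def flag_unreliable(pred_samples, var_threshold):
    """Flag time instants whose predictive samples have variance above the
    threshold -- the simple variance test the LRT is said to reduce to."""
    return np.var(pred_samples, axis=1) > var_threshold

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 0.1, size=(5, 200))   # 5 instants, 200 samples each
samples[3] = rng.uniform(-2.0, 2.0, size=200)   # instant 3: non-informative spread
flags = flag_unreliable(samples, var_threshold=0.05)
```

High predictive variance marks the instants where a large prediction error is plausible, which is when treatment would be paused or the monitoring scheme adjusted.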

  12. Some consequences of using the Horsfall-Barratt scale for hypothesis testing

    Science.gov (United States)

    Comparing treatment effects by hypothesis testing is a common practice in plant pathology. Nearest percent estimates (NPEs) of disease severity were compared to Horsfall-Barratt (H-B) scale data to explore whether there was an effect of assessment method on hypothesis testing. A simulation model ba...

  13. An omnibus test for the global null hypothesis.

    Science.gov (United States)

    Futschik, Andreas; Taus, Thomas; Zehetmayer, Sonja

    2018-01-01

    Global hypothesis tests are a useful tool in the context of clinical trials, genetic studies, or meta-analyses, when researchers are interested not in testing individual hypotheses but in testing whether none of the hypotheses is false. There are several possibilities for testing the global null hypothesis when the individual null hypotheses are independent. If it is assumed that many of the individual null hypotheses are false, combination tests have been recommended to maximize power. If, however, it is assumed that only one or a few null hypotheses are false, global tests based on individual test statistics are more powerful (e.g. the Bonferroni or Simes test). Usually, however, there is no a priori knowledge of the number of false individual null hypotheses. We therefore propose an omnibus test based on cumulative sums of the transformed p-values. We show that this test yields an impressive overall performance. The proposed method is implemented in an R package called omnibus.
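
One possible realization of a cumulative-sum omnibus statistic is sketched below (the transformation, standardization, and simulated null are assumptions for illustration; the published test may differ in detail):

```python
import numpy as np

def omnibus_stat(pvals):
    """Max standardized cumulative sum of sorted -log p-values
    (one way to realize the cumulative-sum idea)."""
    t = np.sort(-np.log(pvals))[::-1]     # strongest evidence first
    cs = np.cumsum(t)
    k = np.arange(1, t.size + 1)
    return np.max(cs / np.sqrt(k))

def omnibus_pvalue(pvals, n_sim=2000, seed=0):
    """Monte Carlo null: independent uniform p-values of the same length."""
    rng = np.random.default_rng(seed)
    obs = omnibus_stat(pvals)
    null = np.array([omnibus_stat(rng.random(len(pvals))) for _ in range(n_sim)])
    return (1 + np.sum(null >= obs)) / (n_sim + 1)

rng = np.random.default_rng(1)
p_all_null = omnibus_pvalue(rng.random(20))               # every H0 true
p_sparse = omnibus_pvalue(np.r_[1e-12, rng.random(19)])   # one strong signal
```

Because the maximum is taken over all prefix sums, the statistic stays sensitive both to a single very small p-value and to many moderately small ones, which is the adaptivity the abstract aims for.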

  14. A default Bayesian hypothesis test for mediation.

    Science.gov (United States)

    Nuijten, Michèle B; Wetzels, Ruud; Matzke, Dora; Dolan, Conor V; Wagenmakers, Eric-Jan

    2015-03-01

    In order to quantify the relationship between multiple variables, researchers often carry out a mediation analysis. In such an analysis, a mediator (e.g., knowledge of a healthy diet) transmits the effect from an independent variable (e.g., classroom instruction on a healthy diet) to a dependent variable (e.g., consumption of fruits and vegetables). Almost all mediation analyses in psychology use frequentist estimation and hypothesis-testing techniques. A recent exception is Yuan and MacKinnon (Psychological Methods, 14, 301-322, 2009), who outlined a Bayesian parameter estimation procedure for mediation analysis. Here we complete the Bayesian alternative to frequentist mediation analysis by specifying a default Bayesian hypothesis test based on the Jeffreys-Zellner-Siow approach. We further extend this default Bayesian test by allowing a comparison to directional or one-sided alternatives, using Markov chain Monte Carlo techniques implemented in JAGS. All Bayesian tests are implemented in the R package BayesMed (Nuijten, Wetzels, Matzke, Dolan, & Wagenmakers, 2014).

  15. Plant Disease Severity Assessment-How Rater Bias, Assessment Method, and Experimental Design Affect Hypothesis Testing and Resource Use Efficiency.

    Science.gov (United States)

    Chiang, Kuo-Szu; Bock, Clive H; Lee, I-Hsuan; El Jarroudi, Moussa; Delfosse, Philippe

    2016-12-01

    The effect of rater bias and assessment method on hypothesis testing was studied for representative experimental designs for plant disease assessment using balanced and unbalanced data sets. Data sets with the same number of replicate estimates for each of two treatments are termed "balanced" and those with unequal numbers of replicate estimates are termed "unbalanced". The three assessment methods considered were nearest percent estimates (NPEs), an amended 10% incremental scale, and the Horsfall-Barratt (H-B) scale. Estimates of severity of Septoria leaf blotch on leaves of winter wheat were used to develop distributions for a simulation model. The experimental designs are presented here in the context of simulation experiments which consider the optimal design for the number of specimens (individual units sampled) and the number of replicate estimates per specimen for a fixed total number of observations (total sample size for the treatments being compared). The criterion used to gauge each method was the power of the hypothesis test. As expected, at a given fixed number of observations, the balanced experimental designs invariably resulted in a higher power compared with the unbalanced designs at different disease severity means, mean differences, and variances. Based on these results, with unbiased estimates using NPE, the recommended number of replicate estimates taken per specimen is 2 (from a sample of specimens of at least 30), because this conserves resources. Furthermore, for biased estimates, an apparent difference in the power of the hypothesis test was observed between assessment methods and between experimental designs. Results indicated that, regardless of experimental design or rater bias, an amended 10% incremental scale has slightly less power compared with NPEs, and that the H-B scale is more likely than the others to cause a type II error. 
These results suggest that choice of assessment method, optimizing sample number and number of replicate
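
The balanced-versus-unbalanced comparison reported in this record can be reproduced in miniature by simulation. The sketch below estimates power for two allocations of the same total sample size, using a large-sample normal approximation in place of the full severity-distribution model (means, standard deviation, and sample sizes are illustrative):

```python
import math
import numpy as np

def two_sample_z(x, y):
    """Large-sample two-sided test for equal means (normal approximation
    to Welch's t, adequate for the group sizes used here)."""
    se = math.sqrt(x.var(ddof=1) / x.size + y.var(ddof=1) / y.size)
    z = (x.mean() - y.mean()) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def power(n1, n2, delta=5.0, sd=8.0, n_sim=3000, alpha=0.05, seed=0):
    """Simulated power for two treatments whose severity means differ by delta."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        x = rng.normal(20.0, sd, n1)
        y = rng.normal(20.0 + delta, sd, n2)
        hits += two_sample_z(x, y) < alpha
    return hits / n_sim

# same total sample size (60), balanced vs unbalanced allocation
p_balanced = power(30, 30)
p_unbalanced = power(10, 50)
```

As in the study, the balanced allocation of a fixed total number of observations yields the higher power, because the standard error of the mean difference is minimized when the two groups are equal.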

  16. Hypothesis Testing in the Real World

    Science.gov (United States)

    Miller, Jeff

    2017-01-01

    Critics of null hypothesis significance testing suggest that (a) its basic logic is invalid and (b) it addresses a question that is of no interest. In contrast to (a), I argue that the underlying logic of hypothesis testing is actually extremely straightforward and compelling. To substantiate that, I present examples showing that hypothesis…

  17. Concerns regarding a call for pluralism of information theory and hypothesis testing

    Science.gov (United States)

    Lukacs, P.M.; Thompson, W.L.; Kendall, W.L.; Gould, W.R.; Doherty, P.F.; Burnham, K.P.; Anderson, D.R.

    2007-01-01

    1. Stephens et al. (2005) argue for 'pluralism' in statistical analysis, combining null hypothesis testing and information-theoretic (I-T) methods. We show that I-T methods are more informative even in single-variable problems, and we provide an ecological example. 2. I-T methods allow inferences to be made from multiple models simultaneously. We believe multimodel inference is the future of data analysis, which cannot be achieved with null hypothesis-testing approaches. 3. We argue for a stronger emphasis on critical thinking in science in general and less reliance on exploratory data analysis and data dredging. Deriving alternative hypotheses is central to science; deriving a single interesting scientific hypothesis and then comparing it to a default null hypothesis (e.g. 'no difference') is not an efficient strategy for gaining knowledge. We think this single-hypothesis strategy has been relied upon too often in the past. 4. We clarify misconceptions presented by Stephens et al. (2005). 5. We think inference should be made about models, directly linked to scientific hypotheses, and their parameters conditioned on data, Prob(Hj | data). I-T methods provide a basis for this inference. Null hypothesis testing merely provides a probability statement about the data conditioned on a null model, Prob(data | H0). 6. Synthesis and applications. I-T methods provide a more informative approach to inference: a direct measure of evidence for or against hypotheses and a means to consider multiple hypotheses simultaneously as a basis for rigorous inference. Progress in our science can be accelerated if modern methods are used intelligently; this includes various I-T and Bayesian methods.
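
A concrete building block of the I-T approach advocated here is the conversion of information criteria into model weights for multimodel inference. A minimal sketch using Akaike weights (the AIC values are hypothetical):

```python
import math

def akaike_weights(aic_values):
    """Akaike weights: the relative likelihood of each model in the
    candidate set, normalized to sum to one."""
    a_min = min(aic_values)
    rel = [math.exp(-0.5 * (a - a_min)) for a in aic_values]
    total = sum(rel)
    return [r / total for r in rel]

# hypothetical AIC scores for three competing ecological models
weights = akaike_weights([210.3, 212.1, 218.9])
```

Unlike a null hypothesis test, the weights quantify evidence for each model simultaneously and can be carried forward into model-averaged estimates.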

  18. Bayesian Hypothesis Testing

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, Stephen A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sigeti, David E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-11-15

    These are a set of slides about Bayesian hypothesis testing in the setting where many hypotheses are tested. The conclusions are the following: the value of the Bayes factor obtained when using the median of the posterior marginal is almost the minimum value of the Bayes factor. The value of τ² which minimizes the Bayes factor is a reasonable choice for this parameter. This allows a likelihood ratio to be computed which is the least favorable to H0.

  19. Hypothesis Testing as an Act of Rationality

    Science.gov (United States)

    Nearing, Grey

    2017-04-01

    Statistical hypothesis testing is ad hoc in two ways. First, setting probabilistic rejection criteria is, as Neyman (1957) put it, an act of will rather than an act of rationality. Second, physical theories like conservation laws do not inherently admit probabilistic predictions, so we must use what are called epistemic bridge principles to connect model predictions with the actual methods of hypothesis testing. In practice, these bridge principles are likelihood functions, error functions, or performance metrics. I propose that we face these problems because we have historically failed to account for a fundamental component of basic logic, namely the portion of logic that explains how epistemic states evolve in the presence of empirical data. This component of Cox's (1946) logical calculus is called information theory (Knuth, 2005), and adding information theory to our hypothetico-deductive account of science yields straightforward solutions to both of the above problems. It also yields a straightforward method for dealing with Popper's (1963) problem of verisimilitude by facilitating a quantitative approach to measuring process isomorphism; in practice, this involves data assimilation. Finally, information theory allows us to reliably bound measures of epistemic uncertainty, thereby avoiding the problem of Bayesian incoherency under misspecified priors (Grünwald, 2006). I therefore propose solutions to four of the fundamental problems inherent in hypothetico-deductive and/or Bayesian hypothesis testing. - Neyman (1957) Inductive Behavior as a Basic Concept of Philosophy of Science. - Cox (1946) Probability, Frequency and Reasonable Expectation. - Knuth (2005) Lattice Duality: The Origin of Probability and Entropy. - Grünwald (2006) Bayesian Inconsistency under Misspecification. - Popper (1963) Conjectures and Refutations: The Growth of Scientific Knowledge.

  20. Approaches to informed consent for hypothesis-testing and hypothesis-generating clinical genomics research.

    Science.gov (United States)

    Facio, Flavia M; Sapp, Julie C; Linn, Amy; Biesecker, Leslie G

    2012-10-10

    Massively-parallel sequencing (MPS) technologies create challenges for informed consent of research participants given the enormous scale of the data and the wide range of potential results. We propose that the consent process in these studies be based on whether they use MPS to test a hypothesis or to generate hypotheses. To demonstrate the differences in these approaches to informed consent, we describe the consent processes for two MPS studies. The purpose of our hypothesis-testing study is to elucidate the etiology of rare phenotypes using MPS. The purpose of our hypothesis-generating study is to test the feasibility of using MPS to generate clinical hypotheses, and to approach the return of results as an experimental manipulation. Issues to consider in both designs include: volume and nature of the potential results, primary versus secondary results, return of individual results, duty to warn, length of interaction, target population, and privacy and confidentiality. The categorization of MPS studies as hypothesis-testing versus hypothesis-generating can help to clarify the issue of so-called incidental or secondary results for the consent process, and aid the communication of the research goals to study participants.

  1. A Bayesian Decision-Theoretic Approach to Logically-Consistent Hypothesis Testing

    Directory of Open Access Journals (Sweden)

    Gustavo Miranda da Silva

    2015-09-01

    This work addresses an important issue regarding the performance of simultaneous test procedures: the construction of multiple tests that are optimal from a statistical perspective and at the same time yield logically-consistent results that are easy to communicate to practitioners of statistical methods. For instance, if hypothesis A implies hypothesis B, is it possible to create optimal testing procedures that reject A whenever they reject B? Unfortunately, several standard testing procedures fail to have such logical consistency. Although this has been deeply investigated from a frequentist perspective, the literature lacks analyses under a Bayesian paradigm. In this work, we contribute to the discussion by investigating three rational relationships from a Bayesian decision-theoretic standpoint: coherence, invertibility and union consonance. We characterize and illustrate, through simple examples, optimal Bayes tests that fulfill each of these requisites separately. We also explore how far one can go by putting these requirements together. We show that although fairly intuitive tests satisfy both coherence and invertibility, no Bayesian testing scheme meets the desiderata as a whole, strengthening the understanding that logical consistency cannot, in general, be combined with statistical optimality. Finally, we associate Bayesian hypothesis testing with Bayes point estimation procedures, and prove that the performance of logically-consistent hypothesis testing by means of a Bayes point estimator is optimal only under very restrictive conditions.

  2. A novel hypothesis splitting method implementation for multi-hypothesis filters

    DEFF Research Database (Denmark)

    Bayramoglu, Enis; Ravn, Ole; Andersen, Nils Axel

    2013-01-01

    The paper presents a multi-hypothesis filter library featuring a novel method for splitting Gaussians into ones with smaller variances. The library is written in C++ for high performance and the source code is open and free. The multi-hypothesis filters commonly approximate the distribution tran...

  3. A critique of statistical hypothesis testing in clinical research

    Directory of Open Access Journals (Sweden)

    Somik Raha

    2011-01-01

    Many have documented the difficulty of using the current paradigm of Randomized Controlled Trials (RCTs) to test and validate the effectiveness of alternative medical systems such as Ayurveda. This paper critiques the applicability of RCTs for all clinical knowledge-seeking endeavors, of which Ayurveda research is a part. This is done by examining statistical hypothesis testing, the underlying foundation of RCTs, from a practical and philosophical perspective. In the philosophical critique, the two main worldviews of probability are the Bayesian and the frequentist. The frequentist worldview is a special case of the Bayesian worldview, requiring the unrealistic assumptions of knowing nothing about the universe and believing that all observations are unrelated to each other. Many have claimed that the first belief is necessary for science, and this claim is debunked by comparing variations in learning with different prior beliefs. Moving beyond the Bayesian and frequentist worldviews, the notion of hypothesis testing itself is challenged on the grounds that a hypothesis is an unclear distinction, and assigning a probability to an unclear distinction is an exercise that does not lead to clarity of action. This critique is of the theory itself and not any particular application of statistical hypothesis testing. A decision-making frame is proposed as a way of both addressing this critique and transcending ideological debates on probability. An example of a Bayesian decision-making approach is shown as an alternative to statistical hypothesis testing, utilizing data from a past clinical trial that studied the effect of Aspirin on heart attacks in a sample population of doctors. Since a major reason for the prevalence of RCTs in academia is legislation requiring them, the ethics of legislating the use of statistical methods for clinical research is also examined.
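
The Bayesian decision-making alternative described in this record can be illustrated with conjugate Beta-Binomial updating: compute the posterior probability that the treatment event rate is below control (the counts below are hypothetical, not the trial data analyzed in the paper):

```python
import numpy as np

def prob_treatment_better(e_t, n_t, e_c, n_c, n_draws=100_000, seed=0):
    """Posterior probability that the treatment event rate is lower than
    control, under independent uniform Beta(1, 1) priors.  With a Binomial
    likelihood the posterior is Beta(1 + events, 1 + non-events)."""
    rng = np.random.default_rng(seed)
    rate_t = rng.beta(1 + e_t, 1 + n_t - e_t, n_draws)
    rate_c = rng.beta(1 + e_c, 1 + n_c - e_c, n_draws)
    return np.mean(rate_t < rate_c)

# hypothetical trial counts in the spirit of the aspirin example
p_better = prob_treatment_better(e_t=10, n_t=1000, e_c=25, n_c=1000)
```

The output is a direct decision-relevant quantity ("how probable is it that treatment helps?") rather than a p-value about data under a null model.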

  4. Bayesian Hypothesis Testing for Psychologists: A Tutorial on the Savage-Dickey Method

    Science.gov (United States)

    Wagenmakers, Eric-Jan; Lodewyckx, Tom; Kuriyal, Himanshu; Grasman, Raoul

    2010-01-01

    In the field of cognitive psychology, the p-value hypothesis test has established a stranglehold on statistical reporting. This is unfortunate, as the p-value provides at best a rough estimate of the evidence that the data provide for the presence of an experimental effect. An alternative and arguably more appropriate measure of evidence is…
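
The Savage-Dickey method computes a Bayes factor for a point null as the ratio of posterior to prior density at the null value. A minimal sketch for a normal model with known unit variance and a normal prior on the effect (a textbook special case, not the tutorial's full examples):

```python
import math
import numpy as np

def savage_dickey_bf01(x, prior_sd=1.0):
    """BF01 via the Savage-Dickey ratio: posterior over prior density of the
    effect delta at the null point delta = 0.  Model: x_i ~ N(delta, 1) with
    delta ~ N(0, prior_sd**2), so the posterior is conjugate normal."""
    n = len(x)
    post_var = 1.0 / (1.0 / prior_sd**2 + n)
    post_mean = post_var * n * np.mean(x)

    def npdf(v, m, s):
        return math.exp(-0.5 * ((v - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

    return npdf(0.0, post_mean, math.sqrt(post_var)) / npdf(0.0, 0.0, prior_sd)

rng = np.random.default_rng(0)
bf01_null = savage_dickey_bf01(rng.normal(0.0, 1.0, 50))    # H0 roughly true
bf01_effect = savage_dickey_bf01(rng.normal(0.8, 1.0, 50))  # clear effect
```

BF01 above 1 favors the null; well below 1, the alternative. Unlike a p-value, the same number can quantify evidence for either hypothesis.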

  5. Feasibility study using hypothesis testing to demonstrate containment of radionuclides within waste packages

    International Nuclear Information System (INIS)

    Thomas, R.E.

    1986-04-01

    The purpose of this report is to apply methods of statistical hypothesis testing to demonstrate the performance of containers of radioactive waste. The approach involves modeling the failure times of waste containers using Weibull distributions, making strong assumptions about the parameters. A specific objective is to apply methods of statistical hypothesis testing to determine the number of container tests that must be performed in order to control the probability of arriving at the wrong conclusions. An algorithm to determine the required number of containers to be tested, with the acceptable number of failures, is derived as a function of the distribution parameters, stated probabilities, and the desired waste containment life. Using a set of reference values for the input parameters, sample sizes of containers to be tested are calculated for demonstration purposes. These sample sizes are found to be excessively large, indicating that this hypothesis-testing framework does not provide a feasible approach for demonstrating satisfactory performance of waste packages over exceptionally long time periods.
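
The flavor of the sample-size calculation can be conveyed with the classical zero-failure (success-run) demonstration formula under a Weibull life model; all parameters below are illustrative, not the report's reference values:

```python
import math

def weibull_reliability(t, scale, shape):
    """Weibull survival function R(t) = exp(-(t/scale)**shape)."""
    return math.exp(-((t / scale) ** shape))

def zero_failure_sample_size(confidence, t_mission, scale, shape):
    """Number of containers that must all survive a test of length t_mission
    to demonstrate the assumed Weibull reliability at the given confidence
    (classical success-run formula n = ln(1 - C) / ln R)."""
    r = weibull_reliability(t_mission, scale, shape)
    return math.ceil(math.log(1.0 - confidence) / math.log(r))

# hypothetical parameters: 300-year containment, long characteristic life
n = zero_failure_sample_size(confidence=0.95, t_mission=300.0,
                             scale=10_000.0, shape=1.5)
```

Even this simplified calculation yields hundreds of containers, consistent with the report's conclusion that the required sample sizes are excessively large.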

  6. Hypothesis tests for the detection of constant speed radiation moving sources

    Energy Technology Data Exchange (ETDEWEB)

    Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir; Boudergui, Karim; Sannie, Guillaume; Gameiro, Jordan; Normand, Stephane [CEA, LIST, Laboratoire Capteurs Architectures Electroniques, 99 Gif-sur-Yvette, (France); Mechin, Laurence [CNRS, UCBN, Groupe de Recherche en Informatique, Image, Automatique et Instrumentation de Caen, 4050 Caen, (France)

    2015-07-01

    Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single- and multichannel detection algorithms, which are inefficient at too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Hypothesis test methods based on empirically estimated means and variances of the signals delivered by the different channels have shown significant gains in the tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations, a pedestrian source carrier under respectively high and low count-rate radioactive background, and a vehicle source carrier under the same respectively high and low count-rate radioactive background, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm, while guaranteeing the stability of its optimization parameter for signal-to-noise ratio variations between 2 and 0.8. (authors)
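
A single-channel version of the Poisson idea can be sketched as an exact tail test of the observed count against the background-only hypothesis (rates, dwell time, and alarm threshold are illustrative; the paper's test additionally exploits temporal correlation across channels):

```python
import math

def poisson_sf(k, mu):
    """P(N > k) for N ~ Poisson(mu), by summing the pmf up to k."""
    cdf = sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k + 1))
    return 1.0 - cdf

def detect(count, background_rate, dwell, alpha=1e-3):
    """Alarm if the observed count in one dwell period is improbably high
    under the Poisson background-only hypothesis: P(N >= count) < alpha."""
    return poisson_sf(count - 1, background_rate * dwell) < alpha

# hypothetical channel: 50 cps background, 0.5 s dwell -> mean 25 counts
alarm_bg = detect(count=28, background_rate=50.0, dwell=0.5)   # ordinary fluctuation
alarm_src = detect(count=60, background_rate=50.0, dwell=0.5)  # source passing by
```

Using the known Poisson form of the counting noise, rather than empirically estimated means and variances, is what gives the proposed test its stability at low signal-to-noise ratios.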

  7. GSMA: Gene Set Matrix Analysis, An Automated Method for Rapid Hypothesis Testing of Gene Expression Data

    Directory of Open Access Journals (Sweden)

    Chris Cheadle

    2007-01-01

    Full Text Available Background: Microarray technology has become highly valuable for identifying complex global changes in gene expression patterns. The assignment of functional information to these complex patterns remains a challenging task in effectively interpreting data and correlating results across experiments, projects and laboratories. Methods which allow the rapid and robust evaluation of multiple functional hypotheses increase the power of individual researchers to mine gene expression data more efficiently. Results: We have developed gene set matrix analysis (GSMA) as a useful method for the rapid testing of group-wise up- or downregulation of gene expression simultaneously for multiple lists of genes (gene sets) against entire distributions of gene expression changes (datasets) for single or multiple experiments. The utility of GSMA lies in its flexibility to rapidly poll gene sets, related by known biological function or designated solely by the end-user, against large numbers of datasets simultaneously. Conclusions: GSMA provides a simple and straightforward method for hypothesis testing in which genes are tested by groups across multiple datasets for patterns of expression enrichment.
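A minimal sketch of the group-wise idea, polling one gene set against a dataset's full distribution of expression changes, can be written with a rank-sum z-score. This is an illustrative stand-in, not the published GSMA implementation, and the gene-set values are invented:

```python
import math

def gene_set_zscore(set_changes, dataset_changes):
    """Do the gene set's expression changes rank higher than the rest of the
    dataset? Normal-approximation Mann-Whitney z-score (assumes no tied values)."""
    pooled = sorted(dataset_changes + set_changes)
    ranks = {v: i + 1 for i, v in enumerate(pooled)}
    n1, n2 = len(set_changes), len(dataset_changes)
    r1 = sum(ranks[v] for v in set_changes)         # rank sum of the gene set
    u = r1 - n1 * (n1 + 1) / 2                      # Mann-Whitney U statistic
    mu = n1 * n2 / 2                                # null mean of U
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12) # null standard deviation of U
    return (u - mu) / sigma

# Invented log-fold changes: a small upregulated set vs. a background dataset.
z_up = gene_set_zscore([2.0, 2.5, 3.0], [-1.0, -0.5, 0.0, 0.5, 1.0, 1.5])
```

Scoring many gene sets against many datasets is then just a double loop over such calls, which is the "matrix" in gene set matrix analysis.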

  8. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.

    2014-12-15

    This paper considers multi-agent sequential hypothesis testing and presents a framework for strategic learning in sequential games with explicit consideration of both temporal and spatial coordination. The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to (a) the belief states for the case of conditional independent private noisy measurements that are also assumed to be independent identically distributed over time, and (b) the information states for the case of correlated private noisy measurements. A sequential investment game of strategic coordination and delay is also discussed as an application of the proposed strategic learning rules.

  9. Biostatistics series module 2: Overview of hypothesis testing

    Directory of Open Access Journals (Sweden)

    Avijit Hazra

    2016-01-01

    Hypothesis testing (or statistical inference) is one of the major applications of biostatistics. Much of medical research begins with a research question that can be framed as a hypothesis. Inferential statistics begins with a null hypothesis that reflects the conservative position of no change or no difference in comparison to baseline or between groups. Usually, the researcher has reason to believe that there is some effect or some difference, which is the alternative hypothesis. The researcher therefore proceeds to study samples and measure outcomes in the hope of generating evidence strong enough for the statistician to be able to reject the null hypothesis. The concept of the P value is almost universally used in hypothesis testing. It denotes the probability of obtaining by chance a result at least as extreme as that observed, even when the null hypothesis is true and no real difference exists. Usually, if P is < 0.05 the null hypothesis is rejected and sample results are deemed statistically significant. With the increasing availability of computers and access to specialized statistical software, the drudgery involved in statistical calculations is now a thing of the past, once the learning curve of the software has been traversed. The life sciences researcher is therefore free to devote themselves to optimally designing the study, carefully selecting the hypothesis tests to be applied, and taking care in conducting the study well. Unfortunately, selecting the right test seems difficult initially. Thinking of the research hypothesis as addressing one of five generic research questions helps in selection of the right hypothesis test. In addition, it is important to be clear about the nature of the variables (e.g., numerical vs. categorical; parametric vs. nonparametric) and the number of groups or data sets being compared (e.g., two, or more than two) at a time. The same research question may be explored by more than one type of hypothesis test
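The definition of the P value as the probability of obtaining by chance a result at least as extreme as that observed can be made concrete with a permutation test, which computes that probability directly by relabeling groups (the sample measurements below are invented):

```python
import random

def permutation_pvalue(a, b, n_perm=10000, seed=0):
    """Two-sided permutation test: the P value is the fraction of random
    relabelings whose absolute mean difference is at least as extreme
    as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one smoothing avoids p = 0

# Invented baseline vs. treatment measurements:
p_diff = permutation_pvalue([5.1, 5.3, 4.9, 5.2, 5.0], [6.0, 6.2, 5.9, 6.1, 6.3])
p_null = permutation_pvalue([5.1, 5.3, 4.9, 5.2, 5.0], [5.0, 5.2, 5.1, 4.9, 5.3])
```

With clearly separated groups the P value falls below 0.05; with exchangeable groups it approaches 1, matching the "no real difference" reading of the null hypothesis.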

  10. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.; Shamma, Jeff S.

    2014-01-01

    incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well

  11. The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective.

    Science.gov (United States)

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.
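As a sketch of "estimation with quantified uncertainty" in the Bayesian sense, the following computes a credible interval for a proportion by grid integration of the Beta posterior under a uniform prior. It illustrates the approach the article advocates, not the authors' software, and the counts are invented:

```python
import math

def beta_credible_interval(successes, failures, lo_q=0.025, hi_q=0.975, grid=20001):
    """Posterior for a proportion under a uniform Beta(1,1) prior is
    Beta(successes+1, failures+1); return a central 95% credible interval
    by numerically accumulating the density on a grid."""
    a, b = successes + 1, failures + 1
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    xs = [i / (grid - 1) for i in range(grid)]
    dens = [math.exp(log_norm + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))
            if 0.0 < x < 1.0 else 0.0 for x in xs]
    total = sum(dens)
    cum, lo, hi = 0.0, None, None
    for x, d in zip(xs, dens):
        cum += d / total
        if lo is None and cum >= lo_q:
            lo = x
        if hi is None and cum >= hi_q:
            hi = x
    return lo, hi

# Invented data: 18 successes out of 20 trials.
lo, hi = beta_credible_interval(18, 2)
```

The interval carries the full uncertainty about the proportion, rather than a reject/fail-to-reject decision, which is the shift of emphasis the article describes.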

  12. A more powerful test based on ratio distribution for retention noninferiority hypothesis.

    Science.gov (United States)

    Deng, Ling; Chen, Gang

    2013-03-11

    Rothmann et al. (2003) proposed a method for the statistical inference of the fraction retention noninferiority (NI) hypothesis. A fraction retention hypothesis is defined as a ratio of the new treatment effect versus the control effect in the context of a time-to-event endpoint. One of the major concerns using this method in the design of an NI trial is that with a limited sample size, the power of the study is usually very low. This makes an NI trial impractical, particularly with a time-to-event endpoint. To improve power, Wang et al. (2006) proposed a ratio test based on asymptotic normality theory. Under a strong assumption (equal variance of the NI test statistic under the null and alternative hypotheses), the sample size using Wang's test was much smaller than that using Rothmann's test. However, in practice, the assumption of equal variance is generally questionable for an NI trial design. This assumption is removed in the ratio test proposed in this article, which is derived directly from a Cauchy-like ratio distribution. In addition, using this method, the fundamental assumption used in Rothmann's test, that the observed control effect is always positive, that is, that the observed hazard ratio for placebo over the control is greater than 1, is no longer necessary. Without assuming equal variance under the null and alternative hypotheses, the sample size required for an NI trial can be significantly reduced by using the proposed ratio test for a fraction retention NI hypothesis.

  13. Personal Hypothesis Testing: The Role of Consistency and Self-Schema.

    Science.gov (United States)

    Strohmer, Douglas C.; And Others

    1988-01-01

    Studied how individuals test hypotheses about themselves. Examined extent to which Snyder's bias toward confirmation persists when negative or nonconsistent personal hypothesis is tested. Found negativity or positivity did not affect hypothesis testing directly, though hypothesis consistency did. Found cognitive schematic variable (vulnerability…

  14. Null but not void: considerations for hypothesis testing.

    Science.gov (United States)

    Shaw, Pamela A; Proschan, Michael A

    2013-01-30

    Standard statistical theory teaches us that once the null and alternative hypotheses have been defined for a parameter, the choice of the statistical test is clear. Standard theory does not teach us how to choose the null or alternative hypothesis appropriate to the scientific question of interest. Neither does it tell us that in some cases, depending on which alternatives are realistic, we may want to define our null hypothesis differently. Problems in statistical practice are frequently not as pristinely summarized as the classic theory in our textbooks. In this article, we present examples in statistical hypothesis testing in which seemingly simple choices are in fact rich with nuance that, when given full consideration, make the choice of the right hypothesis test much less straightforward. Published 2012. This article is a US Government work and is in the public domain in the USA.

  15. Error probabilities in default Bayesian hypothesis testing

    NARCIS (Netherlands)

    Gu, Xin; Hoijtink, Herbert; Mulder, J.

    2016-01-01

    This paper investigates the classical type I and type II error probabilities of default Bayes factors for a Bayesian t test. Default Bayes factors quantify the relative evidence between the null hypothesis and the unrestricted alternative hypothesis without needing to specify prior distributions for

  16. Hypothesis testing of a change point during cognitive decline among Alzheimer's disease patients.

    Science.gov (United States)

    Ji, Ming; Xiong, Chengjie; Grundman, Michael

    2003-10-01

    In this paper, we present a statistical hypothesis test for detecting a change point over the course of cognitive decline among Alzheimer's disease patients. The model under the null hypothesis assumes a constant rate of cognitive decline over time, and the model under the alternative hypothesis is a general bilinear model with an unknown change point. When the change point is unknown, however, the null distribution of the test statistic is not analytically tractable and has to be simulated by parametric bootstrap. When the alternative hypothesis that a change point exists is accepted, we propose an estimate of its location based on Akaike's Information Criterion. We applied our method to a data set from the Neuropsychological Database Initiative by implementing our hypothesis testing method to analyze Mini Mental Status Exam scores based on a random-slope and random-intercept model with a bilinear fixed effect. Our results show that despite a large amount of missing data, accelerated decline did occur for the MMSE among AD patients. Our finding supports the clinical belief in the existence of a change point during cognitive decline among AD patients and suggests the use of change point models for the longitudinal modeling of cognitive decline in AD research.
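The null-versus-bilinear comparison with a parametric-bootstrap null distribution can be sketched as follows. This simplified version fits two unconstrained segments over all candidate change points and ignores the random effects and missing-data handling in the paper; the score series is invented:

```python
import math
import random

def fit_line(xs, ys):
    """Ordinary least squares; returns intercept, slope, residual sum of squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    rss = sum((y - intercept - slope * x) ** 2 for x, y in zip(xs, ys))
    return intercept, slope, rss

def bilinear_rss(xs, ys):
    """Best fit among two-segment models over all candidate change points."""
    best = float("inf")
    for k in range(3, len(xs) - 2):   # at least 3 points per segment
        best = min(best, fit_line(xs[:k], ys[:k])[2] + fit_line(xs[k:], ys[k:])[2])
    return best

def change_point_pvalue(xs, ys, n_boot=200, seed=1):
    """Parametric bootstrap of the null (one constant rate of decline):
    simulate from the fitted line and see how often the proportional RSS
    improvement from adding a change point is as large as observed."""
    a0, b0, rss0 = fit_line(xs, ys)
    stat_obs = (rss0 - bilinear_rss(xs, ys)) / rss0
    sigma = math.sqrt(rss0 / (len(xs) - 2))
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_boot):
        sim = [a0 + b0 * x + rng.gauss(0, sigma) for x in xs]
        r0 = fit_line(xs, sim)[2]
        if (r0 - bilinear_rss(xs, sim)) / r0 >= stat_obs:
            hits += 1
    return (hits + 1) / (n_boot + 1)

# Invented cognitive scores: stable, then accelerating decline after visit 6.
xs = list(range(12))
ys = [10.0, 10.1, 9.9, 10.0, 10.1, 9.9, 9.0, 8.1, 6.9, 6.0, 5.1, 3.9]
p_cp = change_point_pvalue(xs, ys)
```

The split index with the smallest two-segment RSS doubles as a rough change-point estimate, playing the role the AIC-based estimate plays in the paper.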

  17. Tests of the Giant Impact Hypothesis

    Science.gov (United States)

    Jones, J. H.

    1998-01-01

    The giant impact hypothesis has gained popularity as a means of explaining a volatile-depleted Moon that still has a chemical affinity to the Earth. As Taylor's Axiom decrees, the best models of lunar origin are testable, but this is difficult with the giant impact model. The energy associated with the impact would be sufficient to totally melt and partially vaporize the Earth. And this means that there should be no geological vestige of earlier times. Accordingly, it is important to devise tests that may be used to evaluate the giant impact hypothesis. Three such tests are discussed here. None of these is supportive of the giant impact model, but neither do they disprove it.

  18. The potential for increased power from combining P-values testing the same hypothesis.

    Science.gov (United States)

    Ganju, Jitendra; Julie Ma, Guoguang

    2017-02-01

    The conventional approach to hypothesis testing for formal inference is to prespecify a single test statistic thought to be optimal. However, we usually have more than one test statistic in mind for testing the null hypothesis of no treatment effect but we do not know which one is the most powerful. Rather than relying on a single p-value, combining p-values from prespecified multiple test statistics can be used for inference. Combining functions include Fisher's combination test and the minimum p-value. Using randomization-based tests, the increase in power can be remarkable when compared with a single test and Simes's method. The versatility of the method is that it also applies when the number of covariates exceeds the number of observations. The increase in power is large enough to prefer combined p-values over a single p-value. The limitation is that the method does not provide an unbiased estimator of the treatment effect and does not apply to situations when the model includes treatment by covariate interaction.
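Fisher's combination and the minimum-p function are both easy to compute when the p-values are independent (the paper's randomization-based version relaxes this assumption); the closed-form chi-square tail below uses the fact that the degrees of freedom, twice the number of p-values, are even. The three p-values are invented:

```python
import math

def fisher_combined_pvalue(pvals):
    """Fisher's combination: X = -2 * sum(ln p_i) is chi-square with 2m
    degrees of freedom under the null; for even df the survival function
    has the closed form exp(-x/2) * sum_{i<m} (x/2)^i / i!."""
    m = len(pvals)
    x = -2.0 * sum(math.log(p) for p in pvals)
    return math.exp(-x / 2) * sum((x / 2) ** i / math.factorial(i) for i in range(m))

def min_p_bonferroni(pvals):
    """The minimum-p combining function with a Bonferroni adjustment."""
    return min(1.0, len(pvals) * min(pvals))

# Invented p-values from three prespecified test statistics:
pvals = [0.04, 0.06, 0.10]
p_fisher = fisher_combined_pvalue(pvals)
p_minp = min_p_bonferroni(pvals)
```

Here three individually marginal p-values combine to roughly 0.011 under Fisher's method, stronger than the Bonferroni-adjusted minimum p of 0.12, illustrating why combining can beat relying on a single statistic.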

  19. Testing competing forms of the Milankovitch hypothesis

    DEFF Research Database (Denmark)

    Kaufmann, Robert K.; Juselius, Katarina

    2016-01-01

    We test competing forms of the Milankovitch hypothesis by estimating the coefficients and diagnostic statistics for a cointegrated vector autoregressive model that includes 10 climate variables and four exogenous variables for solar insolation. The estimates are consistent with the physical...... ice volume and solar insolation. The estimated adjustment dynamics show that solar insolation affects an array of climate variables other than ice volume, each at a unique rate. This implies that previous efforts to test the strong form of the Milankovitch hypothesis by examining the relationship...... that the latter is consistent with a weak form of the Milankovitch hypothesis and that it should be restated as follows: Internal climate dynamics impose perturbations on glacial cycles that are driven by solar insolation. Our results show that these perturbations are likely caused by slow adjustment between land...

  20. A Review of Multiple Hypothesis Testing in Otolaryngology Literature

    Science.gov (United States)

    Kirkham, Erin M.; Weaver, Edward M.

    2018-01-01

    Objective Multiple hypothesis testing (or multiple testing) refers to testing more than one hypothesis within a single analysis, and can inflate the Type I error rate (false positives) within a study. The aim of this review was to quantify multiple testing in recent large clinical studies in the otolaryngology literature and to discuss strategies to address this potential problem. Data sources Original clinical research articles with >100 subjects published in 2012 in the four general otolaryngology journals with the highest Journal Citation Reports 5-year impact factors. Review methods Articles were reviewed to determine whether the authors tested more than five hypotheses in at least one family of inferences. For the articles meeting this criterion for multiple testing, Type I error rates were calculated and statistical correction was applied to the reported results. Results Of the 195 original clinical research articles reviewed, 72% met the criterion for multiple testing. Within these studies, there was a mean 41% chance of a Type I error and, on average, 18% of significant results were likely to be false positives. After the Bonferroni correction was applied, only 57% of significant results reported within the articles remained significant. Conclusion Multiple testing is common in recent large clinical studies in otolaryngology and deserves closer attention from researchers, reviewers and editors. Strategies for adjusting for multiple testing are discussed. PMID:25111574
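The inflation described above, and the Bonferroni remedy applied in the review, follow from elementary calculations; note that a familywise error rate near the review's 41% figure corresponds to roughly ten independent tests at alpha = 0.05 (the p-values below are invented):

```python
def familywise_error_rate(alpha, m):
    """Chance of at least one false positive among m independent tests,
    each run at level alpha."""
    return 1.0 - (1.0 - alpha) ** m

def bonferroni_reject(pvals, alpha=0.05):
    """Bonferroni correction: compare each p-value against alpha / m."""
    threshold = alpha / len(pvals)
    return [p <= threshold for p in pvals]

fwer_10 = familywise_error_rate(0.05, 10)       # about 0.40 for ten tests
decisions = bonferroni_reject([0.001, 0.02, 0.2])
```

The second result also shows how Bonferroni prunes borderline findings: 0.02 survives an uncorrected 0.05 threshold but not the corrected 0.05/3.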

  1. A test of the orthographic recoding hypothesis

    Science.gov (United States)

    Gaygen, Daniel E.

    2003-04-01

    The Orthographic Recoding Hypothesis [D. E. Gaygen and P. A. Luce, Percept. Psychophys. 60, 465-483 (1998)] was tested. According to this hypothesis, listeners recognize spoken words heard for the first time by mapping them onto stored representations of the orthographic forms of the words. Listeners have a stable orthographic representation of words, but no phonological representation, when those words have been read frequently but never heard or spoken. Such may be the case for low frequency words such as jargon. Three experiments using visually and auditorily presented nonword stimuli tested this hypothesis. The first two experiments were explicit tests of memory (old-new tests) for words presented visually. In the first experiment, the recognition of auditorily presented nonwords was facilitated when they previously appeared on a visually presented list. The second experiment was similar, but included a concurrent articulation task during a visual word list presentation, thus preventing covert rehearsal of the nonwords. The results were similar to the first experiment. The third experiment was an indirect test of memory (auditory lexical decision task) for visually presented nonwords. Auditorily presented nonwords were identified as nonwords significantly more slowly if they had previously appeared on the visually presented list accompanied by a concurrent articulation task.

  2. Sex ratios in the two Germanies: a test of the economic stress hypothesis.

    Science.gov (United States)

    Catalano, Ralph A

    2003-09-01

    Literature describing temporal variation in the secondary sex ratio among humans reports an association between population stressors and declines in the odds of male birth. Explanations of this phenomenon draw on reports that stressed females spontaneously abort male more than female fetuses, and that stressed males exhibit reduced sperm motility. This work has led to the argument that population stress induced by a declining economy reduces the human sex ratio. No direct test of this hypothesis appears in the literature. Here, a test is offered based on a comparison of the sex ratio in East and West Germany for the years 1946 to 1999. The theory suggests that the East German sex ratio should be lower in 1991, when East Germany's economy collapsed, than expected from its own history and from the sex ratio in West Germany. The hypothesis is tested using time-series modelling methods. The data support the hypothesis. The sex ratio in East Germany was at its lowest in 1991. This first direct test supports the hypothesis that economic decline reduces the human sex ratio.

  3. A shift from significance test to hypothesis test through power analysis in medical research.

    Science.gov (United States)

    Singh, G

    2006-01-01

    Medical research literature, until recently, exhibited substantial dominance of Fisher's significance test approach to statistical inference, concentrating on the probability of type I error, over Neyman-Pearson's hypothesis test approach, which considers the probabilities of both type I and type II error. Fisher's approach dichotomises results into significant or not significant with a P value. The Neyman-Pearson approach talks of acceptance or rejection of the null hypothesis. Based on the same theory, these two approaches address the same objective and conclude in their own ways. Advances in computing techniques and the availability of statistical software have resulted in the increasing application of power calculations in medical research, and thereby in reporting the results of significance tests in the light of the power of the test as well. The significance test approach, when it incorporates power analysis, contains the essence of the hypothesis test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to Neyman-Pearson's hypothesis test procedure.
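A power analysis of the Neyman-Pearson kind can be sketched with a normal-approximation power function and a search for the required sample size; the half-standard-deviation effect size below is a hypothetical example:

```python
import math

def power_two_sample(n, delta, sigma, z_alpha=1.959964):
    """Approximate power of a two-sided two-sample z-test with n per group;
    z_alpha is the critical value for alpha = 0.05 (two-sided)."""
    phi = lambda t: 0.5 * math.erfc(-t / math.sqrt(2))   # standard normal CDF
    z = delta / (sigma * math.sqrt(2.0 / n))             # noncentrality
    return phi(z - z_alpha) + phi(-z - z_alpha)

def required_n(delta, sigma, target_power=0.8):
    """Smallest per-group n whose power reaches the target."""
    n = 2
    while power_two_sample(n, delta, sigma) < target_power:
        n += 1
    return n

n_needed = required_n(0.5, 1.0)   # detect a half-standard-deviation difference
```

The classic textbook result that such a difference needs about 63 subjects per group for 80% power at alpha = 0.05 falls out of the search, tying the type I and type II error probabilities together as Neyman-Pearson intended.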

  4. A large scale test of the gaming-enhancement hypothesis

    Directory of Open Access Journals (Sweden)

    Andrew K. Przybylski

    2016-11-01

    A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people’s gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses were compared. Results provided no substantive evidence supporting the idea that having preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support for the null hypothesis over what was predicted. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work.

  5. A large scale test of the gaming-enhancement hypothesis.

    Science.gov (United States)

    Przybylski, Andrew K; Wang, John C

    2016-01-01

    A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people's gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses were compared. Results provided no substantive evidence supporting the idea that having preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support for the null hypothesis over what was predicted. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work.
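A crude way to convey the Bayes-factor logic of weighing a null against an alternative, though not the default Bayes factors typically used in such studies, is a BIC-based approximation in the style of Wagenmakers' BIC Bayes factor. The data below are invented:

```python
import math
import statistics

def bic_bayes_factor_null(data):
    """Approximate Bayes factor BF01 (evidence for the null 'mean = 0' over
    the alternative 'mean free') via BF01 ~ exp((BIC1 - BIC0) / 2), using
    Gaussian likelihoods with maximum-likelihood variance."""
    n = len(data)
    rss0 = sum(x * x for x in data)                 # model 0: mean fixed at 0
    m = statistics.fmean(data)
    rss1 = sum((x - m) ** 2 for x in data)          # model 1: mean estimated
    bic0 = n * math.log(rss0 / n) + 1 * math.log(n)  # one free parameter (sigma)
    bic1 = n * math.log(rss1 / n) + 2 * math.log(n)  # two (mean and sigma)
    return math.exp((bic1 - bic0) / 2)

# Invented effect measurements centered near zero, and the same data shifted.
near_zero = [0.1, -0.2, 0.05, 0.15, -0.1, 0.0, 0.2, -0.15, 0.1, -0.05]
bf01_null = bic_bayes_factor_null(near_zero)
bf01_shift = bic_bayes_factor_null([x + 1.0 for x in near_zero])
```

Unlike a p-value, the Bayes factor can accumulate positive evidence for the null, which is exactly the kind of conclusion the study reports.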

  6. Bayesian models based on test statistics for multiple hypothesis testing problems.

    Science.gov (United States)

    Ji, Yuan; Lu, Yiling; Mills, Gordon B

    2008-04-01

    We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
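The paper's Bayesian FDR approach differs from frequentist corrections, but the idea of controlling the number of rejections across many null hypotheses can be illustrated with the standard Benjamini-Hochberg step-up procedure (example p-values invented):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Step-up BH procedure controlling the expected false discovery rate at q:
    reject the k smallest p-values, where k is the largest rank with
    p_(k) <= q * k / m."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k_max = rank
    rejected = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            rejected[i] = True
    return rejected

# Invented p-values from five differential-expression tests:
decisions = benjamini_hochberg([0.001, 0.01, 0.02, 0.03, 0.5])
```

Note the step-up behavior: 0.03 is rejected because enough smaller p-values precede it, even though it exceeds the Bonferroni threshold of 0.01.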

  7. A shift from significance test to hypothesis test through power analysis in medical research

    Directory of Open Access Journals (Sweden)

    Singh Girish

    2006-01-01

    Medical research literature, until recently, exhibited substantial dominance of Fisher's significance test approach to statistical inference, concentrating on the probability of type I error, over Neyman-Pearson's hypothesis test approach, which considers the probabilities of both type I and type II error. Fisher's approach dichotomises results into significant or not significant with a P value. The Neyman-Pearson approach talks of acceptance or rejection of the null hypothesis. Based on the same theory, these two approaches address the same objective and conclude in their own ways. Advances in computing techniques and the availability of statistical software have resulted in the increasing application of power calculations in medical research, and thereby in reporting the results of significance tests in the light of the power of the test as well. The significance test approach, when it incorporates power analysis, contains the essence of the hypothesis test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to Neyman-Pearson's hypothesis test procedure.

  8. Testing the Granger noncausality hypothesis in stationary nonlinear models of unknown functional form

    DEFF Research Database (Denmark)

    Péguin-Feissolle, Anne; Strikholm, Birgit; Teräsvirta, Timo

    In this paper we propose a general method for testing the Granger noncausality hypothesis in stationary nonlinear models of unknown functional form. These tests are based on a Taylor expansion of the nonlinear model around a given point in the sample space. We study the performance of our tests b...

  9. Test of the Brink-Axel Hypothesis for the Pygmy Dipole Resonance

    Science.gov (United States)

    Martin, D.; von Neumann-Cosel, P.; Tamii, A.; Aoi, N.; Bassauer, S.; Bertulani, C. A.; Carter, J.; Donaldson, L.; Fujita, H.; Fujita, Y.; Hashimoto, T.; Hatanaka, K.; Ito, T.; Krugmann, A.; Liu, B.; Maeda, Y.; Miki, K.; Neveling, R.; Pietralla, N.; Poltoratska, I.; Ponomarev, V. Yu.; Richter, A.; Shima, T.; Yamamoto, T.; Zweidinger, M.

    2017-11-01

    The gamma strength function and level density of 1⁻ states in 96Mo have been extracted from a high-resolution study of the (p⃗,p⃗′) reaction at 295 MeV and extreme forward angles. By comparison with compound nucleus γ-decay experiments, this allows a test of the generalized Brink-Axel hypothesis in the energy region of the pygmy dipole resonance. The Brink-Axel hypothesis is commonly assumed in astrophysical reaction network calculations and states that the gamma strength function in nuclei is independent of the structure of the initial and final state. The present results validate the Brink-Axel hypothesis for 96Mo and provide independent confirmation of the methods used to separate gamma strength function and level density in γ-decay experiments.

  10. An Exercise for Illustrating the Logic of Hypothesis Testing

    Science.gov (United States)

    Lawton, Leigh

    2009-01-01

    Hypothesis testing is one of the more difficult concepts for students to master in a basic, undergraduate statistics course. Students often are puzzled as to why statisticians simply don't calculate the probability that a hypothesis is true. This article presents an exercise that forces students to lay out on their own a procedure for testing a…

  11. The Method of Hypothesis in Plato's Philosophy

    Directory of Open Access Journals (Sweden)

    Malihe Aboie Mehrizi

    2016-09-01

    The article examines the method of hypothesis in Plato's philosophy. The method is examined in turn in three dialogues, Meno, Phaedon, and Republic, in which it is explicitly indicated, tracing the change in Plato's attitude towards the position and use of the method of hypothesis within his philosophy. In Meno, drawing on geometry, Plato attempts to introduce a method that can be used in the realm of philosophy. But ultimately, in Republic, Plato's special attention to the method and its importance in philosophical investigations leads him to revise it. Here Plato finally introduces the particular method of philosophy, i.e., the dialectic.

  12. Counselor Hypothesis Testing Strategies: The Role of Initial Impressions and Self-Schema.

    Science.gov (United States)

    Strohmer, Douglas C.; Chiodo, Anthony L.

    1984-01-01

    Presents two experiments concerning confirmatory bias in the way counselors collect data to test their hypotheses. Counselors were asked either to develop their own clinical hypothesis or were given a hypothesis to test. Confirmatory bias in hypothesis testing was not supported in either experiment. (JAC)

  13. The Harm Done to Reproducibility by the Culture of Null Hypothesis Significance Testing.

    Science.gov (United States)

    Lash, Timothy L

    2017-09-15

    In the last few years, stakeholders in the scientific community have raised alarms about a perceived lack of reproducibility of scientific results. In reaction, guidelines for journals have been promulgated and grant applicants have been asked to address the rigor and reproducibility of their proposed projects. Neither solution addresses a primary culprit, which is the culture of null hypothesis significance testing that dominates statistical analysis and inference. In an innovative research enterprise, selection of results for further evaluation based on null hypothesis significance testing is doomed to yield a low proportion of reproducible results and a high proportion of effects that are initially overestimated. In addition, the culture of null hypothesis significance testing discourages quantitative adjustments to account for systematic errors and quantitative incorporation of prior information. These strategies would otherwise improve reproducibility and have not been previously proposed in the widely cited literature on this topic. Without discarding the culture of null hypothesis significance testing and implementing these alternative methods for statistical analysis and inference, all other strategies for improving reproducibility will yield marginal gains at best. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  14. Improving the space surveillance telescope's performance using multi-hypothesis testing

    Energy Technology Data Exchange (ETDEWEB)

    Chris Zingarelli, J.; Cain, Stephen [Air Force Institute of Technology, 2950 Hobson Way, Bldg 641, Wright Patterson AFB, OH 45433 (United States); Pearce, Eric; Lambour, Richard [Lincoln Laboratory, Massachusetts Institute of Technology, 244 Wood Street, Lexington, MA 02421 (United States); Blake, Travis [Defense Advanced Research Projects Agency, 675 North Randolph Street, Arlington, VA 22203 (United States); Peterson, Curtis J. R., E-mail: John.Zingarelli@afit.edu [United States Air Force, 1690 Air Force Pentagon, Washington, DC 20330 (United States)]

    2014-05-01

    The Space Surveillance Telescope (SST) is a Defense Advanced Research Projects Agency program designed to detect objects in space, like near-Earth asteroids and space debris in the geosynchronous Earth orbit (GEO) belt. Binary hypothesis test (BHT) methods have historically been used to facilitate the detection of new objects in space. In this paper a multi-hypothesis detection strategy is introduced to improve the detection performance of the SST. In this context, the multi-hypothesis test (MHT) determines whether an unresolvable point source is in the center, a corner, or a side of a pixel, in contrast to the BHT, which only tests whether an object is in the pixel or not. The images recorded by the SST are undersampled enough to cause aliasing, which degrades the performance of traditional detection schemes. The equations for the MHT are derived in terms of signal-to-noise ratio (S/N), computed by subtracting the background light level around the pixel being tested and dividing by the standard deviation of the noise. A new method for determining the local noise statistics that rejects outliers is introduced in combination with the MHT. An experiment using observations of a known GEO satellite is used to demonstrate the improved detection performance of the new algorithm over algorithms previously reported in the literature. The results show a significant improvement in the probability of detection, by as much as 50%, over existing algorithms. In addition to detection, the S/N results prove to be linearly related to the least-squares estimates of point-source irradiance, thus improving photometric accuracy.
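The S/N statistic described, background-subtracted and noise-normalized with outlier-rejecting local statistics, can be sketched with a median/MAD estimator. This is an illustrative stand-in for the paper's method, not the SST pipeline, and the pixel values are invented:

```python
import statistics

def robust_snr(pixel, neighborhood):
    """S/N = (pixel - local background) / local noise sigma, with background
    and sigma taken from median/MAD estimates so bright outliers in the
    neighborhood (e.g. stars) do not bias the statistics. For even-length
    neighborhoods the upper median is used, which suffices for a sketch."""
    values = sorted(neighborhood)
    med = values[len(values) // 2]
    mad = sorted(abs(v - med) for v in values)[len(values) // 2]
    return (pixel - med) / (1.4826 * mad)   # 1.4826 scales MAD to Gaussian sigma

# A faint source (25 counts) over a ~10-count background with one bright star (200):
patch = [10, 11, 9, 10, 12, 10, 9, 11, 10, 200]
snr_robust = robust_snr(25, patch)
snr_naive = (25 - statistics.mean(patch)) / statistics.pstdev(patch)
```

A single bright neighbor wrecks the naive mean/standard-deviation S/N but leaves the robust version intact, which is why outlier rejection matters for the detection threshold.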

  15. Robust and distributed hypothesis testing

    CERN Document Server

    Gül, Gökhan

    2017-01-01

    This book generalizes and extends the available theory in robust and decentralized hypothesis testing. In particular, it presents a robust test for modeling errors which is independent of the assumptions that a sufficiently large number of samples is available and that the distance is the KL-divergence. Here, the distance can be chosen from a much more general model, which includes the KL-divergence as a very special case. This is then extended by various means. A minimax robust test that is robust against both outliers and modeling errors is presented. Minimax robustness properties of the given tests are also explicitly proven for fixed sample size and sequential probability ratio tests. The theory of robust detection is extended to robust estimation, and the theory of robust distributed detection is extended to classes of distributions which are not necessarily stochastically bounded. It is shown that the quantization functions for the decision rules can also be chosen as non-monotone. Finally, the boo...

  16. Statistical hypothesis testing with SAS and R

    CERN Document Server

    Taeger, Dirk

    2014-01-01

    A comprehensive guide to statistical hypothesis testing with examples in SAS and R. When analyzing datasets the following questions often arise: Is there a shorthand procedure for a statistical test available in SAS or R? If so, how do I use it? If not, how do I program the test myself? This book answers these questions and provides an overview of the most common statistical test problems in a comprehensive way, making it easy to find and perform an appropriate statistical test. A general summary of statistical test theory is presented, along with a basic description for each test, including the

  17. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2013-08-01

    Simultaneous Localization and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a unique camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on the Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hypothesis Compatibility Test (HOHCT). The Delayed Inverse-Depth technique is used to initialize new features in the system and defines a single hypothesis for the initial depth of features with the use of a stochastic technique of triangulation. The introduced HOHCT method is based on the evaluation of statistically compatible hypotheses and a search algorithm designed to exploit the strengths of the Delayed Inverse-Depth technique to achieve good performance results. This work presents the HOHCT with a detailed formulation of the monocular DI-D SLAM problem. The performance of the proposed HOHCT is validated with experimental results, in both indoor and outdoor environments, while its costs are compared with other popular approaches.

  18. Water Pollution Detection Based on Hypothesis Testing in Sensor Networks

    Directory of Open Access Journals (Sweden)

    Xu Luo

    2017-01-01

    Water pollution detection is of great importance in water conservation. In this paper, the water pollution detection problems of the network and of the node in sensor networks are discussed. The detection problems are considered for both normal and nonnormal distributions of the monitoring noise. The pollution detection problems are first analyzed based on hypothesis testing theory; then, specific detection algorithms are given. Finally, two implementation examples illustrate how the proposed detection methods are used for water pollution detection in sensor networks and demonstrate their effectiveness.

  19. Explorations in Statistics: Hypothesis Tests and P Values

    Science.gov (United States)

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of "Explorations in Statistics" delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what…

  20. Different meaning of the p-value in exploratory and confirmatory hypothesis testing

    DEFF Research Database (Denmark)

    Gerke, Oke; Høilund-Carlsen, Poul Flemming; Vach, Werner

    2011-01-01

    The outcome of clinical studies is often reduced to the statistical significance of results by indicating a p-value below the 5% significance level. Hypothesis testing and, through that, the p-value is commonly used, but their meaning is frequently misinterpreted in clinical research. The concept of hypothesis testing is explained and some pitfalls, including those of multiple testing, are given. The conceptual difference between exploratory and confirmatory hypothesis testing is discussed, and a better use of p-values, which includes presenting p-values with two or three decimals, is suggested.

  1. The frequentist implications of optional stopping on Bayesian hypothesis tests.

    Science.gov (United States)

    Sanborn, Adam N; Hills, Thomas T

    2014-04-01

    Null hypothesis significance testing (NHST) is the most commonly used statistical methodology in psychology. The probability of achieving a value as extreme or more extreme than the statistic obtained from the data is evaluated, and if it is low enough, the null hypothesis is rejected. However, because common experimental practice often clashes with the assumptions underlying NHST, these calculated probabilities are often incorrect. Most commonly, experimenters use tests that assume that sample sizes are fixed in advance of data collection but then use the data to determine when to stop; in the limit, experimenters can use data monitoring to guarantee that the null hypothesis will be rejected. Bayesian hypothesis testing (BHT) provides a solution to these ills because the stopping rule used is irrelevant to the calculation of a Bayes factor. In addition, there are strong mathematical guarantees on the frequentist properties of BHT that are comforting for researchers concerned that stopping rules could influence the Bayes factors produced. Here, we show that these guaranteed bounds have limited scope and often do not apply in psychological research. Specifically, we quantitatively demonstrate the impact of optional stopping on the resulting Bayes factors in two common situations: (1) when the truth is a combination of the hypotheses, such as in a heterogeneous population, and (2) when a hypothesis is composite (taking multiple parameter values), such as the alternative hypothesis in a t-test. We found that, for these situations, while the Bayesian interpretation remains correct regardless of the stopping rule used, the choice of stopping rule can, in some situations, greatly increase the chance of experimenters finding evidence in the direction they desire. We suggest ways to control these frequentist implications of stopping rules on BHT.
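
The optional-stopping problem the abstract describes is easy to demonstrate by simulation. The sketch below is a generic illustration, not the authors' analysis; the parameter choices are arbitrary. A true null is tested after every new observation, and the experiment stops at the first "significant" result:

```python
import numpy as np

def optional_stopping_rejection_rate(n_min=10, n_max=100, z_crit=1.96,
                                     n_experiments=500, seed=1):
    """Monte Carlo estimate of how often a true null hypothesis is
    rejected when a z-test (known sd = 1) is run after every new
    observation and sampling stops at the first 'significant' result."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_experiments):
        x = rng.normal(0.0, 1.0, n_max)   # H0 is true: mean 0
        sums = np.cumsum(x)
        for n in range(n_min, n_max + 1):
            z = sums[n - 1] / np.sqrt(n)  # z statistic at sample size n
            if abs(z) > z_crit:           # "p < .05" -> stop and report
                rejections += 1
                break
    return rejections / n_experiments
```

With these defaults the rejection rate comes out well above the nominal 5%, which is the Type I inflation the abstract refers to; a Bayes factor computed on the same data would be unaffected by the stopping rule (though, as the authors show, not unaffected in its frequentist behavior).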

  2. Cross-system log file analysis for hypothesis testing

    NARCIS (Netherlands)

    Glahn, Christian

    2008-01-01

    Glahn, C. (2008). Cross-system log file analysis for hypothesis testing. Presented at Empowering Learners for Lifelong Competence Development: pedagogical, organisational and technological issues. 4th TENCompetence Open Workshop. April 10, 2008, Madrid, Spain.

  3. Comments on the sequential probability ratio testing methods

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A. [Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics

    1996-07-01

    In this paper the classical sequential probability ratio test (SPRT) is reconsidered. Every individual boundary-crossing event of the SPRT is regarded as a new piece of evidence about the problem under hypothesis testing. The Bayes method is applied for belief updating, i.e. for integrating these individual decisions. The procedure is recommended when the user (1) would like to be informed about the tested hypothesis continuously and (2) would like to reach a final conclusion with a high confidence level. (Author).
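
A minimal sketch of the two ingredients the abstract combines: Wald's classical SPRT, and a Bayes update that treats each SPRT decision as a noisy binary observation. This is a generic illustration under assumed Gaussian observations, not the paper's implementation; the function names are hypothetical.

```python
import math

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.05, beta=0.05):
    """Wald's SPRT for the mean of Gaussian observations.

    Accumulates the log-likelihood ratio and stops at the classical
    boundaries log((1-beta)/alpha) and log(beta/(1-alpha)).
    """
    upper = math.log((1.0 - beta) / alpha)
    lower = math.log(beta / (1.0 - alpha))
    llr = 0.0
    for x in samples:
        llr += (x * (mu1 - mu0) - 0.5 * (mu1**2 - mu0**2)) / sigma**2
        if llr >= upper:
            return 'H1'
        if llr <= lower:
            return 'H0'
    return 'undecided'

def bayes_update(prior_h1, decision, alpha=0.05, beta=0.05):
    """Fold one SPRT decision into a running belief in H1, treating the
    decision as a binary observation with error rates alpha and beta
    (the belief-updating idea described in the abstract)."""
    if decision == 'H1':
        like_h1, like_h0 = 1.0 - beta, alpha
    elif decision == 'H0':
        like_h1, like_h0 = beta, 1.0 - alpha
    else:
        return prior_h1  # no boundary crossed, no evidence
    num = like_h1 * prior_h1
    return num / (num + like_h0 * (1.0 - prior_h1))
```

Running the SPRT repeatedly and folding each decision into the belief with `bayes_update` gives the continuously updated, high-confidence conclusion the abstract recommends.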

  4. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

    We propose a novel and efficient way of performing local image segmentation. For many applications a threshold of pixel intensities is sufficient, but determining the appropriate threshold value can be difficult. In cases with large global intensity variation the threshold value has to be adapted locally. We propose a method based on large-scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated … a microscope, and we show how the method can handle transparent particles with a significant glare point. The method generalizes to other problems. This is illustrated by applying the method to camera calibration images and MRI of the midsagittal plane for gray and white matter separation and segmentation

  5. P value and the theory of hypothesis testing: an explanation for new researchers.

    Science.gov (United States)

    Biau, David Jean; Jolles, Brigitte M; Porcher, Raphaël

    2010-03-01

    In the 1920s, Ronald Fisher developed the theory behind the p value and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true rather than the probability of obtaining the difference observed, or one that is more extreme, considering the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.

  6. A test of the reward-value hypothesis.

    Science.gov (United States)

    Smith, Alexandra E; Dalecki, Stefan J; Crystal, Jonathon D

    2017-03-01

    Rats retain source memory (memory for the origin of information) over a retention interval of at least 1 week, whereas their spatial working memory (radial maze locations) decays within approximately 1 day. We have argued that different forgetting functions dissociate memory systems. However, the two tasks, in our previous work, used different reward values. The source memory task used multiple pellets of a preferred food flavor (chocolate), whereas the spatial working memory task provided access to a single pellet of standard chow-flavored food at each location. Thus, according to the reward-value hypothesis, enhanced performance in the source memory task stems from enhanced encoding/memory of a preferred reward. We tested the reward-value hypothesis by using a standard 8-arm radial maze task to compare spatial working memory accuracy of rats rewarded with either multiple chocolate or chow pellets at each location using a between-subjects design. The reward-value hypothesis predicts superior accuracy for high-valued rewards. We documented equivalent spatial memory accuracy for high- and low-value rewards. Importantly, a 24-h retention interval produced equivalent spatial working memory accuracy for both flavors. These data are inconsistent with the reward-value hypothesis and suggest that reward value does not explain our earlier findings that source memory survives unusually long retention intervals.

  7. Gaussian Hypothesis Testing and Quantum Illumination.

    Science.gov (United States)

    Wilde, Mark M; Tomamichel, Marco; Lloyd, Seth; Berta, Mario

    2017-09-22

    Quantum hypothesis testing is one of the most basic tasks in quantum information theory and has fundamental links with quantum communication and estimation theory. In this paper, we establish a formula that characterizes the decay rate of the minimal type-II error probability in a quantum hypothesis test of two Gaussian states given a fixed constraint on the type-I error probability. This formula is a direct function of the mean vectors and covariance matrices of the quantum Gaussian states in question. We give an application to quantum illumination, which is the task of determining whether there is a low-reflectivity object embedded in a target region with a bright thermal-noise bath. For the asymmetric-error setting, we find that a quantum illumination transmitter can achieve an error probability exponent stronger than a coherent-state transmitter of the same mean photon number, and furthermore, that it requires far fewer trials to do so. This occurs when the background thermal noise is either low or bright, which means that a quantum advantage is even easier to witness than in the symmetric-error setting because it occurs for a larger range of parameters. Going forward from here, we expect our formula to have applications in settings well beyond those considered in this paper, especially to quantum communication tasks involving quantum Gaussian channels.

  8. Mechanisms of eyewitness suggestibility: tests of the explanatory role hypothesis.

    Science.gov (United States)

    Rindal, Eric J; Chrobak, Quin M; Zaragoza, Maria S; Weihing, Caitlin A

    2017-10-01

    In a recent paper, Chrobak and Zaragoza (Journal of Experimental Psychology: General, 142(3), 827-844, 2013) proposed the explanatory role hypothesis, which posits that the likelihood of developing false memories for post-event suggestions is a function of the explanatory function the suggestion serves. In support of this hypothesis, they provided evidence that participant-witnesses were especially likely to develop false memories for their forced fabrications when their fabrications helped to explain outcomes they had witnessed. In three experiments, we test the generality of the explanatory role hypothesis as a mechanism of eyewitness suggestibility by assessing whether this hypothesis can predict suggestibility errors in (a) situations where the post-event suggestions are provided by the experimenter (as opposed to fabricated by the participant), and (b) across a variety of memory measures and measures of recollective experience. In support of the explanatory role hypothesis, participants were more likely to subsequently freely report (E1) and recollect the suggestions as part of the witnessed event (E2, source test) when the post-event suggestion helped to provide a causal explanation for a witnessed outcome than when it did not serve this explanatory role. Participants were also less likely to recollect the suggestions as part of the witnessed event (on measures of subjective experience) when their explanatory strength had been reduced by the presence of an alternative explanation that could explain the same outcome (E3, source test + warning). Collectively, the results provide strong evidence that the search for explanatory coherence influences people's tendency to misremember witnessing events that were only suggested to them.

  9. A default Bayesian hypothesis test for correlations and partial correlations

    NARCIS (Netherlands)

    Wetzels, R.; Wagenmakers, E.J.

    2012-01-01

    We propose a default Bayesian hypothesis test for the presence of a correlation or a partial correlation. The test is a direct application of Bayesian techniques for variable selection in regression models. The test is easy to apply and yields practical advantages that the standard frequentist tests

  10. Hypothesis test for synchronization: twin surrogates revisited.

    Science.gov (United States)

    Romano, M Carmen; Thiel, Marco; Kurths, Jürgen; Mergenthaler, Konstantin; Engbert, Ralf

    2009-03-01

    The method of twin surrogates has been introduced to test for phase synchronization of complex systems in the case of passive experiments. In this paper we derive new analytical expressions for the number of twins depending on the size of the neighborhood, as well as on the length of the trajectory. This allows us to determine the optimal parameters for the generation of twin surrogates. Furthermore, we determine the quality of the twin surrogates with respect to several linear and nonlinear statistics depending on the parameters of the method. In the second part of the paper we perform a hypothesis test for phase synchronization in the case of experimental data from fixational eye movements. These miniature eye movements have been shown to play a central role in neural information processing underlying the perception of static visual scenes. The high number of data sets (21 subjects and 30 trials per person) allows us to compare the generated twin surrogates with the "natural" surrogates that correspond to the different trials. We show that the generated twin surrogates reproduce very well all linear and nonlinear characteristics of the underlying experimental system. The synchronization analysis of fixational eye movements by means of twin surrogates reveals that the synchronization between the left and right eye is significant, indicating that either the centers in the brain stem generating fixational eye movements are closely linked, or, alternatively that there is only one center controlling both eyes.

  11. A test of the reward-contrast hypothesis.

    Science.gov (United States)

    Dalecki, Stefan J; Panoz-Brown, Danielle E; Crystal, Jonathon D

    2017-12-01

    Source memory, a facet of episodic memory, is the memory of the origin of information. Whereas source memory in rats is sustained for at least a week, spatial memory degrades after approximately a day. Different forgetting functions may suggest that two memory systems (source memory and spatial memory) are dissociated. However, in previous work, the two tasks used baiting conditions consisting of chocolate and chow flavors; notably, the source memory task used the relatively better flavor. Thus, according to the reward-contrast hypothesis, when chocolate and chow were presented within the same context (i.e., within a single radial maze trial), the chocolate location was more memorable than the chow location because of contrast. We tested the reward-contrast hypothesis using baiting configurations designed to produce reward contrast. The reward-contrast hypothesis predicts that under these conditions, spatial memory will survive a 24-h retention interval. We documented elimination of spatial memory performance after a 24-h retention interval using a reward-contrast baiting pattern. These data suggest that reward contrast does not explain our earlier findings that source memory survives unusually long retention intervals. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Dale [Los Alamos National Laboratory; Selby, Neil [AWE Blacknest

    2012-08-14

    Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event-screening hypothesis test (Fisher's and Tippett's tests). The commonly used standard error in the Ms:mb event-screening hypothesis test is not fully consistent with its physical basis. An improved standard error agrees better with the physical basis, correctly partitions error to include model error as a component of variance, and correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope ({beta} = 1, Selby et al.), whereas the improved standard error 'fails to reject' H0.
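
Fisher's and Tippett's tests mentioned above combine the p-values of independent single-phenomenology tests into one screening decision. A generic sketch of the standard formulas (not the report's code): Fisher uses T = -2 Σ ln p_i, which is chi-square with 2k degrees of freedom under H0, and Tippett uses the smallest p-value.

```python
import math

def fisher_combined(p_values):
    """Fisher's method: combined p-value of k independent tests.

    T = -2 * sum(ln p_i) ~ chi-square with 2k df under H0. For even
    df = 2k the survival function has the closed form
    P(X > t) = exp(-t/2) * sum_{i=0}^{k-1} (t/2)^i / i!.
    """
    k = len(p_values)
    t = -2.0 * sum(math.log(p) for p in p_values)
    term, s = 1.0, 1.0
    for i in range(1, k):
        term *= (t / 2.0) / i
        s += term
    return math.exp(-t / 2.0) * s

def tippett_combined(p_values):
    """Tippett's method: combined p-value based on the minimum p,
    p_combined = 1 - (1 - min p)^k."""
    k = len(p_values)
    return 1.0 - (1.0 - min(p_values)) ** k
```

For example, two independent tests that each give p = 0.05 combine to about 0.0175 under Fisher's method and 0.0975 under Tippett's; with a single test both reduce to the original p-value.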

  13. Decentralized Hypothesis Testing in Energy Harvesting Wireless Sensor Networks

    Science.gov (United States)

    Tarighati, Alla; Gross, James; Jalden, Joakim

    2017-09-01

    We consider the problem of decentralized hypothesis testing in a network of energy harvesting sensors, where sensors make noisy observations of a phenomenon and send quantized information about the phenomenon towards a fusion center. The fusion center makes a decision about the present hypothesis using the aggregate received data during a time interval. We explicitly consider a scenario under which the messages are sent through parallel access channels towards the fusion center. To avoid limited lifetime issues, we assume each sensor is capable of harvesting all the energy it needs for the communication from the environment. Each sensor has an energy buffer (battery) to save its harvested energy for use in other time intervals. Our key contribution is to formulate the problem of decentralized detection in a sensor network with energy harvesting devices. Our analysis is based on a queuing-theoretic model for the battery and we propose a sensor decision design method by considering long term energy management at the sensors. We show how the performance of the system changes for different battery capacities. We then numerically show how our findings can be used in the design of sensor networks with energy harvesting sensors.

  14. Chi-square test and its application in hypothesis testing

    Directory of Open Access Journals (Sweden)

    Rakesh Rana

    2015-01-01

    In medical research, studies often collect data on categorical variables that can be summarized as a series of counts. These counts are commonly arranged in a tabular format known as a contingency table. The chi-square test statistic can be used to evaluate whether there is an association between the rows and columns in a contingency table. More specifically, this statistic can be used to determine whether there is any difference between the study groups in the proportions of the risk factor of interest. The chi-square test and the logic of hypothesis testing were developed by Karl Pearson. This article describes in detail what a chi-square test is, which type of data it is used on, the assumptions associated with its application, how to calculate it manually, and how to make use of an online calculator for calculating the chi-square statistic and its associated P-value.
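
The manual calculation the article describes follows directly from the expected-count formula: the expected count for cell (i, j) is row total × column total / grand total, and the statistic sums (observed - expected)² / expected over all cells. A minimal sketch (illustrative, not the article's own worked example):

```python
def chi_square_contingency(table):
    """Pearson chi-square statistic and degrees of freedom for an
    r x c contingency table given as a list of row lists."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (obs - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df
```

For the 2x2 table [[20, 30], [30, 20]] every expected count is 25, giving chi-square = 4.0 with 1 df (p ≈ 0.046 from a chi-square table). Note this sketch applies no Yates continuity correction.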

  15. Trends in hypothesis testing and related variables in nursing research: a retrospective exploratory study.

    Science.gov (United States)

    Lash, Ayhan Aytekin; Plonczynski, Donna J; Sehdev, Amikar

    2011-01-01

    To compare the inclusion and the influences of selected variables on hypothesis testing during the 1980s and 1990s. In spite of the emphasis on conducting inquiry consistent with the tenets of logical positivism, there have been no studies investigating the frequency and patterns of hypothesis testing in nursing research. The sample was obtained from the journal Nursing Research, the research journal with the highest circulation during the period under study. All quantitative studies published during the two decades, including briefs and historical studies, were included in the analyses. A retrospective design was used to select the sample: five years from the 1980s and five from the 1990s were randomly selected from the journal Nursing Research. Of the 582 studies, 517 met the inclusion criteria. Findings suggest that there was a decline in the use of hypothesis testing in the last decades of the 20th century. Further research is needed to identify the factors that influence the conduct of research with hypothesis testing. Hypothesis testing in nursing research showed a steady decline from the 1980s to the 1990s. Research purposes of explanation and prediction/control increased the likelihood of hypothesis testing. Hypothesis testing strengthens the quality of quantitative studies, increases the generality of findings, and provides dependable knowledge. This is particularly true for quantitative studies that aim to explore, explain, and predict/control phenomena and/or test theories. The findings also have implications for doctoral programmes, the research preparation of nurse-investigators, and theory testing.

  16. Hypothesis testing in students: Sequences, stages, and instructional strategies

    Science.gov (United States)

    Moshman, David; Thompson, Pat A.

    Six sequences in the development of hypothesis-testing conceptions are proposed, involving (a) interpretation of the hypothesis; (b) the distinction between using theories and testing theories; (c) the consideration of multiple possibilities; (d) the relation of theory and data; (e) the nature of verification and falsification; and (f) the relation of truth and falsity. An alternative account is then provided involving three global stages: concrete operations, formal operations, and a postformal metaconstructive stage. Relative advantages and difficulties of the stage and sequence conceptualizations are discussed. Finally, three families of teaching strategy are distinguished, which emphasize, respectively: (a) social transmission of knowledge; (b) carefully sequenced empirical experience by the student; and (c) self-regulated cognitive activity of the student. It is argued on the basis of Piaget's theory that the last of these plays a crucial role in the construction of such logical reasoning strategies as those involved in testing hypotheses.

  17. Cross-system log file analysis for hypothesis testing

    NARCIS (Netherlands)

    Glahn, Christian; Specht, Marcus; Schoonenboom, Judith; Sligte, Henk; Moghnieh, Ayman; Hernández-Leo, Davinia; Stefanov, Krassen; Lemmers, Ruud; Koper, Rob

    2008-01-01

    Glahn, C., Specht, M., Schoonenboom, J., Sligte, H., Moghnieh, A., Hernández-Leo, D., Stefanov, K., Lemmers, R., & Koper, R. (2008). Cross-system log file analysis for hypothesis testing. In H. Sligte & R. Koper (Eds.), Proceedings of the 4th TENCompetence Open Workshop. Empowering Learners for

  18. A "Projective" Test of the Golden Section Hypothesis.

    Science.gov (United States)

    Lee, Chris; Adams-Webber, Jack

    1987-01-01

    In a projective test of the golden section hypothesis, 24 high school students rated themselves and 10 comic strip characters on basis of 12 bipolar constructs. Overall proportion of cartoon figures which subjects assigned to positive poles of constructs was very close to golden section. (Author/NB)

  19. Animal Models for Testing the DOHaD Hypothesis

    Science.gov (United States)

    Since the seminal work in human populations by David Barker and colleagues, several species of animals have been used in the laboratory to test the Developmental Origins of Health and Disease (DOHaD) hypothesis. Rats, mice, guinea pigs, sheep, pigs and non-human primates have bee...

  20. Bayesian Approaches to Imputation, Hypothesis Testing, and Parameter Estimation

    Science.gov (United States)

    Ross, Steven J.; Mackey, Beth

    2015-01-01

    This chapter introduces three applications of Bayesian inference to common and novel issues in second language research. After a review of the critiques of conventional hypothesis testing, our focus centers on ways Bayesian inference can be used for dealing with missing data, for testing theory-driven substantive hypotheses without a default null…

  1. Tests of the lunar hypothesis

    Science.gov (United States)

    Taylor, S. R.

    1984-01-01

    The concept that the Moon was fissioned from the Earth after core separation is the most readily testable hypothesis of lunar origin, since direct comparisons of lunar and terrestrial compositions can be made. Differences found in such comparisons introduce so many ad hoc adjustments to the fission hypothesis that it becomes untestable. Further constraints may be obtained from attempting to date the volatile-refractory element fractionation. The combination of chemical and isotopic problems suggests that the fission hypothesis is no longer viable, and separate terrestrial and lunar accretion from a population of fractionated precursor planetesimals provides a more reasonable explanation.

  2. Mothers Who Kill Their Offspring: Testing Evolutionary Hypothesis in a 110-Case Italian Sample

    Science.gov (United States)

    Camperio Ciani, Andrea S.; Fontanesi, Lilybeth

    2012-01-01

    Objectives: This research aimed to identify incidents of mothers in Italy killing their own children and to test an adaptive evolutionary hypothesis to explain their occurrence. Methods: 110 cases of mothers killing 123 of their own offspring from 1976 to 2010 were analyzed. Each case was classified using 13 dichotomic variables. Descriptive…

  3. Testing the hypothesis that treatment can eliminate HIV

    DEFF Research Database (Denmark)

    Okano, Justin T; Robbins, Danielle; Palk, Laurence

    2016-01-01

    BACKGROUND: Worldwide, approximately 35 million individuals are infected with HIV; about 25 million of these live in sub-Saharan Africa. WHO proposes using treatment as prevention (TasP) to eliminate HIV. Treatment suppresses viral load, decreasing the probability that an individual transmits HIV. The elimination threshold is one new HIV infection per 1000 individuals. Here, we test the hypothesis that TasP can substantially reduce epidemics and eliminate HIV. We estimate the impact of TasP, between 1996 and 2013, on the Danish HIV epidemic in men who have sex with men (MSM), an epidemic UNAIDS has identified as a priority for elimination. METHODS: We use a CD4-staged Bayesian back-calculation approach to estimate incidence, and the hidden epidemic (the number of HIV-infected undiagnosed MSM). To develop the back-calculation model, we use data from an ongoing nationwide population-based study

  4. Testing the Cross-Racial Generality of Spearman's Hypothesis in Two Samples

    Science.gov (United States)

    Hartmann, Peter; Kruuse, Nanna Hye Sun; Nyborg, Helmuth

    2007-01-01

    Spearman's hypothesis states that racial differences in IQ between Blacks (B) and Whites (W) are due primarily to differences in the "g" factor. This hypothesis is often confirmed, but it is less certain whether it generalizes to other races. We therefore tested its cross-racial generality by comparing American subjects of European…

  5. Hypothesis Testing Using the Films of the Three Stooges

    Science.gov (United States)

    Gardner, Robert; Davidson, Robert

    2010-01-01

    The use of The Three Stooges' films as a source of data in an introductory statistics class is described. The Stooges' films are separated into three populations. Using these populations, students may conduct hypothesis tests with data they collect.
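
The classroom exercise described above boils down to comparing means across film populations. A minimal sketch of such a two-sample comparison using Welch's t statistic, with invented eye-poke counts (the data and variable names are hypothetical, not from the article), might look like:

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for comparing two independent sample means
    (does not assume equal variances)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n-1)
    return (mean(sample_a) - mean(sample_b)) / (va / na + vb / nb) ** 0.5

# Hypothetical counts of eye-pokes per film in two "populations" of films
# (invented numbers, not data collected from the Stooges' filmography).
population_a = [7, 9, 6, 8, 10, 7]
population_b = [4, 5, 3, 6, 4, 5]
t = welch_t(population_a, population_b)
```

A |t| this large would lead an introductory class to reject the null hypothesis of equal means; in the exercise, students substitute counts they collect from the films themselves.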

  6. Planned Hypothesis Tests Are Not Necessarily Exempt from Multiplicity Adjustment

    Science.gov (United States)

    Frane, Andrew V.

    2015-01-01

    Scientific research often involves testing more than one hypothesis at a time, which can inflate the probability that a Type I error (false discovery) will occur. To prevent this Type I error inflation, adjustments can be made to the testing procedure that compensate for the number of tests. Yet many researchers believe that such adjustments are…
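
The adjustments the abstract refers to are procedures such as the Bonferroni and Holm corrections, which cap the family-wise Type I error rate across m tests. A self-contained sketch (the p-values are invented for illustration):

```python
def bonferroni(p_values, alpha=0.05):
    """Reject H0_i iff p_i <= alpha/m; controls the family-wise error rate."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

def holm(p_values, alpha=0.05):
    """Holm's step-down procedure: compare the ordered p-values against
    alpha/m, alpha/(m-1), ...; uniformly more powerful than Bonferroni."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one ordered test fails, all larger p-values fail too
    return reject

pvals = [0.001, 0.015, 0.02, 0.04]  # hypothetical p-values from 4 tests
```

On these p-values Bonferroni rejects only the first hypothesis, while Holm rejects all four at the same family-wise error rate, which is why Holm is generally preferred.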

  7. A Critique of One-Tailed Hypothesis Test Procedures in Business and Economics Statistics Textbooks.

    Science.gov (United States)

    Liu, Tung; Stone, Courtenay C.

    1999-01-01

    Surveys introductory business and economics statistics textbooks and finds that they differ over the best way to explain one-tailed hypothesis tests: the simple null-hypothesis approach or the composite null-hypothesis approach. Argues that the composite null-hypothesis approach contains methodological shortcomings that make it more difficult for…

  8. A checklist to facilitate objective hypothesis testing in social psychology research.

    Science.gov (United States)

    Washburn, Anthony N; Morgan, G Scott; Skitka, Linda J

    2015-01-01

    Social psychology is not a very politically diverse area of inquiry, something that could negatively affect the objectivity of social psychological theory and research, as Duarte et al. argue in the target article. This commentary offers a number of checks to help researchers uncover possible biases and identify when they are engaging in hypothesis confirmation and advocacy instead of hypothesis testing.

  9. Privacy on Hypothesis Testing in Smart Grids

    OpenAIRE

    Li, Zuxing; Oechtering, Tobias

    2015-01-01

    In this paper, we study the problem of privacy information leakage in a smart grid. The privacy risk is assumed to be caused by an unauthorized binary hypothesis test on the consumer's behaviour, based on the smart meter readings of energy supplies from the energy provider. Other energy supplies are produced by an alternative energy source. A controller equipped with an energy storage device manages the energy inflows to satisfy the energy demand of the consumer. We study the optimal ener...

  10. Hypothesis testing on the fractal structure of behavioral sequences: the Bayesian assessment of scaling methodology.

    Science.gov (United States)

    Moscoso del Prado Martín, Fermín

    2013-12-01

    I introduce the Bayesian assessment of scaling (BAS), a simple but powerful Bayesian hypothesis contrast methodology that can be used to test hypotheses on the scaling regime exhibited by a sequence of behavioral data. Rather than comparing parametric models, as typically done in previous approaches, the BAS offers a direct, nonparametric way to test whether a time series exhibits fractal scaling. The BAS provides a simpler and faster test than do previous methods, and the code for making the required computations is provided. The method also enables testing of finely specified hypotheses on the scaling indices, something that was not possible with the previously available methods. I then present 4 simulation studies showing that the BAS methodology outperforms the other methods used in the psychological literature. I conclude with a discussion of methodological issues on fractal analyses in experimental psychology. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  11. Consumer health information seeking as hypothesis testing.

    Science.gov (United States)

    Keselman, Alla; Browne, Allen C; Kaufman, David R

    2008-01-01

    Despite the proliferation of consumer health sites, lay individuals often experience difficulty finding health information online. The present study attempts to understand users' information seeking difficulties by drawing on a hypothesis testing explanatory framework. It also addresses the role of user competencies and their interaction with internet resources. Twenty participants were interviewed about their understanding of a hypothetical scenario about a family member suffering from stable angina and then searched MedlinePlus consumer health information portal for information on the problem presented in the scenario. Participants' understanding of heart disease was analyzed via semantic analysis. Thematic coding was used to describe information seeking trajectories in terms of three key strategies: verification of the primary hypothesis, narrowing search within the general hypothesis area and bottom-up search. Compared to an expert model, participants' understanding of heart disease involved different key concepts, which were also differently grouped and defined. This understanding provided the framework for search-guiding hypotheses and results interpretation. Incorrect or imprecise domain knowledge led individuals to search for information on irrelevant sites, often seeking out data to confirm their incorrect initial hypotheses. Online search skills enhanced search efficiency, but did not eliminate these difficulties. Regardless of their web experience and general search skills, lay individuals may experience difficulty with health information searches. These difficulties may be related to formulating and evaluating hypotheses that are rooted in their domain knowledge. Informatics can provide support at the levels of health information portals, individual websites, and consumer education tools.

  12. TEST OF THE CATCH-UP HYPOTHESIS IN AFRICAN AGRICULTURAL GROWTH RATES

    Directory of Open Access Journals (Sweden)

    Kalu Ukpai IFEGWU

    2015-11-01

    The paper tested the catch-up hypothesis in the agricultural growth rates of twenty-six African countries. The panel data used were drawn from the Food and Agricultural Organization Statistics (FAOSTAT) of the United Nations. The Data Envelopment Analysis method for measuring productivity was used to estimate productivity growth rates. The cross-section framework, consisting of sigma-convergence and beta-convergence, was employed to test the catching-up process. Catching up is said to exist if the value of beta is negative and significant. Since catching up does not necessarily imply a narrowing of national productivity inequalities, sigma-convergence, which measures inequality, was estimated for the same variables. The results showed evidence of the catch-up process, but failed to find a narrowing of productivity inequalities among countries.
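
Beta-convergence of the kind tested above is typically estimated by regressing productivity growth on the (log) initial productivity level; a negative, significant slope indicates catching up. A minimal cross-section sketch with invented data (the paper's FAOSTAT-based estimates are not reproduced here):

```python
def ols_slope(x, y):
    """Slope of a simple OLS regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

# Hypothetical cross-section: countries that started with lower (log)
# productivity grow faster, i.e. they are catching up.
log_initial = [1.0, 1.5, 2.0, 2.5, 3.0]
growth_rate = [0.050, 0.042, 0.031, 0.024, 0.015]
beta = ols_slope(log_initial, growth_rate)  # negative => catch-up
```

Sigma-convergence would additionally check whether the cross-country dispersion of productivity levels shrinks over time; the paper found catch-up (negative beta) but no such narrowing.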

  13. The efficient market hypothesis: problems with interpretations of empirical tests

    Directory of Open Access Journals (Sweden)

    Denis Alajbeg

    2012-03-01

    Despite many “refutations” in empirical tests, the efficient market hypothesis (EMH) remains the central concept of financial economics. The EMH’s resistance to the results of empirical testing emerges from the fact that the EMH is not a falsifiable theory. Its axiomatic definition shows how asset prices would behave under assumed conditions. Testing for this price behavior does not make much sense, as the conditions in the financial markets are much more complex than the simplified conditions of perfect competition, zero transaction costs and free information used in the formulation of the EMH. Some recent developments within the tradition of the adaptive market hypothesis are promising regarding the development of a falsifiable theory of price formation in financial markets, but are far from giving assurance that we are approaching a new formulation. The most that can be done in the meantime is to be very cautious when interpreting the empirical evidence that is presented as “testing” the EMH.

  14. Tests of the planetary hypothesis for PTFO 8-8695b

    DEFF Research Database (Denmark)

    Yu, Liang; Winn, Joshua N.; Gillon, Michaël

    2015-01-01

    The T Tauri star PTFO 8-8695 exhibits periodic fading events that have been interpreted as the transits of a giant planet on a precessing orbit. Here we present three tests of the planet hypothesis. First, we sought evidence for the secular changes in light-curve morphology that are predicted...... planetary orbit. Our spectroscopy also revealed strong, time-variable, high-velocity H{\\alpha} and Ca H & K emission features. All these observations cast doubt on the planetary hypothesis, and suggest instead that the fading events represent starspots, eclipses by circumstellar dust, or occultations...

  15. The limits to pride: A test of the pro-anorexia hypothesis.

    Science.gov (United States)

    Cornelius, Talea; Blanton, Hart

    2016-01-01

    Many social psychological models propose that positive self-conceptions promote self-esteem. An extreme version of this hypothesis is advanced in "pro-anorexia" communities: identifying with anorexia, in conjunction with disordered eating, can lead to higher self-esteem. The current study empirically tested this hypothesis. Results challenge the pro-anorexia hypothesis. Although those with higher levels of pro-anorexia identification trended towards higher self-esteem with increased disordered eating, this did not overcome the strong negative main effect of pro-anorexia identification. These data suggest a more effective strategy for promoting self-esteem is to encourage rejection of disordered eating and an anorexic identity.

  16. Is conscious stimulus identification dependent on knowledge of the perceptual modality? Testing the "source misidentification hypothesis"

    DEFF Research Database (Denmark)

    Overgaard, Morten; Lindeløv, Jonas Kristoffer; Svejstrup, Stinna

    2013-01-01

    This paper reports an experiment intended to test a particular hypothesis derived from blindsight research, which we name the “source misidentification hypothesis.” According to this hypothesis, a subject may be correct about a stimulus without being correct about how she had access...... to this knowledge (whether the stimulus was visual, auditory, or something else). We test this hypothesis in healthy subjects, asking them to report whether a masked stimulus was presented auditorily or visually, what the stimulus was, and how clearly they experienced the stimulus using the Perceptual Awareness...... experience of the stimulus. To demonstrate that particular levels of reporting accuracy are obtained, we employ a statistical strategy, which operationally tests the hypothesis of non-equality, such that the usual rejection of the null-hypothesis admits the conclusion of equivalence....

  17. Shaping Up the Practice of Null Hypothesis Significance Testing.

    Science.gov (United States)

    Wainer, Howard; Robinson, Daniel H.

    2003-01-01

    Discusses criticisms of null hypothesis significance testing (NHST), suggesting that historical use of NHST was reasonable, and current users should read Sir Ronald Fisher's applied work. Notes that modifications to NHST and interpretations of its outcomes might better suit the needs of modern science. Concludes that NHST is most often useful as…

  18. Adolescents' Body Image Trajectories: A Further Test of the Self-Equilibrium Hypothesis

    Science.gov (United States)

    Morin, Alexandre J. S.; Maïano, Christophe; Scalas, L. Francesca; Janosz, Michel; Litalien, David

    2017-01-01

    The self-equilibrium hypothesis underlines the importance of having a strong core self, which is defined as a high and developmentally stable self-concept. This study tested this hypothesis in relation to body image (BI) trajectories in a sample of 1,006 adolescents (M[subscript age] = 12.6, including 541 males and 465 females) across a 4-year…

  19. Nearly Efficient Likelihood Ratio Tests of the Unit Root Hypothesis

    DEFF Research Database (Denmark)

    Jansson, Michael; Nielsen, Morten Ørregaard

    Seemingly absent from the arsenal of currently available "nearly efficient" testing procedures for the unit root hypothesis, i.e. tests whose local asymptotic power functions are indistinguishable from the Gaussian power envelope, is a test admitting a (quasi-)likelihood ratio interpretation. We...... show that the likelihood ratio unit root test derived in a Gaussian AR(1) model with standard normal innovations is nearly efficient in that model. Moreover, these desirable properties carry over to more complicated models allowing for serially correlated and/or non-Gaussian innovations....

  20. A sequential hypothesis test based on a generalized Azuma inequality

    NARCIS (Netherlands)

    Reijsbergen, D.P.; Scheinhardt, Willem R.W.; de Boer, Pieter-Tjerk

    We present a new power-one sequential hypothesis test based on a bound for the probability that a bounded zero-mean martingale ever crosses a curve of the form $a(n+k)^b$. The proof of the bound is of independent interest.
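
The abstract gives only the shape of the stopping boundary. A toy sketch of how such a power-one sequential test might be run (the constants a, k, b and the decision rule are illustrative, not the paper's calibrated values):

```python
def sequential_test(increments, a, k, b):
    """Sketch of a power-one sequential test: observe bounded increments
    (zero-mean under H0) one at a time, and reject H0 the first time the
    partial sum S_n escapes the curve a*(n + k)**b. A bound on the
    probability that a zero-mean martingale ever crosses such a curve is
    what caps the Type I error rate."""
    s = 0.0
    for n, x in enumerate(increments, start=1):
        s += x
        if abs(s) > a * (n + k) ** b:
            return "reject", n
    return "no decision", len(increments)

# A drifting sequence crosses the boundary quickly...
decision, n = sequential_test([1.0] * 50, a=2.0, k=10, b=0.5)
# ...while a sequence that truly has zero mean never does here.
null_decision, _ = sequential_test([0.0] * 50, a=2.0, k=10, b=0.5)
```

Note the asymmetry typical of power-one tests: under the alternative the test stops with probability one, while under the null it may simply run forever without rejecting.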

  1. Correlates of androgens in wild male Barbary macaques: Testing the challenge hypothesis.

    Science.gov (United States)

    Rincon, Alan V; Maréchal, Laëtitia; Semple, Stuart; Majolo, Bonaventura; MacLarnon, Ann

    2017-10-01

    Investigating causes and consequences of variation in hormonal expression is a key focus in behavioral ecology. Many studies have explored patterns of secretion of the androgen testosterone in male vertebrates, using the challenge hypothesis (Wingfield, Hegner, Dufty, & Ball, 1990; The American Naturalist, 136(6), 829-846) as a theoretical framework. Rather than the classic association of testosterone with male sexual behavior, this hypothesis predicts that high levels of testosterone are associated with male-male reproductive competition but also inhibit paternal care. The hypothesis was originally developed for birds, and subsequently tested in other vertebrate taxa, including primates. Such studies have explored the link between testosterone and reproductive aggression as well as other measures of mating competition, or between testosterone and aspects of male behavior related to the presence of infants. Very few studies have simultaneously investigated the links between testosterone and male aggression, other aspects of mating competition and infant-related behavior. We tested predictions derived from the challenge hypothesis in wild male Barbary macaques (Macaca sylvanus), a species with marked breeding seasonality and high levels of male-infant affiliation, providing a powerful test of this theoretical framework. Over 11 months, 251 hr of behavioral observations and 296 fecal samples were collected from seven adult males in the Middle Atlas Mountains, Morocco. Fecal androgen levels rose before the onset of the mating season, during a period of rank instability, and were positively related to group mating activity across the mating season. Androgen levels were unrelated to rates of male-male aggression in any period, but higher ranked males had higher levels in both the mating season and in the period of rank instability. Lower androgen levels were associated with increased rates of male-infant grooming during the mating and unstable periods. Our results

  2. Semiparametric Power Envelopes for Tests of the Unit Root Hypothesis

    DEFF Research Database (Denmark)

    Jansson, Michael

    This paper derives asymptotic power envelopes for tests of the unit root hypothesis in a zero-mean AR(1) model. The power envelopes are derived using the limits of experiments approach and are semiparametric in the sense that the underlying error distribution is treated as an unknown...

  3. Properties of hypothesis testing techniques and (Bayesian) model selection for exploration-based and theory-based (order-restricted) hypotheses

    NARCIS (Netherlands)

    Kuiper, Rebecca M.; Nederhoff, Tim; Klugkist, Irene

    2015-01-01

    In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is

  4. In silico model-based inference: a contemporary approach for hypothesis testing in network biology.

    Science.gov (United States)

    Klinke, David J

    2014-01-01

    Inductive inference plays a central role in the study of biological systems where one aims to increase their understanding of the system by reasoning backwards from uncertain observations to identify causal relationships among components of the system. These causal relationships are postulated from prior knowledge as a hypothesis or simply a model. Experiments are designed to test the model. Inferential statistics are used to establish a level of confidence in how well our postulated model explains the acquired data. This iterative process, commonly referred to as the scientific method, either improves our confidence in a model or suggests that we revisit our prior knowledge to develop a new model. Advances in technology impact how we use prior knowledge and data to formulate models of biological networks and how we observe cellular behavior. However, the approach for model-based inference has remained largely unchanged since Fisher, Neyman and Pearson developed the ideas in the early 1900s that gave rise to what is now known as classical statistical hypothesis (model) testing. Here, I will summarize conventional methods for model-based inference and suggest a contemporary approach to aid in our quest to discover how cells dynamically interpret and transmit information for therapeutic aims that integrates ideas drawn from high performance computing, Bayesian statistics, and chemical kinetics. © 2014 American Institute of Chemical Engineers.

  5. Using Employer Hiring Behavior to Test the Educational Signaling Hypothesis

    NARCIS (Netherlands)

    Albrecht, J.W.; van Ours, J.C.

    2001-01-01

    This paper presents a test of the educational signaling hypothesis. If employers use education as a signal in the hiring process, they will rely more on education when less is otherwise known about applicants. We find that employers are more likely to lower educational standards when an informal, more

  6. Local hypothesis testing between a pure bipartite state and the white noise state

    OpenAIRE

    Owari, Masaki; Hayashi, Masahito

    2010-01-01

    In this paper, we treat a local discrimination problem in the framework of asymmetric hypothesis testing. We choose a known bipartite pure state $\\ket{\\Psi}$ as an alternative hypothesis, and the completely mixed state as a null hypothesis. As a result, we analytically derive an optimal type 2 error and an optimal POVM for one-way LOCC POVM and Separable POVM. For two-way LOCC POVM, we study a family of simple three-step LOCC protocols, and show that the best protocol in this family has stric...

  7. A test of the domain-specific acculturation strategy hypothesis.

    Science.gov (United States)

    Miller, Matthew J; Yang, Minji; Lim, Robert H; Hui, Kayi; Choi, Na-Yeun; Fan, Xiaoyan; Lin, Li-Ling; Grome, Rebekah E; Farrell, Jerome A; Blackmon, Sha'kema

    2013-01-01

    Acculturation literature has evolved over the past several decades and has highlighted the dynamic ways in which individuals negotiate experiences in multiple cultural contexts. The present study extends this literature by testing M. J. Miller and R. H. Lim's (2010) domain-specific acculturation strategy hypothesis, that individuals might use different acculturation strategies (i.e., assimilated, bicultural, separated, and marginalized strategies; J. W. Berry, 2003) across behavioral and values domains, in 3 independent cluster analyses with Asian American participants. Present findings supported the domain-specific acculturation strategy hypothesis, with 67% to 72% of participants from the 3 independent samples using different strategies across behavioral and values domains. Consistent with theory, a number of acculturation strategy cluster group differences emerged across generational status, acculturative stress, mental health symptoms, and attitudes toward seeking professional psychological help. Study limitations and future directions for research are discussed.

  8. Testing for Marshall-Lerner hypothesis: A panel approach

    Science.gov (United States)

    Azizan, Nur Najwa; Sek, Siok Kun

    2014-12-01

    The relationship between real exchange rates and trade balances is documented in many theories. One of these is the so-called Marshall-Lerner condition. In this study, we test the validity of the Marshall-Lerner hypothesis, i.e. whether depreciation of the real exchange rate leads to an improvement in trade balances. We focus our study on the ASEAN-5 countries and their main trade partners, the U.S., Japan and China. The dynamic panel pooled mean group (PMG) approach is used to test the Marshall-Lerner hypothesis within ASEAN-5, and between ASEAN-5 and the U.S., Japan and China respectively. The estimation is based on the autoregressive distributed lag (ARDL) model for the period 1970-2012. The paper concludes that the Marshall-Lerner theory does not hold in bilateral trade for the four groups of countries. The trade balances of ASEAN-5 are mainly determined by the domestic income level and foreign production cost.

  9. A test of the substitution-habitat hypothesis in amphibians.

    Science.gov (United States)

    Martínez-Abraín, Alejandro; Galán, Pedro

    2017-12-08

    Most examples that support the substitution-habitat hypothesis (human-made habitats act as substitutes of original habitat) deal with birds and mammals. We tested this hypothesis in 14 amphibians by using percentage occupancy as a proxy of habitat quality (i.e., higher occupancy percentages indicate higher quality). We classified water body types as original habitat (no or little human influence) depending on anatomical, behavioral, or physiological adaptations of each amphibian species. Ten species had relatively high probabilities (0.16-0.28) of occurrence in original habitat, moderate probability of occurrence in substitution habitats (0.11-0.14), and low probability of occurrence in refuge habitats (0.05-0.08). Thus, the substitution-habitat hypothesis only partially applies to amphibians because the low occupancy of refuges could be due to the negligible human persecution of this group (indicating good conservation status). However, low occupancy of refuges could also be due to low tolerance of refuge conditions, which could have led to selective extinction or colonization problems due to poor dispersal capabilities. That original habitats had the highest probabilities of occupancy suggests amphibians have a good conservation status in the region. They also appeared highly adaptable to anthropogenic substitution habitats. © 2017 Society for Conservation Biology.

  10. Chaotic annealing with hypothesis test for function optimization in noisy environments

    International Nuclear Information System (INIS)

    Pan Hui; Wang Ling; Liu Bo

    2008-01-01

    As a special mechanism to avoid being trapped in local minimum, the ergodicity property of chaos has been used as a novel searching technique for optimization problems, but there is no research work on chaos for optimization in noisy environments. In this paper, the performance of chaotic annealing (CA) for uncertain function optimization is investigated, and a new hybrid approach (namely CAHT) that combines CA and hypothesis test (HT) is proposed. In CAHT, the merits of CA are applied for well exploration and exploitation in searching space, and solution quality can be identified reliably by hypothesis test to reduce the repeated search to some extent and to reasonably estimate performance for solution. Simulation results and comparisons show that, chaos is helpful to improve the performance of SA for uncertain function optimization, and CAHT can further improve the searching efficiency, quality and robustness

  11. Advertising investment as a tool for boosting consumption: testing Galbraith's hypothesis for Spain

    Directory of Open Access Journals (Sweden)

    Valentín-Alejandro Martínez-Fernández

    2014-12-01

    The recession that most of the world's economies have faced in recent years has generated great interest in the study of its macroeconomic effects. In this context, a debate has resurfaced regarding advertising investment and its potential capacity to boost consumer spending and contribute positively to economic recovery. This idea, grounded in the so-called Galbraith hypothesis, constitutes the core of this paper, whose main objective is to test that hypothesis by means of an empirical analysis. We focus on the Spanish case, with data covering the period 1976-2010. A cointegration analysis is carried out using two different approaches (the Engle-Granger test and the Gregory-Hansen test) to determine whether there is any relationship between advertising investment and six macromagnitudes (GDP, national income, consumption, savings, fixed capital formation, and the registered unemployment rate). Based on the results obtained, we conclude that Galbraith's hypothesis is not fulfilled for the Spanish case.
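
The Engle-Granger approach used above is a two-step procedure: estimate the long-run (cointegrating) regression, then test its residuals for a unit root. A hand-rolled sketch on synthetic data (real applications compare a Dickey-Fuller test statistic against tabulated critical values, which are omitted here):

```python
def ols(x, y):
    """Simple OLS of y on x: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def engle_granger_rho(y, x):
    """Step 1: cointegrating regression y_t = a + b*x_t + e_t.
    Step 2: Dickey-Fuller-style regression d(e_t) = rho * e_{t-1} + u_t.
    A rho well below zero suggests the residuals mean-revert, i.e. the
    series are cointegrated (a formal test would use DF critical values)."""
    b, a = ols(x, y)
    e = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    de = [e[t] - e[t - 1] for t in range(1, len(e))]
    rho, _ = ols(e[:-1], de)
    return rho

# Synthetic cointegrated pair: y tracks 2*x up to a stationary
# (here, perfectly alternating) error term.
x = list(range(1, 21))
y = [2.0 * xi + (0.1 if i % 2 == 0 else -0.1) for i, xi in enumerate(x)]
rho = engle_granger_rho(y, x)  # strongly negative: residuals mean-revert
```

The Gregory-Hansen test the paper also applies extends this idea by allowing a structural break in the cointegrating relationship.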

  12. Feasibility of Combining Common Data Elements Across Studies to Test a Hypothesis.

    Science.gov (United States)

    Corwin, Elizabeth J; Moore, Shirley M; Plotsky, Andrea; Heitkemper, Margaret M; Dorsey, Susan G; Waldrop-Valverde, Drenna; Bailey, Donald E; Docherty, Sharron L; Whitney, Joanne D; Musil, Carol M; Dougherty, Cynthia M; McCloskey, Donna J; Austin, Joan K; Grady, Patricia A

    2017-05-01

    The purpose of this article is to describe the outcomes of a collaborative initiative to share data across five schools of nursing in order to evaluate the feasibility of collecting common data elements (CDEs) and developing a common data repository to test hypotheses of interest to nursing scientists. This initiative extended work already completed by the National Institute of Nursing Research CDE Working Group that successfully identified CDEs related to symptoms and self-management, with the goal of supporting more complex, reproducible, and patient-focused research. Two exemplars describing the group's efforts are presented. The first highlights a pilot study wherein data sets from various studies by the represented schools were collected retrospectively, and merging of the CDEs was attempted. The second exemplar describes the methods and results of an initiative at one school that utilized a prospective design for the collection and merging of CDEs. Methods for identifying a common symptom to be studied across schools and for collecting the data dictionaries for the related data elements are presented for the first exemplar. The processes for defining and comparing the concepts and acceptable values, and for evaluating the potential to combine and compare the data elements are also described. Presented next are the steps undertaken in the second exemplar to prospectively identify CDEs and establish the data dictionaries. Methods for common measurement and analysis strategies are included. Findings from the first exemplar indicated that without plans in place a priori to ensure the ability to combine and compare data from disparate sources, doing so retrospectively may not be possible, and as a result hypothesis testing across studies may be prohibited. Findings from the second exemplar, however, indicated that a plan developed prospectively to combine and compare data sets is feasible and conducive to merged hypothesis testing. Although challenges exist in

  13. A Hybrid Positioning Method Based on Hypothesis Testing

    DEFF Research Database (Denmark)

    Amiot, Nicolas; Pedersen, Troels; Laaraiedh, Mohamed

    2012-01-01

    maxima. We propose to first estimate the support region of the two peaks of the likelihood function using a set membership method, and then decide between the two regions using a rule based on the less reliable observations. Monte Carlo simulations show that the performance of the proposed method...

  14. Is it better to select or to receive? Learning via active and passive hypothesis testing.

    Science.gov (United States)

    Markant, Douglas B; Gureckis, Todd M

    2014-02-01

    People can test hypotheses through either selection or reception. In a selection task, the learner actively chooses observations to test his or her beliefs, whereas in reception tasks data are passively encountered. People routinely use both forms of testing in everyday life, but the critical psychological differences between selection and reception learning remain poorly understood. One hypothesis is that selection learning improves learning performance by enhancing generic cognitive processes related to motivation, attention, and engagement. Alternatively, we suggest that the differences between these 2 learning modes derive from a hypothesis-dependent sampling bias that is introduced when a person collects data to test his or her own individual hypothesis. Drawing on influential models of sequential hypothesis-testing behavior, we show that such a bias (a) can lead to the collection of data that facilitates learning compared with reception learning and (b) can be more effective than observing the selections of another person. We then report a novel experiment based on a popular category learning paradigm that compares reception and selection learning. We additionally compare selection learners to a set of "yoked" participants who viewed the exact same sequence of observations under reception conditions. The results revealed systematic differences in performance that depended on the learner's role in collecting information and the abstract structure of the problem.

  15. An algorithm for testing the efficient market hypothesis.

    Directory of Open Access Journals (Sweden)

    Ioana-Andreea Boboc

    The objective of this research is to examine the efficiency of EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH).

  16. An algorithm for testing the efficient market hypothesis.

    Science.gov (United States)

    Boboc, Ioana-Andreea; Dinică, Mihai-Cristian

    2013-01-01

    The objective of this research is to examine the efficiency of EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH).
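
The three indicators the trading system feeds into its genetic algorithm can each be computed in a few lines. A simplified sketch (the EMA spans and RSI period below are the conventional defaults, not necessarily the parameters the algorithm evolves, and the RSI omits the usual Wilder smoothing):

```python
def ema(prices, span):
    """Exponential moving average with smoothing factor 2/(span + 1)."""
    alpha = 2.0 / (span + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

def macd(prices, fast=12, slow=26):
    """MACD line: fast EMA minus slow EMA."""
    return [f - s for f, s in zip(ema(prices, fast), ema(prices, slow))]

def rsi(prices, period=14):
    """Relative Strength Index over the last `period` moves (simplified)."""
    gains = [max(prices[i] - prices[i - 1], 0) for i in range(1, len(prices))]
    losses = [max(prices[i - 1] - prices[i], 0) for i in range(1, len(prices))]
    avg_gain = sum(gains[-period:]) / period
    avg_loss = sum(losses[-period:]) / period
    if avg_loss == 0:
        return 100.0  # no losses in the window: maximally overbought
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```

The genetic algorithm's job is then to search over indicator parameters and threshold rules (e.g. "buy when RSI crosses below some level") for combinations that are profitable in the training window.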

  17. Testing the Efficient Markets Hypothesis on the Romanian Capital Market

    Directory of Open Access Journals (Sweden)

    Dragoș Mînjină

    2013-11-01

    Informational efficiency of capital markets has been the subject of numerous empirical studies. Intensive research in the field is justified by the important implications that knowledge of the level of informational efficiency has for financial practice. Empirical studies that have tested the efficient markets hypothesis on the Romanian capital market have mostly revealed that this market is not characterised by the weak form of the efficient markets hypothesis. However, recent empirical studies have obtained results supporting the weak form of the efficient markets hypothesis. The present decline of the Romanian capital market, recorded against a background of adverse economic developments internally and externally, will be an important test of whether the recent positive developments in the level of informational efficiency continue.

  18. The Relation between Parental Values and Parenting Behavior: A Test of the Kohn Hypothesis.

    Science.gov (United States)

    Luster, Tom; And Others

    1989-01-01

    Used data on 65 mother-infant dyads to test Kohn's hypothesis concerning the relation between values and parenting behavior. Findings support Kohn's hypothesis that parents who value self-direction would emphasize supportive function of parenting and parents who value conformity would emphasize their obligations to impose restraints. (Author/NB)

  19. Giant Panda Maternal Care: A Test of the Experience Constraint Hypothesis

    Science.gov (United States)

    Snyder, Rebecca J.; Perdue, Bonnie M.; Zhang, Zhihe; Maple, Terry L.; Charlton, Benjamin D.

    2016-01-01

    The body condition constraint hypothesis and the experience constraint hypothesis have both been proposed to account for differences in reproductive success between multiparous (experienced) and primiparous (first-time) mothers. However, because primiparous mothers are typically characterized by both inferior body condition and lack of experience when compared to multiparous mothers, interpreting experience-related differences in maternal care as support for either the body condition constraint hypothesis or the experience constraint hypothesis is extremely difficult. Here, we examined maternal behaviour in captive giant pandas, allowing us to simultaneously control for body condition and provide a rigorous test of the experience constraint hypothesis in this endangered animal. We found that multiparous mothers spent more time engaged in key maternal behaviours (nursing, grooming, and holding cubs) and had significantly less vocal cubs than primiparous mothers. This study provides the first evidence supporting the experience constraint hypothesis in the order Carnivora, and may have utility for captive breeding programs in which it is important to monitor the welfare of this species’ highly altricial cubs, whose survival is almost entirely dependent on receiving adequate maternal care during the first few weeks of life. PMID:27272352

  20. Testing the activitystat hypothesis: a randomised controlled trial protocol.

    Science.gov (United States)

    Gomersall, Sjaan; Maher, Carol; Norton, Kevin; Dollman, Jim; Tomkinson, Grant; Esterman, Adrian; English, Coralie; Lewis, Nicole; Olds, Tim

    2012-10-08

    The activitystat hypothesis proposes that when physical activity or energy expenditure is increased or decreased in one domain, there will be a compensatory change in another domain to maintain an overall, stable level of physical activity or energy expenditure. To date, there has been no experimental study primarily designed to test the activitystat hypothesis in adults. The aim of this trial is to determine the effect of two different imposed exercise loads on total daily energy expenditure and physical activity levels. This study will be a randomised, multi-arm, parallel controlled trial. Insufficiently active adults (as determined by the Active Australia survey) aged 18-60 years old will be recruited for this study (n=146). Participants must also satisfy the Sports Medicine Australia Pre-Exercise Screening System and must weigh less than 150 kg. Participants will be randomly assigned to one of three groups using a computer-generated allocation sequence. Participants in the Moderate exercise group will receive an additional 150 minutes of moderate to vigorous physical activity per week for six weeks, and those in the Extensive exercise group will receive an additional 300 minutes of moderate to vigorous physical activity per week for six weeks. Exercise targets will be accumulated through both group and individual exercise sessions monitored by heart rate telemetry. Control participants will not be given any instructions regarding lifestyle. The primary outcome measures are activity energy expenditure (doubly labeled water) and physical activity (accelerometry). Secondary measures will include resting metabolic rate via indirect calorimetry, use of time, maximal oxygen consumption and several anthropometric and physiological measures. Outcome measures will be conducted at baseline (zero weeks), mid- and end-intervention (three and six weeks) with three (12 weeks) and six month (24 week) follow-up. All assessors will be blinded to group allocation. 

  1. Ion microprobe analyses of aluminous lunar glasses - A test of the 'rock type' hypothesis

    Science.gov (United States)

    Meyer, C., Jr.

    1978-01-01

    Previous soil survey investigations found that there are natural groupings of glass compositions in lunar soils and that the average major element composition of some of these groupings is the same at widely separated lunar landing sites. This led soil survey enthusiasts to promote the hypothesis that the average composition of glass groupings represents the composition of primary lunar 'rock types'. In this investigation the trace element composition of numerous aluminous glass particles was determined by the ion microprobe method as a test of the above mentioned 'rock type' hypothesis. It was found that within any grouping of aluminous lunar glasses by major element content, there is considerable scatter in the refractory trace element content. In addition, aluminous glasses grouped by major elements were found to have different average trace element contents at different sites (Apollo 15, 16 and Luna 20). This evidence argues that natural groupings in glass compositions are determined by regolith processes and may not represent the composition of primary lunar 'rock types'.

  2. Robust real-time pattern matching using bayesian sequential hypothesis testing.

    Science.gov (United States)

    Pele, Ofir; Werman, Michael

    2008-08-01

    This paper describes a method for robust real time pattern matching. We first introduce a family of image distance measures, the "Image Hamming Distance Family". Members of this family are robust to occlusion, small geometrical transforms, light changes and non-rigid deformations. We then present a novel Bayesian framework for sequential hypothesis testing on finite populations. Based on this framework, we design an optimal rejection/acceptance sampling algorithm. This algorithm quickly determines whether two images are similar with respect to a member of the Image Hamming Distance Family. We also present a fast framework that designs a near-optimal sampling algorithm. Extensive experimental results show that the sequential sampling algorithm performance is excellent. Implemented on a Pentium 4 3 GHz processor, detection of a pattern with 2197 pixels, in 640 x 480 pixel frames, where in each frame the pattern rotated and was highly occluded, proceeds at only 0.022 seconds per frame.
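
    The early-stopping idea behind the sampling algorithm can be sketched as follows. This is a deliberately simplified stand-in, not the paper's Bayesian decision rule: sample pixel positions at random and reject similarity as soon as the mismatches already observed prove the Hamming distance exceeds the tolerated fraction.

```python
import random

def sequential_similarity_test(img_a, img_b, threshold=0.2, cap=100, seed=0):
    """Decide whether two equal-length pixel sequences are 'similar'
    (Hamming-distance fraction <= threshold). Samples positions without
    replacement and stops as soon as rejection is already certain."""
    rng = random.Random(seed)
    n = len(img_a)
    allowed = int(threshold * n)   # mismatches tolerated over the whole image
    mismatches = 0
    for k, i in enumerate(rng.sample(range(n), min(cap, n)), start=1):
        if img_a[i] != img_b[i]:
            mismatches += 1
        if mismatches > allowed:   # more mismatches than the full image allows
            return False, k        # early rejection after only k samples
    return True, min(cap, n)      # accept (probabilistic when cap < n)
```

    Rejection here is exact (sampled mismatches are a lower bound on the total), while acceptance is only probabilistic when the cap is below the image size; the paper's contribution is choosing the sample sequence and stopping bounds optimally within a Bayesian framework.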

  3. Congruence analysis of geodetic networks - hypothesis tests versus model selection by information criteria

    Science.gov (United States)

    Lehmann, Rüdiger; Lösler, Michael

    2017-12-01

    Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the usage of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution for the test problem, we propose the p-value approach. Another approach arises from information theory. Here, the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that they work but behave differently, sometimes even producing different results. Hypothesis tests are well-established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria like AIC.
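
    The contrast between the two selection approaches can be sketched on a nested pair of models. Both functions below are generic illustrations: the log-likelihood values and the hardcoded df = 1 chi-square critical values belong to the toy example, not to the geodetic test scenarios.

```python
def choose_by_aic(models):
    """models: (name, log-likelihood, n_params) triples; pick minimum AIC,
    where AIC = 2k - 2 ln L."""
    return min(models, key=lambda m: 2 * m[2] - 2 * m[1])[0]

def choose_by_lr_test(ll_null, ll_alt, alpha=0.05):
    """Likelihood-ratio test of a null model against a nested alternative
    with one extra parameter (chi-square critical values for df = 1)."""
    crit = {0.05: 3.841, 0.01: 6.635}[alpha]
    return "alternative" if 2 * (ll_alt - ll_null) > crit else "null"
```

    With ll_null = -100 and ll_alt = -98.5, the likelihood-ratio statistic is 3.0 < 3.841, so the test retains the null, while AIC (202 vs. 201) prefers the alternative: the two approaches can disagree, as the paper demonstrates on real networks.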

  4. A single test for rejecting the null hypothesis in subgroups and in the overall sample.

    Science.gov (United States)

    Lin, Yunzhi; Zhou, Kefei; Ganju, Jitendra

    2017-01-01

    In clinical trials, some patient subgroups are likely to demonstrate larger effect sizes than other subgroups. For example, the effect size, or informally the benefit with treatment, is often greater in patients with a moderate condition of a disease than in those with a mild condition. A limitation of the usual method of analysis is that it does not incorporate this ordering of effect size by patient subgroup. We propose a test statistic which supplements the conventional test by including this information and simultaneously tests the null hypothesis in pre-specified subgroups and in the overall sample. It results in more power than the conventional test when the differences in effect sizes across subgroups are at least moderately large; otherwise it loses power. The method involves combining p-values from models fit to pre-specified subgroups and the overall sample in a manner that assigns greater weight to subgroups in which a larger effect size is expected. Results are presented for randomized trials with two and three subgroups.
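
    One standard way to combine subgroup p-values with pre-specified weights is Stouffer's weighted-Z method, sketched below. The weighting scheme is illustrative of the idea of up-weighting subgroups where larger effects are expected; it is not necessarily the paper's exact combination rule.

```python
from statistics import NormalDist

def weighted_stouffer(p_values, weights):
    """Combine one-sided p-values via Stouffer's weighted-Z method.
    Larger (pre-specified) weights give a subgroup more influence."""
    nd = NormalDist()
    z = sum(w * nd.inv_cdf(1 - p) for p, w in zip(p_values, weights))
    z /= sum(w * w for w in weights) ** 0.5
    return 1 - nd.cdf(z)
```

    Two subgroups each at p = 0.05 combine to roughly p = 0.01 with equal weights, which is how agreement across subgroups can yield more power than the conventional pooled test.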

  5. Testing the fire-sale FDI hypothesis for the European financial crisis

    NARCIS (Netherlands)

    Weitzel, G.U.; Kling, G.; Gerritsen, D.

    2014-01-01

    Using a panel of corporate transactions in 27 EU countries from 1999 to 2012, we investigate the impact of the financial crisis on the market for corporate assets. In particular, we test the ‘fire-sale FDI’ hypothesis by analyzing the number of cross-border transactions, the price of corporate

  6. Testing the Fire-Sale FDI Hypothesis for the European Financial Crisis

    NARCIS (Netherlands)

    Kling, G.; Gerritsen, Dirk; Weitzel, Gustav Utz

    2014-01-01

    Using a panel of corporate transactions in 27 EU countries from 1999 to 2012, we investigate the impact of the financial crisis on the market for corporate assets. In particular, we test the ‘fire-sale FDI’ hypothesis by analyzing the number of cross-border transactions, the price of corporate

  7. [Experimental testing of Pflüger's reflex hypothesis of menstruation in late 19th century].

    Science.gov (United States)

    Simmer, H H

    1980-07-01

    Pflüger's hypothesis of a nerve reflex as the cause of menstruation, published in 1865 and accepted by many, nonetheless did not lead to experimental investigations for 25 years. According to this hypothesis the nerve reflex starts in the ovary by an increase of the intraovarian pressure by the growing follicles. In 1884 Adolph Kehrer proposed a program to test the nerve reflex, but only in 1890 did Cohnstein artificially increase the intraovarian pressure in women by bimanual compression from the outside and the vagina. His results were not convincing. Six years later, Strassmann injected fluids into ovaries of animals and obtained changes in the uterus resembling those of oestrus. His results seemed to verify a prognosis derived from Pflüger's hypothesis. Thus, after a long interval, that hypothesis had become a paradigm. Though reasons can be given for the delay, it is little understood why experimental testing started so late.

  8. Testing the implicit processing hypothesis of precognitive dream experience.

    Science.gov (United States)

    Valášek, Milan; Watt, Caroline; Hutton, Jenny; Neill, Rebecca; Nuttall, Rachel; Renwick, Grace

    2014-08-01

    Seemingly precognitive (prophetic) dreams may be a result of one's unconscious processing of environmental cues and having an implicit inference based on these cues manifest itself in one's dreams. We present two studies exploring this implicit processing hypothesis of precognitive dream experience. Study 1 investigated the relationship between implicit learning, transliminality, and precognitive dream belief and experience. Participants completed the Serial Reaction Time task and several questionnaires. We predicted a positive relationship between the variables. With the exception of relationships between transliminality and precognitive dream belief and experience, this prediction was not supported. Study 2 tested the hypothesis that differences in the ability to notice subtle cues explicitly might account for precognitive dream beliefs and experiences. Participants completed a modified version of the flicker paradigm. We predicted a negative relationship between the ability to explicitly detect changes and precognitive dream variables. This relationship was not found. There was also no relationship between precognitive dream belief and experience and implicit change detection. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Mental Abilities and School Achievement: A Test of a Mediation Hypothesis

    Science.gov (United States)

    Vock, Miriam; Preckel, Franzis; Holling, Heinz

    2011-01-01

    This study analyzes the interplay of four cognitive abilities--reasoning, divergent thinking, mental speed, and short-term memory--and their impact on academic achievement in school in a sample of adolescents in grades seven to 10 (N = 1135). Based on information processing approaches to intelligence, we tested a mediation hypothesis, which states…

  10. The Need for Nuance in the Null Hypothesis Significance Testing Debate

    Science.gov (United States)

    Häggström, Olle

    2017-01-01

    Null hypothesis significance testing (NHST) provides an important statistical toolbox, but there are a number of ways in which it is often abused and misinterpreted, with bad consequences for the reliability and progress of science. Parts of contemporary NHST debate, especially in the psychological sciences, is reviewed, and a suggestion is made…

  11. Reliability Evaluation of Concentric Butterfly Valve Using Statistical Hypothesis Test

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Mu Seong; Choi, Jong Sik; Choi, Byung Oh; Kim, Do Sik [Korea Institute of Machinery and Materials, Daejeon (Korea, Republic of)

    2015-12-15

    A butterfly valve is a type of flow-control device typically used to regulate a fluid flow. This paper presents an estimation of the shape parameter of the Weibull distribution, characteristic life, and B10 life for a concentric butterfly valve based on a statistical analysis of the reliability test data taken before and after the valve improvement. The difference in the shape and scale parameters between the existing and improved valves is reviewed using a statistical hypothesis test. The test results indicate that the shape parameter of the improved valve is similar to that of the existing valve, and that the scale parameter of the improved valve is found to have increased. These analysis results are particularly useful for a reliability qualification test and the determination of the service life cycles.
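
    The B10 life quoted in the abstract follows directly from the two-parameter Weibull CDF, F(t) = 1 − exp(−(t/η)^β): setting F = 0.10 gives B10 = η(−ln 0.9)^(1/β). A minimal sketch; the shape and scale values used in the example are hypothetical, not the valve's fitted parameters.

```python
import math

def b10_life(shape, scale):
    """Time by which 10% of units are expected to fail for a two-parameter
    Weibull with the given shape (beta) and characteristic life (scale, eta):
    B10 = scale * (-ln 0.9)**(1 / shape)."""
    return scale * (-math.log(0.9)) ** (1.0 / shape)

# Hypothetical example: shape = 2.0, characteristic life = 1000 cycles
example_b10 = b10_life(2.0, 1000.0)
```

    Note that for any shape parameter the B10 life is well below the characteristic life (at which roughly 63.2% of units have failed), which is why B10 rather than the scale parameter is the usual service-life figure.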

  12. Reliability Evaluation of Concentric Butterfly Valve Using Statistical Hypothesis Test

    International Nuclear Information System (INIS)

    Chang, Mu Seong; Choi, Jong Sik; Choi, Byung Oh; Kim, Do Sik

    2015-01-01

    A butterfly valve is a type of flow-control device typically used to regulate a fluid flow. This paper presents an estimation of the shape parameter of the Weibull distribution, characteristic life, and B10 life for a concentric butterfly valve based on a statistical analysis of the reliability test data taken before and after the valve improvement. The difference in the shape and scale parameters between the existing and improved valves is reviewed using a statistical hypothesis test. The test results indicate that the shape parameter of the improved valve is similar to that of the existing valve, and that the scale parameter of the improved valve is found to have increased. These analysis results are particularly useful for a reliability qualification test and the determination of the service life cycles.

  13. Testing hypotheses and the advancement of science: recent attempts to falsify the equilibrium point hypothesis.

    Science.gov (United States)

    Feldman, Anatol G; Latash, Mark L

    2005-02-01

    Criticisms of the equilibrium point (EP) hypothesis have recently appeared that are based on misunderstandings of some of its central notions. Starting from such interpretations of the hypothesis, incorrect predictions are made and tested. When the incorrect predictions prove false, the hypothesis is claimed to be falsified. In particular, the hypothesis has been rejected based on the wrong assumptions that it conflicts with empirically defined joint stiffness values or that it is incompatible with violations of equifinality under certain velocity-dependent perturbations. Typically, such attempts use notions describing the control of movements of artificial systems in place of physiologically relevant ones. While appreciating constructive criticisms of the EP hypothesis, we feel that incorrect interpretations have to be clarified by reiterating what the EP hypothesis does and does not predict. We conclude that the recent claims of falsifying the EP hypothesis and the calls for its replacement by EMG-force control hypothesis are unsubstantiated. The EP hypothesis goes far beyond the EMG-force control view. In particular, the former offers a resolution for the famous posture-movement paradox while the latter fails to resolve it.

  14. Acceptance sampling for attributes via hypothesis testing and the hypergeometric distribution

    Science.gov (United States)

    Samohyl, Robert Wayne

    2017-10-01

    This paper questions some aspects of attribute acceptance sampling in light of the original concepts of hypothesis testing from Neyman and Pearson (NP). Attribute acceptance sampling in industry, as developed by Dodge and Romig (DR), generally follows the international standards of ISO 2859, and similarly the Brazilian standards NBR 5425 to NBR 5427 and the United States Standards ANSI/ASQC Z1.4. The paper evaluates and extends the area of acceptance sampling in two directions. First, by suggesting the use of the hypergeometric distribution to calculate the parameters of sampling plans avoiding the unnecessary use of approximations such as the binomial or Poisson distributions. We show that, under usual conditions, discrepancies can be large. The conclusion is that the hypergeometric distribution, ubiquitously available in commonly used software, is more appropriate than other distributions for acceptance sampling. Second, and more importantly, we elaborate the theory of acceptance sampling in terms of hypothesis testing rigorously following the original concepts of NP. By offering a common theoretical structure, hypothesis testing from NP can produce a better understanding of applications even beyond the usual areas of industry and commerce such as public health and political polling. With the new procedures, both sample size and sample error can be reduced. What is unclear in traditional acceptance sampling is the necessity of linking the acceptable quality limit (AQL) exclusively to the producer and the lot quality percent defective (LTPD) exclusively to the consumer. In reality, the consumer should also be preoccupied with a value of AQL, as should the producer with LTPD. Furthermore, we can also question why type I error is always uniquely associated with the producer as producer risk, and likewise, the same question arises with consumer risk which is necessarily associated with type II error. The resolution of these questions is new to the literature.
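
    The exact acceptance probability the paper advocates, free of binomial or Poisson approximation, is a hypergeometric tail sum. A minimal sketch; the plan parameters in the example are hypothetical, not taken from ISO 2859.

```python
from math import comb

def accept_prob_hypergeom(N, D, n, c):
    """Probability of accepting a lot of size N containing D defectives,
    under a plan that inspects n items and accepts when at most c are
    defective: sum of the hypergeometric pmf from k = 0 to c.
    (math.comb returns 0 for impossible draws, so no edge cases needed.)"""
    return sum(comb(D, k) * comb(N - D, n - k)
               for k in range(c + 1)) / comb(N, n)

# Hypothetical plan: lot of 50, inspect 10, accept on at most 1 defective
pa = accept_prob_hypergeom(50, 5, 10, 1)
```

    Because sampling is without replacement from a finite lot, the hypergeometric form is exact; the binomial approximation overstates the variance most when the sample is a large fraction of the lot, which is where the paper reports large discrepancies.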

  15. Men’s Perception of Raped Women: Test of the Sexually Transmitted Disease Hypothesis and the Cuckoldry Hypothesis

    Directory of Open Access Journals (Sweden)

    Prokop Pavol

    2016-06-01

    Rape is a recurrent adaptive problem of female humans and females of a number of non-human animals. Rape has various physiological and reproductive costs to the victim. The costs of rape are furthermore exaggerated by social rejection and blaming of the victim, particularly by men. The negative perception of raped women by men has received little attention from an evolutionary perspective. Across two independent studies, we investigated whether the risk of sexually transmitted diseases (the STD hypothesis; Hypothesis 1) or paternity uncertainty (the cuckoldry hypothesis; Hypothesis 2) influences the negative perception of raped women by men. Raped women received lower attractiveness scores than non-raped women, especially in long-term mate attractiveness. The perceived attractiveness of raped women was not influenced by the presence of experimentally manipulated STD cues on the faces of putative rapists. Women raped by three men received lower attractiveness scores than women raped by one man. These results provide stronger support for the cuckoldry hypothesis (Hypothesis 2) than for the STD hypothesis (Hypothesis 1). Single men perceived raped women as more attractive than men in a committed relationship did (Hypothesis 3), suggesting that mating opportunities mediate men’s perception of victims of rape. Overall, our results suggest that the risk of cuckoldry, rather than the fear of disease transmission, underlies the negative perception of victims of rape by men.

  16. Test-potentiated learning: three independent replications, a disconfirmed hypothesis, and an unexpected boundary condition.

    Science.gov (United States)

    Wissman, Kathryn T; Rawson, Katherine A

    2018-04-01

    Arnold and McDermott [(2013). Test-potentiated learning: Distinguishing between direct and indirect effects of testing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39, 940-945] isolated the indirect effects of testing and concluded that encoding is enhanced to a greater extent following more versus fewer practice tests, referred to as test-potentiated learning. The current research provided further evidence for test-potentiated learning and evaluated the covert retrieval hypothesis as an alternative explanation for the observed effect. Learners initially studied foreign language word pairs and then completed either one or five practice tests before restudy occurred. Results of greatest interest concern performance on test trials following restudy for items that were not correctly recalled on the test trials that preceded restudy. Results replicate Arnold and McDermott (2013) by demonstrating that more versus fewer tests potentiate learning when trial time is limited. Results also provide strong evidence against the covert retrieval hypothesis concerning why the effect occurs (i.e., it does not reflect differential covert retrieval during pre-restudy trials). In addition, outcomes indicate that the magnitude of the test-potentiated learning effect decreases as trial length increases, revealing an unexpected boundary condition to test-potentiated learning.

  17. Resemblance profiles as clustering decision criteria: Estimating statistical power, error, and correspondence for a hypothesis test for multivariate structure.

    Science.gov (United States)

    Kilborn, Joshua P; Jones, David L; Peebles, Ernst B; Naar, David F

    2017-04-01

    Clustering data continues to be a highly active area of data analysis, and resemblance profiles are being incorporated into ecological methodologies as a hypothesis testing-based approach to clustering multivariate data. However, these new clustering techniques have not been rigorously tested to determine the performance variability based on the algorithm's assumptions or any underlying data structures. Here, we use simulation studies to estimate the statistical error rates for the hypothesis test for multivariate structure based on dissimilarity profiles (DISPROF). We concurrently tested a widely used algorithm that employs the unweighted pair group method with arithmetic mean (UPGMA) to estimate the proficiency of clustering with DISPROF as a decision criterion. We simulated unstructured multivariate data from different probability distributions with increasing numbers of objects and descriptors, and grouped data with increasing overlap, overdispersion for ecological data, and correlation among descriptors within groups. Using simulated data, we measured the resolution and correspondence of clustering solutions achieved by DISPROF with UPGMA against the reference grouping partitions used to simulate the structured test datasets. Our results highlight the dynamic interactions between dataset dimensionality, group overlap, and the properties of the descriptors within a group (i.e., overdispersion or correlation structure) that are relevant to resemblance profiles as a clustering criterion for multivariate data. These methods are particularly useful for multivariate ecological datasets that benefit from distance-based statistical analyses. We propose guidelines for using DISPROF as a clustering decision tool that will help future users avoid potential pitfalls during the application of methods and the interpretation of results.

  18. Picture-Perfect Is Not Perfect for Metamemory: Testing the Perceptual Fluency Hypothesis with Degraded Images

    Science.gov (United States)

    Besken, Miri

    2016-01-01

    The perceptual fluency hypothesis claims that items that are easy to perceive at encoding induce an illusion that they will be easier to remember, despite the finding that perception does not generally affect recall. The current set of studies tested the predictions of the perceptual fluency hypothesis with a picture generation manipulation.…

  19. Long-term resource variation and group size: A large-sample field test of the Resource Dispersion Hypothesis

    Directory of Open Access Journals (Sweden)

    Morecroft Michael D

    2001-07-01

    Background: The Resource Dispersion Hypothesis (RDH) proposes a mechanism for the passive formation of social groups where resources are dispersed, even in the absence of any benefits of group living per se. Despite supportive modelling, it lacks empirical testing. The RDH predicts that, rather than Territory Size (TS) increasing monotonically with Group Size (GS) to account for increasing metabolic needs, TS is constrained by the dispersion of resource patches, whereas GS is independently limited by their richness. We conducted multiple-year tests of these predictions using data from the long-term study of badgers Meles meles in Wytham Woods, England. The study has long failed to identify direct benefits from group living and, consequently, alternative explanations for their large group sizes have been sought. Results: TS was not consistently related to resource dispersion, nor was GS consistently related to resource richness. Results differed according to data groupings and whether territories were mapped using minimum convex polygons or traditional methods. Habitats differed significantly in resource availability, but there was also evidence that food resources may be spatially aggregated within habitat types as well as between them. Conclusions: This is, we believe, the largest ever test of the RDH and builds on the long-term project that initiated part of the thinking behind the hypothesis. Support for predictions was mixed and depended on year and the method used to map territory borders. We suggest that within-habitat patchiness, as well as model assumptions, should be further investigated for improved tests of the RDH in the future.

  20. Why is muscularity sexy? Tests of the fitness indicator hypothesis.

    Science.gov (United States)

    Frederick, David A; Haselton, Martie G

    2007-08-01

    Evolutionary scientists propose that exaggerated secondary sexual characteristics are cues of genes that increase offspring viability or reproductive success. In six studies the hypothesis that muscularity is one such cue is tested. As predicted, women rate muscular men as sexier, more physically dominant and volatile, and less committed to their mates than nonmuscular men. Consistent with the inverted-U hypothesis of masculine traits, men with moderate muscularity are rated most attractive. Consistent with past research on fitness cues, across two measures, women indicate that their most recent short-term sex partners were more muscular than their other sex partners (ds = .36, .47). Across three studies, when controlling for other characteristics (e.g., body fat), muscular men rate their bodies as sexier to women (partial rs = .49-.62) and report more lifetime sex partners (partial rs = .20-.27), short-term partners (partial rs = .25-.28), and more affairs with mated women (partial r = .28).

  1. Statistical hypothesis tests of some micrometeorological observations

    International Nuclear Information System (INIS)

    SethuRaman, S.; Tichler, J.

    1977-01-01

    Chi-square goodness-of-fit is used to test the hypothesis that the medium scale of turbulence in the atmospheric surface layer is normally distributed. Coefficients of skewness and excess are computed from the data. If the data are not normal, these coefficients are used in Edgeworth's asymptotic expansion of the Gram-Charlier series to determine an alternate probability density function. The observed data are then compared with the modified probability densities and the new chi-square values computed. Seventy percent of the data analyzed was either normal or approximately normal. The coefficient of skewness g1 has a good correlation with the chi-square values. Events with |g1| < 0.43 were approximately normal. Intermittency associated with the formation and breaking of internal gravity waves in surface-based inversions over water is thought to be the reason for the non-normality.
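
    The skewness coefficient g1 used to screen the data is the standardized third central moment. A minimal sketch (the 0.43 cutoff echoes the abstract's threshold for approximate normality; the sample data are invented for illustration):

```python
def skewness_g1(xs):
    """Sample skewness g1 = m3 / m2**1.5, where m_k is the
    k-th central moment of the sample."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

# Symmetric data give g1 near 0; a long right tail gives g1 > 0.
symmetric = skewness_g1([1, 2, 3, 4, 5])
right_tailed = skewness_g1([1, 1, 1, 10])
```

    Values of |g1| beyond the cutoff would, as in the paper, trigger the Gram-Charlier correction to the assumed density before re-running the chi-square test.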

  2. Planned Hypothesis Tests Are Not Necessarily Exempt From Multiplicity Adjustment

    Directory of Open Access Journals (Sweden)

    Andrew V. Frane

    2015-10-01

    Scientific research often involves testing more than one hypothesis at a time, which can inflate the probability that a Type I error (false discovery) will occur. To prevent this Type I error inflation, adjustments can be made to the testing procedure that compensate for the number of tests. Yet many researchers believe that such adjustments are inherently unnecessary if the tests were “planned” (i.e., if the hypotheses were specified before the study began). This longstanding misconception continues to be perpetuated in textbooks and continues to be cited in journal articles to justify disregard for Type I error inflation. I critically evaluate this myth and examine its rationales and variations. To emphasize the myth’s prevalence and relevance in current research practice, I provide examples from popular textbooks and from recent literature. I also make recommendations for improving research practice and pedagogy regarding this problem and regarding multiple testing in general.
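
    A concrete example of a familywise adjustment that applies equally to planned and unplanned tests is Holm's step-down procedure, sketched below. It is chosen here for illustration; the article argues about multiplicity adjustment in general rather than prescribing this specific method.

```python
def holm_reject(p_values, alpha=0.05):
    """Holm step-down procedure: compare the smallest p-value to alpha/m,
    the next to alpha/(m-1), and so on, stopping at the first failure.
    Returns reject/retain decisions in the original order; controls the
    familywise Type I error rate regardless of whether tests were planned."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break
    return reject
```

    For p-values [0.01, 0.04, 0.03], every test would pass an unadjusted 0.05 threshold, yet Holm rejects only the first: exactly the Type I inflation the article warns that "planned" tests do not escape.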

  3. Variability: A Pernicious Hypothesis.

    Science.gov (United States)

    Noddings, Nel

    1992-01-01

    The hypothesis of greater male variability in test results is discussed in its historical context, and reasons feminists have objected to the hypothesis are considered. The hypothesis acquires political importance if it is considered that variability results from biological, rather than cultural, differences. (SLD)

  4. Graphic tests of Easterlin's hypothesis: science or art?

    Science.gov (United States)

    Rutten, A; Higgs, R

    1984-01-01

    Richard Easterlin believes that the postwar fertility cycle is uniquely consistent with the hypothesis of his relative income model of fertility, yet a closer examination of his evidence shows that the case for the relative income explanation is much weaker than it initially appears. Easterlin finds the postwar baby boom a transparent event. Couples who entered the labor market in the postwar period had very low material aspirations. Having grown up during the Great Depression and World War II, they were content with a modest level of living. Their labor market experience was very good. Tight restrictions on immigration kept aliens from coming in to fill the gap. Thus the members of this generation occupied an unprecedented position. They could easily meet and even exceed their expectations. This high level of relative income meant that they could have more of everything they wanted, including children. For the children born during the baby boom, all this was reversed, and hence the seeds of the baby bust were sown. To test this hypothesis, Easterlin compared the movements of relative income and fertility over the postwar years using a graph. 4 published versions of the graph are presented. The graph shows that relative income and fertility did move together over the cycle, apparently very closely. Easterlin's measure of fertility is the total fertility rate (TFR). There is no such direct measure of relative income. Easterlin develops 2 proxies based on changing economic conditions believed to shape the level of material aspirations. His preferred measure, labeled R or income in his graph, relates the income experience of young couples in the years previous to marriage to that of their parents in the years before the young people left home. Because the available data limit construction of this index to the years after 1956, another measure, labeled Re or employment in Easterlin's graphs, is constructed for the pre-1956 period. This measure relates the average of

  5. Praise the Bridge that Carries You Over: Testing the Flattery Citation Hypothesis

    DEFF Research Database (Denmark)

    Frandsen, Tove Faber; Nicolaisen, Jeppe

    2011-01-01

    analysis of the editorial board members entering American Economic Review from 1984 to 2004 using a citation window of 11 years. In order to test the flattery citation hypothesis further we have conducted a study applying the difference-in-difference estimator. We analyse the number of times the editors...

  6. Testing of Frank's hypothesis on a containerless packing of macroscopic soft spheres and comparison with mono-atomic metallic liquids

    International Nuclear Information System (INIS)

    Sahu, K.K.; Wessels, V.; Kelton, K.F.; Loeffler, J.F.

    2011-01-01

    Highlights: → Testing of Frank's hypothesis for Centripetal Packing (CP) has been proposed. → It is shown that CP is an idealized model for Monatomic Supercooled Liquid (MSL). → The CP is fit for comparing with studies on MSL in a containerless environment. → We measure local orders in CP by HA and BOO methods for the first time. → It is shown that icosahedral order is greater in CP than MSL and reasons explored. - Abstract: It is well-known that metallic liquids can exist below their equilibrium melting temperature for a considerable time. To explain this, Frank proposed that icosahedral ordering, incompatible with crystalline long-range order, is prevalent in the atomic structure of these liquids, stabilizing them and enabling them to be supercooled. Some studies of the atomic structures of metallic liquids using Beam-line Electrostatic Levitation (BESL; containerless melting), and other techniques, support this hypothesis. Here we examine Frank's hypothesis in a system of macroscopic, monodisperse deformable spheres obtained by containerless packing under the influence of centripetal force. The local structure of this packing is analyzed and compared with atomic ensembles of liquid transition metals obtained by containerless melting using the BESL method.

  7. Speech production in people who stutter: Testing the motor plan assembly hypothesis

    NARCIS (Netherlands)

    Lieshout, P.H.H.M. van; Hulstijn, W.; Peters, H.F.M.

    1996-01-01

    The main purpose of the present study was to test the hypothesis that persons who stutter, when compared to persons who do not stutter, are less able to assemble abstract motor plans for short verbal responses. Subjects were adult males who stutter and age- and sex-matched control speakers, who were

  8. Using modern human cortical bone distribution to test the systemic robusticity hypothesis.

    Science.gov (United States)

    Baab, Karen L; Copes, Lynn E; Ward, Devin L; Wells, Nora; Grine, Frederick E

    2018-06-01

    The systemic robusticity hypothesis links the thickness of cortical bone in both the cranium and limb bones. This hypothesis posits that thick cortical bone is in part a systemic response to circulating hormones, such as growth hormone and thyroid hormone, possibly related to physical activity or cold climates. Although this hypothesis has gained popular traction, only rarely has robusticity of the cranium and postcranial skeleton been considered jointly. We acquired computed tomographic scans from associated crania, femora and humeri from single individuals representing 11 populations in Africa and North America (n = 228). Cortical thickness in the parietal, frontal and occipital bones and cortical bone area in limb bone diaphyses were analyzed using correlation, multiple regression and general linear models to test the hypothesis. Absolute thickness values from the crania were not correlated with cortical bone area of the femur or humerus, which is at odds with the systemic robusticity hypothesis. However, measures of cortical bone scaled by total vault thickness and limb cross-sectional area were positively correlated between the cranium and postcranium. When accounting for a range of potential confounding variables, including sex, age and body mass, variation in relative postcranial cortical bone area explained ∼20% of variation in the proportion of cortical cranial bone thickness. While these findings provide limited support for the systemic robusticity hypothesis, cranial cortical thickness did not track climate or physical activity across populations. Thus, some of the variation in cranial cortical bone thickness in modern humans is attributable to systemic effects, but the driving force behind this effect remains obscure. Moreover, neither absolute nor proportional measures of cranial cortical bone thickness are positively correlated with total cranial bone thickness, complicating the extrapolation of these findings to extinct species where only cranial

  9. Aging and motor variability: a test of the neural noise hypothesis.

    Science.gov (United States)

    Sosnoff, Jacob J; Newell, Karl M

    2011-07-01

    Experimental tests of the neural noise hypothesis of aging, which holds that aging-related increments in motor variability are due to increases in white noise in the perceptual-motor system, were conducted. Young (20-29 years old) and old (60-69 and 70-79 years old) adults performed several perceptual-motor tasks. Older adults were progressively more variable in their performance outcome, but there was no age-related difference in white noise in the motor output. Older adults had a greater frequency-dependent structure in their motor variability that was associated with performance decrements. The findings challenge the main tenet of the neural noise hypothesis of aging in that the increased variability of older adults was due to a decreased ability to adapt to the constraints of the task rather than an increment of neural noise per se.

  10. Balassa-Samuelson Hypothesis: A Test Of Turkish Economy By ARDL Bound Testing Approach

    Directory of Open Access Journals (Sweden)

    Utku ALTUNÖZ

    2014-06-01

    Full Text Available The Balassa-Samuelson effect, introduced by Béla Balassa (1964) and Paul Samuelson (1964), has been a popular theme in recent years. This concept suggests that a differential at the international level between the relative productivity rates of the tradable and non-tradable sectors may cause structural and permanent deviations from purchasing power parity. In this essay, the relevant variables are tested for the Balassa-Samuelson effect in the context of the Turkish and European economies. The choice of econometric technique used to estimate the model was important because the regressors in the model appeared to be a mixture of I(0) and I(1) processes. Thus the ARDL bounds testing approach to cointegration analysis was used in estimating the long-run determinants of the real exchange rates. Given the dataset and econometric techniques used, the results do not support the B-S hypothesis.

  11. Mirror bootstrap method for testing hypotheses of one mean

    OpenAIRE

    Varvak, Anna

    2012-01-01

    The general philosophy for bootstrap or permutation methods for testing hypotheses is to simulate the variation of the test statistic by generating the sampling distribution which assumes both that the null hypothesis is true, and that the data in the sample is somehow representative of the population. This philosophy is inapplicable for testing hypotheses for a single parameter like the population mean, since the two assumptions are contradictory (e.g., how can we assume both that the mean o...
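    The standard resolution of the contradiction described above is to shift the sample so that the null hypothesis holds exactly, then resample from the shifted data. A minimal sketch of that conventional approach (not the author's proposed "mirror" method):

```python
import numpy as np

def bootstrap_mean_test(x, mu0, n_boot=10_000, seed=0):
    """Two-sided bootstrap test of H0: population mean == mu0.

    The sample is translated so H0 is exactly true, then bootstrap
    means are compared with the observed deviation from mu0.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    shifted = x - x.mean() + mu0          # impose the null hypothesis
    boots = rng.choice(shifted, size=(n_boot, len(x)), replace=True).mean(axis=1)
    observed = abs(x.mean() - mu0)
    # p-value: fraction of bootstrap means at least as far from mu0
    return float(np.mean(np.abs(boots - mu0) >= observed))
```

    The shift step is exactly the assumption the abstract questions: the resampled data are forced to satisfy the null while still being treated as representative of the population.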

  12. [A test of the focusing hypothesis for category judgment: an explanation using the mental-box model].

    Science.gov (United States)

    Hatori, Tsuyoshi; Takemura, Kazuhisa; Fujii, Satoshi; Ideno, Takashi

    2011-06-01

    This paper presents a new model of category judgment. The model hypothesizes that, when more attention is focused on a category, the psychological range of the category gets narrower (category-focusing hypothesis). We explain this hypothesis by using the metaphor of a "mental-box" model: the more attention that is focused on a mental box (i.e., a category set), the smaller the size of the box becomes (i.e., a cardinal number of the category set). The hypothesis was tested in an experiment (N = 40), where the focus of attention on prescribed verbal categories was manipulated. The obtained data gave support to the hypothesis: category-focusing effects were found in three experimental tasks (regarding the category of "food", "height", and "income"). The validity of the hypothesis was discussed based on the results.

  13. A test of the herbivore optimization hypothesis using muskoxen and a graminoid meadow plant community

    Directory of Open Access Journals (Sweden)

    David L. Smith

    1996-01-01

    Full Text Available A prediction from the herbivore optimization hypothesis is that grazing by herbivores at moderate intensities will increase net above-ground primary productivity more than at lower or higher intensities. I tested this hypothesis in an area of high muskox (Ovibos moschatus) density on north-central Banks Island, Northwest Territories, Canada (73°50'N, 119°53'W). Plots (1 m²) in graminoid meadows dominated by cottongrass (Eriophorum triste) were either clipped, exposed to muskoxen, protected for part of one growing season, or permanently protected. This resulted in the removal of 22-44%, 10-39%, 0-39% or 0%, respectively, of shoot tissue during each growing season. Contrary to the predictions of the herbivore optimization hypothesis, productivity did not increase across this range of tissue removal. Productivity of plants clipped at 1.5 cm above ground, once or twice per growing season, declined by 60 ± 5% in 64% of the tests. The productivity of plants grazed by muskoxen declined by 56 ± 7% in 25% of the tests. No significant change in productivity was observed in 36% and 75% of the tests in the clipped and grazed treatments, respectively. Clipping and grazing reduced below-ground standing crop except where removals were small. Grazing and clipping did not stimulate productivity of north-central Banks Island graminoid meadows.

  14. Paranormal psychic believers and skeptics: a large-scale test of the cognitive differences hypothesis.

    Science.gov (United States)

    Gray, Stephen J; Gallo, David A

    2016-02-01

    Belief in paranormal psychic phenomena is widespread in the United States, with over a third of the population believing in extrasensory perception (ESP). Why do some people believe, while others are skeptical? According to the cognitive differences hypothesis, individual differences in the way people process information about the world can contribute to the creation of psychic beliefs, such as differences in memory accuracy (e.g., selectively remembering a fortune teller's correct predictions) or analytical thinking (e.g., relying on intuition rather than scrutinizing evidence). While this hypothesis is prevalent in the literature, few have attempted to empirically test it. Here, we provided the most comprehensive test of the cognitive differences hypothesis to date. In 3 studies, we used online screening to recruit groups of strong believers and strong skeptics, matched on key demographics (age, sex, and years of education). These groups were then tested in laboratory and online settings using multiple cognitive tasks and other measures. Our cognitive testing showed that there were no consistent group differences on tasks of episodic memory distortion, autobiographical memory distortion, or working memory capacity, but skeptics consistently outperformed believers on several tasks tapping analytical or logical thinking as well as vocabulary. These findings demonstrate cognitive similarities and differences between these groups and suggest that differences in analytical thinking and conceptual knowledge might contribute to the development of psychic beliefs. We also found that psychic belief was associated with greater life satisfaction, demonstrating benefits associated with psychic beliefs and highlighting the role of both cognitive and noncognitive factors in understanding these individual differences.

  15. A test of the predator satiation hypothesis, acorn predator size, and acorn preference

    Science.gov (United States)

    C.H. Greenberg; S.J. Zarnoch

    2018-01-01

    Mast seeding is hypothesized to satiate seed predators with heavy production and reduce populations with crop failure, thereby increasing seed survival. Preference for red or white oak acorns could influence recruitment among oak species. We tested the predator satiation hypothesis, acorn preference, and predator size by concurrently...

  16. Persistent Confusions about Hypothesis Testing in the Social Sciences

    Directory of Open Access Journals (Sweden)

    Christopher Thron

    2015-05-01

    Full Text Available This paper analyzes common confusions involving basic concepts in statistical hypothesis testing. One-third of the social science statistics textbooks examined in the study contained false statements about significance level and/or p-value. We infer that a large proportion of social scientists are being miseducated about these concepts. We analyze the causes of these persistent misunderstandings, and conclude that the conventional terminology is prone to abuse because it does not clearly represent the conditional nature of probabilities and events involved. We argue that modifications in terminology, as well as the explicit introduction of conditional probability concepts and notation into the statistics curriculum in the social sciences, are necessary to prevent the persistence of these errors.
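    The conditional nature the authors emphasize is easy to demonstrate by simulation: the significance level is P(reject | H0 true), so under a true null hypothesis the long-run rate of p ≤ α is α itself, not the probability that H0 is true. A minimal illustration (not from the paper):

```python
import numpy as np
from scipy import stats

def rejection_rate_under_null(alpha=0.05, n_sims=2000, n=30, seed=0):
    """Simulate one-sample t-tests with H0 true; the long-run rejection
    rate estimates the conditional probability P(p <= alpha | H0)."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        sample = rng.normal(loc=0.0, size=n)   # H0: mean == 0 is true
        _, p = stats.ttest_1samp(sample, 0.0)
        rejections += (p <= alpha)
    return rejections / n_sims
```

    Running this yields a rejection rate close to 0.05, whatever the sample size, which is what "significance level" actually guarantees.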

  17. Why Is Test-Restudy Practice Beneficial for Memory? An Evaluation of the Mediator Shift Hypothesis

    Science.gov (United States)

    Pyc, Mary A.; Rawson, Katherine A.

    2012-01-01

    Although the memorial benefits of testing are well established empirically, the mechanisms underlying this benefit are not well understood. The authors evaluated the mediator shift hypothesis, which states that test-restudy practice is beneficial for memory because retrieval failures during practice allow individuals to evaluate the effectiveness…

  18. Convergence Hypothesis: Evidence from Panel Unit Root Test with Spatial Dependence

    Directory of Open Access Journals (Sweden)

    Lezheng Liu

    2006-10-01

    Full Text Available In this paper we test the convergence hypothesis by using a revised 4-step procedure of panel unit root testing suggested by Evans and Karras (1996). We use data on output for 24 OECD countries spanning 40 years. Whether the convergence, if any, is conditional or absolute is also examined. Following a proposition by Baltagi, Bresson, and Pirotte (2005), we incorporate a spatial autoregressive error into a fixed-effect panel model to account not only for the heterogeneous panel structure, but also for spatial dependence, which might reduce the statistical power of conventional panel unit root tests. Our empirical results indicate that output is converging among OECD countries. However, convergence is characterized as conditional. The results also report a relatively lower convergence speed compared to conventional panel studies.

  19. Combining Multiple Hypothesis Testing with Machine Learning Increases the Statistical Power of Genome-wide Association Studies

    Science.gov (United States)

    Mieth, Bettina; Kloft, Marius; Rodríguez, Juan Antonio; Sonnenburg, Sören; Vobruba, Robin; Morcillo-Suárez, Carlos; Farré, Xavier; Marigorta, Urko M.; Fehr, Ernst; Dickhaus, Thorsten; Blanchard, Gilles; Schunk, Daniel; Navarro, Arcadi; Müller, Klaus-Robert

    2016-01-01

    The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes correlation structures within the set of SNPs under investigation in a mathematically well-controlled manner into account. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008–2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI upon WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0. PMID:27892471
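    The two-step structure of COMBI can be caricatured as screen-then-test. The sketch below substitutes a simple univariate correlation screen for COMBI's support vector machine and a plain Bonferroni correction for its threshold adjustment, so it illustrates the general idea only, not the published method:

```python
import numpy as np
from scipy import stats

def screen_then_test(genotypes, phenotype, k=20, alpha=0.05):
    """Two-step association testing in the spirit of COMBI:
    (1) screen SNPs by a linear association score, (2) run hypothesis
    tests only on the top-k candidates, correcting over k tests.

    genotypes: (n_samples, n_snps) array of 0/1/2 allele counts.
    phenotype: (n_samples,) continuous trait.
    """
    # Correlation of the phenotype with every SNP column.
    scores = np.abs(np.corrcoef(genotypes.T, phenotype)[-1, :-1])
    candidates = np.argsort(scores)[::-1][:k]      # step 1: screening
    hits = []
    for j in candidates:                           # step 2: testing
        r, p = stats.pearsonr(genotypes[:, j], phenotype)
        if p <= alpha / k:                         # Bonferroni over k
            hits.append(int(j))
    return sorted(hits)
```

    The power gain comes from correcting over k candidates instead of all SNPs; the price, which COMBI's threshold correction is designed to pay properly, is that screening and testing on the same data must be accounted for.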

  20. TESTS OF THE PLANETARY HYPOTHESIS FOR PTFO 8-8695b

    International Nuclear Information System (INIS)

    Yu, Liang; Winn, Joshua N.; Rappaport, Saul; Dai, Fei; Triaud, Amaury H. M. J.; Gillon, Michaël; Delrez, Laetitia; Jehin, Emmanuel; Lendl, Monika; Albrecht, Simon; Bieryla, Allyson; Holman, Matthew J.; Montet, Benjamin T.; Hillenbrand, Lynne; Howard, Andrew W.; Huang, Chelsea X.; Isaacson, Howard; Sanchis-Ojeda, Roberto; Muirhead, Philip

    2015-01-01

    The T Tauri star PTFO 8-8695 exhibits periodic fading events that have been interpreted as the transits of a giant planet on a precessing orbit. Here we present three tests of the planet hypothesis. First, we sought evidence for the secular changes in light-curve morphology that are predicted to be a consequence of orbital precession. We observed 28 fading events spread over several years and did not see the expected changes. Instead, we found that the fading events are not strictly periodic. Second, we attempted to detect the planet's radiation, based on infrared observations spanning the predicted times of occultations. We ruled out a signal of the expected amplitude. Third, we attempted to detect the Rossiter–McLaughlin effect by performing high-resolution spectroscopy throughout a fading event. No effect was seen at the expected level, ruling out most (but not all) possible orientations for the hypothetical planetary orbit. Our spectroscopy also revealed strong, time-variable, high-velocity Hα and Ca H and K emission features. All these observations cast doubt on the planetary hypothesis, and suggest instead that the fading events represent starspots, eclipses by circumstellar dust, or occultations of an accretion hotspot.

  1. Fluorescence Microspectroscopy for Testing the Dimerization Hypothesis of BACE1 Protein in Cultured HEK293 Cells

    Science.gov (United States)

    Gardeen, Spencer; Johnson, Joseph L.; Heikal, Ahmed A.

    2016-06-01

    Alzheimer's Disease (AD) is a neurodegenerative disorder that results from the formation of beta-amyloid plaques in the brain that trigger the known symptoms of memory loss in AD patients. The beta-amyloid plaques are formed by the proteolytic cleavage of the amyloid precursor protein (APP) by the proteases BACE1 and gamma-secretase. These enzyme-facilitated cleavages lead to the production of beta-amyloid fragments that aggregate to form plaques, which ultimately lead to neuronal cell death. Recent detergent protein-extraction studies suggest that BACE1 protein forms a dimer that has significantly higher catalytic activity than its monomeric counterpart. In this contribution, we examine the dimerization hypothesis of BACE1 in cultured HEK293 cells using complementary fluorescence spectroscopy and microscopy methods. Cells were transfected with a BACE1-EGFP fusion protein construct and imaged using confocal and differential interference contrast microscopy to monitor the localization and distribution of intracellular BACE1. Complementary fluorescence lifetime and anisotropy measurements enabled us to examine the conformational and environmental changes of BACE1 as a function of substrate binding. Using fluorescence correlation spectroscopy, we also quantified the diffusion coefficient of BACE1-EGFP on the plasma membrane as a means to test the dimerization hypothesis as a function of substrate-analog inhibition. Our results represent an important first step towards examining the substrate-mediated dimerization hypothesis of BACE1 in live cells.

  2. The self: Your own worst enemy? A test of the self-invoking trigger hypothesis.

    Science.gov (United States)

    McKay, Brad; Wulf, Gabriele; Lewthwaite, Rebecca; Nordin, Andrew

    2015-01-01

    The self-invoking trigger hypothesis was proposed by Wulf and Lewthwaite [Wulf, G., & Lewthwaite, R. (2010). Effortless motor learning? An external focus of attention enhances movement effectiveness and efficiency. In B. Bruya (Ed.), Effortless attention: A new perspective in attention and action (pp. 75-101). Cambridge, MA: MIT Press] as a mechanism underlying the robust effect of attentional focus on motor learning and performance. One component of this hypothesis, relevant beyond the attentional focus effect, suggests that causing individuals to access their self-schema will negatively impact their learning and performance of a motor skill. The purpose of the present two studies was to provide an initial test of the performance and learning aspects of the self-invoking trigger hypothesis by asking participants in one group to think about themselves between trial blocks (presumably activating their self-schema) to compare their performance and learning to that of a control group. In Experiment 1, participants performed 2 blocks of 10 trials on a throwing task. In one condition, participants were asked between blocks to think about their past throwing experience. While a control group maintained their performance across blocks, the self group's performance was degraded on the second block. In Experiment 2, participants were asked to practice a wiffleball hitting task on two separate days. Participants returned on a third day to perform retention and transfer tests without the self-activating manipulation. Results indicated that the self group learned the hitting task less effectively than the control group. The findings reported here provide initial support for the self-invoking trigger hypothesis.

  3. Properties of hypothesis testing techniques and (Bayesian) model selection for exploration-based and theory-based (order-restricted) hypotheses.

    Science.gov (United States)

    Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene

    2015-05-01

    In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number). © 2014 The British Psychological Society.

  4. THE FRACTAL MARKET HYPOTHESIS

    OpenAIRE

    FELICIA RAMONA BIRAU

    2012-01-01

    In this article, the concept of the capital market is analysed using the Fractal Market Hypothesis, which is a modern, complex and unconventional alternative to classical finance methods. The Fractal Market Hypothesis stands in sharp opposition to the Efficient Market Hypothesis and explores the application of chaos theory and fractal geometry to finance. The Fractal Market Hypothesis is based on certain assumptions. Thus, it is emphasized that investors do not react immediately to the information they receive and...

  5. Habitat fragmentation, vole population fluctuations, and the ROMPA hypothesis: An experimental test using model landscapes.

    Science.gov (United States)

    Batzli, George O

    2016-11-01

    Increased habitat fragmentation leads to smaller size of habitat patches and to greater distance between patches. The ROMPA hypothesis (ratio of optimal to marginal patch area) uniquely links vole population fluctuations to the composition of the landscape. It states that as ROMPA decreases (fragmentation increases), vole population fluctuations will increase (including the tendency to display multi-annual cycles in abundance) because decreased proportions of optimal habitat result in greater population declines and longer recovery time after a harsh season. To date, only comparative observations in the field have supported the hypothesis. This paper reports the results of the first experimental test. I used prairie voles, Microtus ochrogaster, and mowed grassland to create model landscapes with 3 levels of ROMPA (high with 25% mowed, medium with 50% mowed and low with 75% mowed). As ROMPA decreased, distances between patches of favorable habitat (high cover) increased owing to a greater proportion of unfavorable (mowed) habitat. Results from the first year with intensive live trapping indicated that the preconditions for operation of the hypothesis existed (inversely density dependent emigration and, as ROMPA decreased, increased per capita mortality and decreased per capita movement between optimal patches). Nevertheless, contrary to the prediction of the hypothesis that populations in landscapes with high ROMPA should have the lowest variability, 5 years of trapping indicated that variability was lowest with medium ROMPA. The design of field experiments may never be perfect, but these results indicate that the ROMPA hypothesis needs further rigorous testing. © 2016 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  6. Combining multiple hypothesis testing and affinity propagation clustering leads to accurate, robust and sample size independent classification on gene expression data

    Directory of Open Access Journals (Sweden)

    Sakellariou Argiris

    2012-10-01

    Full Text Available Abstract Background A feature selection method in microarray gene expression data should be independent of platform, disease and dataset size. Our hypothesis is that among the statistically significant ranked genes in a gene list, there should be clusters of genes that share similar biological functions related to the investigated disease. Thus, instead of keeping N top-ranked genes, it would be more appropriate to define and keep a number of gene cluster exemplars. Results We propose a hybrid FS method (mAP-KL), which combines multiple hypothesis testing and the affinity propagation (AP) clustering algorithm along with the Krzanowski & Lai cluster quality index, to select a small yet informative subset of genes. We applied mAP-KL to real microarray data, as well as to simulated data, and compared its performance against 13 other feature selection approaches. Across a variety of diseases and numbers of samples, mAP-KL presents competitive classification results, particularly in neuromuscular diseases, where its overall AUC score was 0.91. Furthermore, mAP-KL generates concise yet biologically relevant and informative N-gene expression signatures, which can serve as a valuable tool for diagnostic and prognostic purposes, as well as a source of potential disease biomarkers in a broad range of diseases. Conclusions mAP-KL is a data-driven and classifier-independent hybrid feature selection method, which applies to any disease classification problem based on microarray data, regardless of the available samples. Combining multiple hypothesis testing and AP leads to subsets of genes which classify unknown samples from both small and large patient cohorts with high accuracy.

  7. Predictability of Exchange Rates in Sri Lanka: A Test of the Efficient Market Hypothesis

    OpenAIRE

    Guneratne B Wickremasinghe

    2007-01-01

    This study examined the validity of the weak and semi-strong forms of the efficient market hypothesis (EMH) for the foreign exchange market of Sri Lanka. Monthly exchange rates for four currencies during the floating exchange rate regime were used in the empirical tests. Using a battery of tests, empirical results indicate that the current values of the four exchange rates can be predicted from their past values. Further, the tests of semi-strong form efficiency indicate that exchange rate pa...

  8. Parameter estimation and hypothesis testing in linear models

    CERN Document Server

    Koch, Karl-Rudolf

    1999-01-01

    The necessity to publish the second edition of this book arose when its third German edition had just been published. This second English edition is therefore a translation of the third German edition of Parameter Estimation and Hypothesis Testing in Linear Models, published in 1997. It differs from the first English edition by the addition of a new chapter on robust estimation of parameters and the deletion of the section on discriminant analysis, which has been more completely dealt with by the author in the book Bayesian Inference with Geodetic Applications, Springer-Verlag, Berlin Heidelberg New York, 1990. Smaller additions and deletions have been incorporated, to improve the text, to point out new developments or to eliminate errors which became apparent. A few examples have also been added. I thank Springer-Verlag for publishing this second edition and for the assistance in checking the translation, although the responsibility for errors remains with the author. I also want to express my thanks...

  9. Visual Working Memory and Number Sense: Testing the Double Deficit Hypothesis in Mathematics

    Science.gov (United States)

    Toll, Sylke W. M.; Kroesbergen, Evelyn H.; Van Luit, Johannes E. H.

    2016-01-01

    Background: Evidence exists that there are two main underlying cognitive factors in mathematical difficulties: working memory and number sense. It is suggested that real math difficulties appear when both working memory and number sense are weak, here referred to as the double deficit (DD) hypothesis. Aims: The aim of this study was to test the DD…

  10. A critical discussion of null hypothesis significance testing and statistical power analysis within psychological research

    DEFF Research Database (Denmark)

    Jones, Allan; Sommerlund, Bo

    2007-01-01

    The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power analysis in estimating the potential error introduced by small and large samples is advocated. Power analysis is not recommended as a replacement to NHST but as an additional source of information about the phenomena under investigation. Moreover, the importance of conceptual analysis in relation to statistical analysis of hypothesis...

  11. ON ESTIMATION AND HYPOTHESIS TESTING OF THE GRAIN SIZE DISTRIBUTION BY THE SALTYKOV METHOD

    Directory of Open Access Journals (Sweden)

    Yuri Gulbin

    2011-05-01

    The paper considers the problem of the validity of unfolding the grain size distribution with the back-substitution method. Due to the ill-conditioned nature of unfolding matrices, it is necessary to evaluate the accuracy and precision of parameter estimation and to verify the possibility of testing the expected grain size distribution on the basis of intersection size histogram data. In order to review these questions, computer modeling was used to compare size distributions obtained stereologically with those possessed by three-dimensional model aggregates of grains with a specified shape and random size. Results of the simulations are reported and ways of improving the conventional stereological techniques are suggested. It is shown that new improvements in the estimating and testing procedures enable grain size distributions to be unfolded more efficiently.
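
    The unfolding problem the abstract examines can be illustrated for the classical case of spherical grains: a triangular matrix maps the (unknown) sphere-size histogram to the (observed) plane-section histogram, and back-substitution inverts it. The eight-class discretization and the counts below are arbitrary, and real data would add sampling noise that the ill-conditioned matrix amplifies — which is exactly the accuracy concern the paper raises.

```python
import numpy as np

K = 8  # number of size classes; class j holds spheres of diameter ~ j units

# p[i, j]: probability that a random plane section through a sphere of
# diameter class j falls into section-diameter class i (spheres only --
# a simplifying shape assumption, as in the classical Saltykov scheme).
p = np.zeros((K, K))
for j in range(1, K + 1):
    for i in range(1, j + 1):
        p[i - 1, j - 1] = (np.sqrt(j**2 - (i - 1)**2) - np.sqrt(j**2 - i**2)) / j

# A plane hits a sphere with probability proportional to its diameter,
# so the section histogram mixes p's columns weighted by diameter.
A = p * np.arange(1, K + 1)

true_counts = np.array([0.0, 5, 20, 40, 25, 8, 2, 0])   # hypothetical spheres
sections = A @ true_counts                              # noise-free sections

# Unfolding: the system is triangular, so back-substitution (here via a
# linear solve) recovers the sphere-size distribution exactly -- but the
# matrix is ill-conditioned, so noise in `sections` gets amplified.
estimated = np.linalg.solve(A, sections)
print(np.allclose(estimated, true_counts))  # True in the noise-free case
print(f"condition number: {np.linalg.cond(A):.1f}")
```

    Perturbing `sections` slightly and re-solving shows how quickly negative or wildly wrong class counts appear, motivating the improved estimation procedures the paper proposes.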

  12. Hypothesis Testing of Inclusion of the Tolerance Interval for the Assessment of Food Safety.

    Directory of Open Access Journals (Sweden)

    Hungyen Chen

    In the testing of food quality and safety, we contrast the contents of a newly proposed food (genetically modified food) against those of conventional foods. Because the contents vary largely between crop varieties and production environments, we propose a two-sample test of substantial equivalence that examines the inclusion of the tolerance intervals of two populations: the population of the contents of the proposed food, which we call the target population, and the population of the contents of the conventional food, which we call the reference population. Rejection of the test hypothesis guarantees that the contents of the proposed foods essentially do not include outliers in the population of the contents of the conventional food. The existing tolerance interval (TI0) is constructed to have at least a pre-specified level of the coverage probability. Here, we newly introduce the complementary tolerance interval (TI1), which is guaranteed to have at most a pre-specified level of the coverage probability. By applying TI0 and TI1 to the samples from the target population and the reference population, respectively, we construct a test statistic for testing inclusion of the two tolerance intervals. To examine the performance of the testing procedure, we conducted a simulation that reflects the effects of gene, environment, and residual from a crop experiment. As a case study, we applied the hypothesis testing to test whether the distribution of the protein content of rice in the Kyushu area is included in the distribution of the protein content in the other areas of Japan.
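
    As a rough sketch of the interval-inclusion idea (not the paper's formal TI1-based test statistic), one can compute standard two-sided normal tolerance intervals via Howe's approximation and check whether the target interval lies inside the reference interval. The protein-content numbers below are made up for illustration.

```python
import numpy as np
from scipy import stats

def tolerance_interval(x, coverage=0.95, confidence=0.95):
    """Two-sided normal tolerance interval via Howe's approximation:
    claimed to cover at least `coverage` of the population with the
    stated confidence."""
    n = len(x)
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, n - 1)
    k = z * np.sqrt((n - 1) * (1 + 1 / n) / chi2)
    m, s = np.mean(x), np.std(x, ddof=1)
    return m - k * s, m + k * s

# Made-up protein contents (%): a narrow "target" crop sample and a
# wider "reference" (conventional) sample.
rng = np.random.default_rng(1)
target = rng.normal(7.0, 0.5, size=50)
reference = rng.normal(7.0, 1.0, size=200)

lo_t, hi_t = tolerance_interval(target)
lo_r, hi_r = tolerance_interval(reference)

# Naive inclusion check (the paper builds a formal test statistic using
# the complementary interval TI1; this shows only the geometric idea).
included = (lo_r <= lo_t) and (hi_t <= hi_r)
print(f"target TI = ({lo_t:.2f}, {hi_t:.2f}), "
      f"reference TI = ({lo_r:.2f}, {hi_r:.2f}), included = {included}")
```
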

  13. Testing the null hypothesis of the nonexistence of a preseizure state

    International Nuclear Information System (INIS)

    Andrzejak, Ralph G.; Kraskov, Alexander; Mormann, Florian; Rieke, Christoph; Kreuz, Thomas; Elger, Christian E.; Lehnertz, Klaus

    2003-01-01

    A rapidly growing number of studies deals with the prediction of epileptic seizures. For this purpose, various techniques derived from linear and nonlinear time series analysis have been applied to the electroencephalogram of epilepsy patients. In none of these works, however, is the performance of the seizure prediction statistics tested against a null hypothesis, an otherwise ubiquitous concept in science. In consequence, the evaluation of the reported performance values is problematic. Here, we propose the technique of seizure time surrogates based on a Monte Carlo simulation to remedy this deficit.

  14. Testing the null hypothesis of the nonexistence of a preseizure state

    Energy Technology Data Exchange (ETDEWEB)

    Andrzejak, Ralph G; Kraskov, Alexander [John-von-Neumann Institute for Computing, Forschungszentrum Juelich, 52425 Juelich (Germany); Mormann, Florian; Rieke, Christoph [Department of Epileptology, University of Bonn, Sigmund-Freud-Strasse 25, 53105 Bonn (Germany); Helmholtz Institut fuer Strahlen- und Kernphysik, University of Bonn, Nussallee 14-16, 53115 Bonn (Germany); Kreuz, Thomas [John-von-Neumann Institute for Computing, Forschungszentrum Juelich, 52425 Juelich (Germany); Department of Epileptology, University of Bonn, Sigmund-Freud-Strasse 25, 53105 Bonn (Germany); Elger, Christian E; Lehnertz, Klaus [Department of Epileptology, University of Bonn, Sigmund-Freud-Strasse 25, 53105 Bonn (Germany)

    2003-01-01

    A rapidly growing number of studies deals with the prediction of epileptic seizures. For this purpose, various techniques derived from linear and nonlinear time series analysis have been applied to the electroencephalogram of epilepsy patients. In none of these works, however, the performance of the seizure prediction statistics is tested against a null hypothesis, an otherwise ubiquitous concept in science. In consequence, the evaluation of the reported performance values is problematic. Here, we propose the technique of seizure time surrogates based on a Monte Carlo simulation to remedy this deficit.
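
    The seizure-time-surrogate idea can be sketched as follows: hold the algorithm's alarm times fixed, redraw surrogate seizure times at random, and locate the observed prediction performance within the resulting null distribution. All times, the hit-rate statistic and the 30-minute horizon below are hypothetical choices; the authors' surrogate scheme is more constrained than plain uniform resampling.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "alarm" time series: a prediction method raises alarms at
# certain minutes of a long recording (hypothetical example).
T = 10_000                      # recording length in minutes
alarms = np.sort(rng.choice(T, size=80, replace=False))
seizures = np.array([1200, 3400, 5150, 7800, 9300])

def hit_rate(alarms, seizure_times, horizon=30):
    """Fraction of seizures preceded by an alarm within `horizon` min."""
    hits = 0
    for s in seizure_times:
        if np.any((alarms < s) & (alarms >= s - horizon)):
            hits += 1
    return hits / len(seizure_times)

observed = hit_rate(alarms, seizures)

# Seizure time surrogates: redraw seizure times at random positions,
# keeping their number fixed, to build the null distribution.
null = np.array([
    hit_rate(alarms, rng.choice(T, size=len(seizures), replace=False))
    for _ in range(1000)
])
p_value = (1 + np.sum(null >= observed)) / (1 + len(null))
print(round(p_value, 3))
```

    A small p-value would indicate performance beyond what random seizure placement explains; here, with random alarms, no such effect is expected.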

  15. Whiplash and the compensation hypothesis.

    Science.gov (United States)

    Spearing, Natalie M; Connelly, Luke B

    2011-12-01

    Review article. To explain why the evidence that compensation-related factors lead to worse health outcomes is not compelling, either in general, or in the specific case of whiplash. There is a common view that compensation-related factors lead to worse health outcomes ("the compensation hypothesis"), despite the presence of important, and unresolved sources of bias. The empirical evidence on this question has ramifications for the design of compensation schemes. Using studies on whiplash, this article outlines the methodological problems that impede attempts to confirm or refute the compensation hypothesis. Compensation studies are prone to measurement bias, reverse causation bias, and selection bias. Errors in measurement are largely due to the latent nature of whiplash injuries and health itself, a lack of clarity over the unit of measurement (specific factors, or "compensation"), and a lack of appreciation for the heterogeneous qualities of compensation-related factors and schemes. There has been a failure to acknowledge and empirically address reverse causation bias, or the likelihood that poor health influences the decision to pursue compensation: it is unclear if compensation is a cause or a consequence of poor health, or both. Finally, unresolved selection bias (and hence, confounding) is evident in longitudinal studies and natural experiments. In both cases, between-group differences have not been addressed convincingly. The nature of the relationship between compensation-related factors and health is unclear. Current approaches to testing the compensation hypothesis are prone to several important sources of bias, which compromise the validity of their results. Methods that explicitly test the hypothesis and establish whether or not a causal relationship exists between compensation factors and prolonged whiplash symptoms are needed in future studies.

  16. Biomic specialization and speciation rates in ruminants (Cetartiodactyla, Mammalia): a test of the resource-use hypothesis at the global scale.

    Science.gov (United States)

    Cantalapiedra, Juan L; Hernández Fernández, Manuel; Morales, Jorge

    2011-01-01

    The resource-use hypothesis proposed by E.S. Vrba predicts that specialist species have higher speciation and extinction rates than generalists because they are more susceptible to environmental changes and vicariance. In this work, we test some of the predictions derived from this hypothesis on the 197 extant and recently extinct species of Ruminantia (Cetartiodactyla, Mammalia) using the biomic specialization index (BSI) of each species, which is based on its distribution within different biomes. We ran 10000 Monte Carlo simulations of our data in order to get a null distribution of BSI values against which to contrast the observed data. Additionally, we drew on a supertree of the ruminants and a phylogenetic likelihood-based method (QuaSSE) for testing whether the degree of biomic specialization affects speciation rates in ruminant lineages. Our results are consistent with the predictions of the resource-use hypothesis, which foretells a higher speciation rate of lineages restricted to a single biome (BSI = 1) and higher frequency of specialist species in biomes that underwent a high degree of contraction and fragmentation during climatic cycles. Bovids and deer present differential specialization across biomes; cervids show higher specialization in biomes with a marked hydric seasonality (tropical deciduous woodlands and sclerophyllous woodlands), while bovids present higher specialization in a greater variety of biomes. This might be the result of divergent physiological constraints as well as a different biogeographic and evolutionary history.

  17. THE FRACTAL MARKET HYPOTHESIS

    Directory of Open Access Journals (Sweden)

    FELICIA RAMONA BIRAU

    2012-05-01

    In this article, the concept of the capital market is analysed using the Fractal Market Hypothesis, which is a modern, complex and unconventional alternative to classical finance methods. The Fractal Market Hypothesis stands in sharp opposition to the Efficient Market Hypothesis and explores the application of chaos theory and fractal geometry to finance. The Fractal Market Hypothesis is based on certain assumptions. Thus, it is emphasized that investors do not react immediately to the information they receive and, of course, the manner in which they interpret that information may differ. Also, the Fractal Market Hypothesis refers to the way that liquidity and investment horizons influence the behaviour of financial investors.

  18. Method of constructing a fundamental equation of state based on a scaling hypothesis

    Science.gov (United States)

    Rykov, V. A.; Rykov, S. V.; Kudryavtseva, I. V.; Sverdlov, A. V.

    2017-11-01

    The work studies the issues associated with the construction of an equation of state (EOS) taking due account of substance behavior in the critical region and associated with the scaling theory of critical phenomena (ST). The authors have developed a new version of the scaling hypothesis; this approach uses the following: a) a substance equation of state having the form of a Schofield-Litster-Ho linear model (LM) and b) the Benedek hypothesis. The Benedek hypothesis has found a similar behavior character for a number of properties (isochoric and isobaric heat capacities, isothermal compressibility coefficient) at critical and near-critical isochors in the vicinity of the critical point. A method is proposed to build a fundamental equation of state (FEOS) which satisfies the ST power laws. The FEOS building method is verified by building the equation of state for argon within the state parameter range up to 1000 MPa in terms of pressure and from 83.056 K to 13000 K in terms of temperature. The comparison with the fundamental equations of state of Stewart-Jacobsen (1989), of Kozlov et al (1996), and of Tegeler-Span-Wagner (1999) has shown that the FEOS describes the known experimental data with an essentially lower error.

  19. Phi index: a new metric to test the flush early and avoid the rush hypothesis.

    Science.gov (United States)

    Samia, Diogo S M; Blumstein, Daniel T

    2014-01-01

    Optimal escape theory states that animals should counterbalance the costs and benefits of flight when escaping from a potential predator. However, in apparent contradiction with this well-established optimality model, birds and mammals generally initiate escape soon after beginning to monitor an approaching threat, a phenomenon codified as the "Flush Early and Avoid the Rush" (FEAR) hypothesis. Typically, the FEAR hypothesis is tested using correlational statistics and is supported when there is a strong relationship between the distance at which an individual first responds behaviorally to an approaching predator (alert distance, AD), and its flight initiation distance (the distance at which it flees the approaching predator, FID). However, such correlational statistics are both inadequate to analyze relationships constrained by an envelope (such as that in the AD-FID relationship) and are sensitive to outliers with high leverage, which can lead one to erroneous conclusions. To overcome these statistical concerns we develop the phi index (Φ), a distribution-free metric to evaluate the goodness of fit of a 1:1 relationship in a constraint envelope (the prediction of the FEAR hypothesis). Using both simulation and empirical data, we conclude that Φ is superior to traditional correlational analyses because it explicitly tests the FEAR prediction, is robust to outliers, and it controls for the disproportionate influence of observations from large predictor values (caused by the constrained envelope in the AD-FID relationship). Importantly, by analyzing the empirical data we corroborate the strong effect that alertness has on flight as stated by the FEAR hypothesis.

  20. Phi index: a new metric to test the flush early and avoid the rush hypothesis.

    Directory of Open Access Journals (Sweden)

    Diogo S M Samia

    Optimal escape theory states that animals should counterbalance the costs and benefits of flight when escaping from a potential predator. However, in apparent contradiction with this well-established optimality model, birds and mammals generally initiate escape soon after beginning to monitor an approaching threat, a phenomenon codified as the "Flush Early and Avoid the Rush" (FEAR) hypothesis. Typically, the FEAR hypothesis is tested using correlational statistics and is supported when there is a strong relationship between the distance at which an individual first responds behaviorally to an approaching predator (alert distance, AD), and its flight initiation distance (the distance at which it flees the approaching predator, FID). However, such correlational statistics are both inadequate to analyze relationships constrained by an envelope (such as that in the AD-FID relationship) and are sensitive to outliers with high leverage, which can lead one to erroneous conclusions. To overcome these statistical concerns we develop the phi index (Φ), a distribution-free metric to evaluate the goodness of fit of a 1:1 relationship in a constraint envelope (the prediction of the FEAR hypothesis). Using both simulation and empirical data, we conclude that Φ is superior to traditional correlational analyses because it explicitly tests the FEAR prediction, is robust to outliers, and it controls for the disproportionate influence of observations from large predictor values (caused by the constrained envelope in the AD-FID relationship). Importantly, by analyzing the empirical data we corroborate the strong effect that alertness has on flight as stated by the FEAR hypothesis.

  1. Does mediator use contribute to the spacing effect for cued recall? Critical tests of the mediator hypothesis.

    Science.gov (United States)

    Morehead, Kayla; Dunlosky, John; Rawson, Katherine A; Bishop, Melissa; Pyc, Mary A

    2018-04-01

    When study is spaced across sessions (versus massed within a single session), final performance is greater after spacing. This spacing effect may have multiple causes, and according to the mediator hypothesis, part of the effect can be explained by the use of mediator-based strategies. This hypothesis proposes that when study is spaced across sessions, rather than massed within a session, more mediators will be generated that are longer lasting, and hence more mediators will be available to support criterion recall. In two experiments, participants were randomly assigned to study paired associates using either a spaced or massed schedule. They reported strategy use for each item during study trials and during the final test. Consistent with the mediator hypothesis, participants who had spaced (as compared to massed) practice reported using more mediators on the final test. This use of effective mediators also statistically accounted for some, but not all, of the spacing effect on final performance.

  2. Contrast class cues and performance facilitation in a hypothesis-testing task: evidence for an iterative counterfactual model.

    Science.gov (United States)

    Gale, Maggie; Ball, Linden J

    2012-04-01

    Hypothesis-testing performance on Wason's (Quarterly Journal of Experimental Psychology 12:129-140, 1960) 2-4-6 task is typically poor, with only around 20% of participants announcing the to-be-discovered "ascending numbers" rule on their first attempt. Enhanced solution rates can, however, readily be observed with dual-goal (DG) task variants requiring the discovery of two complementary rules, one labeled "DAX" (the standard "ascending numbers" rule) and the other labeled "MED" ("any other number triples"). Two DG experiments are reported in which we manipulated the usefulness of a presented MED exemplar, where usefulness denotes cues that can establish a helpful "contrast class" that can stand in opposition to the presented 2-4-6 DAX exemplar. The usefulness of MED exemplars had a striking facilitatory effect on DAX rule discovery, which supports the importance of contrast-class information in hypothesis testing. A third experiment ruled out the possibility that the useful MED triple seeded the correct rule from the outset and obviated any need for hypothesis testing. We propose that an extension of Oaksford and Chater's (European Journal of Cognitive Psychology 6:149-169, 1994) iterative counterfactual model can neatly capture the mechanisms by which DG facilitation arises.

  3. Life shocks and crime: a test of the "turning point" hypothesis.

    Science.gov (United States)

    Corman, Hope; Noonan, Kelly; Reichman, Nancy E; Schwartz-Soicher, Ofira

    2011-08-01

    Other researchers have posited that important events in men's lives, such as employment, marriage, and parenthood, strengthen their social ties and lead them to refrain from crime. A challenge in empirically testing this hypothesis has been the issue of self-selection into life transitions. This study contributes to this literature by estimating the effects of an exogenous life shock on crime. We use data from the Fragile Families and Child Wellbeing Study, augmented with information from hospital medical records, to estimate the effects of the birth of a child with a severe health problem on the likelihood that the infant's father engages in illegal activities. We conduct a number of auxiliary analyses to examine exogeneity assumptions. We find that having an infant born with a severe health condition increases the likelihood that the father is convicted of a crime in the three-year period following the birth of the child, and at least part of the effect appears to operate through work and changes in parental relationships. These results provide evidence that life events can cause crime and, as such, support the "turning point" hypothesis.

  4. Gratitude facilitates private conformity: A test of the social alignment hypothesis.

    Science.gov (United States)

    Ng, Jomel W X; Tong, Eddie M W; Sim, Dael L Y; Teo, Samantha W Y; Loy, Xingqi; Giesbrecht, Timo

    2017-03-01

    Past research has established clear support for the prosocial function of gratitude in improving the well-being of others. The present research provides evidence for another hypothesized function of gratitude: the social alignment function, which enhances the tendency of grateful individuals to follow social norms. We tested the social alignment hypothesis of gratitude in 2 studies with large samples. Using 2 different conformity paradigms, participants were subjected to a color judgment task (Experiment 1) and a material consumption task (Experiment 2). They were provided with information showing choices allegedly made by others, but were allowed to state their responses in private. Supporting the social alignment hypothesis, the results showed that induced gratitude increased private conformity. Specifically, participants induced to feel gratitude were more likely to conform to the purportedly popular choice, even if the option was factually incorrect (Experiment 1). This effect appears to be specific to gratitude; induction of joy produced significantly less conformity than gratitude (Experiment 2). We discuss whether the social alignment function provides a behavioral pathway in the role of gratitude in building social relationships. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. A probabilistic method for testing and estimating selection differences between populations.

    Science.gov (United States)

    He, Yungang; Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Xu, Shuhua; Jin, Li

    2015-12-01

    Human populations around the world encounter various environmental challenges and, consequently, develop genetic adaptations to different selection forces. Identifying the differences in natural selection between populations is critical for understanding the roles of specific genetic variants in evolutionary adaptation. Although numerous methods have been developed to detect genetic loci under recent directional selection, a probabilistic solution for testing and quantifying selection differences between populations is lacking. Here we report the development of a probabilistic method for testing and estimating selection differences between populations. By use of a probabilistic model of genetic drift and selection, we showed that logarithm odds ratios of allele frequencies provide estimates of the differences in selection coefficients between populations. The estimates approximate a normal distribution, and variance can be estimated using genome-wide variants. This allows us to quantify differences in selection coefficients and to determine the confidence intervals of the estimate. Our work also revealed the link between genetic association testing and hypothesis testing of selection differences. It therefore supplies a solution for hypothesis testing of selection differences. This method was applied to a genome-wide data analysis of Han and Tibetan populations. The results confirmed that both the EPAS1 and EGLN1 genes are under statistically different selection in Han and Tibetan populations. We further estimated differences in the selection coefficients for genetic variants involved in melanin formation and determined their confidence intervals between continental population groups. Application of the method to empirical data demonstrated the outstanding capability of this novel approach for testing and quantifying differences in natural selection. © 2015 He et al.; Published by Cold Spring Harbor Laboratory Press.

  6. On the Keyhole Hypothesis

    DEFF Research Database (Denmark)

    Mikkelsen, Kaare B.; Kidmose, Preben; Hansen, Lars Kai

    2017-01-01

    We propose and test the keyhole hypothesis that measurements from low dimensional EEG, such as ear-EEG, reflect a broadly distributed set of neural processes. We formulate the keyhole hypothesis in information theoretical terms. The experimental investigation is based on legacy data consisting of 10 ... simultaneously recorded scalp EEG. A cross-validation procedure was employed to ensure unbiased estimates. We present several pieces of evidence in support of the keyhole hypothesis: there is a high mutual information between data acquired at scalp electrodes and through the ear-EEG "keyhole"; furthermore we...

  7. Reassessing the Trade-off Hypothesis

    DEFF Research Database (Denmark)

    Rosas, Guillermo; Manzetti, Luigi

    2015-01-01

    Do economic conditions drive voters to punish politicians that tolerate corruption? Previous scholarly work contends that citizens in young democracies support corrupt governments that are capable of promoting good economic outcomes, the so-called trade-off hypothesis. We test this hypothesis based...

  8. Use of supernovae light curves for testing the expansion hypothesis and other cosmological relations

    International Nuclear Information System (INIS)

    Rust, B.W.

    1974-01-01

    This thesis is primarily concerned with a test of the expansion hypothesis based on the relation Δt_obs = (1 + V_r/c)Δt_int, where Δt_int is the time lapse characterizing some phenomenon in a distant galaxy, Δt_obs is the observed time lapse, and V_r is the symbolic velocity of recession. If the red shift is a Doppler effect, the observed time lapse should be lengthened by the same factor as the wavelength of the light. Many authors have suggested type I supernovae for such a test because of their great luminosity and the uniformity of their light curves, but apparently the test has heretofore never actually been performed. Thirty-six light curves were gathered from the literature and one (SN1971i) was measured. All of the light curves were reduced to a common (m_pg) photometric system. The comparison time lapse, Δt_c, was taken to be the time required for the brightness to fall from 0.5 m below peak to 2.5 m below peak. The straight-line regression of Δt_c on V_r gives a correlation coefficient significant at the 93 percent level, and the simple static Euclidean hypothesis is rejected at that level. The regression line also deviates from the prediction of the classical expansion hypothesis. Better agreement was obtained using the chronogeometric theory of I. E. Segal (1972 Astron. and Astrophys. 18, 143), but the scatter in the present data makes it impossible to distinguish between these alternate hypotheses at the 95 percent confidence level. The question of how many additional light curves would be needed to give definite tests is addressed. It is shown that at the present rate of supernova discoveries, only a few more years would be required to obtain the necessary data if light curves are systematically measured for the more distant supernovae. (Diss. Abstr. Int., B)
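
    The logic of the test can be mimicked on synthetic data: under the expansion hypothesis, the regression of observed decline time on recession velocity should have slope Δt_int/c rather than zero. The intrinsic decline time, noise level and velocities below are assumed values for illustration, not the thesis's 37 measured light curves.

```python
import numpy as np

c = 2.998e5          # speed of light, km/s
dt_int = 25.0        # assumed intrinsic (rest-frame) decline time, days

# Synthetic decline times for 37 hypothetical supernovae, generated
# under the expansion hypothesis dt_obs = (1 + V_r/c) * dt_int.
rng = np.random.default_rng(3)
v_r = rng.uniform(0.0, 0.1 * c, size=37)          # recession velocities
dt_obs = dt_int * (1.0 + v_r / c) + rng.normal(0.0, 0.5, size=37)

# Fit dt_obs = a + b * v_r.  Expansion predicts a ~ dt_int and
# b * c ~ dt_int; a static universe predicts b ~ 0.
b, a = np.polyfit(v_r, dt_obs, 1)
print(f"intercept = {a:.1f} d, slope x c = {b * c:.1f} d")
```

    With real light curves, the question becomes whether the fitted slope is statistically distinguishable from both zero (static hypothesis) and dt_int/c (expansion), which is exactly where the scatter in the thesis's data proved limiting.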

  9. Life Origination Hydrate Hypothesis (LOH-Hypothesis

    Directory of Open Access Journals (Sweden)

    Victor Ostrovskii

    2012-01-01

    The paper develops the Life Origination Hydrate Hypothesis (LOH-hypothesis), according to which living-matter simplest elements (LMSEs), which are N-bases, riboses, nucleosides, nucleotides, DNA- and RNA-like molecules, amino acids, and proto-cells, repeatedly originated on the basis of thermodynamically controlled, natural, and inevitable processes governed by universal physical and chemical laws from CH4, niters, and phosphates under the Earth's surface or seabed within the crystal cavities of the honeycomb methane-hydrate structure at low temperatures; the chemical processes passed slowly through all successive chemical steps in the direction that is determined by a gradual decrease in the Gibbs free energy of the reacting systems. The hypothesis formulation method is based on the thermodynamic directedness of natural movement and consists of an attempt to mentally backtrack on the progression of nature and thus reveal the principal milestones along its route. The changes in Gibbs free energy are estimated for different steps of the living-matter origination process; special attention is paid to the processes of proto-cell formation. Just the occurrence of the gas-hydrate periodic honeycomb matrix filled with LMSEs almost completely in its final state accounts for the size limitation in the DNA functional groups and the nonrandom location of N-bases in the DNA chains. The slowness of the low-temperature chemical transformations and their "thermodynamic front" guide the gross process of living matter origination and its successive steps. It is shown that the hypothesis is thermodynamically justified and testable and that many observed natural phenomena count in its favor.

  10. Statistical hypothesis testing and common misinterpretations: Should we abandon p-value in forensic science applications?

    Science.gov (United States)

    Taroni, F; Biedermann, A; Bozza, S

    2016-02-01

    Many people regard the concept of hypothesis testing as fundamental to inferential statistics. Various schools of thought, in particular frequentist and Bayesian, have promoted radically different solutions for taking a decision about the plausibility of competing hypotheses. Comprehensive philosophical comparisons of their advantages and drawbacks are widely available and continue to fuel extensive debates in the literature. More recently, controversial discussion was initiated by an editorial decision of a scientific journal [1] to refuse any paper submitted for publication containing null hypothesis testing procedures. Since the large majority of papers published in forensic journals propose the evaluation of statistical evidence based on the so-called p-values, it is of interest to expose the discussion of this journal's decision within the forensic science community. This paper aims to provide forensic science researchers with a primer on the main concepts and their implications for making informed methodological choices. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  11. Evaluation and Comparison of Extremal Hypothesis-Based Regime Methods

    Directory of Open Access Journals (Sweden)

    Ishwar Joshi

    2018-03-01

    Full Text Available Regime channels are important for stable canal design and for determining river response to environmental changes, e.g., dam construction, land-use change, and climate shifts. A plethora of methods is available for describing the hydraulic geometry of alluvial rivers in regime. However, comparisons of these methods using the same set of data seem lacking. In this study, we evaluate and compare four different extremal hypothesis-based regime methods, namely minimization of Froude number (MFN), maximum entropy and minimum energy dissipation rate (ME and MEDR), maximum flow efficiency (MFE), and Millar’s method, by dividing regime channel data into sand and gravel beds. The results show that for sand bed channels MFN gives a very high accuracy of prediction for regime channel width and depth. For gravel bed channels we find that MFN and ‘ME and MEDR’ give a very high accuracy of prediction for width and depth. Therefore the notion that extremal hypotheses which do not contain bank stability criteria are inappropriate for use is shown to be false, as both MFN and ‘ME and MEDR’ lack bank stability criteria. Also, we find that bank vegetation has a significant influence on the prediction of hydraulic geometry by MFN and ‘ME and MEDR’.
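As a toy illustration of the MFN idea (all numbers hypothetical, rectangular channel only, not the study's method or data): among candidate geometries carrying the same discharge, compute the Froude number Fr = V/sqrt(g·d) and select the geometry that minimizes it.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def froude(q, width, depth):
    """Froude number for a rectangular channel carrying discharge q (m^3/s)."""
    velocity = q / (width * depth)        # continuity: V = Q / A
    return velocity / math.sqrt(G * depth)

def mfn_geometry(q, candidates):
    """Pick the (width, depth) pair that minimizes the Froude number."""
    return min(candidates, key=lambda wd: froude(q, *wd))

# Hypothetical candidate (width m, depth m) pairs for Q = 100 m^3/s.
candidates = [(20.0, 2.0), (30.0, 1.5), (40.0, 1.2)]
best = mfn_geometry(100.0, candidates)
```

For these numbers the widest-deepest candidate wins, and its Froude number stays subcritical (Fr < 1).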

  12. RANDOM WALK HYPOTHESIS IN FINANCIAL MARKETS

    Directory of Open Access Journals (Sweden)

    Nicolae-Marius JULA

    2017-05-01

    Full Text Available The random walk hypothesis states that stock market prices do not follow a predictable trajectory but are simply random. When trying to predict a random set of data, one should first test for randomness, because, regardless of the power and complexity of the models used, the results cannot otherwise be trusted. There are several methods for testing this hypothesis, and the computational power provided by the R environment makes the researcher's work easier and more cost-effective. The increasing power of computing and the continuous development of econometric tests should give potential investors new tools for selecting commodities and investing in efficient markets.
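One standard randomness check in this family is the Wald–Wolfowitz runs test on the signs of a return series. A stdlib-only Python sketch follows (the abstract works in R, so this is an adaptation, not the authors' code): too many or too few runs of consecutive positive/negative returns is evidence against randomness.

```python
import math
from statistics import NormalDist

def runs_test_pvalue(returns):
    """Wald-Wolfowitz runs test on the signs of a return series.

    Small p-values indicate too few or too many runs for randomness.
    """
    signs = [r > 0 for r in returns if r != 0]
    n1 = sum(signs)                 # positive returns
    n2 = len(signs) - n1            # negative returns
    n = n1 + n2
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    mean = 2 * n1 * n2 / n + 1
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    z = (runs - mean) / math.sqrt(var)
    return 2 * (1 - NormalDist().cdf(abs(z)))

# A strictly alternating series has far too many runs to be random.
p_alt = runs_test_pvalue([0.01, -0.01] * 20)
```

For the alternating toy series the test rejects randomness decisively.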

  13. The Effects of Social Anxiety and State Anxiety on Visual Attention: Testing the Vigilance-Avoidance Hypothesis.

    Science.gov (United States)

    Singh, J Suzanne; Capozzoli, Michelle C; Dodd, Michael D; Hope, Debra A

    2015-01-01

    A growing theoretical and research literature suggests that trait and state social anxiety can predict attentional patterns in the presence of emotional stimuli. The current study adds to this literature by examining the effects of state anxiety on visual attention and testing the vigilance-avoidance hypothesis, using a method of continuous visual attentional assessment. Participants were 91 undergraduate college students with high or low trait fear of negative evaluation (FNE), a core aspect of social anxiety, who were randomly assigned to either a high or low state anxiety condition. Participants engaged in a free-viewing task in which pairs of emotional facial stimuli were presented and eye movements were continuously monitored. Overall, participants with high FNE avoided angry stimuli and participants with high state anxiety attended to positive stimuli. Participants with high state anxiety and high FNE were avoidant of angry faces, whereas participants with low state anxiety and low FNE exhibited a bias toward angry faces. The study provided partial support for the vigilance-avoidance hypothesis. The findings add to the mixed results in the literature that suggest that both positive and negative emotional stimuli may be important in understanding the complex attention patterns associated with social anxiety. Clinical implications and suggestions for future research are discussed.

  14. Further Evidence on the Weak and Strong Versions of the Screening Hypothesis in Greece.

    Science.gov (United States)

    Lambropoulos, Haris S.

    1992-01-01

    Uses Greek data for 1981 and 1985 to test screening hypothesis by replicating method proposed by Psacharopoulos. Credentialism, or sheepskin effect of education, directly challenges human capital theory, which views education as a productivity augmenting process. Results do not support the strong version of the screening hypothesis and suggest…

  15. Assessment of resampling methods for causality testing: A note on the US inflation behavior

    Science.gov (United States)

    Kyrtsou, Catherine; Kugiumtzis, Dimitris; Diks, Cees

    2017-01-01

    Different resampling methods for the null hypothesis of no Granger causality are assessed in the setting of multivariate time series, taking into account that the driving-response coupling is conditioned on the other observed variables. As appropriate test statistic for this setting, the partial transfer entropy (PTE), an information and model-free measure, is used. Two resampling techniques, time-shifted surrogates and the stationary bootstrap, are combined with three independence settings (giving a total of six resampling methods), all approximating the null hypothesis of no Granger causality. In these three settings, the level of dependence is changed, while the conditioning variables remain intact. The empirical null distribution of the PTE, as the surrogate and bootstrapped time series become more independent, is examined along with the size and power of the respective tests. Additionally, we consider a seventh resampling method by contemporaneously resampling the driving and the response time series using the stationary bootstrap. Although this case does not comply with the no causality hypothesis, one can obtain an accurate sampling distribution for the mean of the test statistic since its value is zero under H0. Results indicate that as the resampling setting gets more independent, the test becomes more conservative. Finally, we conclude with a real application. More specifically, we investigate the causal links among the growth rates for the US CPI, money supply and crude oil. Based on the PTE and the seven resampling methods, we consistently find that changes in crude oil cause inflation conditioning on money supply in the post-1986 period. However this relationship cannot be explained on the basis of traditional cost-push mechanisms. PMID:28708870
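Two of the resampling schemes described above, time-shifted surrogates and the stationary bootstrap, can be sketched in a few lines of stdlib Python (a simplified illustration, not the authors' PTE pipeline):

```python
import random

def time_shifted_surrogate(x, rng):
    """Circularly rotate the series by a random lag: the series' own
    structure is preserved while cross-coupling to other series is broken."""
    lag = rng.randrange(1, len(x))
    return x[lag:] + x[:lag]

def stationary_bootstrap(x, rng, p=0.1):
    """Politis-Romano stationary bootstrap: concatenate blocks whose
    lengths are geometric with mean 1/p, wrapping at the series end."""
    n = len(x)
    out = []
    i = rng.randrange(n)
    while len(out) < n:
        out.append(x[i])
        # with probability p start a fresh block, else continue this one
        i = rng.randrange(n) if rng.random() < p else (i + 1) % n
    return out

rng = random.Random(42)
series = list(range(100))
surrogate = time_shifted_surrogate(series, rng)
resampled = stationary_bootstrap(series, rng)
```

The surrogate is an exact rearrangement of the original values, whereas the bootstrap resamples with replacement; both deliver series of the original length for building the null distribution of a causality statistic.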

  16. Assessment of resampling methods for causality testing: A note on the US inflation behavior.

    Science.gov (United States)

    Papana, Angeliki; Kyrtsou, Catherine; Kugiumtzis, Dimitris; Diks, Cees

    2017-01-01

    Different resampling methods for the null hypothesis of no Granger causality are assessed in the setting of multivariate time series, taking into account that the driving-response coupling is conditioned on the other observed variables. As appropriate test statistic for this setting, the partial transfer entropy (PTE), an information and model-free measure, is used. Two resampling techniques, time-shifted surrogates and the stationary bootstrap, are combined with three independence settings (giving a total of six resampling methods), all approximating the null hypothesis of no Granger causality. In these three settings, the level of dependence is changed, while the conditioning variables remain intact. The empirical null distribution of the PTE, as the surrogate and bootstrapped time series become more independent, is examined along with the size and power of the respective tests. Additionally, we consider a seventh resampling method by contemporaneously resampling the driving and the response time series using the stationary bootstrap. Although this case does not comply with the no causality hypothesis, one can obtain an accurate sampling distribution for the mean of the test statistic since its value is zero under H0. Results indicate that as the resampling setting gets more independent, the test becomes more conservative. Finally, we conclude with a real application. More specifically, we investigate the causal links among the growth rates for the US CPI, money supply and crude oil. Based on the PTE and the seven resampling methods, we consistently find that changes in crude oil cause inflation conditioning on money supply in the post-1986 period. However this relationship cannot be explained on the basis of traditional cost-push mechanisms.

  17. Test of the hypothesis; a lymphoma stem cell exists which is capable of self-renewal

    DEFF Research Database (Denmark)

    Kjeldsen, Malene Krag

      Test of the hypothesis; a lymphoma stem cell exists which is capable of self-renewal   Malene Krag Pedersen, Karen Dybkaer, Hans E. Johnsen   The Research Laboratory, Department of Haematology, Aalborg Hospital, Århus University   Failure of current therapeutics in the treatment of diffuse large B...... and sustaining cells(1-3). My project is based on studies of stem and early progenitor cells in lymphoid cell lines from patients with advanced DLBCL. The cell lines are worldwide recognised and generously provided by Dr. Hans Messner and colleagues.   Hypothesis and aims: A lymphoma stem and progenitor cell...

  18. A test of the thermal melanism hypothesis in the wingless grasshopper Phaulacridium vittatum.

    Science.gov (United States)

    Harris, Rebecca M; McQuillan, Peter; Hughes, Lesley

    2013-01-01

    Altitudinal clines in melanism are generally assumed to reflect the fitness benefits resulting from thermal differences between colour morphs, yet differences in thermal quality are not always discernible. The intra-specific application of the thermal melanism hypothesis was tested in the wingless grasshopper Phaulacridium vittatum (Sjöstedt) (Orthoptera: Acrididae) first by measuring the thermal properties of the different colour morphs in the laboratory, and second by testing for differences in average reflectance and spectral characteristics of populations along 14 altitudinal gradients. Correlations between reflectance, body size, and climatic variables were also tested to investigate the underlying causes of clines in melanism. Melanism in P. vittatum represents a gradation in colour rather than distinct colour morphs, with reflectance ranging from 2.49 to 5.65%. In unstriped grasshoppers, darker morphs warmed more rapidly than lighter morphs and reached a higher maximum temperature (lower temperature excess). In contrast, significant differences in thermal quality were not found between the colour morphs of striped grasshoppers. In support of the thermal melanism hypothesis, grasshoppers were, on average, darker at higher altitudes, there were differences in the spectral properties of brightness and chroma between high and low altitudes, and temperature variables were significant influences on the average reflectance of female grasshoppers. However, altitudinal gradients do not represent predictable variation in temperature, and the relationship between melanism and altitude was not consistent across all gradients. Grasshoppers generally became darker at altitudes above 800 m a.s.l., but on several gradients reflectance declined with altitude and then increased at the highest altitude.

  19. Life Shocks and Crime: A Test of the “Turning Point” Hypothesis

    Science.gov (United States)

    Noonan, Kelly; Reichman, Nancy E.; Schwartz-Soicher, Ofira

    2012-01-01

    Other researchers have posited that important events in men’s lives—such as employment, marriage, and parenthood—strengthen their social ties and lead them to refrain from crime. A challenge in empirically testing this hypothesis has been the issue of self-selection into life transitions. This study contributes to this literature by estimating the effects of an exogenous life shock on crime. We use data from the Fragile Families and Child Wellbeing Study, augmented with information from hospital medical records, to estimate the effects of the birth of a child with a severe health problem on the likelihood that the infant’s father engages in illegal activities. We conduct a number of auxiliary analyses to examine exogeneity assumptions. We find that having an infant born with a severe health condition increases the likelihood that the father is convicted of a crime in the three-year period following the birth of the child, and at least part of the effect appears to operate through work and changes in parental relationships. These results provide evidence that life events can cause crime and, as such, support the “turning point” hypothesis. PMID:21660628

  20. Hypothesis testing for differentially correlated features.

    Science.gov (United States)

    Sheng, Elisa; Witten, Daniela; Zhou, Xiao-Hua

    2016-10-01

    In a multivariate setting, we consider the task of identifying features whose correlations with the other features differ across conditions. Such correlation shifts may occur independently of mean shifts, or differences in the means of the individual features across conditions. Previous approaches for detecting correlation shifts consider features simultaneously, by computing a correlation-based test statistic for each feature. However, since correlations involve two features, such approaches do not lend themselves to identifying which feature is the culprit. In this article, we instead consider a serial testing approach, by comparing columns of the sample correlation matrix across two conditions, and removing one feature at a time. Our method provides a novel perspective and favorable empirical results compared with competing approaches. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
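The column-comparison idea can be illustrated with a minimal sketch (hypothetical data and scoring; the paper's actual test statistic and serial removal procedure are more involved): compute the sample correlation matrix per condition, then score each feature by how much its correlation column shifts between conditions.

```python
import math

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def corr_matrix(data):
    """data: list of features, each a list of observations."""
    p = len(data)
    return [[pearson(data[j], data[k]) for k in range(p)] for j in range(p)]

def column_shift_stats(data1, data2):
    """Per-feature sum of squared changes in its correlation column."""
    r1, r2 = corr_matrix(data1), corr_matrix(data2)
    p = len(data1)
    return [sum((r1[j][k] - r2[j][k]) ** 2 for k in range(p) if k != j)
            for j in range(p)]

t = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
cond1 = [t, t, [v * v for v in t]]                 # feature 0 tracks feature 1
cond2 = [[-v for v in t], t, [v * v for v in t]]   # feature 0 flips sign
stats = column_shift_stats(cond1, cond2)
culprit = stats.index(max(stats))
```

Feature 0, whose correlations with both other features flip sign between conditions, receives the largest score and is identified as the culprit.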

  1. Testing the junk-food hypothesis on marine birds: Effects of prey type on growth and development

    Science.gov (United States)

    Romano, Marc D.; Piatt, John F.; Roby, D.D.

    2006-01-01

    The junk-food hypothesis attributes declines in productivity of marine birds and mammals to changes in the species of prey they consume and corresponding differences in the nutritional quality of those prey. To test this hypothesis, nestling Black-legged Kittiwakes (Rissa tridactyla) and Tufted Puffins (Fratercula cirrhata) were raised in captivity under controlled conditions to determine whether the type and quality of fish consumed by young seabirds constrain their growth and development. Some nestlings were fed rations of Capelin (Mallotus villosus), Herring (Clupea pallasi), or Sand Lance (Ammodytes hexapterus) and their growth was compared with that of nestlings raised on equal-biomass rations of Walleye Pollock (Theragra chalcogramma). Nestlings fed rations of herring, sand lance, or capelin experienced higher growth increments than nestlings fed pollock. The energy density of forage fish fed to nestlings had a marked effect on growth increments and could be expected to affect pre- and post-fledging survival of nestlings in the wild. These results provide empirical support for the junk-food hypothesis.

  2. Prevalence of hardcore smoking in the Netherlands between 2001 and 2012: a test of the hardening hypothesis

    Directory of Open Access Journals (Sweden)

    Jeroen Bommelé

    2016-08-01

    Full Text Available Background: Hardcore smokers are smokers who have smoked for many years and who do not intend to quit smoking. The “hardening hypothesis” states that light smokers are more likely to quit smoking than heavy smokers (such as hardcore smokers). Therefore, the prevalence of hardcore smoking among smokers would increase over time. If this is true, the smoking population would become harder to reach with tobacco control measures. In this study we tested the hardening hypothesis. Methods: We calculated the prevalence of hardcore smoking in the Netherlands from 2001 to 2012. Smokers were ‘hardcore’ if they (a) smoked every day, (b) smoked on average 15 cigarettes per day or more, (c) had not attempted to quit in the past 12 months, and (d) had no intention to quit within 6 months. We used logistic regression models to test whether the prevalence changed over time. We also investigated whether trends differed between educational levels. Results: Among smokers, the prevalence of hardcore smoking decreased from 40.8 % in 2001 to 32.2 % in 2012. In the general population, it decreased from 12.2 to 8.2 %. Hardcore smokers had significantly lower education than non-hardcore smokers. Among the general population, the prevalence of hardcore smoking decreased more among higher educated people than among lower educated people. Conclusions: We found no support for the hardening hypothesis in the Netherlands between 2001 and 2012. Instead, the decrease of hardcore smoking among smokers suggests a ‘softening’ of the smoking population.
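As a back-of-the-envelope check of the reported decline (a sketch only: the study itself used logistic regression on yearly data, and the sample sizes below are hypothetical placeholders, not the study's actual n), a two-proportion z-test on the endpoint prevalences looks like this:

```python
import math
from statistics import NormalDist

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for a difference between two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# Endpoint prevalences from the abstract (40.8 % in 2001, 32.2 % in 2012);
# the sample sizes are hypothetical placeholders.
z, p = two_proportion_z(0.408, 1500, 0.322, 1500)
```

With samples of this size the decline is highly significant, consistent with the abstract's 'softening' conclusion.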

  3. Testing the EKC hypothesis by considering trade openness, urbanization, and financial development: the case of Turkey.

    Science.gov (United States)

    Ozatac, Nesrin; Gokmenoglu, Korhan K; Taspinar, Nigar

    2017-07-01

    This study investigates the environmental Kuznets curve (EKC) hypothesis for the case of Turkey from 1960 to 2013 by considering energy consumption, trade, urbanization, and financial development variables. Although previous literature examines various aspects of the EKC hypothesis for the case of Turkey, our model augments the basic model with several covariates to develop a better understanding of the relationship among the variables and to avoid omitted variable bias. The results of the bounds test and the error correction model under the autoregressive distributed lag mechanism suggest long-run relationships among the variables as well as evidence of the EKC and the scale effect in Turkey. A conditional Granger causality test reveals that there are causal relationships among the variables. Our findings have policy implications, including the imposition of a "polluter pays" mechanism, such as a carbon tax for pollution trading, to raise the urban population's awareness of the importance of adopting renewable energy and to support clean, environmentally friendly technology.

  4. Odegaard's selection hypothesis revisited : Schizophrenia in Surinamese immigrants to the Netherlands

    NARCIS (Netherlands)

    Selten, JP; Cantor-Graae, E; Slaets, J; Kahn, RS

    Objective. The incidence of schizophrenia among Surinamese immigrants to the Netherlands is high. The authors tested Odegaard's hypothesis that this phenomenon is explained by selective migration. Method: The authors imagined that migration from Surinam to the Netherlands subsumed the entire

  5. Personality and Behavior in Social Dilemmas: Testing the Situational Strength Hypothesis and the Role of Hypothetical Versus Real Incentives.

    Science.gov (United States)

    Lozano, José H

    2016-02-01

    Previous research aimed at testing the situational strength hypothesis suffers from serious limitations regarding the conceptualization of strength. In order to overcome these limitations, the present study attempts to test the situational strength hypothesis based on the operationalization of strength as reinforcement contingencies. One dispositional factor of proven effect on cooperative behavior, social value orientation (SVO), was used as a predictor of behavior in four social dilemmas with varying degree of situational strength. The moderating role of incentive condition (hypothetical vs. real) on the relationship between SVO and behavior was also tested. One hundred undergraduates were presented with the four social dilemmas and the Social Value Orientation Scale. One-half of the sample played the social dilemmas using real incentives, whereas the other half used hypothetical incentives. Results supported the situational strength hypothesis in that no behavioral variability and no effect of SVO on behavior were found in the strongest situation. However, situational strength did not moderate the effect of SVO on behavior in situations where behavior showed variability. No moderating effect was found for incentive condition either. The implications of these results for personality theory and assessment are discussed. © 2014 Wiley Periodicals, Inc.

  6. A test of the symbol interdependency hypothesis with both concrete and abstract stimuli

    Science.gov (United States)

    Buchanan, Lori

    2018-01-01

    In Experiment 1, the symbol interdependency hypothesis was tested with both concrete and abstract stimuli. Symbolic (i.e., semantic neighbourhood distance) and embodied (i.e., iconicity) factors were manipulated in two tasks—one that tapped symbolic relations (i.e., semantic relatedness judgment) and another that tapped embodied relations (i.e., iconicity judgment). Results supported the symbol interdependency hypothesis in that the symbolic factor was recruited for the semantic relatedness task and the embodied factor was recruited for the iconicity task. Across tasks, and especially in the iconicity task, abstract stimuli resulted in shorter RTs. This finding was in contrast to the concreteness effect where concrete words result in shorter RTs. Experiment 2 followed up on this finding by replicating the iconicity task from Experiment 1 in an ERP paradigm. Behavioural results continued to show a reverse concreteness effect with shorter RTs for abstract stimuli. However, ERP results paralleled the N400 and anterior N700 concreteness effects found in the literature, with more negative amplitudes for concrete stimuli. PMID:29590121

  7. A test of the symbol interdependency hypothesis with both concrete and abstract stimuli.

    Science.gov (United States)

    Malhi, Simritpal Kaur; Buchanan, Lori

    2018-01-01

    In Experiment 1, the symbol interdependency hypothesis was tested with both concrete and abstract stimuli. Symbolic (i.e., semantic neighbourhood distance) and embodied (i.e., iconicity) factors were manipulated in two tasks: one that tapped symbolic relations (i.e., semantic relatedness judgment) and another that tapped embodied relations (i.e., iconicity judgment). Results supported the symbol interdependency hypothesis in that the symbolic factor was recruited for the semantic relatedness task and the embodied factor was recruited for the iconicity task. Across tasks, and especially in the iconicity task, abstract stimuli resulted in shorter RTs. This finding was in contrast to the concreteness effect where concrete words result in shorter RTs. Experiment 2 followed up on this finding by replicating the iconicity task from Experiment 1 in an ERP paradigm. Behavioural results continued to show a reverse concreteness effect with shorter RTs for abstract stimuli. However, ERP results paralleled the N400 and anterior N700 concreteness effects found in the literature, with more negative amplitudes for concrete stimuli.

  8. Testing the ‘Residential Rootedness’-Hypothesis of Self-Employment for Germany and the UK (discussion paper)

    NARCIS (Netherlands)

    Reuschke, D.; Van Ham, M.

    2011-01-01

    Based on the notion that entrepreneurship is a ‘local event’, the literature argues that self-employed workers and entrepreneurs are ‘rooted’ in place. This paper tests the ‘residential rootedness’-hypothesis of self-employment by examining for Germany and the UK whether the self-employed are less

  9. Robust Means Modeling: An Alternative for Hypothesis Testing of Independent Means under Variance Heterogeneity and Nonnormality

    Science.gov (United States)

    Fan, Weihua; Hancock, Gregory R.

    2012-01-01

    This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…

  10. Invited Commentary: Can Issues With Reproducibility in Science Be Blamed on Hypothesis Testing?

    Science.gov (United States)

    Weinberg, Clarice R.

    2017-01-01

    In the accompanying article (Am J Epidemiol. 2017;186(6):646–647), Dr. Timothy Lash makes a forceful case that the problems with reproducibility in science stem from our “culture” of null hypothesis significance testing. He notes that when attention is selectively given to statistically significant findings, the estimated effects will be systematically biased away from the null. Here I revisit the recent history of genetic epidemiology and argue for retaining statistical testing as an important part of the tool kit. Particularly when many factors are considered in an agnostic way, in what Lash calls “innovative” research, investigators need a selection strategy to identify which findings are most likely to be genuine, and hence worthy of further study. PMID:28938713

  11. Active SAmpling Protocol (ASAP) to Optimize Individual Neurocognitive Hypothesis Testing: A BCI-Inspired Dynamic Experimental Design.

    Science.gov (United States)

    Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie

    2016-01-01

    The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges.
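The sequential-testing principle behind ASAP can be caricatured with a tiny stdlib example (hypothetical models and threshold, not the paper's Bayesian model-comparison machinery): accumulate a log Bayes factor observation by observation and stop acquiring data once the evidence crosses a preset threshold.

```python
import math

def sequential_test(data, p_a=0.8, p_b=0.2, threshold=math.log(19)):
    """Accumulate the log Bayes factor of model A over model B on binary
    outcomes; stop as soon as the evidence threshold is crossed."""
    log_bf = 0.0
    for n, y in enumerate(data, start=1):
        lik_a = p_a if y else 1 - p_a     # likelihood under model A
        lik_b = p_b if y else 1 - p_b     # likelihood under model B
        log_bf += math.log(lik_a / lik_b)
        if abs(log_bf) >= threshold:
            winner = "A" if log_bf > 0 else "B"
            return winner, n
    return None, len(data)                # evidence still inconclusive

winner, n_used = sequential_test([1, 1, 1, 0, 1, 1])
```

Here three consecutive successes already push the evidence past the ~95 % threshold, so the procedure stops after three observations instead of exhausting the data.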

  12. Neuroticism, intelligence, and intra-individual variability in elementary cognitive tasks: testing the mental noise hypothesis.

    Science.gov (United States)

    Colom, Roberto; Quiroga, Ma Angeles

    2009-08-01

    Some studies show positive correlations between intraindividual variability in elementary speed measures (reflecting processing efficiency) and individual differences in neuroticism (reflecting instability in behaviour). The so-called neural noise hypothesis assumes that higher levels of noise are related both to smaller indices of processing efficiency and to greater levels of neuroticism. Here, we test this hypothesis by measuring mental speed with three elementary cognitive tasks tapping similar basic processes but systematically varying their content (verbal, numerical, and spatial). Neuroticism and intelligence are also measured. The sample comprised 196 undergraduate psychology students. The results show that (1) processing efficiency is generally unrelated to individual differences in neuroticism, (2) processing speed and efficiency correlate with intelligence, and (3) only the efficiency index is genuinely related to intelligence when the collinearity between speed and efficiency is controlled.

  13. SAR-based change detection using hypothesis testing and Markov random field modelling

    Science.gov (United States)

    Cao, W.; Martinis, S.

    2015-04-01

    The objective of this study is to automatically detect changed areas caused by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps: Firstly, an automatic coarse detection step is applied based on a statistical hypothesis test for initializing the classification. The original analytical formula as proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form of the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Secondly, a post-classification step is introduced to optimize the noisy classification result in the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function. The optimal classification based on the MRF is related to the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms an MRF to an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study this graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration the parameters of the energy function for the current classification are set by the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed on two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009, using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.
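The spirit of the coarse detection step can be conveyed with a deliberately simplified sketch (log-ratio thresholding on toy intensities; the paper's actual statistic uses the incomplete beta function with a CFAR threshold, and the MRF refinement is omitted here):

```python
import math

def log_ratio_change_map(pre, post, threshold=1.0):
    """Flag pixels whose absolute log intensity ratio exceeds a threshold.

    A crude stand-in for CFAR-style coarse detection on bi-temporal SAR
    intensities; no graph-cut post-classification is applied.
    """
    return [[abs(math.log(b / a)) > threshold for a, b in zip(row_pre, row_post)]
            for row_pre, row_post in zip(pre, post)]

pre  = [[1.0, 1.1], [0.9, 1.0]]
post = [[1.0, 1.0], [0.9, 8.0]]   # bottom-right pixel brightened sharply
changes = log_ratio_change_map(pre, post)
```

Only the pixel with a large intensity jump is flagged; in the paper, such a noisy coarse map is then regularized by the iterative graph-cut step.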

  14. Hypothesis testing in the Maimai Catchments, Westland

    International Nuclear Information System (INIS)

    Stewart, M.K.

    1993-01-01

    Seven experiments were carried out on the Maimai Catchments, Westland, to test assumptions about the nature of unsaturated-zone water flows in this humid environment. Hypotheses tested were: 1) that the deuterium (D) content of base flow water sources in small streams is constant at any given time, 2) that different soil moisture sampling methods give the same D contents, 3) that throughfall has the same D content as rainfall, 4) that saturation overland flow is mainly composed of current-event rainfall, 5) that macropores are not connected into pipe networks, 6) that the underlying substrate (Old Man Gravel conglomerate) does not deliver water to the stream during rainfall events, and 7) that different near-stream water sources have the same D contents at a given time. Over 570 samples were collected, of which 300 were analysed for deuterium in 1992-1993. This report gives the background, rationale, methods, and brief results of the experiments. The results will be integrated with other measurements and written up in one or more papers for journal publication. (author). 18 refs.; 4 figs.; 1 tab

  15. Assessment of Resampling Methods for Causality Testing: A note on the US Inflation Behavior

    NARCIS (Netherlands)

    Papana, A.; Kyrtsou, C.; Kugiumtzis, D.; Diks, C.

    2017-01-01

    Different resampling methods for the null hypothesis of no Granger causality are assessed in the setting of multivariate time series, taking into account that the driving-response coupling is conditioned on the other observed variables. As appropriate test statistic for this setting, the partial

  16. Searching for degenerate Higgs bosons a profile likelihood ratio method to test for mass-degenerate states in the presence of censored data and uncertainties

    CERN Document Server

    David, André; Petrucciani, Giovanni

    2015-01-01

    Using the likelihood ratio test statistic, we present a method which can be employed to test the hypothesis of a single Higgs boson using the matrix of measured signal strengths. This method can be applied in the presence of censored data and takes into account uncertainties on the measurements. The p-value against the hypothesis of a single Higgs boson is defined from the expected distribution of the test statistic, generated using pseudo-experiments. The applicability of the likelihood-based test is demonstrated using numerical examples with uncertainties and missing matrix elements.

  17. Testing the stress-gradient hypothesis during the restoration of tropical degraded land using the shrub Rhodomyrtus tomentosa as a nurse plant

    Science.gov (United States)

    Nan Liu; Hai Ren; Sufen Yuan; Qinfeng Guo; Long Yang

    2013-01-01

    The relative importance of facilitation and competition between pairwise plants across abiotic stress gradients as predicted by the stress-gradient hypothesis has been confirmed in arid and temperate ecosystems, but the hypothesis has rarely been tested in tropical systems, particularly across nutrient gradients. The current research examines the interactions between a...

  18. Is intuition really cooperative? Improved tests support the social heuristics hypothesis.

    Science.gov (United States)

    Isler, Ozan; Maule, John; Starmer, Chris

    2018-01-01

    Understanding human cooperation is a major scientific challenge. While cooperation is typically explained with reference to individual preferences, a recent cognitive process view hypothesized that cooperation is regulated by socially acquired heuristics. Evidence for the social heuristics hypothesis rests on experiments showing that time-pressure promotes cooperation, a result that can be interpreted as demonstrating that intuition promotes cooperation. This interpretation, however, is highly contested because of two potential confounds. First, in pivotal studies compliance with time-limits is low and, crucially, evidence shows intuitive cooperation only when noncompliant participants are excluded. The inconsistency of test results has led to the currently unresolved controversy regarding whether or not noncompliant subjects should be included in the analysis. Second, many studies show high levels of social dilemma misunderstanding, leading to speculation that asymmetries in understanding might explain patterns that are otherwise interpreted as intuitive cooperation. We present evidence from an experiment that employs an improved time-pressure protocol with new features designed to induce high levels of compliance and clear tests of understanding. Our study resolves the noncompliance issue, shows that misunderstanding does not confound tests of intuitive cooperation, and provides the first independent experimental evidence for intuitive cooperation in a social dilemma using time-pressure.

  19. Nonparametric Bayes Classification and Hypothesis Testing on Manifolds

    Science.gov (United States)

    Bhattacharya, Abhishek; Dunson, David

    2012-01-01

    Our first focus is prediction of a categorical response variable using features that lie on a general manifold. For example, the manifold may correspond to the surface of a hypersphere. We propose a general kernel mixture model for the joint distribution of the response and predictors, with the kernel expressed in product form and dependence induced through the unknown mixing measure. We provide simple sufficient conditions for large support and weak and strong posterior consistency in estimating both the joint distribution of the response and predictors and the conditional distribution of the response. Focusing on a Dirichlet process prior for the mixing measure, these conditions hold using von Mises-Fisher kernels when the manifold is the unit hypersphere. In this case, Bayesian methods are developed for efficient posterior computation using slice sampling. Next we develop Bayesian nonparametric methods for testing whether there is a difference in distributions between groups of observations on the manifold having unknown densities. We prove consistency of the Bayes factor and develop efficient computational methods for its calculation. The proposed classification and testing methods are evaluated using simulation examples and applied to spherical data applications. PMID:22754028

  20. Visual working memory and number sense: Testing the double deficit hypothesis in mathematics.

    Science.gov (United States)

    Toll, Sylke W M; Kroesbergen, Evelyn H; Van Luit, Johannes E H

    2016-09-01

    Evidence exists that there are two main underlying cognitive factors in mathematical difficulties: working memory and number sense. It is suggested that real math difficulties appear when both working memory and number sense are weak, here referred to as the double deficit (DD) hypothesis. The aim of this study was to test the DD hypothesis within a longitudinal time span of 2 years. A total of 670 children participated. The mean age was 4.96 years at the start of the study and 7.02 years at the end of the study. At the end of the first year of kindergarten, both visual-spatial working memory and number sense were measured by two different tasks. At the end of first grade, mathematical performance was measured with two tasks, one for math facts and one for math problems. Multiple regressions revealed that both visual working memory and symbolic number sense are predictors of mathematical performance in first grade. Symbolic number sense appears to be the strongest predictor for both math areas (math facts and math problems). Non-symbolic number sense only predicts performance in math problems. Multivariate analyses of variance showed that a combination of visual working memory and number sense deficits (NSDs) leads to the lowest performance on mathematics. Our DD hypothesis was confirmed. Both visual working memory and symbolic number sense in kindergarten are related to mathematical performance 2 years later, and a combination of visual working memory and NSDs leads to low performance in mathematical performance. © 2016 The British Psychological Society.

  1. Recurrence network measures for hypothesis testing using surrogate data: Application to black hole light curves

    Science.gov (United States)

    Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.

    2018-01-01

    Recurrence networks and the associated statistical measures have become important tools in the analysis of time series data. In this work, we test how effective the recurrence network measures are in analyzing real world data involving two main types of noise, white noise and colored noise. We use two prominent network measures as discriminating statistics for hypothesis testing using surrogate data for a specific null hypothesis that the data is derived from a linear stochastic process. We show that the characteristic path length is especially efficient as a discriminating measure, with the conclusions reasonably accurate even with a limited number of data points in the time series. We also highlight an additional advantage of the network approach in identifying the dimensionality of the system underlying the time series through a convergence measure derived from the probability distribution of the local clustering coefficients. As examples of real world data, we use the light curves from a prominent black hole system and show that a combined analysis using three primary network measures can provide vital information regarding the nature of temporal variability of light curves from different spectroscopic classes.
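    The surrogate-testing logic described here is generic: compute the discriminating statistic on the data, then rank it within an ensemble of surrogates drawn under the null. A minimal sketch with illustrative stand-ins — shuffle surrogates, which test only the simpler null of temporally uncorrelated noise, and lag-1 autocorrelation instead of the paper's recurrence-network measures:

```python
import numpy as np

def surrogate_pvalue(x, statistic, n_surrogates=99, seed=0):
    """Rank-based p-value of `statistic` against shuffle surrogates.
    Shuffling destroys all temporal structure, so this version tests the
    simple null of temporally uncorrelated noise (the paper's surrogates
    instead preserve a linear stochastic null)."""
    rng = np.random.default_rng(seed)
    s0 = statistic(x)
    exceed = sum(statistic(rng.permutation(x)) >= s0
                 for _ in range(n_surrogates))
    return (1 + exceed) / (1 + n_surrogates)

def lag1_autocorr(x):
    """Stand-in discriminating statistic: lag-1 autocorrelation."""
    x = x - x.mean()
    return float(x[:-1] @ x[1:] / (x @ x))

# A sine wave is maximally structured, so its statistic outranks all shuffles:
p = surrogate_pvalue(np.sin(np.linspace(0, 20, 200)), lag1_autocorr)
```

    With 99 surrogates the smallest attainable p-value is 1/100, which is why surrogate ensembles of this size are the conventional choice for tests at the 0.01 level.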

  2. Secretive Food Concocting in Binge Eating: Test of a Famine Hypothesis

    Science.gov (United States)

    Boggiano, Mary M.; Turan, Bulent; Maldonado, Christine R.; Oswald, Kimberly D.; Shuman, Ellen S.

    2016-01-01

    Objective Food concocting, or making strange food mixtures, is well documented in the famine and experimental semistarvation literature and appears anecdotally in rare descriptions of eating disorder (ED) patients but has never been scientifically investigated. Here we do so in the context of binge-eating using a “famine hypothesis of concocting.” Method A sample of 552 adults varying in binge eating and dieting traits completed a Concocting Survey created for this study. Exploratory ED groups were created to obtain predictions as to the nature of concocting in clinical populations. Results Binge eating predicted the 24.6% of participants who reported having ever concocted but dietary restraint, independently, even after controlling for binge eating, predicted its frequency and salience. Craving was the main motive. Emotions while concocting mirrored classic high-arousal symptoms associated with drug use, while emotions when eating the concoctions were intensely negative and self-deprecating. Concocting prevalence and salience were greater in the anorexia > bulimia > BED > no ED groups, consistent with their respectively increasing dieting scores. Discussion Concocting distinguishes binge eating from other overeating and, consistent with the famine hypothesis, is accounted for by dietary restraint. Unlike its adaptive function in famine, concocting could worsen binge-eating disorders by increasing negative affect, shame, and secrecy. Its assessment in these disorders may prove therapeutically valuable. PMID:23255044

  3. Aminoglycoside antibiotics and autism: a speculative hypothesis

    Directory of Open Access Journals (Sweden)

    Manev Hari

    2001-10-01

    Abstract Background Recently, it has been suspected that there is a relationship between therapy with some antibiotics and the onset of autism; but even more curious, some children benefited transiently from a subsequent treatment with a different antibiotic. Here, we speculate how aminoglycoside antibiotics might be associated with autism. Presentation We hypothesize that aminoglycoside antibiotics could (a) trigger the autism syndrome in susceptible infants by causing stop codon readthrough, i.e., a misreading of the genetic code of a hypothetical critical gene, and/or (b) improve autism symptoms by correcting the premature stop codon mutation in a hypothetical polymorphic gene linked to autism. Testing Investigate, retrospectively, whether a link exists between aminoglycoside use (which is not extensive in children) and the onset of autism symptoms (hypothesis "a"), or between aminoglycoside use and improvement of these symptoms (hypothesis "b"). Whereas a prospective study to test hypothesis "a" is not ethically justifiable, a study could be designed to test hypothesis "b". Implications It should be stressed that at this stage no direct evidence supports our speculative hypothesis and that its main purpose is to initiate development of new ideas that, eventually, would improve our understanding of the pathobiology of autism.

  4. Efficient Market Hypothesis in South Africa: Evidence from Linear and Nonlinear Unit Root Tests

    Directory of Open Access Journals (Sweden)

    Andrew Phiri

    2015-12-01

    This study investigates the weak form efficient market hypothesis (EMH) for five generalized stock indices in the Johannesburg Stock Exchange (JSE) using weekly data collected from 31st January 2000 to 16th December 2014. In particular, we test for weak form market efficiency using a battery of linear and nonlinear unit root testing procedures comprising the classical augmented Dickey-Fuller (ADF) tests, the two-regime threshold autoregressive (TAR) unit root tests described in Enders and Granger (1998), as well as the three-regime unit root tests described in Bec, Salem, and Carrasco (2004). Based on our empirical analysis, we are able to demonstrate that whilst the linear unit root tests advocate for unit roots within the time series, the nonlinear unit root tests suggest that most stock indices are threshold stationary processes. These results bridge two opposing contentions obtained from previous studies by concluding that under a linear framework the JSE stock indices offer support in favour of weak form market efficiency, whereas when nonlinearity is accounted for, a majority of the indices violate the weak form EMH.
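    The linear part of this battery rests on the Dickey-Fuller regression. A minimal numpy sketch of the constant-only variant (the augmented and threshold versions add lagged differences and regimes on top of this; all names here are illustrative):

```python
import numpy as np

def df_tstat(y):
    """t-statistic of rho in  dy_t = c + rho * y_{t-1} + e_t.
    Values well below the ~-2.86 Dickey-Fuller 5% critical value reject
    the unit-root (random walk) null for this constant-only variant."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(dy.size), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (dy.size - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

rng = np.random.default_rng(0)
t_walk = df_tstat(np.cumsum(rng.standard_normal(1000)))  # unit root: typically not rejected
t_noise = df_tstat(rng.standard_normal(1000))            # stationary: strongly rejected
```

    Failing to reject the unit-root null for a price index is what "weak form efficiency" amounts to in this framework: past prices carry no exploitable predictive structure.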

  5. Monte Carlo hypothesis tests for rare events superimposed on a background

    International Nuclear Information System (INIS)

    Avignone, F.T. III; Miley, H.S.; Padgett, W.J.; Weier, D.W.

    1985-01-01

    We describe two techniques to search for small numbers of counts under a peak of known shape and superimposed on a background with statistical fluctuations. Many comparisons of a single experimental spectrum with computer simulations of the peak and background are made. From these we calculate the probability that y hypothesized counts in the peaks of the simulations will result in a number larger than that observed in a given energy interval (bin) in the experimental spectrum. This is done for many values of the hypothesized number y. One procedure is very similar to testing a statistical hypothesis and can be analytically applied. Another is presented which is related to pattern recognition techniques and is less sensitive to the uncertainty in the mean. Sample applications to double beta decay data are presented. (orig.)
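    The first procedure can be sketched as a Monte Carlo count test: simulate many experiments with y hypothesized peak counts on a fluctuating background and tabulate how often the simulation exceeds the observed bin content. A hedged sketch assuming Poisson statistics (the authors' simulation details and the names below are illustrative):

```python
import numpy as np

def prob_exceed_observed(observed, background_mean, y, n_sim=100_000, seed=0):
    """Fraction of simulated experiments, each a Poisson background plus y
    hypothesized (Poisson-fluctuated) signal counts, that exceed the
    observed bin content."""
    rng = np.random.default_rng(seed)
    sims = rng.poisson(background_mean, n_sim) + rng.poisson(y, n_sim)
    return float(np.mean(sims > observed))

# No signal: exceeding the background's own mean is close to a coin flip;
# 20 hypothesized counts push the probability toward 1.
p0 = prob_exceed_observed(observed=10, background_mean=10.0, y=0.0)
p20 = prob_exceed_observed(observed=10, background_mean=10.0, y=20.0)
```

    Scanning this probability over many values of y is what lets the method set an upper limit on the number of signal counts compatible with the observed spectrum.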

  6. Testing the Developmental Origins of Health and Disease Hypothesis for Psychopathology Using Family-Based Quasi-Experimental Designs

    Science.gov (United States)

    D’Onofrio, Brian M.; Class, Quetzal A.; Lahey, Benjamin B.; Larsson, Henrik

    2014-01-01

    The Developmental Origins of Health and Disease (DOHaD) hypothesis is a broad theoretical framework that emphasizes how early risk factors have a causal influence on psychopathology. Researchers have raised concerns about the causal interpretation of statistical associations between early risk factors and later psychopathology because most existing studies have been unable to rule out the possibility of environmental and genetic confounding. In this paper we illustrate how family-based quasi-experimental designs can test the DOHaD hypothesis by ruling out alternative hypotheses. We review the logic underlying sibling-comparison, co-twin control, offspring of siblings/twins, adoption, and in vitro fertilization designs. We then present results from studies using these designs focused on broad indices of fetal development (low birth weight and gestational age) and a particular teratogen, smoking during pregnancy. The results provide mixed support for the DOHaD hypothesis for psychopathology, illustrating the critical need to use design features that rule out unmeasured confounding. PMID:25364377

  7. Testing the "Wildfire Hypothesis": Terrestrial Organic Carbon Burning as the Cause of the Paleocene-Eocene Boundary Carbon Isotope Excursion

    Science.gov (United States)

    Moore, E. A.; Kurtz, A. C.

    2005-12-01

    The 3‰ negative carbon isotope excursion (CIE) at the Paleocene-Eocene boundary has generally been attributed to dissociation of seafloor methane hydrates. We are testing the alternative hypothesis that the carbon cycle perturbation resulted from wildfires affecting the extensive peatlands and coal swamps formed in the Paleocene. Accounting for the CIE with terrestrial organic carbon rather than methane requires a significantly larger net release of fossil carbon to the ocean-atmosphere, which may be more consistent with the extreme global warming and ocean acidification characteristic of the Paleocene-Eocene Thermal Maximum (PETM). While other researchers have noted evidence of fires at the Paleocene-Eocene boundary in individual locations, the research presented here is designed to test the "wildfire hypothesis" for the Paleocene-Eocene boundary by examining marine sediments for evidence of a global increase in wildfire activity. Such fires would produce massive amounts of soot, widely distributed by wind and well preserved in marine sediments as refractory black carbon. We expect that global wildfires occurring at the Paleocene-Eocene boundary would produce a peak in black carbon abundance at the PETM horizon. We are using the method of Gelinas et al. (2001) to produce high-resolution concentration profiles of black carbon across the Paleocene-Eocene boundary using seafloor sediments from ODP cores, beginning with the Bass River core from ODP leg 174AX and site 1209 from ODP leg 198. This method involves the chemical and thermal extraction of non-refractory carbon followed by combustion of the residual black carbon and measurement as CO2. Measurement of the δ13C of the black carbon will put additional constraints on the source of the organic material combusted, and will allow us to determine if this organic material was formed prior to or during the CIE.

  8. Test of the decaying dark matter hypothesis using the Hopkins Ultraviolet Telescope

    Science.gov (United States)

    Davidsen, A. F.; Kriss, G. A.; Ferguson, H. C.; Blair, W. P.; Bowers, C. W.; Kimble, R. A.

    1991-01-01

    Sciama's hypothesis that the dark matter associated with galaxies, galaxy clusters, and the intergalactic medium consists of tau neutrinos of rest mass 28-30 eV whose decay generates ultraviolet photons of energy roughly 14-15 eV, has been tested using the Hopkins Ultraviolet Telescope flown aboard the Space Shuttle Columbia. A straightforward application of Sciama's model predicts that a spectral line from neutrino decay photons should be observed from the rich galaxy cluster Abell 665 with an SNR of about 30. No such emission was detected. For neutrinos in the mass range 27.2-32.1 eV, the observations set a lower lifetime limit significantly greater than Sciama's model requires.

  9. Effects of arousal on cognitive control: empirical tests of the conflict-modulated Hebbian-learning hypothesis.

    Science.gov (United States)

    Brown, Stephen B R E; van Steenbergen, Henk; Kedar, Tomer; Nieuwenhuis, Sander

    2014-01-01

    An increasing number of empirical phenomena that were previously interpreted as a result of cognitive control, turn out to reflect (in part) simple associative-learning effects. A prime example is the proportion congruency effect, the finding that interference effects (such as the Stroop effect) decrease as the proportion of incongruent stimuli increases. While this was previously regarded as strong evidence for a global conflict monitoring-cognitive control loop, recent evidence has shown that the proportion congruency effect is largely item-specific and hence must be due to associative learning. The goal of our research was to test a recent hypothesis about the mechanism underlying such associative-learning effects, the conflict-modulated Hebbian-learning hypothesis, which proposes that the effect of conflict on associative learning is mediated by phasic arousal responses. In Experiment 1, we examined in detail the relationship between the item-specific proportion congruency effect and an autonomic measure of phasic arousal: task-evoked pupillary responses. In Experiment 2, we used a task-irrelevant phasic arousal manipulation and examined the effect on item-specific learning of incongruent stimulus-response associations. The results provide little evidence for the conflict-modulated Hebbian-learning hypothesis, which requires additional empirical support to remain tenable.

  10. Effects of arousal on cognitive control: Empirical tests of the conflict-modulated Hebbian-learning hypothesis

    Directory of Open Access Journals (Sweden)

    Stephen B.R.E. Brown

    2014-01-01

    An increasing number of empirical phenomena that were previously interpreted as a result of cognitive control, turn out to reflect (in part) simple associative-learning effects. A prime example is the proportion congruency effect, the finding that interference effects (such as the Stroop effect) decrease as the proportion of incongruent stimuli increases. While this was previously regarded as strong evidence for a global conflict monitoring-cognitive control loop, recent evidence has shown that the proportion congruency effect is largely item-specific and hence must be due to associative learning. The goal of our research was to test a recent hypothesis about the mechanism underlying such associative-learning effects, the conflict-modulated Hebbian-learning hypothesis, which proposes that the effect of conflict on associative learning is mediated by phasic arousal responses. In Experiment 1, we examined in detail the relationship between the item-specific proportion congruency effect and an autonomic measure of phasic arousal: task-evoked pupillary responses. In Experiment 2, we used a task-irrelevant phasic arousal manipulation and examined the effect on item-specific learning of incongruent stimulus-response associations. The results provide little evidence for the conflict-modulated Hebbian-learning hypothesis, which requires additional empirical support to remain tenable.

  11. Molecular phylogeny of selected species of the order Dinophysiales (Dinophyceae) - testing the hypothesis of a Dinophysioid radiation

    DEFF Research Database (Denmark)

    Jensen, Maria Hastrup; Daugbjerg, Niels

    2009-01-01

    additional information on morphology and ecology to these evolutionary lineages. We have for the first time combined morphological information with molecular phylogenies to test the dinophysioid radiation hypothesis in a modern context. Nuclear-encoded LSU rDNA sequences including domains D1-D6 from 27...

  12. Zero- vs. one-dimensional, parametric vs. non-parametric, and confidence interval vs. hypothesis testing procedures in one-dimensional biomechanical trajectory analysis.

    Science.gov (United States)

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2015-05-01

    Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. Copyright © 2015 Elsevier Ltd. All rights reserved.
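    The pointwise percentile bootstrap that underlies the 1D CIs discussed here is easy to sketch; note that, as the paper argues, such pointwise intervals are not simultaneous 1D intervals, which require a multiple-comparisons correction such as RFT. A minimal sketch with illustrative names:

```python
import numpy as np

def pointwise_bootstrap_ci(trajectories, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean, computed independently at each
    time node of a (subjects x time) array. Pointwise only: a simultaneous
    1D interval needs a multiple-comparisons correction (e.g. RFT)."""
    rng = np.random.default_rng(seed)
    n = trajectories.shape[0]
    idx = rng.integers(0, n, size=(n_boot, n))   # resample subjects with replacement
    boot_means = trajectories[idx].mean(axis=1)  # shape (n_boot, time)
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2], axis=0)
    return lo, hi

rng = np.random.default_rng(1)
data = rng.standard_normal((40, 101))  # 40 zero-mean trajectories, 101 time nodes
lo, hi = pointwise_bootstrap_ci(data)
```

    Because each node's interval has its own 5% error rate, the band excludes the true mean somewhere along the trajectory far more often than 5% of the time; correcting that inflation is precisely what the RFT and 1D bootstrap procedures compared in the paper are for.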

  13. Empirical tests of the Chicago model and the Easterlin hypothesis: a case study of Japan.

    Science.gov (United States)

    Ohbuchi, H

    1982-05-01

    The objective of this discussion is to test the applicability of the economic theory of fertility with special reference to postwar Japan and to find a clue for forecasting the future trend of fertility. The theories examined are the "Chicago model" and the "Easterlin hypothesis." The major conclusion common among the leading economic theories of fertility, which have their origin with Gary S. Becker (1960, 1965) and Richard A. Easterlin (1966), is the positive income effect, i.e., that the relationship between income and fertility is positive despite the evidence that higher income families have fewer children and that fertility has declined with economic development. To bridge the gap between theory and fact is the primary purpose of the economic theory of fertility, and each offers a different interpretation for it. The point of the Chicago model, particularly of the household decision making model of the "new home economics," is the mechanism by which a positive effect of husband's income growth on fertility is offset by a negative price effect caused by the opportunity cost of wife's time. While the opportunity cost of wife's time is independent of the female wage rate for an unemployed wife, it is directly associated with the wage rate for a gainfully employed wife. Thus, the fertility response to female wages occurs only among families with an employed wife. The primary concern of empirical efforts to test the Chicago model has been with the determination of income and price elasticities. An attempt is made to test the relevance of the Chicago model and the Easterlin hypothesis in explaining the fertility movement in postwar Japan. In the case of the Chicago model, the statistical results appeared fairly successful but did not match with the theory.
The effect on fertility of a rise in women's real wage (and, therefore in the opportunity cost of mother's time) and of a rise in labor force participation rate of married women of childbearing age in recent years could not

  14. Testing the status-legitimacy hypothesis: A multilevel modeling approach to the perception of legitimacy in income distribution in 36 nations.

    Science.gov (United States)

    Caricati, Luca

    2017-01-01

    The status-legitimacy hypothesis was tested by analyzing cross-national data about social inequality. Several indicators were used as indexes of social advantage: social class, personal income, and self-position in the social hierarchy. Moreover, inequality and freedom in nations, as indexed by Gini and by the human freedom index, were considered. Results from 36 nations worldwide showed no support for the status-legitimacy hypothesis. The perception that income distribution was fair tended to increase as social advantage increased. Moreover, national context increased the difference between advantaged and disadvantaged people in the perception of social fairness: Contrary to the status-legitimacy hypothesis, disadvantaged people were more likely than advantaged people to perceive income distribution as too large, and this difference increased in nations with greater freedom and equality. The implications for the status-legitimacy hypothesis are discussed.

  15. Mastery Learning and the Decreasing Variability Hypothesis.

    Science.gov (United States)

    Livingston, Jennifer A.; Gentile, J. Ronald

    1996-01-01

    This report results from studies that tested two variations of Bloom's decreasing variability hypothesis using performance on successive units of achievement in four graduate classrooms that used mastery learning procedures. Data do not support the decreasing variability hypothesis; rather, they show no change over time. (SM)

  16. Testing the Rational Expectations Hypothesis on the Retail Trade Sector Using Survey Data from Malaysia

    OpenAIRE

    Puah, Chin-Hong; Chong, Lucy Lee-Yun; Jais, Mohamad

    2011-01-01

    The rational expectations hypothesis states that when people are expecting things to happen, using the available information, the predicted outcomes usually occur. This study utilized survey data provided by the Business Expectations Survey of Limited Companies to test whether forecasts of the Malaysian retail sector, based on gross revenue and capital expenditures, are rational. The empirical evidence illustrates that the decision-makers' expectations in the retail sector are biased and too o...

  17. Resampling-based methods in single and multiple testing for equality of covariance/correlation matrices.

    Science.gov (United States)

    Yang, Yang; DeGruttola, Victor

    2012-06-22

    Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.
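    The resampling idea can be sketched in stripped-down form with a Bartlett-type statistic and label permutation. Two loud simplifications: the paper resamples (robustly standardized) residuals rather than permuting raw observations, and permuting raw rows is only appropriate when group means coincide, as in the toy data below; all names are illustrative:

```python
import numpy as np

def bartlett_type_stat(groups):
    """Log-determinant comparison of group covariances with the pooled
    covariance (illustrative Bartlett-type form, without chi-square scaling)."""
    ns = [len(g) for g in groups]
    covs = [np.cov(g, rowvar=False) for g in groups]
    pooled = sum((n - 1) * c for n, c in zip(ns, covs)) / (sum(ns) - len(groups))
    ld_pooled = np.linalg.slogdet(pooled)[1]
    return sum((n - 1) * (ld_pooled - np.linalg.slogdet(c)[1])
               for n, c in zip(ns, covs))

def permutation_pvalue(g1, g2, n_perm=200, seed=0):
    """Permute group labels and rank the observed statistic (valid here only
    because the toy groups share the same mean)."""
    rng = np.random.default_rng(seed)
    stacked, n1 = np.vstack([g1, g2]), len(g1)
    s0 = bartlett_type_stat([g1, g2])
    exceed = sum(bartlett_type_stat([p[:n1], p[n1:]]) >= s0
                 for p in (rng.permutation(stacked) for _ in range(n_perm)))
    return (1 + exceed) / (1 + n_perm)

rng = np.random.default_rng(2)
p_same = permutation_pvalue(rng.standard_normal((80, 3)),
                            rng.standard_normal((80, 3)))
p_diff = permutation_pvalue(rng.standard_normal((80, 3)),
                            3.0 * rng.standard_normal((80, 3)))
```

    Resampling standardized residuals instead of raw rows, as the paper proposes, is what makes the reference distribution valid even when the null is false for some of the many hypotheses being tested jointly.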

  18. Testing the Münch hypothesis of long distance phloem transport in plants

    DEFF Research Database (Denmark)

    Knoblauch, Michael; Knoblauch, Jan; Mullendore, Daniel L.

    2016-01-01

    Long distance transport in plants occurs in sieve tubes of the phloem. The pressure flow hypothesis introduced by Ernst Münch in 1930 describes a mechanism of osmotically generated pressure differentials that are supposed to drive the movement of sugars and other solutes in the phloem, but this hypothesis has long faced major challenges. The key issue is whether the conductance of sieve tubes, including sieve plate pores, is sufficient to allow pressure flow. We show that with increasing distance between source and sink, sieve tube conductivity and turgor increase dramatically in Ipomoea nil. Our results provide strong support for the Münch hypothesis, while providing new tools for the investigation of one of the least understood plant tissues.

  19. A test of the cerebellar hypothesis of dyslexia in adequate and inadequate responders to reading intervention.

    Science.gov (United States)

    Barth, Amy E; Denton, Carolyn A; Stuebing, Karla K; Fletcher, Jack M; Cirino, Paul T; Francis, David J; Vaughn, Sharon

    2010-05-01

    The cerebellar hypothesis of dyslexia posits that cerebellar deficits are associated with reading disabilities and may explain why some individuals with reading disabilities fail to respond to reading interventions. We tested these hypotheses in a sample of children who participated in a grade 1 reading intervention study (n = 174) and a group of typically achieving children (n = 62). At posttest, children were classified as adequately responding to the intervention (n = 82), inadequately responding with decoding and fluency deficits (n = 36), or inadequately responding with only fluency deficits (n = 56). Based on the Bead Threading and Postural Stability subtests from the Dyslexia Screening Test-Junior, we found little evidence that assessments of cerebellar functions were associated with academic performance or responder status. In addition, we did not find evidence supporting the hypothesis that cerebellar deficits are more prominent for poor readers with "specific" reading disabilities (i.e., with discrepancies relative to IQ) than for poor readers with reading scores consistent with IQ. In contrast, measures of phonological awareness, rapid naming, and vocabulary were strongly associated with responder status and academic outcomes. These results add to accumulating evidence that fails to associate cerebellar functions with reading difficulties.

  20. Test of the prey-base hypothesis to explain use of red squirrel midden sites by American martens

    Science.gov (United States)

    Dean E. Pearson; Leonard F. Ruggiero

    2001-01-01

    We tested the prey-base hypothesis to determine whether selection of red squirrel (Tamiasciurus hudsonicus) midden sites (cone caches) by American martens (Martes americana) for resting and denning could be attributed to greater abundance of small-mammal prey. Five years of livetrapping at 180 sampling stations in 2 drainages showed that small mammals,...

  1. The Effect of Retention Interval Task Difficulty on Young Children's Prospective Memory: Testing the Intention Monitoring Hypothesis

    Science.gov (United States)

    Mahy, Caitlin E. V.; Moses, Louis J.

    2015-01-01

    The current study examined the impact of retention interval task difficulty on 4- and 5-year-olds' prospective memory (PM) to test the hypothesis that children periodically monitor their intentions during the retention interval and that disrupting this monitoring may result in poorer PM performance. In addition, relations among PM, working memory,…

  2. Testing the accelerating moment release (AMR) hypothesis in areas of high stress

    Science.gov (United States)

    Guilhem, Aurélie; Bürgmann, Roland; Freed, Andrew M.; Ali, Syed Tabrez

    2013-11-01

    Several retrospective analyses have proposed that significant increases in moment release occurred prior to many large earthquakes of recent times. However, the finding of accelerating moment release (AMR) strongly depends on the choice of three parameters: (1) magnitude range, (2) area being considered surrounding the events and (3) the time period prior to the large earthquakes. Consequently, the AMR analysis has been criticized as being an a posteriori data-fitting exercise with no new predictive power. As AMR has been hypothesized to relate to changes in the state of stress around the eventual epicentre, we compare here AMR results to models of stress accumulation in California. Instead of assuming a complete stress drop on all surrounding fault segments implied by a back-slip stress lobe method, we consider that stress evolves dynamically, punctuated by the occurrence of earthquakes, and governed by the elastic and viscous properties of the lithosphere. We study the seismicity of southern California and extract events for AMR calculations following the systematic approach employed in previous studies. We present several sensitivity tests of the method, as well as grid-search analyses over the region between 1955 and 2005 using fixed magnitude range, radius of the search area and period of time. The results are compared to the occurrence of large events and to maps of Coulomb stress changes. The Coulomb stress maps are compiled using the coseismic stress from all M > 7.0 earthquakes since 1812, their subsequent post-seismic relaxation, and the interseismic strain accumulation. We find no convincing correlation of seismicity rate changes in recent decades with areas of high stress that would support the AMR hypothesis. Furthermore, this indicates limited utility for practical earthquake hazard analysis in southern California, and possibly other regions.
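
    The cumulative Benioff strain on which AMR analyses are based is simple to compute from a catalog. The sketch below is illustrative only: the moment-magnitude relation is the standard one, and the catalog is synthetic, not the southern California data used in the study.

```python
import math

def benioff_strain(magnitudes):
    """Cumulative Benioff strain: running sum of the square roots of
    seismic moments, with M0 = 10**(1.5*Mw + 9.1) N*m (standard relation)."""
    series, total = [], 0.0
    for mw in magnitudes:
        total += math.sqrt(10 ** (1.5 * mw + 9.1))
        series.append(total)
    return series

# Hypothetical small catalog: background M4 events plus one M6 event
catalog = [4.0] * 10 + [6.0] + [4.0] * 10
strain = benioff_strain(catalog)
print(strain[-1])
```

    AMR studies then ask whether such a curve bends upward (accelerates) toward the time of a large event; as the abstract notes, the answer is sensitive to the magnitude range, area and time window chosen.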

  3. Statistical power analysis a simple and general model for traditional and modern hypothesis tests

    CERN Document Server

    Murphy, Kevin R; Wolach, Allen

    2014-01-01

    Noted for its accessible approach, this text applies the latest approaches of power analysis to both null hypothesis and minimum-effect testing using the same basic unified model. Through the use of a few simple procedures and examples, the authors show readers with little expertise in statistical analysis how to obtain the values needed to carry out the power analysis for their research. Illustrations of how these analyses work and how they can be used to choose the appropriate criterion for defining statistically significant outcomes are sprinkled throughout. The book presents a simple and g
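
    The kind of calculation such a power analysis involves can be sketched for the simplest case, a two-sided one-sample z-test at alpha = 0.05. This is a normal-approximation sketch, not the book's own procedure.

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ztest_power(d, n, z_crit=1.96):
    """Power of a two-sided one-sample z-test (alpha = 0.05 by default)
    for standardized effect size d and sample size n."""
    shift = d * math.sqrt(n)
    return (1.0 - normal_cdf(z_crit - shift)) + normal_cdf(-z_crit - shift)

power = ztest_power(0.5, 40)  # medium effect, n = 40
print(round(power, 3))
```

    Power rises with both effect size and sample size, which is exactly the trade-off such analyses are used to plan.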

  4. Generalist predator, cyclic voles and cavity nests: testing the alternative prey hypothesis.

    Science.gov (United States)

    Pöysä, Hannu; Jalava, Kaisa; Paasivaara, Antti

    2016-12-01

    The alternative prey hypothesis (APH) states that when the density of the main prey declines, generalist predators switch to alternative prey and vice versa, meaning that predation pressure on the alternative prey should be negatively correlated with the density of the main prey. We tested the APH in a system comprising one generalist predator (pine marten, Martes martes), cyclic main prey (microtine voles, Microtus agrestis and Myodes glareolus) and alternative prey (cavity nests of common goldeneye, Bucephala clangula); pine marten is an important predator of both voles and common goldeneye nests. Specifically, we studied whether annual predation rate of real common goldeneye nests and experimental nests is negatively associated with fluctuation in the density of voles in four study areas in southern Finland in 2000-2011. Both vole density and nest predation rate varied considerably between years in all study areas. However, we did not find support for the hypothesis that vole dynamics indirectly affects predation rate of cavity nests in the way predicted by the APH. On the contrary, the probability of predation increased with vole spring abundance for both real and experimental nests. Furthermore, a crash in vole abundance from previous autumn to spring did not increase the probability of predation of real nests, although it increased that of experimental nests. We suggest that learned predation by pine marten individuals, coupled with efficient search image for cavities, overrides possible indirect positive effects of high vole density on the alternative prey in our study system.

  5. Testing the gravitational instability hypothesis?

    Science.gov (United States)

    Babul, Arif; Weinberg, David H.; Dekel, Avishai; Ostriker, Jeremiah P.

    1994-01-01

    We challenge a widely accepted assumption of observational cosmology: that successful reconstruction of observed galaxy density fields from measured galaxy velocity fields (or vice versa), using the methods of gravitational instability theory, implies that the observed large-scale structures and large-scale flows were produced by the action of gravity. This assumption is false, in that there exist nongravitational theories that pass the reconstruction tests and gravitational theories with certain forms of biased galaxy formation that fail them. Gravitational instability theory predicts specific correlations between large-scale velocity and mass density fields, but the same correlations arise in any model where (a) structures in the galaxy distribution grow from homogeneous initial conditions in a way that satisfies the continuity equation, and (b) the present-day velocity field is irrotational and proportional to the time-averaged velocity field. We demonstrate these assertions using analytical arguments and N-body simulations. If large-scale structure is formed by gravitational instability, then the ratio of the galaxy density contrast to the divergence of the velocity field yields an estimate of the density parameter Ω (or, more generally, an estimate of β ≡ Ω^0.6/b, where b is an assumed constant of proportionality between galaxy and mass density fluctuations). In nongravitational scenarios, the values of Ω or β estimated in this way may fail to represent the true cosmological values. However, even if nongravitational forces initiate and shape the growth of structure, gravitationally induced accelerations can dominate the velocity field at late times, long after the action of any nongravitational impulses. The estimated β approaches the true value in such cases, and in our numerical simulations the estimated β values are reasonably accurate for both gravitational and nongravitational models. Reconstruction tests

  6. Testing the assumptions of the pyrodiversity begets biodiversity hypothesis for termites in semi-arid Australia.

    Science.gov (United States)

    Davis, Hayley; Ritchie, Euan G; Avitabile, Sarah; Doherty, Tim; Nimmo, Dale G

    2018-04-01

    Fire shapes the composition and functioning of ecosystems globally. In many regions, fire is actively managed to create diverse patch mosaics of fire-ages under the assumption that a diversity of post-fire-age classes will provide a greater variety of habitats, thereby enabling species with differing habitat requirements to coexist, and enhancing species diversity (the pyrodiversity begets biodiversity hypothesis). However, studies provide mixed support for this hypothesis. Here, using termite communities in a semi-arid region of southeast Australia, we test four key assumptions of the pyrodiversity begets biodiversity hypothesis: (i) that fire shapes vegetation structure over sufficient time frames to influence species' occurrence, (ii) that animal species are linked to resources that are themselves shaped by fire and that peak at different times since fire, (iii) that species' probability of occurrence or abundance peaks at varying times since fire and (iv) that providing a diversity of fire-ages increases species diversity at the landscape scale. Termite species and habitat elements were sampled in 100 sites across a range of fire-ages, nested within 20 landscapes chosen to represent a gradient of low to high pyrodiversity. We used regression modelling to explore relationships between termites, habitat and fire. Fire affected two habitat elements (coarse woody debris and the cover of woody vegetation) that were associated with the probability of occurrence of three termite species and overall species richness, thus supporting the first two assumptions of the pyrodiversity hypothesis. However, this did not result in those species or species richness being affected by fire history per se. Consequently, landscapes with a low diversity of fire histories had similar numbers of termite species as landscapes with high pyrodiversity. Our work suggests that encouraging a diversity of fire-ages for enhancing termite species richness in this study region is not necessary.

  7. Does Portuguese economy support crude oil conservation hypothesis?

    International Nuclear Information System (INIS)

    Bashiri Behmiri, Niaz; Pires Manso, José R.

    2012-01-01

    This paper examines cointegration relationships and the Granger causality nexus in a trivariate framework among oil consumption, economic growth and the international oil price in Portugal. For this purpose, we employ two Granger causality approaches: the Johansen cointegration test with a vector error correction model (VECM), and the Toda–Yamamoto approach. The cointegration test proves the existence of a long-run equilibrium relationship among these variables, and the VECM and Toda–Yamamoto Granger causality tests indicate that there is bidirectional causality between crude oil consumption and economic growth (feedback hypothesis). Therefore, the Portuguese economy does not support the crude oil conservation hypothesis. Consequently, policymakers should consider that implementing oil conservation and environmental policies may negatively affect Portuguese economic growth. - Highlights: ► We examine Granger causality among oil consumption, GDP and oil price in Portugal. ► VECM and Toda–Yamamoto tests found bidirectional causality between oil and GDP. ► Portuguese economy does not support the crude oil conservation hypothesis.
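
    The lag-augmented F-test at the core of Granger causality analysis can be sketched on simulated data. This is the classic bivariate test, not the Johansen/VECM or Toda–Yamamoto procedures used in the paper, and the series and seed are hypothetical.

```python
import numpy as np

def granger_f(y, x, lags=1):
    """F statistic for the null that lagged x does not help predict y
    beyond y's own lags (restricted vs unrestricted OLS)."""
    n = len(y)
    Y = y[lags:]
    # restricted design: intercept + lags of y
    cols = [np.ones(n - lags)]
    cols += [y[lags - k:n - k] for k in range(1, lags + 1)]
    Xr = np.column_stack(cols)
    # unrestricted design: additionally, lags of x
    Xu = np.column_stack(cols + [x[lags - k:n - k] for k in range(1, lags + 1)])
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return resid @ resid
    rss_r, rss_u = rss(Xr), rss(Xu)
    df_denom = n - lags - Xu.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / df_denom)

rng = np.random.default_rng(1)
x = rng.normal(size=1001)
y = 0.8 * np.roll(x, 1) + 0.3 * rng.normal(size=1001)
x, y = x[1:], y[1:]      # drop the element affected by the roll wrap-around

f_xy = granger_f(y, x)   # x should Granger-cause y: large F
f_yx = granger_f(x, y)   # y should not Granger-cause x: small F
print(f_xy, f_yx)
```

    Bidirectional causality, as found in the paper, corresponds to both F statistics being significant.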

  8. The Random-Walk Hypothesis on the Indian Stock Market

    OpenAIRE

    Ankita Mishra; Vinod Mishra; Russell Smyth

    2014-01-01

    This study tests the random walk hypothesis for the Indian stock market. Using 19 years of monthly data on six indices from the National Stock Exchange (NSE) and the Bombay Stock Exchange (BSE), this study applies three different unit root tests with two structural breaks to analyse the random walk hypothesis. We find that unit root tests that allow for two structural breaks alone are not able to reject the unit root null; however, a recently developed unit root test that simultaneously accou...
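
    A variance-ratio check, one common complement to the unit root tests used in the study, can be sketched as follows. The prices are simulated, not the NSE/BSE index data; under a random walk the ratio should be close to 1.

```python
import math
import random

def variance_ratio(prices, q):
    """Variance ratio in the spirit of Lo-MacKinlay: variance of q-period
    log returns divided by q times the variance of 1-period returns."""
    logs = [math.log(p) for p in prices]
    r1 = [logs[i + 1] - logs[i] for i in range(len(logs) - 1)]
    rq = [logs[i + q] - logs[i] for i in range(len(logs) - q)]
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return var(rq) / (q * var(r1))

random.seed(42)
# Simulated geometric random walk (hypothetical price series)
prices = [100.0]
for _ in range(5000):
    prices.append(prices[-1] * math.exp(random.gauss(0.0, 0.01)))
vr = variance_ratio(prices, 5)
print(round(vr, 2))
```

    A ratio well below 1 would suggest mean reversion, and well above 1 would suggest trending, either of which contradicts the random walk null.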

  9. Testing the snake-detection hypothesis: larger early posterior negativity in humans to pictures of snakes than to pictures of other reptiles, spiders and slugs

    OpenAIRE

    Van Strien, Jan W.; Franken, Ingmar H. A.; Huijding, Jorg

    2014-01-01

    According to the snake detection hypothesis (Isbell, 2006), fear specifically of snakes may have pushed evolutionary changes in the primate visual system allowing pre-attentional visual detection of fearful stimuli. A previous study demonstrated that snake pictures, when compared to spider or bird pictures, draw more early attention as reflected by a larger early posterior negativity (EPN). Here we report two studies that further tested the snake detection hypothesis. In Study 1, we tested whe...

  10. Do the disadvantaged legitimize the social system? A large-scale test of the status-legitimacy hypothesis.

    Science.gov (United States)

    Brandt, Mark J

    2013-05-01

    System justification theory (SJT) posits that members of low-status groups are more likely to see their social systems as legitimate than members of high-status groups because members of low-status groups experience a sense of dissonance between system motivations and self/group motivations (Jost, Pelham, Sheldon, & Sullivan, 2003). The author examined the status-legitimacy hypothesis using data from 3 representative sets of data from the United States (American National Election Studies and General Social Surveys) and throughout the world (World Values Survey; total N across studies = 151,794). Multilevel models revealed that the average effect across years in the United States and countries throughout the world was most often directly contrary to the status-legitimacy hypothesis or was practically zero. In short, the status-legitimacy effect is not a robust phenomenon. Two theoretically relevant moderator variables (inequality and civil liberties) were also tested, revealing weak evidence, null evidence, or contrary evidence to the dissonance-inspired status-legitimacy hypothesis. In sum, the status-legitimacy effect is not robust and is unlikely to be the result of dissonance. These results are used to discuss future directions for research, the current state of SJT, and the interpretation of theoretically relevant but contrary and null results. PsycINFO Database Record (c) 2013 APA, all rights reserved

  11. Robust Approach to Verifying the Weak Form of the Efficient Market Hypothesis

    Science.gov (United States)

    Střelec, Luboš

    2011-09-01

    The weak form of the efficient markets hypothesis states that prices incorporate only past information about the asset. An implication of this form of the efficient markets hypothesis is that one cannot detect mispriced assets and consistently outperform the market through technical analysis of past prices. One possible formulation of the efficient market hypothesis used for weak-form tests is that share prices follow a random walk. This means that returns are realizations of an IID sequence of random variables. Consequently, for verifying the weak form of the efficient market hypothesis, we can use distributional tests, among others some tests of normality and/or graphical methods. Many procedures for testing the normality of univariate samples have been proposed in the literature [7]. Today the most popular omnibus test of normality for general use is the Shapiro-Wilk test. The Jarque-Bera test is the most widely adopted omnibus test of normality in econometrics and related fields. In particular, the Jarque-Bera test (i.e. a test based on the classical measures of skewness and kurtosis) is frequently used when one is more concerned about heavy-tailed alternatives. As these measures are based on moments of the data, this test has a zero breakdown value [2]. In other words, a single outlier can make the test worthless. The reason so many classical procedures are nonrobust to outliers is that the parameters of the model are expressed in terms of moments, and their classical estimators are expressed in terms of sample moments, which are very sensitive to outliers. Another approach to robustness is to concentrate on the parameters of interest suggested by the problem under study. Consequently, novel robust procedures for testing normality are presented in this paper to overcome shortcomings of classical normality tests in the field of financial data, which typically exhibit remote data points (outliers) and additional types of deviations from
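
    The zero breakdown value of the Jarque-Bera test is easy to demonstrate: because the statistic is built from sample skewness and kurtosis, a single outlier can inflate it arbitrarily. A minimal sketch on synthetic data (not the financial data discussed in the paper):

```python
import math
import random

def jarque_bera(x):
    """Jarque-Bera statistic: JB = n/6 * (S**2 + (K - 3)**2 / 4), with
    sample skewness S and kurtosis K from central moments. Under the
    null of normality, JB is asymptotically chi-squared with 2 df."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m3 = sum((v - mean) ** 3 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

random.seed(0)
normal_sample = [random.gauss(0.0, 1.0) for _ in range(2000)]
contaminated = normal_sample[:-1] + [25.0]   # one extreme outlier

jb_clean = jarque_bera(normal_sample)       # small: null not rejected
jb_outlier = jarque_bera(contaminated)      # explodes from a single point
print(jb_clean, jb_outlier)
```

    The robust procedures advocated in the paper aim precisely to avoid this sensitivity of moment-based statistics.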

  12. Invited Commentary: Can Issues With Reproducibility in Science Be Blamed on Hypothesis Testing?

    Science.gov (United States)

    Weinberg, Clarice R

    2017-09-15

    In the accompanying article (Am J Epidemiol. 2017;186(6):646-647), Dr. Timothy Lash makes a forceful case that the problems with reproducibility in science stem from our "culture" of null hypothesis significance testing. He notes that when attention is selectively given to statistically significant findings, the estimated effects will be systematically biased away from the null. Here I revisit the recent history of genetic epidemiology and argue for retaining statistical testing as an important part of the tool kit. Particularly when many factors are considered in an agnostic way, in what Lash calls "innovative" research, investigators need a selection strategy to identify which findings are most likely to be genuine, and hence worthy of further study. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  13. Testing a hypothesis of unidirectional hybridization in plants: Observations on Sonneratia, Bruguiera and Ligularia

    Directory of Open Access Journals (Sweden)

    Wu Chung-I

    2008-05-01

    Background: When natural hybridization occurs at sites where the hybridizing species differ in abundance, the pollen load delivered to the rare species should be predominantly from the common species. Previous authors have therefore proposed a hypothesis on the direction of hybridization: interspecific hybrids are more likely to have the female parent from the rare species and the male parent from the common species. We wish to test this hypothesis using data on plant hybridizations both from our own experimentation and from the literature. Results: By examining the maternally inherited chloroplast DNA of 6 cases of F1 hybridization from four genera of plants, we infer unidirectional hybridization in most cases. In all 5 cases where the relative abundance of the parental species deviates from parity, however, the direction is predominantly opposite to the prediction based strictly on numerical abundance. Conclusion: Our results show that the observed direction of hybridization is almost always opposite of the predicted direction based on the relative abundance of the hybridizing species. Several alternative hypotheses, including unidirectional postmating isolation and reinforcement of premating isolation, are discussed.

  14. Nondestructive testing method

    International Nuclear Information System (INIS)

    Porter, J.F.

    1996-01-01

    Nondestructive testing (NDT) is the use of physical and chemical methods for evaluating material integrity without impairing its intended usefulness or continuing service. Nondestructive tests are used by manufacturers for the following reasons: 1) to ensure product reliability; 2) to prevent accidents and save human lives; 3) to aid in better product design; 4) to control manufacturing processes; and 5) to maintain a uniform quality level. Nondestructive testing is used extensively on power plants, oil and chemical refineries, offshore oil rigs and pipelines (NDT can even be conducted underwater), welds on tanks, boilers, pressure vessels and heat exchangers. NDT is now being used for testing concrete and composite materials. Because of the criticality of its application, NDT should be performed and the results evaluated by qualified personnel. There are five basic nondestructive examination methods: 1) liquid penetrant testing - a method used for detecting surface flaws in materials; it can be used for metallic and nonmetallic materials, and is portable and relatively inexpensive; 2) magnetic particle testing - a method used to detect surface and subsurface flaws in ferromagnetic materials; 3) radiographic testing - a method used to detect internal flaws and significant variations in material composition and thickness; 4) ultrasonic testing - a method used to detect internal and external flaws in materials; it uses ultrasonics to measure the thickness of a material or to examine the internal structure for discontinuities; 5) eddy current testing - a method used to detect surface and subsurface flaws in conductive materials. No single nondestructive examination method can find all discontinuities in all of the materials capable of being tested. The most important consideration is for the specifier of the test to be familiar with the test method and its applicability to the type and geometry of the material and the flaws to be detected

  15. Testing Fiscal Dominance Hypothesis in a Structural VAR Specification for Pakistan

    Directory of Open Access Journals (Sweden)

    Shaheen Rozina

    2018-03-01

    This research aims to test the fiscal dominance hypothesis for Pakistan through a bivariate structural vector autoregression (SVAR) specification covering the period 1977–2016. This study employs the real primary deficit (non-interest government expenditures minus total revenues) and real primary liabilities (sum of monetary base and domestic public debt) as indicators of fiscal measures and monetary policy, respectively. A structural VAR is retrieved both for the entire sample period and for four sub-periods (1977–1986, 1987–1997, 1998–2008, and 2009–2016). This study identifies the presence of fiscal dominance for the entire sample period and the sub-period 1987–2008. The estimates reveal an interesting phenomenon: fiscal dominance is significant in the elected regimes and weaker in the presence of military regimes in Pakistan. From a policy perspective, this research suggests increased autonomy of the central bank to achieve long-term price stability and reduced administration costs to ensure an efficient democratic regime in Pakistan.

  16. Test of a hypothesis of realism in quantum theory using a Bayesian approach

    Science.gov (United States)

    Nikitin, N.; Toms, K.

    2017-05-01

    In this paper we propose a time-independent equality and time-dependent inequality, suitable for an experimental test of the hypothesis of realism. The derivation of these relations is based on the concept of conditional probability and on Bayes' theorem in the framework of Kolmogorov's axiomatics of probability theory. The equality obtained is intrinsically different from the well-known Greenberger-Horne-Zeilinger (GHZ) equality and its variants, because violation of the proposed equality might be tested in experiments with only two microsystems in a maximally entangled Bell state |Ψ⁻⟩, while a test of the GHZ equality requires at least three quantum systems in a special state |Ψ_GHZ⟩. The obtained inequality differs from Bell's, Wigner's, and Leggett-Garg inequalities, because it deals with spin s = 1/2 projections onto only two nonparallel directions at two different moments of time, while a test of the Bell and Wigner inequalities requires at least three nonparallel directions, and a test of the Leggett-Garg inequalities requires at least three distinct moments of time. Hence, the proposed inequality seems to open an additional experimental possibility to avoid the "contextuality loophole." Violation of the proposed equality and inequality is illustrated with the behavior of a pair of anticorrelated spins in an external magnetic field and also with the oscillations of flavor-entangled pairs of neutral pseudoscalar mesons.

  17. Assessing Threat Detection Scenarios through Hypothesis Generation and Testing

    Science.gov (United States)

    2015-12-01

    ...subsequent F statistics are reported using the Huynh-Feldt correction (Greenhouse-Geisser epsilon > .775). Experienced and inexperienced... change in hypothesis using experience and initial confidence as predictors. In the Dog Day scenario, the regression was not statistically...

  18. Test of an Hypothesis of Magnetization, Tilt and Flow in an Hypabyssal Intrusion, Colombian Andes

    Science.gov (United States)

    Muggleton, S.; MacDonald, W. D.; Estrada, J. J.; Sierra, G. M.

    2002-05-01

    Magnetic remanence in the Miocene Clavijo intrusion in the Cauca Valley, adjacent to the Cordillera Central, plunges steeply northward (MacDonald et al., 1996). Assuming magnetization in a normal magnetic field, the expected remanence direction is approximately I = 10°, D = 000°; the observed remanence is I = 84°, D = 003°. The discrepancy could be explained by a 74° rotation about a horizontal E-W axis, i.e., about an axis normal to the nearby N-S trending Romeral fault zone. If the intrusion is the shallow feeder of a now-eroded andesitic volcano, then perhaps the paleovertical direction is preserved in flow lineations and provides a test of the tilt/rotation of the remanence. In combination, the steep remanence direction, vertical flow, and the inferred rotation of the volcanic neck lead to the hypothesis of a shallow-plunging southward lineation for this body. Using anisotropy of magnetic susceptibility (AMS) as a proxy for the flow lineation, it is predicted that the K1 (maximum susceptibility) axis in this body plunges gently south. This hypothesis was tested using approximately 50 oriented cores from 5 sites near the west margin of the Clavijo intrusion. The results suggest a NW plunging lineation, inconsistent with the initial hypothesis. However, a relatively consistent flow lineation is suggested by the K1 axes. If this flow axis represents paleovertical, it suggests moderate tilting of the Clavijo body towards the southeast. The results are encouraging enough to suggest that AMS may be useful for determining paleo-vertical in shallow volcanic necks and hypabyssal intrusions, and might ultimately be useful in a tilt-correction for such bodies. Other implications of the results will be discussed. MacDonald, WD, Estrada, JJ, Sierra, GM, Gonzalez, H, 1996, Late Cenozoic tectonics and paleomagnetism of North Cauca Basin intrusions, Colombian Andes: Dual rotation modes: Tectonophysics, v 261, p. 277-289.

  19. The null hypothesis of GSEA, and a novel statistical model for competitive gene set analysis

    DEFF Research Database (Denmark)

    Debrabant, Birgit

    2017-01-01

    MOTIVATION: Competitive gene set analysis intends to assess whether a specific set of genes is more associated with a trait than the remaining genes. However, the statistical models assumed to date to underlie these methods do not enable a clear-cut formulation of the competitive null hypothesis. This is a major handicap to the interpretation of results obtained from a gene set analysis. RESULTS: This work presents a hierarchical statistical model based on the notion of dependence measures, which overcomes this problem. The two levels of the model naturally reflect the modular structure of many gene set analysis methods. We apply the model to show that the popular GSEA method, which recently has been claimed to test the self-contained null hypothesis, actually tests the competitive null if the weight parameter is zero. However, for this result to hold strictly, the choice of the dependence measures...
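
    The competitive null can be made concrete with a gene-sampling permutation test: compare the mean gene-level statistic of the set against random gene sets of the same size. This is a simplified sketch with hypothetical data, not the paper's hierarchical model or GSEA itself.

```python
import random

def competitive_pvalue(gene_stats, in_set, n_perm=2000, seed=7):
    """Competitive permutation p-value: fraction of randomly drawn gene
    sets of the same size whose mean statistic is at least as large as
    the observed set's mean (gene-sampling null, simplified sketch)."""
    rng = random.Random(seed)
    genes = list(gene_stats)
    k = len(in_set)
    observed = sum(gene_stats[g] for g in in_set) / k
    hits = 0
    for _ in range(n_perm):
        draw = rng.sample(genes, k)
        if sum(gene_stats[g] for g in draw) / k >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

random.seed(3)
# Hypothetical gene-level statistics: 20 'signal' genes shifted upward
stats = {f"g{i}": random.gauss(0.0, 1.0) for i in range(500)}
signal = [f"g{i}" for i in range(20)]
for g in signal:
    stats[g] += 1.5

p = competitive_pvalue(stats, signal)
print(p)
```

    A self-contained test would instead permute sample labels rather than genes, which is exactly the distinction at issue in the abstract.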

  20. Sex and Class Differences in Parent-Child Interaction: A Test of Kohn's Hypothesis

    Science.gov (United States)

    Gecas, Viktor; Nye, F. Ivan

    1974-01-01

    This paper focuses on Melvin Kohn's suggestive hypothesis that white-collar parents stress the development of internal standards of conduct in their children while blue-collar parents are more likely to react on the basis of the consequences of the child's behavior. This hypothesis was supported. (Author)

  1. Hypothesis-driven methods to augment human cognition by optimizing cortical oscillations

    Directory of Open Access Journals (Sweden)

    Jörn M. Horschig

    2014-06-01

    Cortical oscillations have been shown to represent fundamental functions of a working brain, e.g. communication, stimulus binding, error monitoring, and inhibition, and are directly linked to behavior. Recent studies intervening with these oscillations have demonstrated effective modulation of both the oscillations and behavior. In this review, we collect evidence in favor of how hypothesis-driven methods can be used to augment cognition by optimizing cortical oscillations. We elaborate their potential usefulness for three target groups: healthy elderly, patients with attention deficit/hyperactivity disorder, and healthy young adults. We discuss the relevance of neuronal oscillations in each group and show how each of them can benefit from the manipulation of functionally-related oscillations. Further, we describe methods for manipulation of neuronal oscillations including direct brain stimulation as well as indirect task alterations. We also discuss practical considerations about the proposed techniques. In conclusion, we propose that insights from neuroscience should guide techniques to augment human cognition, which in turn can provide a better understanding of how the human brain works.

  2. A Global comparison of surface soil characteristics across five cities: A test of the urban ecosystem convergence hypothesis.

    Science.gov (United States)

    Richard V. Pouyat; Ian D. Yesilonis; Miklos Dombos; Katalin Szlavecz; Heikki Setala; Sarel Cilliers; Erzsebet Hornung; D. Johan Kotze; Stephanie Yarwood

    2015-01-01

    As part of the Global Urban Soil Ecology and Education Network and to test the urban ecosystem convergence hypothesis, we report on soil pH, organic carbon (OC), total nitrogen (TN), phosphorus (P), and potassium (K) measured in four soil habitat types (turfgrass, ruderal, remnant, and reference) in five metropolitan areas (Baltimore, Budapest,...

  3. Colour vision in ADHD: part 1--testing the retinal dopaminergic hypothesis.

    Science.gov (United States)

    Kim, Soyeon; Al-Haj, Mohamed; Chen, Samantha; Fuller, Stuart; Jain, Umesh; Carrasco, Marisa; Tannock, Rosemary

    2014-10-24

    To test the retinal dopaminergic hypothesis, which posits deficient blue color perception in ADHD, resulting from hypofunctioning CNS and retinal dopamine, to which blue cones are exquisitely sensitive. Also, purported sex differences in red color perception were explored. 30 young adults diagnosed with ADHD and 30 healthy young adults, matched on age and gender, performed a psychophysical task to measure blue and red color saturation and contrast discrimination ability. Visual function measures, such as the Visual Activities Questionnaire (VAQ) and Farnsworth-Munsell 100 hue test (FMT), were also administered. Females with ADHD were less accurate in discriminating blue and red color saturation relative to controls but did not differ in contrast sensitivity. Female control participants were better at discriminating red saturation than males, but no sex difference was present within the ADHD group. Poorer discrimination of red as well as blue color saturation in the female ADHD group may be partly attributable to a hypo-dopaminergic state in the retina, given that color perception (blue-yellow and red-green) is based on input from S-cones (short wavelength cone system) early in the visual pathway. The origin of female superiority in red perception may be rooted in sex-specific functional specialization in hunter-gatherer societies. The absence of this sexual dimorphism for red colour perception in ADHD females warrants further investigation.

  4. Proform-Antecedent Linking in Individuals with Agrammatic Aphasia: A Test of the Intervener Hypothesis.

    Science.gov (United States)

    Engel, Samantha; Shapiro, Lewis P; Love, Tracy

    2018-02-01

    To evaluate processing and comprehension of pronouns and reflexives in individuals with agrammatic (Broca's) aphasia and age-matched control participants. Specifically, we evaluate processing and comprehension patterns in terms of a specific hypothesis -- the Intervener Hypothesis -- which posits that the difficulty of individuals with agrammatic (Broca's) aphasia results from similarity-based interference caused by the presence of an intervening NP between two elements of a dependency chain. We used an eye tracking-while-listening paradigm to investigate real-time processing (Experiment 1) and a sentence-picture matching task to investigate final interpretive comprehension (Experiment 2) of sentences containing proforms in complement phrase and subject relative constructions. Individuals with agrammatic aphasia demonstrated a greater proportion of gazes to the correct referent of reflexives relative to pronouns and significantly greater comprehension accuracy of reflexives relative to pronouns. These results provide support for the Intervener Hypothesis, previous support for which comes from studies of Wh- questions and unaccusative verbs, and we argue that this account provides an explanation for the deficits of individuals with agrammatic aphasia across a growing set of sentence constructions. The current study extends this hypothesis beyond filler-gap dependencies to referential dependencies and allows us to refine the hypothesis in terms of the structural constraints that meet the description of the Intervener Hypothesis.

  5. Testing the hypothesis of the natural suicide rates: Further evidence from OECD data

    DEFF Research Database (Denmark)

    Andres, Antonio Rodriguez; Halicioglu, Ferda

    2011-01-01

    This paper provides further evidence on the hypothesis of the natural rate of suicide using the time series data for 15 OECD countries over the period 1970–2004. This hypothesis suggests that the suicide rate of a society could never be zero even if both the economic and the social conditions wer...

  6. Energy prices, multiple structural breaks, and efficient market hypothesis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chien-Chiang; Lee, Jun-De [Department of Applied Economics, National Chung Hsing University, Taichung (China)

    2009-04-15

    This paper investigates the efficient market hypothesis using total energy price and four disaggregated energy prices (coal, oil, gas, and electricity) for OECD countries over the period 1978-2006. We employ a highly flexible panel data stationarity test of Carrion-i-Silvestre et al. [Carrion-i-Silvestre JL, Del Barrio-Castro T, Lopez-Bazo E. Breaking the panels: an application to GDP per capita. J Econometrics 2005;8:159-75], which incorporates multiple shifts in level and slope, thereby controlling for cross-sectional dependence through bootstrap methods. Overwhelming evidence in favor of the broken stationarity hypothesis is found, implying that energy prices are not characterized by an efficient market. Thus, it shows the presence of profitable arbitrage opportunities among energy prices. The estimated breaks are meaningful and coincide with the most critical events which affected the energy prices. (author)

  8. Universality hypothesis breakdown at one-loop order

    Science.gov (United States)

    Carvalho, P. R. S.

    2018-05-01

    We probe the universality hypothesis by analytically computing the corrections, up to at least two-loop order, to the critical exponents for q-deformed O(N) self-interacting λϕ4 scalar field theories through six distinct and independent field-theoretic renormalization group methods and ɛ-expansion techniques. We show that the effect of q deformation on the one-loop corrections to the q-deformed critical exponents is null, so the universality hypothesis breaks down at this loop order. Such an effect emerges only at the two-loop and higher levels, and the validity of the universality hypothesis is restored. The q-deformed critical exponents obtained through the six methods are the same and, furthermore, reduce to their nondeformed values in the appropriate limit.

  9. Testing the environmental Kuznets curve hypothesis with bird populations as habitat-specific environmental indicators: evidence from Canada.

    Science.gov (United States)

    Lantz, Van; Martínez-Espiñeira, Roberto

    2008-04-01

    The traditional environmental Kuznets curve (EKC) hypothesis postulates that environmental degradation follows an inverted U-shaped relationship with gross domestic product (GDP) per capita. We tested the EKC hypothesis with bird populations in 5 different habitats as environmental quality indicators. Because birds are considered environmental goods, for them the EKC hypothesis would instead be associated with a U-shaped relationship between bird populations and GDP per capita. In keeping with the literature, we included other variables in the analysis, namely human population density and time index variables (the latter variable captured the impact of persistent and exogenous climate and/or policy changes on bird populations over time). Using data from 9 Canadian provinces gathered over 37 years, we used a generalized least-squares regression for each bird habitat type, which accounted for the panel structure of the data, the cross-sectional dependence across provinces in the residuals, heteroskedasticity, and fixed- or random-effect specifications of the models. We found evidence that supports the EKC hypothesis for 3 of the 5 bird population habitat types. In addition, the relationship between human population density and the different bird populations varied, which emphasizes the complex nature of the impact that human populations have on the environment. The relationship between the time-index variable and the different bird populations also varied, which indicates there are other persistent and significant influences on bird populations over time. Overall our EKC results were consistent with those found for threatened bird species, indicating that economic prosperity does indeed act to benefit some bird populations.

  10. An experimental test of the habitat-amount hypothesis for saproxylic beetles in a forested region.

    Science.gov (United States)

    Seibold, Sebastian; Bässler, Claus; Brandl, Roland; Fahrig, Lenore; Förster, Bernhard; Heurich, Marco; Hothorn, Torsten; Scheipl, Fabian; Thorn, Simon; Müller, Jörg

    2017-06-01

    The habitat-amount hypothesis challenges traditional concepts that explain species richness within habitats, such as the habitat-patch hypothesis, where species number is a function of patch size and patch isolation. It posits that effects of patch size and patch isolation are driven by effects of sample area, and thus that the number of species at a site is basically a function of the total habitat amount surrounding this site. We tested the habitat-amount hypothesis for saproxylic beetles and their habitat of dead wood by using an experiment comprising 190 plots with manipulated patch sizes situated in a forested region with a high variation in habitat amount (i.e., density of dead trees in the surrounding landscape). Although dead wood is a spatio-temporally dynamic habitat, saproxylic insects have life cycles shorter than the time needed for habitat turnover and they closely track their resource. Patch size was manipulated by adding various amounts of downed dead wood to the plots (~800 m³ in total); dead trees in the surrounding landscape (~240 km²) were identified using airborne laser scanning (light detection and ranging). Over 3 yr, 477 saproxylic species (101,416 individuals) were recorded. Considering 20-1,000 m radii around the patches, local landscapes were identified as having a radius of 40-120 m. Both patch size and habitat amount in the local landscapes independently affected species numbers without a significant interaction effect, hence refuting the island effect. Species accumulation curves relative to cumulative patch size were not consistent with either the habitat-patch hypothesis or the habitat-amount hypothesis: several small dead-wood patches held more species than a single large patch with an amount of dead wood equal to the sum of that of the small patches. Our results indicate that conservation of saproxylic beetles in forested regions should primarily focus on increasing the overall amount of dead wood without considering its

  11. Multiple hypothesis tracking for the cyber domain

    Science.gov (United States)

    Schwoegler, Stefan; Blackman, Sam; Holsopple, Jared; Hirsch, Michael J.

    2011-09-01

    This paper discusses how methods used for conventional multiple hypothesis tracking (MHT) can be extended to domain-agnostic tracking of entities from non-kinematic constraints such as those imposed by cyber attacks in a potentially dense false alarm background. MHT is widely recognized as the premier method to avoid corrupting tracks with spurious data in the kinematic domain but it has not been extensively applied to other problem domains. The traditional approach is to tightly couple track maintenance (prediction, gating, filtering, probabilistic pruning, and target confirmation) with hypothesis management (clustering, incompatibility maintenance, hypothesis formation, and N-association pruning). However, by separating the domain-specific track maintenance portion from the domain-agnostic hypothesis management piece, we can begin to apply the wealth of knowledge gained from ground and air tracking solutions to the cyber (and other) domains. These realizations led to the creation of Raytheon's Multiple Hypothesis Extensible Tracking Architecture (MHETA). In this paper, we showcase MHETA for the cyber domain, plugging in a well-established method, CUBRC's INFormation Engine for Real-time Decision making (INFERD), for the association portion of the MHT. The result is a CyberMHT. We demonstrate the power of MHETA-INFERD using simulated data. Using metrics from both the tracking and cyber domains, we show that while no tracker is perfect, applying MHETA-INFERD captures advanced non-kinematic tracks in an automated way, performs better than non-MHT approaches, and decreases analyst response time to cyber threats.

  12. A Blind Test of the Younger Dryas Impact Hypothesis.

    Directory of Open Access Journals (Sweden)

    Vance Holliday

    The Younger Dryas Impact Hypothesis (YDIH) states that North America was devastated by some sort of extraterrestrial event ~12,800 calendar years before present. Two fundamental questions persist in the debate over the YDIH: Can the results of analyses for purported impact indicators be reproduced? And are the indicators unique to the lower YD boundary (YDB), i.e., ~12.8k cal yrs BP? A test reported here presents the results of analyses that address these questions. Two different labs analyzed identical splits of samples collected at, above, and below the ~12.8ka zone at the Lubbock Lake archaeological site (LL) in northwest Texas. Both labs reported similar variation in levels of magnetic micrograins (>300 mg/kg >12.8ka and <11.5ka, but <150 mg/kg 12.8ka to 11.5ka). Analysis for magnetic microspheres in one split, reported elsewhere, produced very low to nonexistent levels throughout the section. In the other split, reported here, the levels of magnetic microspherules and nanodiamonds are low or nonexistent at, below, and above the YDB with the notable exception of a sample <11,500 cal years old. In that sample the claimed impact proxies were recovered at abundances two to four orders of magnitude above that from the other samples. Reproducibility of at least some analyses is problematic. In particular, no standard criteria exist for identification of magnetic spheres. Moreover, the purported impact proxies are not unique to the YDB.

  13. A test of the massive binary black hole hypothesis - Arp 102B

    Science.gov (United States)

    Halpern, J. P.; Filippenko, Alexei V.

    1988-01-01

    The emission-line spectra of several AGN have broad peaks which are significantly displaced in velocity with respect to the host galaxy. An interpretation of this effect in terms of orbital motion of a binary black hole predicts periods of a few centuries. It is pointed out here that recent measurements of the masses and sizes of many low-luminosity AGN imply orbital periods much shorter than this. In particular, it is found that the elliptical galaxy Arp 102B is the most likely candidate for observation of radial velocity variations; its period is expected to be about 3 yr. The H-alpha line profile of Arp 102B has been measured for 5 yr without detecting any change in velocity, and it is thus found that a rather restrictive observational test of the massive binary black hole hypothesis already exists, albeit for this one object.

  14. Addendum to the article: Misuse of null hypothesis significance testing: Would estimation of positive and negative predictive values improve certainty of chemical risk assessment?

    Science.gov (United States)

    Bundschuh, Mirco; Newman, Michael C; Zubrod, Jochen P; Seitz, Frank; Rosenfeldt, Ricki R; Schulz, Ralf

    2015-03-01

    We argued recently that the positive predictive value (PPV) and the negative predictive value (NPV) are valuable metrics to include during null hypothesis significance testing: They inform the researcher about the probability of statistically significant and non-significant test outcomes actually being true. Although commonly misunderstood, a reported p value estimates only the probability of obtaining the results or more extreme results if the null hypothesis of no effect were true. Calculations of the more informative PPV and NPV require an a priori estimate of the probability (R). The present document discusses challenges of estimating R.
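The quantities discussed can be made concrete. Assuming the standard definitions (alpha for the significance level, power for 1 - beta, and R for the prior probability that the alternative is true), PPV and NPV follow from Bayes' rule; the function below is an illustrative sketch, not code from the paper:

```python
def ppv_npv(alpha, power, r):
    """Positive/negative predictive values of a significance test.

    alpha : Type I error rate (significance level)
    power : 1 - beta, probability of rejecting H0 when H1 is true
    r     : prior probability that the alternative hypothesis is true
    """
    # P(H1 true | test significant)
    ppv = (power * r) / (power * r + alpha * (1 - r))
    # P(H0 true | test not significant)
    npv = ((1 - alpha) * (1 - r)) / ((1 - alpha) * (1 - r) + (1 - power) * r)
    return ppv, npv

# With alpha = 0.05, power = 0.8, and only 1 in 10 tested hypotheses true,
# just 64% of "significant" results reflect a real effect.
ppv, npv = ppv_npv(0.05, 0.8, 0.1)
print(round(ppv, 3), round(npv, 3))  # 0.64 0.977
```

This makes the paper's point tangible: the PPV depends strongly on R, which is exactly the quantity that is hard to estimate a priori.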

  15. A One-Sample Test for Normality with Kernel Methods

    OpenAIRE

    Kellner , Jérémie; Celisse , Alain

    2015-01-01

    We propose a new one-sample test for normality in a Reproducing Kernel Hilbert Space (RKHS). Namely, we test the null-hypothesis of belonging to a given family of Gaussian distributions. Hence our procedure may be applied either to test data for normality or to test parameters (mean and covariance) if data are assumed Gaussian. Our test is based on the same principle as the MMD (Maximum Mean Discrepancy) which is usually used for two-sample tests such as homogeneity or independence testing. O...
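The MMD idea underlying the test can be illustrated with a small sketch. This is not the authors' one-sample RKHS procedure; it is the standard biased estimate of squared MMD with a Gaussian kernel, comparing a data sample against a reference sample drawn from the hypothesized Gaussian (bandwidth and sample sizes are arbitrary choices):

```python
import math
import random

def rbf(x, y, bandwidth=1.0):
    """Gaussian (RBF) kernel between two scalars."""
    return math.exp(-(x - y) ** 2 / (2 * bandwidth ** 2))

def mmd2(xs, ys, bandwidth=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy."""
    kxx = sum(rbf(a, b, bandwidth) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(rbf(a, b, bandwidth) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(rbf(a, b, bandwidth) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

random.seed(0)
reference = [random.gauss(0, 1) for _ in range(200)]    # hypothesized N(0, 1)
matching = [random.gauss(0, 1) for _ in range(200)]     # data really N(0, 1)
shifted = [random.gauss(1.5, 1) for _ in range(200)]    # data far from N(0, 1)

# MMD^2 is near zero when distributions agree, clearly positive when they differ.
print(mmd2(matching, reference) < mmd2(shifted, reference))  # True
```

In the actual test, the significance threshold for the MMD statistic would be calibrated, e.g., by resampling under the null.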

  16. Testing the Hypothesis of Biofilm as a Source for Soft Tissue and Cell-Like Structures Preserved in Dinosaur Bone

    Science.gov (United States)

    2016-01-01

    Recovery of still-soft tissue structures, including blood vessels and osteocytes, from dinosaur bone after demineralization was reported in 2005 and in subsequent publications. Despite multiple lines of evidence supporting an endogenous source, it was proposed that these structures arose from contamination from biofilm-forming organisms. To test the hypothesis that soft tissue structures result from microbial invasion of the fossil bone, we used two different biofilm-forming microorganisms to inoculate modern bone fragments from which organic components had been removed. We show fundamental morphological, chemical and textural differences between the resultant biofilm structures and those derived from dinosaur bone. The data do not support the hypothesis that biofilm-forming microorganisms are the source of these structures. PMID:26926069

  17. Are only infants held more often on the left? If so, why? Testing the attention-emotion hypothesis with an infant, a vase, and two chimeric tests, one "emotional," one not.

    Science.gov (United States)

    Harris, Lauren Julius; Cárdenas, Rodrigo A; Stewart, Nathaniel D; Almerigi, Jason B

    2018-05-16

    Most adults, especially women, hold infants and dolls but not books or packages on the left side. One reason may be that attention is more often leftward in response to infants, unlike emotionally neutral objects like books and packages. Women's stronger bias may reflect greater responsiveness to infants. Previously, we tested the attention hypothesis by comparing women's side-of-hold of a doll, book, and package with direction-of-attention on the Chimeric Faces Test (CFT) [Harris, L. J., Cárdenas, R. A., Spradlin, Jr., M. P., & Almerigi, J. B. (2010). Why are infants held on the left? A test of the attention hypothesis with a doll, a book, and a bag. Laterality: Asymmetries of Body, Brain and Cognition, 15(5), 548-571. doi: 10.1080/13576500903064018 ]. Only the doll was held more often to the left, and only for the doll were side-of-hold and CFT scores related, with left-holders showing a stronger left-attention bias than right-holders. In the current study, we tested men and women with a doll and the CFT along with a vase as a neutral object and a "non-emotional" chimeric test. Again, only the doll was held more often to the left, but now, although both chimeric tests showed left-attention biases, scores were unrelated to side-of-hold. Nor were there sex differences. The results support left-hold selectivity but not the attention hypothesis, with or without the element of emotion. They also raise questions about the contribution of sex-of-holder. We conclude with suggestions for addressing these issues.

  18. Stock returns predictability and the adaptive market hypothesis in emerging markets: evidence from India.

    Science.gov (United States)

    Hiremath, Gourishankar S; Kumari, Jyoti

    2014-01-01

    This study addresses the question of whether the adaptive market hypothesis provides a better description of the behaviour of an emerging stock market like India. We employed linear and nonlinear methods to evaluate the hypothesis empirically. The linear tests show a cyclical pattern in linear dependence, suggesting that the Indian stock market switched between periods of efficiency and inefficiency. In contrast, the results from nonlinear tests reveal strong evidence of nonlinearity in returns throughout the sample period, with a sign of tapering magnitude of nonlinear dependence in the recent period. The findings suggest that the Indian stock market is moving towards efficiency. The results provide additional insights into the association between financial crises, foreign portfolio investments and inefficiency. JEL codes: G14; G12; C12.

  19. To test or not to test

    DEFF Research Database (Denmark)

    Rochon, Justine; Gondan, Matthias; Kieser, Meinhard

    2012-01-01

    Background: Student's two-sample t test is generally used for comparing the means of two independent samples, for example, two treatment arms. Under the null hypothesis, the t test assumes that the two samples arise from the same normally distributed population with unknown variance. Adequate control of the Type I error requires that the normality assumption holds, which is often examined by means of a preliminary Shapiro-Wilk test. The following two-stage procedure is widely accepted: If the preliminary test for normality is not significant, the t test is used; if the preliminary test rejects the null hypothesis of normality, a nonparametric test is applied in the main analysis. Methods: Equally sized samples were drawn from exponential, uniform, and normal distributions. The two-sample t test was conducted if either both samples (Strategy I) or the collapsed set of residuals from both samples...
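The two-stage procedure under study can be sketched in code. Heavy hedging applies: the study uses the Shapiro-Wilk test and a nonparametric main test (in practice typically scipy.stats.shapiro and mannwhitneyu); this stdlib-only sketch substitutes a crude skewness screen for Shapiro-Wilk and reports only the branch taken plus a raw statistic, not p values, so it illustrates the branching logic rather than the paper's exact tests:

```python
import math
import random
import statistics

def skewness(xs):
    """Sample skewness; a crude normality screen (stand-in for Shapiro-Wilk)."""
    m = statistics.fmean(xs)
    s = statistics.stdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

def welch_t(xs, ys):
    """Welch's two-sample t statistic."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    vx, vy = statistics.variance(xs), statistics.variance(ys)
    return (mx - my) / math.sqrt(vx / len(xs) + vy / len(ys))

def rank_sum(xs, ys):
    """Wilcoxon rank-sum statistic for the first sample (ties ignored)."""
    pooled = sorted(xs + ys)
    return sum(pooled.index(x) + 1 for x in xs)

def two_stage(xs, ys, skew_cutoff=0.8):
    """Pick the main test based on a preliminary normality screen."""
    if abs(skewness(xs)) < skew_cutoff and abs(skewness(ys)) < skew_cutoff:
        return "t-test", welch_t(xs, ys)
    return "rank-sum", rank_sum(xs, ys)

random.seed(1)
normal_a = [random.gauss(0, 1) for _ in range(200)]
normal_b = [random.gauss(0, 1) for _ in range(200)]
skewed_a = [random.expovariate(1) for _ in range(200)]
skewed_b = [random.expovariate(1) for _ in range(200)]

print(two_stage(normal_a, normal_b)[0])  # normal data -> t-test branch
print(two_stage(skewed_a, skewed_b)[0])  # exponential data -> rank-sum branch
```

The paper's concern is precisely that this conditional branching distorts the overall Type I error of the main analysis.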

  20. Gender differences in emotion perception and self-reported emotional intelligence: A test of the emotion sensitivity hypothesis

    OpenAIRE

    Fischer, Agneta H.; Kret, Mariska E.; Broekens, Joost

    2018-01-01

    Previous meta-analyses and reviews on gender differences in emotion recognition have shown a small to moderate female advantage. However, inconsistent evidence from recent studies has raised questions regarding the implications of different methodologies, stimuli, and samples. In the present research based on a community sample of more than 5000 participants, we tested the emotional sensitivity hypothesis, stating that women are more sensitive to perceive subtle, i.e. low intense or ambiguous...

  1. Simplified Freeman-Tukey test statistics for testing probabilities in ...

    African Journals Online (AJOL)

    This paper presents the simplified version of the Freeman-Tukey test statistic for testing hypotheses about multinomial probabilities in one-, two- and multidimensional contingency tables that does not require calculating the expected cell frequencies before the test of significance. The simplified method established new criteria of ...
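For context, the classical (unsimplified) Freeman-Tukey statistic for observed counts O_i and expected counts E_i is T2 = sum_i (sqrt(O_i) + sqrt(O_i + 1) - sqrt(4 E_i + 1))^2, asymptotically chi-squared under the null. A minimal implementation of this classical form follows; the paper's simplified variant, which avoids computing the E_i, is not reproduced here:

```python
import math

def freeman_tukey(observed, expected):
    """Classical Freeman-Tukey goodness-of-fit statistic.

    T2 = sum_i (sqrt(O_i) + sqrt(O_i + 1) - sqrt(4 * E_i + 1)) ** 2
    Asymptotically chi-squared with (cells - 1) degrees of freedom.
    """
    return sum(
        (math.sqrt(o) + math.sqrt(o + 1) - math.sqrt(4 * e + 1)) ** 2
        for o, e in zip(observed, expected)
    )

# Equal expected counts under H0: p = (1/3, 1/3, 1/3), n = 90.
good_fit = freeman_tukey([30, 30, 30], [30, 30, 30])
bad_fit = freeman_tukey([50, 25, 15], [30, 30, 30])
print(good_fit < bad_fit)  # True: the statistic grows with lack of fit
```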

  2. Testing the niche variation hypothesis with a measure of body condition

    Science.gov (United States)

    Individual variation and fitness are cornerstones of evolution by natural selection. The niche variation hypothesis (NVH) posits that when interspecific competition is relaxed, intraspecific competition should drive niche expansion by selection favoring use of novel resources. Po...

  3. Cross-validation and hypothesis testing in neuroimaging: An irenic comment on the exchange between Friston and Lindquist et al.

    Science.gov (United States)

    Reiss, Philip T

    2015-08-01

    The "ten ironic rules for statistical reviewers" presented by Friston (2012) prompted a rebuttal by Lindquist et al. (2013), which was followed by a rejoinder by Friston (2013). A key issue left unresolved in this discussion is the use of cross-validation to test the significance of predictive analyses. This note discusses the role that cross-validation-based and related hypothesis tests have come to play in modern data analyses, in neuroimaging and other fields. It is shown that such tests need not be suboptimal and can fill otherwise-unmet inferential needs. Copyright © 2015 Elsevier Inc. All rights reserved.
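A concrete form such cross-validation-based hypothesis tests often take is a permutation test on cross-validated accuracy: labels are shuffled many times, the CV accuracy is recomputed under each shuffle, and the observed accuracy is compared against this null distribution. A stdlib-only sketch with a one-dimensional nearest-centroid classifier (the classifier, fold scheme, and permutation count are illustrative choices, not from the note):

```python
import random
import statistics

def cv_accuracy(xs, ys, folds=5):
    """K-fold CV accuracy of a 1-D nearest-centroid classifier."""
    idx = list(range(len(xs)))
    hits = 0
    for f in range(folds):
        test = idx[f::folds]
        train = [i for i in idx if i not in test]
        # Per-class centroids estimated on the training fold only.
        cents = {}
        for label in set(ys):
            cents[label] = statistics.fmean(xs[i] for i in train if ys[i] == label)
        for i in test:
            pred = min(cents, key=lambda c: abs(xs[i] - cents[c]))
            hits += pred == ys[i]
    return hits / len(xs)

def permutation_p_value(xs, ys, n_perm=200):
    """P-value for H0: no association between features and labels."""
    observed = cv_accuracy(xs, ys)
    labels = ys[:]
    exceed = 0
    for _ in range(n_perm):
        random.shuffle(labels)
        exceed += cv_accuracy(xs, labels) >= observed
    return (exceed + 1) / (n_perm + 1)

random.seed(2)
xs = [random.gauss(0, 1) for _ in range(40)] + [random.gauss(2, 1) for _ in range(40)]
ys = [0] * 40 + [1] * 40
print(permutation_p_value(xs, ys) < 0.05)  # clearly separable classes: small p
```

The p-value is valid by construction of the permutation scheme, which is part of why such tests, as the note argues, need not be suboptimal.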

  4. A Hypothesis-Driven Approach to Site Investigation

    Science.gov (United States)

    Nowak, W.

    2008-12-01

    Variability of subsurface formations and the scarcity of data lead to the notion of aquifer parameters as geostatistical random variables. Given an information need and limited resources for field campaigns, site investigation is often put into the context of optimal design. In optimal design, the types, numbers and positions of samples are optimized under case-specific objectives to meet the information needs. Past studies feature optimal data worth (balancing maximum financial profit in an engineering task versus the cost of additional sampling), or aim at a minimum prediction uncertainty of stochastic models for a prescribed investigation budget. Recent studies also account for other sources of uncertainty outside the hydrogeological range, such as uncertain toxicity, ingestion and behavioral parameters of the affected population when predicting the human health risk from groundwater contaminations. The current study looks at optimal site investigation from a new angle. Answering a yes/no question under uncertainty directly requires recasting the original question as a hypothesis test; otherwise, the resulting answer would convey false confidence. A straightforward example is whether a recent contaminant spill will cause contaminant concentrations in excess of a legal limit at a nearby drinking water well. This question can only be answered down to a specified chance of error, i.e., based on the significance level used in hypothesis tests. Optimal design is placed into the hypothesis-driven context by using the chance of providing a false yes/no answer as the new criterion to be minimized. Different configurations apply for one-sided and two-sided hypothesis tests. If a false answer entails financial liability, the hypothesis-driven context can be re-cast in the context of data worth. The remaining difference is that failure is a hard constraint in the data worth context versus a monetary punishment term in the hypothesis-driven context.

  5. Is Variability in Mate Choice Similar for Intelligence and Personality Traits? Testing a Hypothesis about the Evolutionary Genetics of Personality

    Science.gov (United States)

    Stone, Emily A.; Shackelford, Todd K.; Buss, David M.

    2012-01-01

    This study tests the hypothesis presented by Penke, Denissen, and Miller (2007a) that condition-dependent traits, including intelligence, attractiveness, and health, are universally and uniformly preferred as characteristics in a mate relative to traits that are less indicative of condition, including personality traits. We analyzed…

  6. Testing the Hypothesis of Biofilm as a Source for Soft Tissue and Cell-Like Structures Preserved in Dinosaur Bone.

    Directory of Open Access Journals (Sweden)

    Mary Higby Schweitzer

    Recovery of still-soft tissue structures, including blood vessels and osteocytes, from dinosaur bone after demineralization was reported in 2005 and in subsequent publications. Despite multiple lines of evidence supporting an endogenous source, it was proposed that these structures arose from contamination from biofilm-forming organisms. To test the hypothesis that soft tissue structures result from microbial invasion of the fossil bone, we used two different biofilm-forming microorganisms to inoculate modern bone fragments from which organic components had been removed. We show fundamental morphological, chemical and textural differences between the resultant biofilm structures and those derived from dinosaur bone. The data do not support the hypothesis that biofilm-forming microorganisms are the source of these structures.

  7. On the Flexibility of Social Source Memory: A Test of the Emotional Incongruity Hypothesis

    Science.gov (United States)

    Bell, Raoul; Buchner, Axel; Kroneisen, Meike; Giang, Trang

    2012-01-01

    A popular hypothesis in evolutionary psychology posits that reciprocal altruism is supported by a cognitive module that helps cooperative individuals to detect and remember cheaters. Consistent with this hypothesis, a source memory advantage for faces of cheaters (better memory for the cheating context in which these faces were encountered) was…

  8. Received social support and exercising: An intervention study to test the enabling hypothesis.

    Science.gov (United States)

    Rackow, Pamela; Scholz, Urte; Hornung, Rainer

    2015-11-01

    Received social support is considered important for health-enhancing exercise participation. The enabling hypothesis of social support suggests an indirect association of social support and exercising via constructs of self-regulation, such as self-efficacy. This study aimed at examining an expanded enabling hypothesis by examining effects of different kinds of social support (i.e., emotional and instrumental) on exercising not only via self-efficacy but also via self-monitoring and action planning. An 8-week online study was conducted. Participants were randomly assigned to an intervention or a control group. The intervention comprised finding and then exercising regularly with a new exercise companion. Intervention and control group effects were compared by a manifest multigroup model. Received emotional social support predicted self-efficacy, self-monitoring, and action planning in the intervention group. Moreover, received emotional social support was indirectly connected with exercise via the examined mediators. The indirect effect from received emotional social support via self-efficacy mainly contributed to the total effect. No direct or indirect effect of received instrumental social support on exercise emerged. In the control group, neither emotional nor instrumental social support was associated with any of the self-regulation constructs nor with exercise. Actively looking for a new exercise companion and exercising together seems to be beneficial for the promotion of received emotional and instrumental social support. Emotional support in turn promotes exercise by enabling better self-regulation, in particular self-efficacy. Statement of contribution What is already known on this subject? With the 'enabling hypothesis', Benight and Bandura (2004, Behav. Res. Ther., 42, 1129) claimed that social support indirectly affects behaviour via self-efficacy. Research in the domain of physical exercise has provided evidence for this enabling hypothesis on a

  9. Assess the Critical Period Hypothesis in Second Language Acquisition

    Science.gov (United States)

    Du, Lihong

    2010-01-01

    The Critical Period Hypothesis aims to investigate the reason for significant difference between first language acquisition and second language acquisition. Over the past few decades, researchers carried out a series of studies to test the validity of the hypothesis. Although there were certain limitations in these studies, most of their results…

  10. Testing the Cuckoldry Risk Hypothesis of Partner Sexual Coercion in Community and Forensic Samples

    Directory of Open Access Journals (Sweden)

    Joseph A. Camilleri

    2009-04-01

    Evolutionary theory has informed the investigation of male sexual coercion but has seldom been applied to the analysis of sexual coercion within established couples. The cuckoldry risk hypothesis, that sexual coercion is a male tactic used to reduce the risk of extrapair paternity, was tested in two studies. In a community sample, indirect cues of infidelity predicted male propensity for sexual coaxing in the relationship, and direct cues predicted propensity for sexual coercion. In the forensic sample, we found that most partner rapists experienced cuckoldry risk prior to committing their offence and experienced more types of cuckoldry risk events than non-sexual partner assaulters. These findings suggest that cuckoldry risk influences male sexual coercion in established sexual relationships.

  11. [Working memory, phonological awareness and spelling hypothesis].

    Science.gov (United States)

    Gindri, Gigiane; Keske-Soares, Márcia; Mota, Helena Bolli

    2007-01-01

    To verify the relationship between working memory, phonological awareness and spelling hypothesis in pre-school children and first graders. Participants were 90 students from state schools who presented typical linguistic development. Forty students were preschoolers, with an average age of six, and 50 students were first graders, with an average age of seven. Participants were submitted to an evaluation of working memory abilities based on the Working Memory Model (Baddeley, 2000), involving the phonological loop. The phonological loop was evaluated using the Auditory Sequential Test, subtest 5 of the Illinois Test of Psycholinguistic Abilities (ITPA), Brazilian version (Bogossian & Santos, 1977), and the Meaningless Words Memory Test (Kessler, 1997). Phonological awareness abilities were investigated using the Phonological Awareness: Instrument of Sequential Assessment (CONFIAS - Moojen et al., 2003), involving syllabic and phonemic awareness tasks. Writing was characterized according to Ferreiro & Teberosky (1999). Preschoolers were able to repeat sequences of 4.80 digits and 4.30 syllables on average. Regarding phonological awareness, their performance was 19.68 at the syllabic level and 8.58 at the phonemic level. Most of the preschoolers demonstrated a pre-syllabic writing hypothesis. First graders repeated, on average, sequences of 5.06 digits and 4.56 syllables. These children presented phonological awareness scores of 31.12 at the syllabic level and 16.18 at the phonemic level, and demonstrated an alphabetic writing hypothesis. Working memory performance, phonological awareness and spelling level are inter-related, as well as being related to chronological age, development and schooling.

  12. Motivation in vigilance - A test of the goal-setting hypothesis of the effectiveness of knowledge of results.

    Science.gov (United States)

    Warm, J. S.; Riechmann, S. W.; Grasha, A. F.; Seibel, B.

    1973-01-01

    This study tested the prediction, derived from the goal-setting hypothesis, that the facilitating effects of knowledge of results (KR) in a simple vigilance task should be related directly to the level of the performance standard used to regulate KR. Two groups of Ss received dichotomous KR in terms of whether Ss' response times (RTs) to signal detections exceeded a high or low standard of performance. The aperiodic offset of a visual signal was the critical event for detection. The vigil was divided into a training phase followed by testing, during which KR was withdrawn. Knowledge of results enhanced performance in both phases. However, the two standards used to regulate feedback contributed little to these effects.

  13. The Influence of Maternal Acculturation, Neighborhood Disadvantage, and Parenting on Chinese American Adolescents' Conduct Problems: Testing the Segmented Assimilation Hypothesis

    Science.gov (United States)

    Liu, Lisa L.; Lau, Anna S.; Chen, Angela Chia-Chen; Dinh, Khanh T.; Kim, Su Yeong

    2009-01-01

    Associations among neighborhood disadvantage, maternal acculturation, parenting and conduct problems were investigated in a sample of 444 Chinese American adolescents. Adolescents (54% female, 46% male) ranged from 12 to 15 years of age (mean age = 13.0 years). Multilevel modeling was employed to test the hypothesis that the association between…

  14. Standard test method for creep-fatigue testing

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method covers the determination of mechanical properties pertaining to creep-fatigue deformation or crack formation in nominally homogeneous materials, or both by the use of test specimens subjected to uniaxial forces under isothermal conditions. It concerns fatigue testing at strain rates or with cycles involving sufficiently long hold times to be responsible for the cyclic deformation response and cycles to crack formation to be affected by creep (and oxidation). It is intended as a test method for fatigue testing performed in support of such activities as materials research and development, mechanical design, process and quality control, product performance, and failure analysis. The cyclic conditions responsible for creep-fatigue deformation and cracking vary with material and with temperature for a given material. 1.2 The use of this test method is limited to specimens and does not cover testing of full-scale components, structures, or consumer products. 1.3 This test method is primarily ...

  15. Using Saccharomyces cerevisiae to Test the Mutagenicity of Household Compounds: An Open Ended Hypothesis-Driven Teaching Lab

    OpenAIRE

    Marshall, Pamela A.

    2007-01-01

    In our Fundamentals of Genetics lab, students perform a wide variety of labs to reinforce and extend the topics covered in lecture. I developed an active-learning lab to augment the lecture topic of mutagenesis. In this lab exercise, students determine if a compound they bring from home is a mutagen. Students are required to read extensive background material, perform research to find a potential mutagen to test, develop a hypothesis, and bring to the lab their own suspected mutagen. This lab...

  16. Testing the carotenoid trade-off hypothesis in the polychromatic Midas cichlid, Amphilophus citrinellus.

    Science.gov (United States)

    Lin, Susan M; Nieves-Puigdoller, Katherine; Brown, Alexandria C; McGraw, Kevin J; Clotfelter, Ethan D

    2010-01-01

    Many animals use carotenoid pigments derived from their diet for coloration and immunity. The carotenoid trade-off hypothesis predicts that, under conditions of carotenoid scarcity, individuals may be forced to allocate limited carotenoids to either coloration or immunity. In polychromatic species, the pattern of allocation may differ among individuals. We tested the carotenoid trade-off hypothesis in the Midas cichlid, Amphilophus citrinellus, a species with two ontogenetic color morphs, barred and gold, the latter of which is the result of carotenoid expression. We performed a diet-supplementation experiment in which cichlids of both color morphs were assigned to one of two diet treatments that differed only in carotenoid content (beta-carotene, lutein, and zeaxanthin). We measured integument color using spectrometry, quantified carotenoid concentrations in tissue and plasma, and assessed innate immunity using lysozyme activity and alternative complement pathway assays. In both color morphs, dietary carotenoid supplementation elevated plasma carotenoid circulation but failed to affect skin coloration. Consistent with observable differences in integument coloration, we found that gold fish sequestered more carotenoids in skin tissue than barred fish, but barred fish had higher concentrations of carotenoids in plasma than gold fish. Neither measure of innate immunity differed between gold and barred fish, or as a function of dietary carotenoid supplementation. Lysozyme activity, but not complement activity, was strongly affected by body condition. Our data show that a diet low in carotenoids is sufficient to maintain both coloration and innate immunity in Midas cichlids. Our data also suggest that the developmental transition from the barred to gold morph is not accompanied by a decrease in innate immunity in this species.

  17. What Constitutes Science and Scientific Evidence: Roles of Null Hypothesis Testing

    Science.gov (United States)

    Chang, Mark

    2017-01-01

    We briefly discuss the philosophical basis of science, causality, and scientific evidence, by introducing the hidden but most fundamental principle of science: the similarity principle. The principle's use in scientific discovery is illustrated with Simpson's paradox and other examples. In discussing the value of null hypothesis statistical…

  18. Testing Significance Testing

    Directory of Open Access Journals (Sweden)

    Joachim I. Krueger

    2018-04-01

    The practice of Significance Testing (ST) remains widespread in psychological science despite continual criticism of its flaws and abuses. Using simulation experiments, we address four concerns about ST and for two of these we compare ST’s performance with prominent alternatives. We find the following: First, the 'p' values delivered by ST predict the posterior probability of the tested hypothesis well under many research conditions. Second, low 'p' values support inductive inferences because they are most likely to occur when the tested hypothesis is false. Third, 'p' values track likelihood ratios without raising the uncertainties of relative inference. Fourth, 'p' values predict the replicability of research findings better than confidence intervals do. Given these results, we conclude that 'p' values may be used judiciously as a heuristic tool for inductive inference. Yet, 'p' values cannot bear the full burden of inference. We encourage researchers to be flexible in their selection and use of statistical methods.
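The second finding above, that low p-values occur mostly when the tested hypothesis is false, can be illustrated with a minimal simulation. This sketch is not the article's code; the sample size, effect size, and seed are arbitrary illustrative choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def sim_pvalues(delta, n=30, reps=2000):
    """p-values from two-sample t-tests when the true mean difference is delta."""
    ps = np.empty(reps)
    for i in range(reps):
        x = rng.normal(0.0, 1.0, n)
        y = rng.normal(delta, 1.0, n)
        ps[i] = stats.ttest_ind(x, y).pvalue
    return ps

p_null = sim_pvalues(0.0)   # tested hypothesis true
p_alt = sim_pvalues(0.5)    # tested hypothesis false (effect d = 0.5)

# Low p-values are far more frequent when the hypothesis is false,
# which is what licenses their use as inductive evidence.
print((p_null < 0.05).mean())   # close to 0.05 by construction
print((p_alt < 0.05).mean())    # much higher: the test's power
```

The ratio of the two printed frequencies is the factor by which p < .05 favors the false-null scenario in this setup.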

  19. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has small sample size limitations. We used a pooled method in the nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling to the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means than the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability under all conditions except the Cauchy and extremely variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than the other alternatives, and the nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling for comparing paired or unpaired means and for validating one-way analysis of variance results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
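The pooled-resampling idea described here can be sketched as follows: both bootstrap samples are drawn from the combined data, which embodies the null hypothesis of a common population. The implementation details below (Welch statistic, replication count, example data) are mine, not the authors':

```python
import numpy as np

rng = np.random.default_rng(1)

def welch_t(x, y):
    """Welch's t statistic for two independent samples."""
    return (x.mean() - y.mean()) / np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))

def pooled_bootstrap_test(x, y, n_boot=5000):
    """Two-sided bootstrap test of equal means with pooled resampling:
    each bootstrap replicate draws both samples from the pooled data,
    so the bootstrap distribution of the statistic reflects the null."""
    t_obs = welch_t(x, y)
    pooled = np.concatenate([x, y])
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(pooled, size=len(x), replace=True)
        yb = rng.choice(pooled, size=len(y), replace=True)
        t_boot[b] = welch_t(xb, yb)
    p_value = (np.abs(t_boot) >= abs(t_obs)).mean()
    return t_obs, p_value

# Small skewed samples of the kind the paper targets
x = rng.lognormal(0.0, 0.5, 8)
y = rng.lognormal(0.8, 0.5, 8)
t_obs, p = pooled_bootstrap_test(x, y)
```

Because the pool contains all n1 + n2 observations, each replicate's statistic is computed from more raw material than a within-group bootstrap would allow, which is the small-sample advantage the abstract refers to.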

  20. A novel hypothesis on the sensitivity of the fecal occult blood test: Results of a joint analysis of 3 randomized controlled trials.

    Science.gov (United States)

    Lansdorp-Vogelaar, Iris; van Ballegooijen, Marjolein; Boer, Rob; Zauber, Ann; Habbema, J Dik F

    2009-06-01

    Estimates of the fecal occult blood test (FOBT) (Hemoccult II) sensitivity differed widely between screening trials and led to divergent conclusions on the effects of FOBT screening. We used microsimulation modeling to estimate a preclinical colorectal cancer (CRC) duration and sensitivity for unrehydrated FOBT from the data of 3 randomized controlled trials of Minnesota, Nottingham, and Funen. In addition to 2 usual hypotheses on the sensitivity of FOBT, we tested a novel hypothesis where sensitivity is linked to the stage of clinical diagnosis in the situation without screening. We used the MISCAN-Colon microsimulation model to estimate sensitivity and duration, accounting for differences between the trials in demography, background incidence, and trial design. We tested 3 hypotheses for FOBT sensitivity: sensitivity is the same for all preclinical CRC stages, sensitivity increases with each stage, and sensitivity is higher for the stage in which the cancer would have been diagnosed in the absence of screening than for earlier stages. Goodness-of-fit was evaluated by comparing expected and observed rates of screen-detected and interval CRC. The hypothesis with a higher sensitivity in the stage of clinical diagnosis gave the best fit. Under this hypothesis, sensitivity of FOBT was 51% in the stage of clinical diagnosis and 19% in earlier stages. The average duration of preclinical CRC was estimated at 6.7 years. Our analysis corroborated a long duration of preclinical CRC, with FOBT most sensitive in the stage of clinical diagnosis. (c) 2009 American Cancer Society.

  1. Biorhythms, deciduous enamel thickness, and primary bone growth: a test of the Havers-Halberg Oscillation hypothesis.

    Science.gov (United States)

    Mahoney, Patrick; Miszkiewicz, Justyna J; Pitfield, Rosie; Schlecht, Stephen H; Deter, Chris; Guatelli-Steinberg, Debbie

    2016-06-01

    Across mammalian species, the periodicity with which enamel layers form (Retzius periodicity) in permanent teeth corresponds with average body mass and the pace of life history. According to the Havers-Halberg Oscillation hypothesis (HHO), Retzius periodicity (RP) is a manifestation of a biorhythm that is also expressed in lamellar bone. Potentially, these links provide a basis for investigating aspects of a species' biology from fossilized teeth. Here, we tested intra-specific predictions of this hypothesis on skeletal samples of human juveniles. We measured daily enamel growth increments to calculate RP in deciduous molars (n = 25). Correlations were sought between RP, molar average and relative enamel thickness (AET, RET), and the average amount of primary bone growth (n = 7) in humeri of age-matched juveniles. Results show a previously undescribed relationship between RP and enamel thickness. Reduced major axis regression reveals RP is significantly and positively correlated with AET and RET, and scales isometrically. The direction of the correlation was opposite to HHO predictions as currently understood for human adults. Juveniles with higher RPs and thicker enamel had increased primary bone formation, which suggests a coordinating biorhythm. However, the direction of the correspondence was, again, opposite to predictions. Next, we compared RP from deciduous molars with new data for permanent molars, and with previously published values. The lowermost RP of 4 and 5 days in deciduous enamel extends below the lowermost RP of 6 days in permanent enamel. A lowered range of RP values in deciduous enamel implies that the underlying biorhythm might change with age. Our results develop the intra-specific HHO hypothesis. © 2016 Anatomical Society.
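Reduced major axis regression, used above to relate RP to enamel thickness, has a simple closed form: the slope is the ratio of standard deviations, signed by the correlation, and isometry corresponds to a slope of 1 on log-log axes. This sketch is illustrative only; the numeric values are hypothetical, not the study's data:

```python
import numpy as np

def rma(x, y):
    """Reduced (standardized) major axis regression:
    slope = sign(r) * sd(y) / sd(x), intercept through the means."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept, r

# Hypothetical log-transformed RP (days) and AET (mm) values:
# a slope near 1 on log-log axes indicates isometric scaling.
log_rp = np.log([4.0, 5.0, 6.0, 7.0, 8.0])
log_aet = np.log([0.60, 0.76, 0.88, 1.05, 1.22])
slope, intercept, r = rma(log_rp, log_aet)
```

Unlike ordinary least squares, RMA treats both variables as measured with error, which is why it is preferred for scaling analyses like this one.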

  2. Perceived message sensation value and psychological reactance: a test of the dominant thought disruption hypothesis.

    Science.gov (United States)

    Quick, Brian L

    2013-01-01

    The present study tests to see whether perceived message sensation value reduces psychological reactance within the context of anti-marijuana ads for television. After controlling for sensation seeking, biological sex, and marijuana use, the results indicate that message novelty is negatively associated with a freedom threat, whereas dramatic impact and emotional arousal were not associated with the antecedent to reactance. Results support the use of novel messages in future ads while at the same time offer an explanation to the challenges involved in creating effective anti-marijuana ads. Overall, the results provide partial support for the dominant thought disruption hypothesis and are discussed with an emphasis on the theoretical and practical implications for health communication researchers and practitioners.

  3. Testing the relativistic Doppler boost hypothesis for supermassive black hole binary candidates

    Science.gov (United States)

    Charisi, Maria; Haiman, Zoltán; Schiminovich, David; D'Orazio, Daniel J.

    2018-06-01

    Supermassive black hole binaries (SMBHBs) should be common in galactic nuclei as a result of frequent galaxy mergers. Recently, a large sample of sub-parsec SMBHB candidates was identified as bright periodically variable quasars in optical surveys. If the observed periodicity corresponds to the redshifted binary orbital period, the inferred orbital velocities are relativistic (v/c ≈ 0.1). The optical and ultraviolet (UV) luminosities are expected to arise from gas bound to the individual BHs, and would be modulated by the relativistic Doppler effect. The optical and UV light curves should vary in tandem with relative amplitudes which depend on the respective spectral slopes. We constructed a control sample of 42 quasars with aperiodic variability, to test whether this Doppler colour signature can be distinguished from intrinsic chromatic variability. We found that the Doppler signature can arise by chance in ∼20 per cent (∼37 per cent) of quasars in the nUV (fUV) band. These probabilities reflect the limited quality of the control sample and represent upper limits on how frequently quasars mimic the Doppler brightness+colour variations. We performed separate tests on the periodic quasar candidates, and found that for the majority, the Doppler boost hypothesis requires an unusually steep UV spectrum or an unexpectedly large BH mass and orbital velocity. We conclude that at most approximately one-third of these periodic candidates can harbor Doppler-modulated SMBHBs.

  4. Exploration of method determining hydrogeologic parameters of low permeability sandstone uranium deposits

    International Nuclear Information System (INIS)

    Ji Hongbin; Wu Liwu; Cao Zhen

    2012-01-01

    A hypothesis of regarding the injection test as an 'anti-pumping' test is presented, and the pumping test 'match line method' (type-curve matching) is used to process the injection test data. Accurate hydrogeologic parameters can be obtained from injection tests in sandstone uranium deposits with low permeability and small pumping volume. Taking an injection test in a uranium deposit of Xinjiang as an example, the hydrogeologic parameters of the main ore-bearing aquifer were calculated using the 'anti-pumping' hypothesis. Results calculated under the 'anti-pumping' hypothesis were compared with results calculated by the water level recovery method. The results show that it is feasible to use the 'anti-pumping' hypothesis to calculate the hydrogeologic parameters of the main ore-bearing aquifer. (authors)
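The 'anti-pumping' idea, that an injection test follows the same Theis type curve as a pumping test with the sign of the discharge reversed, can be sketched as a curve fit. The discharge, radius, and aquifer parameters below are illustrative values, not those of the Xinjiang test:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import exp1

def theis_rise(t, T, S, Q=5e-4, r=10.0):
    """Head change (m) at radius r (m) after time t (s) of injection at
    rate Q (m^3/s) into a confined aquifer: the Theis drawdown solution
    W(u) = E1(u) with the sign of Q reversed, i.e. a rise, not a drawdown."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Synthetic injection-test record generated from assumed T and S,
# then recovered by least-squares "type-curve matching".
t = np.logspace(2, 5, 20)                    # 100 s to ~28 h
head_rise = theis_rise(t, 2e-4, 1e-4)        # "observed" data, noise-free
(T_fit, S_fit), _ = curve_fit(theis_rise, t, head_rise,
                              p0=(1e-4, 1e-5),
                              bounds=([1e-8, 1e-8], [1.0, 1.0]))
```

The graphical match line method of classical pumping-test analysis does the same job by overlaying log-log plots; the least-squares fit above is its numerical equivalent.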

  5. TESTING THE HYPOTHESIS THAT METHANOL MASER RINGS TRACE CIRCUMSTELLAR DISKS: HIGH-RESOLUTION NEAR-INFRARED AND MID-INFRARED IMAGING

    International Nuclear Information System (INIS)

    De Buizer, James M.; Bartkiewicz, Anna; Szymczak, Marian

    2012-01-01

    Milliarcsecond very long baseline interferometry maps of regions containing 6.7 GHz methanol maser emission have led to the recent discovery of ring-like distributions of maser spots and the plausible hypothesis that they may be tracing circumstellar disks around forming high-mass stars. We aimed to test this hypothesis by imaging these regions in the near- and mid-infrared at high spatial resolution and compare the observed emission to the expected infrared morphologies as inferred from the geometries of the maser rings. In the near-infrared we used the Gemini North adaptive optics system of ALTAIR/NIRI, while in the mid-infrared we used the combination of the Gemini South instrument T-ReCS and super-resolution techniques. Resultant images had a resolution of ∼150 mas in both the near-infrared and mid-infrared. We discuss the expected distribution of circumstellar material around young and massive accreting (proto)stars and what infrared emission geometries would be expected for the different maser ring orientations under the assumption that the masers are coming from within circumstellar disks. Based upon the observed infrared emission geometries for the four targets in our sample and the results of spectral energy distribution modeling of the massive young stellar objects associated with the maser rings, we do not find compelling evidence in support of the hypothesis that methanol maser rings reside in circumstellar disks.

  6. Testing the lexical hypothesis: are socially important traits more densely reflected in the English lexicon?

    Science.gov (United States)

    Wood, Dustin

    2015-02-01

    Using a set of 498 English words identified by Saucier (1997) as common person-descriptor adjectives or trait terms, I tested 3 instantiations of the lexical hypothesis, which posit that more socially important person descriptors show greater density in the lexicon. Specifically, I explored whether trait terms that have greater relational impact (i.e., more greatly influence how others respond to a person) have more synonyms, are more frequently used, and are more strongly correlated with other trait terms. I found little evidence to suggest that trait terms rated as having greater relational impact were more frequently used or had more synonyms. However, these terms correlated more strongly with other trait terms in the set. Conversely, a trait term's loadings on structural factors (e.g., the Big Five, HEXACO) were extremely good predictors of the term's relational impact. The findings suggest that the lexical hypothesis may not be strongly supported in some ways it is commonly understood but is supported in the manner most important to investigations of trait structure. Specifically, trait terms with greater relational impact tend to more strongly correlate with other terms in lexical sets and thus have a greater role in driving the location of factors in analyses of trait structure. Implications for understanding the meaning of lexical factors such as the Big Five are discussed. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  7. A test of the permanent income hypothesis on Czech voucher privatization

    Czech Academy of Sciences Publication Activity Database

    Hanousek, Jan; Tůma, Z.

    2002-01-01

    Roč. 10, č. 2 (2002), s. 235-254 ISSN 0967-0750 Institutional research plan: CEZ:AV0Z7085904 Keywords: Barro-Riccardian * equivalence * permanent income hypothesis Subject RIV: AH - Economics Impact factor: 0.897, year: 2002 http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=6844845&site=ehost-live

  8. Wagner’s law versus Keynesian hypothesis: Evidence from pre-WWII Greece

    Directory of Open Access Journals (Sweden)

    Antonis Antoniou

    2013-01-01

    With data spanning over a century, 1833-1938, this paper attempts, for the first time, to analyze the causal relationship between income and government spending in the Greek economy over such a long period; that is, to gain some insight into the Wagner and Keynesian hypotheses. The time period of the analysis represents a period of growth, industrialization and modernization of the economy, conditions which are conducive to Wagner’s Law but also to the Keynesian Hypothesis. The empirical analysis resorts to the Autoregressive Distributed Lag (ARDL) cointegration method and tests for the presence of possible structural breaks. The results reveal a positive and statistically significant long-run causal effect running from economic performance towards public-sector size, giving support to Wagner’s Law in Greece, whereas for the Keynesian hypothesis some doubts arise for specific sub-periods.

  9. Testing of January Anomaly at ISE-100 Index with Power Ratio Method

    Directory of Open Access Journals (Sweden)

    Şule Yüksel Yiğiter

    2015-12-01

    According to the Efficient Market Hypothesis, no investor, all of whom can access the same information, can consistently earn higher returns. However, several studies have set forth an effect of calendar time on returns and reached conclusions that conflict with the hypothesis. In this context, one of the most important such anomalies is the January anomaly. In this study, it has been researched whether there is a January effect in the BIST-100 index over the 2008-2014 period by using the power ratio method. The analysis results confirm the presence of a January anomaly in the BIST-100 index within the specified period. Keywords: Efficient Markets Hypothesis, January Month Anomaly, Power Ratio Method. JEL Classification Codes: G1, C22

  10. The Efficient Market Hypothesis: Is It Applicable to the Foreign Exchange Market?

    OpenAIRE

    Nguyen, James

    2004-01-01

    The study analyses the applicability of the efficient market hypothesis to the foreign exchange market by testing the profitability of the filter rule on the spot market. The significance of the returns was validated by comparing them to the returns from randomly generated shuffled series via bootstrap methods. The results were surprising. For the total period (1984-2003) small filter rules could deliver significant returns indicating an inefficient foreign exchange market. However, once the ...

  11. Quantitative methods in ethnobotany and ethnopharmacology: considering the overall flora--hypothesis testing for over- and underused plant families with the Bayesian approach.

    Science.gov (United States)

    Weckerle, Caroline S; Cabras, Stefano; Castellanos, Maria Eugenia; Leonti, Marco

    2011-09-01

    We introduce and explain the advantages of the Bayesian approach and exemplify the method with an analysis of the medicinal flora of Campania, Italy. The Bayesian approach is a new method which allows one to compare medicinal floras with the overall flora of a given area and to investigate over- and underused plant families. In contrast to previously used methods (regression analysis and the binomial method), it considers the inherent uncertainty around the analyzed data. The medicinal flora, with 423 species, was compiled based on nine studies on local medicinal plant use in Campania. The total flora comprises 2237 species belonging to 128 families. Statistical analysis was performed with the Bayesian method and the binomial method. An approximate χ²-test was used to analyze the relationship between use categories and higher taxonomic groups. Among the larger plant families we find the Lamiaceae, Rosaceae, and Malvaceae to be overused in the local medicine of Campania and the Orchidaceae, Caryophyllaceae, Poaceae, and Fabaceae to be underused compared to the overall flora. Furthermore, specific medicinal uses tend to be correlated with taxonomic plant groups; for example, the Monocots are heavily used for urological complaints. Testing for over- and underused taxonomic groups of a flora with the Bayesian method is easy to adopt and can readily be calculated in Excel spreadsheets using the inverse beta function (BETA.INV). In contrast to the binomial method, the presented method is also suitable for small datasets. With larger datasets the two methods tend to converge; however, results are generally more conservative with the Bayesian method, which points out fewer families as over- or underused. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
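The inverse-beta step the abstract mentions can be sketched in a few lines: with a uniform prior, the proportion of medicinal species in a family of n species, k of them used, has posterior Beta(k + 1, n - k + 1), and the credible interval is compared with the flora-wide proportion. The counts in the example calls are hypothetical, not the paper's data:

```python
from scipy.stats import beta

# Flora-wide proportion of medicinal species in Campania (from the abstract)
P0 = 423 / 2237

def family_status(k, n, level=0.95, p0=P0):
    """Classify a plant family with n species, k of them medicinal.
    The family counts as overused when the whole credible interval for
    its medicinal proportion lies above p0, underused when it lies
    entirely below, and neither otherwise."""
    tail = (1.0 - level) / 2.0
    lo = beta.ppf(tail, k + 1, n - k + 1)        # the Excel BETA.INV step
    hi = beta.ppf(1.0 - tail, k + 1, n - k + 1)
    if lo > p0:
        return "overused"
    if hi < p0:
        return "underused"
    return "neither"

# Hypothetical counts, for illustration only
print(family_status(30, 60))   # prints "overused"
print(family_status(2, 80))    # prints "underused"
```

Because the interval width shrinks with n, small families rarely clear the threshold, which is the conservatism toward small datasets noted in the abstract.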

  12. TESTING THE EFFICIENT MARKET HYPOTHESIS ON THE ROMANIAN CAPITAL MARKET

    OpenAIRE

    Daniel Stefan ARMEANU; Sorin-Iulian CIOACA

    2014-01-01

    The Efficient Market Hypothesis (EMH) is one of the leading financial concepts that dominated economic research over the last 50 years, being one of the pillars of modern economic science. This theory, developed by Eugene Fama in the '70s, was a landmark in the development of theoretical concepts and models trying to explain the price evolution of financial assets (considering the common assumptions of the main developed theories) and also for the development of some branches in the f...

  13. Sexual selection on land snail shell ornamentation: a hypothesis that may explain shell diversity

    Directory of Open Access Journals (Sweden)

    Schilthuizen Menno

    2003-06-01

    Background: Many groups of land snails show great interspecific diversity in shell ornamentation, which may include spines on the shell and flanges on the aperture. Such structures have been explained as camouflage or defence, but the possibility that they might be under sexual selection has not previously been explored. Presentation of the hypothesis: The hypothesis that is presented consists of two parts. First, that shell ornamentation is the result of sexual selection. Second, that such sexual selection has caused the divergence in shell shape in different species. Testing the hypothesis: The first part of the hypothesis may be tested by searching for sexual dimorphism in shell ornamentation in gonochoristic snails, by searching for increased variance in shell ornamentation relative to other shell traits, and by mate choice experiments using individuals with experimentally enhanced ornamentation. The second part of the hypothesis may be tested by comparing sister groups and correlating shell diversity with degree of polygamy. Implications of the hypothesis: If the hypothesis were true, it would provide an explanation for the many cases of allopatric evolutionary radiation in snails, where shell diversity cannot be related to any niche differentiation or environmental differences.

  14. Simulation-based hypothesis testing of high dimensional means under covariance heterogeneity.

    Science.gov (United States)

    Chang, Jinyuan; Zheng, Chao; Zhou, Wen-Xin; Zhou, Wen

    2017-12-01

    In this article, we study the problem of testing the mean vectors of high dimensional data in both one-sample and two-sample cases. The proposed testing procedures employ maximum-type statistics and parametric bootstrap techniques to compute the critical values. Different from the existing tests that heavily rely on structural conditions on the unknown covariance matrices, the proposed tests allow general covariance structures of the data and therefore enjoy a wide scope of applicability in practice. To enhance the power of the tests against sparse alternatives, we further propose two-step procedures with a preliminary feature screening step. Theoretical properties of the proposed tests are investigated. Through extensive numerical experiments on synthetic data sets and a human acute lymphoblastic leukemia gene expression data set, we illustrate the performance of the new tests and how they may provide assistance on detecting disease-associated gene-sets. The proposed methods have been implemented in the R package HDtest and are available on CRAN. © 2017, The International Biometric Society.
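The maximum-type statistic with a bootstrap critical value can be sketched for the one-sample case. This is not the HDtest implementation; a Gaussian multiplier bootstrap stands in here for the paper's parametric bootstrap, and the dimensions and shift are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def max_type_test(X, n_boot=500, alpha=0.05):
    """One-sample test of H0: E[X] = 0 for n x p data with p >> n.
    The statistic is the largest standardized coordinate mean; its
    critical value comes from bootstrap draws that reuse the centered
    data, so no structure is assumed on the covariance matrix."""
    n, p = X.shape
    sd = X.std(axis=0, ddof=1)
    t_obs = np.max(np.abs(np.sqrt(n) * X.mean(axis=0) / sd))
    Xc = (X - X.mean(axis=0)) / sd          # centered, standardized columns
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        g = rng.standard_normal(n)          # Gaussian multiplier weights
        t_boot[b] = np.max(np.abs(g @ Xc)) / np.sqrt(n)
    crit = np.quantile(t_boot, 1 - alpha)
    return t_obs, crit, t_obs > crit

# A sparse alternative: 200 coordinates, only the first has a nonzero mean
X = rng.standard_normal((50, 200))
X[:, 0] += 1.5
t_obs, crit, reject = max_type_test(X)
```

A sum-of-squares statistic would dilute a single shifted coordinate across all 200 dimensions; taking the maximum is what gives power against such sparse alternatives.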

  15. Experimental test of the PCAC-hypothesis in charged current neutrino and antineutrino interactions on protons

    Science.gov (United States)

    Jones, G. T.; Jones, R. W. L.; Kennedy, B. W.; O'Neale, S. W.; Klein, H.; Morrison, D. R. O.; Schmid, P.; Wachsmuth, H.; Miller, D. B.; Mobayyen, M. M.; Wainstein, S.; Aderholz, M.; Hoffmann, E.; Katz, U. F.; Kern, J.; Schmitz, N.; Wittek, W.; Allport, P.; Myatt, G.; Radojicic, D.; Bullock, F. W.; Burke, S.

    1987-03-01

    Data obtained with the bubble chamber BEBC at CERN are used for the first significant test of Adler's prediction for the neutrino and antineutrino-proton scattering cross sections at vanishing four-momentum transfer squared Q². An Extended Vector Meson Dominance Model (EVDM) is applied to extrapolate Adler's prediction to experimentally accessible values of Q². The data show good agreement with Adler's prediction for Q²→0 thus confirming the PCAC hypothesis in the kinematical region of high leptonic energy transfer ν>2 GeV. The good agreement of the data with the theoretical predictions also at higher Q², where the EVDM terms are dominant, also supports this model. However, an EVDM calculation without PCAC is clearly ruled out by the data.

  16. Experimental test of the PCAC-hypothesis in charged current neutrino and antineutrino interactions on protons

    International Nuclear Information System (INIS)

    Jones, G.T.; Jones, R.W.L.; Kennedy, B.W.; O'Neale, S.W.; Klein, H.; Morrison, D.R.O.; Schmid, P.; Wachsmuth, H.; Allport, P.; Myatt, G.; Radojicic, D.; Bullock, F.W.; Burke, S.

    1987-01-01

    Data obtained with the bubble chamber BEBC at CERN are used for the first significant test of Adler's prediction for the neutrino and antineutrino-proton scattering cross sections at vanishing four-momentum transfer squared Q². An Extended Vector Meson Dominance Model (EVDM) is applied to extrapolate Adler's prediction to experimentally accessible values of Q². The data show good agreement with Adler's prediction for Q² → 0 thus confirming the PCAC hypothesis in the kinematical region of high leptonic energy transfer ν > 2 GeV. The good agreement of the data with the theoretical predictions also at higher Q², where the EVDM terms are dominant, also supports this model. However, an EVDM calculation without PCAC is clearly ruled out by the data. (orig.)

  17. Helminth community structure and diet of three Afrotropical anuran species: a test of the interactive-versus-isolationist parasite communities hypothesis

    Directory of Open Access Journals (Sweden)

    G. C. Akani

    2011-09-01

    The interactive-versus-isolationist hypothesis predicts that parasite communities should be depauperate and weakly structured by interspecific competition in amphibians. A parasitological survey was carried out to test this hypothesis using three anuran species from Nigeria, tropical Africa (one Bufonidae; two Ranidae). High values of parasite infection parameters were found in all three species, which were infected by nematodes, cestodes and trematodes. Nonetheless, the parasite communities of the three anurans were very depauperate in terms of number of species (4 to 6). Interspecific competition was irrelevant in all species, as revealed by null models and Monte Carlo permutations. Cluster analyses revealed that, in terms of parasite community composition, the two Ranidae were similar, whereas the Bufonidae was more distinct. However, when prevalence, intensity, and abundance of parasites are combined into a multivariate analysis, each anuran species is clearly spaced apart from the others, revealing considerable species-specific differences in their parasite communities. All anurans were generalists and probably opportunistic in their dietary habits, and showed no evidence of interspecific competition for food. Overall, our data are widely consistent with expectations derived from the interactive-versus-isolationist parasite communities hypothesis.

  18. Isotopic method of testing the dynamics of melt flow through a sedimentation tank

    International Nuclear Information System (INIS)

    Bazaniak, Z.; Chamer, R.; Stec, J.; Przybytniak, W.

    1981-01-01

    The isotopic method of a simultaneous measurement of copper matte and slag flow parameters is discussed. For marking, Cu-64 and Zr-95/97, isotopes characterized by different gamma radiation energies, are used. The chemical form of the copper and zirconium compounds was chosen to ensure selective solubility in the tested phases. To interpret the results of the isotopic tests, the Wolf-Resnick model was used. The obtained results have confirmed the hypothesis of a possible occurrence of the copper matte flotation effect. In order to reduce the amount of copper carried away with the shaft slag, a redesign of the sedimentation tank is suggested that would reduce the ideal-mixing fraction and enlarge the zone characterized by plug (piston) flow. (author)

  19. Testing in mice the hypothesis that melanin is protective in malaria infections.

    Directory of Open Access Journals (Sweden)

    Michael Waisberg

    Full Text Available Malaria has had the largest impact of any infectious disease on shaping the human genome, exerting enormous selective pressure on genes that improve survival in severe malaria infections. Modern humans originated in Africa and lost skin melanization as they migrated to temperate regions of the globe. Although it is well documented that loss of melanization improved cutaneous Vitamin D synthesis, melanin plays an evolutionary ancient role in insect immunity to malaria and in some instances melanin has been implicated to play an immunoregulatory role in vertebrates. Thus, we tested the hypothesis that melanization may be protective in malaria infections using mouse models. Congenic C57BL/6 mice that differed only in the gene encoding tyrosinase, a key enzyme in the synthesis of melanin, showed no difference in the clinical course of infection by Plasmodium yoelii 17XL, that causes severe anemia, Plasmodium berghei ANKA, that causes severe cerebral malaria or Plasmodium chabaudi AS that causes uncomplicated chronic disease. Moreover, neither genetic deficiencies in vitamin D synthesis nor vitamin D supplementation had an effect on survival in cerebral malaria. Taken together, these results indicate that neither melanin nor vitamin D production improve survival in severe malaria.

  20. Brain morphology of the threespine stickleback (Gasterosteus aculeatus) varies inconsistently with respect to habitat complexity: A test of the Clever Foraging Hypothesis.

    Science.gov (United States)

    Ahmed, Newaz I; Thompson, Cole; Bolnick, Daniel I; Stuart, Yoel E

    2017-05-01

    The Clever Foraging Hypothesis asserts that organisms living in a more spatially complex environment will have a greater neurological capacity for cognitive processes related to spatial memory, navigation, and foraging. Because the telencephalon is often associated with spatial memory and navigation tasks, this hypothesis predicts a positive association between telencephalon size and environmental complexity. The association between habitat complexity and brain size has been supported by comparative studies across multiple species but has not been widely studied at the within-species level. We tested for covariation between environmental complexity and neuroanatomy of threespine stickleback (Gasterosteus aculeatus) collected from 15 pairs of lakes and their parapatric streams on Vancouver Island. In most pairs, neuroanatomy differed between the adjoining lake and stream populations. However, the magnitude and direction of this difference were inconsistent between watersheds and did not covary strongly with measures of within-site environmental heterogeneity. Overall, we find weak support for the Clever Foraging Hypothesis in our study.

  1. [Training in iterative hypothesis testing as part of psychiatric education. A randomized study].

    Science.gov (United States)

    Lampen-Imkamp, S; Alte, C; Sipos, V; Kordon, A; Hohagen, F; Schweiger, U; Kahl, K G

    2012-01-01

    The improvement of medical education is at the center of efforts to reform the studies of medicine. Furthermore, an excellent teaching program for students is a quality feature of medical universities. Besides teaching of disease-specific contents, the acquisition of interpersonal and decision-making skills is important. However, the cognitive style of senior physicians leading to a diagnosis cannot easily be taught. Therefore, the following study aimed at examining whether specific training in iterative hypothesis testing (IHT) may improve the correctness of the diagnostic process. Seventy-one medical students in their 9th-11th terms were randomized to medical teaching as usual or to IHT training for 4 weeks. The intervention group received specific training according to the method of IHT. All students were examined by a multiple choice (MC) exam and additionally by simulated patients (SP). The SPs were instructed to represent either a patient with depression and comorbid anxiety and substance use disorder (SP1) or to represent a patient with depression, obsessive-compulsive disorder and acute suicidal tendencies (SP2). All students identified the diagnosis of major depression in the SPs, but IHT-trained students recognized more diagnostic criteria. Furthermore, IHT-trained students recognized acute suicide tendencies in SP2 more often and identified more comorbid psychiatric disorders. The results of the MC exam were comparable in both groups. An analysis of the satisfaction with the different training programs revealed that the IHT training received a better appraisal. Our results point to the role of IHT in teaching diagnostic skills. However, the results of the MC exam were not influenced by IHT training. Furthermore, our results show that students are in need of training in practical clinical skills.

  2. The challenge of conceptual stretching in multi-method research

    OpenAIRE

    Ahram, Ariel

    2009-01-01

    Multi-method research (MMR) has gained enthusiastic support among political scientists in recent years. Much of the impetus for MMR has been based on the seemingly intuitive logic of convergent triangulation: two tests are better than one, since a hypothesis that has survived a series of tests with different methods would be regarded as more valid than a hypothesis tested with only a single method. In their seminal Designing Social Inquiry, King, Keohane, and Verba (1994) argue that combining qu...

  3. The Cognitive Mediation Hypothesis Revisited: An Empirical Response to Methodological and Theoretical Criticism.

    Science.gov (United States)

    Romero, Anna A.; And Others

    1996-01-01

    In order to address criticisms raised against the cognitive mediation hypothesis, three experiments were conducted to develop a more direct test of the hypothesis. Taken together, the three experiments provide converging support for the cognitive mediation hypothesis, reconfirming the central role of cognition in the persuasion process.…

  4. Tracing the footsteps of Sherlock Holmes: cognitive representations of hypothesis testing.

    Science.gov (United States)

    Van Wallendael, L R; Hastie, R

    1990-05-01

    A well-documented phenomenon in opinion-revision literature is subjects' failure to revise probability estimates for an exhaustive set of mutually exclusive hypotheses in a complementary manner. However, prior research has not addressed the question of whether such behavior simply represents a misunderstanding of mathematical rules, or whether it is a consequence of a cognitive representation of hypotheses that is at odds with the Bayesian notion of a set relationship. Two alternatives to the Bayesian representation, a belief system (Shafer, 1976) and a system of independent hypotheses, were proposed, and three experiments were conducted to examine cognitive representations of hypothesis sets in the testing of multiple competing hypotheses. Subjects were given brief murder mysteries to solve and allowed to request various types of information about the suspects; after having received each new piece of information, subjects rated each suspect's probability of being the murderer. Presence and timing of suspect eliminations were varied in the first two experiments; the final experiment involved the varying of percentages of clues that referred to more than one suspect (for example, all of the female suspects). The noncomplementarity of opinion revisions remained a strong phenomenon in all conditions. Information-search data refuted the idea that subjects represented hypotheses as a Bayesian set; further study of the independent hypotheses theory and Shaferian belief functions as descriptive models is encouraged.
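    For contrast with the noncomplementary revisions the study reports, here is a minimal sketch of the Bayesian set representation it uses as a baseline (the suspects and likelihoods are invented): for mutually exclusive, exhaustive hypotheses, normalization forces complementary updating, so raising one suspect's probability must lower the others'.

```python
def bayes_update(priors, likelihoods):
    """Update probabilities for mutually exclusive, exhaustive hypotheses.
    The normalization step is what makes revisions complementary:
    the posteriors always sum to 1."""
    posterior = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

# Invented example: three suspects with equal priors; a clue is likely
# under one suspect's guilt and unlikely under another's.
priors = {"Adler": 1 / 3, "Moran": 1 / 3, "Moriarty": 1 / 3}
likelihoods = {"Adler": 0.8, "Moran": 0.1, "Moriarty": 0.4}
posterior = bayes_update(priors, likelihoods)
print(posterior)
print("sum:", sum(posterior.values()))
```

    The subjects in the experiments violated exactly this constraint, which is what motivates the belief-function and independent-hypotheses alternatives.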

  5. Tests of the salt-nuclei hypothesis of rain formation

    Energy Technology Data Exchange (ETDEWEB)

    Woodcock, A H; Blanchard, D C

    1955-01-01

    Atmospheric chlorides in sea-salt nuclei and the chlorides dissolved in shower rainwaters were recently measured in Hawaii. A comparison of these measurements reveals the remarkable fact that the weight of chloride present in a certain number of nuclei in a cubic meter of clear air tends to be equal to the weight of chloride dissolved in an equal number of raindrops in a cubic meter of rainy air. This result is explained as an indication that the raindrops grow on the salt nuclei in some manner which prevents a marked change in the distribution of these nuclei during the drop-growth process. The data presented add new evidence in further support of the salt-nuclei raindrop hypothesis previously proposed by the first author.

  6. Recent tests of the equilibrium-point hypothesis (lambda model).

    Science.gov (United States)

    Feldman, A G; Ostry, D J; Levin, M F; Gribble, P L; Mitnitski, A B

    1998-07-01

    The lambda model of the equilibrium-point hypothesis (Feldman & Levin, 1995) is an approach to motor control which, like physics, is based on a logical system coordinating empirical data. The model has gone through an interesting period. On one hand, several nontrivial predictions of the model have been successfully verified in recent studies. In addition, the explanatory and predictive capacity of the model has been enhanced by its extension to multimuscle and multijoint systems. On the other hand, claims have recently appeared suggesting that the model should be abandoned. The present paper focuses on these claims and concludes that they are unfounded. Much of the experimental data that have been used to reject the model are actually consistent with it.

  7. The Twin Deficits Hypothesis: An Empirical Analysis for Tanzania

    Directory of Open Access Journals (Sweden)

    Manamba Epaphra

    2017-09-01

    Full Text Available This paper examines the relationship between current account and government budget deficits in Tanzania. The paper tests the validity of the twin deficits hypothesis, using annual time series data for the 1966-2015 period. The paper is thought to be significant because the concept of the twin deficit hypothesis is fraught with controversy. Some studies support the hypothesis that there is a positive relationship between current account deficits and fiscal deficits in the economy, while others do not. In this paper, the empirical tests fail to reject the twin deficits hypothesis, indicating that rising budget deficits put more strain on the current account deficit in Tanzania. Specifically, the Vector Error Correction Model results support the conventional theory of a positive relationship between fiscal and external balances, with a relatively high speed of adjustment toward the equilibrium position. This evidence is consistent with a small open economy. To address the problem that may result from this kind of relationship, appropriate policy measures for reducing budget deficits should be adopted, such as reduction of non-development expenditure, enhancement of domestic revenue collection, and active measures against corruption and tax evasion. The government should also target export-oriented firms and encourage import-substitution industry by creating a favorable business environment.

  8. Sex differences in DNA methylation and expression in zebrafish brain: a test of an extended 'male sex drive' hypothesis.

    Science.gov (United States)

    Chatterjee, Aniruddha; Lagisz, Malgorzata; Rodger, Euan J; Zhen, Li; Stockwell, Peter A; Duncan, Elizabeth J; Horsfield, Julia A; Jeyakani, Justin; Mathavan, Sinnakaruppan; Ozaki, Yuichi; Nakagawa, Shinichi

    2016-09-30

    The sex drive hypothesis predicts that stronger selection on male traits has resulted in masculinization of the genome. Here we test whether such masculinizing effects can be detected at the level of the transcriptome and methylome in the adult zebrafish brain. Although methylation is globally similar, we identified 914 specific differentially methylated CpGs (DMCs) between males and females (435 were hypermethylated and 479 were hypomethylated in males compared to females). These DMCs were prevalent in gene bodies, intergenic regions and CpG island shores. We also discovered 15 distinct CpG clusters with striking sex-specific DNA methylation differences. In contrast, at the transcriptome level, more female-biased than male-biased genes were expressed, giving little support for the male sex drive hypothesis. Our study provides a genome-wide methylome and transcriptome assessment and sheds light on sex-specific epigenetic patterns in zebrafish for the first time. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Learning-Related Changes in Adolescents' Neural Networks during Hypothesis-Generating and Hypothesis-Understanding Training

    Science.gov (United States)

    Lee, Jun-Ki; Kwon, Yongju

    2012-01-01

    Fourteen science high school students participated in this study, which investigated neural-network plasticity associated with hypothesis-generating and hypothesis-understanding in learning. The students were divided into two groups and participated in either hypothesis-generating or hypothesis-understanding type learning programs, which were…

  10. Humans have evolved specialized skills of social cognition: the cultural intelligence hypothesis.

    Science.gov (United States)

    Herrmann, Esther; Call, Josep; Hernández-Lloreda, María Victoria; Hare, Brian; Tomasello, Michael

    2007-09-07

    Humans have many cognitive skills not possessed by their nearest primate relatives. The cultural intelligence hypothesis argues that this is mainly due to a species-specific set of social-cognitive skills, emerging early in ontogeny, for participating and exchanging knowledge in cultural groups. We tested this hypothesis by giving a comprehensive battery of cognitive tests to large numbers of two of humans' closest primate relatives, chimpanzees and orangutans, as well as to 2.5-year-old human children before literacy and schooling. Supporting the cultural intelligence hypothesis and contradicting the hypothesis that humans simply have more "general intelligence," we found that the children and chimpanzees had very similar cognitive skills for dealing with the physical world but that the children had more sophisticated cognitive skills than either of the ape species for dealing with the social world.

  11. A General Relativistic Null Hypothesis Test with Event Horizon Telescope Observations of the Black Hole Shadow in Sgr A*

    Science.gov (United States)

    Psaltis, Dimitrios; Özel, Feryal; Chan, Chi-Kwan; Marrone, Daniel P.

    2015-12-01

    The half opening angle of a Kerr black hole shadow is always equal to (5 ± 0.2) GM/Dc², where M is the mass of the black hole and D is its distance from the Earth. Therefore, measuring the size of a shadow and verifying whether it is within this 4% range constitutes a null hypothesis test of general relativity. We show that the black hole in the center of the Milky Way, Sgr A*, is the optimal target for performing this test with upcoming observations using the Event Horizon Telescope (EHT). We use the results of optical/IR monitoring of stellar orbits to show that the mass-to-distance ratio for Sgr A* is already known to an accuracy of ∼4%. We investigate our prior knowledge of the properties of the scattering screen between Sgr A* and the Earth, the effects of which will need to be corrected for in order for the black hole shadow to appear sharp against the background emission. Finally, we explore an edge detection scheme for interferometric data and a pattern matching algorithm based on the Hough/Radon transform and demonstrate that the shadow of the black hole at 1.3 mm can be localized, in principle, to within ∼9%. All these results suggest that our prior knowledge of the properties of the black hole, of scattering broadening, and of the accretion flow can only limit this general relativistic null hypothesis test with EHT observations of Sgr A* to ≲10%.
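    The null test described here reduces to simple arithmetic; a sketch, using approximate round numbers for the Sgr A* mass and distance rather than the paper's fitted values:

```python
import math

# Physical constants (SI)
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg
KPC = 3.086e19         # kiloparsec in metres

def shadow_half_angle_uas(mass_msun, distance_kpc):
    """Predicted Kerr shadow half-opening angle, ~5 GM/(D c^2) radians,
    converted to microarcseconds."""
    theta_rad = 5.0 * G * (mass_msun * M_SUN) / (distance_kpc * KPC * C**2)
    return theta_rad * math.degrees(1) * 3600e6

# Approximate Sgr A* parameters from stellar-orbit monitoring
# (illustrative round numbers: ~4.3 million solar masses at ~8.3 kpc).
theta = shadow_half_angle_uas(4.3e6, 8.3)
lo, hi = 0.96 * theta, 1.04 * theta   # the +/-4% spin/inclination band

def consistent_with_gr(measured_uas):
    """Null test: does a measured half-angle fall in the predicted band?"""
    return lo <= measured_uas <= hi

print(f"predicted half-opening angle: {theta:.1f} uas "
      f"(GR-allowed band {lo:.1f}-{hi:.1f} uas)")
```

    A measured shadow size outside the ±4% band would falsify the Kerr prediction, which is why the mass-to-distance ratio must be known at least that well.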

  12. Brood desertion by female shorebirds : a test of the differential parental capacity hypothesis on Kentish plovers

    NARCIS (Netherlands)

    Amat, JA; Visser, GH; Perez-Hurtado, A; Arroyo, GM

    2000-01-01

    The aim of this study was to examine whether the energetic costs of reproduction explain offspring desertion by female shorebirds, as is suggested by the differential parental capacity hypothesis. A prediction of the hypothesis is that, in species with biparental incubation in which females desert

  13. SETI in vivo: testing the we-are-them hypothesis

    Science.gov (United States)

    Makukov, Maxim A.; Shcherbak, Vladimir I.

    2018-04-01

    After it was proposed that life on Earth might descend from seeding by an earlier extraterrestrial civilization motivated to secure and spread life, some authors noted that this alternative offers a testable implication: microbial seeds could be intentionally supplied with a durable signature that might be found in extant organisms. In particular, it was suggested that the optimal location for such an artefact is the genetic code, as the least evolving part of cells. However, as the mainstream view goes, this scenario is too speculative and cannot be meaningfully tested because encoding/decoding a signature within the genetic code is something ill-defined, so any retrieval attempt is doomed to guesswork. Here we refresh the seeded-Earth hypothesis in light of recent observations, and discuss the motivation for inserting a signature. We then show that `biological SETI' involves even weaker assumptions than traditional SETI and admits a well-defined methodological framework. After assessing the possibility in terms of molecular and evolutionary biology, we formalize the approach and, adopting the standard guideline of SETI that encoding/decoding should follow from first principles and be convention-free, develop a universal retrieval strategy. Applied to the canonical genetic code, it reveals a non-trivial precision structure of interlocked logical and numerical attributes of systematic character (previously we found these heuristically). To assess this result in view of the initial assumption, we perform statistical, comparison, interdependence and semiotic analyses. Statistical analysis reveals no causal connection of the result to evolutionary models of the genetic code, interdependence analysis precludes overinterpretation, and comparison analysis shows that known variations of the code lack any precision-logic structures, in agreement with these variations being post-LUCA (i.e. post-seeding) evolutionary deviations from the canonical code. Finally, semiotic

  14. Multiple Choice Testing and the Retrieval Hypothesis of the Testing Effect

    Science.gov (United States)

    Sensenig, Amanda E.

    2010-01-01

    Taking a test often leads to enhanced later memory for the tested information, a phenomenon known as the "testing effect". This memory advantage has been reliably demonstrated with recall tests but not multiple choice tests. One potential explanation for this finding is that multiple choice tests do not rely on retrieval processes to the same…

  15. The Dopamine Imbalance Hypothesis of Fatigue in Multiple Sclerosis and Other Neurological Disorders.

    Directory of Open Access Journals (Sweden)

    Ekaterina Dobryakova

    2015-03-01

    Full Text Available Fatigue is one of the most pervasive symptoms of multiple sclerosis (MS), and has engendered hundreds of investigations on the topic. While there is a growing literature using various methods to study fatigue, a unified theory of fatigue in MS is yet to emerge. In the current review, we synthesize findings from neuroimaging, pharmacological, neuropsychological and immunological studies of fatigue in MS, which point to a specific hypothesis of fatigue in MS: the dopamine imbalance hypothesis. The communication between the striatum and prefrontal cortex is reliant on dopamine, a modulatory neurotransmitter. Neuroimaging findings suggest that fatigue results from the disruption of communication between these regions. Supporting the dopamine imbalance hypothesis, structural and functional neuroimaging studies show abnormalities in the frontal and striatal regions that are heavily innervated by dopamine neurons. Further, dopaminergic psychostimulant medication has been shown to alleviate fatigue in individuals with traumatic brain injury or chronic fatigue syndrome and in cancer patients, also indicating that dopamine might play an important role in fatigue perception. This paper reviews the structural and functional neuroimaging evidence as well as pharmacological studies suggesting that dopamine plays a critical role in the phenomenon of fatigue. We conclude with how specific aspects of the dopamine imbalance hypothesis can be tested in future research.

  16. A method to implement the reservoir-wave hypothesis using phase-contrast magnetic resonance imaging

    OpenAIRE

    Gray, Robert D.M.; Parker, Kim H.; Quail, Michael A.; Taylor, Andrew M.; Biglino, Giovanni

    2016-01-01

    The reservoir-wave hypothesis states that the blood pressure waveform can be usefully divided into a “reservoir pressure” related to the global compliance and resistance of the arterial system, and an “excess pressure” that depends on local conditions. The formulation of the reservoir-wave hypothesis applied to the area waveform is shown, and the analysis is applied to area and velocity data from high-resolution phase-contrast cardiovascular magnetic resonance (CMR) imaging. A validation stud...

  17. Social learning and evolution: the cultural intelligence hypothesis

    Science.gov (United States)

    van Schaik, Carel P.; Burkart, Judith M.

    2011-01-01

    If social learning is more efficient than independent individual exploration, animals should learn vital cultural skills exclusively, and routine skills faster, through social learning, provided they actually use social learning preferentially. Animals with opportunities for social learning indeed do so. Moreover, more frequent opportunities for social learning should boost an individual's repertoire of learned skills. This prediction is confirmed by comparisons among wild great ape populations and by social deprivation and enculturation experiments. These findings shaped the cultural intelligence hypothesis, which complements the traditional benefit hypotheses for the evolution of intelligence by specifying the conditions in which these benefits can be reaped. The evolutionary version of the hypothesis argues that species with frequent opportunities for social learning should more readily respond to selection for a greater number of learned skills. Because improved social learning also improves asocial learning, the hypothesis predicts a positive interspecific correlation between social-learning performance and individual learning ability. Variation among primates supports this prediction. The hypothesis also predicts that more heavily cultural species should be more intelligent. Preliminary tests involving birds and mammals support this prediction too. The cultural intelligence hypothesis can also account for the unusual cognitive abilities of humans, as well as our unique mechanisms of skill transfer. PMID:21357223

  19. An Efficient Implementation of Track-Oriented Multiple Hypothesis Tracker Using Graphical Model Approaches

    Directory of Open Access Journals (Sweden)

    Jinping Sun

    2017-01-01

    Full Text Available The multiple hypothesis tracker (MHT) is currently the preferred method for addressing the data association problem in multitarget tracking (MTT) applications. MHT seeks the most likely global hypothesis by enumerating all possible associations over time, which is equal to calculating the maximum a posteriori (MAP) estimate over the report data. Despite being a well-studied method, MHT remains challenging, mostly because of the computational complexity of data association. In this paper, we describe an efficient method for solving the data association problem using graphical model approaches. The proposed method uses a graph representation to model global hypothesis formation and subsequently applies an efficient message passing algorithm to obtain the MAP solution. Specifically, the graph representation of the data association problem is formulated as a maximum weight independent set problem (MWISP), which translates the best global hypothesis formation into finding the maximum weight independent set on the graph. Then, a max-product belief propagation (MPBP) inference algorithm is applied to seek the most likely global hypotheses while avoiding a brute-force hypothesis enumeration procedure. The simulation results show that the proposed MPBP-MHT method can achieve better tracking performance than other algorithms in challenging tracking situations.
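    The MWISP formulation can be made concrete with a tiny brute-force sketch (the paper's point is that max-product belief propagation avoids exactly this enumeration; the track hypotheses, weights, and conflicts below are invented):

```python
from itertools import combinations

# Hypothetical track hypotheses and their weights (e.g. log-likelihood scores).
tracks = {"T1": 4.0, "T2": 3.0, "T3": 5.0, "T4": 2.0}

# Edges join hypotheses that share a measurement and so are incompatible.
conflicts = {("T1", "T2"), ("T2", "T3"), ("T3", "T4")}

def independent(subset):
    """True if no two hypotheses in the subset conflict."""
    return all(tuple(sorted(pair)) not in conflicts
               for pair in combinations(subset, 2))

def best_global_hypothesis():
    """Enumerate all subsets; the maximum-weight independent set is the
    best compatible collection of track hypotheses."""
    best, best_w = (), float("-inf")
    names = list(tracks)
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            if independent(subset):
                w = sum(tracks[t] for t in subset)
                if w > best_w:
                    best, best_w = subset, w
    return set(best), best_w

best, best_w = best_global_hypothesis()
print(best, best_w)
```

    This enumeration is exponential in the number of hypotheses, which is why message-passing approximations such as MPBP are attractive for realistic track counts.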

  20. Physiopathological Hypothesis of Cellulite

    Science.gov (United States)

    de Godoy, José Maria Pereira; de Godoy, Maria de Fátima Guerreiro

    2009-01-01

    A series of questions are asked concerning this condition, including its name, the consensus about the histopathological findings, the physiological hypothesis, and the treatment of the disease. We established a hypothesis for cellulite and confirmed that the clinical response is compatible with this hypothesis. Hence this novel approach brings a modern physiological concept with a physiopathologic basis and clinical proof of the hypothesis. We emphasize that the choice of patient, the correct diagnosis of cellulite, and the technique employed are fundamental to success. PMID:19756187

  1. Studies on Hepa filter test methods

    International Nuclear Information System (INIS)

    Lee, S.H.; Jon, K.S.; Park, W.J.; Ryoo, R.

    1981-01-01

    The purpose of this study is to compare the HEPA filter testing methods adopted in different countries, and to design and construct a test duct system to establish testing methods. The American D.O.P. test method, the British NaCl test method and several other independently developed methods are compared. The D.O.P. method is considered the most suitable for in-plant and leak tests.

  2. A hypothesis-testing framework for studies investigating ontogenetic niche shifts using stable isotope ratios.

    Directory of Open Access Journals (Sweden)

    Caroline M Hammerschlag-Peyer

    Full Text Available Ontogenetic niche shifts occur across diverse taxonomic groups, and can have critical implications for population dynamics, community structure, and ecosystem function. In this study, we provide a hypothesis-testing framework combining univariate and multivariate analyses to examine ontogenetic niche shifts using stable isotope ratios. This framework is based on three distinct ontogenetic niche shift scenarios, i.e., (1) no niche shift, (2) niche expansion/reduction, and (3) discrete niche shift between size classes. We developed criteria for identifying each scenario, based on three important resource use characteristics, i.e., niche width, niche position, and niche overlap. We provide an empirical example for each ontogenetic niche shift scenario, illustrating differences in resource use characteristics among different organisms. The present framework provides a foundation for future studies on ontogenetic niche shifts, and can also be applied to examine resource variability among other population sub-groupings (e.g., by sex or phenotype).
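    On a single isotope axis, the three scenarios can be told apart with simple summary statistics; a minimal sketch with invented δ13C values for two size classes (niche position as centroid, width as spread, overlap as range intersection; the actual framework combines univariate and multivariate tests):

```python
from statistics import mean, stdev

# Invented delta-13C values (permil) for two size classes of one species.
juveniles = [-18.2, -17.9, -18.5, -18.0, -17.6, -18.3]
adults    = [-15.1, -14.6, -16.0, -15.4, -14.9, -15.7]

def niche_metrics(a, b):
    """Niche position (centroid shift), width (spread of each class), and
    overlap (intersection of ranges) on a single isotope axis."""
    shift = abs(mean(a) - mean(b))
    widths = (stdev(a), stdev(b))
    lo = max(min(a), min(b))
    hi = min(max(a), max(b))
    overlap = max(0.0, hi - lo)
    return shift, widths, overlap

shift, widths, overlap = niche_metrics(juveniles, adults)
# Similar widths + a centroid shift + little overlap is the signature of the
# "discrete niche shift" scenario; a width change with a shared centroid
# would instead suggest niche expansion/reduction.
print(f"position shift={shift:.2f}, widths={widths}, overlap={overlap:.2f}")
```

    The same metrics computed on δ15N, or jointly in δ13C-δ15N space, give the multivariate side of the framework.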

  3. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network.

    Science.gov (United States)

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-08

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise: it applies statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference signal (noise signal) and the original signal, and removes the components of high similarity. The optimal significance level α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitiveness of symptom parameters (SPs) for condition diagnosis, so that SPs with high sensitiveness for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment for rolling element bearings demonstrates the effectiveness of the proposed method.
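    The band-masking idea behind the ASTF can be caricatured as follows (the spectra and threshold are invented; the published method uses a per-band statistical hypothesis test with a PSO-optimized significance level α, not this fixed threshold):

```python
# Hypothetical magnitude spectra over 8 frequency bands (arbitrary units).
noise_ref = [5.0, 4.8, 5.2, 5.1, 4.9, 5.0, 5.1, 4.9]   # background only
signal    = [5.1, 4.9, 5.0, 9.8, 5.0, 5.2, 8.7, 5.0]   # background + fault

def astf_like_filter(sig, ref, threshold=0.2):
    """Keep only bands whose relative deviation from the noise reference
    exceeds a threshold; bands similar to the reference are treated as
    noise and zeroed. (Stand-in for the ASTF's per-band statistic test.)"""
    out = []
    for s, r in zip(sig, ref):
        similarity = abs(s - r) / r
        out.append(s if similarity > threshold else 0.0)
    return out

filtered = astf_like_filter(signal, noise_ref)
print(filtered)  # only the two fault-related bands survive
```

    In the real method the surviving spectral components would then feed symptom parameters for the PCA sensitivity screening and the DBN classifier.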

  4. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    Directory of Open Access Journals (Sweden)

    Ke Li

    2016-01-01

    Full Text Available A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to extract weak fault features under background noise: it applies statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference (noise) signal and the original signal, and removes the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined, and a simulation experiment is designed to verify the effectiveness and robustness of ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate how sensitive symptom parameters (SPs) are for condition diagnosis, so that SPs with high sensitivity can be selected. A three-layer DBN, built on Bayesian Belief Network (BBN) theory, is developed to identify the condition of rotating machinery. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method.

  5. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    Science.gov (United States)

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to extract weak fault features under background noise: it applies statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference (noise) signal and the original signal, and removes the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined, and a simulation experiment is designed to verify the effectiveness and robustness of ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate how sensitive symptom parameters (SPs) are for condition diagnosis, so that SPs with high sensitivity can be selected. A three-layer DBN, built on Bayesian Belief Network (BBN) theory, is developed to identify the condition of rotating machinery. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006

  6. Standardized waste form test methods

    International Nuclear Information System (INIS)

    Slate, S.C.

    1984-01-01

    The Materials Characterization Center (MCC) is developing standard tests to characterize nuclear waste forms. Development of the first thirteen tests was originally initiated to provide data to compare different high-level waste (HLW) forms and to characterize their basic performance. The current status of the first thirteen MCC tests and some sample test results are presented: the radiation stability tests (MCC-6 and 12) and the tensile-strength test (MCC-11) are approved; the static leach tests (MCC-1, 2, and 3) are being reviewed for full approval; the thermal stability (MCC-7) and microstructure evaluation (MCC-13) methods are being considered for the first time; and the flowing leach test methods (MCC-4 and 5), the gas generation methods (MCC-8 and 9), and the brittle fracture method (MCC-10) are indefinitely delayed. Sample static leach test data on the ARM-1 approved reference material are presented. Established tests and proposed new tests will be used to meet new testing needs. For waste form production, tests on stability and composition measurement are needed to provide data to ensure waste form quality. In transportation, data are needed to evaluate the effects of accidents on canisterized waste forms. The new MCC-15 accident test method and some data are presented. Compliance testing needs required by the recent draft repository waste acceptance specifications are described. These specifications will control waste form contents, processing, and performance.

  7. Standardized waste form test methods

    International Nuclear Information System (INIS)

    Slate, S.C.

    1984-11-01

    The Materials Characterization Center (MCC) is developing standard tests to characterize nuclear waste forms. Development of the first thirteen tests was originally initiated to provide data to compare different high-level waste (HLW) forms and to characterize their basic performance. The current status of the first thirteen MCC tests and some sample test results are presented: the radiation stability tests (MCC-6 and 12) and the tensile-strength test (MCC-11) are approved; the static leach tests (MCC-1, 2, and 3) are being reviewed for full approval; the thermal stability (MCC-7) and microstructure evaluation (MCC-13) methods are being considered for the first time; and the flowing leach test methods (MCC-4 and 5), the gas generation methods (MCC-8 and 9), and the brittle fracture method (MCC-10) are indefinitely delayed. Sample static leach test data on the ARM-1 approved reference material are presented. Established tests and proposed new tests will be used to meet new testing needs. For waste form production, tests on stability and composition measurement are needed to provide data to ensure waste form quality. In transportation, data are needed to evaluate the effects of accidents on canisterized waste forms. The new MCC-15 accident test method and some data are presented. Compliance testing needs required by the recent draft repository waste acceptance specifications are described. These specifications will control waste form contents, processing, and performance. 2 references, 2 figures

  8. Semiconductor testing method

    International Nuclear Information System (INIS)

    Brown, Stephen.

    1992-01-01

    This method avoids the use of nuclear radiation (e.g. gamma rays, X-rays, or electron beams) when testing semiconductor components for resistance to hard radiation, which causes data corruption in some memory devices and 'latch-up' in others. Similar fault effects can be achieved using a xenon or other 'light' flash gun, even though light penetrates far less deeply than gamma rays. The method involves treating a device with gamma radiation, measuring a particular fault current at the onset of a fault event, repeating the test with light to confirm that the fault event occurs at the same measured fault current, and then using that fault-current value as a reference for future light-based tests on similar devices. (author)

  9. Rayleigh's hypothesis and the geometrical optics limit.

    Science.gov (United States)

    Elfouhaily, Tanos; Hahn, Thomas

    2006-09-22

    The Rayleigh hypothesis (RH) is often invoked in the theoretical and numerical treatment of rough surface scattering in order to decouple the analytical form of the scattered field. The hypothesis stipulates that the scattered field away from the surface can be extended down onto the rough surface even though it is formed by solely up-going waves. Traditionally this hypothesis is systematically used to derive the Volterra series under the small perturbation method, which is equivalent to the low-frequency limit. In this Letter we demonstrate that the RH also carries the high-frequency or geometrical optics limit, at least to first order. This finding has never been explicitly derived in the literature. Our result supports the idea that the RH might be an exact solution under some constraints in the general case of random rough surfaces and not only in the case of small-slope deterministic periodic gratings.

  10. A Dopamine Hypothesis of Autism Spectrum Disorder.

    Science.gov (United States)

    Pavăl, Denis

    2017-01-01

    Autism spectrum disorder (ASD) comprises a group of neurodevelopmental disorders characterized by social deficits and stereotyped behaviors. While several theories have emerged, the pathogenesis of ASD remains unknown. Although studies report dopamine signaling abnormalities in autistic patients, a coherent dopamine hypothesis which could link neurobiology to behavior in ASD is currently lacking. In this paper, we present such a hypothesis by proposing that autistic behavior arises from dysfunctions in the midbrain dopaminergic system. We hypothesize that a dysfunction of the mesocorticolimbic circuit leads to social deficits, while a dysfunction of the nigrostriatal circuit leads to stereotyped behaviors. Furthermore, we discuss 2 key predictions of our hypothesis, with emphasis on clinical and therapeutic aspects. First, we argue that dopaminergic dysfunctions in the same circuits should associate with autistic-like behavior in nonautistic subjects. Concerning this, we discuss the case of PANDAS (pediatric autoimmune neuropsychiatric disorder associated with streptococcal infections) which displays behaviors similar to those of ASD, presumed to arise from dopaminergic dysfunctions. Second, we argue that providing dopamine modulators to autistic subjects should lead to a behavioral improvement. Regarding this, we present clinical studies of dopamine antagonists which seem to have improving effects on autistic behavior. Furthermore, we explore the means of testing our hypothesis by using neuroreceptor imaging, which could provide comprehensive evidence for dopamine signaling dysfunctions in autistic subjects. Lastly, we discuss the limitations of our hypothesis. Along these lines, we aim to provide a dopaminergic model of ASD which might lead to a better understanding of the ASD pathogenesis. © 2017 S. Karger AG, Basel.

  11. Possible Solution to Publication Bias Through Bayesian Statistics, Including Proper Null Hypothesis Testing

    NARCIS (Netherlands)

    Konijn, Elly A.; van de Schoot, Rens; Winter, Sonja D.; Ferguson, Christopher J.

    2015-01-01

    The present paper argues that an important cause of publication bias resides in traditional frequentist statistics forcing binary decisions. An alternative approach through Bayesian statistics provides various degrees of support for any hypothesis, allowing balanced decisions and proper null hypothesis testing.
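
    The contrast with binary reject/retain decisions can be made concrete with a Bayes factor. The following is a minimal sketch under assumed normal models, using the rough BIC approximation to the Bayes factor rather than anything from the paper itself; it returns graded evidence for or against a zero-mean null rather than a yes/no verdict.

```python
# Sketch (assumed models, not the authors' method): BIC-approximate
# Bayes factor for H0: mu = 0 versus H1: mu unrestricted, normal data.
import math
import statistics

def log_lik_normal(xs, mu, var):
    n = len(xs)
    return (-0.5 * n * math.log(2 * math.pi * var)
            - sum((x - mu) ** 2 for x in xs) / (2 * var))

def bic_bayes_factor_01(xs):
    """BF01 ~ exp((BIC1 - BIC0) / 2); values > 1 support the null."""
    n = len(xs)
    mu1 = statistics.fmean(xs)
    var1 = statistics.pvariance(xs, mu=mu1)  # MLE variance under H1
    var0 = sum(x * x for x in xs) / n        # MLE variance under H0
    bic0 = -2 * log_lik_normal(xs, 0.0, var0) + 1 * math.log(n)
    bic1 = -2 * log_lik_normal(xs, mu1, var1) + 2 * math.log(n)
    return math.exp((bic1 - bic0) / 2)
```

    A BF01 of, say, 3 expresses moderate support for the null, and 1/3 moderate support against it; either way the result is publishable evidence rather than a forced binary outcome.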

  12. Testing Bergmann's rule and the Rosenzweig hypothesis with craniometric studies of the South American sea lion.

    Science.gov (United States)

    Sepúlveda, Maritza; Oliva, Doris; Duran, L René; Urra, Alejandra; Pedraza, Susana N; Majluf, Patrícia; Goodall, Natalie; Crespo, Enrique A

    2013-04-01

    We tested the validity of Bergmann's rule and Rosenzweig's hypothesis through an analysis of the geographical variation of the skull size of Otaria flavescens along the entire distribution range of the species (except Brazil). We quantified the sizes of 606 adult South American sea lion skulls measured in seven localities of Peru, Chile, Uruguay, Argentina, and the Falkland/Malvinas Islands. Geographical and environmental variables included latitude, longitude, and monthly minimum, maximum, and mean air and ocean temperatures. We also included information on fish landings as a proxy for productivity. Males showed a positive relationship between condylobasal length (CBL) and latitude, and between CBL and the six temperature variables. By contrast, females showed a negative relationship between CBL and the same variables. Finally, female skull size showed a significant and positive correlation with fish landings, while males did not show any relationship with this variable. The body size of males conformed to Bergmann's rule, with larger individuals found in southern localities of South America. Females followed the converse of Bergmann's rule at the intraspecific level, but showed a positive relationship with the proxy for productivity, thus supporting Rosenzweig's hypothesis. Differences in the factors that drive body size in females and males may be explained by their different life-history strategies. Our analyses demonstrate that latitude and temperature are not the only factors that explain spatial variation in body size: others such as food availability are also important for explaining the ecogeographical patterns found in O. flavescens.
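
    The latitude analysis above reduces to sex-specific regressions of skull size on latitude. A hypothetical sketch with invented numbers (not the study's data) shows the opposing slopes the abstract reports:

```python
# Sketch with made-up data: positive CBL-latitude slope for males
# (Bergmann's rule), negative slope for females (the converse pattern).
def ols_slope(x, y):
    """Least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Hypothetical condylobasal lengths (mm) at increasing southern latitudes
latitudes = [6, 12, 23, 33, 41, 52]
cbl_males = [350, 352, 356, 360, 363, 368]
cbl_females = [285, 284, 282, 280, 279, 277]
```

    In the study itself the temperature variables and the fish-landings proxy would enter as additional predictors; the single-predictor slope is only the simplest version of the pattern.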

  13. Goodness-of-fit tests in mixed models

    KAUST Repository

    Claeskens, Gerda

    2009-05-12

    Mixed models, with both random and fixed effects, are most often estimated on the assumption that the random effects are normally distributed. In this paper we propose several formal tests of the hypothesis that the random effects and/or errors are normally distributed. Most of the proposed methods can be extended to generalized linear models where tests for non-normal distributions are of interest. Our tests are nonparametric in the sense that they are designed to detect virtually any alternative to normality. In case of rejection of the null hypothesis, the nonparametric estimation method that is used to construct a test provides an estimator of the alternative distribution. © 2009 Sociedad de Estadística e Investigación Operativa.

  14. Statistical analysis of water-quality data containing multiple detection limits II: S-language software for nonparametric distribution modeling and hypothesis testing

    Science.gov (United States)

    Lee, L.; Helsel, D.

    2007-01-01

    Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data, perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis," where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and computation of related confidence limits. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation and interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
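
    The standard flipping trick behind left-censored K-M can be sketched in plain Python; this is a minimal illustration of the general technique, not the authors' S-language routines. Left-censored concentrations become right-censored after the transform x → M − x for a constant M above the data maximum, ordinary Kaplan-Meier is applied, and the curve is flipped back to give P(X < c).

```python
# Minimal sketch of K-M for left-censored concentrations via flipping.
def kaplan_meier(times, events):
    """Right-censored K-M: events[i] is True for an observed value,
    False for a censored one. Returns [(time, S(time))] at event times."""
    order = sorted(range(len(times)),
                   key=lambda i: (times[i], not events[i]))
    n_at_risk = len(times)
    s = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = 0        # observed events at time t
        removed = 0  # events plus censored observations at time t
        while i < len(order) and times[order[i]] == t:
            d += events[order[i]]
            removed += 1
            i += 1
        if d:
            s *= 1 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= removed
    return curve

def km_left_censored(values, detected, flip_at):
    """values: concentrations; detected[i] False means 'below detection
    limit'. flip_at must exceed max(values). Returns [(c, P(X < c))]."""
    flipped = [flip_at - v for v in values]
    curve = kaplan_meier(flipped, detected)
    return [(flip_at - t, s) for t, s in curve]
```

    On the flipped scale the survival function S(flip_at − c) equals P(X < c), so flipping the curve back yields the empirical distribution function of the concentrations, with nondetects contributing mass only below their detection limits.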

  15. Raison d’être of insulin resistance: the adjustable threshold hypothesis

    OpenAIRE

    Wang, Guanyu

    2014-01-01

    The epidemics of obesity and diabetes demand a deeper understanding of insulin resistance, for which the adjustable threshold hypothesis is formed in this paper. To test the hypothesis, mathematical modelling was used to analyse clinical data and to simulate biological processes at both molecular and organismal levels. I found that insulin resistance roots in the thresholds of the cell's bistable response. By assuming heterogeneity of the thresholds, single cells' all-or-none response can col...

  16. The picture superiority effect in conceptual implicit memory: a conceptual distinctiveness hypothesis.

    Science.gov (United States)

    Hamilton, Maryellen; Geraci, Lisa

    2006-01-01

    According to leading theories, the picture superiority effect is driven by conceptual processing, yet this effect has been difficult to obtain using conceptual implicit memory tests. We hypothesized that the picture superiority effect results from conceptual processing of a picture's distinctive features rather than a picture's semantic features. To test this hypothesis, we used 2 conceptual implicit general knowledge tests; one cued conceptually distinctive features (e.g., "What animal has large eyes?") and the other cued semantic features (e.g., "What animal is the figurehead of Tootsie Roll?"). Results showed a picture superiority effect only on the conceptual test using distinctive cues, supporting our hypothesis that this effect is mediated by conceptual processing of a picture's distinctive features.

  17. Risk-Based, Hypothesis-Driven Framework for Hydrological Field Campaigns with Case Studies

    Science.gov (United States)

    Harken, B.; Rubin, Y.

    2014-12-01

    There are several stages in any hydrological modeling campaign, including: formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration or plume travel time. These predictions often have significant bearing on a decision that must be made. Examples include: how to allocate limited remediation resources between contaminated groundwater sites or where to place a waste repository site. Answering such questions depends on predictions of EPMs using forward models as well as levels of uncertainty related to these predictions. Uncertainty in EPM predictions stems from uncertainty in model parameters, which can be reduced by measurements taken in field campaigns. The costly nature of field measurements motivates a rational basis for determining a measurement strategy that is optimal with respect to the uncertainty in the EPM prediction. The tool of hypothesis testing allows this uncertainty to be quantified by computing the significance of the test resulting from a proposed field campaign. The significance of the test gives a rational basis for determining the optimality of a proposed field campaign. This hypothesis testing framework is demonstrated and discussed using various synthetic case studies. This study involves contaminated aquifers where a decision must be made based on prediction of when a contaminant will arrive at a specified location. The EPM, in this case contaminant travel time, is cast into the hypothesis testing framework. The null hypothesis states that the contaminant plume will arrive at the specified location before a critical amount of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. The optimality of different field campaigns is assessed by computing the significance of the test resulting from each one
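
    Stripped to its core, the decision rule the case studies describe reduces to a tail probability over predicted travel times. The following is a hypothetical Monte Carlo sketch with an invented interface, not the study's implementation: the travel-time samples would come from forward runs of a flow and transport model over the uncertain parameter field.

```python
# Toy sketch (assumed interface): Monte Carlo version of the travel-time
# hypothesis test. H0: the plume arrives before t_crit.
def early_arrival_test(travel_times, t_crit, alpha=0.05):
    """Estimate P(arrival before t_crit) from simulated travel times;
    reject H0 when that probability falls below alpha."""
    p_early = sum(t < t_crit for t in travel_times) / len(travel_times)
    return p_early, p_early < alpha
```

    A proposed field campaign is then judged by how much the measurements it would supply tighten the travel-time distribution, and hence the attainable significance of this test.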

  18. A Modified Generalized Fisher Method for Combining Probabilities from Dependent Tests

    Directory of Open Access Journals (Sweden)

    Hongying (Daisy) Dai

    2014-02-01

    Full Text Available Rapid developments in molecular technology have yielded a large amount of high throughput genetic data to understand the mechanism for complex traits. The increase of genetic variants requires hundreds of thousands of statistical tests to be performed simultaneously in analysis, which poses a challenge to control the overall Type I error rate. Combining p-values from multiple hypothesis testing has shown promise for aggregating effects in high-dimensional genetic data analysis. Several p-value combining methods have been developed and applied to genetic data; see [Dai, et al. 2012b] for a comprehensive review. However, there is a lack of investigations conducted for dependent genetic data, especially for weighted p-value combining methods. Single nucleotide polymorphisms (SNPs) are often correlated due to linkage disequilibrium. Other genetic data, including variants from next generation sequencing, gene expression levels measured by microarray, protein and DNA methylation data, etc., also contain complex correlation structures. Ignoring correlation structures among genetic variants may lead to severe inflation of Type I error rates for omnibus testing of p-values. In this work, we propose modifications to the Lancaster procedure by taking the correlation structure among p-values into account. The weight function in the Lancaster procedure allows meaningful biological information to be incorporated into the statistical analysis, which can increase the power of the statistical testing and/or remove the bias in the process. Extensive empirical assessments demonstrate that the modified Lancaster procedure largely reduces the Type I error rates due to correlation among p-values, and retains considerable power to detect signals among p-values. We applied our method to reassess published renal transplant data, and identified a novel association between B cell pathways and allograft tolerance.
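
    Plain Fisher's method, the procedure these weighted and correlation-adjusted variants generalize, is compact enough to sketch directly. The version below assumes independent p-values, unlike the paper's modified Lancaster procedure, and uses the closed-form chi-square tail available for even degrees of freedom:

```python
# Fisher's method for combining k independent p-values:
# X = -2 * sum(ln p_i) ~ chi-square with 2k degrees of freedom under H0.
import math

def chi2_sf_even_df(x, df):
    """Survival function of chi-square with even df = 2k (closed form)."""
    k = df // 2
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2) / i
        total += term
    return math.exp(-x / 2) * total

def fisher_combined(pvalues):
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    return chi2_sf_even_df(stat, 2 * len(pvalues))
```

    Correlation-aware corrections in the Brown/Lancaster family keep the same statistic but rescale it (and its effective degrees of freedom) using the covariance among the p-values, which is what prevents the Type I error inflation the abstract describes.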

  19. Cognitive differences between orang-utan species: a test of the cultural intelligence hypothesis.

    Science.gov (United States)

    Forss, Sofia I F; Willems, Erik; Call, Josep; van Schaik, Carel P

    2016-07-28

    Cultural species can - or even prefer to - learn their skills from conspecifics. According to the cultural intelligence hypothesis, selection on underlying mechanisms not only improves this social learning ability but also the asocial (individual) learning ability. Thus, species with systematically richer opportunities to socially acquire knowledge and skills should over time evolve to become more intelligent. We experimentally compared the problem-solving ability of Sumatran orang-utans (Pongo abelii), which are sociable in the wild, with that of the closely related, but more solitary Bornean orang-utans (P. pygmaeus), under the homogeneous environmental conditions provided by zoos. Our results revealed that Sumatrans showed superior innate problem-solving skills to Borneans, and also showed greater inhibition and a more cautious and less rough exploration style. This pattern is consistent with the cultural intelligence hypothesis, which predicts that the more sociable of two sister species experienced stronger selection on cognitive mechanisms underlying learning.

  20. FADTTSter: accelerating hypothesis testing with functional analysis of diffusion tensor tract statistics

    Science.gov (United States)

    Noel, Jean; Prieto, Juan C.; Styner, Martin

    2017-03-01

    Functional Analysis of Diffusion Tensor Tract Statistics (FADTTS) is a toolbox for analysis of white matter (WM) fiber tracts. It allows associating diffusion properties along major WM bundles with a set of covariates of interest, such as age, diagnostic status and gender, and the structure of the variability of these WM tract properties. However, to use this toolbox, a user must have an intermediate knowledge in scripting languages (MATLAB). FADTTSter was created to overcome this issue and make the statistical analysis accessible to any non-technical researcher. FADTTSter is actively being used by researchers at the University of North Carolina. FADTTSter guides non-technical users through a series of steps including quality control of subjects and fibers in order to setup the necessary parameters to run FADTTS. Additionally, FADTTSter implements interactive charts for FADTTS' outputs. This interactive chart enhances the researcher experience and facilitates the analysis of the results. FADTTSter's motivation is to improve usability and provide a new analysis tool to the community that complements FADTTS. Ultimately, by enabling FADTTS to a broader audience, FADTTSter seeks to accelerate hypothesis testing in neuroimaging studies involving heterogeneous clinical data and diffusion tensor imaging. This work is submitted to the Biomedical Applications in Molecular, Structural, and Functional Imaging conference. The source code of this application is available in NITRC.

  1. Testing the Binary Hypothesis: Pulsar Timing Constraints on Supermassive Black Hole Binary Candidates

    Science.gov (United States)

    Sesana, Alberto; Haiman, Zoltán; Kocsis, Bence; Kelley, Luke Zoltan

    2018-03-01

    The advent of time domain astronomy is revolutionizing our understanding of the universe. Programs such as the Catalina Real-time Transient Survey (CRTS) or the Palomar Transient Factory (PTF) surveyed millions of objects for several years, allowing variability studies on large statistical samples. The inspection of ≈250 k quasars in CRTS resulted in a catalog of 111 potentially periodic sources, put forward as supermassive black hole binary (SMBHB) candidates. A similar investigation on PTF data yielded 33 candidates from a sample of ≈35 k quasars. Working under the SMBHB hypothesis, we compute the implied SMBHB merger rate and we use it to construct the expected gravitational wave background (GWB) at nano-Hz frequencies, probed by pulsar timing arrays (PTAs). After correcting for incompleteness and assuming virial mass estimates, we find that the GWB implied by the CRTS sample exceeds the current most stringent PTA upper limits by almost an order of magnitude. After further correcting for the implicit bias in virial mass measurements, the implied GWB drops significantly but is still in tension with the most stringent PTA upper limits. Similar results hold for the PTF sample. Bayesian model selection shows that the null hypothesis (whereby the candidates are false positives) is preferred over the binary hypothesis at about 2.3σ and 3.6σ for the CRTS and PTF samples respectively. Although not decisive, our analysis highlights the potential of PTAs as astrophysical probes of individual SMBHB candidates and indicates that the CRTS and PTF samples are likely contaminated by several false positives.

  2. The evolution of polyandry: patterns of genotypic variation in female mating frequency, male fertilization success and a test of the sexy-sperm hypothesis.

    Science.gov (United States)

    Simmons, L W

    2003-07-01

    The sexy-sperm hypothesis predicts that females obtain indirect benefits for their offspring via polyandry, in the form of increased fertilization success for their sons. I use a quantitative genetic approach to test the sexy-sperm hypothesis using the field cricket Teleogryllus oceanicus. Previous studies of this species have shown considerable phenotypic variation in fertilization success when two or more males compete. There were high broad-sense heritabilities for both paternity and polyandry. Patterns of genotypic variance were consistent with X-linked inheritance and/or maternal effects on these traits. The genetic architecture therefore precludes the evolution of polyandry via a sexy-sperm process. Thus the positive genetic correlation between paternity in sons and polyandry in daughters predicted by the sexy-sperm hypothesis was absent. There was significant heritable variation in the investment by females in ovaries and by males in the accessory gland. Surprisingly there was a very strong genetic correlation between these two traits. The significance of this genetic correlation for the coevolution of male seminal products and polyandry is discussed.

  3. Methodology for developing new test methods

    Directory of Open Access Journals (Sweden)

    A. I. Korobko

    2017-06-01

    Full Text Available The paper describes a methodology for developing new test methods and for forming the decisions involved in their development. The methodology is based on individual elements of the systems and process approaches, which support an effective strategy for studying the test object, examining its interrelations, and synthesizing an adequate model of the test method. The effectiveness of a developed test method is determined by the correct choice of the set of concepts, their interrelations, and their mutual influence; this makes it possible to solve the assigned tasks and achieve the goal. The methodology relies on fuzzy cognitive maps, and the choice of the method on which the decision-forming model is based is considered. The methodology provides for recording the model of a new test method as a finite set of objects that represent characteristics significant for the test method. Causal relationships are then established between these objects, and values are set for the fitness and observability indicators of the method, together with the metrological tolerance for each indicator. The work is aimed at the overall goal of ensuring test quality by improving the methodology for developing test methods.

  4. Possible stimulation of anti-tumor immunity using repeated cold stress: a hypothesis

    Directory of Open Access Journals (Sweden)

    Radoja Sasa

    2007-11-01

    Full Text Available Abstract Background The phenomenon of hormesis, whereby small amounts of seemingly harmful or stressful agents can be beneficial for the health and lifespan of laboratory animals, has been reported in the literature. In particular, there is accumulating evidence that daily brief cold stress can increase both numbers and activity of peripheral cytotoxic T lymphocytes and natural killer cells, the major effectors of adaptive and innate tumor immunity, respectively. This type of regimen (for 8 days) has been shown to improve survival of mice infected with the intracellular parasite Toxoplasma gondii, which would also be consistent with enhanced cell-mediated immunity. Presentation of the hypothesis This paper hypothesizes that brief cold-water stress repeated daily over many months could enhance anti-tumor immunity and improve the survival rate of a non-lymphoid cancer. The possible mechanism of the non-specific stimulation of cellular immunity by repeated cold stress appears to involve transient activation of the sympathetic nervous system, hypothalamic-pituitary-adrenal and hypothalamic-pituitary-thyroid axes, as described in more detail in the text. Daily moderate cold hydrotherapy is known to reduce pain and does not appear to have noticeable adverse effects on normal test subjects, although some studies have shown that it can cause transient arrhythmias in patients with heart problems and can also inhibit humoral immunity. Sudden immersion in ice-cold water can cause transient pulmonary edema and increase permeability of the blood-brain barrier, thereby increasing mortality of neurovirulent infections. Testing the hypothesis The proposed procedure, an adapted cold swim (5–7 minutes at 20 degrees Celsius) with gradual adaptation, is to be tested on a mouse tumor model. Mortality, tumor size, and measurements of cellular immunity (numbers and activity of peripheral CD8+ T lymphocytes and natural killer cells) of the cold-exposed group would be compared to

  5. A comparator-hypothesis account of biased contingency detection.

    Science.gov (United States)

    Vadillo, Miguel A; Barberia, Itxaso

    2018-02-12

    Our ability to detect statistical dependencies between different events in the environment is strongly biased by the number of coincidences between them. Even when there is no true covariation between a cue and an outcome, if the marginal probability of either of them is high, people tend to perceive some degree of statistical contingency between both events. The present paper explores the ability of the Comparator Hypothesis to explain the general pattern of results observed in this literature. Our simulations show that this model can account for the biasing effects of the marginal probabilities of cues and outcomes. Furthermore, the overall fit of the Comparator Hypothesis to a sample of experimental conditions from previous studies is comparable to that of the popular Rescorla-Wagner model. These results should encourage researchers to further explore and put to the test the predictions of the Comparator Hypothesis in the domain of biased contingency detection. Copyright © 2018 Elsevier B.V. All rights reserved.
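
    The density bias itself is easy to reproduce in simulation. The following toy illustration reproduces the phenomenon being modeled, not the Comparator Hypothesis (or Rescorla-Wagner) machinery: when cue and outcome are generated independently, delta-P stays near zero while the raw count of cue-outcome coincidences tracks outcome density, which is exactly the quantity people over-weight.

```python
# Toy simulation of null contingency with varying outcome density.
import random

def simulate(p_cue, p_outcome, n, seed=7):
    """Independent cue/outcome trials: true contingency is zero."""
    rng = random.Random(seed)
    return [(rng.random() < p_cue, rng.random() < p_outcome)
            for _ in range(n)]

def delta_p(trials):
    """P(outcome | cue) - P(outcome | no cue)."""
    a = sum(c and o for c, o in trials)          # cue and outcome
    b = sum(c and not o for c, o in trials)      # cue, no outcome
    c_ = sum((not c) and o for c, o in trials)   # no cue, outcome
    d = sum((not c) and (not o) for c, o in trials)
    return a / (a + b) - c_ / (c_ + d)

def coincidences(trials):
    return sum(c and o for c, o in trials)
```

    With a high marginal outcome probability the coincidence count (cell a of the contingency table) is several times larger than under a low one, even though delta-P is near zero in both cases; this is the pattern an associative account has to explain.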

  6. The Income Inequality Hypothesis Revisited : Assessing the Hypothesis Using Four Methodological Approaches

    NARCIS (Netherlands)

    Kragten, N.; Rözer, J.

    The income inequality hypothesis states that income inequality has a negative effect on individuals' health, partly because it reduces social trust. This article aims to critically assess the income inequality hypothesis by comparing several analytical strategies, namely OLS regression,

  7. Earthquake likelihood model testing

    Science.gov (United States)

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful, but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
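    The data-consistency evaluation described above can be sketched for a toy forecast. This is an illustrative reduction, not the RELM implementation: each bin is treated as an independent Poisson forecast, and the observed catalog's joint log-likelihood is compared against catalogs simulated from the forecast (a fraction near zero would indicate inconsistency with the data).

```python
import math
import random

def poisson_loglik(rates, counts):
    """Joint log-likelihood of observed bin counts under independent
    Poisson forecasts (one expected rate per space-magnitude bin)."""
    return sum(n * math.log(lam) - lam - math.lgamma(n + 1)
               for lam, n in zip(rates, counts))

def sample_poisson(rng, lam):
    # simple multiplication-based Poisson sampler, fine for small rates
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def consistency_test(rates, counts, n_sim=5000, seed=1):
    """Fraction of catalogs simulated from the forecast whose likelihood
    falls below the observed catalog's likelihood; a value near zero means
    the observed data are unusually unlikely under the forecast."""
    rng = random.Random(seed)
    observed = poisson_loglik(rates, counts)
    below = sum(
        poisson_loglik(rates, [sample_poisson(rng, lam) for lam in rates])
        < observed
        for _ in range(n_sim))
    return below / n_sim

rates = [0.5, 2.0, 1.0, 0.1]          # illustrative forecast: events per bin
score = consistency_test(rates, [1, 2, 1, 0])
print(score)
```

    With observed counts close to the forecast rates, as here, the score is moderate; a forecast badly mismatched to the catalog would drive it toward zero.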

  8. Water developments and canids in two North American deserts: a test of the indirect effect of water hypothesis.

    Directory of Open Access Journals (Sweden)

    Lucas K Hall

    Anthropogenic modifications to landscapes intended to benefit wildlife may negatively influence wildlife communities. Anthropogenic provisioning of free water (water developments) to enhance abundance and distribution of wildlife is a common management practice in arid regions where water is limiting. Despite the long-term and widespread use of water developments, little is known about how they influence native species. Water developments may negatively influence arid-adapted species (e.g., kit fox, Vulpes macrotis) by enabling water-dependent competitors (e.g., coyote, Canis latrans) to expand distribution in arid landscapes (i.e., the indirect effect of water hypothesis). We tested the two predictions of the indirect effect of water hypothesis (i.e., coyotes will visit areas with free water more frequently, and kit foxes will spatially and temporally avoid coyotes) and evaluated relative use of free water by canids in the Great Basin and Mojave Deserts from 2010 to 2012. We established scent stations in areas with (wet) and without (dry) free water and monitored visitation by canids to these sites and visitation to water sources using infrared-triggered cameras. There was no difference in the proportions of visits to scent stations in wet or dry areas by coyotes or kit foxes at either study area. We did not detect spatial (no negative correlation between visits to scent stations) or temporal (no difference between times when stations were visited) segregation between coyotes and kit foxes. Visitation to water sources was not different for coyotes between study areas, but kit foxes visited water sources more in the Mojave than the Great Basin. Our results did not support the indirect effect of water hypothesis in the Great Basin or Mojave Deserts for these two canids.

  9. Conflict of interest between a nematode and a trematode in an amphipod host: Test of the "sabotage" hypothesis

    Science.gov (United States)

    Thomas, Frédéric; Fauchier, Jerome; Lafferty, Kevin D.

    2002-01-01

    Microphallus papillorobustus is a manipulative trematode that induces strong behavioural alterations in the gammaridean amphipod Gammarus insensibilis, making the amphipod more vulnerable to predation by aquatic birds (definitive hosts). Conversely, the sympatric nematode Gammarinema gammari uses Gammarus insensibilis as a habitat and a source of nutrition. We investigated the conflict of interest between these two parasite species by studying the consequences of mixed infection on amphipod behaviour associated with the trematode. In the field, some amphipods infected by the trematode did not display the altered behaviour. These normal amphipods also had more nematodes, suggesting that the nematode overpowered the manipulation of the trematode, a strategy that would prolong the nematode's life. We hypothesize that sabotage of the trematode by the nematode would be an adaptive strategy for the nematode, consistent with recent speculation about co-operation and conflict in manipulative parasites. A behavioural test conducted in the laboratory on naturally infected amphipods yielded the same result. However, exposing amphipods to nematodes did not negate or decrease the manipulation exerted by the trematode. Similarly, experimental elimination of nematodes from amphipods did not permit trematodes to manipulate behaviour. These experimental data do not support the "sabotage" hypothesis as the explanation for the negative association between nematodes and manipulation by the trematode.

  10. 49 CFR 383.133 - Testing methods.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 5 2010-10-01 2010-10-01 false Testing methods. 383.133 Section 383.133... STANDARDS; REQUIREMENTS AND PENALTIES Tests § 383.133 Testing methods. (a) All tests shall be constructed in... must be at least as stringent as the Federal standards. (c) States shall determine specific methods for...

  11. A test of the submentalizing hypothesis: Apes' performance in a false belief task inanimate control

    Science.gov (United States)

    Hirata, Satoshi; Call, Josep; Tomasello, Michael

    2017-01-01

    Much debate concerns whether any nonhuman animals share with humans the ability to infer others' mental states, such as desires and beliefs. In a recent eye-tracking false-belief task, we showed that great apes correctly anticipated that a human actor would search for a goal object where he had last seen it, even though the apes themselves knew that it was no longer there. In response, Heyes proposed that apes' looking behavior was guided not by social cognitive mechanisms but rather by domain-general cueing effects, and suggested the use of inanimate controls to test this alternative submentalizing hypothesis. In the present study, we implemented the suggested inanimate control of our previous false-belief task. Apes attended well to key events but showed markedly fewer anticipatory looks and no significant tendency to look to the correct location. We thus found no evidence that submentalizing was responsible for apes' anticipatory looks in our false-belief task. PMID:28919941

  12. Culture, Method, and the Content of Self-Concepts: Testing Trait, Individual-Self-Primacy, and Cultural Psychology Perspectives.

    Science.gov (United States)

    Del Prado, Alicia M; Church, A Timothy; Katigbak, Marcia S; Miramontes, Lilia G; Whitty, Monica; Curtis, Guy J; de Jesús Vargas-Flores, José; Ibáñez-Reyes, Joselina; Ortiz, Fernando A; Reyes, Jose Alberto S

    2007-12-01

    Three theoretical perspectives on cultural universals and differences in the content of self-concepts were tested in individualistic (United States, n = 178; Australia, n = 112) and collectivistic (Mexico, n = 157; Philippines, n = 138) cultures, using three methods of self-concept assessment. Support was found for both trait perspectives and the individual-self-primacy hypothesis. In contrast, support for cultural psychology hypotheses was limited because traits and other personal attributes were not more salient, or social attributes less salient, in individualistic cultures than collectivistic cultures. The salience of some aspects of self-concept depended on the method of assessment, calling into question conclusions based on monomethod studies.

  13. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    Science.gov (United States)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
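    The procedure described above can be sketched in a few lines. This is a simplified illustration, not the author's code: it uses the Euclidean distance between normalized histograms as the test statistic and resamples from the pooled data to estimate a significance level (the bin edges, sample sizes, and Gaussian inputs are arbitrary choices for the demo).

```python
import random

def euclidean(h1, h2):
    return sum((a - b) ** 2 for a, b in zip(h1, h2)) ** 0.5

def histogram(values, bins):
    """Normalized histogram over the given bin edges
    (values outside the edges are ignored)."""
    counts = [0] * (len(bins) - 1)
    for v in values:
        for i in range(len(bins) - 1):
            if bins[i] <= v < bins[i + 1]:
                counts[i] += 1
                break
    total = float(sum(counts)) or 1.0
    return [c / total for c in counts]

def bootstrap_pvalue(sample_a, sample_b, bins, n_boot=1000, seed=0):
    """Bootstrap significance test: how often does resampling from the
    pooled data produce a histogram distance at least as large as the
    observed one?"""
    rng = random.Random(seed)
    observed = euclidean(histogram(sample_a, bins), histogram(sample_b, bins))
    pooled = sample_a + sample_b
    hits = 0
    for _ in range(n_boot):
        re_a = [rng.choice(pooled) for _ in sample_a]
        re_b = [rng.choice(pooled) for _ in sample_b]
        if euclidean(histogram(re_a, bins), histogram(re_b, bins)) >= observed:
            hits += 1
    return hits / n_boot

rng_a, rng_b = random.Random(1), random.Random(2)
a = [rng_a.gauss(0.0, 1.0) for _ in range(200)]
b = [rng_b.gauss(1.0, 1.0) for _ in range(200)]
p = bootstrap_pvalue(a, b, bins=[-4, -2, -1, 0, 1, 2, 4])
print(p)  # small p: the two summary histograms differ significantly
```

    Because the resampling makes no distributional assumptions, the same machinery works unchanged for the Jeffries-Matusita or Kuiper distance: only `euclidean` would be swapped out.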

  14. The Younger Dryas impact hypothesis: A requiem

    Science.gov (United States)

    Pinter, Nicholas; Scott, Andrew C.; Daulton, Tyrone L.; Podoll, Andrew; Koeberl, Christian; Anderson, R. Scott; Ishman, Scott E.

    2011-06-01

    consistent with the diffuse, non-catastrophic input of micrometeorite ablation fallout, probably augmented by anthropogenic and other terrestrial spherular grains. Results here also show considerable subjectivity in the reported sampling methods that may explain the purported YD spherule concentration peaks. Fire is a pervasive earth-surface process, and reanalyses of the original YD sites and of coeval records show episodic fire on the landscape through the latest Pleistocene, with no unique fire event at the onset of the YD. Lastly, with YD impact proponents increasingly retreating to nanodiamonds (cubic, hexagonal [lonsdaleite], and the proposed n-diamond) as evidence of impact, those data have been called into question. The presence of lonsdaleite was reported as proof of impact-related shock processes, but the evidence presented was inconsistent with lonsdaleite and consistent instead with polycrystalline aggregates of graphene and graphane mixtures that are ubiquitous in carbon forms isolated from sediments ranging from modern to pre-YD age. Important questions remain regarding the origins and distribution of other diamond forms (e.g., cubic nanodiamonds). In summary, none of the original YD impact signatures have been subsequently corroborated by independent tests. Of the 12 original lines of evidence, seven have so far proven to be non-reproducible. The remaining signatures instead seem to represent either (1) non-catastrophic mechanisms, and/or (2) terrestrial rather than extraterrestrial or impact-related sources. In all of these cases, sparse but ubiquitous materials seem to have been misreported and misinterpreted as singular peaks at the onset of the YD. Throughout the arc of this hypothesis, recognized and expected impact markers were not found, leading to proposed YD impactors and impact processes that were novel, self-contradictory, rapidly changing, and sometimes defying the laws of physics. The YD impact hypothesis provides a cautionary tale for researchers

  15. Approaches to vegetation mapping and ecophysiological hypothesis testing using combined information from TIMS, AVIRIS, and AIRSAR

    Science.gov (United States)

    Oren, R.; Vane, G.; Zimmermann, R.; Carrere, V.; Realmuto, V.; Zebker, Howard A.; Schoeneberger, P.; Schoeneberger, M.

    1991-01-01

    The Tropical Rainforest Ecology Experiment (TREE) had two primary objectives: (1) to design a method for mapping vegetation in tropical regions using remote sensing and determine whether the result improves on available vegetation maps; and (2) to test a specific hypothesis on plant/water relations. Both objectives were thought achievable with the combined information from the Thermal Infrared Multispectral Scanner (TIMS), Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), and Airborne Synthetic Aperture Radar (AIRSAR). Implicitly, two additional objectives were: (1) to ascertain that the range within each variable potentially measurable with the three instruments is large enough in the site, relative to the sensitivity of the instruments, so that differences between ecological groups may be detectable; and (2) to determine the ability of the three systems to quantify different variables and sensitivities. We found that the ranges in values of foliar nitrogen concentration, water availability, stand structure and species composition, and plant/water relations were large, even within the upland broadleaf vegetation type. The range was larger when other vegetation types were considered. Unfortunately, cloud cover and navigation errors compromised the utility of the TIMS and AVIRIS data. Nevertheless, the AIRSAR data alone appear to have improved on the available vegetation map for the study area. An example from an area converted to a farm is given to demonstrate how the combined information from AIRSAR, TIMS, and AVIRIS can uniquely identify distinct classes of land use. The example alludes to the potential utility of the three instruments for identifying vegetation at an ecological scale finer than vegetation types.

  16. Extraction of airway trees using multiple hypothesis tracking and template matching

    DEFF Research Database (Denmark)

    Raghavendra, Selvan; Petersen, Jens; Pedersen, Jesper Johannes Holst

    2016-01-01

    Knowledge of airway tree morphology has important clinical applications in diagnosis of chronic obstructive pulmonary disease. We present an automatic tree extraction method based on multiple hypothesis tracking and template matching for this purpose and evaluate its performance on chest CT images ... used in constructing a multiple hypothesis tree, which is then traversed to reach decisions. The proposed modifications remove the need for local thresholding of hypotheses, as decisions are made entirely based on statistical comparisons involving the hypothesis tree. The results show improvements ...

  17. 40 CFR 63.547 - Test methods.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 9 2010-07-01 2010-07-01 false Test methods. 63.547 Section 63.547... Hazardous Air Pollutants from Secondary Lead Smelting § 63.547 Test methods. (a) The following test methods...), and 63.545(e): (1) Method 1 shall be used to select the sampling port location and the number of...

  18. Materials and test methods

    International Nuclear Information System (INIS)

    Kase, M.B.

    1985-01-01

    The objective of this study was to provide, in cooperation with ORNL and LANL, specimens required for studies to develop organic insulators having the cryogenic neutron irradiation resistance required for MFE systems utilizing superconducting magnetic confinement. To develop test methods and analytical procedures for assessing radiation damage. To stimulate and participate in international cooperation directed toward accomplishing these objectives. The system for producing uniaxially reinforced, 3-4 mm (0.125 in) diameter rod specimens has been refined and validated by production of excellent quality specimens using liquid-mix epoxy resin systems. The methodology is undergoing further modification to permit use of hot-melt epoxy and polyimide resin systems as will be required for the experimental program to be conducted in the NLTNIF reactor at ORNL. Preliminary studies indicate that short beam and torsional shear test methods will be useful in evaluating radiation degradation. Development of these and other applicable test methods are continuing. A cooperative program established with laboratories in Japan and in England has resulted in the production and testing of specimens having an identical configuration

  19. Radon barrier: Method of testing airtightness

    DEFF Research Database (Denmark)

    Rasmussen, Torben Valdbjørn; Buch-Hansen, Thomas Cornelius

    2017-01-01

    The test method NBI 167/02 Radon membrane: Test of airtightness can be used for determining the airtightness of a radon barrier as a system solution. The test determines the air infiltration through the radon barrier for a number of levels of air pressure differences. The airflow through versus ... of the barrier with the low air pressure, through a well-defined opening, as a modification of the test method in general. Results, obtained using the improved test method, are shown for a number of radon barriers tested.

  20. The evolution of bacterial cell size: the internal diffusion-constraint hypothesis.

    Science.gov (United States)

    Gallet, Romain; Violle, Cyrille; Fromin, Nathalie; Jabbour-Zahab, Roula; Enquist, Brian J; Lenormand, Thomas

    2017-07-01

    Size is one of the most important biological traits influencing organismal ecology and evolution. However, we know little about the drivers of body size evolution in unicellulars. A long-term evolution experiment (Lenski's LTEE) in which Escherichia coli adapts to a simple glucose medium has shown that not only the growth rate and the fitness of the bacterium increase over time but also its cell size. This increase in size contradicts the prominent 'external diffusion constraint' (EDC) theory, which predicts that cell size should have evolved toward smaller cells. Among several scenarios, we propose and test an alternative 'internal diffusion-constraint' (IDC) hypothesis for cell size evolution. A change in cell volume affects metabolite concentrations in the cytoplasm. The IDC states that a higher metabolism can be achieved by reducing the molecular traffic time inside the cell, by increasing its volume. To test this hypothesis, we studied a population from the LTEE. We show that bigger cells with greater growth and CO2 production rates and lower mass-to-volume ratio were selected over time in the LTEE. These results are consistent with the IDC hypothesis. This novel hypothesis offers a promising approach for understanding the evolutionary constraints on cell size.

  1. Modeling Run Test Validity: A Meta-Analytic Approach

    National Research Council Canada - National Science Library

    Vickers, Ross

    2002-01-01

    This study utilized data from 166 samples (N = 5,757) to test the general hypothesis that differences in testing methods could account for the cross-situational variation in validity. Only runs >2 km...

  2. Robust inference from multiple test statistics via permutations: a better alternative to the single test statistic approach for randomized trials.

    Science.gov (United States)

    Ganju, Jitendra; Yu, Xinxin; Ma, Guoguang Julie

    2013-01-01

    Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejection of the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is modest. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value rather than a single statistic, particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials. Copyright © 2013 John Wiley & Sons, Ltd.
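    A minimal sketch of the minimum p-value approach follows, assuming a simple two-sample setting with one-sided statistics (the choice of mean and median differences as candidate statistics is illustrative, not from the paper). The observed minimum p-value is calibrated against its own permutation distribution, which is what keeps the overall type I error rate at its designated value.

```python
import random
import statistics

def min_p_permutation_test(x, y, stats, n_perm=500, seed=0):
    """Permutation test combining several candidate statistics through the
    minimum p-value (one-sided: larger statistic favors group x)."""
    rng = random.Random(seed)
    pooled = x + y
    n = len(x)
    obs = [s(x, y) for s in stats]
    draws = []
    for _ in range(n_perm):
        perm = pooled[:]
        rng.shuffle(perm)
        draws.append([s(perm[:n], perm[n:]) for s in stats])

    def min_p(values):
        # smallest per-statistic permutation p-value for one set of statistics
        return min(
            (1 + sum(d[i] >= values[i] for d in draws)) / (1 + n_perm)
            for i in range(len(stats)))

    # calibrate the observed minimum p-value against its permutation distribution
    p_obs = min_p(obs)
    extreme = sum(min_p(d) <= p_obs for d in draws)
    return (1 + extreme) / (1 + n_perm)

mean_diff = lambda a, b: statistics.fmean(a) - statistics.fmean(b)
median_diff = lambda a, b: statistics.median(a) - statistics.median(b)

rng = random.Random(42)
treated = [rng.gauss(1.5, 1.0) for _ in range(30)]
control = [rng.gauss(0.0, 1.0) for _ in range(30)]
p = min_p_permutation_test(treated, control, [mean_diff, median_diff])
print(p)  # small p: the null of indistinguishable groups is rejected
```

    Adding a weak statistic to `stats` costs little power, because the calibration step accounts for how small the minimum p-value tends to be under the null.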

  3. A method for testing whether model predictions fall within a prescribed factor of true values, with an application to pesticide leaching

    Science.gov (United States)

    Parrish, Rudolph S.; Smith, Charles N.

    1990-01-01

    A quantitative method is described for testing whether model predictions fall within a specified factor of true values. The technique is based on classical theory for confidence regions on unknown population parameters and can be related to hypothesis testing in both univariate and multivariate situations. A capability index is defined that can be used as a measure of predictive capability of a model, and its properties are discussed. The testing approach and the capability index should facilitate model validation efforts and permit comparisons among competing models. An example is given for a pesticide leaching model that predicts chemical concentrations in the soil profile.
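    A simplified univariate sketch of such a test follows (the paper develops confidence regions in both univariate and multivariate settings; here a rough large-sample critical value of 2.0 is assumed): the predictions pass if the confidence interval for the mean log prediction/observation ratio lies entirely within ±log(factor).

```python
import math
import statistics

def within_factor_test(predicted, observed, factor=2.0, t_crit=2.0):
    """Test whether predictions fall within a prescribed factor of the true
    values: the confidence interval for the mean log(prediction/observation)
    must lie entirely inside [-log(factor), +log(factor)].
    t_crit=2.0 is an approximate large-sample critical value (an assumption)."""
    logs = [math.log(p / o) for p, o in zip(predicted, observed)]
    mean = statistics.fmean(logs)
    sem = statistics.stdev(logs) / math.sqrt(len(logs))
    lower, upper = mean - t_crit * sem, mean + t_crit * sem
    bound = math.log(factor)
    return -bound < lower and upper < bound

# Hypothetical concentrations, e.g. predicted vs. measured values:
pred = [1.1, 0.9, 1.3, 0.8, 1.0, 1.2]
obs = [1.0] * 6
print(within_factor_test(pred, obs))                   # True: within a factor of 2
print(within_factor_test([3 * p for p in pred], obs))  # False: biased high ~3x
```

    The interval half-width relative to log(factor) plays the role of the paper's capability index: the more slack between the interval and the bounds, the stronger the model's demonstrated predictive capability.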

  4. On setting NRC alarm thresholds for inventory differences and process unit loss estimators: Clarifying their statistical basis with hypothesis testing methods and error propagation models from Jaech, Bowen and Bennett and IAEA

    International Nuclear Information System (INIS)

    Ong, L.

    1995-01-01

    Major fuel cycle facilities in the US private sector are required to respond, at predetermined alarm levels, to various special nuclear material loss estimators in the material control and accounting (MC and A) area. This paper presents US Nuclear Regulatory Commission (NRC) policy, along with the underlying statistical rationale, for establishing and inspecting the application of thresholds to detect excessive inventory differences (ID). Accordingly, escalating responsive action must be taken to satisfy NRC's MC and A regulations for low-enriched uranium (LEU) fuel conversion/fabrication plants and LEU enrichment facilities. The establishment of appropriate ID detection thresholds depends on a site-specific goal quantity, a specified probability of detection, and the standard error of the ID. Regulatory guidelines for ID significance tests and process control tests conducted by licensees with highly enriched uranium are similarly rationalized in definitive hypothesis testing, including null and alternative hypotheses; statistical errors of the first, second, third, and fourth kinds; and suitable test statistics, uncertainty estimates, prevailing assumptions, and critical values for comparisons. Conceptual approaches are described in the context of significance test considerations and measurement error models, including the treatment of so-called "systematic error variance" effects as observations of random variables in the statistical sense.
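    The quantitative core of such a threshold can be sketched as follows, with entirely illustrative component values (the variance components, units, and the choice of a 3-sigma multiplier are assumptions for the demo, not NRC figures): the ID standard error is obtained by propagating random and systematic variance components, and the alarm threshold is set at a multiple of that standard error.

```python
import math

def id_standard_error(random_variances, systematic_variances):
    """Standard error of the inventory difference (ID) by simple error
    propagation: independent random components and systematic components
    are summed in variance (a simplification of full propagation models)."""
    return math.sqrt(sum(random_variances) + sum(systematic_variances))

def alarm_threshold(sigma_id, z=3.0):
    # alarm at z standard errors; z=3 corresponds to a ~0.13% false-alarm rate
    return z * sigma_id

# Hypothetical variance components (kg^2), illustrative only:
sigma = id_standard_error([0.04, 0.09], [0.25])
print(round(alarm_threshold(sigma), 3))
```

    An observed ID exceeding the threshold would then trigger the escalating responsive actions described in the abstract.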

  5. Introduction to Robust Estimation and Hypothesis Testing

    CERN Document Server

    Wilcox, Rand R

    2012-01-01

    This revised book provides a thorough explanation of the foundation of robust methods, incorporating the latest updates on R and S-Plus, robust ANOVA (Analysis of Variance) and regression. It guides advanced students and other professionals through the basic strategies used for developing practical solutions to problems, and provides a brief background on the foundations of modern methods, placing the new methods in historical context. Author Rand Wilcox includes chapter exercises and many real-world examples that illustrate how various methods perform in different situations.Introduction to R

  6. Testing the Martian Methane from Cometary Debris Hypothesis: The Unusually Close 24 Jan 2018 Interaction Between Comet C/2007 H2 (Skiff) and Mars

    Science.gov (United States)

    Fries, M.; Archer, D.; Christou, T.; Conrad, P.; Eigenbrode, J.; Kate, I. L. ten; Steele, A.

    2018-01-01

    In previous work we proposed a hypothesis wherein debris moving along cometary orbits and interacting with Mars (e.g. meteor showers) may be responsible for transient local increases of methane observed in the martian atmosphere (henceforth 'the hypothesis'). An examination of the literature of methane detections dating back to 1997 showed that each detection was made, at most, 16 days after an interaction between Mars and one of seven small bodies (six comets and the unusual object 5335 Damocles) [ibid]. Two observations of high-altitude, transient visible plumes on Mars also correlate with cometary interactions, one occurring on the same day as the plume observation and the second observation occurring three days afterwards, and with two of the same seven small bodies. The proposed mechanism for methane production is dissemination of carbon-rich cometary material on infall into Mars' atmosphere followed by methane production via UV photolysis, a process that has been observed in laboratory experiments. Given this set of observations, it is necessary and indeed conducive to the scientific process to explore and robustly test the hypothesis.

  7. Do implicit motives and basic psychological needs interact to predict well-being and flow? : Testing a universal hypothesis and a matching hypothesis

    OpenAIRE

    Schüler, Julia; Brandstätter, Veronika; Sheldon, Kennon M.

    2013-01-01

    Self-Determination Theory (Deci and Ryan in Intrinsic motivation and self-determination in human behavior. Plenum Press, New York, 1985) suggests that certain experiences, such as competence, are equally beneficial to everyone’s well-being (universal hypothesis), whereas Motive Disposition Theory (McClelland in Human motivation. Scott, Foresman, Glenview, IL, 1985) predicts that some people, such as those with a high achievement motive, should benefit particularly from such experiences (match...

  8. Tax Evasion, Information Reporting, and the Regressive Bias Hypothesis

    DEFF Research Database (Denmark)

    Boserup, Simon Halphen; Pinje, Jori Veng

    A robust prediction from the tax evasion literature is that optimal auditing induces a regressive bias in effective tax rates compared to statutory rates. If correct, this will have important distributional consequences. Nevertheless, the regressive bias hypothesis has never been tested empirically...

  9. The Neuropsychology of Adolescent Sexual Offending: Testing an Executive Dysfunction Hypothesis.

    Science.gov (United States)

    Morais, Hugo B; Joyal, Christian C; Alexander, Apryl A; Fix, Rebecca L; Burkhart, Barry R

    2016-12-01

    Although executive dysfunctions are commonly hypothesized to contribute to sexual deviance or aggression, evidence of this relationship is scarce and its specificity is unproven, especially among adolescents. The objective of this study was to compare the executive functioning (EF) of adolescents with sexual offense convictions (ASOC) to that of non-sex-delinquents (NSD). A secondary goal was to assess the relationship among specific sexual offense characteristics (i.e., victim age), history of childhood sexual abuse (CSA), and EF. It was hypothesized that as a group, ASOC would present EF profiles similar to those of NSD. It was further hypothesized that ASOC with child victims would present significantly higher rates of CSA and more severe impairment of EF than ASOC with peer-aged or older victims and NSD. A total of 183 male adolescents (127 ASOC and 56 NSD) were interviewed to collect demographic information, sexual development history, history of CSA, an assessment of living conditions, and history of delinquency and sexual offending. Participants were administered the Delis-Kaplan Executive Functioning System and the Hare Psychopathy Checklist-Youth Version. In accord with the first hypothesis, ASOC and NSD presented similar EF scores, well below normative values. Thus, EF deficits may not characterize the profiles of adolescents with sexual behavior problems. Contrary to our second hypothesis, however, offending against children and/or experiencing CSA were not associated with poorer EF performance. On the contrary, ASOC with child victims obtained significantly higher scores on measures of higher order EF than both ASOC with peer-aged or older victims and NSD. Implications of these results and future directions are discussed. © The Author(s) 2015.

  10. Prolonged minor allograft survival in intravenously primed mice--a test of the veto hypothesis

    International Nuclear Information System (INIS)

    Johnson, L.L.

    1987-01-01

    Experiments were performed to test the hypothesis that veto cells are responsible for the prolonged survival of minor allografts of skin that is observed in recipients primed intravenously with spleen cells from mice syngeneic with the skin donors. This prolonged survival was observed for each of several minor histocompatibility (H) antigens and is antigen-specific. Gamma radiation (3300 rads) abolished the ability of male spleen cells infused i.v. to delay the rejection of male skin grafts (H-Y antigen) on female recipients. However, depletion of Thy-1+ cells from the i.v. infusion failed to abolish the ability to prolong male skin graft survival. Furthermore, the prolonged survival accorded to B6 (H-2b) male skin grafts on CB6F1 (H-2b/H-2d) female recipients given i.v. infusions of B6 male spleen cells extended to BALB/c (H-2d) male skin grafts as well, indicating a lack of MHC restriction. Thus, prolongation of minor allograft survival by i.v. infusion of minor H antigen-bearing spleen cells appears not to depend on veto T cells that others have found to be responsible for the suppression of CTL generation.

  11. Temporal variability and cooperative breeding: testing the bet-hedging hypothesis in the acorn woodpecker.

    Science.gov (United States)

    Koenig, Walter D; Walters, Eric L

    2015-10-07

    Cooperative breeding is generally considered an adaptation to ecological constraints on dispersal and independent breeding, usually due to limited breeding opportunities. Although benefits of cooperative breeding are typically thought of in terms of increased mean reproductive success, it has recently been proposed that this phenomenon may be a bet-hedging strategy that reduces variance in reproductive success (fecundity variance) in populations living in highly variable environments. We tested this hypothesis using long-term data on the polygynandrous acorn woodpecker (Melanerpes formicivorus). In general, fecundity variance decreased with increasing sociality, at least when controlling for annual variation in ecological conditions. Nonetheless, decreased fecundity variance was insufficient to compensate for reduced per capita reproductive success of larger, more social groups, which typically suffered lower estimated mean fitness. We did, however, find evidence that sociality in the form of larger group size resulted in increased fitness in years following a small acorn crop due to reduced fecundity variance. Bet-hedging, although not the factor driving sociality in general, may play a role in driving acorn woodpecker group living when acorns are scarce and ecological conditions are poor. © 2015 The Author(s).
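    The bet-hedging logic above rests on the distinction between arithmetic and geometric mean fitness, which the following sketch illustrates with hypothetical numbers (not from the acorn woodpecker data): a lower-variance strategy can have the higher geometric mean, and thus the higher long-run fitness, despite a lower arithmetic mean.

```python
import math

def arithmetic_mean(fitnesses):
    return sum(fitnesses) / len(fitnesses)

def geometric_mean(fitnesses):
    # long-run growth multiplies across years, so the geometric mean is
    # the fitness measure that penalizes between-year variance
    return math.exp(sum(math.log(f) for f in fitnesses) / len(fitnesses))

# Hypothetical per-capita reproductive success in a good acorn year
# followed by a poor one (numbers are illustrative, not from the study):
solitary = [3.0, 0.4]   # higher mean, higher variance
social = [2.0, 1.0]     # lower mean, lower variance

print(arithmetic_mean(solitary), geometric_mean(solitary))
print(arithmetic_mean(social), geometric_mean(social))
```

    This mirrors the study's finding: variance reduction alone can favor sociality only when poor years (such as those following a small acorn crop) weigh heavily enough in the long-run product.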

  12. There and back again: putting the vectorial movement planning hypothesis to a critical test.

    Science.gov (United States)

    Kobak, Eva-Maria; Cardoso de Oliveira, Simone

    2014-01-01

    Based on psychophysical evidence about how learning of visuomotor transformation generalizes, it has been suggested that movements are planned on the basis of movement direction and magnitude, i.e., the vector connecting movement origin and targets. This notion is also known under the term "vectorial planning hypothesis". Previous psychophysical studies, however, have included separate areas of the workspace for training movements and testing the learning. This study eliminates this confounding factor by investigating the transfer of learning from forward to backward movements in a center-out-and-back task, in which the workspace for both movements is completely identical. Visual feedback allowed for learning only during movements towards the target (forward movements) and not while moving back to the origin (backward movements). When subjects learned the visuomotor rotation in forward movements, initial directional errors in backward movements also decreased to some degree. This learning effect in backward movements occurred predominantly when backward movements featured the same movement directions as the ones trained in forward movements (i.e., when opposite targets were presented). This suggests that learning was transferred in a direction specific way, supporting the notion that movement direction is the most prominent parameter used for motor planning.

  13. 40 CFR 80.3 - Test methods.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Test methods. 80.3 Section 80.3... FUELS AND FUEL ADDITIVES General Provisions § 80.3 Test methods. The lead and phosphorus content of gasoline shall be determined in accordance with test methods set forth in the appendices to this part. [47...

  14. 40 CFR 63.1546 - Test methods.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 12 2010-07-01 2010-07-01 true Test methods. 63.1546 Section 63.1546... Hazardous Air Pollutants for Primary Lead Smelting § 63.1546 Test methods. (a) The following procedure shall....1543(a)(1) through § 63.1543(a)(9) shall be determined according to the following test methods in...

  15. 30 CFR 27.31 - Testing methods.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Testing methods. 27.31 Section 27.31 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR TESTING, EVALUATION, AND APPROVAL OF MINING PRODUCTS METHANE-MONITORING SYSTEMS Test Requirements § 27.31 Testing methods. A methane...

  16. Fecundity of trees and the colonization-competition hypothesis

    Science.gov (United States)

    James S. Clark; Shannon LaDeau; Ines Ibanez

    2004-01-01

    Colonization-competition trade-offs represent a stabilizing mechanism that is thought to maintain diversity of forest trees. If so, then early-successional species should benefit from high capacity to colonize new sites, and late-successional species should be good competitors. Tests of this hypothesis in forests have been precluded by an inability to estimate...

  17. A novel hypothesis for the binding mode of HERG channel blockers

    International Nuclear Information System (INIS)

    Choe, Han; Nah, Kwang Hoon; Lee, Soo Nam; Lee, Han Sam; Lee, Hui Sun; Jo, Su Hyun; Leem, Chae Hun; Jang, Yeon Jin

    2006-01-01

    We present a new docking model for HERG channel blockade. Our new model suggests three key interactions such that (1) a protonated nitrogen of the channel blocker forms a hydrogen bond with the carbonyl oxygen of HERG residue T623; (2) an aromatic moiety of the channel blocker makes a π-π interaction with the aromatic ring of HERG residue Y652; and (3) a hydrophobic group of the channel blocker forms a hydrophobic interaction with the benzene ring of HERG residue F656. The previous model assumes two interactions such that (1) a protonated nitrogen of the channel blocker forms a cation-π interaction with the aromatic ring of HERG residue Y652; and (2) a hydrophobic group of the channel blocker forms a hydrophobic interaction with the benzene ring of HERG residue F656. To test these models, we classified 69 known HERG channel blockers into eight binding types based on their plausible binding modes, and further categorized them into two groups based on the number of interactions our model would predict with the HERG channel (two or three). We then compared the pIC50 value distributions between these two groups. If the old hypothesis is correct, the distributions should not differ between the two groups (i.e., both groups show only two binding interactions). If our novel hypothesis is correct, the distributions should differ between Groups 1 and 2. Consistent with our hypothesis, the two groups differed with regard to pIC50, and the group having more predicted interactions with the HERG channel had a higher mean pIC50 value. Although additional work will be required to further validate our hypothesis, this improved understanding of the HERG channel blocker binding mode may help promote the development of in silico prediction methods for identifying potential HERG channel blockers
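The group comparison described, whether mean pIC50 differs between blockers predicted to make two versus three interactions, can be sketched as a simple permutation test. The pIC50 values below are illustrative placeholders, not the study's data:

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

def permutation_test(group1, group2, n_perm=10000, seed=0):
    """One-sided permutation test: how often does random relabeling
    produce a mean difference at least as large as the observed
    mean(group2) - mean(group1)?"""
    rng = random.Random(seed)
    observed = mean(group2) - mean(group1)
    pooled = group1 + group2
    n1 = len(group1)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = mean(pooled[n1:]) - mean(pooled[:n1])
        if diff >= observed:
            count += 1
    return observed, count / n_perm

# Hypothetical pIC50 values for the two predicted-interaction groups:
two_interactions   = [5.1, 5.6, 5.3, 5.9, 5.4, 5.2]
three_interactions = [6.4, 6.9, 6.2, 7.1, 6.6, 6.8]

obs, p = permutation_test(two_interactions, three_interactions)
print(f"observed mean difference = {obs:.2f}, permutation p = {p:.4f}")
```

A permutation test avoids distributional assumptions, which suits small groups of compounds with unknown pIC50 spread.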

  18. Efficient compliance with prescribed bounds on operational parameters by means of hypothesis testing using reactor data

    International Nuclear Information System (INIS)

    Sermer, P.; Olive, C.; Hoppe, F.M.

    2000-01-01

    A common problem in reactor operations is to comply with a requirement that certain operational parameters be constrained to lie within prescribed bounds. The fundamental issue to be addressed in any compliance description can be stated as follows: the compliance definition, compliance procedures, and allowances for uncertainties in data and accompanying methodologies should be well defined and justifiable. To this end, a mathematical framework for compliance, in which the computed or measured estimates of process parameters are considered random variables, is described in this paper. This allows a statistical formulation of the definition of compliance with licence or otherwise imposed limits. An important aspect of the proposed methodology is that the derived statistical tests are obtained by a Monte Carlo procedure using actual reactor operational data. The implementation of the methodology requires routine surveillance of the reactor core in order to perform the underlying statistical tests. The additional work required for surveillance is balanced by the fact that the resulting actions on reactor operations, implemented in station procedures, make the reactor 'safer' by increasing the operating margins. Furthermore, increased margins are also achieved by efficient solution techniques, which may allow an increase in reactor power. A rigorous analysis of a compliance problem using statistical hypothesis testing based on extreme value probability distributions and actual reactor operational data leads to effective solutions in the areas of licensing, nuclear safety, reliability and competitiveness of operating nuclear reactors. (author)
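A common building block for such Monte Carlo compliance tests is a distribution-free (Wilks-type) tolerance limit: with N independent samples of a process parameter, the sample maximum bounds the 95th percentile with 95% confidence once 0.95^N <= 0.05. The sketch below is generic order-statistics machinery, not necessarily the authors' exact procedure:

```python
def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest N such that the sample maximum of N i.i.d. draws is a
    one-sided upper tolerance limit for the given coverage quantile:
    P(sample max >= coverage quantile) = 1 - coverage**N >= confidence."""
    n = 1
    while coverage ** n > 1.0 - confidence:
        n += 1
    return n

print(wilks_sample_size())              # 59: the classic 95/95 result
print(wilks_sample_size(0.99, 0.95))    # 99/95 needs far more samples
```

Because the bound is distribution-free, it applies to the empirical distribution of computed or measured process parameters without assuming any particular parametric form.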

  19. Cross-cultural differences in cognitive performance and Spearman's hypothesis : g or c?

    NARCIS (Netherlands)

    Helms-Lorenz, M; Van de Vijver, FJR; Poortinga, YH

    2003-01-01

    Common tests of Spearman's hypothesis, according to which performance differences between cultural groups on cognitive tests increase with their g loadings, confound cognitive complexity and verbal-cultural aspects. The present study attempts to disentangle these components. Two intelligence

  20. A Teaching Method on Basic Chemistry for Freshman : Teaching Method with Pre-test and Post-test

    OpenAIRE

    立木, 次郎; 武井, 庚二

    2003-01-01

    This report deals with a teaching method on basic chemistry for freshman. This teaching method contains guidance and instruction to how to understand basic chemistry. Pre-test and post-test have been put into practice each time. Each test was returned to students at class in the following weeks.

  1. Application of statistical methods to the testing of nuclear counting assemblies

    International Nuclear Information System (INIS)

    Gilbert, J.P.; Friedling, G.

    1965-01-01

    This report describes the application of the hypothesis test theory to the control of the 'statistical purity' and of the stability of the counting batteries used for measurements on activation detectors in research reactors. The principles involved and the experimental results obtained at Cadarache on batteries operating with the reactors PEGGY and AZUR are given. (authors) [fr
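A classical "statistical purity" check for a counting channel is the Poisson dispersion test: repeated fixed-time counts should have variance equal to their mean, and the index-of-dispersion statistic is approximately chi-square with n - 1 degrees of freedom. A sketch of that generic test (not necessarily the report's exact procedure):

```python
def dispersion_statistic(counts):
    """Poisson dispersion test statistic, sum((x - xbar)^2) / xbar,
    approximately chi-square with len(counts) - 1 degrees of freedom
    if the channel behaves as an ideal Poisson counter."""
    n = len(counts)
    mean = sum(counts) / n
    return sum((x - mean) ** 2 for x in counts) / mean

# Hypothetical repeated fixed-time counts from one channel:
counts = [100, 102, 98, 101, 99]
stat = dispersion_statistic(counts)
print(f"chi-square statistic = {stat:.3f} on {len(counts) - 1} df")
# Compare against chi-square critical values: a statistic far below the
# lower tail suggests under-dispersion (e.g. dead-time losses); far above
# the upper tail suggests extra variance (e.g. unstable high voltage).
```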

  2. 40 CFR 63.344 - Performance test requirements and test methods.

    Science.gov (United States)

    2010-07-01

    ... electroplating tanks or chromium anodizing tanks. The sampling time and sample volume for each run of Methods 306... Chromium Anodizing Tanks § 63.344 Performance test requirements and test methods. (a) Performance test... Emissions From Decorative and Hard Chromium Electroplating and Anodizing Operations,” appendix A of this...

  3. Hypothesis in research

    Directory of Open Access Journals (Sweden)

    Eudaldo Enrique Espinoza Freire

    2018-01-01

    This work aims to provide material covering the fundamental content that enables the university professor to formulate hypotheses for the development of an investigation, taking into account the problem to be solved. For its elaboration, information was sought in primary documents, such as degree theses and reports of research results, selected on the basis of their relevance to the analyzed subject, currency and reliability, and in secondary documents, such as scientific articles published in journals of recognized prestige, selected by the same criteria as the previous documents. It presents an updated conceptualization of the hypothesis, its characterization, and an analysis of the structure of the hypothesis, with particular attention to the determination of the variables. The involvement of the university professor in the teaching-research process currently faces some difficulties, manifested, among other aspects, in an unstable balance between teaching and research, which leads to a separation between them.

  4. Testing the stress shadow hypothesis

    Science.gov (United States)

    Felzer, Karen R.; Brodsky, Emily E.

    2005-05-01

    A fundamental question in earthquake physics is whether aftershocks are predominantly triggered by static stress changes (permanent stress changes associated with fault displacement) or dynamic stresses (temporary stress changes associated with earthquake shaking). Both classes of models provide plausible explanations for earthquake triggering of aftershocks, but only the static stress model predicts stress shadows, or regions in which activity is decreased by a nearby earthquake. To test for whether a main shock has produced a stress shadow, we calculate time ratios, defined as the ratio of the time between the main shock and the first earthquake to follow it and the time between the last earthquake to precede the main shock and the first earthquake to follow it. A single value of the time ratio is calculated for each 10 × 10 km bin within 1.5 fault lengths of the main shock epicenter. Large values of the time ratio indicate a long wait for the first earthquake to follow the main shock and thus a potential stress shadow, whereas small values indicate the presence of aftershocks. Simulations indicate that the time ratio test should have sufficient sensitivity to detect stress shadows if they are produced in accordance with the rate and state friction model. We evaluate the 1989 MW 7.0 Loma Prieta, 1992 MW 7.3 Landers, 1994 MW 6.7 Northridge, and 1999 MW 7.1 Hector Mine main shocks. For each main shock, there is a pronounced concentration of small time ratios, indicating the presence of aftershocks, but the number of large time ratios is less than at other times in the catalog. This suggests that stress shadows are not present. By comparing our results to simulations we estimate that we can be at least 98% confident that the Loma Prieta and Landers main shocks did not produce stress shadows and 91% and 84% confident that stress shadows were not generated by the Hector Mine and Northridge main shocks, respectively. 
We also investigate the long hypothesized existence
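The time ratio for a single spatial bin can be computed directly from catalog event times; the implementation below is an illustrative reading of the definition, with hypothetical times:

```python
def time_ratio(t_before, t_main, t_after):
    """Time ratio for one 10 x 10 km bin: the time from the main shock
    to the first subsequent event, divided by the time from the last
    preceding event to the first subsequent event. Small values indicate
    aftershock activity; values near 1 indicate a long post-main-shock
    wait, i.e. a candidate stress shadow."""
    if not (t_before < t_main < t_after):
        raise ValueError("expected t_before < t_main < t_after")
    return (t_after - t_main) / (t_after - t_before)

# Hypothetical event times in days relative to an arbitrary origin
# (main shock at t = 0):
print(time_ratio(t_before=-30.0, t_main=0.0, t_after=1.0))   # 1/31, aftershock-like
print(time_ratio(t_before=-2.0, t_main=0.0, t_after=50.0))   # 50/52, shadow-like
```

Judging whether the observed distribution of such ratios across bins departs from background requires the simulation comparisons described in the abstract; the ratio itself is only the per-bin statistic.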

  5. Community-Driven Hypothesis Testing: A Solution for the Tragedy of the Anticommons.

    Science.gov (United States)

    Palma-Oliveira, José Manuel; Trump, Benjamin D; Wood, Matthew D; Linkov, Igor

    2018-03-01

    Shared ownership of property and resources is a longstanding challenge that has been amplified by the increasing development of industrial and postindustrial societies. Where governments, project planners, and commercial developers seek to develop new infrastructure, industrial projects, and various other land- and resource-intensive tasks, veto power shared by various local stakeholders can complicate or halt progress. Risk communication has been used as an attempt to address stakeholder concerns in these contexts, but has demonstrated shortcomings. These coordination failures between project planners and stakeholders can be described as a specific kind of social dilemma that we describe as the "tragedy of the anticommons." To overcome such dilemmas, we demonstrate how a two-step process can directly address public mistrust of project planners and public perceptions of limited decision-making authority. This approach is examined via two separate empirical field experiments in Portugal and Tunisia, where public resistance and anticommons problems threatened to derail emerging industrial projects. In both applications, an intervention is undertaken to address initial public resistance to such projects, where specific public stakeholders and project sponsors collectively engaged in a hypothesis-testing process to identify and assess human and environmental health risks associated with proposed industrial facilities. These field experiments indicate that a rigorous attempt to address public mistrust and perceptions of power imbalances and change the pay-off structure of the given dilemma may help overcome such anticommons problems in specific cases, and may potentially generate enthusiasm and support for such projects by local publics moving forward. © 2017 Society for Risk Analysis.

  6. Testing the shape-similarity hypothesis between particle-size distribution and water retention for Sicilian soils

    Directory of Open Access Journals (Sweden)

    Chiara Antinoro

    2012-12-01

    Application of the Arya and Paris (AP) model to estimate the soil water retention curve requires a detailed description of the particle-size distribution (PSD), but limited experimental PSD data are generally determined by the conventional sieve-hydrometer (SH) method. Detailed PSDs can be obtained by fitting a continuous model to SH data or performing measurements by the laser diffraction (LD) method. The AP model was applied to 40 Sicilian soils for which the PSD was measured by both the SH and LD methods. The scale factor α was set equal to 1.38 (procedure AP1) or estimated by a logistical model with parameters gathered from literature (procedure AP2). For both SH and LD data, procedure AP2 allowed a more accurate prediction of the water retention than procedure AP1, confirming that it is not convenient to use a unique value of α for soils that are very different in texture. Despite the differences in PSDs obtained by the SH and LD methods, the water retention predicted by a given procedure (AP1 or AP2) using SH or LD data was characterized by the same level of accuracy. Discrepancies in the estimated water retention from the two PSD measurement methods were attributed to underestimation of the finest diameter frequency obtained by the LD method. Analysis also showed that the soil water retention estimated using the SH method was affected by an estimation bias that could be corrected by an optimization procedure (OPT). Comparison of α-distributions and water retention shape indices obtained by the two methods (SH or LD) indicated that the shape-similarity hypothesis is better verified if the traditional sieve-hydrometer data are used to apply the AP model. The optimization procedure allowed more accurate predictions of the water retention curves than the traditional AP1 and AP2 procedures. Therefore, OPT can be considered a valid alternative to the more complex logistical model for estimating the water retention curve of Sicilian soils.

  7. HYPOTHESIS TESTING WITH THE SIMILARITY INDEX

    Science.gov (United States)

    Multilocus DNA fingerprinting methods have been used extensively to address genetic issues in wildlife populations. Hypotheses concerning population subdivision and differing levels of diversity can be addressed through the use of the similarity index (S), a band-sharing coeffic...
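The similarity index referred to here is the band-sharing coefficient S = 2*n_ab / (n_a + n_b), where n_ab is the number of fingerprint bands two individuals share and n_a, n_b are their total band counts. A minimal sketch with hypothetical band sets:

```python
def band_sharing(bands_a, bands_b):
    """Band-sharing similarity index S = 2*n_ab / (n_a + n_b),
    computed from the sets of scored band positions for two individuals."""
    set_a, set_b = set(bands_a), set(bands_b)
    shared = len(set_a & set_b)
    return 2.0 * shared / (len(set_a) + len(set_b))

# Hypothetical band positions (e.g. fragment sizes in kb):
a = {2.1, 3.4, 5.0, 7.2}
b = {3.4, 5.0, 6.1, 8.8}
print(band_sharing(a, b))  # 2 shared bands out of 4 + 4 -> S = 0.5
```

In practice, hypothesis tests on S compare mean band-sharing within and between populations against a null distribution, for example by permuting individuals between groups.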

  8. The origin of Phobos grooves from ejecta launched from impact craters on Mars: Tests of the hypothesis

    Science.gov (United States)

    Ramsley, Kenneth R.; Head, James W.

    2013-01-01

    The surface of the martian moon Phobos is characterized by parallel and intersecting grooves that bear resemblance to secondary crater chains observed on planetary surfaces. Murray (2011) has hypothesized that the main groove-forming process on Phobos is the intersection of Phobos with ejecta from primary impact events on Mars to produce chains of secondary craters. The hypothesis infers a pattern of parallel jets of ejecta, either fluidized or solidified, that break into equally-spaced fragments and disperse uniformly along-trajectory during the flight from Mars to Phobos. At the moment of impact with Phobos the dispersed fragments emplace secondary craters that are aligned along strike corresponding to the flight pattern of ejecta along trajectory. The aspects of the characteristics of grooves on Phobos cited by this hypothesis that might be explained by secondary ejecta include: their observed linearity, parallelism, planar alignment, pitted nature, change in character along strike, and a "zone of avoidance" where ejecta from Mars is predicted not to impact (Murray, 2011). To test the hypothesis we plot precise Keplerian orbits for ejecta from Mars (elliptical and hyperbolic with periapsis located below the surface of Mars). From these trajectories we: (1) set the fragment dispersion limits of ejecta patterns required to emplace the more typically well-organized parallel grooves observed in returned images from Phobos; (2) plot ranges of the ejecta flight durations from Mars to Phobos and map regions of exposure; (3) utilize the same exposure map to observe trajectory-defined ejecta exposure shadows; (4) observe hemispheric exposure in response to shorter and longer durations of ejecta flight; (5) assess the viability of ejecta emplacing the large family of grooves covering most of the northern hemisphere of Phobos; and (6) plot the arrival of parallel lines of ejecta emplacing chains of craters at oblique incident angles. We also assess the bulk volume of

  9. 7 CFR 58.644 - Test methods.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Test methods. 58.644 Section 58.644 Agriculture... Procedures § 58.644 Test methods. (a) Microbiological. Microbiological determinations shall be made in accordance with the methods described in the latest edition of Standard Methods for the Examination of Dairy...

  10. Unicorns do exist: a tutorial on "proving" the null hypothesis.

    Science.gov (United States)

    Streiner, David L

    2003-12-01

    Introductory statistics classes teach us that we can never prove the null hypothesis; all we can do is reject or fail to reject it. However, there are times when it is necessary to try to prove the nonexistence of a difference between groups. This most often happens within the context of comparing a new treatment against an established one and showing that the new intervention is not inferior to the standard. This article first outlines the logic of "noninferiority" testing by differentiating between the null hypothesis (that which we are trying to nullify) and the "nil" hypothesis (there is no difference), reversing the role of the null and alternate hypotheses, and defining an interval within which groups are said to be equivalent. We then work through an example and show how to calculate sample sizes for noninferiority studies.
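The noninferiority logic reduces to a one-sided confidence bound: the new treatment is declared noninferior when the lower confidence limit for the difference (new minus standard) lies above the prespecified margin -Δ. A sketch using a normal approximation, with hypothetical numbers:

```python
from math import sqrt

def noninferior(mean_new, mean_std, sd_new, sd_std, n_new, n_std,
                margin, z=1.645):
    """One-sided 95% lower confidence bound for (new - standard), using
    a normal approximation; noninferior if the bound exceeds -margin.
    Higher scores are assumed better."""
    diff = mean_new - mean_std
    se = sqrt(sd_new ** 2 / n_new + sd_std ** 2 / n_std)
    lower = diff - z * se
    return lower, lower > -margin

# Hypothetical trial: new treatment scores slightly lower on average,
# but the whole confidence bound stays within the margin of 0.5.
lower, ok = noninferior(mean_new=7.8, mean_std=8.0,
                        sd_new=1.0, sd_std=1.0,
                        n_new=100, n_std=100, margin=0.5)
print(f"lower bound = {lower:.3f}, noninferior: {ok}")
```

Note the reversal the article describes: here the burden of proof is to show the difference is *not* worse than -Δ, rather than to show a difference exists.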

  11. Is the Aluminum Hypothesis Dead?

    Science.gov (United States)

    2014-01-01

    The Aluminum Hypothesis, the idea that aluminum exposure is involved in the etiology of Alzheimer disease, dates back to a 1965 demonstration that aluminum causes neurofibrillary tangles in the brains of rabbits. Initially the focus of intensive research, the Aluminum Hypothesis has gradually been abandoned by most researchers. Yet, despite this current indifference, the Aluminum Hypothesis continues to attract the attention of a small group of scientists and aluminum continues to be viewed with concern by some of the public. This review article discusses reasons that mainstream science has largely abandoned the Aluminum Hypothesis and explores a possible reason for some in the general public continuing to view aluminum with mistrust. PMID:24806729

  12. Isotopic Resonance Hypothesis: Experimental Verification by Escherichia coli Growth Measurements

    Science.gov (United States)

    Xie, Xueshu; Zubarev, Roman A.

    2015-03-01

    Isotopic composition of reactants affects the rates of chemical and biochemical reactions. As a rule, enrichment of heavy stable isotopes leads to progressively slower reactions. But the recent isotopic resonance hypothesis suggests that the dependence of the reaction rate upon the enrichment degree is not monotonous. Instead, at some "resonance" isotopic compositions the kinetics increases, while at "off-resonance" compositions the same reactions progress more slowly. To test the predictions of this hypothesis for the elements C, H, N and O, we designed a precise (standard error +/-0.05%) experiment that measures the parameters of bacterial growth in minimal media with varying isotopic composition. A number of predicted resonance conditions were tested, with significant enhancements in kinetics discovered at these conditions. The combined statistics extremely strongly supports the validity of the isotopic resonance phenomenon, with potential implications for biotechnology, medicine, chemistry and other areas.

  13. Ontogeny of Foraging Competence in Capuchin Monkeys (Cebus capucinus) for Easy versus Difficult to Acquire Fruits: A Test of the Needing to Learn Hypothesis.

    Directory of Open Access Journals (Sweden)

    Elizabeth Christine Eadie

    Which factors select for long juvenile periods in some species is not well understood. One potential reason to delay the onset of reproduction is slow food acquisition rates, either due to competition (part of the ecological risk avoidance hypothesis), or due to a decreased foraging efficiency (a version of the needing to learn hypothesis). Capuchins provide a useful genus to test the needing to learn hypothesis because they are known for having long juvenile periods and a difficult-to-acquire diet. Generalized, linear, mixed models with data from 609 fruit forage focal follows on 49 habituated, wild Cebus capucinus were used to test two predictions from the needing-to-learn hypothesis as it applies to fruit foraging skills: 1) capuchin monkeys do not achieve adult foraging return rates for difficult-to-acquire fruits before late in the juvenile period; and 2) variance in return rates for these fruits is at least partially associated with differences in foraging skill. In support of the first prediction, adults, compared with all younger age classes, had significantly higher foraging return rates when foraging for fruits that were ranked as difficult-to-acquire (return rates relative to adults: 0.30-0.41, p-value range 0.008-0.016), indicating that the individuals in the group who have the most foraging experience also achieve the highest return rates. In contrast, and in support of the second prediction, there were no significant differences between age classes for fruits that were ranked as easy to acquire (return rates relative to adults: 0.97-1.42, p-value range 0.086-0.896), indicating that strength and/or skill are likely to affect return rates. 
In addition, fruits that were difficult to acquire were foraged at nearly identical rates by adult males and significantly smaller (and presumably weaker) adult females (males relative to females: 1.01, p = 0.978), while subadult females had much lower foraging efficiency than the similarly-sized but more

  15. [Inappropriate test methods in allergy].

    Science.gov (United States)

    Kleine-Tebbe, J; Herold, D A

    2010-11-01

    Inappropriate test methods are increasingly utilized to diagnose allergy. They fall into two categories: I. Tests with an obscure theoretical basis, missing validity and lacking reproducibility, such as bioresonance, electroacupuncture, applied kinesiology and the ALCAT test. These methods lack both the technical and clinical validation needed to justify their use. II. Tests with real data but misleading interpretation: detection of IgG or IgG4 antibodies or lymphocyte proliferation tests against foods do not distinguish healthy from diseased subjects, whether in cases of food intolerance, allergy or other diagnoses. The absence of diagnostic specificity induces many false-positive findings in healthy subjects. As a result, unjustified diets might limit quality of life and lead to malnutrition. Proliferation of lymphocytes in response to foods can show elevated rates in patients with allergies, but these values do not allow individual diagnosis of hypersensitivity because of their broad variation. Successful internet marketing, infiltration of academic programs and superficial reporting by the media promote the popularity of unqualified diagnostic tests, also in allergy. Therefore, critical observation, prompt analysis of, and clear commentary on unqualified methods by the scientific medical societies are more important than ever.

  16. 30 CFR 36.41 - Testing methods.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Testing methods. 36.41 Section 36.41 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR TESTING, EVALUATION, AND APPROVAL OF... Requirements § 36.41 Testing methods. Mobile diesel-powered transportation equipment submitted for...

  17. The estrogen hypothesis of schizophrenia implicates glucose metabolism

    DEFF Research Database (Denmark)

    Olsen, Line; Hansen, Thomas; Jakobsen, Klaus D

    2008-01-01

    expression studies have indicated an equally large set of candidate genes that only partially overlap linkage genes. A thorough assessment, beyond the resolution of current GWA studies, of the disease risk conferred by the numerous schizophrenia candidate genes is a daunting and presently not feasible task. We undertook these challenges by using an established clinical paradigm, the estrogen hypothesis of schizophrenia, as the criterion to select candidates among the numerous genes experimentally implicated in schizophrenia. Bioinformatic tools were used to build and prioritize the signaling networks implicated by the candidate genes resulting from the estrogen selection. We identified ten candidate genes using this approach that are all active in glucose metabolism and particularly in glycolysis. Thus, we tested the hypothesis that variants of the glycolytic genes are associated with schizophrenia.

  18. The Lehman Sisters Hypothesis: an exploration of literature and bankers

    NARCIS (Netherlands)

    I.P. van Staveren (Irene)

    2012-01-01

    textabstractAbstract This article tests the Lehman Sisters Hypothesis in two complementary, although incomplete ways. It reviews the diverse empirical literature in behavioral, experimental, and neuroeconomics as well as related fields of behavioral research. And it presents the findings from an

  20. Testing the phenotype-linked fertility hypothesis in the presence and absence of inbreeding

    Czech Academy of Sciences Publication Activity Database

    Forstmeier, W.; Ihle, M.; Opatová, Pavlína; Martin, K.; Knief, U.; Albrechtová, Jana; Albrecht, Tomáš; Kempenaers, B.

    2017-01-01

    Roč. 30, č. 5 (2017), s. 968-976 ISSN 1010-061X R&D Projects: GA ČR(CZ) GAP506/12/2472 Institutional support: RVO:68081766 Keywords : display behaviour * mate choice * phenotype-linked fertility hypothesis * precopulatory traits * sexual selection * sperm abnormalities * sperm quality * sperm velocity Subject RIV: EB - Genetics ; Molecular Biology OBOR OECD: Genetics and heredity (medical genetics to be 3) Impact factor: 2.792, year: 2016

  1. Robust estimation and hypothesis testing

    CERN Document Server

    Tiku, Moti L

    2004-01-01

    In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable with assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue into focus. In this book, we have given statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics are covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...

  2. Standard Test Methods for Wet Insulation Integrity Testing of Photovoltaic Modules

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2007-01-01

    1.1 These test methods provide procedures to determine the insulation resistance of a photovoltaic (PV) module, i.e. the electrical resistance between the module's internal electrical components and its exposed, electrically conductive, non-current carrying parts and surfaces. 1.2 The insulation integrity procedures are a combination of wet insulation resistance and wet dielectric voltage withstand test procedures. 1.3 These procedures are similar to and reference the insulation integrity test procedures described in Test Methods E 1462, with the difference being that the photovoltaic module under test is immersed in a wetting solution during the procedures. 1.4 These test methods do not establish pass or fail levels. The determination of acceptable or unacceptable results is beyond the scope of these test methods. 1.5 The values stated in SI units are to be regarded as the standard. 1.6 There is no similar or equivalent ISO standard. 1.7 This standard does not purport to address all of the safety conce...

  3. Simultaneity modeling analysis of the environmental Kuznets curve hypothesis

    International Nuclear Information System (INIS)

    Ben Youssef, Adel; Hammoudeh, Shawkat; Omri, Anis

    2016-01-01

    The environmental Kuznets curve (EKC) hypothesis has been recognized in the environmental economics literature since the 1990s. Various statistical tests have been used on time series, cross-section and panel data related to single countries and groups of countries to validate this hypothesis. In the literature, the validation has always been conducted by using a single equation. However, since both the environment and income variables are endogenous, the estimation of a single-equation model when simultaneity exists produces inconsistent and biased estimates. Therefore, we formulate simultaneous two-equation models to investigate the EKC hypothesis for fifty-six countries, using annual panel data from 1990 to 2012, with the end year determined by data availability for the panel. To make the panel data analysis more homogeneous, we investigate this issue for three income-based panels (namely, high-, middle-, and low-income panels) given several explanatory variables. Our results indicate that there exists a bidirectional causality between economic growth and pollution emissions in the overall panels. We also find that the relationship is nonlinear and has an inverted U-shape for all the considered panels. Policy implications are provided. - Highlights: • We give a new look at the validity of the EKC hypothesis. • We formulate two simultaneous-equation models to validate this hypothesis for fifty-six countries. • We find a bidirectional causality between economic growth and pollution emissions. • We also discover an inverted U-shaped relationship between environmental degradation and economic growth. • This relationship varies at different stages of economic development.
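
    The inverted-U finding can be illustrated with a small sketch (not the authors' simultaneous-equation model): fit emissions = a + b·income + c·income², and an inverted U corresponds to c < 0 with an interior turning point at -b/(2c). All data below are synthetic, for illustration only.

```python
# Illustrative EKC-shape check: quadratic least squares in pure Python.
# Synthetic data only; the paper's actual estimation is a simultaneous
# two-equation panel model, which this sketch does not reproduce.

def solve3(A, y):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [v] for row, v in zip(A, y)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for k in range(col, 4):
                M[r][k] -= f * M[col][k]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x^2 via the normal equations."""
    n = len(xs)
    s = lambda p: sum(x ** p for x in xs)
    A = [[n, s(1), s(2)], [s(1), s(2), s(3)], [s(2), s(3), s(4)]]
    rhs = [sum(ys),
           sum(x * y for x, y in zip(xs, ys)),
           sum(x * x * y for x, y in zip(xs, ys))]
    return solve3(A, rhs)  # returns a, b, c

income = [1, 2, 3, 4, 5, 6, 7, 8, 9]
emissions = [30 - (x - 5) ** 2 for x in income]  # exact inverted U, peak at income = 5
a, b, c = fit_quadratic(income, emissions)
# c < 0 with turning point -b/(2c) inside the sample indicates an EKC shape
```

    In applied work the same sign-and-turning-point check is run on estimated panel coefficients rather than on a synthetic curve.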

  4. Biological fingerprint of high LET radiation. Brenner hypothesis

    International Nuclear Information System (INIS)

    Kodama, Yoshiaki; Awa, Akio; Nakamura, Nori

    1997-01-01

    The hypothesis of Brenner et al. (1994) that, for chromosome aberrations induced in human peripheral lymphocytes by radiation exposure, the F value (the ratio of dicentrics to rings) depends on the LET and can therefore serve as a biomarker of high-LET radiation such as neutrons and α-rays was reviewed and evaluated as follows. Radiation and chromosome aberrations: in this section, unstable aberrations such as dicentrics and rings (r) and stable ones such as translocations and pericentric inversions are described. The F value. The Brenner hypothesis. Bauchinger's refutation. The F value determined by the FISH (fluorescence in situ hybridization) method. The F value in studies by the authors' facility. The frequency of chromosome aberrations in A-bomb survivors and ESR (electron spin resonance). The causes of fluctuation in F values. The Brenner hypothesis was not supported by studies at the authors' facility, suggesting that the rates of inter-chromosomal and intra-chromosomal exchange aberrations cannot be distinguished by the radiation LET. This may derive from differences in the detection technology for rings rather than in LET. (K.H.)

  5. The insignificance of statistical significance testing

    Science.gov (United States)

    Johnson, Douglas H.

    1999-01-01

    Despite their use in scientific journals such as The Journal of Wildlife Management, statistical hypothesis tests add very little value to the products of research. Indeed, they frequently confuse the interpretation of data. This paper describes how statistical hypothesis tests are often viewed, and then contrasts that interpretation with the correct one. I discuss the arbitrariness of P-values, conclusions that the null hypothesis is true, power analysis, and distinctions between statistical and biological significance. Statistical hypothesis testing, in which the null hypothesis about the properties of a population is almost always known a priori to be false, is contrasted with scientific hypothesis testing, which examines a credible null hypothesis about phenomena in nature. More meaningful alternatives are briefly outlined, including estimation and confidence intervals for determining the importance of factors, decision theory for guiding actions in the face of uncertainty, and Bayesian approaches to hypothesis testing and other statistical practices.
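
    The alternative the author favours, estimation with confidence intervals, fits in a few lines. A hedged sketch (synthetic measurements; the large-sample normal quantile z = 1.96 is used instead of a t quantile for brevity):

```python
from math import sqrt

# Report an effect size with an interval estimate instead of a bare
# reject/accept verdict. Data are synthetic, for illustration only.
data = [2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3, 2.1, 2.0, 2.2]
n = len(data)
mean = sum(data) / n
var = sum((x - mean) ** 2 for x in data) / (n - 1)  # sample variance
se = sqrt(var / n)                                  # standard error of the mean
lo, hi = mean - 1.96 * se, mean + 1.96 * se         # approximate 95% CI
```

    The interval conveys both the magnitude of the effect and its precision, which a lone P-value does not.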

  6. Migratory birds, the H5N1 influenza virus and the scientific method

    Directory of Open Access Journals (Sweden)

    Stilianakis Nikolaos I

    2008-05-01

    Abstract Background The role of migratory birds and of poultry trade in the dispersal of highly pathogenic H5N1 is still the topic of intense and controversial debate. In a recent contribution to this journal, Flint argues that the strict application of the scientific method can help to resolve this issue. Discussion We argue that Flint's identification of the scientific method with null hypothesis testing is misleading and counterproductive. There is far more to science than the testing of hypotheses; not only the justification, but also the discovery of hypotheses belongs to science. We also show why null hypothesis testing is weak and why Bayesian methods are a preferable approach to statistical inference. Furthermore, we criticize the analogy put forward by Flint between involuntary transport of poultry and long-distance migration. Summary To expect ultimate answers and unequivocal policy guidance from null hypothesis testing puts unrealistic expectations on a flawed approach to statistical inference and on science in general.
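
    The Bayesian alternative the authors advocate can be sketched minimally: rather than testing a null, compute the posterior probability of the hypothesis itself. A grid-approximation sketch under assumed numbers (7 "positives" in 10 trials, uniform prior on the rate theta):

```python
# Minimal Bayesian sketch (illustrative, not from the article): the posterior
# probability that a rate theta exceeds 0.5, given 7 successes in 10 trials.
grid = [i / 10000 for i in range(10001)]
likelihood = [t ** 7 * (1 - t) ** 3 for t in grid]  # binomial kernel, k=7, n=10
total = sum(likelihood)
posterior = [l / total for l in likelihood]         # uniform prior: just normalize
p_gt_half = sum(w for t, w in zip(grid, posterior) if t > 0.5)
```

    The grid result approximates the exact Beta(8, 4) posterior, for which P(theta > 0.5) ≈ 0.887; the output is a direct probability statement about the hypothesis, not a P-value under a null.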

  7. Testing for purchasing power parity in the long-run for ASEAN-5

    Science.gov (United States)

    Choji, Niri Martha; Sek, Siok Kun

    2017-04-01

    For more than a decade, there has been substantial interest in empirically testing the validity of the purchasing power parity (PPP) hypothesis. This paper tests for long-run relative purchasing power parity in a group of ASEAN-5 countries over the period 1996-2016 using monthly data. For this purpose, we used the Pedroni co-integration method to test the long-run hypothesis of purchasing power parity. We first tested for the stationarity of the variables and found that they are non-stationary in levels but stationary in first differences. Results of the Pedroni test rejected the null hypothesis of no co-integration, meaning that we have enough evidence to support PPP in the long run for the ASEAN-5 countries over the period 1996-2016. In other words, the rejection of the null hypothesis implies a long-run relation between nominal exchange rates and relative prices.
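
    The stationarity step described above rests on unit-root regressions. A hedged, pure-Python sketch of the Dickey-Fuller idea (not the Pedroni panel statistic), using a synthetic series: regressing the first difference on the lagged level, a clearly negative slope suggests mean reversion (stationarity), while a slope near zero suggests a unit root.

```python
import random

# Synthetic stationary AR(1) series, phi = 0.3; seed fixed for reproducibility.
random.seed(42)
y = [0.0]
for _ in range(300):
    y.append(0.3 * y[-1] + random.gauss(0, 1))

# Dickey-Fuller-style regression: dy_t = const + slope * y_{t-1} + error,
# where slope estimates (phi - 1); here it should be close to -0.7.
dy = [y[t] - y[t - 1] for t in range(1, len(y))]
ylag = y[:-1]
ybar = sum(ylag) / len(ylag)
dbar = sum(dy) / len(dy)
slope = sum((x - ybar) * (d - dbar) for x, d in zip(ylag, dy)) / \
        sum((x - ybar) ** 2 for x in ylag)
```

    In practice the slope's t-statistic is compared to Dickey-Fuller (not normal) critical values, and panel methods such as Pedroni's pool many such regressions.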

  8. Myasthenia Gravis: Tests and Diagnostic Methods

    Science.gov (United States)

    ... Test & Diagnostic methods: In addition to a complete medical and neurological ...

  9. Floral and mating system divergence in secondary sympatry: testing an alternative hypothesis to reinforcement in Clarkia

    Science.gov (United States)

    Briscoe Runquist, Ryan D.; Moeller, David A.

    2014-01-01

    Background and Aims Reproductive character displacement (RCD) is often an important signature of reinforcement when partially cross-compatible taxa meet in secondary sympatry. In this study, floral evolution is examined during the Holocene range expansion of Clarkia xantiana subsp. parviflora from eastern Pleistocene refugia to a western zone of sympatry with its sister taxon, subsp. xantiana. Floral divergence between the two taxa is greater in sympatry than allopatry. The goal was to test an alternative hypothesis to reinforcement – that floral divergence of sympatric genotypes is simply a by-product of adaptation to pollination environments that differ between the allopatric and sympatric portions of the subspecies' range. Methods Floral trait data from two common garden studies were used to examine floral divergence between sympatric and allopatric regions and among phylogeographically defined lineages. In natural populations of C. x. parviflora, the magnitude of pollen limitation and reproductive assurance were quantified across its west-to-east range. Potted sympatric and allopatric genotypes were also reciprocally translocated between geographical regions to distinguish between the effects of floral phenotype versus contrasting pollinator environments on reproductive ecology. Key Results Sympatric populations are considerably smaller flowered with reduced herkogamy. Pollen limitation and the reproductive assurance value of selfing are greater in sympatric than in allopatric populations. Most significantly, reciprocal translocation experiments showed these differences in reproductive ecology cannot be attributed to contrasting pollinator environments between the sympatric and allopatric regions, but instead reflect the effects of flower size on pollinator attraction. Conclusions Floral evolution occurred during the westward range expansion of parviflora, particularly in the zone of sympatry with xantiana. No evidence was found that strongly reduced flower

  10. Development of Dissolution Test Method for Drotaverine ...

    African Journals Online (AJOL)

    Development of Dissolution Test Method for Drotaverine ... Methods: Sink conditions, drug stability and specificity in different dissolution media were tested to optimize a dissolution test .... test by Prism 4.0 software, and differences between ...

  11. New Graphical Methods and Test Statistics for Testing Composite Normality

    Directory of Open Access Journals (Sweden)

    Marc S. Paolella

    2015-07-01

    Several graphical methods for testing univariate composite normality from an i.i.d. sample are presented. They are endowed with correct simultaneous error bounds and yield size-correct tests. As all are based on the empirical CDF, they are also consistent for all alternatives. For one test, called the modified stabilized probability test, or MSP, a highly simplified computational method is derived, which delivers the test statistic and also a highly accurate p-value approximation, essentially instantaneously. The MSP test is demonstrated to have higher power against asymmetric alternatives than the well-known and powerful Jarque-Bera test. A further size-correct test, based on combining two test statistics, is shown to have yet higher power. The methodology employed is fully general and can be applied to any i.i.d. univariate continuous distribution setting.
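
    The Jarque-Bera statistic used as the power benchmark above has a simple closed form, JB = n/6 · (S² + (K − 3)²/4), where S and K are the sample skewness and kurtosis. A minimal sketch (biased central moments, tiny illustrative sample):

```python
# Jarque-Bera statistic from sample moments; the sample here is a tiny
# symmetric toy set, so the skewness term vanishes exactly.

def jarque_bera(xs):
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)

jb = jarque_bera([-2, -1, 0, 1, 2])  # symmetric sample: only the kurtosis term contributes
```

    A JB value exceeding the chi-square(2) quantile (5.99 at the 5% level) signals non-normality; this toy value is far below it.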

  12. Eddy currents non-destructive testing. use of a numeric/symbolic method to separate and characterize the transitions of a signal

    International Nuclear Information System (INIS)

    Benas, J.C.; Lefevre, F.; Gaillard, P.; Georgel, B.

    1995-01-01

    This paper presents an original numeric/symbolic method for solving an inverse problem in the field of non-destructive testing. The purpose of this method is to characterize the transitions of a signal even when they are superimposed. Its principle is to solve as many direct problems as necessary to obtain the solution, and to use hypotheses to manage the reasoning of the process. The direct problem calculation yields a 'model signal', and the solution is reached when the model signal is close to the measured one. This method calculates the directions of minimization through symbolic reasoning based on the peaks of the residual signal. The results of the method are good and seem very promising. (authors). 13 refs., 13 figs., 5 tabs

  13. Hypothesis, Prediction, and Conclusion: Using Nature of Science Terminology Correctly

    Science.gov (United States)

    Eastwell, Peter

    2012-01-01

    This paper defines the terms "hypothesis," "prediction," and "conclusion" and shows how to use the terms correctly in scientific investigations in both the school and science education research contexts. The scientific method, or hypothetico-deductive (HD) approach, is described and it is argued that an understanding of the scientific method,…

  14. The environmental convergence hypothesis: Carbon dioxide emissions according to the source of energy

    International Nuclear Information System (INIS)

    Herrerias, M.J.

    2013-01-01

    The aim of this paper is to investigate the environmental convergence hypothesis in carbon dioxide emissions for a large group of developed and developing countries from 1980 to 2009. The novel aspect of this work is that we distinguish among carbon dioxide emissions according to the source of energy (coal, natural gas and petroleum) instead of considering the aggregate measure of per capita carbon dioxide emissions; particular attention is given to the regional dimension through the application of new club convergence tests. This allows us to determine the convergence behaviour of emissions more precisely and to detect it according to the source of energy used, thereby helping to address environmental targets. More specifically, the convergence hypothesis is examined with a pair-wise test, and a second test is used to test for the existence of club convergence. Our results from the pair-wise test indicate that carbon dioxide emissions for each type of energy diverge. However, club convergence is found for a large group of countries, although some still display divergence. These findings point to the need to apply specific environmental policies to each club detected, since different countries converge to different clubs. - Highlights: • The environmental convergence hypothesis is investigated across countries. • We perform a pair-wise test and a club convergence test. • Results from the first of these two tests suggest that carbon dioxide emissions are diverging. • However, we find that carbon dioxide emissions are converging within groups of countries. • Active environmental policies are required.

  15. A test of the nest sanitation hypothesis for the evolution of foreign egg rejection in an avian brood parasite rejecter host species.

    Science.gov (United States)

    Luro, Alec B; Hauber, Mark E

    2017-04-01

    Hosts of avian brood parasites have evolved diverse defenses to avoid the costs associated with raising brood parasite nestlings. In egg ejection, the host recognizes and removes foreign eggs laid in its nest. Nest sanitation, a behavior similar in motor pattern to egg ejection, has been proposed repeatedly as a potential pre-adaptation to egg ejection. Here, we separately placed blue 3D-printed, brown-headed cowbird (Molothrus ater) eggs known to elicit interindividual variation in ejection responses and semi-natural leaves into American robins' (Turdus migratorius) nests to test proximate predictions that (1) rejecter hosts should sanitize debris from nests more frequently and consistently than accepter hosts and (2) hosts that sanitize their nests of debris prior to the presentation of a foreign egg will be more likely to eject the foreign egg. Egg ejection responses were highly repeatable within individuals yet variable between them, but were not influenced by prior exposure to debris, nor related to sanitation tendencies as a whole, because nearly all individuals sanitized their nests. Additionally, we collected published data for eight different host species to test for a potential positive correlation between sanitation and egg ejection. We found no significant correlation between nest sanitation and egg ejection rates; however, our comparative analysis was limited to a sample size of 8, and we advise that more data from additional species are necessary to properly address interspecific tests of the pre-adaptation hypothesis. Lacking support for the nest sanitation hypothesis, our study suggests that, within individuals, foreign egg ejection is distinct from nest sanitation tendencies, and that sanitation and foreign egg ejection may not correlate across species.

  16. Standard test methods for arsenic in uranium hexafluoride

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2005-01-01

    1.1 These test methods are applicable to the determination of total arsenic in uranium hexafluoride (UF6) by atomic absorption spectrometry. Two test methods are given: Test Method A—Arsine Generation-Atomic Absorption (Sections 5-10), and Test Method B—Graphite Furnace Atomic Absorption (Appendix X1). 1.2 The test methods are equivalent. The limit of detection for each test method is 0.1 μg As/g U when using a sample containing 0.5 to 1.0 g U. Test Method B does not have the complete collection details for precision and bias data thus the method appears as an appendix. 1.3 Test Method A covers the measurement of arsenic in uranyl fluoride (UO2F2) solutions by converting arsenic to arsine and measuring the arsine vapor by flame atomic absorption spectrometry. 1.4 Test Method B utilizes a solvent extraction to remove the uranium from the UO2F2 solution prior to measurement of the arsenic by graphite furnace atomic absorption spectrometry. 1.5 Both insoluble and soluble arsenic are measured when UF6 is...

  17. A simplification of the likelihood ratio test statistic for testing ...

    African Journals Online (AJOL)

    The traditional likelihood ratio test statistic for testing hypotheses about goodness of fit of multinomial probabilities in one-, two- and multi-dimensional contingency tables was simplified. Advantageously, using the simplified version of the statistic to test the null hypothesis is easier and faster because calculating the expected ...
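
    The unsimplified statistic referred to above has the standard form G = 2 · Σ O ln(O/E) for observed counts O and expected counts E. A small sketch with illustrative counts (equal cell probabilities under the null):

```python
from math import log

# Likelihood-ratio (G) statistic for multinomial goodness of fit.
# Counts are illustrative only.
observed = [30, 20, 50]
n = sum(observed)
expected = [n / 3] * 3  # null hypothesis: equal cell probabilities
G = 2 * sum(o * log(o / e) for o, e in zip(observed, expected))
# G is compared to a chi-square with (cells - 1) = 2 degrees of freedom
```

    Here G ≈ 13.79, well above the 5% chi-square(2) critical value of 5.991, so the equal-probability null would be rejected.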

  18. 40 CFR 63.465 - Test methods.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 9 2010-07-01 2010-07-01 false Test methods. 63.465 Section 63.465... Halogenated Solvent Cleaning § 63.465 Test methods. (a) Except as provided in paragraphs (f) and (g) of this... Reference Method 307 in appendix A of this part. (b) Except as provided in paragraph (g) of this section for...

  19. Standard test method for dynamic tear testing of metallic materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1983-01-01

    1.1 This test method covers the dynamic tear (DT) test using specimens that are 3/16 in. to 5/8 in. (5 mm to 16 mm) inclusive in thickness. 1.2 This test method is applicable to materials with a minimum thickness of 3/16 in. (5 mm). 1.3 The pressed-knife procedure described for sharpening the notch tip generally limits this test method to materials with a hardness level less than 36 HRC. Note 1—The designation 36 HRC is a Rockwell hardness number of 36 on Rockwell C scale as defined in Test Methods E 18. 1.4 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. 1.5 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  20. Test of determination of nucleon structure functions in the hypothesis of scalar di-quark existence

    International Nuclear Information System (INIS)

    Tavernier, P.; Dugne, J.J.

    1992-01-01

    The authors present nucleon structure functions obtained under the hypothesis of the existence of a scalar di-quark that is progressively broken with increasing energy of the electromagnetic probe (Stockholm model). Comparisons with other models and with experimental results are presented. 20 figs

  1. Order information and free recall: evaluating the item-order hypothesis.

    Science.gov (United States)

    Mulligan, Neil W; Lozito, Jeffrey P

    2007-05-01

    The item-order hypothesis proposes that order information plays an important role in recall from long-term memory, and it is commonly used to account for the moderating effects of experimental design in memory research. Recent research (Engelkamp, Jahn, & Seiler, 2003; McDaniel, DeLosh, & Merritt, 2000) raises questions about the assumptions underlying the item-order hypothesis. Four experiments tested these assumptions by examining the relationship between free recall and order memory for lists of varying length (8, 16, or 24 unrelated words or pictures). Some groups were given standard free-recall instructions, other groups were explicitly instructed to use order information in free recall, and other groups were given free-recall tests intermixed with tests of order memory (order reconstruction). The results for short lists were consistent with the assumptions of the item-order account. For intermediate-length lists, explicit order instructions and intermixed order tests made recall more reliant on order information, but under standard conditions, order information played little role in recall. For long lists, there was little evidence that order information contributed to recall. In sum, the assumptions of the item-order account held for short lists, received mixed support with intermediate lists, and received no support for longer lists.

  2. The community conditioning hypothesis and its application to environmental toxicology

    International Nuclear Information System (INIS)

    Matthews, R.A.; Landis, W.G.; Matthews, G.B.

    1996-01-01

    In this paper the authors present the community conditioning hypothesis: ecological communities retain information about events in their history. This hypothesis, which was derived from the concept of nonequilibrium community ecology, was developed as a framework for understanding the persistence of dose-related responses in multispecies toxicity tests. The authors present data from three standardized aquatic microcosm (SAM) toxicity tests using the water-soluble fractions from turbine fuels (Jet-A, JP-4, and JP-8). In all three tests, the toxicants depressed the Daphnia populations for several weeks, which resulted in algal blooms in the dosed microcosms due to lower predation rates. These effects were short-lived, and by the second and third months of the experiments, the Daphnia populations appeared to have recovered. However, multivariate analysis of the data revealed dose/response differences that reappeared during the later part of the tests, often due to differences in other consumers (rotifers, ostracods, ciliates) or in algae that are not normally consumed (filamentous green algae and blue-green algae). The findings are consistent with ecological theories that describe communities as the unique product of their histories. The implication of this for environmental toxicology is that almost all environmental events leave lasting effects, whether or not we have observed them

  3. The Feldstein-Horioka Hypothesis in Countries with Varied Levels of Economic Development

    Directory of Open Access Journals (Sweden)

    Piotr Misztal

    2011-06-01

    This article aims to analyse the Feldstein-Horioka hypothesis, which suggests a strong correlation between investment and savings in advanced economies. Additionally, the analysis of the Feldstein-Horioka hypothesis was expanded to include emerging markets and developing economies in order to provide a thorough analysis of this issue. The paper utilises a research method based on bibliographic studies in macroeconomics and international finance as well as econometric methods (the vector autoregressive model, VAR). All statistical data used in the paper are taken from the statistical database of the International Monetary Fund (World Economic Outlook Database).
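
    The Feldstein-Horioka correlation is usually summarized by the "savings-retention" coefficient b in the regression I/Y = a + b · (S/Y), with b near 1 read as evidence of low capital mobility. A pure-Python OLS sketch (not the article's VAR; the rates below are synthetic assumptions, built so that b = 0.8 exactly):

```python
# Feldstein-Horioka-style regression of investment rates on saving rates.
# Synthetic data: investment is constructed from savings with b = 0.8.
savings = [0.18, 0.20, 0.22, 0.25, 0.30]
investment = [0.05 + 0.8 * s for s in savings]

sbar = sum(savings) / len(savings)
ibar = sum(investment) / len(investment)
b = sum((s - sbar) * (i - ibar) for s, i in zip(savings, investment)) / \
    sum((s - sbar) ** 2 for s in savings)  # savings-retention coefficient
a = ibar - b * sbar                        # intercept
```

    With real cross-country data the interest lies in how far the estimated b falls below 1 for more financially open economies.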

  4. Testing the snake-detection hypothesis: larger early posterior negativity in humans to pictures of snakes than to pictures of other reptiles, spiders and slugs.

    Science.gov (United States)

    Van Strien, Jan W; Franken, Ingmar H A; Huijding, Jorg

    2014-01-01

    According to the snake detection hypothesis (Isbell, 2006), fear specifically of snakes may have pushed evolutionary changes in the primate visual system allowing pre-attentional visual detection of fearful stimuli. A previous study demonstrated that snake pictures, when compared to spiders or bird pictures, draw more early attention as reflected by larger early posterior negativity (EPN). Here we report two studies that further tested the snake detection hypothesis. In Study 1, we tested whether the enlarged EPN is specific for snakes or also generalizes to other reptiles. Twenty-four healthy, non-phobic women watched the random rapid serial presentation of snake, crocodile, and turtle pictures. The EPN was scored as the mean activity at occipital electrodes (PO3, O1, Oz, PO4, O2) in the 225-300 ms time window after picture onset. The EPN was significantly larger for snake pictures than for pictures of the other reptiles. In Study 2, we tested whether disgust plays a role in the modulation of the EPN and whether preferential processing of snakes also can be found in men. 12 men and 12 women watched snake, spider, and slug pictures. Both men and women exhibited the largest EPN amplitudes to snake pictures, intermediate amplitudes to spider pictures and the smallest amplitudes to slug pictures. Disgust ratings were not associated with EPN amplitudes. The results replicate previous findings and suggest that ancestral priorities modulate the early capture of visual attention.
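
    The EPN scoring described above (mean amplitude in the 225-300 ms window after picture onset) reduces to simple averaging over a sample window. A sketch with an assumed 1000 Hz sampling rate and one synthetic channel (the study averaged five occipital electrodes):

```python
# Mean ERP amplitude in a post-stimulus time window.
# Sampling rate and voltages are assumptions; real data would be a
# baseline-corrected average over electrodes PO3, O1, Oz, PO4, O2.
rate = 1000                          # samples per second (assumed)
erp = [0.1 * t for t in range(400)]  # synthetic single-channel trace, microvolts
lo, hi = round(0.225 * rate), round(0.300 * rate)
window = erp[lo:hi + 1]              # samples from 225 ms to 300 ms inclusive
epn = sum(window) / len(window)      # mean amplitude in the window
```

    Condition effects are then tested by comparing these window means across stimulus categories (snakes vs. spiders vs. slugs).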

  5. Biomechanical spinal growth modulation and progressive adolescent scoliosis – a test of the 'vicious cycle' pathogenetic hypothesis: Summary of an electronic focus group debate of the IBSE

    Directory of Open Access Journals (Sweden)

    Burwell R Geoffrey

    2006-10-01

    Abstract There is no generally accepted scientific theory for the causes of adolescent idiopathic scoliosis (AIS). As part of its mission to widen understanding of scoliosis etiology, the International Federated Body on Scoliosis Etiology (IBSE) introduced the electronic focus group (EFG) as a means of increasing debate on knowledge of important topics. This has been designated as an on-line Delphi discussion. The text for this debate was written by Dr Ian A Stokes. It evaluates the hypothesis that in progressive scoliosis vertebral body wedging during adolescent growth results from asymmetric muscular loading in a "vicious cycle" (vicious cycle hypothesis) of pathogenesis by affecting vertebral body growth plates (endplate physes). A frontal plane mathematical simulation tested whether the calculated loading asymmetry created by muscles in a scoliotic spine could explain the observed rate of scoliosis increase by measuring the vertebral growth modulation by altered compression. The model deals only with vertebral (not disc) wedging. It assumes that a pre-existing scoliosis curve initiates the mechanically-modulated alteration of vertebral body growth that in turn causes worsening of the scoliosis, while everything else is anatomically and physiologically 'normal'. The results provide quantitative data consistent with the vicious cycle hypothesis. Dr Stokes' biomechanical research engenders controversy. A new speculative concept is proposed of vertebral symphyseal dysplasia, with implications for Dr Stokes' research and the etiology of AIS. What is not controversial is the need to test this hypothesis using additional factors in his current model and in three-dimensional quantitative models that incorporate intervertebral discs and simulate thoracic as well as lumbar scoliosis. The growth modulation process in the vertebral body can be viewed as one type of the biologic phenomenon of mechanotransduction. In certain connective tissues this involves the

  6. Reverse hypothesis machine learning a practitioner's perspective

    CERN Document Server

    Kulkarni, Parag

    2017-01-01

    This book introduces the paradigm of reverse hypothesis machines (RHM), focusing on knowledge innovation and machine learning. Knowledge-acquisition-based learning is constrained by large volumes of data and is time consuming; hence, knowledge-innovation-based learning is the need of the hour. Since under-learning results in cognitive inabilities and over-learning compromises freedom, there is a need for optimal machine learning. All existing learning techniques rely on mapping input and output and establishing mathematical relationships between them. Though methods change, the paradigm remains the same: the forward hypothesis machine paradigm, which tries to minimize uncertainty. The RHM, on the other hand, makes use of uncertainty for creative learning. The approach uses limited data to help identify new and surprising solutions. It focuses on improving learnability, unlike traditional approaches, which focus on accuracy. The book is useful as a reference book for machine learning researchers and professionals as ...

  7. 40 CFR 59.207 - Test methods.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Test methods. 59.207 Section 59.207 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL... Compound Emission Standards for Consumer Products § 59.207 Test methods. Each manufacturer or importer...

  8. [Seed quality test methods of Paeonia suffruticosa].

    Science.gov (United States)

    Cao, Ya-Yue; Zhu, Zai-Biao; Guo, Qiao-Sheng; Liu, Li; Wang, Chang-Lin

    2014-11-01

    In order to optimize the testing methods for Paeonia suffruticosa seed quality, and to provide a basis for establishing seed testing rules and a seed quality standard for P. suffruticosa, the seed quality of P. suffruticosa from different producing areas was measured based on the related seed testing regulations. Seed testing methods for the quality items of P. suffruticosa were established preliminarily. The sample weight of P. suffruticosa was at least 7 000 g for purity analysis and at least 700 g for other tests. Phenotypic observation and size measurement were used for authenticity testing. The 1 000-seed weight was determined by the 100-seed method, and the water content was determined by the low-temperature drying method (10 hours). After soaking in distilled water for 24 h, the seeds were treated with different day/night temperature stratifications (25 degrees C/20 degrees C, day/night) in the dark for 60 d. After soaking in a solution of GA3 300 mg x L(-1) for 24 h, the P. suffruticosa seeds were cultured in wet sand at 15 degrees C for 12-60 days for germination testing. Seed viability was tested by the TTC method.

  9. Standard Test Method for Sandwich Corrosion Test

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method defines the procedure for evaluating the corrosivity of aircraft maintenance chemicals, when present between faying surfaces (sandwich) of aluminum alloys commonly used for aircraft structures. This test method is intended to be used in the qualification and approval of compounds employed in aircraft maintenance operations. 1.2 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information. 1.3 This standard may involve hazardous materials, operations, and equipment. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. Specific hazard statements appear in Section 9.

  10. Evidence for Enhanced Mutualism Hypothesis: Solidago canadensis Plants from Regular Soils Perform Better

    OpenAIRE

    Sun, Zhen-Kai; He, Wei-Ming

    2010-01-01

    The important roles of plant-soil microbe interactions have been documented in exotic plant invasion, but we know very little about how soil mutualists enhance this process (i.e. the enhanced mutualism hypothesis). To test this hypothesis we conducted two greenhouse experiments with Solidago canadensis (hereafter Solidago), an invasive forb from North America, and Stipa bungeana (hereafter Stipa), a native Chinese grass. In a germination experiment, we found soil microbes from the rhizospheres of...

  11. Test Methods for Robot Agility in Manufacturing.

    Science.gov (United States)

    Downs, Anthony; Harrison, William; Schlenoff, Craig

    2016-01-01

    The paper aims to define and describe test methods and metrics to assess industrial robot system agility in both simulation and in reality. The paper describes test methods and associated quantitative and qualitative metrics for assessing robot system efficiency and effectiveness which can then be used for the assessment of system agility. The paper describes how the test methods were implemented in a simulation environment and real world environment. It also shows how the metrics are measured and assessed as they would be in a future competition. The test methods described in this paper will push forward the state of the art in software agility for manufacturing robots, allowing small and medium manufacturers to better utilize robotic systems. The paper fulfills the identified need for standard test methods to measure and allow for improvement in software agility for manufacturing robots.

  12. Testing the Sensory Drive Hypothesis: Geographic variation in echolocation frequencies of Geoffroy's horseshoe bat (Rhinolophidae: Rhinolophus clivosus).

    Science.gov (United States)

    Jacobs, David S; Catto, Sarah; Mutumi, Gregory L; Finger, Nikita; Webala, Paul W

    2017-01-01

    Geographic variation in sensory traits is usually influenced by adaptive processes because these traits are involved in crucial life-history aspects including orientation, communication, lineage recognition and mate choice. Studying this variation can therefore provide insights into lineage diversification. According to the Sensory Drive Hypothesis, lineage diversification may be driven by adaptation of sensory systems to local environments. It predicts that acoustic signals vary in association with local climatic conditions so that atmospheric attenuation is minimized and transmission of the signals maximized. To test this prediction, we investigated the influence of climatic factors (specifically relative humidity and temperature) on geographic variation in the resting frequencies of the echolocation pulses of Geoffroy's horseshoe bat, Rhinolophus clivosus. If the evolution of phenotypic variation in this lineage tracks climate variation, human induced climate change may lead to decreases in detection volumes and a reduction in foraging efficiency. A complex non-linear interaction between relative humidity and temperature affects atmospheric attenuation of sound and principal components composed of these correlated variables were, therefore, used in a linear mixed effects model to assess their contribution to observed variation in resting frequencies. A principal component composed predominantly of mean annual temperature (factor loading of -0.8455) significantly explained a proportion of the variation in resting frequency across sites (P < 0.05). Specifically, at higher relative humidity (around 60%) prevalent across the distribution of R. clivosus, increasing temperature had a strong negative effect on resting frequency. Climatic factors thus strongly influence acoustic signal divergence in this lineage, supporting the prediction of the Sensory Drive Hypothesis. The predicted future increase in temperature due to climate change is likely to decrease the

  13. Testing the Sensory Drive Hypothesis: Geographic variation in echolocation frequencies of Geoffroy's horseshoe bat (Rhinolophidae: Rhinolophus clivosus.

    Directory of Open Access Journals (Sweden)

    David S Jacobs

    Full Text Available Geographic variation in sensory traits is usually influenced by adaptive processes because these traits are involved in crucial life-history aspects including orientation, communication, lineage recognition and mate choice. Studying this variation can therefore provide insights into lineage diversification. According to the Sensory Drive Hypothesis, lineage diversification may be driven by adaptation of sensory systems to local environments. It predicts that acoustic signals vary in association with local climatic conditions so that atmospheric attenuation is minimized and transmission of the signals maximized. To test this prediction, we investigated the influence of climatic factors (specifically relative humidity and temperature) on geographic variation in the resting frequencies of the echolocation pulses of Geoffroy's horseshoe bat, Rhinolophus clivosus. If the evolution of phenotypic variation in this lineage tracks climate variation, human-induced climate change may lead to decreases in detection volumes and a reduction in foraging efficiency. A complex non-linear interaction between relative humidity and temperature affects atmospheric attenuation of sound and principal components composed of these correlated variables were, therefore, used in a linear mixed effects model to assess their contribution to observed variation in resting frequencies. A principal component composed predominantly of mean annual temperature (factor loading of -0.8455) significantly explained a proportion of the variation in resting frequency across sites (P < 0.05). Specifically, at higher relative humidity (around 60%) prevalent across the distribution of R. clivosus, increasing temperature had a strong negative effect on resting frequency. Climatic factors thus strongly influence acoustic signal divergence in this lineage, supporting the prediction of the Sensory Drive Hypothesis. The predicted future increase in temperature due to climate change is likely to

  14. Multiple hypothesis tracking based extraction of airway trees from CT data

    DEFF Research Database (Denmark)

    Raghavendra, Selvan; Petersen, Jens; de Bruijne, Marleen

    Segmentation of airway trees from CT scans of lungs has important clinical applications in relation to the diagnosis of chronic obstructive pulmonary disease (COPD). Here we present a method based on multiple hypothesis tracking (MHT) and template matching, originally devised for vessel segmentation, to extract airway trees. Idealized tubular templates are constructed and ranked using scores assigned based on the image data. Several such regularly spaced hypotheses are used in constructing a hypothesis tree, which is then traversed to obtain improved segmentation results.

  15. On the hypothesis-free testing of metabolite ratios in genome-wide and metabolome-wide association studies

    Directory of Open Access Journals (Sweden)

    Petersen Ann-Kristin

    2012-06-01

    Full Text Available Abstract Background Genome-wide association studies (GWAS) with metabolic traits and metabolome-wide association studies (MWAS) with traits of biomedical relevance are powerful tools to identify the contribution of genetic, environmental and lifestyle factors to the etiology of complex diseases. Hypothesis-free testing of ratios between all possible metabolite pairs in GWAS and MWAS has proven to be an innovative approach in the discovery of new biologically meaningful associations. The p-gain statistic was introduced as an ad-hoc measure to determine whether a ratio between two metabolite concentrations carries more information than the two corresponding metabolite concentrations alone. So far, only a rule of thumb was applied to determine the significance of the p-gain. Results Here we explore the statistical properties of the p-gain through simulation of its density and by sampling of experimental data. We derive critical values of the p-gain for different levels of correlation between metabolite pairs and show that B/(2*α) is a conservative critical value for the p-gain, where α is the level of significance and B the number of tested metabolite pairs. Conclusions We show that the p-gain is a well-defined measure that can be used to identify statistically significant metabolite ratios in association studies and provide a conservative significance cut-off for the p-gain for use in future association studies with metabolic traits.
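
The p-gain criterion described in this abstract can be sketched in a few lines. This is a generic illustration, not the authors' code; the counts and p-values below are invented for the example:

```python
def p_gain(p_ratio, p_m1, p_m2):
    """p-gain: factor by which the ratio's p-value improves on the
    better (smaller) of the two single-metabolite p-values."""
    return min(p_m1, p_m2) / p_ratio

def critical_p_gain(n_pairs, alpha=0.05):
    """Conservative significance cut-off B/(2*alpha) from the paper,
    where B is the number of tested metabolite pairs."""
    return n_pairs / (2 * alpha)

# Hypothetical numbers: 1000 tested pairs, ratio p-value far below
# both single-metabolite p-values
cutoff = critical_p_gain(1000)
gain = p_gain(p_ratio=1e-8, p_m1=1e-3, p_m2=5e-3)
print(gain >= cutoff)  # ratio carries significantly more information
```

With 1000 pairs at α = 0.05 the cut-off is 10 000, so a ratio whose p-value improves on the single metabolites by five orders of magnitude clears it comfortably.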

  16. Testing Happiness Hypothesis among the Elderly

    Directory of Open Access Journals (Sweden)

    Rossi Máximo

    2008-08-01

    Full Text Available We use a rich data set that allows us to test different happiness hypotheses employing four methodological approaches. We find that older people in Uruguay have a tendency to report themselves happy when they are married, when they have higher standards of health and when they earn higher levels of income or they consider that their income is suitable for their standard of living. On the contrary, they report lower levels of happiness when they live alone and when their nutrition is insufficient. We also find that education has no clear impact on happiness. We think that our study is a contribution to the study of those factors that can explain happiness among the elderly in Latin American countries. Future work will focus on enhanced empirical analysis and in extending our study to other countries.

  17. Parametric methods outperformed non-parametric methods in comparisons of discrete numerical variables

    Directory of Open Access Journals (Sweden)

    Sandvik Leiv

    2011-04-01

    Full Text Available Abstract Background The number of events per individual is a widely reported variable in medical research papers. Such variables are the most common representation of the general variable type called discrete numerical. There is currently no consensus on how to compare and present such variables, and recommendations are lacking. The objective of this paper is to present recommendations for analysis and presentation of results for discrete numerical variables. Methods Two simulation studies were used to investigate the performance of hypothesis tests and confidence interval methods for variables with outcomes {0, 1, 2}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}, and {0, 1, 2, 3, 4, 5}, using the difference between the means as an effect measure. Results The Welch U test (the T test with adjustment for unequal variances) and its associated confidence interval performed well for almost all situations considered. The Brunner-Munzel test also performed well, except for small sample sizes (10 in each group). The ordinary T test, the Wilcoxon-Mann-Whitney test, the percentile bootstrap interval, and the bootstrap-t interval did not perform satisfactorily. Conclusions The difference between the means is an appropriate effect measure for comparing two independent discrete numerical variables that have both lower and upper bounds. To analyze this problem, we encourage more frequent use of parametric hypothesis tests and confidence intervals.
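
The recommended analysis (Welch's unequal-variance t-test with its matching confidence interval for the mean difference) can be sketched as follows. The data are simulated, not from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: event counts per individual, outcomes in {0, 1, 2, 3}
a = rng.integers(0, 4, size=60)
b = rng.integers(0, 4, size=60)

# Welch test: the t-test with adjustment for unequal variances
t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)

# Matching Welch confidence interval for the difference between means,
# with Welch-Satterthwaite degrees of freedom
diff = a.mean() - b.mean()
va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
half_width = stats.t.ppf(0.975, df) * np.sqrt(va + vb)
ci = (diff - half_width, diff + half_width)
print(p_value, ci)
```

Because the interval is centered on the observed mean difference, it always contains `diff`; the Welch adjustment matters when the two groups have unequal spreads or sizes.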

  18. Automated Test Methods for XML Metadata

    Science.gov (United States)

    2017-12-28

    Secretariat, Range Commanders Council, ATTN: TEDT-WS-RCC, 1510 Headquarters Avenue, White Sands Missile Range, New Mexico 88002-5110. Method for Testing Syntax. The test method is as follows. 1. Initialize the programming environment. 2. Write test application code to use the

  19. Calculating p-values and their significances with the Energy Test for large datasets

    Science.gov (United States)

    Barter, W.; Burr, C.; Parkes, C.

    2018-04-01

    The energy test method is a multi-dimensional test of whether two samples are consistent with arising from the same underlying population, through the calculation of a single test statistic (called the T-value). The method has recently been used in particle physics to search for samples that differ due to CP violation. The generalised extreme value function has previously been used to describe the distribution of T-values under the null hypothesis that the two samples are drawn from the same underlying population. We show that, in a simple test case, the distribution is not sufficiently well described by the generalised extreme value function. We present a new method, where the distribution of T-values under the null hypothesis when comparing two large samples can be found by scaling the distribution found when comparing small samples drawn from the same population. This method can then be used to quickly calculate the p-values associated with the results of the test.
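
The T-value at the heart of the energy test can be sketched as follows. This is one common form of the statistic with a Gaussian distance-weighting function; normalization conventions vary between analyses, and the data here are invented:

```python
import numpy as np

def energy_test_T(x, y, sigma=1.0):
    """Energy-test T-value for two multivariate samples, using the
    Gaussian weighting function psi(d) = exp(-d^2 / (2 sigma^2)).
    T is near zero when both samples come from the same population
    and grows when they differ."""
    def psi_sum(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2 * sigma ** 2)).sum()
    n, m = len(x), len(y)
    t_xx = (psi_sum(x, x) - n) / (2 * n * (n - 1))  # self-pairs (psi = 1) removed
    t_yy = (psi_sum(y, y) - m) / (2 * m * (m - 1))
    t_xy = psi_sum(x, y) / (n * m)
    return t_xx + t_yy - t_xy

# Two samples drawn from the same population vs. a clearly shifted one
rng = np.random.default_rng(1)
x = rng.normal(size=(200, 2))
y = rng.normal(size=(200, 2))
T_null = energy_test_T(x, y)
T_shift = energy_test_T(x, y + 3.0)
print(T_null, T_shift)
```

The p-value machinery the paper describes then amounts to locating an observed T in the null distribution of such T-values, estimated by permutation or, per the paper, by scaling the small-sample null distribution.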

  20. Contextual effects on the perceived health benefits of exercise: the exercise rank hypothesis.

    Science.gov (United States)

    Maltby, John; Wood, Alex M; Vlaev, Ivo; Taylor, Michael J; Brown, Gordon D A

    2012-12-01

    Many accounts of social influences on exercise participation describe how people compare their behaviors to those of others. We develop and test a novel hypothesis, the exercise rank hypothesis, of how this comparison can occur. The exercise rank hypothesis, derived from evolutionary theory and the decision by sampling model of judgment, suggests that individuals' perceptions of the health benefits of exercise are influenced by how individuals believe the amount of exercise ranks in comparison with other people's amounts of exercise. Study 1 demonstrated that individuals' perceptions of the health benefits of their own current exercise amounts were as predicted by the exercise rank hypothesis. Study 2 demonstrated that the perceptions of the health benefits of an amount of exercise can be manipulated by experimentally changing the ranked position of the amount within a comparison context. The discussion focuses on how social norm-based interventions could benefit from using rank information.

  1. Summary on Bayes estimation and hypothesis testing

    Directory of Open Access Journals (Sweden)

    D. J. de Waal

    1988-03-01

    Full Text Available Although Bayes' theorem was published in 1764, it is only recently that Bayesian procedures have been used in practice in statistical analyses. Many developments have taken place, and are still taking place, in the areas of decision theory and group decision making. Two aspects are considered: estimation and tests of hypotheses. This is the area of statistical inference mainly concerned with mathematical statistics.

  2. Could changes in reported sex ratios at birth during China's 1958-1961 famine support the adaptive sex ratio adjustment hypothesis?

    Directory of Open Access Journals (Sweden)

    Anna Reimondos

    2013-10-01

    Full Text Available Background: The adaptive sex ratio adjustment hypothesis suggests that when mothers are in poor conditions the sex ratio of their offspring will be biased towards females. Major famines provide opportunities for testing this hypothesis because they lead to the widespread deterioration of living conditions in the affected population. Objective: This study examines changes in sex ratio at birth before, during, and after China's 1958-1961 famine, to see whether they provide any support for the adaptive sex ratio adjustment hypothesis. Methods: We use descriptive statistics to analyse data collected by both China's 1982 and 1988 fertility sample surveys and examine changes in sex ratio at birth in recent history. In addition, we examine the effectiveness of using different methods to model changes in sex ratio at birth and compare their differences. Results: During China's 1958-1961 famine, reported sex ratio at birth remained notably higher than that observed in most countries in the world. The timing of the decline in sex ratio at birth did not coincide with the timing of the famine. After the famine, although living conditions were considerably improved, the sex ratio at birth was not higher but lower than that recorded during the famine. Conclusions: The analysis of the data collected by the two fertility surveys has found no evidence that changes in sex ratio at birth during China's 1958-1961 famine and the post-famine period supported the adaptive sex ratio adjustment hypothesis.

  3. Microbial decomposition of keratin in nature—a new hypothesis of industrial relevance

    DEFF Research Database (Denmark)

    Lange, Lene; Huang, Yuhong; Kamp Busk, Peter

    2016-01-01

    with the keratinases to loosen the molecular structure, thus giving the enzymes access to their substrate, the protein structure. With such complexity, it is relevant to compare microbial keratin decomposition with the microbial decomposition of well-studied polymers such as cellulose and chitin. Interestingly...... enzymatic and boosting factors needed for keratin breakdown have been used to formulate a hypothesis for mode of action of the LPMOs in keratin decomposition and for a model for degradation of keratin in nature. Testing such hypotheses and models still needs to be done. Even now, the hypothesis can serve...

  4. A Teaching Method on Basic Chemistry for Freshman (II) : Teaching Method with Pre-test and Post-test

    OpenAIRE

    立木, 次郎; 武井, 庚二

    2004-01-01

    This report reviews a teaching method on basic chemistry for freshmen in the first semester. We evaluated this teaching method, which uses a pre-test and post-test, by means of official and private questionnaires. Several hints and thoughts on teaching skills were obtained from this analysis.

  5. The Qualitative Expectations Hypothesis

    DEFF Research Database (Denmark)

    Frydman, Roman; Johansen, Søren; Rahbek, Anders

    We introduce the Qualitative Expectations Hypothesis (QEH) as a new approach to modeling macroeconomic and financial outcomes. Building on John Muth's seminal insight underpinning the Rational Expectations Hypothesis (REH), QEH represents the market's forecasts to be consistent with the predictions of an economist's model. However, by assuming that outcomes lie within stochastic intervals, QEH, unlike REH, recognizes the ambiguity faced by an economist and market participants alike. Moreover, QEH leaves the model open to ambiguity by not specifying a mechanism determining specific values that outcomes take...

  6. A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.

    Science.gov (United States)

    Yu, Qingzhao; Zhu, Lin; Zhu, Han

    2017-11-01

    Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently allocate newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on the design of changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when total sample size is fixed, the proposed design can obtain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
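
The idea of choosing a randomization rate that minimizes the variance of the test statistic reduces, in its simplest frequentist form, to Neyman allocation: assign patients to arms in proportion to the arms' outcome standard deviations. This is a simplified sketch of that principle, not the authors' full Bayesian algorithm, and the interim data are invented:

```python
import numpy as np

def optimal_randomization_rate(sd1, sd2):
    """Rate at which new patients are assigned to arm 1 so that the
    variance of the difference-in-means test statistic is minimized
    (Neyman allocation): proportional to each arm's standard deviation."""
    return sd1 / (sd1 + sd2)

# Hypothetical interim data: arm 1 outcomes are more variable than arm 2's
rng = np.random.default_rng(2)
arm1 = rng.normal(0.0, 2.0, size=40)
arm2 = rng.normal(0.5, 1.0, size=40)

rate = optimal_randomization_rate(arm1.std(ddof=1), arm2.std(ddof=1))
print(rate)  # the noisier arm receives more than half of new patients
```

In a sequential design this rate would be re-estimated at each interim look as the variance estimates sharpen.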

  7. Does cooperation increase helpers' later success as breeders? A test of the skills hypothesis in the cooperatively displaying lance-tailed manakin.

    Science.gov (United States)

    DuVal, Emily H

    2013-07-01

    Experience improves individual performance in many tasks. Pre-breeding cooperation may provide important experience that improves later success as a breeder, offering one compelling explanation for why some individuals delay reproduction to help others breed (the 'skills hypothesis'). However, confounding effects of age, quality and alternative selective benefits have complicated rigorous tests of this hypothesis. Male lance-tailed manakins perform cooperative courtship displays involving partnerships between unrelated alpha and beta males, and alphas monopolize resulting copulations. Beta males therefore do not receive immediate direct or indirect fitness benefits, but may gain skills during cooperation that increase their later success as an alpha. To date, however, the effect of cooperative experience on later success as a breeder has never been tested in any cooperatively displaying taxon. The effects of prior cooperative experience on reproductive success of alpha lance-tailed manakins were analysed in a mixed model framework using 12 years of information on cooperative experience and annual and lifetime genetic reproductive success for 57 alpha males. Models included previously identified effects of age and alpha tenure. Individual-level random effects controlled for quality differences to test for an independent influence of beta experience on success. Males accumulated up to 5 years of beta experience before becoming alphas, but 42·1% of alphas had no prior beta experience. Betas became alphas later in life, and experienced significantly lower reproductive success in their final year as alpha than males that were never beta, but did not have higher lifetime success or longer alpha tenures. Differences in patterns of annual siring success were best explained by age-dependent patterns of reproductive improvement and senescence among alphas, not beta experience. 
Cooperative experience does not increase relative breeding success for male lance-tailed manakins

  8. Valid methods: the quality assurance of test method development, validation, approval, and transfer for veterinary testing laboratories.

    Science.gov (United States)

    Wiegers, Ann L

    2003-07-01

    Third-party accreditation is a valuable tool to demonstrate a laboratory's competence to conduct testing. Accreditation, internationally and in the United States, has been discussed previously. However, accreditation is only one part of establishing data credibility. A validated test method is the first component of a valid measurement system. Validation is defined as confirmation by examination and the provision of objective evidence that the particular requirements for a specific intended use are fulfilled. The international and national standard ISO/IEC 17025 recognizes the importance of validated methods and requires that laboratory-developed methods or methods adopted by the laboratory be appropriate for the intended use. Validated methods are therefore required and their use agreed to by the client (i.e., end users of the test results such as veterinarians, animal health programs, and owners). ISO/IEC 17025 also requires that the introduction of methods developed by the laboratory for its own use be a planned activity conducted by qualified personnel with adequate resources. This article discusses considerations and recommendations for the conduct of veterinary diagnostic test method development, validation, evaluation, approval, and transfer to the user laboratory in the ISO/IEC 17025 environment. These recommendations are based on those of nationally and internationally accepted standards and guidelines, as well as those of reputable and experienced technical bodies. They are also based on the author's experience in the evaluation of method development and transfer projects, validation data, and the implementation of quality management systems in the area of method development.

  9. Testing Weak-Form Efficiency of The Chinese Stock Market and Hong Kong Stock Market

    OpenAIRE

    Lei, Zhuolin

    2012-01-01

    This study examines the random walk hypothesis to determine the validity of weak-form efficiency for Shanghai, Shenzhen and Hong Kong Stock Exchanges. Daily returns from 2001-2010 for Shanghai A and B shares, Shenzhen A and B shares and Hong Kong Hang Seng Index are used in this study. The random walk hypothesis is examined by using four statistical methods, namely a serial correlation test, an Augmented Dickey-Fuller Unit Root test, a runs test and a variance ratio test. The empirical re...
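
Of the four tests listed, the variance ratio test is the most distinctive and can be sketched compactly. Under the random walk hypothesis the variance of q-period returns is q times the one-period variance, so the ratio should be near 1. This is a generic Lo-MacKinlay-style illustration on simulated returns, not the study's data:

```python
import numpy as np

def variance_ratio(returns, q):
    """Variance ratio VR(q): var of overlapping q-period returns divided
    by q times the one-period variance. VR(q) ~ 1 under a random walk;
    VR > 1 suggests positive serial correlation, VR < 1 mean reversion."""
    r = np.asarray(returns, dtype=float)
    one_period_var = r.var(ddof=1)
    q_period = np.convolve(r, np.ones(q), mode="valid")  # overlapping q-day sums
    return q_period.var(ddof=1) / (q * one_period_var)

# Hypothetical i.i.d. daily returns, consistent with weak-form efficiency
rng = np.random.default_rng(3)
vr = variance_ratio(rng.normal(0.0, 0.01, size=2500), q=5)
print(vr)  # close to 1 under the null
```

A full test would also compute the asymptotic standard error of VR(q) to turn the ratio into a z-statistic; serial correlation, ADF, and runs tests probe the same null from other angles.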

  10. Gender differences in emotion perception and self-reported emotional intelligence: A test of the emotion sensitivity hypothesis.

    Science.gov (United States)

    Fischer, Agneta H; Kret, Mariska E; Broekens, Joost

    2018-01-01

    Previous meta-analyses and reviews on gender differences in emotion recognition have shown a small to moderate female advantage. However, inconsistent evidence from recent studies has raised questions regarding the implications of different methodologies, stimuli, and samples. In the present research based on a community sample of more than 5000 participants, we tested the emotional sensitivity hypothesis, stating that women are more sensitive to perceive subtle, i.e. low intense or ambiguous, emotion cues. In addition, we included a self-report emotional intelligence test in order to examine any discrepancy between self-perceptions and actual performance for both men and women. We used a wide range of stimuli and models, displaying six different emotions at two different intensity levels. In order to better tap sensitivity for subtle emotion cues, we did not use a forced choice format, but rather intensity measures of different emotions. We found no support for the emotional sensitivity account, as both genders rated the target emotions as similarly intense at both levels of stimulus intensity. Men, however, more strongly perceived non-target emotions to be present than women. In addition, we also found that the lower scores of men in self-reported EI was not related to their actual perception of target emotions, but it was to the perception of non-target emotions.

  11. Gender differences in emotion perception and self-reported emotional intelligence: A test of the emotion sensitivity hypothesis

    Science.gov (United States)

    Kret, Mariska E.; Broekens, Joost

    2018-01-01

    Previous meta-analyses and reviews on gender differences in emotion recognition have shown a small to moderate female advantage. However, inconsistent evidence from recent studies has raised questions regarding the implications of different methodologies, stimuli, and samples. In the present research based on a community sample of more than 5000 participants, we tested the emotional sensitivity hypothesis, stating that women are more sensitive to perceive subtle, i.e. low intense or ambiguous, emotion cues. In addition, we included a self-report emotional intelligence test in order to examine any discrepancy between self-perceptions and actual performance for both men and women. We used a wide range of stimuli and models, displaying six different emotions at two different intensity levels. In order to better tap sensitivity for subtle emotion cues, we did not use a forced choice format, but rather intensity measures of different emotions. We found no support for the emotional sensitivity account, as both genders rated the target emotions as similarly intense at both levels of stimulus intensity. Men, however, more strongly perceived non-target emotions to be present than women. In addition, we also found that the lower scores of men in self-reported EI was not related to their actual perception of target emotions, but it was to the perception of non-target emotions. PMID:29370198

  12. Standard test method for creep-fatigue crack growth testing

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method covers the determination of creep-fatigue crack growth properties of nominally homogeneous materials by use of pre-cracked compact type, C(T), test specimens subjected to uniaxial cyclic forces. It concerns fatigue cycling with sufficiently long loading/unloading rates or hold-times, or both, to cause creep deformation at the crack tip and the creep deformation be responsible for enhanced crack growth per loading cycle. It is intended as a guide for creep-fatigue testing performed in support of such activities as materials research and development, mechanical design, process and quality control, product performance, and failure analysis. Therefore, this method requires testing of at least two specimens that yield overlapping crack growth rate data. The cyclic conditions responsible for creep-fatigue deformation and enhanced crack growth vary with material and with temperature for a given material. The effects of environment such as time-dependent oxidation in enhancing the crack growth ra...

  13. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    Energy Technology Data Exchange (ETDEWEB)

    Jha, Sumit Kumar [University of Central Florida, Orlando; Pullum, Laura L [ORNL; Ramanathan, Arvind [ORNL

    2016-01-01

    Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.

  14. Statistical inferences under the Null hypothesis: Common mistakes and pitfalls in neuroimaging studies.

    Directory of Open Access Journals (Sweden)

    Jean-Michel Hupé

    2015-02-01

    Published studies using functional and structural MRI include many errors in the way data are analyzed and conclusions are reported. This was observed while working on a comprehensive review of the neural bases of synesthesia, but these errors are probably endemic to neuroimaging studies. All studies reviewed had based their conclusions on Null Hypothesis Significance Tests (NHST). NHST has been criticized since its inception because it is more appropriate for making decisions about a Null hypothesis (as in manufacturing) than for making inferences about behavioral and neuronal processes. Here I focus on a few key problems of NHST related to brain imaging techniques, and explain why or when we should not rely on significance tests. I also observed that, often, the ill-posed logic of NHST was not even correctly applied, and I describe what I identified as common mistakes, or at least problematic practices, in published papers, in light of what could be considered the very basics of statistical inference. MRI statistics also involve much more complex issues than standard statistical inference. Analysis pipelines vary a lot between studies, even for those using the same software, and there is no consensus on which pipeline is best. I propose a synthetic view of the logic behind the possible methodological choices, and warn against the usage and interpretation of two statistical methods popular in brain imaging studies, the false discovery rate (FDR) procedure and permutation tests. I suggest that current models for the analysis of brain imaging data suffer from serious limitations and call for a revision taking into account the new statistics (confidence intervals) logic.
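    The two procedures the author warns about can be stated compactly. Below is a minimal Python sketch of the Benjamini-Hochberg FDR step-up procedure and a two-sided permutation test for a difference in means; the function names and the example p-values are illustrative, not from the paper.

```python
import random

def benjamini_hochberg(pvals, q=0.05):
    """Return indices of hypotheses rejected at FDR level q (BH step-up rule)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank i with p_(i) <= (i/m) * q
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank / m * q:
            k = rank
    return sorted(order[:k])

def permutation_test(x, y, n_perm=10000, seed=0):
    """Two-sided permutation p-value for a difference in group means."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        xs, ys = pooled[:len(x)], pooled[len(x):]
        if abs(sum(xs) / len(xs) - sum(ys) / len(ys)) >= observed:
            count += 1
    # Add-one smoothing so the p-value is never exactly zero.
    return (count + 1) / (n_perm + 1)
```

    Both procedures are standard; the author's point is that their interpretation (what the FDR controls, and what a permutation null actually assumes) is frequently misunderstood in imaging pipelines.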

  15. Hypothesis-driven physical examination curriculum.

    Science.gov (United States)

    Allen, Sharon; Olson, Andrew; Menk, Jeremiah; Nixon, James

    2017-12-01

    Medical students traditionally learn physical examination skills as a rote list of manoeuvres. Alternatives like hypothesis-driven physical examination (HDPE) may promote students' understanding of the contribution of the physical examination to diagnostic reasoning. We sought to determine whether first-year medical students can effectively learn to perform a physical examination using an HDPE approach, and then tailor the examination to specific clinical scenarios. First-year medical students at the University of Minnesota were taught both traditional and HDPE approaches during a required 17-week clinical skills course in their first semester. The end-of-course evaluation assessed HDPE skills: students were assigned one of two cardiopulmonary cases. Each case included two diagnostic hypotheses. During an interaction with a standardised patient, students were asked to select physical examination manoeuvres in order to make a final diagnosis. Items were weighted and selection order was recorded. First-year students with minimal pathophysiology training performed well. All students selected the correct diagnosis. Importantly, students varied the order in which they selected examination manoeuvres depending on the diagnoses under consideration, demonstrating early clinical decision-making skills. An early introduction to HDPE may reinforce physical examination skills for hypothesis generation and testing, and can foster early clinical decision-making skills. This has important implications for further research in physical examination instruction. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  16. The Validation of NAA Method Used as Test Method in Serpong NAA Laboratory

    International Nuclear Information System (INIS)

    Rina-Mulyaningsih, Th.

    2004-01-01

    NAA is a non-standard testing method. A testing laboratory shall validate the methods it uses to ensure and confirm that they are suitable for their application. The NAA methods have been validated using the parameters of accuracy, precision, repeatability, and selectivity. The NIST 1573a Tomato Leaves, NIES 10C unpolished rice flour, and standard elements were used in this testing program. The results of testing with NIST 1573a showed that the elements Na, Zn, Al, and Mn met the acceptance criteria for accuracy and precision, whereas Co was rejected. The results of testing with NIES 10C showed that the Na and Zn elements met the acceptance criteria for accuracy and precision, but the Mn element was rejected. The result of the selectivity test showed that the quantity value is between 0.1-2.5 μg, depending on the element. (author)

  17. The Qualitative Expectations Hypothesis

    DEFF Research Database (Denmark)

    Frydman, Roman; Johansen, Søren; Rahbek, Anders

    2017-01-01

    We introduce the Qualitative Expectations Hypothesis (QEH) as a new approach to modeling macroeconomic and financial outcomes. Building on John Muth's seminal insight underpinning the Rational Expectations Hypothesis (REH), QEH represents the market's forecasts to be consistent with the predictions of an economist's model. However, by assuming that outcomes lie within stochastic intervals, QEH, unlike REH, recognizes the ambiguity faced by an economist and market participants alike. Moreover, QEH leaves the model open to ambiguity by not specifying a mechanism determining specific values that outcomes take...

  18. Beyond hormones: a novel hypothesis for the biological basis of male sexual orientation.

    Science.gov (United States)

    Bocklandt, S; Hamer, D H

    2003-01-01

    For the past several decades, research on the development of human sexual orientation has focused on the role of pre- or peri-natal androgen levels on brain development. However, there is no evidence that physiologically occurring variations in androgen exposure influence differences in sexual orientation. In this review, we discuss an alternative hypothesis involving genomic imprinting in the regulation of sex specific expression of genes regulating sexually dimorphic traits, including sexual orientation. A possible experiment to test this hypothesis is discussed.

  19. A test of the functional avoidance hypothesis in the development of overgeneral autobiographical memory.

    Science.gov (United States)

    Hallford, D J; Austin, D W; Raes, F; Takano, K

    2018-04-18

    Overgeneral memory (OGM) refers to the failure to recall memories of specific personally experienced events, which occurs in various psychiatric disorders. One pathway through which OGM is theorized to develop is the avoidance of thinking of negative experiences, whereby cumulative avoidance may maladaptively generalize to autobiographical memory (AM) more broadly. We tested this, predicting that negative experiences would interact with avoidance to predict AM specificity. In Study 1 (N = 281), negative life events (over six months) and daily hassles (over one month) were not related to AM specificity, nor was avoidance, and no interaction was found. In Study 2 (N = 318), we revised our measurements and used an increased timeframe of 12 months for both negative life events and daily hassles. The results showed no interaction effect for negative life events, but they did show an interaction for daily hassles, whereby increased hassles and higher avoidance of thinking about them were associated with reduced AM specificity, independent of general cognitive avoidance and depressive symptoms. No evidence was found that cognitive avoidance or AM specificity moderated the effect of negative experiences on depressive symptoms. Our findings suggest that life events over 6-12 months are not associated with AM specificity, but chronic daily hassles over 12 months predict reduced AM specificity when individuals avoid thinking about them. The findings provide evidence for the functional-avoidance hypothesis of OGM development and future directions for longitudinal studies.

  20. Evaluation of Test Method for Solar Collector Efficiency

    DEFF Research Database (Denmark)

    Fan, Jianhua; Shah, Louise Jivan; Furbo, Simon

    The test method of the standard EN12975-2 (European Committee for Standardization, 2004) is used by European test laboratories to determine the efficiency of solar collectors. In the test method, the mean solar collector fluid temperature in the solar collector, Tm, is determined by an approximated equation where Tin is the inlet temperature to the collector and Tout is the outlet temperature from the collector. The specific heat of the solar collector fluid is, as an approximation, determined as a constant equal to the specific heat of the solar collector fluid at the temperature Tm... and the sky temperature. Based on the investigations, recommendations for changes to the test methods and test conditions are considered. The investigations are carried out within the NEGST (New Generation of Solar Thermal Systems) project financed by the EU.

  1. Methods for Equating Mental Tests.

    Science.gov (United States)

    1984-11-01

    ...1983) compared conventional and IRT methods for equating the Test of English as a Foreign Language (TOEFL) after chaining. Three conventional and... three IRT equating methods were examined in this study; two sections of TOEFL were each (separately) equated. The IRT methods included the following: (a... group. A separate base form was established for each of the six equating methods. Instead of equating the base-form TOEFL to itself, the last (eighth...

  2. Validity of Linder Hypothesis in Bric Countries

    Directory of Open Access Journals (Sweden)

    Rana Atabay

    2016-03-01

    In this study, the theory of similarity in preferences (the Linder hypothesis) is introduced, and trade among the BRIC countries is examined to determine whether it is consistent with this hypothesis. Using data for the period 1996-2010, the study applies panel data analysis in order to provide evidence regarding the empirical validity of the Linder hypothesis for the BRIC countries' international trade. Empirical findings show that trade between the BRIC countries supports the Linder hypothesis.

  3. Thermal test requirements and their verification by different test methods

    International Nuclear Information System (INIS)

    Droste, B.; Wieser, G.; Probst, U.

    1993-01-01

    The paper discusses the parameters influencing the thermal test conditions for type B-packages. Criteria for different test methods (by analytical as well as by experimental means) will be developed. A comparison of experimental results from fuel oil pool and LPG fire tests will be given. (J.P.N.)

  4. Test of the neurolinguistic programming hypothesis that eye-movements relate to processing imagery.

    Science.gov (United States)

    Wertheim, E H; Habib, C; Cumming, G

    1986-04-01

    Bandler and Grinder's hypothesis that eye-movements reflect sensory processing was examined. 28 volunteers first memorized and then recalled visual, auditory, and kinesthetic stimuli. Changes in eye-positions during recall were videotaped and categorized by two raters into positions hypothesized by Bandler and Grinder's model to represent visual, auditory, and kinesthetic recall. Planned contrast analyses suggested that visual stimulus items, when recalled, elicited significantly more upward eye-positions and stares than auditory and kinesthetic items. Auditory and kinesthetic items, however, did not elicit more changes in eye-position hypothesized by the model to represent auditory and kinesthetic recall, respectively.

  5. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1993-01-01

    This report documents progress to date under a three-year contract for developing "Methods for Testing Transport Models." The work described includes (1) choice of best methods for producing "code emulators" for analysis of very large global energy confinement databases, (2) recent applications of stratified regressions for treating individual measurement errors as well as calibration/modeling errors randomly distributed across various tokamaks, (3) Bayesian methods for utilizing prior information due to previous empirical and/or theoretical analyses, (4) extension of code emulator methodology to profile data, (5) application of nonlinear least squares estimators to simulation of profile data, (6) development of more sophisticated statistical methods for handling profile data, (7) acquisition of a much larger experimental database, and (8) extensive exploratory simulation work on a large variety of discharges using recently improved models for transport theories and boundary conditions. From all of this work, it has been possible to define a complete methodology for testing new sets of reference transport models against much larger multi-institutional databases.

  6. Advanced Testing Method for Ground Thermal Conductivity

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Xiaobing [ORNL; Clemenzi, Rick [Geothermal Design Center Inc.; Liu, Su [University of Tennessee (UT)

    2017-04-01

    A new method is developed that can quickly and more accurately determine the effective ground thermal conductivity (GTC) based on thermal response test (TRT) results. Ground thermal conductivity is an important parameter for sizing ground heat exchangers (GHEXs) used by geothermal heat pump systems. The conventional GTC test method usually requires a TRT for 48 hours with a very stable electric power supply throughout the entire test. In contrast, the new method reduces the required test time by 40%–60% or more, and it can determine GTC even with an unstable or intermittent power supply. Consequently, it can significantly reduce the cost of GTC testing and increase its use, which will enable optimal design of geothermal heat pump systems. Further, this new method provides more information about the thermal properties of the GHEX and the ground than previous techniques. It can verify the installation quality of GHEXs and has the potential, if developed, to characterize the heterogeneous thermal properties of the ground formation surrounding the GHEXs.
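    For context, the conventional TRT analysis that the new method improves on typically fits late-time data to the infinite line-source approximation, in which the mean fluid temperature rises linearly in ln(t) and the slope of that line yields the conductivity. The sketch below illustrates that standard model only, not the paper's new method; the variable names and synthetic data are illustrative assumptions.

```python
import math

def estimate_gtc(times_s, temps_c, heat_rate_w, borehole_len_m):
    """Estimate ground thermal conductivity k [W/(m.K)] from late-time TRT data
    via the infinite line-source approximation:
        T(t) ~ (Q / (4*pi*k*H)) * ln(t) + const,
    so k = Q / (4*pi*H*slope), with slope from regressing T on ln(t)."""
    xs = [math.log(t) for t in times_s]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(temps_c) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, temps_c)) / \
            sum((x - mx) ** 2 for x in xs)
    return heat_rate_w / (4 * math.pi * borehole_len_m * slope)

# Synthetic self-check: data generated from the model should return k exactly.
k_true, Q, H = 2.5, 6000.0, 100.0          # W/(m.K), W, m (assumed values)
times = [3600 * h for h in range(10, 49)]  # hours 10..48 in seconds
temps = [Q / (4 * math.pi * k_true * H) * math.log(t) + 15.0 for t in times]
k_est = estimate_gtc(times, temps, Q, H)
```

    The 48-hour requirement of the conventional method comes from needing enough late-time data for this log-linear fit to stabilize, which is exactly the constraint the new method relaxes.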

  7. Attention, interpretation, and memory biases in subclinical depression: a proof-of-principle test of the combined cognitive biases hypothesis.

    Science.gov (United States)

    Everaert, Jonas; Duyck, Wouter; Koster, Ernst H W

    2014-04-01

    Emotional biases in attention, interpretation, and memory are viewed as important cognitive processes underlying symptoms of depression. To date, there is a limited understanding of the interplay among these processing biases. This study tested the dependence of memory on depression-related biases in attention and interpretation. Subclinically depressed and nondepressed participants completed a computerized version of the scrambled sentences test (measuring interpretation bias) while their eye movements were recorded (measuring attention bias). This task was followed by an incidental free recall test of previously constructed interpretations (measuring memory bias). Path analysis revealed a good fit for the model in which selective orienting of attention was associated with interpretation bias, which in turn was associated with a congruent bias in memory. Also, a good fit was observed for a path model in which biases in the maintenance of attention and interpretation were associated with memory bias. Both path models attained a superior fit compared with path models without the theorized functional relations among processing biases. These findings enhance understanding of how mechanisms of attention and interpretation regulate what is remembered. As such, they offer support for the combined cognitive biases hypothesis or the notion that emotionally biased cognitive processes are not isolated mechanisms but instead influence each other. Implications for theoretical models and emotion regulation across the spectrum of depressive symptoms are discussed.

  8. Alternative Test Method for Olefins in Gasoline

    Science.gov (United States)

    This action proposes to allow for an additional alternative test method for olefins in gasoline, ASTM D6550-05. The allowance of this additional alternative test method will provide more flexibility to the regulated industry.

  9. A Conclusive Test of Abelian Dominance Hypothesis for Topological Charge in the QCD Vacuum

    OpenAIRE

    Sasaki, Shoichi; Miyamura, Osamu

    1998-01-01

    We study the topological feature in the QCD vacuum based on the hypothesis of abelian dominance. The topological charge $Q_{\rm SU(2)}$ can be explicitly represented in terms of the monopole current in the abelian dominated system. To appreciate its justification, we directly measure the corresponding topological charge $Q_{\rm Mono}$, which is reconstructed only from the monopole current and the abelian component of gauge fields, by using the Monte Carlo simulation on an SU(2) lattice. We find ...

  10. The bootstrap and Bayesian bootstrap method in assessing bioequivalence

    International Nuclear Information System (INIS)

    Wan Jianping; Zhang Kongsheng; Chen Hui

    2009-01-01

    Parametric methods for assessing individual bioequivalence (IBE) typically rest on the hypothesis that the PK responses are normally distributed. A nonparametric method for evaluating IBE is the bootstrap. In 2001, the United States Food and Drug Administration (FDA) proposed a draft guidance. The purpose of this article is to evaluate the IBE between a test drug and a reference drug by the bootstrap and Bayesian bootstrap methods. We study the power of the bootstrap test procedures and of the parametric test procedures in FDA (2001). We find that the Bayesian bootstrap method performs best.
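    As a hedged illustration of the bootstrap idea in a bioequivalence setting, the sketch below builds a percentile bootstrap interval for the mean test-minus-reference log difference and compares it to the usual 80-125% equivalence bounds. This is a simple average-bioequivalence-style check, not the FDA's IBE criterion or the paper's exact procedure, and the data are invented.

```python
import math
import random

def bootstrap_ci(diffs, n_boot=5000, alpha=0.10, seed=0):
    """Percentile bootstrap CI for the mean of within-subject log-differences."""
    rng = random.Random(seed)
    n = len(diffs)
    means = []
    for _ in range(n_boot):
        sample = [diffs[rng.randrange(n)] for _ in range(n)]
        means.append(sum(sample) / n)
    means.sort()
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Illustrative crossover data: log(AUC) of test minus reference, per subject.
diffs = [0.05, -0.02, 0.08, 0.01, -0.04, 0.06, 0.03, -0.01, 0.04, 0.02]
lo, hi = bootstrap_ci(diffs)
limit = math.log(1.25)  # standard 80-125% equivalence bounds on the log scale
equivalent = -limit < lo and hi < limit
```

    The appeal of the bootstrap here is exactly the point the abstract makes: no normality assumption on the PK responses is needed, only resampling of the observed differences.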

  11. Einstein's Revolutionary Light-Quantum Hypothesis

    Science.gov (United States)

    Stuewer, Roger H.

    2005-05-01

    The paper in which Albert Einstein proposed his light-quantum hypothesis was the only one of his great papers of 1905 that he himself termed ``revolutionary.'' Contrary to widespread belief, Einstein did not propose his light-quantum hypothesis ``to explain the photoelectric effect.'' Instead, he based his argument for light quanta on the statistical interpretation of the second law of thermodynamics, with the photoelectric effect being only one of three phenomena that he offered as possible experimental support for it. I will discuss Einstein's light-quantum hypothesis of 1905 and his introduction of the wave-particle duality in 1909 and then turn to the reception of his work on light quanta by his contemporaries. We will examine the reasons that prominent physicists advanced to reject Einstein's light-quantum hypothesis in succeeding years. Those physicists included Robert A. Millikan, even though he provided convincing experimental proof of the validity of Einstein's equation of the photoelectric effect in 1915. The turning point came after Arthur Holly Compton discovered the Compton effect in late 1922, but even then Compton's discovery was contested both on experimental and on theoretical grounds. Niels Bohr, in particular, had never accepted the reality of light quanta and now, in 1924, proposed a theory, the Bohr-Kramers-Slater theory, which assumed that energy and momentum were conserved only statistically in microscopic interactions. Only after that theory was disproved experimentally in 1925 was Einstein's revolutionary light-quantum hypothesis generally accepted by physicists---a full two decades after Einstein had proposed it.

  12. A test of the "sexy son" hypothesis: sons of polygynous collared flycatchers do not inherit their fathers' mating status.

    Science.gov (United States)

    Gustafsson, Lars; Qvarnström, Anna

    2006-02-01

    According to the original "sexy son" hypothesis, a female may benefit from pairing with an already-mated male despite a reduction in fecundity because her sons inherit their father's attractiveness. We used data from a long-term study of collared flycatchers (Ficedula albicollis) collected during 24 years to test this prediction. Our results show that the sons of polygynously mated females fledged in poor condition and therefore did not inherit their father's large forehead patch (a condition-dependent display trait) or mating status. From the female's perspective, polygynous pairing resulted in fewer recruited grandchildren than did a monogamous pairing. The reproductive value of sons did not outweigh the fecundity costs of polygyny because the low paternal care reduced the attractiveness of sons. When there are long-lasting parental effects on offspring attractiveness, costs of polygyny may include the production of nonsexy sons.

  13. Testing the Australian Megatsunami Hypothesis

    Science.gov (United States)

    Courtney, Claire; Strotz, Luke; Chague-Goff, Catherine; Goff, James; Dominey-Howes, Dale

    2010-05-01

    In the wake of the 2004 Indian Ocean Tsunami, many countries have been forced to reassess the risk of tsunamis to their coasts. Australia, with relative tectonic stability, has previously been considered at low risk of tsunami inundation. Within written history, only small tsunamis have struck the Australian coast, causing little damage. However, a body of work has arisen that sheds doubt on this apparent low risk, with researchers suggesting that megatsunamis have affected the east Australian coast, in particular southern New South Wales. With proposed run-ups in excess of 100m, recurrence of such megatsunamis in the now densely populated New South Wales coastal region would be catastrophic. The disjunction between historical and geological records demands a thorough re-evaluation of New South Wales sites purported to contain evidence of megatsunamis. In addition, the unique set of diagnostic criteria previously used to identify Australian palaeotsunami deposits is distinctly different to criteria applied to palaeotsunamis globally. To address these issues, four coastal lagoonal sites in southern New South Wales were identified for further investigation. In addition to palaeotsunami investigation, these sites were selected to provide a geological record of significant events during the Holocene. Site selection was based on small accommodation space and a high preservation potential with back barrier depressions closed to the sea. A suite of diagnostic criteria developed over the past two decades to identify palaeotsunamis have been applied to cores extracted from these sites. Methods used include sedimentary description, grain size analysis, micropalaeontology, geochemistry and a variety of dating techniques such as radiocarbon and lead-210. Preliminary analysis of these results will be presented, with particular focus on sites where there is evidence that could indicate catastrophic saltwater inundation.

  14. Standard Test Method for Hot Spot Protection Testing of Photovoltaic Modules

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 This test method provides a procedure to determine the ability of a photovoltaic (PV) module to endure the long-term effects of periodic “hot spot” heating associated with common fault conditions such as severely cracked or mismatched cells, single-point open circuit failures (for example, interconnect failures), partial (or non-uniform) shadowing or soiling. Such effects typically include solder melting or deterioration of the encapsulation, but in severe cases could progress to combustion of the PV module and surrounding materials. 1.2 There are two ways that cells can cause a hot spot problem; either by having a high resistance so that there is a large resistance in the circuit, or by having a low resistance area (shunt) such that there is a high-current flow in a localized region. This test method selects cells of both types to be stressed. 1.3 This test method does not establish pass or fail levels. The determination of acceptable or unacceptable results is beyond the scope of this test method....

  15. Comparison of testing methods for particulate filters

    International Nuclear Information System (INIS)

    Ullmann, W.; Przyborowski, S.

    1983-01-01

    Four testing methods for particulate filters were compared using the test rigs of the National Board of Nuclear Safety and Radiation Protection: 1) measurement of filter penetration P as a function of particle size d, using a polydisperse NaCl test aerosol and a scintillation particle counter; 2) a modified sodium flame test measuring total filter penetration P for various polydisperse NaCl test aerosols; 3) measurement of total filter penetration P for a polydisperse NaCl test aerosol labelled with short-lived radon daughter products; 4) measurement of total filter penetration P for a special paraffin oil test aerosol (the oil fog test used in the FRG according to DIN 24 184, test aerosol A). The investigations were carried out on sheets of glass fibre paper (five grades of paper). Detailed information about the four testing methods and the particle size distributions used is given. The differing results of the various methods form the basis for a discussion of the most important parameters influencing the filter penetration P. The course of the function P=f(d) shows the great influence of particle size. As expected, a strong dependence was also found on the test aerosol as well as on the principle and the measuring range of the aerosol-measuring device. The differences between the results of the various test methods are greater the lower the penetration. The use of NaCl test aerosols with various particle size distributions gives large differences in the respective penetration values. On the basis of these results and the values given by Dorman, conclusions are drawn about the investigation of particulate filters, both for the determination of filter penetration P and for the leak testing of installed filters

  16. Testing Kuznets’ Hypothesis for Russian Regions: Trends and Interpretations

    Directory of Open Access Journals (Sweden)

    James R. Alm

    2016-06-01

    The paper establishes a number of "stylized facts", one of which is a confirmation of S. Kuznets' hypothesis of a nonlinear dependence between the degree of inequality in income distribution and the welfare of economic systems, for a group of Russian regions over the period 2002-2012. It is shown that, for this sample, the welfare and economic growth factors amplified their influence on inequality in income distribution in the post-crisis period. The monotonous growth of income inequality observed before the crisis of 2008 is slowing as per capita gross regional product (GRP) rises during the post-crisis period, and for the foreseeable future, in some regions, its direction may reverse while the trend of socio-economic development is maintained. Although the convex nature of S. Kuznets' curve persists over time in the Russian regional data, its parameters changed during the 2002-2012 period: the maximum point of the curve shifts to the left and its convexity increases. These facts indicate that the growth of income inequality in the Russian regions resulting from growth in per capita GRP is slowing. For some regions in the post-crisis period, income inequality does not grow with the growth of per capita GRP, or even declines. This can be attributed to the implementation of socially oriented Russian federal projects and programs in recent years. The results can be used in the development of regional economic policy aimed at regulating the level of income distribution inequality in the regions of Russia.
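    The nonlinear dependence at the heart of the Kuznets hypothesis is commonly tested by fitting a quadratic in per capita income and checking the sign of the squared term and the location of the turning point. A minimal sketch on synthetic data follows; the regional figures and function names are invented for illustration, not taken from the paper.

```python
def fit_kuznets(grp, gini):
    """Least-squares fit of gini = a + b*grp + c*grp**2 via 3x3 normal equations.
    An inverted-U (Kuznets) shape corresponds to c < 0, peaking at grp = -b/(2c)."""
    n = len(grp)
    s = lambda p, q=0: sum((x ** p) * (y ** q) for x, y in zip(grp, gini))
    A = [[n,    s(1), s(2)],
         [s(1), s(2), s(3)],
         [s(2), s(3), s(4)]]
    rhs = [s(0, 1), s(1, 1), s(2, 1)]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for col in range(i, 3):
                A[r][col] -= f * A[i][col]
            rhs[r] -= f * rhs[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back-substitution
        coef[i] = (rhs[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef  # a, b, c

# Synthetic region data with a known inverted-U peaking at grp = 5.
grp = [1, 2, 3, 4, 5, 6, 7, 8, 9]
gini = [30 + 4 * g - 0.4 * g ** 2 for g in grp]
a, b, c = fit_kuznets(grp, gini)
turning_point = -b / (2 * c)
```

    The paper's finding that "the maximum point of the curve shifts to the left" corresponds, in this parameterization, to the turning point -b/(2c) decreasing between estimation periods.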

  17. Error response test system and method using test mask variable

    Science.gov (United States)

    Gender, Thomas K. (Inventor)

    2006-01-01

    An error response test system and method with increased functionality and improved performance is provided. The system provides the ability to inject errors into the application under test, so that the application's error response can be tested in an automated and efficient manner. Errors are injected through a test mask variable added to the application under test. During normal operation, the test mask variable is set to allow the application to operate normally. During testing, the error response test system changes the test mask variable to introduce an error into the application under test, and then monitors the application to determine whether it responds to the error correctly.
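    The test-mask mechanism can be sketched as follows. This is a Python illustration under assumed names (the record does not specify an implementation language, the fault flags, or these checkpoint sites): a global mask is zero in normal operation, and the harness flips individual bits to drive specific error-handling paths.

```python
# Global test mask; 0 in normal operation. Each bit enables one injected fault.
TEST_MASK = 0

FAULT_SENSOR_TIMEOUT = 0x01  # illustrative fault flags, not from the patent
FAULT_BAD_CHECKSUM = 0x02

class SensorTimeout(Exception):
    pass

def read_sensor():
    """Application code under test, instrumented with a test-mask checkpoint."""
    if TEST_MASK & FAULT_SENSOR_TIMEOUT:
        raise SensorTimeout("injected timeout")
    return 42  # nominal reading

def acquire(retries=3):
    """Error-handling path being exercised: retry, then fall back to a default."""
    for _ in range(retries):
        try:
            return read_sensor()
        except SensorTimeout:
            continue
    return -1  # safe fallback value

# Normal operation: mask is clear, no fault injected.
normal = acquire()
# Test harness flips one bit to drive the error path, then restores the mask.
TEST_MASK = FAULT_SENSOR_TIMEOUT
faulted = acquire()
TEST_MASK = 0
```

    The key property matching the abstract is that the instrumentation ships inside the application: with the mask cleared, the checkpoints are inert and behavior is unchanged.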

  18. Standard test method for tension testing of structural alloys in liquid helium

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method describes procedures for the tension testing of structural alloys in liquid helium. The format is similar to that of other ASTM tension test standards, but the contents include modifications for cryogenic testing which requires special apparatus, smaller specimens, and concern for serrated yielding, adiabatic heating, and strain-rate effects. 1.2 To conduct a tension test by this standard, the specimen in a cryostat is fully submerged in normal liquid helium (He I) and tested using crosshead displacement control at a nominal strain rate of 10⁻³ s⁻¹ or less. Tests using force control or high strain rates are not considered. 1.3 This standard specifies methods for the measurement of yield strength, tensile strength, elongation, and reduction of area. The determination of the elastic modulus is treated in Test Method E 111. Note 1—The boiling point of normal liquid helium (He I) at sea level is 4.2 K (−269°C or −452.1°F or 7.6°R). It decreases with geographic elevation and is...

  19. Testing constancy of unconditional variance in volatility models by misspecification and specification tests

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Terasvirta, Timo

    The topic of this paper is testing the hypothesis of constant unconditional variance in GARCH models against the alternative that the unconditional variance changes deterministically over time. Tests of this hypothesis have previously been performed as misspecification tests after fitting a GARCH model... An application to exchange rate returns is included.

  20. Teen Fertility and Gender Inequality in Education: A Contextual Hypothesis

    Directory of Open Access Journals (Sweden)

    C. Shannon Stokes

    2004-12-01

    Previous studies in developed countries have found a micro-level association between teenage fertility and girls' educational attainment, but researchers still debate the policy implications of these associations. First, are these associations causal? Second, are they substantively important enough, at the macro level, to warrant policy attention? In other words, how much would policy efforts to reduce unintended pregnancy among teens pay off in terms of narrowing national gender gaps in educational attainment? Third, in what contexts are these payoffs likely to be important? This paper focuses on the latter two questions. We begin by proposing a contextual hypothesis to explain cross-national variation in the gender-equity payoffs from reducing unintended teen fertility. We then test this hypothesis using DHS data from 38 countries.