WorldWideScience

Sample records for nonparametric linkage methods

  1. Nonparametric statistical methods

    CERN Document Server

    Hollander, Myles; Chicken, Eric

    2013-01-01

    Praise for the Second Edition: "This book should be an essential part of the personal library of every practicing statistician." - Technometrics. Thoroughly revised and updated, the new edition of Nonparametric Statistical Methods includes additional modern topics and procedures, more practical data sets, and new problems from real-life situations. The book continues to emphasize the importance of nonparametric methods as a significant branch of modern statistics and equips readers with the conceptual and technical skills necessary to select and apply the appropriate procedures for any given sit

  2. Nonparametric statistical methods using R

    CERN Document Server

    Kloke, John

    2014-01-01

    A Practical Guide to Implementing Nonparametric and Rank-Based Procedures. Nonparametric Statistical Methods Using R covers traditional nonparametric methods and rank-based analyses, including estimation and inference for models ranging from simple location models to general linear and nonlinear models for uncorrelated and correlated responses. The authors emphasize applications and statistical computation. They illustrate the methods with many real and simulated data examples using R, including the packages Rfit and npsm. The book first gives an overview of the R language and basic statistical c

  3. Astronomical Methods for Nonparametric Regression

    Science.gov (United States)

    Steinhardt, Charles L.; Jermyn, Adam

    2017-01-01

    I will discuss commonly used techniques for nonparametric regression in astronomy. We find that several of them, particularly running averages and running medians, are generically biased, asymmetric between dependent and independent variables, and perform poorly in recovering the underlying function, even when errors are present only in one variable. We then examine less-commonly used techniques such as Multivariate Adaptive Regression Splines and Boosted Trees and find them superior in bias, asymmetry, and variance both theoretically and in practice under a wide range of numerical benchmarks. In this context the chief advantage of the common techniques is runtime, which even for large datasets is now measured in microseconds compared with milliseconds for the more statistically robust techniques. This points to a tradeoff between bias, variance, and computational resources which in recent years has shifted heavily in favor of the more advanced methods, primarily driven by Moore's Law. Along these lines, we also propose a new algorithm which has better overall statistical properties than all techniques examined thus far, at the cost of significantly worse runtime, in addition to providing guidance on choosing the nonparametric regression technique most suitable to any specific problem. We then examine the more general problem of errors in both variables and provide a new algorithm which performs well in most cases and lacks the clear asymmetry of existing non-parametric methods, which fail to account for errors in both variables.
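
    The bias of windowed smoothers described above is easy to reproduce. As a hedged illustration (not the authors' benchmark code), a running average of a curved function is systematically pulled below a local maximum, because the window mean of a concave function is less than its central value:

```python
import numpy as np

def running_mean(y, w):
    # Moving average with a flat window of width w (a common, simple smoother).
    return np.convolve(y, np.ones(w) / w, mode="valid")

x = np.linspace(-1.0, 1.0, 201)    # grid step 0.01
y = -x**2                          # concave function with true maximum 0 at x = 0

smoothed = running_mean(y, 11)
print(y.max())                     # true peak value (essentially 0)
print(smoothed.max())              # smoothed peak sits strictly below 0
```

The same effect flips sign in troughs, which is one source of the generic bias the abstract refers to.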

  4. Why preferring parametric forecasting to nonparametric methods?

    Science.gov (United States)

    Jabot, Franck

    2015-05-07

    A recent series of papers by Charles T. Perretti and collaborators have shown that nonparametric forecasting methods can outperform parametric methods in noisy nonlinear systems. Such a situation can arise because of two main reasons: the instability of parametric inference procedures in chaotic systems which can lead to biased parameter estimates, and the discrepancy between the real system dynamics and the modeled one, a problem that Perretti and collaborators call "the true model myth". Should ecologists go on using the demanding parametric machinery when trying to forecast the dynamics of complex ecosystems? Or should they rely on the elegant nonparametric approach that appears so promising? It will be here argued that ecological forecasting based on parametric models presents two key comparative advantages over nonparametric approaches. First, the likelihood of parametric forecasting failure can be diagnosed thanks to simple Bayesian model checking procedures. Second, when parametric forecasting is diagnosed to be reliable, forecasting uncertainty can be estimated on virtual data generated with the fitted to data parametric model. In contrast, nonparametric techniques provide forecasts with unknown reliability. This argumentation is illustrated with the simple theta-logistic model that was previously used by Perretti and collaborators to make their point. It should convince ecologists to stick to standard parametric approaches, until methods have been developed to assess the reliability of nonparametric forecasting. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. portfolio optimization based on nonparametric estimation methods

    Directory of Open Access Journals (Sweden)

    mahsa ghandehari

    2017-03-01

    One of the major issues investors face in capital markets is deciding which stocks to invest in and how to select an optimal portfolio. This process is carried out through the assessment of risk and expected return. In the portfolio selection problem, if the assets' expected returns are normally distributed, variance and standard deviation are used as the risk measure. But the expected returns on assets are not necessarily normal and sometimes differ dramatically from the normal distribution. This paper introduces conditional value at risk (CVaR) as a measure of risk in a nonparametric framework and, for a given expected return, derives the optimal portfolio; this method is compared with the linear programming method. The data used in this study consist of monthly returns of 15 companies selected from the top 50 companies in the Tehran Stock Exchange (selected during the winter of 1392), covering April 1388 to June 1393 (Iranian calendar). The results of this study show the superiority of the nonparametric method over the linear programming method; the nonparametric method is also much faster.
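
    The risk measure at the core of this record, conditional value at risk, has a simple nonparametric (historical) estimator: average the losses in the worst (1 − α) tail of the empirical return distribution. A minimal sketch, illustrative only (the function name and the confidence levels are assumptions, not the paper's code):

```python
import numpy as np

def historical_cvar(returns, alpha=0.95):
    """Nonparametric CVaR: the mean loss beyond the empirical alpha-quantile."""
    losses = -np.asarray(returns, dtype=float)   # losses are negated returns
    var = np.quantile(losses, alpha)             # historical Value at Risk
    return losses[losses >= var].mean()          # expected shortfall in the tail

# Ten monthly returns; at alpha = 0.90 the tail here is just the worst month.
r = np.array([0.02, 0.01, -0.03, 0.04, -0.10, 0.00, 0.03, -0.01, 0.02, -0.05])
print(historical_cvar(r, alpha=0.90))  # → 0.1 (the worst monthly loss)
```

In a portfolio-optimization setting this estimator would be evaluated on the weighted portfolio return series inside the optimizer, with no distributional assumption on returns.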

  6. Nonparametric methods in actigraphy: An update

    Directory of Open Access Journals (Sweden)

    Bruno S.B. Gonçalves

    2014-09-01

    Circadian rhythmicity in humans has been well studied using actigraphy, a method of measuring gross motor movement. As actigraphic technology continues to evolve, it is important for data analysis to keep pace with new variables and features. Our objective is to study the behavior of two variables, interdaily stability (IS) and intradaily variability (IV), to describe the rest-activity rhythm. Simulated data and actigraphy data of humans, rats, and marmosets were used in this study. We modified the method of calculation for IV and IS by modifying the time intervals of analysis. For each variable, we calculated the average value (IVm and ISm) for each time interval. Simulated data showed that (1) synchronization analysis depends on sample size, and (2) fragmentation is independent of the amplitude of the generated noise. We were able to obtain a significant difference in the fragmentation patterns of stroke patients using the IVm variable, while no difference was identified using the IV60 variable. Rhythmic synchronization of activity and rest was significantly higher in the young than in adults with Parkinson's when using the ISm variable; however, this difference was not seen using IS60. We propose an updated format to calculate rhythmic fragmentation, including two additional optional variables. These alternative methods of nonparametric analysis aim to more precisely detect sleep–wake cycle fragmentation and synchronization.

  7. Nonparametric methods in actigraphy: An update

    Science.gov (United States)

    Gonçalves, Bruno S.B.; Cavalcanti, Paula R.A.; Tavares, Gracilene R.; Campos, Tania F.; Araujo, John F.

    2014-01-01

    Circadian rhythmicity in humans has been well studied using actigraphy, a method of measuring gross motor movement. As actigraphic technology continues to evolve, it is important for data analysis to keep pace with new variables and features. Our objective is to study the behavior of two variables, interdaily stability (IS) and intradaily variability (IV), to describe the rest-activity rhythm. Simulated data and actigraphy data of humans, rats, and marmosets were used in this study. We modified the method of calculation for IV and IS by modifying the time intervals of analysis. For each variable, we calculated the average value (IVm and ISm) for each time interval. Simulated data showed that (1) synchronization analysis depends on sample size, and (2) fragmentation is independent of the amplitude of the generated noise. We were able to obtain a significant difference in the fragmentation patterns of stroke patients using the IVm variable, while no difference was identified using the IV60 variable. Rhythmic synchronization of activity and rest was significantly higher in the young than in adults with Parkinson's when using the ISm variable; however, this difference was not seen using IS60. We propose an updated format to calculate rhythmic fragmentation, including two additional optional variables. These alternative methods of nonparametric analysis aim to more precisely detect sleep–wake cycle fragmentation and synchronization. PMID:26483921
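
    The two nonparametric actigraphy variables named in this record have standard definitions in the literature: interdaily stability compares the variance of the average 24 h profile with the total variance, and intradaily variability compares the variance of successive differences with the total variance. A sketch under those standard formulas (the function name and the hourly sampling are assumptions, not this paper's code):

```python
import numpy as np

def is_iv(x, period=24):
    """Interdaily stability (IS) and intradaily variability (IV).

    x: 1-D activity series, one sample per hour; period: samples per day.
    """
    x = np.asarray(x, dtype=float)
    n, mean = len(x), x.mean()
    total_ss = ((x - mean) ** 2).sum()
    # Average 24 h profile: mean activity at each hour of the day.
    profile = np.array([x[h::period].mean() for h in range(period)])
    IS = n * ((profile - mean) ** 2).sum() / (period * total_ss)
    IV = n * (np.diff(x) ** 2).sum() / ((n - 1) * total_ss)
    return IS, IV

# A perfectly repeating rest-activity pattern gives IS = 1 (full synchronization);
# uncorrelated noise gives IV near 2 (maximal fragmentation).
day = np.array([0.0] * 8 + [100.0] * 16)
IS, IV = is_iv(np.tile(day, 30))
print(round(IS, 3))  # → 1.0
```

The IVm/ISm variants described above would average these quantities over a range of resampling intervals rather than using a single 60-minute bin.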

  8. Comparing parametric and nonparametric regression methods for panel data

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    We investigate and compare the suitability of parametric and non-parametric stochastic regression methods for analysing production technologies and the optimal firm size. Our theoretical analysis shows that the most commonly used functional forms in empirical production analysis, Cobb-Douglas and Translog, are unsuitable for analysing the optimal firm size. We show that the Translog functional form implies an implausible linear relationship between the (logarithmic) firm size and the elasticity of scale, where the slope is artificially related to the substitutability between the inputs... ...rejects both the Cobb-Douglas and the Translog functional form, while a recently developed nonparametric kernel regression method with a fully nonparametric panel data specification delivers plausible results. On average, the nonparametric regression results are similar to results that are obtained from...

  9. Comparing parametric and nonparametric regression methods for panel data

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    We investigate and compare the suitability of parametric and non-parametric stochastic regression methods for analysing production technologies and the optimal firm size. Our theoretical analysis shows that the most commonly used functional forms in empirical production analysis, Cobb-Douglas and Translog, are unsuitable for analysing the optimal firm size. We show that the Translog functional form implies an implausible linear relationship between the (logarithmic) firm size and the elasticity of scale, where the slope is artificially related to the substitutability between the inputs... ...The practical applicability of the parametric and non-parametric regression methods is scrutinised and compared by an empirical example: we analyse the production technology and investigate the optimal size of Polish crop farms based on a firm-level balanced panel data set. A nonparametric specification test...

  10. Non-parametric versus parametric methods in environmental sciences

    Directory of Open Access Journals (Sweden)

    Muhammad Riaz

    2016-01-01

    This report highlights the importance of considering the background assumptions required for the analysis of real datasets in different disciplines. We provide a comparative discussion of parametric methods (which depend on distributional assumptions, such as normality) relative to non-parametric methods (which are free from many distributional assumptions). We have chosen a real dataset from environmental sciences (one of the application areas). The findings may be extended to other disciplines in the same spirit.
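
    The parametric-versus-nonparametric contrast drawn in this record can be made concrete with a distribution-free permutation test, which needs no normality assumption: shuffle the pooled sample and ask how often a mean difference as large as the observed one arises by chance. A hedged sketch (not the report's own analysis or dataset):

```python
import numpy as np

def permutation_pvalue(a, b, n_perm=2000, seed=0):
    """Two-sided permutation test for a difference in means; no normality needed."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([a, b])
    observed = abs(a.mean() - b.mean())
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly relabel the observations
        hits += abs(pooled[: len(a)].mean() - pooled[len(a):].mean()) >= observed
    return hits / n_perm

rng = np.random.default_rng(42)
a = rng.normal(0.0, 1.0, size=20)   # e.g. measurements at site A
b = rng.normal(3.0, 1.0, size=20)   # site B, clearly shifted upward
print(permutation_pvalue(a, b))     # small p-value: the shift is detected
```

Because the reference distribution is built from the data themselves, the test remains valid for the skewed, heavy-tailed variables common in environmental datasets.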

  11. International Conference on Robust Rank-Based and Nonparametric Methods

    CERN Document Server

    McKean, Joseph

    2016-01-01

    The contributors to this volume include many of the distinguished researchers in this area. Many of these scholars have collaborated with Joseph McKean to develop underlying theory for these methods, obtain small sample corrections, and develop efficient algorithms for their computation. The papers cover the scope of the area, including robust nonparametric rank-based procedures through Bayesian and big data rank-based analyses. Areas of application include biostatistics and spatial areas. Over the last 30 years, robust rank-based and nonparametric methods have developed considerably. These procedures generalize traditional Wilcoxon-type methods for one- and two-sample location problems. Research into these procedures has culminated in complete analyses for many of the models used in practice including linear, generalized linear, mixed, and nonlinear models. Settings are both multivariate and univariate. With the development of R packages in these areas, computation of these procedures is easily shared with r...

  12. Using non-parametric methods in econometric production analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    Econometric estimation of production functions is one of the most common methods in applied economic production analysis. These studies usually apply parametric estimation techniques, which obligate the researcher to specify the functional form of the production function. Most often, the Cobb-Douglas or the Translog production function is used. However, this involves the risk of specifying a functional form that is not similar to the "true" relationship between the inputs and the output, which might result in biased estimation results—including measures that are of interest to applied economists, such as elasticities. Therefore, we propose to use nonparametric econometric methods. First, they can be applied to verify the functional form used in parametric estimations of production functions. Second, they can be directly used... ...A non-parametric specification test shows that neither the Cobb-Douglas function nor the Translog function is consistent with the "true" relationship between the inputs and the output in our data set. We solve this problem by using non-parametric regression. This approach delivers reasonable results, which are on average not too different from the results of the parametric...

  13. Using non-parametric methods in econometric production analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    2012-01-01

    Econometric estimation of production functions is one of the most common methods in applied economic production analysis. These studies usually apply parametric estimation techniques, which obligate the researcher to specify a functional form of the production function, of which the Cobb... ...parameter estimates, but also in biased measures which are derived from the parameters, such as elasticities. Therefore, we propose to use non-parametric econometric methods. First, these can be applied to verify the functional form used in parametric production analysis. Second, they can be directly used... ...by investigating the relationship between the elasticity of scale and the farm size. We use a balanced panel data set of 371 specialised crop farms for the years 2004-2007. A non-parametric specification test shows that neither the Cobb-Douglas function nor the Translog function is consistent with the "true...

  14. A Bayesian nonparametric method for prediction in EST analysis

    Directory of Open Access Journals (Sweden)

    Prünster Igor

    2007-09-01

    Background: Expressed sequence tag (EST) analyses are a fundamental tool for gene identification in organisms. Given a preliminary EST sample from a certain library, several statistical prediction problems arise. In particular, it is of interest to estimate how many new genes can be detected in a future EST sample of given size and also to determine the gene discovery rate: these estimates represent the basis for deciding whether to proceed with sequencing the library and, in case of a positive decision, a guideline for selecting the size of the new sample. Such information is also useful for establishing sequencing efficiency in experimental design and for measuring the degree of redundancy of an EST library. Results: In this work we propose a Bayesian nonparametric approach for tackling statistical problems related to EST surveys. In particular, we provide estimates for: (a) the coverage, defined as the proportion of unique genes in the library represented in the given sample of reads; (b) the number of new unique genes to be observed in a future sample; (c) the discovery rate of new genes as a function of the future sample size. The Bayesian nonparametric model we adopt conveys, in a statistically rigorous way, the available information into prediction. Our proposal has appealing properties over frequentist nonparametric methods, which become unstable when prediction is required for large future samples. EST libraries, previously studied with frequentist methods, are analyzed in detail. Conclusion: The Bayesian nonparametric approach we undertake yields valuable tools for gene capture and prediction in EST libraries. The estimators we obtain do not feature the kind of drawbacks associated with frequentist estimators and are reliable for any size of the additional sample.
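
    For orientation, the classical frequentist baseline related to estimate (a) is the Good-Turing sample coverage, 1 − n1/n, where n1 is the number of genes seen exactly once among n reads; it is this style of estimator whose instability for large extrapolations motivates the Bayesian nonparametric alternative. A minimal sketch (illustrative only, not the paper's estimator):

```python
from collections import Counter

def good_turing_coverage(reads):
    """Good-Turing sample coverage: 1 - (singleton genes) / (total reads)."""
    counts = Counter(reads)
    singletons = sum(1 for c in counts.values() if c == 1)
    return 1 - singletons / len(reads)

# Four reads hitting three genes; genes 'b' and 'c' are singletons.
print(good_turing_coverage(["a", "a", "b", "c"]))  # → 0.5
```

A low coverage estimate signals a redundancy-poor sample where further sequencing is likely to keep discovering new genes.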

  15. Digital spectral analysis parametric, non-parametric and advanced methods

    CERN Document Server

    Castanié, Francis

    2013-01-01

    Digital Spectral Analysis provides a single source that offers complete coverage of the spectral analysis domain. This self-contained work includes details on advanced topics that are usually presented in scattered sources throughout the literature. The theoretical principles necessary for the understanding of spectral analysis are discussed in the first four chapters: fundamentals, digital signal processing, estimation in spectral analysis, and time-series models. An entire chapter is devoted to the non-parametric methods most widely used in industry. High resolution methods a

  16. Parametric and nonparametric Granger causality testing: Linkages between international stock markets

    NARCIS (Netherlands)

    de Gooijer, J.G.; Sivarajasingham, S.

    2008-01-01

    This study investigates long-term linear and nonlinear causal linkages among eleven stock markets, six industrialized markets and five emerging markets of South-East Asia. We cover the period 1987-2006, taking into account the on-set of the Asian financial crisis of 1997. We first apply a test for t
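
    The linear half of such an analysis is a regression F-test: x Granger-causes y if lags of x significantly reduce the residual sum of squares of an autoregression of y. A self-contained sketch of that test statistic (illustrative only; the study's actual procedure, including its nonlinear nonparametric test, is more involved):

```python
import numpy as np

def granger_F(x, y, p=1):
    """F statistic for 'lags of x help predict y' beyond y's own p lags."""
    n = len(y)
    Yt = y[p:]
    lags_y = np.column_stack([y[p - i - 1 : n - i - 1] for i in range(p)])
    lags_x = np.column_stack([x[p - i - 1 : n - i - 1] for i in range(p)])
    Xr = np.hstack([np.ones((n - p, 1)), lags_y])   # restricted model
    Xu = np.hstack([Xr, lags_x])                    # unrestricted model
    rss = lambda X: ((Yt - X @ np.linalg.lstsq(X, Yt, rcond=None)[0]) ** 2).sum()
    df = (n - p) - Xu.shape[1]
    return ((rss(Xr) - rss(Xu)) / p) / (rss(Xu) / df)

# Simulated market pair where x drives y with one lag: the F statistic is huge.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
print(granger_F(x, y) > 10)  # → True
```

Nonparametric causality tests (e.g. Hiemstra-Jones-style statistics) are then used to pick up the nonlinear linkages that this linear F-test misses.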

  17. Using non-parametric methods in econometric production analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    2012-01-01

    Econometric estimation of production functions is one of the most common methods in applied economic production analysis. These studies usually apply parametric estimation techniques, which obligate the researcher to specify a functional form of the production function, of which the Cobb... ...parameter estimates, but also in biased measures which are derived from the parameters, such as elasticities. Therefore, we propose to use non-parametric econometric methods. First, these can be applied to verify the functional form used in parametric production analysis. Second, they can be directly used... ...to estimate production functions without the specification of a functional form. Therefore, they avoid possible misspecification errors due to the use of an unsuitable functional form. In this paper, we use parametric and non-parametric methods to identify the optimal size of Polish crop farms...

  18. Computing Economies of Scope Using Robust Partial Frontier Nonparametric Methods

    Directory of Open Access Journals (Sweden)

    Pedro Carvalho

    2016-03-01

    This paper proposes a methodology to examine economies of scope using the recent order-α nonparametric method. It allows us to investigate economies of scope by comparing the efficient order-α frontiers of firms that produce two or more goods with the efficient order-α frontiers of firms that produce only one good. To accomplish this, and because the order-α frontiers are irregular, we suggest linearizing them with the DEA estimator. The proposed methodology uses partial frontier nonparametric methods that are more robust than the traditional full frontier methods. Using a sample of 67 Portuguese water utilities for the period 2002-2008 and, also, a simulated sample, we demonstrate the usefulness of the approach adopted and show that the full frontier methods, used alone, would lead to different results. We found evidence of economies of scope in the simultaneous provision of water supply and wastewater services by water utilities in Portugal.
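
    For a single input and output, an order-α output frontier is simply the empirical α-quantile of output among firms using no more input than a given level, which is what makes it robust to outliers relative to the full (maximum-based) frontier. A toy sketch (the function name and the default α = 0.95 are assumptions; the paper works with multi-output utilities):

```python
import numpy as np

def order_alpha_frontier(x_obs, y_obs, x_grid, alpha=0.95):
    """Order-alpha frontier: alpha-quantile of y among firms with input <= x."""
    x_obs, y_obs = np.asarray(x_obs), np.asarray(y_obs)
    return np.array(
        [np.quantile(y_obs[x_obs <= x], alpha) if (x_obs <= x).any() else np.nan
         for x in x_grid]
    )

# With alpha = 1 the order-alpha frontier collapses to the usual full frontier.
x = [1.0, 2.0, 3.0, 4.0]
y = [1.0, 2.0, 3.0, 4.0]
print(order_alpha_frontier(x, y, [2.0, 4.0], alpha=1.0))  # → [2. 4.]
```

Choosing α < 1 lets a small share of atypically productive firms sit above the frontier instead of dragging it upward, which is the robustness property exploited in the paper.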

  19. Power of non-parametric linkage analysis in mapping genes contributing to human longevity in long-lived sib-pairs

    DEFF Research Database (Denmark)

    Tan, Qihua; Zhao, J H; Iachine, I

    2004-01-01

    This report investigates the power issue in applying the non-parametric linkage analysis of affected sib-pairs (ASP) [Kruglyak and Lander, 1995: Am J Hum Genet 57:439-454] to localize genes that contribute to human longevity using long-lived sib-pairs. Data were simulated by introducing a recently developed statistical model for measuring marker-longevity associations [Yashin et al., 1999: Am J Hum Genet 65:1178-1193], enabling direct power comparison between linkage and association approaches. The non-parametric linkage (NPL) scores estimated in the region harboring the causal allele are evaluated... in the case of a dominant effect. Although the power issue may depend heavily on the true genetic nature of maintaining survival, our study suggests that results from small-scale sib-pair investigations should be treated with caution, given the complexity of human longevity.

  20. Using non-parametric methods in econometric production analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    Econometric estimation of production functions is one of the most common methods in applied economic production analysis. These studies usually apply parametric estimation techniques, which obligate the researcher to specify the functional form of the production function. Most often, the Cobb-Douglas or the Translog production function is used. However, the specification of a functional form for the production function involves the risk of specifying a functional form that is not similar to the "true" relationship between the inputs and the output. This misspecification might result in biased estimation results—including measures that are of interest to applied economists, such as elasticities. Therefore, we propose to use nonparametric econometric methods. First, they can be applied to verify the functional form used in parametric estimations of production functions. Second, they can be directly used...

  1. A Unified Discussion on the Concept of Score Functions Used in the Context of Nonparametric Linkage Analysis

    Directory of Open Access Journals (Sweden)

    Lars Ängquist

    2008-01-01

    In this article we discuss nonparametric linkage (NPL) score functions within a broad and quite general framework. The main focus of the paper is the structure, derivation principles, and interpretations of the score function entity itself. We define and discuss several families of one-locus score function definitions, i.e. the implicit, explicit, and optimal ones. Some generalizations of and comments on the two-locus, unconditional and conditional, cases are included as well. Although this article mainly aims at serving as an overview, where the concept of score functions is put into a covering context, we generalize the noncentrality parameter (NCP) optimal score functions in Ängquist et al. (2007) to facilitate, through weighting, the incorporation of several plausible distinct genetic models. Since the genetic model itself is most often to some extent unknown, this permits weaker prior assumptions with respect to plausible true disease models without losing the property of NCP-optimality. Moreover, we discuss general assumptions and properties of score functions in the above sense. For instance, the concept of identical-by-descent (IBD) sharing structures and score function equivalence are discussed in some detail.

  2. A robust nonparametric method for quantifying undetected extinctions.

    Science.gov (United States)

    Chisholm, Ryan A; Giam, Xingli; Sadanandan, Keren R; Fung, Tak; Rheindt, Frank E

    2016-06-01

    How many species have gone extinct in modern times before being described by science? To answer this question, and thereby get a full assessment of humanity's impact on biodiversity, statistical methods that quantify undetected extinctions are required. Such methods have been developed recently, but they are limited by their reliance on parametric assumptions; specifically, they assume the pools of extant and undetected species decay exponentially, whereas real detection rates vary temporally with survey effort and real extinction rates vary with the waxing and waning of threatening processes. We devised a new, nonparametric method for estimating undetected extinctions. As inputs, the method requires only the first and last date at which each species in an ensemble was recorded. As outputs, the method provides estimates of the proportion of species that have gone extinct, detected, or undetected and, in the special case where the number of undetected extant species in the present day is assumed close to zero, of the absolute number of undetected extinct species. The main assumption of the method is that the per-species extinction rate is independent of whether a species has been detected or not. We applied the method to the resident native bird fauna of Singapore. Of 195 recorded species, 58 (29.7%) have gone extinct in the last 200 years. Our method projected that an additional 9.6 species (95% CI 3.4, 19.8) have gone extinct without first being recorded, implying a true extinction rate of 33.0% (95% CI 31.0%, 36.2%). We provide R code for implementing our method. Because our method does not depend on strong assumptions, we expect it to be broadly useful for quantifying undetected extinctions. © 2016 Society for Conservation Biology.

  3. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    Science.gov (United States)

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.
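
    The paper's idea, regressing the learned kernel onto a function space so it can be evaluated at unseen points, can be caricatured with kernel ridge regression on the columns of the nonparametric kernel matrix. This sketch uses a plain RBF base kernel and is only an illustration of the extension idea, not the authors' hyper-RKHS formulation:

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    # Gaussian (RBF) base kernel between row-vector sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def extend_kernel(X_train, K_learned, X_new, gamma=1.0, lam=1e-6):
    """Evaluate a learned (transductive) kernel matrix at out-of-sample points.

    Fits a ridge regression from the base kernel to each column of K_learned,
    then evaluates those regressions at X_new.
    """
    Kb = rbf(X_train, X_train, gamma)
    coef = np.linalg.solve(Kb + lam * np.eye(len(X_train)), K_learned)
    return rbf(X_new, X_train, gamma) @ coef

# Sanity check: extending the base kernel itself reproduces it at training points.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0], [3.0, 1.0]])
Kb = rbf(X, X)
print(np.allclose(extend_kernel(X, Kb, X), Kb, atol=1e-4))  # → True
```

The practical payoff is the one the abstract claims: predictions for points not in the original kernel matrix, enabling inductive rather than purely transductive learning.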

  4. Non-parametric and least squares Langley plot methods

    Directory of Open Access Journals (Sweden)

    P. W. Kiedron

    2015-04-01

    Langley plots are used to calibrate sun radiometers, primarily for measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer–Lambert–Beer law V = V0·e^(−τ·m), where a plot of ln(V) (voltage) vs. m (air mass) yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well at some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The eleven techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations, with no significant differences among them, when the time series of ln(V0) values are smoothed and interpolated with median and mean moving window filters.
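
    Under the Bouguer–Lambert–Beer law, calibration reduces to fitting a straight line to (m, ln V) pairs. A sketch of two of the estimator families the paper compares, ordinary least squares and a nonparametric Theil-Sen fit (median of pairwise slopes); the function names are assumptions, not the paper's code:

```python
import numpy as np
from itertools import combinations

def langley_ls(m, V):
    """Least-squares Langley calibration: fit ln V = ln V0 - tau*m."""
    slope, intercept = np.polyfit(m, np.log(V), 1)
    return np.exp(intercept), -slope            # (V0, tau)

def langley_theil_sen(m, V):
    """Nonparametric alternative: Theil-Sen median of pairwise slopes."""
    m, y = np.asarray(m, dtype=float), np.log(V)
    slope = np.median([(y[j] - y[i]) / (m[j] - m[i])
                       for i, j in combinations(range(len(m)), 2)])
    return np.exp(np.median(y - slope * m)), -slope

# Noise-free synthetic morning: V0 = 1.5, tau = 0.2; both fits recover them.
m = np.linspace(1.2, 6.0, 20)
V = 1.5 * np.exp(-0.2 * m)
print(langley_ls(m, V))
```

On clean data the two agree; the Theil-Sen fit is the kind of non-parametric estimator that stays stable when individual (m, V) points are contaminated by clouds or drift at difficult sites.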

  5. Some methods for blindfolded record linkage

    Directory of Open Access Journals (Sweden)

    Christen Peter

    2004-06-01

    Full Text Available Abstract Background The linkage of records which refer to the same entity in separate data collections is a common requirement in public health and biomedical research. Traditionally, record linkage techniques have required that all the identifying data in which links are sought be revealed to at least one party, often a third party. This necessarily invades personal privacy and requires complete trust in the intentions of that party and their ability to maintain security and confidentiality. Dusserre, Quantin, Bouzelat and colleagues have demonstrated that it is possible to use secure one-way hash transformations to carry out follow-up epidemiological studies without any party having to reveal identifying information about any of the subjects – a technique which we refer to as "blindfolded record linkage". A limitation of their method is that only exact comparisons of values are possible, although phonetic encoding of names and other strings can be used to allow for some types of typographical variation and data errors. Methods A method is described which permits the calculation of a general similarity measure, the n-gram score, without having to reveal the data being compared, albeit at some cost in computation and data communication. This method can be combined with public key cryptography and automatic estimation of linkage model parameters to create an overall system for blindfolded record linkage. Results The system described offers good protection against misdeeds or security failures by any one party, but remains vulnerable to collusion between or simultaneous compromise of two or more parties involved in the linkage operation. In order to reduce the likelihood of this, the use of last-minute allocation of tasks to substitutable servers is proposed. Proof-of-concept computer programmes written in the Python programming language are provided to illustrate the similarity comparison protocol. 
Conclusion Although the protocols described in
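The n-gram similarity idea at the core of this protocol can be illustrated in a few lines of Python. This is a deliberately simplified sketch, not the paper's protocol: it uses a single shared salt and a plain Dice coefficient on sets of hashed bigrams, whereas the actual scheme splits the hash sets across multiple parties and adds public key cryptography. The names and salt below are hypothetical.

```python
import hashlib

def hashed_bigrams(s, salt=b"shared-secret"):
    """Return the set of salted SHA-256 digests of a string's bigrams."""
    s = s.lower()
    grams = {s[i:i + 2] for i in range(len(s) - 1)}
    return {hashlib.sha256(salt + g.encode()).hexdigest() for g in grams}

def dice_score(hashes_a, hashes_b):
    """Dice coefficient computed on the hash sets alone."""
    if not hashes_a or not hashes_b:
        return 0.0
    return 2 * len(hashes_a & hashes_b) / (len(hashes_a) + len(hashes_b))

# Two parties hash their values locally; only digests are exchanged.
a = hashed_bigrams("christen")
b = hashed_bigrams("christon")
print(round(dice_score(a, b), 3))
```

Because only salted digests are exchanged, the party computing the score never sees the underlying names.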

  6. Methods for genetic linkage analysis using trisomies

    Energy Technology Data Exchange (ETDEWEB)

    Feingold, E. [Emory Univ. School of Public Health, Atlanta, GA (United States); Lamb, N.E.; Sherman, S.L. [Emory Univ., Atlanta, GA (United States)

    1995-02-01

Certain genetic disorders are rare in the general population, but more common in individuals with specific trisomies. Examples of this include leukemia and duodenal atresia in trisomy 21. This paper presents a linkage analysis method for using trisomic individuals to map genes for such traits. It is based on a very general gene-specific dosage model that posits that the trait is caused by specific effects of different alleles at one or a few loci and that duplicate copies of "susceptibility" alleles inherited from the nondisjoining parent give increased likelihood of having the trait. Our mapping method is similar to identity-by-descent-based mapping methods using affected relative pairs and also to methods for mapping recessive traits using inbred individuals by looking for markers with greater than expected homozygosity by descent. In the trisomy case, one would take trisomic individuals and look for markers with greater than expected homozygosity in the chromosomes inherited from the nondisjoining parent. We present statistical methods for performing such a linkage analysis, including a test for linkage to a marker, a method for estimating the distance from the marker to the trait gene, a confidence interval for that distance, and methods for computing power and sample sizes. We also resolve some practical issues involved in implementing the methods, including how to use partially informative markers and how to test candidate genes. 20 refs., 5 figs., 1 tab.
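The core test, checking for greater than expected homozygosity in the chromosomes inherited from the nondisjoining parent, can be sketched as an exact one-sided binomial test. The null rate of 0.5 and the counts below are placeholders: the paper derives the null homozygosity rate from the meiotic stage and recombination, so this is an illustration of the idea only.

```python
from math import comb

def binom_sf(k, n, p):
    """Exact upper tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def excess_homozygosity_test(n_homozygous, n_trisomic, p_null=0.5):
    """One-sided test: is marker homozygosity (in the two chromosomes
    from the nondisjoining parent) higher than expected under no linkage?
    p_null is a placeholder null homozygosity rate."""
    return binom_sf(n_homozygous, n_trisomic, p_null)

# e.g. 38 of 50 trisomic individuals homozygous at the marker
print(excess_homozygosity_test(38, 50))
```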

  7. Model-based methods for linkage analysis.

    Science.gov (United States)

    Rice, John P; Saccone, Nancy L; Corbett, Jonathan

    2008-01-01

    The logarithm of an odds ratio (LOD) score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential so that pedigrees or LOD curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders where the maximum LOD score statistic shares some of the advantages of the traditional LOD score approach, but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the LOD score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.

  8. Methods for genetic linkage analysis using trisomies

    Energy Technology Data Exchange (ETDEWEB)

    Feingold, E.; Lamb, N.E.; Sherman, S.L. [Emory Univ., Atlanta, GA (United States)

    1994-09-01

Certain genetic disorders (e.g. congenital cataracts, duodenal atresia) are rare in the general population, but more common in people with Down's syndrome. We present a method for using individuals with trisomy 21 to map genes for such traits. Our methods are analogous to methods for mapping autosomal dominant traits using affected relative pairs by looking for markers with greater than expected identity-by-descent. In the trisomy case, one would take trisomic individuals and look for markers with greater than expected reduction to homozygosity in the chromosomes inherited from the non-disjoining parent. We present statistical methods for performing such a linkage analysis, including a test for linkage to a marker, a method for estimating the distance from the marker to the gene, a confidence interval for that distance, and methods for computing power and sample sizes. The methods are described in the context of a gene-dosage model for the etiology of the disorder, but can be extended to other models. We also resolve some practical issues involved in implementing the methods, including how to use partially informative markers, how to test candidate genes, and how to handle the effect of reduced recombination associated with maternal meiosis I non-disjunction.

  9. Yield Stability of Maize Hybrids Evaluated in Maize Regional Trials in Southwestern China Using Nonparametric Methods

    Institute of Scientific and Technical Information of China (English)

    LIU Yong-jian; DUAN Chuan; TIAN Meng-liang; HU Er-liang; HUANG Yu-bi

    2010-01-01

Analysis of multi-environment trials (METs) of crops for the evaluation and recommendation of varieties is an important issue in plant breeding research. Evaluating both stability of performance and high yield is essential in MET analyses. The objective of the present investigation was to compare 11 nonparametric stability statistics and apply nonparametric tests for genotype-by-environment interaction (GEI) to 14 maize (Zea mays L.) genotypes grown at 25 locations in southwestern China during 2005. Results of nonparametric tests of GEI and a combined ANOVA across locations showed both crossover and noncrossover GEI, and that genotypes varied highly significantly in yield. The results of principal component analysis, correlation analysis of nonparametric statistics, and yield indicated that the nonparametric statistics grouped into four distinct classes that corresponded to different agronomic and biological concepts of stability. Furthermore, high values of TOP and low values of rank-sum were associated with high mean yield, but the other nonparametric statistics were not positively correlated with mean yield. Therefore, only the rank-sum and TOP methods would be useful for simultaneous selection for high yield and stability. These two statistics recommended JY686 and HX168 as desirable and ND108, CM12, CN36, and NK6661 as undesirable genotypes.
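A simplified sketch of the rank-based machinery behind such statistics: genotypes are ranked within each environment, and the TOP statistic is the share of environments in which a genotype ranks near the top (shown here for an arbitrary cutoff, not necessarily the top third used in the original definition). The yield figures are invented for illustration.

```python
def env_ranks(yields):
    """yields: dict genotype -> list of yields, one per environment.
    Rank genotypes within each environment (1 = highest yield)."""
    genos = list(yields)
    n_env = len(next(iter(yields.values())))
    ranks = {g: [] for g in genos}
    for e in range(n_env):
        for rank, g in enumerate(sorted(genos, key=lambda gg: -yields[gg][e]), 1):
            ranks[g].append(rank)
    return ranks

def top_statistic(ranks, top=1):
    """Share of environments where a genotype ranks within the top `top`."""
    return {g: sum(r <= top for r in rs) / len(rs) for g, rs in ranks.items()}

# Hypothetical yields for three genotypes over three environments
yields = {"A": [9.1, 8.7, 9.4], "B": [8.2, 8.9, 8.1], "C": [7.5, 7.9, 7.7]}
r = env_ranks(yields)
print(top_statistic(r))
```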

  10. Linkages among U.S. Treasury Bond Yields, Commodity Futures and Stock Market Implied Volatility: New Nonparametric Evidence

    Directory of Open Access Journals (Sweden)

    Vychytilova Jana

    2015-09-01

Full Text Available This paper aims to explore specific cross-asset market correlations over the past fifteen-year period, from January 04, 1999 till April 01, 2015, and within four sub-phases covering both the crisis and the non-crisis periods. On the basis of multivariate statistical methods, we focus on investigating relations between selected well-known market indices: U.S. treasury bond yields, represented by the 30-year treasury yield index (TYX) and the 10-year treasury yield (TNX); commodity futures, represented by the TR/J CRB; and the implied volatility of the S&P 500 index, the VIX. We estimate relative logarithmic returns by using monthly close prices adjusted for dividends and splits and run normality and correlation analyses. This paper indicates that the TR/J CRB can be adequately modeled by a normal distribution, whereas the rest of the benchmarks do not come from a normal distribution. This paper, inter alia, points out some evidence of a statistically significant negative relationship between bond yields and the VIX in the past fifteen years and a statistically significant negative linkage between the TR/J CRB and the VIX since 2009. In rather general terms, this paper thereafter supports the a priori idea that financial markets are interconnected. Such knowledge can be beneficial for building and testing accurate financial market models, and particularly for understanding and recognizing market cycles.
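The first analysis step described here, turning adjusted close prices into relative logarithmic returns and correlating them, can be sketched as follows. The price series are invented placeholders, not the TYX/VIX data used in the paper.

```python
import math

def log_returns(prices):
    """Relative logarithmic returns from a series of close prices."""
    return [math.log(b / a) for a, b in zip(prices, prices[1:])]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical monthly closes for a yield index and a volatility index
tyx = [30.1, 29.4, 28.8, 29.9, 31.2, 30.5]
vix = [15.2, 16.8, 17.5, 15.9, 14.1, 15.0]
r = pearson(log_returns(tyx), log_returns(vix))
print(r)
```

With these invented series the two sets of returns move in opposite directions, so the correlation comes out negative, mirroring the sign of the linkage the paper reports.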

  11. Modern nonparametric, robust and multivariate methods festschrift in honour of Hannu Oja

    CERN Document Server

    Taskinen, Sara

    2015-01-01

    Written by leading experts in the field, this edited volume brings together the latest findings in the area of nonparametric, robust and multivariate statistical methods. The individual contributions cover a wide variety of topics ranging from univariate nonparametric methods to robust methods for complex data structures. Some examples from statistical signal processing are also given. The volume is dedicated to Hannu Oja on the occasion of his 65th birthday and is intended for researchers as well as PhD students with a good knowledge of statistics.

  12. Comparison of reliability techniques of parametric and non-parametric method

    Directory of Open Access Journals (Sweden)

    C. Kalaiselvan

    2016-06-01

Full Text Available Reliability of a product or system is the probability that the product performs its intended function adequately for the stated period of time under stated operating conditions; it is a function of time. The widely used nano-ceramic capacitors C0G and X7R are used in this reliability study to generate time-to-failure (TTF) data. The time-to-failure data are identified by Accelerated Life Testing (ALT) and Highly Accelerated Life Testing (HALT). The tests are conducted at high stress levels to generate more failures within a short interval of time. Both parametric and non-parametric reliability methods are used to convert results under accelerated conditions to actual operating conditions. In this paper, a comparative study has been done of parametric and non-parametric methods for identifying the failure data: the Weibull distribution is used for the parametric method, while the Kaplan-Meier and Simple Actuarial methods are used for the non-parametric method. The mean time to failure (MTTF) identified under accelerated conditions is essentially the same for the parametric and non-parametric methods, differing only by a small relative deviation.
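A minimal sketch of the non-parametric side, the Kaplan-Meier product-limit estimator, applied to invented time-to-failure data with censoring (the study's actual capacitor TTF data are not reproduced here):

```python
def kaplan_meier(times, events):
    """times: times to failure; events: 1 = failure, 0 = censored.
    Returns (time, S(t)) pairs at each distinct failure time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        same = [e for tt, e in data if tt == t]  # all subjects at time t
        d = sum(same)                            # failures at time t
        if d:
            s *= 1 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= len(same)                   # drop failed and censored
        i += len(same)
    return curve

# Hypothetical TTF data (hours); 0 marks a censored observation
times = [10, 20, 20, 30, 40, 50]
events = [1, 1, 0, 1, 0, 1]
print(kaplan_meier(times, events))
```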

  13. A non-parametric method for correction of global radiation observations

    DEFF Research Database (Denmark)

    Bacher, Peder; Madsen, Henrik; Perers, Bengt;

    2013-01-01

    in the observations are corrected. These are errors such as: tilt in the leveling of the sensor, shadowing from surrounding objects, clipping and saturation in the signal processing, and errors from dirt and wear. The method is based on a statistical non-parametric clear-sky model which is applied to both...

  14. Nonparametric methods for drought severity estimation at ungauged sites

    Science.gov (United States)

    Sadri, S.; Burn, D. H.

    2012-12-01

The objective in frequency analysis is, given extreme events such as drought severity or duration, to estimate the relationship between that event and the associated return periods at a catchment. Neural networks and other artificial intelligence approaches in function estimation and regression analysis are relatively new techniques in engineering, providing an attractive alternative to traditional statistical models. There are, however, few applications of neural networks and support vector machines in the area of severity quantile estimation for drought frequency analysis. In this paper, we compare three methods for this task: multiple linear regression, radial basis function neural networks, and least squares support vector regression (LS-SVR). The area selected for this study includes 32 catchments in the Canadian Prairies. From each catchment drought severities are extracted and fitted to a Pearson type III distribution, and these serve as the observed values. For each method-duration pair, we use a jackknife algorithm to produce estimated values at each site. The results from these three approaches are compared and analyzed, and it is found that LS-SVR provides the best quantile estimates and extrapolating capacity.

  15. A Comparison of Parametric and Non-Parametric Methods Applied to a Likert Scale.

    Science.gov (United States)

    Mircioiu, Constantin; Atkinson, Jeffrey

    2017-05-10

A trenchant and passionate dispute over the use of parametric versus non-parametric methods for the analysis of Likert scale ordinal data has raged for the past eight decades. The answer is not a simple "yes" or "no" but is related to hypotheses, objectives, risks, and paradigms. In this paper, we took a pragmatic approach. We applied both types of methods to the analysis of actual Likert data on responses from different professional subgroups of European pharmacists regarding competencies for practice. Results obtained show that with "large" (>15) numbers of responses and similar (but clearly not normal) distributions from different subgroups, parametric and non-parametric analyses give in almost all cases the same significant or non-significant results for inter-subgroup comparisons. Parametric methods were more discriminant in the cases of non-similar conclusions. Considering that the largest differences in opinions occurred in the upper part of the 4-point Likert scale (ranks 3 "very important" and 4 "essential"), a "score analysis" based on this part of the data was undertaken. This transformation of the ordinal Likert data into binary scores produced a graphical representation that was visually easier to understand as differences were accentuated. In conclusion, in this case of Likert ordinal data with high response rates, restricting the analysis to non-parametric methods leads to a loss of information. The addition of parametric methods, graphical analysis, analysis of subsets, and transformation of data leads to more in-depth analyses.
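The parametric/non-parametric pairing at issue here can be illustrated with Welch's t statistic versus the Mann-Whitney U statistic (computed with midranks, since Likert data are heavily tied). The ratings below are invented, not the pharmacists' data; with similar distributions both statistics point the same way, echoing the paper's finding.

```python
import math
from statistics import mean, variance

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for group x, using midranks for ties."""
    pooled = sorted(x + y)
    rank_of = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2  # midrank of the tied block
        i = j
    r1 = sum(rank_of[v] for v in x)
    return r1 - len(x) * (len(x) + 1) / 2

def welch_t(x, y):
    """Welch's t statistic, the parametric counterpart."""
    se = math.sqrt(variance(x) / len(x) + variance(y) / len(y))
    return (mean(x) - mean(y)) / se

# Hypothetical 4-point Likert ratings from two subgroups
likert_a = [3, 4, 4, 4, 3, 4, 4, 4]
likert_b = [2, 3, 3, 4, 2, 3, 3, 3]
print(mann_whitney_u(likert_a, likert_b), round(welch_t(likert_a, likert_b), 2))
```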

  16. The Use of Nonparametric Kernel Regression Methods in Econometric Production Analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard

    This PhD thesis addresses one of the fundamental problems in applied econometric analysis, namely the econometric estimation of regression functions. The conventional approach to regression analysis is the parametric approach, which requires the researcher to specify the form of the regression...... to avoid this problem. The main objective is to investigate the applicability of the nonparametric kernel regression method in applied production analysis. The focus of the empirical analyses included in this thesis is the agricultural sector in Poland. Data on Polish farms are used to investigate...... practically and politically relevant problems and to illustrate how nonparametric regression methods can be used in applied microeconomic production analysis both in panel data and cross-section data settings. The thesis consists of four papers. The first paper addresses problems of parametric...

  17. Nonparametric Kernel Smoothing Methods. The sm library in Xlisp-Stat

    Directory of Open Access Journals (Sweden)

    Luca Scrucca

    2001-06-01

Full Text Available In this paper we describe the Xlisp-Stat version of the sm library, a software package for applying nonparametric kernel smoothing methods. The original version of the sm library was written by Bowman and Azzalini in S-Plus, and it is documented in their book Applied Smoothing Techniques for Data Analysis (1997). This is also the main reference for a complete description of the statistical methods implemented. The sm library provides kernel smoothing methods for obtaining nonparametric estimates of density functions and regression curves for different data structures. Smoothing techniques may be employed as a descriptive graphical tool for exploratory data analysis. Furthermore, they can also serve inferential purposes as, for instance, when a nonparametric estimate is used for checking a proposed parametric model. The Xlisp-Stat version includes some extensions to the original sm library, mainly in the area of local likelihood estimation for generalized linear models. The Xlisp-Stat version of the sm library has been written following an object-oriented approach. This should allow experienced Xlisp-Stat users to easily implement their own methods and new research ideas into the built-in prototypes.
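The kind of kernel regression estimate such a library provides can be sketched, in Python rather than Xlisp-Stat, as a Nadaraya-Watson estimator with a Gaussian kernel, a basic instance of the smoothing methods described in Bowman and Azzalini's book. Data and bandwidth are illustrative.

```python
import math

def nw_regression(x_train, y_train, x0, h=1.0):
    """Nadaraya-Watson estimate at x0: kernel-weighted average of y,
    with a Gaussian kernel of bandwidth h."""
    w = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in x_train]
    return sum(wi * yi for wi, yi in zip(w, y_train)) / sum(w)

# Noisy observations of a roughly linear relationship
xs = [0, 1, 2, 3, 4, 5]
ys = [0.1, 0.9, 2.1, 2.9, 4.2, 4.8]
print(round(nw_regression(xs, ys, 2.5, h=0.5), 2))
```

The bandwidth h controls the bias-variance trade-off: small h tracks the data closely, large h gives a flatter, smoother curve.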

  18. The Use of Nonparametric Kernel Regression Methods in Econometric Production Analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard

    This PhD thesis addresses one of the fundamental problems in applied econometric analysis, namely the econometric estimation of regression functions. The conventional approach to regression analysis is the parametric approach, which requires the researcher to specify the form of the regression...... function. However, the a priori specification of a functional form involves the risk of choosing one that is not similar to the “true” but unknown relationship between the regressors and the dependent variable. This problem, known as parametric misspecification, can result in biased parameter estimates...... and nonparametric estimations of production functions in order to evaluate the optimal firm size. The second paper discusses the use of parametric and nonparametric regression methods to estimate panel data regression models. The third paper analyses production risk, price uncertainty, and farmers' risk preferences...

  19. A nonparametric approach to calculate critical micelle concentrations: the local polynomial regression method.

    Science.gov (United States)

    López Fontán, J L; Costa, J; Ruso, J M; Prieto, G; Sarmiento, F

    2004-02-01

The application of a statistical method, the local polynomial regression method (LPRM), based on a nonparametric estimation of the regression function, to determine the critical micelle concentration (cmc) is presented. The method is extremely flexible because it does not impose any parametric model on the underlying structure of the data but rather allows the data to speak for themselves. Good concordance of cmc values with those obtained by other methods was found for systems in which the variation of a measured physical property with concentration showed an abrupt change. When this variation was slow, discrepancies between the values obtained by LPRM and other methods were found.

  20. A nonparametric approach to calculate critical micelle concentrations: the local polynomial regression method

    Energy Technology Data Exchange (ETDEWEB)

    Lopez Fontan, J.L.; Costa, J.; Ruso, J.M.; Prieto, G. [Dept. of Applied Physics, Univ. of Santiago de Compostela, Santiago de Compostela (Spain); Sarmiento, F. [Dept. of Mathematics, Faculty of Informatics, Univ. of A Coruna, A Coruna (Spain)

    2004-02-01

The application of a statistical method, the local polynomial regression method (LPRM), based on a nonparametric estimation of the regression function, to determine the critical micelle concentration (cmc) is presented. The method is extremely flexible because it does not impose any parametric model on the underlying structure of the data but rather allows the data to speak for themselves. Good concordance of cmc values with those obtained by other methods was found for systems in which the variation of a measured physical property with concentration showed an abrupt change. When this variation was slow, discrepancies between the values obtained by LPRM and other methods were found. (orig.)
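A sketch of the underlying estimator, local polynomial regression of degree 1 (a locally weighted least-squares line fit): applied to property-vs-concentration data with an abrupt slope change of the kind described, it smooths the curve without imposing a global parametric form. The data and bandwidth below are invented for illustration; locating the cmc from the smoothed curve would be a further step.

```python
import math

def local_linear(x_train, y_train, x0, h=1.0):
    """Local polynomial (degree 1) regression estimate at x0:
    a weighted least-squares line fit with Gaussian kernel weights."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in x_train]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x_train)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y_train)) / sw
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x_train))
    sxy = sum(wi * (xi - xbar) * (yi - ybar)
              for wi, xi, yi in zip(w, x_train, y_train))
    return ybar + (sxy / sxx) * (x0 - xbar)

# Hypothetical property-vs-concentration data with a slope break at 5
xs = [0.5 * i for i in range(21)]
ys = [2 * x if x <= 5 else 10 + 0.5 * (x - 5) for x in xs]
smoothed = [local_linear(xs, ys, x, h=0.6) for x in xs]
```

Unlike a running average, a local linear fit reproduces straight-line segments exactly away from the break, which is why it handles abrupt changes well.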

  1. Non-parametric change-point method for differential gene expression detection.

    Directory of Open Access Journals (Sweden)

    Yao Wang

Full Text Available BACKGROUND: We propose a non-parametric method, named Non-Parametric Change Point Statistic (NPCPS for short), which uses a single equation for detecting differential gene expression (DGE) in microarray data. NPCPS is based on change point theory to provide effective DGE detection. METHODOLOGY: NPCPS uses the data distribution of the normal samples as input, and detects DGE in the cancer samples by locating the change point of the gene expression profile. An estimate of the change point position generated by NPCPS enables the identification of the samples containing DGE. Monte Carlo simulation and an ROC study were applied to examine the detection accuracy of NPCPS, and an experiment on real microarray data of breast cancer was carried out to compare NPCPS with other methods. CONCLUSIONS: The simulation study indicated that NPCPS was more effective for detecting DGE in the cancer subset compared with five parametric methods and one non-parametric method. When there were more than 8 cancer samples containing DGE, the type I error of NPCPS was below 0.01. Experiment results showed both good accuracy and reliability of NPCPS. Out of the 30 top genes ranked by NPCPS, 16 genes were reported as relevant to cancer. Correlations between the detection results of NPCPS and the compared methods were less than 0.05, while between the other methods the values were from 0.20 to 0.84. This indicates that NPCPS works on different features and thus provides DGE identification from a distinct perspective compared with the other mean- or median-based methods.

  2. An adaptive nonparametric method in benchmark analysis for bioassay and environmental studies.

    Science.gov (United States)

    Bhattacharya, Rabi; Lin, Lizhen

    2010-12-01

We present a novel nonparametric method for bioassay and benchmark analysis in risk assessment, which averages isotonic MLEs based on disjoint subgroups of dosages. The asymptotic theory for the methodology is derived, showing that the MISEs (mean integrated squared errors) of the estimates of both the dose-response curve F and its inverse F^(-1) achieve the optimal rate O(N^(-4/5)). Also, we compute the asymptotic distribution of the estimate of the effective dosage ζ_p = F^(-1)(p), which is shown to have an optimally small asymptotic variance.

  3. A nonparametric statistical method for image segmentation using information theory and curve evolution.

    Science.gov (United States)

    Kim, Junmo; Fisher, John W; Yezzi, Anthony; Cetin, Müjdat; Willsky, Alan S

    2005-10-01

In this paper, we present a new information-theoretic approach to image segmentation. We cast the segmentation problem as the maximization of the mutual information between the region labels and the image pixel intensities, subject to a constraint on the total length of the region boundaries. We assume that the probability densities associated with the image pixel intensities within each region are completely unknown a priori, and we formulate the problem based on nonparametric density estimates. Due to the nonparametric structure, our method does not require the image regions to have a particular type of probability distribution and does not require the extraction and use of a particular statistic. We solve the information-theoretic optimization problem by deriving the associated gradient flows and applying curve evolution techniques. We use level-set methods to implement the resulting evolution. The experimental results based on both synthetic and real images demonstrate that the proposed technique can solve a variety of challenging image segmentation problems. Furthermore, our method, which does not require any training, performs as well as methods based on training.

  4. A web application for evaluating Phase I methods using a non-parametric optimal benchmark.

    Science.gov (United States)

    Wages, Nolan A; Varhegyi, Nikole

    2017-06-01

    In evaluating the performance of Phase I dose-finding designs, simulation studies are typically conducted to assess how often a method correctly selects the true maximum tolerated dose under a set of assumed dose-toxicity curves. A necessary component of the evaluation process is to have some concept for how well a design can possibly perform. The notion of an upper bound on the accuracy of maximum tolerated dose selection is often omitted from the simulation study, and the aim of this work is to provide researchers with accessible software to quickly evaluate the operating characteristics of Phase I methods using a benchmark. The non-parametric optimal benchmark is a useful theoretical tool for simulations that can serve as an upper limit for the accuracy of maximum tolerated dose identification based on a binary toxicity endpoint. It offers researchers a sense of the plausibility of a Phase I method's operating characteristics in simulation. We have developed an R shiny web application for simulating the benchmark. The web application has the ability to quickly provide simulation results for the benchmark and requires no programming knowledge. The application is free to access and use on any device with an Internet browser. The application provides the percentage of correct selection of the maximum tolerated dose and an accuracy index, operating characteristics typically used in evaluating the accuracy of dose-finding designs. We hope this software will facilitate the use of the non-parametric optimal benchmark as an evaluation tool in dose-finding simulation.

  5. Comparison of Parametric and Nonparametric Methods for Analyzing the Bias of a Numerical Model

    Directory of Open Access Journals (Sweden)

    Isaac Mugume

    2016-01-01

Full Text Available Numerical models are presently applied in many fields for simulation and prediction, operation, or research. The output from these models normally has both systematic and random errors. The study compared January 2015 temperature data for Uganda, as simulated using the Weather Research and Forecasting (WRF) model, with actual observed station temperature data to analyze the bias using parametric methods (the root mean square error (RMSE), the mean absolute error (MAE), the mean error (ME), skewness, and the bias easy estimate (BES)) and a nonparametric method (the sign test, STM). The RMSE normally overestimates the error compared to the MAE. The RMSE and MAE are not sensitive to the direction of bias. The ME gives both direction and magnitude of bias but can be distorted by extreme values, while the BES is insensitive to extreme values. The STM is robust for giving the direction of bias; it is not sensitive to extreme values but does not give the magnitude of bias. Graphical tools (such as time series and cumulative curves) show the performance of the model over time. It is recommended to integrate parametric and nonparametric methods along with graphical methods for a comprehensive analysis of the bias of a numerical model.
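The parametric measures (ME, MAE, RMSE) and the non-parametric sign test can be sketched in a few lines; the temperatures below are invented placeholders, not the Uganda station data. Note that RMSE >= MAE always holds, which is why RMSE "overestimates" the error relative to MAE.

```python
import math
from math import comb

def bias_metrics(sim, obs):
    """Mean error (signed bias), mean absolute error, and RMSE."""
    d = [s - o for s, o in zip(sim, obs)]
    n = len(d)
    me = sum(d) / n
    mae = sum(abs(e) for e in d) / n
    rmse = math.sqrt(sum(e * e for e in d) / n)
    return me, mae, rmse

def sign_test_p(sim, obs):
    """Two-sided exact sign test for bias direction: under no bias,
    positive and negative errors are equally likely."""
    d = [s - o for s, o in zip(sim, obs) if s != o]
    n = len(d)
    k = min(sum(e > 0 for e in d), sum(e < 0 for e in d))
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical simulated vs observed temperatures (degrees C)
sim = [24.1, 25.3, 23.8, 26.0, 24.9]
obs = [23.5, 24.8, 23.1, 25.2, 24.0]
print(bias_metrics(sim, obs), sign_test_p(sim, obs))
```

Here the ME is positive and all five errors share a sign, so the sign test flags a consistent warm bias (its p-value is limited by the tiny sample).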

  6. Comparison of non-parametric methods for ungrouping coarsely aggregated data

    DEFF Research Database (Denmark)

    Rizzi, Silvia; Thinggaard, Mikael; Engholm, Gerda

    2016-01-01

Background Histograms are a common tool to estimate densities non-parametrically. They are extensively encountered in health sciences to summarize data in a compact format. Examples are age-specific distributions of death or onset of diseases grouped in 5-year age classes with an open-ended age...... methods for ungrouping count data. We compare the performance of two spline interpolation methods, two kernel density estimators and a penalized composite link model first via a simulation study and then with empirical data obtained from the NORDCAN Database. All methods analyzed can be used to estimate...... composite link model performs the best. Conclusion We give an overview and test different methods to estimate detailed distributions from grouped count data. Health researchers can benefit from these versatile methods, which are ready for use in the statistical software R. We recommend using the penalized...

  7. A NEW DE-NOISING METHOD BASED ON 3-BAND WAVELET AND NONPARAMETRIC ADAPTIVE ESTIMATION

    Institute of Scientific and Technical Information of China (English)

    Li Li; Peng Yuhua; Yang Mingqiang; Xue Peijun

    2007-01-01

Wavelet de-noising is well known as an important method of signal de-noising. Recently, most research efforts on wavelet de-noising have focused on how to select the threshold, where Donoho's method is widely applied. Compared with the traditional 2-band wavelet, the 3-band wavelet has advantages in many aspects. Based on this theory, an adaptive signal de-noising method in the 3-band wavelet domain, based on nonparametric adaptive estimation, is proposed. The experimental results show that in the 3-band wavelet domain, the proposed method performs better than Donoho's method in preserving detail and improving the signal-to-noise ratio of the reconstructed signal.
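For context, the Donoho baseline that such methods are compared against combines soft thresholding with the universal threshold sigma*sqrt(2 ln n). Below is a sketch applied to a plain coefficient vector; a real implementation would apply this to the detail coefficients of a 2-band or 3-band wavelet transform, and would usually estimate sigma from the finest-scale coefficients only. The numbers are illustrative.

```python
import math
from statistics import median

def universal_threshold(coeffs):
    """Donoho's universal threshold sigma * sqrt(2 ln n), with the noise
    level sigma estimated from the median absolute coefficient."""
    n = len(coeffs)
    sigma = median(abs(c) for c in coeffs) / 0.6745
    return sigma * math.sqrt(2 * math.log(n))

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero; zero out those below threshold t."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

# Two large "signal" coefficients among small "noise" coefficients
coeffs = [5.0, -4.2, 0.3, -0.1, 0.2, 0.05, -0.25, 0.15]
t = universal_threshold(coeffs)
print(soft_threshold(coeffs, t))
```

Only the two large coefficients survive the threshold; everything noise-sized is set to zero, which is the de-noising effect.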

  8. Hadron energy reconstruction for the ATLAS calorimetry in the framework of the nonparametrical method

    CERN Document Server

    Akhmadaliev, S Z; Ambrosini, G; Amorim, A; Anderson, K; Andrieux, M L; Aubert, Bernard; Augé, E; Badaud, F; Baisin, L; Barreiro, F; Battistoni, G; Bazan, A; Bazizi, K; Belymam, A; Benchekroun, D; Berglund, S R; Berset, J C; Blanchot, G; Bogush, A A; Bohm, C; Boldea, V; Bonivento, W; Bosman, M; Bouhemaid, N; Breton, D; Brette, P; Bromberg, C; Budagov, Yu A; Burdin, S V; Calôba, L P; Camarena, F; Camin, D V; Canton, B; Caprini, M; Carvalho, J; Casado, M P; Castillo, M V; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Chadelas, R; Chalifour, M; Chekhtman, A; Chevalley, J L; Chirikov-Zorin, I E; Chlachidze, G; Citterio, M; Cleland, W E; Clément, C; Cobal, M; Cogswell, F; Colas, Jacques; Collot, J; Cologna, S; Constantinescu, S; Costa, G; Costanzo, D; Crouau, M; Daudon, F; David, J; David, M; Davidek, T; Dawson, J; De, K; de La Taille, C; Del Peso, J; Del Prete, T; de Saintignon, P; Di Girolamo, B; Dinkespiler, B; Dita, S; Dodd, J; Dolejsi, J; Dolezal, Z; Downing, R; Dugne, J J; Dzahini, D; Efthymiopoulos, I; Errede, D; Errede, S; Evans, H; Eynard, G; Fassi, F; Fassnacht, P; Ferrari, A; Ferrer, A; Flaminio, Vincenzo; Fournier, D; Fumagalli, G; Gallas, E; Gaspar, M; Giakoumopoulou, V; Gianotti, F; Gildemeister, O; Giokaris, N; Glagolev, V; Glebov, V Yu; Gomes, A; González, V; González de la Hoz, S; Grabskii, V; Graugès-Pous, E; Grenier, P; Hakopian, H H; Haney, M; Hébrard, C; Henriques, A; Hervás, L; Higón, E; Holmgren, Sven Olof; Hostachy, J Y; Hoummada, A; Huston, J; Imbault, D; Ivanyushenkov, Yu M; Jézéquel, S; Johansson, E K; Jon-And, K; Jones, R; Juste, A; Kakurin, S; Karyukhin, A N; Khokhlov, Yu A; Khubua, J I; Klioukhine, V I; Kolachev, G M; Kopikov, S V; Kostrikov, M E; Kozlov, V; Krivkova, P; Kukhtin, V V; Kulagin, M; Kulchitskii, Yu A; Kuzmin, M V; Labarga, L; Laborie, G; Lacour, D; Laforge, B; Lami, S; Lapin, V; Le Dortz, O; Lefebvre, M; Le Flour, T; Leitner, R; Leltchouk, M; Li, J; Liablin, M V; Linossier, O; Lissauer, D; Lobkowicz, F; Lokajícek, M; 
Lomakin, Yu F; López-Amengual, J M; Lund-Jensen, B; Maio, A; Makowiecki, D S; Malyukov, S N; Mandelli, L; Mansoulié, B; Mapelli, Livio P; Marin, C P; Marrocchesi, P S; Marroquim, F; Martin, P; Maslennikov, A L; Massol, N; Mataix, L; Mazzanti, M; Mazzoni, E; Merritt, F S; Michel, B; Miller, R; Minashvili, I A; Miralles, L; Mnatzakanian, E A; Monnier, E; Montarou, G; Mornacchi, Giuseppe; Moynot, M; Muanza, G S; Nayman, P; Némécek, S; Nessi, Marzio; Nicoleau, S; Niculescu, M; Noppe, J M; Onofre, A; Pallin, D; Pantea, D; Paoletti, R; Park, I C; Parrour, G; Parsons, J; Pereira, A; Perini, L; Perlas, J A; Perrodo, P; Pilcher, J E; Pinhão, J; Plothow-Besch, Hartmute; Poggioli, Luc; Poirot, S; Price, L; Protopopov, Yu; Proudfoot, J; Puzo, P; Radeka, V; Rahm, David Charles; Reinmuth, G; Renzoni, G; Rescia, S; Resconi, S; Richards, R; Richer, J P; Roda, C; Rodier, S; Roldán, J; Romance, J B; Romanov, V; Romero, P; Rossel, F; Rusakovitch, N A; Sala, P; Sanchis, E; Sanders, H; Santoni, C; Santos, J; Sauvage, D; Sauvage, G; Sawyer, L; Says, L P; Schaffer, A C; Schwemling, P; Schwindling, J; Seguin-Moreau, N; Seidl, W; Seixas, J M; Selldén, B; Seman, M; Semenov, A; Serin, L; Shaldaev, E; Shochet, M J; Sidorov, V; Silva, J; Simaitis, V J; Simion, S; Sissakian, A N; Snopkov, R; Söderqvist, J; Solodkov, A A; Soloviev, A; Soloviev, I V; Sonderegger, P; Soustruznik, K; Spanó, F; Spiwoks, R; Stanek, R; Starchenko, E A; Stavina, P; Stephens, R; Suk, M; Surkov, A; Sykora, I; Takai, H; Tang, F; Tardell, S; Tartarelli, F; Tas, P; Teiger, J; Thaler, J; Thion, J; Tikhonov, Yu A; Tisserant, S; Tokar, S; Topilin, N D; Trka, Z; Turcotte, M; Valkár, S; Varanda, M J; Vartapetian, A H; Vazeille, F; Vichou, I; Vinogradov, V; Vorozhtsov, S B; Vuillemin, V; White, A; Wielers, M; Wingerter-Seez, I; Wolters, H; Yamdagni, N; Yosef, C; Zaitsev, A; Zitoun, R; Zolnierowski, Y

    2002-01-01

    This paper discusses hadron energy reconstruction for the ATLAS barrel prototype combined calorimeter (consisting of a lead-liquid argon electromagnetic part and an iron-scintillator hadronic part) in the framework of the nonparametrical method. The nonparametrical method uses only the known e/h ratios and the electron calibration constants and does not require the determination of any parameters by a minimization technique; it therefore lends itself to easy use in a first-level trigger. The reconstructed mean values of the hadron energies are within ±1% of the true values, and the fractional energy resolution is [(58±3)%/√E + (2.5±0.3)%] ⊕ (1.7±0.2)/E. The value of the e/h ratio obtained for the electromagnetic compartment of the combined calorimeter is 1.74±0.04, in agreement with the prediction that e/h > 1.66 for this electromagnetic calorimeter. Results of a study of the longitudinal hadronic shower development are also presented. The data have been taken in the H8 beam...

  9. Nonparametric Inference of Doubly Stochastic Poisson Process Data via the Kernel Method.

    Science.gov (United States)

    Zhang, Tingting; Kou, S C

    2010-01-01

    Doubly stochastic Poisson processes, also known as Cox processes, frequently occur in various scientific fields. In this article, motivated primarily by analyzing Cox process data in biophysics, we propose a nonparametric kernel-based inference method. We conduct a detailed study, including an asymptotic analysis, of the proposed method, and provide guidelines for its practical use, introducing a fast and stable regression method for bandwidth selection. We apply our method to real photon arrival data from recent single-molecule biophysical experiments, investigating proteins' conformational dynamics. Our results show that conformational fluctuation is widely present in protein systems, and that the fluctuation covers a broad range of time scales, highlighting the dynamic and complex nature of proteins' structure.
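
The kernel idea behind this record can be sketched in a few lines: the intensity of a point process is estimated by summing kernels centred at the observed arrival times. This is a minimal illustration with a fixed, hand-picked bandwidth, not the authors' estimator (their bandwidth is chosen by a dedicated regression method), and the data are made up:

```python
import math

def kernel_intensity(arrival_times, t, bandwidth):
    """Kernel estimate of the intensity lambda(t) of a point process:
    a sum of Gaussian kernels centred at the observed arrival times."""
    const = 1.0 / (bandwidth * math.sqrt(2.0 * math.pi))
    return sum(const * math.exp(-0.5 * ((t - ti) / bandwidth) ** 2)
               for ti in arrival_times)

# Toy "photon arrivals": a burst near t = 1 and a lone event at t = 10.
arrivals = [0.9, 1.0, 1.1, 1.2, 10.0]
print(kernel_intensity(arrivals, 1.0, 0.5))   # high: inside the burst
print(kernel_intensity(arrivals, 5.0, 0.5))   # near zero: no events nearby
```

The estimate is large where events cluster and small in quiet stretches, which is exactly the time-varying intensity a Cox process model needs.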

  10. A non-parametric method for correction of global radiation observations

    DEFF Research Database (Denmark)

    Bacher, Peder; Madsen, Henrik; Perers, Bengt;

    2013-01-01

    This paper presents a method for correction and alignment of global radiation observations based on information obtained from calculated global radiation; in the present study, a one-hour forecast of global radiation from a numerical weather prediction (NWP) model is used. Systematic errors detected...... in the observations are corrected. These are errors such as: tilt in the leveling of the sensor, shadowing from surrounding objects, clipping and saturation in the signal processing, and errors from dirt and wear. The method is based on a statistical non-parametric clear-sky model which is applied to both...... University. The method can be useful for optimized use of solar radiation observations for forecasting, monitoring, and modeling of energy production and load which are affected by solar radiation....

  11. Non-parametric method for separating domestic hot water heating spikes and space heating

    DEFF Research Database (Denmark)

    Bacher, Peder; de Saint-Aubain, Philip Anton; Christiansen, Lasse Engbo;

    2016-01-01

    In this paper a method for separating spikes from a noisy data series, where the data change and evolve over time, is presented. The method is applied to measurements of the total heat load for a single-family house. It relies on the fact that the domestic hot water heating is a process generating...... short-lived spikes in the time series, while the space heating changes in slower patterns during the day dependent on the climate and user behavior. The challenge is to separate the domestic hot water heating spikes from the space heating without affecting the natural noise in the space heating...... measurements. The assumption behind the developed method is that the space heating can be estimated by a non-parametric kernel smoother, such that every value significantly above this kernel smoother estimate is identified as a domestic hot water heating spike. First, it is shown how a basic kernel smoothing...
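
A minimal sketch of the assumption described above: smooth the series with a Gaussian kernel over the time index and flag every point significantly above the smooth baseline as a spike. The bandwidth and the 3-sigma rule are illustrative choices, not the paper's exact procedure:

```python
import math
import statistics

def kernel_smooth(y, bandwidth):
    """Nadaraya-Watson smoother with a Gaussian kernel over the index."""
    n = len(y)
    smoothed = []
    for i in range(n):
        weights = [math.exp(-0.5 * ((i - j) / bandwidth) ** 2) for j in range(n)]
        smoothed.append(sum(w * yj for w, yj in zip(weights, y)) / sum(weights))
    return smoothed

def split_spikes(y, bandwidth=3.0, k=3.0):
    """Everything significantly above the smooth baseline is a 'spike'."""
    base = kernel_smooth(y, bandwidth)
    resid = [yi - bi for yi, bi in zip(y, base)]
    sigma = statistics.stdev(resid)
    spikes = [i for i, r in enumerate(resid) if r > k * sigma]
    return base, spikes

# Slowly varying "space heating" load with two hot-water spikes injected.
load = [10 + 0.1 * t for t in range(50)]
load[12] += 30
load[33] += 25
baseline, spike_idx = split_spikes(load)
print(spike_idx)
```

The baseline captures the slow space-heating pattern, and only the short-lived additions stand far enough above it to be classified as hot-water spikes.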

  12. Statistical analysis using the Bayesian nonparametric method for irradiation embrittlement of reactor pressure vessels

    Science.gov (United States)

    Takamizawa, Hisashi; Itoh, Hiroto; Nishiyama, Yutaka

    2016-10-01

    In order to understand neutron irradiation embrittlement in high fluence regions, statistical analysis using the Bayesian nonparametric (BNP) method was performed for the Japanese surveillance and material test reactor irradiation database. The BNP method is essentially expressed as an infinite summation of normal distributions, with input data being subdivided into clusters with identical statistical parameters, such as mean and standard deviation, for each cluster to estimate shifts in ductile-to-brittle transition temperature (DBTT). The clusters typically depend on chemical compositions, irradiation conditions, and the irradiation embrittlement. Specific variables contributing to the irradiation embrittlement include the content of Cu, Ni, P, Si, and Mn in the pressure vessel steels, neutron flux, neutron fluence, and irradiation temperatures. It was found that the measured shifts of DBTT correlated well with the calculated ones. Data associated with the same materials were subdivided into the same clusters even if neutron fluences were increased.

  13. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  14. [Nonparametric method of estimating survival functions containing right-censored and interval-censored data].

    Science.gov (United States)

    Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi

    2014-04-01

    Missing data represent a general problem in many scientific fields, especially in medical survival analysis. Interpolation is one of the important methods for dealing with censored data. However, most interpolation methods replace the censored data with exact data, which distorts the real distribution of the censored data and reduces the probability of the real data falling into the interpolation data. In order to solve this problem, we propose a nonparametric method of estimating the survival function of right-censored and interval-censored data and compare its performance to the SC (self-consistent) algorithm. Compared to the average interpolation and the nearest neighbor interpolation methods, the proposed method replaces the right-censored data with interval-censored data, and greatly improves the probability of the real data falling into the imputation interval. It then uses empirical distribution theory to estimate the survival function of right-censored and interval-censored data. The results of numerical examples and a real breast cancer data set demonstrated that the proposed method had higher accuracy and better robustness for different proportions of censored data. This paper provides a good method to compare the performance of clinical treatments by estimating the survival data of the patients, and offers some help to medical survival data analysis.
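
For orientation, the standard nonparametric estimator for purely right-censored data is the Kaplan-Meier product-limit estimator; the paper's method generalizes to interval censoring, which this sketch does not handle:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimator for right-censored data.
    events[i] is 1 if the i-th time is an observed event, 0 if censored.
    Returns a list of (time, survival) steps at the event times."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    steps = []
    i = 0
    while i < len(data):
        t = data[i][0]
        at_t = sum(1 for tt, _ in data if tt == t)
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        if deaths:
            survival *= 1.0 - deaths / n_at_risk
            steps.append((t, survival))
        n_at_risk -= at_t
        i += at_t
    return steps

# Five patients: events at t = 1, 2, 4; censoring at t = 3 and 5.
steps = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
print(steps)
```

Censored observations leave the risk set without producing a downward step, which is exactly how the estimator avoids treating censored times as exact event times.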

  15. Comparison of non-parametric methods for ungrouping coarsely aggregated data

    Directory of Open Access Journals (Sweden)

    Silvia Rizzi

    2016-05-01

    Background: Histograms are a common tool to estimate densities non-parametrically. They are extensively encountered in the health sciences to summarize data in a compact format. Examples are age-specific distributions of death or onset of diseases grouped in 5-year age classes with an open-ended age group at the highest ages. When histogram intervals are too coarse, information is lost and comparison between histograms with different boundaries is arduous. In these cases it is useful to estimate detailed distributions from grouped data. Methods: From an extensive literature search we identify five methods for ungrouping count data. We compare the performance of two spline interpolation methods, two kernel density estimators, and a penalized composite link model, first via a simulation study and then with empirical data obtained from the NORDCAN database. All methods analyzed can be used to estimate differently shaped distributions, can handle unequal interval lengths, and allow stretches of 0 counts. Results: The methods show similar performance when the grouping scheme is relatively narrow, i.e. 5-year age classes. With coarser age intervals, i.e. in the presence of open-ended age groups, the penalized composite link model performs best. Conclusion: We give an overview and test different methods to estimate detailed distributions from grouped count data. Health researchers can benefit from these versatile methods, which are ready for use in the statistical software R. We recommend using the penalized composite link model when data are grouped in wide age classes.
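
The simplest member of the interpolation family compared above can be sketched by splitting each coarse bin uniformly, which is equivalent to linear interpolation of the cumulative counts. The penalized composite link model recommended by the authors is considerably more refined; this is only the baseline idea:

```python
def ungroup_counts(edges, counts):
    """Split coarse histogram counts into unit-width bins by linear
    interpolation of the cumulative count function. Totals within each
    coarse bin are preserved exactly."""
    fine = []
    for lo, hi, c in zip(edges, edges[1:], counts):
        nbins = hi - lo
        fine.extend([c / nbins] * nbins)
    return fine

# Deaths grouped in 5-year age classes, ungrouped to single years of age.
edges = [0, 5, 10, 15]
counts = [50, 100, 25]
single = ungroup_counts(edges, counts)
print(single)
```

Any serious ungrouping method must share this conservation property (the fine counts sum back to the coarse ones); spline and PCLM methods additionally smooth across bin boundaries.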

  16. A nonparametric Bayesian method of translating machine learning scores to probabilities in clinical decision support.

    Science.gov (United States)

    Connolly, Brian; Cohen, K Bretonnel; Santel, Daniel; Bayram, Ulya; Pestian, John

    2017-08-07

    Probabilistic assessments of clinical care are essential for quality care. Yet machine learning, which supports this care process, has been limited to categorical results. To maximize its usefulness, it is important to find novel approaches that calibrate the ML output with a likelihood scale. Current state-of-the-art calibration methods are generally accurate and applicable to many ML models, but improved granularity and accuracy of such methods would increase the information available for clinical decision making. This novel non-parametric Bayesian approach is demonstrated on a variety of data sets, including simulated classifier outputs, biomedical data sets from the University of California, Irvine (UCI) Machine Learning Repository, and a clinical data set built to determine suicide risk from the language of emergency department patients. The method is first demonstrated on support-vector machine (SVM) models, which generally produce well-behaved, well-understood scores. The method produces calibrations that are comparable to the state-of-the-art Bayesian Binning into Quantiles (BBQ) method when the SVM models are able to effectively separate cases and controls. However, as the SVM models' ability to discriminate classes decreases, our approach yields more granular and dynamic calibrated probabilities compared to the BBQ method. Improvements in granularity and range are even more dramatic when the discrimination between the classes is artificially degraded by replacing the SVM model with an ad hoc k-means classifier. The method allows both clinicians and patients to have a more nuanced view of the output of an ML model, allowing better decision making. The method is demonstrated on simulated data, various biomedical data sets, and a clinical data set, to which diverse ML methods are applied. Trivially extending the method to (non-ML) clinical scores is also discussed.
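
For contrast with the Bayesian approaches discussed above, plain histogram binning, the simplest score-to-probability calibrator and a crude relative of BBQ, can be sketched as follows (function name and data are illustrative, not from the paper):

```python
def binning_calibrator(scores, labels, n_bins=10):
    """Map raw classifier scores in [0, 1] to probabilities using
    equal-width histogram binning: the calibrated probability for a
    bin is the observed fraction of positives in that bin."""
    totals = [0] * n_bins
    positives = [0] * n_bins
    for s, y in zip(scores, labels):
        b = min(int(s * n_bins), n_bins - 1)
        totals[b] += 1
        positives[b] += y
    rates = [p / t if t else None for p, t in zip(positives, totals)]

    def calibrate(s):
        b = min(int(s * n_bins), n_bins - 1)
        return rates[b] if rates[b] is not None else s  # empty bin: raw score
    return calibrate

# Toy validation scores whose positives concentrate at high scores.
scores = [0.05, 0.15, 0.25, 0.65, 0.75, 0.85, 0.95, 0.92]
labels = [0, 0, 0, 1, 0, 1, 1, 1]
cal = binning_calibrator(scores, labels)
print(cal(0.93), cal(0.05))
```

The granularity criticism in the abstract is visible here: every score in a bin maps to one probability, so the calibrated output can only take as many values as there are non-empty bins.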

  17. Trend Analysis of Golestan's Rivers Discharges Using Parametric and Non-parametric Methods

    Science.gov (United States)

    Mosaedi, Abolfazl; Kouhestani, Nasrin

    2010-05-01

    Climate change and its consequences are among the major problems affecting human life, and climate change alters river discharges. The aim of this research is to investigate trends in the seasonal and yearly river discharges of Golestan province (Iran). Four trend analysis methods (conjunction point, linear regression, Wald-Wolfowitz, and Mann-Kendall) were applied to the river discharges in seasonal and annual periods at significance levels of 95% and 99%. First, daily discharge data of 12 hydrometric stations with a record length of 42 years (1965-2007) were selected; after some common statistical checks, such as homogeneity tests (applying the G-B and M-W tests), the four trend analysis tests were applied. Results show that in all stations, the summer time series have decreasing trends at the 99% significance level according to the Mann-Kendall (M-K) test. For the autumn time series, all four methods give similar results. For the other periods, the results of the four tests were broadly similar, although they differed for some stations. Keywords: Trend Analysis, Discharge, Non-parametric methods, Wald-Wolfowitz, The Mann-Kendall test, Golestan Province.
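
The Mann-Kendall test used above is easy to state: S counts concordant minus discordant pairs, and a normal approximation turns S into a Z score. A minimal sketch without the tie correction, on made-up discharge data:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test: returns the S statistic and the
    normal-approximation Z score (tie correction omitted in this sketch)."""
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A monotonically decreasing "summer discharge" series.
discharge = [42, 40, 37, 35, 30, 28, 25, 22, 20, 15]
s, z = mann_kendall(discharge)
print(s, z)   # strongly negative S, and Z < -2.58 (significant at the 99% level)
```

Because the test uses only the signs of pairwise differences, it needs no distributional assumption on the discharges, which is why it is the workhorse of hydrological trend analysis.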

  18. Nonparametric Comparison of Two Dynamic Parameter Setting Methods in a Meta-Heuristic Approach

    Directory of Open Access Journals (Sweden)

    Seyhun HEPDOGAN

    2007-10-01

    Meta-heuristics are commonly used to solve combinatorial problems in practice. Many approaches provide very good quality solutions in a short amount of computational time; however, most meta-heuristics use parameters to tune performance for particular problems, and the selection of these parameters before solving the problem can require much time. This paper investigates the problem of setting parameters in a typical meta-heuristic called Meta-RaPS (Meta-heuristic for Randomized Priority Search). Meta-RaPS is a promising meta-heuristic optimization method that has been applied to different types of combinatorial optimization problems and has achieved very good performance compared to other meta-heuristic techniques. To solve a combinatorial problem, Meta-RaPS uses two well-defined stages at each iteration: construction and local search. After a number of iterations, the best solution is reported. Meta-RaPS performance depends on the fine tuning of two main parameters, priority percentage and restriction percentage, which are used during the construction stage. This paper presents two different dynamic parameter setting methods for Meta-RaPS, which tune the parameters while a solution is being found. To compare these two approaches, nonparametric statistical methods are utilized, since the solutions are not normally distributed. Results from both dynamic parameter setting methods are reported.
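
A common nonparametric way to compare two sets of solution values that are not normally distributed is the Wilcoxon rank-sum (Mann-Whitney) test; the abstract does not name the exact test used, so the sketch below is illustrative only:

```python
import math

def rank_sum_z(a, b):
    """Wilcoxon rank-sum (Mann-Whitney) test via the normal approximation:
    compares two samples without assuming normality. Ties get average ranks."""
    pooled = sorted((v, g) for g, sample in ((0, a), (1, b)) for v in sample)
    rank_of = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):                      # assign average ranks to ties
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        avg = (i + 1 + j) / 2.0                 # ranks are 1-based
        for k in range(i, j):
            rank_of[k] = avg
        i = j
    w = sum(r for r, (_, g) in zip(rank_of, pooled) if g == 0)
    n1, n2 = len(a), len(b)
    mean_w = n1 * (n1 + n2 + 1) / 2.0
    sd_w = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (w - mean_w) / sd_w

# Hypothetical solution costs from two parameter-setting schemes (lower is better).
method_a = [10, 12, 11, 13, 12, 11]
method_b = [15, 17, 16, 18, 16, 17]
print(rank_sum_z(method_a, method_b))
```

A strongly negative Z here indicates that the first scheme's costs rank systematically lower, without any normality assumption on the cost distributions.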

  19. Non-parametric method for measuring gas inhomogeneities from X-ray observations of galaxy clusters

    CERN Document Server

    Morandi, Andrea; Cui, Wei

    2013-01-01

    We present a non-parametric method to measure inhomogeneities in the intracluster medium (ICM) from X-ray observations of galaxy clusters. Analyzing mock Chandra X-ray observations of simulated clusters, we show that our new method enables the accurate recovery of the 3D gas density and gas clumping factor profiles out to large radii of galaxy clusters. We then apply this method to Chandra X-ray observations of Abell 1835 and present the first determination of the gas clumping factor from the X-ray cluster data. We find that the gas clumping factor in Abell 1835 increases with radius and reaches ~2-3 at r=R_{200}. This is in good agreement with the predictions of hydrodynamical simulations, but it is significantly below the values inferred from recent Suzaku observations. We further show that the radially increasing gas clumping factor causes flattening of the derived entropy profile of the ICM and affects physical interpretation of the cluster gas structure, especially at the large cluster-centric radii. Our...

  20. A method and a tool for geocoding and record linkage

    OpenAIRE

    CHARIF Omar; OMRANI Hichem; Klein, Olivier; Schneider, Marc; Trigano, Philippe

    2010-01-01

    For many years, researchers have regarded the geocoding of postal addresses as a challenge, and several research works have been devoted to the geocoding process. This paper presents theoretical and technical aspects of geolocalization, geocoding, and record linkage. It shows the possibilities and limitations of existing methods and commercial software, identifying areas for further research. In particular, we present a methodology and a computing tool allowing the correction and the geo-cod...

  1. Comparison of three nonparametric kriging methods for delineating heavy-metal contaminated soils

    Energy Technology Data Exchange (ETDEWEB)

    Juang, K.W.; Lee, D.Y

    2000-02-01

    The probability of pollutant concentrations greater than a cutoff value is useful for delineating hazardous areas in contaminated soils. It is essential for risk assessment and reclamation. In this study, three nonparametric kriging methods [indicator kriging, probability kriging, and kriging with the cumulative distribution function (CDF) of order statistics (CDF kriging)] were used to estimate the probability of heavy-metal concentrations lower than a cutoff value. In terms of methodology, the probability kriging estimator and CDF kriging estimator take into account the information of the order relation, which is not considered in indicator kriging. Since probability kriging has been shown to be better than indicator kriging for delineating contaminated soils, the performance of CDF kriging, which the authors propose, was compared with that of probability kriging in this study. A data set of soil Cd and Pb concentrations obtained from a 10-ha heavy-metal contaminated site in Taoyuan, Taiwan, was used. The results demonstrated that the probability kriging and CDF kriging estimations were more accurate than the indicator kriging estimation. On the other hand, because the probability kriging was based on the cokriging estimator, some unreliable estimates occurred in the probability kriging estimation. This indicated that probability kriging was not as robust as CDF kriging. Therefore, CDF kriging is more suitable than probability kriging for estimating the probability of heavy-metal concentrations lower than a cutoff value.

  2. Spatial and Spectral Nonparametric Linear Feature Extraction Method for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Jinn-Min Yang

    2016-11-01

    Feature extraction (FE), or dimensionality reduction (DR), plays an important role in the field of pattern recognition. Feature extraction aims to reduce the dimensionality of a high-dimensional dataset to enhance classification accuracy and speed, particularly when the training sample size is small, namely the small sample size (SSS) problem. Remotely sensed hyperspectral images (HSIs) often come with hundreds of measured features (bands), which potentially provide more accurate and detailed information for classification, but generally require more samples for parameter estimation to achieve a satisfactory result. Collecting ground truth for a remotely sensed hyperspectral scene can be considerably difficult and expensive; therefore, FE techniques have been an important part of hyperspectral image classification. While many feature extraction methods are based only on the spectral (band) information of the training samples, methods integrating both spatial and spectral information of training samples have shown more effective results in recent years. Spatial contextual information has been proven useful for improving HSI data representation and increasing classification accuracy. In this paper, we propose a spatial and spectral nonparametric linear feature extraction method for hyperspectral image classification. The spatial and spectral information is extracted for each training sample and used to design the within-class and between-class scatter matrices for constructing the feature extraction model. The experimental results on one benchmark hyperspectral image demonstrate that the proposed method obtains more stable and satisfactory results than some existing spectral-based feature extraction methods.
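
The within-class and between-class scatter matrices mentioned above are the classic Fisher building blocks of linear feature extraction. This sketch computes them from spectral features only, whereas the proposed method also folds in spatial context; the data are a made-up two-band example:

```python
def scatter_matrices(samples, labels):
    """Within-class (Sw) and between-class (Sb) scatter matrices,
    the building blocks of Fisher-style linear feature extraction."""
    d = len(samples[0])
    mean_all = [sum(x[k] for x in samples) / len(samples) for k in range(d)]
    sw = [[0.0] * d for _ in range(d)]
    sb = [[0.0] * d for _ in range(d)]
    for c in sorted(set(labels)):
        xs = [x for x, y in zip(samples, labels) if y == c]
        mc = [sum(x[k] for x in xs) / len(xs) for k in range(d)]
        for x in xs:                              # spread around the class mean
            diff = [x[k] - mc[k] for k in range(d)]
            for i in range(d):
                for j in range(d):
                    sw[i][j] += diff[i] * diff[j]
        diff = [mc[k] - mean_all[k] for k in range(d)]
        for i in range(d):                        # spread of class means
            for j in range(d):
                sb[i][j] += len(xs) * diff[i] * diff[j]
    return sw, sb

# Two well-separated classes in a 2-band "spectral" space.
X = [(1.0, 1.1), (1.2, 0.9), (0.9, 1.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
y = [0, 0, 0, 1, 1, 1]
sw, sb = scatter_matrices(X, y)
print(sw[0][0], sb[0][0])
```

A feature extraction model then seeks projections that maximize between-class relative to within-class scatter; for these well-separated classes Sb dominates Sw by orders of magnitude.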

  3. Assessment of water quality trends in the Minnesota River using non-parametric and parametric methods

    Science.gov (United States)

    Johnson, H.O.; Gupta, S.C.; Vecchia, A.V.; Zvomuya, F.

    2009-01-01

    Excessive loading of sediment and nutrients to rivers is a major problem in many parts of the United States. In this study, we tested the non-parametric Seasonal Kendall (SEAKEN) trend model and the parametric USGS Quality of Water trend program (QWTREND) to quantify trends in water quality of the Minnesota River at Fort Snelling from 1976 to 2003. Both methods indicated decreasing trends in flow-adjusted concentrations of total suspended solids (TSS), total phosphorus (TP), and orthophosphorus (OP) and a generally increasing trend in flow-adjusted nitrate plus nitrite-nitrogen (NO3-N) concentration. The SEAKEN results were strongly influenced by the length of the record as well as extreme years (dry or wet) earlier in the record. The QWTREND results, though influenced somewhat by the same factors, were more stable. The magnitudes of trends between the two methods were somewhat different and appeared to be associated with conceptual differences between the flow-adjustment processes used and with data processing methods. The decreasing trends in TSS, TP, and OP concentrations are likely related to conservation measures implemented in the basin. However, dilution effects from wet climate or additional tile drainage cannot be ruled out. The increasing trend in NO3-N concentrations was likely due to increased drainage in the basin. Since the Minnesota River is the main source of sediments to the Mississippi River, this study also addressed the rapid filling of Lake Pepin on the Mississippi River and found the likely cause to be increased flow due to recent wet climate in the region. Copyright © 2009 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.

  4. APPLICATION OF PARAMETRIC AND NON-PARAMETRIC BENCHMARKING METHODS IN COST EFFICIENCY ANALYSIS OF THE ELECTRICITY DISTRIBUTION SECTOR

    Directory of Open Access Journals (Sweden)

    Andrea Furková

    2007-06-01

    This paper explores the application of parametric and non-parametric benchmarking methods in measuring the cost efficiency of Slovak and Czech electricity distribution companies. We compare the relative cost efficiency of Slovak and Czech distribution companies using two benchmarking methods: the non-parametric Data Envelopment Analysis (DEA) and the parametric Stochastic Frontier Analysis (SFA). The first part of the analysis was based on DEA models; traditional cross-section CCR and BCC models were modified for cost efficiency estimation. In further analysis we focus on two versions of the stochastic frontier cost function using panel data: an MLE model and a GLS model. These models have been applied to an unbalanced panel of 11 regional electricity distribution utilities (3 Slovak and 8 Czech) over the period from 2000 to 2004. The differences in estimated scores, parameters, and rankings of utilities were analyzed. We observed significant differences between the parametric methods and the DEA approach.

  5. Non-parametric Estimation of Diffusion-Paths Using Wavelet Scaling Methods

    DEFF Research Database (Denmark)

    Høg, Esben

    In continuous time, diffusion processes have been used for modelling financial dynamics for a long time. For example the Ornstein-Uhlenbeck process (the simplest mean-reverting process) has been used to model non-speculative price processes. We discuss non-parametric estimation of these processes...

  6. Non-Parametric Estimation of Diffusion-Paths Using Wavelet Scaling Methods

    DEFF Research Database (Denmark)

    Høg, Esben

    2003-01-01

    In continuous time, diffusion processes have been used for modelling financial dynamics for a long time. For example the Ornstein-Uhlenbeck process (the simplest mean-reverting process) has been used to model non-speculative price processes. We discuss non-parametric estimation of these processes...

  9. Revisiting the Distance Duality Relation using a non-parametric regression method

    Science.gov (United States)

    Rana, Akshay; Jain, Deepak; Mahajan, Shobhit; Mukherjee, Amitabha

    2016-07-01

    The interdependence of the luminosity distance, DL, and the angular diameter distance, DA, given by the distance duality relation (DDR) is very significant in observational cosmology. It is closely tied to the temperature-redshift relation of the Cosmic Microwave Background (CMB) radiation. Any deviation from η(z) ≡ DL/[DA (1+z)²] = 1 indicates a possible emergence of new physics. Our aim in this work is to check the consistency of these relations using a non-parametric regression method, namely LOESS with SIMEX. This technique avoids dependency on the cosmological model and works with a minimal set of assumptions. Further, to analyze the efficiency of the methodology, we simulate a dataset of 020 points of η(z) data based on a phenomenological model η(z) = (1+z)^ε. The error on the simulated data points is obtained by using the temperature of the CMB radiation at various redshifts. For testing the distance duality relation, we use the JLA SNe Ia data for luminosity distances, while the angular diameter distances are obtained from radio galaxies datasets. Since the DDR is linked with the CMB temperature-redshift relation, we also use the CMB temperature data to reconstruct η(z). It is important to note that with the CMB data, we are able to study the evolution of the DDR up to a very high redshift, z = 2.418. In this analysis, we find no evidence of deviation from η = 1 within the 1σ region over the entire redshift range used (0 < z ≤ 2.418).
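
The core of LOESS is a locally weighted linear fit at each query point; a minimal sketch with tricube weights follows (the SIMEX measurement-error correction used in the paper is omitted, and the η(z) data are idealized, noise-free stand-ins):

```python
def local_linear(xs, ys, x0, span=0.5):
    """Local linear regression at x0 with tricube weights over the
    nearest span-fraction of the data (the core of LOESS)."""
    n = len(xs)
    k = max(2, int(span * n))
    h = sorted(abs(x - x0) for x in xs)[:k][-1] or 1e-12  # k-th nearest distance
    w = [(1 - min(abs(x - x0) / h, 1.0) ** 3) ** 3 for x in xs]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw
    my = sum(wi * y for wi, y in zip(w, ys)) / sw
    sxx = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
    sxy = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
    slope = sxy / sxx if sxx else 0.0
    return my + slope * (x0 - mx)

# Idealized eta(z) data consistent with no DDR violation: eta = 1 at all z.
zs = [0.1 * i for i in range(25)]
etas = [1.0 for _ in zs]
print(local_linear(zs, etas, 1.2))
```

Because the fit is local and linear, no cosmological functional form for η(z) is ever assumed; the reconstruction at each redshift depends only on nearby data points.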

  10. Statistic Non-Parametric Methods of Measurement and Interpretation of Existing Statistic Connections within Seaside Hydro Tourism

    OpenAIRE

    MIRELA SECARĂ

    2008-01-01

    Tourism represents an important field of economic and social life in our country, and the main sector of the economy of Constanta County is the balneary (spa) tourism of the Romanian seaside. In order to statistically analyze hydro tourism on the Romanian seaside, we have applied non-parametric methods for measuring and interpreting the statistical connections within seaside hydro tourism. The major objective of this research is the re-establishment of hydro tourism on the Romanian ...

  11. Bayesian nonparametric data analysis

    CERN Document Server

    Müller, Peter; Jara, Alejandro; Hanson, Tim

    2015-01-01

    This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in on-line software pages.

  12. A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.

    Science.gov (United States)

    Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa

    2016-05-17

    Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of accepted links that are correct), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling-based method to estimate both precision and recall following record linkage. In the sampling-based method, record pairs from each threshold (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then applied to the entire set of record pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were relatively close to the actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent, with little variation in the clerical assessment results (overall agreement using the Fleiss kappa statistic was 0.601). This method presents a possible means of accurately estimating matching quality and refining linkages in population-level linkage studies. The sampling approach is especially important for large linkage projects where the number of record pairs produced may be very large, often running into millions.
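
The sampling estimator described above can be sketched as follows: review a random sample of pairs in each score bin, scale the observed match rate to the bin's full size, and accumulate estimated true/false positives (at or above the cut-off) and false negatives (below it). All names and data here are illustrative, with ground truth standing in for clerical review:

```python
import random

def estimate_quality(pairs_by_bin, truth, cutoff_bin, sample_size=50, seed=1):
    """Estimate linkage precision and recall from clerically reviewed samples.
    Pairs in bins below cutoff_bin contribute false negatives; pairs at or
    above it contribute true/false positives, scaled to each bin's size."""
    rng = random.Random(seed)
    tp = fp = fn = 0.0
    for b, pairs in pairs_by_bin.items():
        sample = rng.sample(pairs, min(sample_size, len(pairs)))
        match_rate = sum(truth[p] for p in sample) / len(sample)
        if b >= cutoff_bin:
            tp += match_rate * len(pairs)
            fp += (1 - match_rate) * len(pairs)
        else:
            fn += match_rate * len(pairs)
    return tp / (tp + fp), tp / (tp + fn)

# Synthetic pairs: the high-score bin is mostly true matches, the low bin none.
truth, bins = {}, {2: [], 1: [], 0: []}
for i in range(300):
    b, pid = i % 3, f"pair{i}"
    bins[b].append(pid)
    truth[pid] = 1 if (b == 2 and i % 10 != 0) or (b == 1 and i % 2 == 0) else 0
p, r = estimate_quality(bins, truth, cutoff_bin=1)
print(p, r)
```

Reviewing only a sample per bin keeps the clerical workload fixed regardless of how many millions of record pairs the linkage produces, which is the point of the approach.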

  13. Using and comparing two nonparametric methods (CART and RF) and SPOT-HRG satellite data to predict tree diversity distribution

    Directory of Open Access Journals (Sweden)

    SIAVASH KALBI

    2014-05-01

    Kalbi S, Fallah A, Hojjati SM. 2014. Using and comparing two nonparametric methods (CART and RF) and SPOT-HRG satellite data to predict tree diversity distribution. Nusantara Bioscience 6: 57-62. The prediction of spatial distributions of tree species by means of survey data has recently been used for conservation planning. Numerous methods have been developed for building species habitat suitability models. The present study was carried out to find possible relationships between tree species diversity indices and SPOT-HRG reflectance values in the Hyrcanian forests, North of Iran. Two different modeling techniques, Classification and Regression Trees (CART) and Random Forest (RF), were fitted to the data in order to find the most successful model. The Simpson and Shannon diversity indices and the reciprocal of the Simpson index were used for estimating tree diversity. After collecting terrestrial information on trees in the 100 sample plots, the tree diversity indices were calculated for each plot. RF, with coefficients of determination from 56.3 to 63.9 and RMSE from 0.15 to 0.84, gave better results than CART, with coefficients of determination from 42.3 to 63.3 and RMSE from 0.188 to 0.88. Overall, the results showed that SPOT-HRG satellite data and nonparametric regression could be useful for estimating tree diversity in the Hyrcanian forests, North of Iran.

  14. Structuring feature space: a non-parametric method for volumetric transfer function generation.

    Science.gov (United States)

    Maciejewski, Ross; Woo, Insoo; Chen, Wei; Ebert, David S

    2009-01-01

    The use of multi-dimensional transfer functions for direct volume rendering has been shown to be an effective means of extracting materials and their boundaries for both scalar and multivariate data. The most common multi-dimensional transfer function consists of a two-dimensional (2D) histogram with axes representing a subset of the feature space (e.g., value vs. value gradient magnitude), with each entry in the 2D histogram being the number of voxels at a given feature space pair. Users then assign color and opacity to the voxel distributions within the given feature space through the use of interactive widgets (e.g., box, circular, triangular selection). Unfortunately, such tools lead users through a trial-and-error approach as they assess which data values within the feature space map to a given area of interest within the volumetric space. In this work, we propose the addition of non-parametric clustering within the transfer function feature space in order to extract patterns and guide transfer function generation. We apply a non-parametric kernel density estimation to group voxels of similar features within the 2D histogram. These groups are then binned and colored based on their estimated density, and the user may interactively grow and shrink the binned regions to explore feature boundaries and extract regions of interest. We also extend this scheme to temporal volumetric data in which time steps of 2D histograms are composited into a histogram volume. A three-dimensional (3D) density estimation is then applied, and users can explore regions within the feature space across time without adjusting the transfer function at each time step. Our work enables users to effectively explore the structures found within a feature space of the volume and provide a context in which the user can understand how these structures relate to their volumetric data. 
We provide tools for enhanced exploration and manipulation of the transfer function, and we show that the initial
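
    The density-based grouping step can be sketched as follows; the feature pairs, the two "materials", and the quantile cut levels are invented for illustration, and scipy's gaussian_kde stands in for whatever kernel estimator the authors used:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)

# Hypothetical voxel feature pairs (value, gradient magnitude): two "materials"
# as dense blobs plus a sparse arc of boundary voxels between them.
mat_a = rng.normal([0.2, 0.1], 0.03, size=(2000, 2))
mat_b = rng.normal([0.7, 0.1], 0.03, size=(2000, 2))
t = rng.uniform(0.2, 0.7, 300)
boundary = np.column_stack([t, 0.1 + 0.5 * np.sin(np.pi * (t - 0.2) / 0.5)])
feats = np.vstack([mat_a, mat_b, boundary])

# Nonparametric kernel density estimate over the 2D feature space.
kde = gaussian_kde(feats.T)
density = kde(feats.T)

# Bin voxels by estimated density: dense bins ~ material interiors,
# sparse bins ~ boundaries (candidates for separate colour/opacity widgets).
levels = np.quantile(density, [0.25, 0.75])
group = np.digitize(density, levels)  # 0 = sparse, 1 = mid, 2 = dense
print({g: int((group == g).sum()) for g in range(3)})
```

    The boundary arc lands in the sparse bins, which is exactly the kind of structure the guided transfer-function widgets expose.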

  15. CURRENT STATUS OF NONPARAMETRIC STATISTICS

    Directory of Open Access Journals (Sweden)

    Orlov A. I.

    2015-02-01

    Full Text Available Nonparametric statistics is one of the five growth points of applied mathematical statistics. Despite the large number of publications on specific issues of nonparametric statistics, the internal structure of this research direction has remained undeveloped. The purpose of this article is to consider its division into areas, based on the existing practice of scientific activity, and to classify investigations on nonparametric statistical methods. Nonparametric statistics makes it possible to draw statistical inferences, in particular to estimate characteristics of a distribution and to test statistical hypotheses, without the (as a rule, weakly justified) assumption that the distribution functions of the samples belong to a particular parametric family. For example, the belief that statistical data often follow a normal distribution is widespread. Meanwhile, analysis of observational results, in particular of measurement errors, always leads to the same conclusion: in most cases the actual distribution differs significantly from the normal one. Uncritical use of the hypothesis of normality often leads to significant errors, for example in the rejection of outlying observations (outliers), in statistical quality control, and in other cases. Therefore, it is advisable to use nonparametric methods, in which only weak requirements are imposed on the distribution functions of the observations; usually only their continuity is assumed. On the basis of a generalization of numerous studies it can be stated that, to date, nonparametric methods can solve almost the same range of tasks for which parametric methods were previously used. Certain statements in the literature, to the effect that nonparametric methods have less power or require larger sample sizes than parametric methods, are incorrect. Note that in nonparametric statistics, as in mathematical statistics in general, there remain a number of unresolved problems.
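
    The article's point, that normality should be checked before trusting parametric procedures, can be illustrated with a small sketch (hypothetical heavy-tailed "measurement errors"; scipy's Shapiro-Wilk and Mann-Whitney tests):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical "measurement errors": heavy-tailed (Student-t, df = 2),
# i.e. nominally bell-shaped but far from normal.
errors = rng.standard_t(df=2, size=500)
w_stat, p_norm = stats.shapiro(errors)      # Shapiro-Wilk rejects normality

# Given non-normality, compare two groups with a rank-based test
# (Mann-Whitney) instead of a t-test, which assumes normal data.
group_a = rng.standard_t(df=2, size=200)
group_b = rng.standard_t(df=2, size=200) + 0.5
u_stat, p_shift = stats.mannwhitneyu(group_a, group_b)

print(f"Shapiro-Wilk p = {p_norm:.2g}; Mann-Whitney p = {p_shift:.2g}")
```

    The rank-based test needs only continuity of the distributions, matching the weak requirements the article describes.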

  16. Balancing of linkages and robot manipulators advanced methods with illustrative examples

    CERN Document Server

    Arakelian, Vigen

    2015-01-01

    In this book advanced balancing methods for planar and spatial linkages, hand operated and automatic robot manipulators are presented. It is organized into three main parts and eight chapters. The main parts are the introduction to balancing, the balancing of linkages and the balancing of robot manipulators. The review of state-of-the-art literature including more than 500 references discloses particularities of shaking force/moment balancing and gravity compensation methods. Then new methods for balancing of linkages are considered. Methods provided in the second part of the book deal with the partial and complete shaking force/moment balancing of various linkages. A new field for balancing methods applications is the design of mechanical systems for fast manipulation. Special attention is given to the shaking force/moment balancing of robot manipulators. Gravity balancing methods are also discussed. The suggested balancing methods are illustrated by numerous examples.

  17. Detecting correlation changes in multivariate time series: A comparison of four non-parametric change point detection methods.

    Science.gov (United States)

    Cabrieto, Jedelyn; Tuerlinckx, Francis; Kuppens, Peter; Grassmann, Mariel; Ceulemans, Eva

    2017-06-01

    Change point detection in multivariate time series is a complex task since, next to the mean, the correlation structure of the monitored variables may also alter when a change occurs. DeCon was recently developed to detect such changes in mean and/or correlation by combining a moving-window approach and robust PCA. However, several other methods that employ other non-parametric tools have been proposed in the literature: E-divisive, Multirank, and KCP. Since these methods use different statistical approaches, two issues need to be tackled. First, applied researchers may find it hard to appraise the differences between the methods. Second, a direct comparison of the relative performance of all these methods for capturing change points signaling correlation changes is still lacking. Therefore, we present the basic principles behind DeCon, E-divisive, Multirank, and KCP and the corresponding algorithms, to make them more accessible to readers. We further compared their performance through extensive simulations using the settings of Bulteel et al. (Biological Psychology, 98 (1), 29-42, 2014), implying changes in mean and in correlation structure, and those of Matteson and James (Journal of the American Statistical Association, 109 (505), 334-345, 2014), implying different numbers of (noise) variables. KCP emerged as the best method in almost all settings. However, in the case of more than two noise variables, only DeCon performed adequately in detecting correlation changes.
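
    None of the four compared methods is reproduced here, but the underlying idea of screening for correlation changes can be sketched with a simple moving-window statistic (simulated bivariate series; the window width and change score are ad hoc illustrative choices, not taken from any of the methods):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated bivariate series: the correlation jumps from 0 to 0.9 at
# t = 150 while both means stay constant.
n, change = 300, 150
z = rng.normal(size=(n, 2))
x = z.copy()
x[change:, 1] = 0.9 * x[change:, 0] + np.sqrt(1 - 0.9 ** 2) * z[change:, 1]

w = 40  # window width (ad hoc choice)

def corr(seg):
    return np.corrcoef(seg.T)[0, 1]

# Change score: absolute difference between the correlation in the window
# just before t and the window just after t.
scores = np.array([abs(corr(x[t - w:t]) - corr(x[t:t + w]))
                   for t in range(w, n - w)])
est_change = w + int(np.argmax(scores))
print("estimated change point:", est_change)
```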

  18. Nonparametric statistical inference

    CERN Document Server

    Gibbons, Jean Dickinson

    2014-01-01

    Thoroughly revised and reorganized, the fourth edition presents in-depth coverage of the theory and methods of the most widely used nonparametric procedures in statistical analysis and offers example applications appropriate for all areas of the social, behavioral, and life sciences. The book presents new material on the quantiles, the calculation of exact and simulated power, multiple comparisons, additional goodness-of-fit tests, methods of analysis of count data, and modern computer applications using MINITAB, SAS, and STATXACT. It includes tabular guides for simplified applications of tests and finding P values and confidence interval estimates.

  19. Inferring the three-dimensional distribution of dust in the Galaxy with a non-parametric method: Preparing for Gaia

    CERN Document Server

    Kh., S Rezaei; Hanson, R J; Fouesneau, M

    2016-01-01

    We present a non-parametric model for inferring the three-dimensional (3D) distribution of dust density in the Milky Way. Our approach uses the extinction measured towards stars at different locations in the Galaxy at approximately known distances. Each extinction measurement is proportional to the integrated dust density along its line-of-sight. Making simple assumptions about the spatial correlation of the dust density, we can infer the most probable 3D distribution of dust across the entire observed region, including along sight lines which were not observed. This is possible because our model employs a Gaussian Process to connect all lines-of-sight. We demonstrate the capability of our model to capture detailed dust density variations using mock data as well as simulated data from the Gaia Universe Model Snapshot. We then apply our method to a sample of giant stars observed by APOGEE and Kepler to construct a 3D dust map over a small region of the Galaxy. Due to our smoothness constraint and its isotropy,...
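
    A 1-D toy version of the inversion idea (a Gaussian Process prior on density, extinction as a line integral) can be sketched as follows; the grid size, kernel hyperparameters, and the "dust cloud" are invented, and the actual model is 3-D with far more structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D version: dust density rho(r) on a grid along one sight line; each
# star's extinction is the integral of rho from the observer to the star.
grid = np.linspace(0.0, 1.0, 50)
dr = grid[1] - grid[0]
true_rho = np.exp(-0.5 * ((grid - 0.4) / 0.08) ** 2)      # a dust cloud at r = 0.4

stars = np.sort(rng.uniform(0.1, 1.0, 30))                 # star distances
A = np.array([[dr if g <= s else 0.0 for g in grid] for s in stars])
ext = A @ true_rho + rng.normal(0.0, 0.01, len(stars))     # noisy extinctions

# Squared-exponential GP prior on rho encodes the spatial-correlation
# assumption; the posterior mean follows from the linear (integral) model.
ell, sig, noise = 0.1, 1.0, 0.01
K = sig ** 2 * np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2 / ell ** 2)
S = A @ K @ A.T + noise ** 2 * np.eye(len(stars))
post_rho = K @ A.T @ np.linalg.solve(S, ext)               # posterior mean density

print("recovered peak near r =", float(grid[int(np.argmax(post_rho))]))
```

    Because the GP couples all positions, the density is also inferred at radii where no star was observed, which is the property the abstract emphasizes.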

  20. Comparison of non-parametric methods for ungrouping coarsely aggregated data

    DEFF Research Database (Denmark)

    Rizzi, Silvia; Thinggaard, Mikael; Engholm, Gerda;

    2016-01-01

    group at the highest ages. When histogram intervals are too coarse, information is lost and comparison between histograms with different boundaries is arduous. In these cases it is useful to estimate detailed distributions from grouped data. Methods From an extensive literature search we identify five...

  1. Linkage analysis in alcohol dependence.

    Science.gov (United States)

    Windemuth, C; Hahn, A; Strauch, K; Baur, M P; Wienker, T F

    1999-01-01

    Alcohol dependence often is a familial disorder and has a genetic component. Research in causative factors of alcoholism is coordinated by a multi-center program, COGA [The Collaborative Study on the Genetics of Alcoholism, Begleiter et al., 1995]. We analyzed a subset of the COGA family sample, 84 pedigrees of Caucasian ancestry comprising 745 persons, 339 of whom are affected according to DSM-III-R and Feighner criteria. Using parametric and nonparametric methods, evidence for linkage was found on chromosome 1 (near markers D1S532, D1S1588, and D1S534), as well as on chromosome 15 (near marker D15S642). Other regions of the genome showed suggestive evidence for contributing loci. Related findings are discussed in recent publications investigating linkage in humans [Reich et al., 1998] and mice [Melo et al., 1996].

  2. Nonparametric statistics for social and behavioral sciences

    CERN Document Server

    Kraska-MIller, M

    2013-01-01

    Introduction to Research in Social and Behavioral Sciences; Basic Principles of Research; Planning for Research; Types of Research Designs; Sampling Procedures; Validity and Reliability of Measurement Instruments; Steps of the Research Process; Introduction to Nonparametric Statistics; Data Analysis; Overview of Nonparametric Statistics and Parametric Statistics; Overview of Parametric Statistics; Overview of Nonparametric Statistics; Importance of Nonparametric Methods; Measurement Instruments; Analysis of Data to Determine Association and Agreement; Pearson Chi-Square Test of Association and Independence; Contingency

  3. Semi- and Nonparametric ARCH Processes

    Directory of Open Access Journals (Sweden)

    Oliver B. Linton

    2011-01-01

    Full Text Available ARCH/GARCH modelling has been successfully applied in empirical finance for many years. This paper surveys the semiparametric and nonparametric methods in univariate and multivariate ARCH/GARCH models. First, we introduce some specific semiparametric models and investigate the semiparametric and nonparametric estimation techniques applied to the error density, the functional form of the volatility function, the relationship between mean and variance, long memory processes, locally stationary processes, continuous time processes, and multivariate models. The second part of the paper concerns the general properties of such processes, including stationarity, ergodicity, and mixing conditions. The last part covers estimation methods in ARCH/GARCH processes.
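
    As one concrete instance of the surveyed ideas, the news-impact curve of a simulated ARCH(1) process can be estimated by Nadaraya-Watson kernel regression without assuming its functional form (the process parameters and the bandwidth are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate an ARCH(1) process: sigma_t^2 = 0.1 + 0.5 * r_{t-1}^2.
n = 3000
r = np.zeros(n)
for t in range(1, n):
    r[t] = np.sqrt(0.1 + 0.5 * r[t - 1] ** 2) * rng.normal()

# Nadaraya-Watson estimate of the news-impact curve
# sigma^2(x) = E[r_t^2 | r_{t-1} = x], with no parametric form imposed.
x_lag, y_sq = r[:-1], r[1:] ** 2
h = 0.25  # kernel bandwidth (assumed, not data-driven)

def sigma2(x0):
    wgt = np.exp(-0.5 * ((x_lag - x0) / h) ** 2)
    return float(np.sum(wgt * y_sq) / np.sum(wgt))

print([round(sigma2(g), 3) for g in (-1.0, 0.0, 1.0)])  # compare with 0.1 + 0.5 x^2
```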

  4. Record: a novel method for ordering loci on a genetic linkage map

    NARCIS (Netherlands)

    Os, van H.; Stam, P.; Visser, R.G.F.; Eck, van H.J.

    2005-01-01

    A new method, REcombination Counting and ORDering (RECORD) is presented for the ordering of loci on genetic linkage maps. The method minimizes the total number of recombination events. The search algorithm is a heuristic procedure, combining elements of branch-and-bound with local reshuffling. Since
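
    The minimize-recombinations criterion can be illustrated on a toy example by exhaustive search (the genotype matrix is invented, and brute force over permutations replaces the paper's branch-and-bound heuristic, which is what scales to real maps):

```python
from itertools import permutations

# Hypothetical genotype matrix: rows = 8 offspring, columns = 5 loci,
# entries 0/1 = which parental allele was inherited. The true locus order
# is 0,1,2,3,4; the observed columns are shuffled.
true_geno = [
    [0, 0, 0, 1, 1], [1, 1, 1, 1, 1], [0, 0, 1, 1, 1], [1, 1, 0, 0, 0],
    [0, 1, 1, 1, 1], [1, 0, 0, 0, 0], [0, 0, 0, 0, 1], [1, 1, 1, 0, 0],
]
shuffle = [2, 0, 4, 1, 3]                       # observed column order
geno = [[row[c] for c in shuffle] for row in true_geno]

def recombinations(order):
    # Total allele switches between adjacent loci, summed over offspring:
    # the number of recombination events implied by the candidate order.
    return sum(sum(ind[a] != ind[b] for a, b in zip(order, order[1:]))
               for ind in geno)

best = min(permutations(range(5)), key=recombinations)
result = [shuffle[i] for i in best]             # map back to original labels
print(result)                                   # an order and its reverse are equivalent
```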

  5. Nonparametric statistical inference

    CERN Document Server

    Gibbons, Jean Dickinson

    2010-01-01

    Overall, this remains a very fine book suitable for a graduate-level course in nonparametric statistics. I recommend it for all people interested in learning the basic ideas of nonparametric statistical inference. -Eugenia Stoimenova, Journal of Applied Statistics, June 2012 … one of the best books available for a graduate (or advanced undergraduate) text for a theory course on nonparametric statistics. … a very well-written and organized book on nonparametric statistics, especially useful and recommended for teachers and graduate students. -Biometrics, 67, September 2011 … This excellently presente

  6. Estimation from PET data of transient changes in dopamine concentration induced by alcohol: support for a non-parametric signal estimation method

    Energy Technology Data Exchange (ETDEWEB)

    Constantinescu, C C; Yoder, K K; Normandin, M D; Morris, E D [Department of Radiology, Indiana University School of Medicine, Indianapolis, IN (United States); Kareken, D A [Department of Neurology, Indiana University School of Medicine, Indianapolis, IN (United States); Bouman, C A [Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN (United States); O' Connor, S J [Department of Psychiatry, Indiana University School of Medicine, Indianapolis, IN (United States)], E-mail: emorris@iupui.edu

    2008-03-07

    We previously developed a model-independent technique (non-parametric ntPET) for extracting the transient changes in neurotransmitter concentration from paired (rest and activation) PET studies with a receptor ligand. To provide support for our method, we introduced three hypotheses of validation based on work by Endres and Carson (1998 J. Cereb. Blood Flow Metab. 18 1196-210) and Yoder et al (2004 J. Nucl. Med. 45 903-11), and tested them on experimental data. All three hypotheses describe relationships between the estimated free (synaptic) dopamine curves (F^DA(t)) and the change in binding potential (ΔBP). The veracity of the F^DA(t) curves recovered by nonparametric ntPET is supported when the data adhere to the following hypothesized behaviors: (1) ΔBP should decline with increasing DA peak time, (2) ΔBP should increase as the strength of the temporal correlation between F^DA(t) and the free raclopride (F^RAC(t)) curve increases, (3) ΔBP should decline linearly with the effective weighted availability of the receptor sites. We analyzed regional brain data from 8 healthy subjects who received two [11C]raclopride scans: one at rest, and one during which unanticipated IV alcohol was administered to stimulate dopamine release. For several striatal regions, nonparametric ntPET was applied to recover F^DA(t), and binding potential values were determined. Kendall rank-correlation analysis confirmed that the F^DA(t) data followed the expected trends for all three validation hypotheses. Our findings lend credence to our model-independent estimates of F^DA(t). Application of nonparametric ntPET may yield important insights into how alterations in timing of dopaminergic neurotransmission are involved in the pathologies of addiction and other psychiatric disorders.

  7. Nonparametric estimation of ultrasound pulses

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Leeman, Sidney

    1994-01-01

    An algorithm for nonparametric estimation of 1D ultrasound pulses in echo sequences from human tissues is derived. The technique is a variation of the homomorphic filtering technique using the real cepstrum, and the underlying basis of the method is explained. The algorithm exploits a priori...
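
    The homomorphic idea behind such techniques — pulse and reflection sequence separating in the cepstral domain — can be sketched as follows; the pulse shape, lifter length, and synthetic reflections are illustrative, and this is not the authors' algorithm, which additionally exploits a priori knowledge:

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic RF line: a smooth ultrasound pulse convolved with a sparse
# reflection sequence, the usual homomorphic signal model.
n = 256
t = np.arange(n)
pulse = np.exp(-0.5 * ((t - 16) / 4.0) ** 2) * np.cos(2 * np.pi * 0.2 * (t - 16))
refl = np.zeros(n)
refl[rng.choice(n, 20, replace=False)] = rng.normal(size=20)
rf = np.real(np.fft.ifft(np.fft.fft(pulse) * np.fft.fft(refl)))  # circular convolution

# Real cepstrum: the smooth pulse lives at low quefrencies, the sparse
# reflections spread to high quefrencies, so a short symmetric lifter
# isolates the pulse's log-magnitude spectrum.
ceps = np.real(np.fft.ifft(np.log(np.abs(np.fft.fft(rf)) + 1e-12)))
cutoff = 20
lifter = np.zeros(n)
lifter[:cutoff] = 1.0
lifter[-cutoff + 1:] = 1.0
est_log_mag = np.real(np.fft.fft(ceps * lifter))
est_mag = np.exp(est_log_mag)               # estimated pulse magnitude spectrum

peak_bin = int(np.argmax(est_mag[:n // 2]))
true_bin = int(np.argmax(np.abs(np.fft.fft(pulse))[:n // 2]))
print("estimated passband peak bin:", peak_bin, "true:", true_bin)
```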

  8. Testing discontinuities in nonparametric regression

    KAUST Repository

    Dai, Wenlin

    2017-01-19

    In nonparametric regression, it is often needed to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method of H.-G. Müller and U. Stadtmüller [Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337. doi: 10.1214/aos/1018031100]
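
    The flavor of a difference-based diagnostic can be sketched as follows (a simple two-sided moving-average contrast on simulated data; the window width and the jump are illustrative, and this is not the test statistic of the paper):

```python
import numpy as np

rng = np.random.default_rng(11)

# Regression function with a jump of height 1 at x = 0.6, plus noise.
n = 400
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2 * np.pi * x) + (x > 0.6) + rng.normal(0.0, 0.2, n)

# Difference-based diagnostic: contrast one-sided moving averages to the
# left and right of each point; a smooth mean gives small contrasts,
# a jump produces a spike.
k = 15
avg = np.convolve(y, np.ones(k) / k, mode="valid")   # avg[j] = mean of y[j:j+k]
diff = np.abs(avg[k:] - avg[:-k])                    # right window minus left window
i = int(np.argmax(diff)) + k                         # boundary index between windows
est_x = float(x[i])
print("estimated jump location x =", round(est_x, 3))
```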

  9. Predicting protein linkages in bacteria: Which method is best depends on task

    Directory of Open Access Journals (Sweden)

    Leach Sonia M

    2008-09-01

    Full Text Available Abstract Background Applications of computational methods for predicting protein functional linkages are increasing. In recent years, several bacteria-specific methods for predicting linkages have been developed. The four major genomic context methods are: Gene cluster, Gene neighbor, Rosetta Stone, and Phylogenetic profiles. These methods have been shown to be powerful tools, and this paper provides guidelines for when each method is appropriate by exploring different features of each method and potential improvements offered by their combination. We also review many previous treatments of these prediction methods, use the latest available annotations, and offer a number of new observations. Results Using Escherichia coli K12 and Bacillus subtilis, linkage predictions made by each of these methods were evaluated against three benchmarks: functional categories defined by COG and KEGG, known pathways listed in EcoCyc, and known operons listed in RegulonDB. Each evaluated method had strengths and weaknesses, with no one method dominating all aspects of predictive ability studied. For functional categories, as previous studies have shown, the Rosetta Stone method was individually best at detecting linkages and predicting functions among proteins with shared KEGG categories, while the Phylogenetic profile method was best for linkage detection and function prediction among proteins with common COG functions. Differences in performance under COG versus KEGG may be attributable to the presence of paralogs. Better function prediction was observed when using a weighted combination of linkages based on reliability versus using a simple unweighted union of the linkage sets. For pathway reconstruction, 99 complete metabolic pathways in E. coli K12 (out of the 209 known, non-trivial pathways) and 193 pathways with 50% of their proteins were covered by linkages from at least one method. Gene neighbor was most effective individually on pathway reconstruction

  11. Meta-analysis of genome-wide linkage studies in BMI and obesity

    NARCIS (Netherlands)

    Saunders, Catherine L.; Chiodini, Benedetta D.; Sham, Pak; Lewis, Cathryn M.; Abkevich, Victor; Adeyemo, Adebowale A.; de Andrade, Mariza; Arya, Rector; Berenson, Gerald S.; Blangero, John; Boehnke, Michael; Borecki, Ingrid B.; Chagnon, Yvon C.; Chen, Wei; Comuzzie, Anthony G.; Deng, Hong-Wen; Duggirala, Ravindranath; Feitosa, Mary F.; Froguel, Philippe; Hanson, Robert L.; Hebebrand, Johannes; Huezo-Dias, Patricia; Kissebah, Ahmed H.; Li, Weidong; Luke, Amy; Martin, Lisa J.; Nash, Matthew; Ohman, Muena; Palmer, Lyle J.; Peltonen, Leena; Perola, Markus; Price, R. Arlen; Redline, Susan; Srinivasan, Sathanur R.; Stern, Michael P.; Stone, Steven; Stringham, Heather; Turner, Stephen; Wijmenga, Cisca; Collier, David A.

    2007-01-01

    Objective: The objective was to provide an overall assessment of genetic linkage data of BMI and BMI-defined obesity using a nonparametric genome scan meta-analysis. Research Methods and Procedures: We identified 37 published studies containing data on over 31,000 individuals from more than 10,000

  12. Method to eliminate flux linkage DC component in load transformer for static transfer switch.

    Science.gov (United States)

    He, Yu; Mao, Chengxiong; Lu, Jiming; Wang, Dan; Tian, Bing

    2014-01-01

    Many industrial and commercial sensitive loads are subject to voltage sags and interruptions. The static transfer switch (STS), based on thyristors, is applied to improve power quality and reliability. However, the transfer will result in severe inrush current in the load transformer because of the DC component in the magnetic flux generated in the transfer process. The inrush current, which can reach 2~30 p.u., can cause maloperation of relay protective devices and bring potential damage to the transformer. The way to eliminate the DC component is to transfer the related phases when the residual flux linkage of the load transformer and the prospective flux linkage of the alternate source are equal. This paper analyzes how the flux linkage of each winding in the load transformer changes in the transfer process. Based on the residual flux linkage when the preferred source is completely disconnected, a method to calculate the proper time point to close each phase of the alternate source is developed. Simulation and laboratory experiment results are presented to show the effectiveness of the transfer method.
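
    The matching condition — close each phase when the prospective flux linkage of the alternate source equals the residual flux linkage — can be checked numerically for one phase (all numbers below are hypothetical, not from the paper):

```python
import numpy as np

# Hypothetical single-phase numbers: 50 Hz alternate source, 311 V peak
# phase voltage, and a residual flux linkage left in the transformer after
# the preferred source is disconnected.
f = 50.0
w = 2 * np.pi * f
Vm = 311.0
lam_res = 0.35                       # residual flux linkage [Wb-turns]

# Steady-state (prospective) flux linkage of the alternate source is the
# integral of v(t) = Vm*sin(w*t), i.e. lam(t) = -(Vm/w)*cos(w*t).
# Closing when lam(t0) equals lam_res leaves no DC flux component.
lam_peak = Vm / w
t0 = float(np.arccos(-lam_res / lam_peak) / w)   # first solution in [0, T/2]

# Check: integrating the voltage from t0 with the matched initial condition
# gives a purely AC flux linkage with no offset, hence no inrush.
t = np.linspace(t0, t0 + 0.1, 5000)
lam = lam_res + (Vm / w) * (np.cos(w * t0) - np.cos(w * t))
print("close at t0 = %.3f ms; max |flux| = %.3f vs steady peak %.3f"
      % (t0 * 1e3, np.abs(lam).max(), lam_peak))
```

    Closing at any other instant would add a DC offset of up to roughly twice the steady peak, which is what drives the transformer into saturation.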

  14. Genome-wide high-density SNP linkage search for glioma susceptibility loci: results from the Gliogene Consortium

    DEFF Research Database (Denmark)

    Shete, Sanjay; Lau, Ching C; Houlston, Richard S;

    2011-01-01

    -fold increased risk of glioma, the search for susceptibility loci in familial forms of the disease has been challenging because the disease is relatively rare, fatal, and heterogeneous, making it difficult to collect sufficient biosamples from families for statistical power. To address this challenge...... nonparametric (model-free) methods. After removal of single-nucleotide polymorphisms in high linkage disequilibrium, we obtained a maximum nonparametric linkage (NPL) score of 3.39 (P = 0.0005) at 17q12-21.32 and a Z-score of 4.20 (P = 0.000007). To replicate our findings, we genotyped 29 independent U

  15. Model-free linkage analysis of a binary trait.

    Science.gov (United States)

    Xu, Wei; Bull, Shelley B; Mirea, Lucia; Greenwood, Celia M T

    2012-01-01

    Genetic linkage analysis aims to detect chromosomal regions containing genes that influence risk of specific inherited diseases. The presence of linkage is indicated when a disease or trait cosegregates through the families with genetic markers at a particular region of the genome. Two main types of genetic linkage analysis are in common use, namely model-based linkage analysis and model-free linkage analysis. In this chapter, we focus solely on the latter type and specifically on binary traits or phenotypes, such as the presence or absence of a specific disease. Model-free linkage analysis is based on allele-sharing, where patterns of genetic similarity among affected relatives are compared to chance expectations. Because the model-free methods do not require the specification of the inheritance parameters of a genetic model, they are preferred by many researchers at early stages in the study of a complex disease. We introduce the history of model-free linkage analysis in Subheading 1. Table 1 describes a standard model-free linkage analysis workflow. We describe three popular model-free linkage analysis methods, the nonparametric linkage (NPL) statistic, the affected sib-pair (ASP) likelihood ratio test, and a likelihood approach for pedigrees. The theory behind each linkage test is described in this section, together with a simple example of the relevant calculations. Table 4 provides a summary of popular genetic analysis software packages that implement model-free linkage models. In Subheading 2, we work through the methods on a rich example providing sample software code and output. Subheading 3 contains notes with additional details on various topics that may need further consideration during analysis.
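
    As a minimal illustration of the allele-sharing idea behind such tests, an affected-sib-pair goodness-of-fit check against the null (0, 1, 2)-IBD-sharing proportions (1/4, 1/2, 1/4) can be sketched; the counts are simulated, and this simple chi-square stands in for the NPL and likelihood statistics described above:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)

# Hypothetical IBD allele-sharing counts for 200 affected sib pairs at one
# marker; under no linkage the (0, 1, 2)-sharing proportions are (1/4, 1/2, 1/4).
n_pairs = 200
obs = rng.multinomial(n_pairs, [0.15, 0.50, 0.35])   # simulated excess sharing
exp = n_pairs * np.array([0.25, 0.50, 0.25])

# Goodness-of-fit chi-square on the sharing distribution (model-free ASP test).
stat = float(np.sum((obs - exp) ** 2 / exp))
p = float(chi2.sf(stat, df=2))
mean_sharing = float((obs[1] + 2 * obs[2]) / (2 * n_pairs))  # null value: 0.5
print(f"chi2 = {stat:.1f}, p = {p:.2g}, mean sharing = {mean_sharing:.3f}")
```

    Mean sharing above 0.5 among affected pairs is exactly the "excess allele sharing" signal that model-free linkage statistics quantify, without specifying inheritance parameters.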

  16. Non-parametric methods – Tree and P-CFA – for the ecological evaluation and assessment of suitable aquatic habitats: A contribution to fish psychology

    Directory of Open Access Journals (Sweden)

    Andreas H. Melcher

    2012-09-01

    Full Text Available This study analyses the multidimensional spawning habitat suitability of the fish species “Nase” (Latin: Chondrostoma nasus). This is the first time non-parametric methods were used to better understand biotic habitat use in theory and practice. In particular, we tested (1) the Decision Tree technique, Chi-squared Automatic Interaction Detector (CHAID), to identify specific habitat types and (2) Prediction-Configural Frequency Analysis (P-CFA) to test for statistical significance. The combination of both non-parametric methods, CHAID and P-CFA, enabled the identification, prediction and interpretation of the most typical significant spawning habitats, and we were also able to determine non-typical habitat types, i.e., types in contrast to antitypes. The gradual combination of these two methods underlined three significant habitat types: shaded habitat, and fine and coarse substrate habitats depending on high flow velocity. The study affirmed the importance for fish species of shading and riparian vegetation along river banks. In addition, this method provides a weighting of interactions between specific habitat characteristics. The results demonstrate that efficient river restoration requires re-establishing riparian vegetation as well as the open river continuum and hydro-morphological improvements to habitats.

  17. Nonparametric Econometrics: The np Package

    Directory of Open Access Journals (Sweden)

    Tristen Hayfield

    2008-07-01

    Full Text Available We describe the R np package via a series of applications that may be of interest to applied econometricians. The np package implements a variety of nonparametric and semiparametric kernel-based estimators that are popular among econometricians. There are also procedures for nonparametric tests of significance and consistent model specification tests for parametric mean regression models and parametric quantile regression models, among others. The np package focuses on kernel methods appropriate for the mix of continuous, discrete, and categorical data often found in applied settings. Data-driven methods of bandwidth selection are emphasized throughout, though we caution the user that data-driven bandwidth selection methods can be computationally demanding.

  18. Quantal Response: Nonparametric Modeling

    Science.gov (United States)

    2017-01-01

    [Figure and front-matter residue omitted: Fig. 3 compares logistic-regression and spline/N-spline fits.] Approved for public release; distribution is unlimited. The report develops nonparametric quantal response (QR) models relating stimulus level to probability of response; the Generalized Linear Model approach does not make use of the limit distribution but allows arbitrary functional forms. Contents (from the table of contents): 5. Nonparametric QR Models; 7. Conclusions and Recommendations; 8. References; Appendix A. The Linear Model; Appendix B. The Generalized Linear Model; Appendix C. B

  19. Optimal synthesis of a four-bar linkage by method of controlled deviation

    Directory of Open Access Journals (Sweden)

    Bulatović Radovan R.

    2004-01-01

    Full Text Available This paper considers optimal synthesis of a four-bar linkage by the method of controlled deviations. The advantage of this approximate method is that it allows control of the motion of the coupler in the four-bar linkage, so that the path of the coupler stays within a prescribed neighborhood around the given path on the segment observed. The Hooke-Jeeves optimization algorithm has been used in the optimization process. As a direct-search method it uses no derivative expressions: the calculated values of the objective function are compared individually in each iteration, and the move is made in the direction that decreases the objective function. The algorithm does not depend on the initial selection of the design variables. All this is illustrated on an example of synthesis of a four-bar linkage whose coupler point traces a straight line, i.e. passes through sixteen prescribed points lying on one straight line.
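
    A compact sketch of the Hooke-Jeeves pattern search itself; the objective below is a simple quadratic standing in for the path-deviation function of the linkage design variables:

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10000):
    """Derivative-free pattern search: exploratory moves along each axis,
    then a pattern move in the improving direction. Only objective values
    are compared, matching the direct-search idea described above."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        xt, ft = x.copy(), fx
        for i in range(len(x)):            # exploratory search around the base point
            for d in (step, -step):
                trial = xt.copy()
                trial[i] += d
                fv = f(trial)
                if fv < ft:
                    xt, ft = trial, fv
                    break
        if ft < fx:
            pattern = xt + (xt - x)        # pattern move: extrapolate the success
            fp = f(pattern)
            if fp < ft:
                x, fx = pattern, fp
            else:
                x, fx = xt, ft
        else:
            step *= shrink                 # no improvement: refine the mesh
    return x, fx

# Hypothetical objective standing in for the path-deviation function
# (a simple quadratic with minimum at (2, -2)).
obj = lambda v: (v[0] - 2.0) ** 2 + (v[0] + v[1]) ** 2
xbest, fbest = hooke_jeeves(obj, [0.0, 0.0])
print(xbest, fbest)
```

    Because only function values are compared, the same routine works for non-smooth deviation measures where gradient-based methods struggle.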

  20. Parametric and Non-Parametric System Modelling

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg

    1999-01-01

    … the focus is on combinations of parametric and non-parametric methods of regression. This combination can be in terms of additive models where e.g. one or more non-parametric terms are added to a linear regression model. It can also be in terms of conditional parametric models where the coefficients … For this purpose non-parametric methods together with additive models are suggested. Also, a new approach specifically designed to detect non-linearities is introduced. Confidence intervals are constructed by use of bootstrapping. As a link between non-parametric and parametric methods a paper dealing with neural … considered. It is shown that adaptive estimation in conditional parametric models can be performed by combining the well-known methods of local polynomial regression and recursive least squares with exponential forgetting. The approach used for estimation in conditional parametric models also highlights how …

  1. A New Energy-Based Method for 3-D Finite-Element Nonlinear Flux Linkage computation of Electrical Machines

    DEFF Research Database (Denmark)

    Lu, Kaiyuan; Rasmussen, Peter Omand; Ritchie, Ewen

    2011-01-01

    This paper presents a new method for computation of the nonlinear flux linkage in 3-D finite-element models (FEMs) of electrical machines. Accurate computation of the nonlinear flux linkage in 3-D FEM is not an easy task. Compared to the existing energy-perturbation method, the new energy-based technique […]. The proposed method is validated using experimental results on two different permanent-magnet machines.

  2. Nonparametric Inference for Periodic Sequences

    KAUST Repository

    Sun, Ying

    2012-02-01

    This article proposes a nonparametric method for estimating the period and values of a periodic sequence when the data are evenly spaced in time. The period is estimated by a "leave-out-one-cycle" version of cross-validation (CV) and complements the periodogram, a widely used tool for period estimation. The CV method is computationally simple and implicitly penalizes multiples of the smallest period, leading to a "virtually" consistent estimator of integer periods. This estimator is investigated both theoretically and by simulation. We also propose a nonparametric test of the null hypothesis that the data have constant mean against the alternative that the sequence of means is periodic. Finally, our methodology is demonstrated on three well-known time series: the sunspots and lynx trapping data, and the El Niño series of sea surface temperatures. © 2012 American Statistical Association and the American Society for Quality.
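
    The leave-out-one-cycle idea can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: each candidate period p is scored by predicting every cycle from the phase-wise mean of the remaining cycles. On the noise-free toy sequence below the smallest zero-error period is recovered; with noisy data, the CV error inflation for large p (fewer cycles to average over) is what implicitly penalizes multiples of the true period.

    ```python
    def cv_period(y, max_period):
        """Leave-out-one-cycle cross-validation for integer period
        estimation: return the candidate period with the smallest mean
        squared out-of-cycle prediction error."""
        scores = {}
        for p in range(1, max_period + 1):
            m = len(y) // p                 # number of complete cycles
            if m < 2:
                break
            err = 0.0
            for k in range(m):              # leave out cycle k
                for j in range(p):          # phase j within the cycle
                    rest = [y[c * p + j] for c in range(m) if c != k]
                    pred = sum(rest) / len(rest)
                    err += (y[k * p + j] - pred) ** 2
            scores[p] = err / (m * p)
        return min(scores, key=scores.get)

    # Period-4 toy sequence, repeated 25 times.
    est = cv_period([0.0, 2.0, 5.0, 1.0] * 25, max_period=10)
    ```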

  3. A Linkage Matching Method for Road and Habitation by Using Urban Skeleton Line Network

    Directory of Open Access Journals (Sweden)

    LIU Chuang

    2016-12-01

    Consistency between road and habitation data is often low, with large geometric position deviations, which hampers the accuracy and efficiency of road and habitation matching. A linkage matching method for roads and habitations using an urban skeleton line network is proposed to solve this problem. Linkage matching imitates the human process of searching for target objects by salient features and spatial correlation when reading maps, treating matching as a reasoning process of target-feature searching and information-association transmission. First, the urban skeleton line network is constructed from a constrained Delaunay triangulation; then the topological relationships among roads, skeleton lines, skeleton-line meshes and habitations are built; finally, a matching transmission model is established from these topological relationships. According to this model, linkage matching is carried out, in which road matching drives habitation matching or habitation matching drives road matching. The advantage of this method is that as long as one element has good data consistency, it can drive the other element to a very good matching result, in a way that conforms to the human cognitive process.

  4. Importance sampling method of correction for multiple testing in affected sib-pair linkage analysis

    OpenAIRE

    Klein, Alison P.; Kovac, Ilija; Sorant, Alexa JM; Baffoe-Bonnie, Agnes; Doan, Betty Q; Ibay, Grace; Lockwood, Erica; Mandal, Diptasri; Santhosh, Lekshmi; Weissbecker, Karen; Woo, Jessica; Zambelli-Weiner, April; Zhang, Jie; Naiman, Daniel Q.; Malley, James

    2003-01-01

    Using the Genetic Analysis Workshop 13 simulated data set, we compared the technique of importance sampling to several other methods designed to adjust p-values for multiple testing: the Bonferroni correction, the method proposed by Feingold et al., and naïve Monte Carlo simulation. We performed affected sib-pair linkage analysis for each of the 100 replicates for each of five binary traits and adjusted the derived p-values using each of the correction methods. The type I error rates for each...

  5. Fine mapping of multiple QTL using combined linkage and linkage disequilibrium mapping – A comparison of single QTL and multi QTL methods

    Directory of Open Access Journals (Sweden)

    Meuwissen Theo HE

    2007-04-01

    Two previously described QTL mapping methods, which combine linkage analysis (LA) and linkage disequilibrium analysis (LD), were compared for their ability to detect and map multiple QTL. The methods were tested on five simulated data sets in which the exact QTL positions were known. Every simulated data set contained two QTL, but the distance between these QTL was varied from 15 to 150 cM. The results show that the single-QTL mapping method (LDLA) gave good results as long as the distance between the QTL was large (> 90 cM). When the distance between the QTL was reduced, the single-QTL method had problems positioning the two QTL and tended to position only one QTL, i.e. a "ghost" QTL, in between the two real QTL positions. The multi-QTL mapping method (MP-LDLA) gave good results for all evaluated distances between the QTL. For large distances between the QTL (> 90 cM) the single-QTL method more often positioned the QTL in the correct marker bracket, but considering the broader likelihood peaks of the single-point method it could be argued that the multi-QTL method was more precise. As the distance was reduced, the multi-QTL method was clearly more accurate than the single-QTL method. The two methods combine well, and together provide a good tool to position single or multiple QTL in practical situations, where the number of QTL and their positions are unknown.

  6. Nonparametric regression with filtered data

    CERN Document Server

    Linton, Oliver; Nielsen, Jens Perch; Van Keilegom, Ingrid; 10.3150/10-BEJ260

    2011-01-01

    We present a general principle for estimating a regression function nonparametrically, allowing for a wide variety of data filtering, for example, repeated left truncation and right censoring. Both the mean and the median regression cases are considered. The method works by first estimating the conditional hazard function or conditional survivor function and then integrating. We also investigate improved methods that take account of model structure such as independent errors and show that such methods can improve performance when the model structure is true. We establish the pointwise asymptotic normality of our estimators.

  7. Nonparametric identification of copula structures

    KAUST Repository

    Li, Bo

    2013-06-01

    We propose a unified framework for testing a variety of assumptions commonly made about the structure of copulas, including symmetry, radial symmetry, joint symmetry, associativity and Archimedeanity, and max-stability. Our test is nonparametric and based on the asymptotic distribution of the empirical copula process. We perform simulation experiments to evaluate our test and conclude that our method is reliable and powerful for assessing common assumptions on the structure of copulas, particularly when the sample size is moderately large. We illustrate our testing approach on two datasets. © 2013 American Statistical Association.
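
    To make the idea concrete, here is a toy version of an empirical-copula symmetry (exchangeability) check. The statistic below is illustrative only; the published test calibrates such discrepancies against the asymptotic distribution of the empirical copula process, which is not reproduced here.

    ```python
    def exchangeability_stat(x, y, m=20):
        """Sup-norm discrepancy between the empirical copula C_n(u, v)
        and its swap C_n(v, u) on an m x m grid; zero means the sample
        is exactly exchangeable in (x, y)."""
        n = len(x)
        # Pseudo-observations: ranks rescaled to (0, 1); assumes no ties.
        u = [sum(1 for b in x if b <= a) / (n + 1) for a in x]
        v = [sum(1 for b in y if b <= a) / (n + 1) for a in y]
        grid = [i / (m - 1) for i in range(m)]

        def C(a, b, s, t):  # empirical copula evaluated at (s, t)
            return sum(1 for ai, bi in zip(a, b) if ai <= s and bi <= t) / n

        return max(abs(C(u, v, s, t) - C(v, u, s, t))
                   for s in grid for t in grid)

    sym = exchangeability_stat([1, 2, 3, 4], [2, 1, 4, 3])  # swap-closed sample
    asym = exchangeability_stat([1, 2, 3], [2, 3, 1])
    ```

    The first sample is closed under swapping coordinates, so the statistic is exactly zero; the second is not, and the discrepancy is visibly positive.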

  8. Nonparametric Predictive Regression

    OpenAIRE

    Ioannis Kasparis; Elena Andreou; Phillips, Peter C.B.

    2012-01-01

    A unifying framework for inference is developed in predictive regressions where the predictor has unknown integration properties and may be stationary or nonstationary. Two easily implemented nonparametric F-tests are proposed. The test statistics are related to those of Kasparis and Phillips (2012) and are obtained by kernel regression. The limit distribution of these predictive tests holds for a wide range of predictors including stationary as well as non-stationary fractional and near unit...
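
    The kernel-regression building block behind such tests looks like this in its simplest (Nadaraya-Watson) form. This is a generic sketch, not the estimator of the paper:

    ```python
    from math import exp, pi, sin

    def nadaraya_watson(xs, ys, x0, h=0.3):
        """Gaussian-kernel (Nadaraya-Watson) regression estimate at x0:
        a locally weighted average of the responses, with weights decaying
        in the distance from x0 on the bandwidth scale h."""
        w = [exp(-0.5 * ((x0 - xi) / h) ** 2) for xi in xs]
        return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

    # Smooth a noiseless sine curve and evaluate the fit at pi/2.
    xs = [i * pi / 49 for i in range(50)]
    fit = nadaraya_watson(xs, [sin(x) for x in xs], pi / 2)
    ```

    At the peak of the sine the estimate is slightly below 1: kernel smoothing averages over the concave neighbourhood, the usual smoothing bias of order h².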

  9. Linkage analysis of chromosome 14 and essential hypertension in Chinese population

    Institute of Scientific and Technical Information of China (English)

    ZHAO Wei-yan; HUANG Jian-feng; GE Dong-liang; SU Shao-yong; LI Biao; GU Dong-feng

    2005-01-01

    Background Hypertension is a complex biological trait that is influenced by multiple factors. Encouraging results in hypertension research have shown that linkage analysis can be used both to replicate other studies and to discover new genetic risk factors. Previous studies linked human chromosome 14 to essential hypertension or blood pressure traits, and we tried to replicate these findings in a Chinese population. Methods A linkage scan was performed on chromosome 14 with 14 microsatellite markers at a density of about 10 centimorgans (cM) in 147 Chinese hypertensive nuclear families. Multipoint non-parametric linkage analysis and exclusion mapping were performed with the GENEHUNTER software, whereas quantitative analysis was performed with the variance-component method implemented in the SOLAR package. Results In the qualitative analysis, the highest non-parametric linkage score was 1.0 (P=0.14) at D14S261 in the single-point analysis, and no locus achieved a non-parametric linkage score above 1.0 in the multipoint analysis. Maximum-likelihood mapping showed no significant results either. The traditional exclusion criterion of a log-of-the-odds (LOD) score of -2 was then adopted, and chromosome 14 was excluded for λs ≥ 2.4. In the quantitative analysis of blood pressure with the SOLAR software, two-point and multipoint analyses suggested no evidence for linkage on chromosome 14 for systolic or diastolic blood pressure. Conclusion There was no substantial evidence to support linkage of chromosome 14 with essential hypertension or blood pressure traits in the Chinese hypertensive subjects in this study.

  10. Benchmark of the non-parametric Bayesian deconvolution method implemented in the SINBAD code for X/γ rays spectra processing

    Science.gov (United States)

    Rohée, E.; Coulon, R.; Carrel, F.; Dautremer, T.; Barat, E.; Montagu, T.; Normand, S.; Jammes, C.

    2016-11-01

    Radionuclide identification and quantification are a serious concern in many applications, such as in situ monitoring at nuclear facilities, laboratory analysis, special nuclear materials detection, environmental monitoring, and waste measurements. High-resolution gamma-ray spectrometry based on high-purity germanium diode detectors is the best available solution for isotopic identification. Over the last decades, methods have been developed to improve gamma spectra analysis. However, difficulties remain when full-energy peaks are folded together with a high ratio between their amplitudes, and when the Compton background is much larger than the signal of a single peak. In this context, this study compares a conventional analysis based on the "iterative peak fitting deconvolution" method with a "nonparametric Bayesian deconvolution" approach developed by the CEA LIST and implemented in the SINBAD code. The iterative peak fit deconvolution is used in this study as a reference method, largely validated by industrial standards, to unfold complex spectra from HPGe detectors. Complex spectra are studied from IAEA benchmark protocol tests and from measured spectra. The SINBAD code shows promising deconvolution capabilities compared to the conventional method, without any expert parameter fine-tuning.

  11. Application of non-parametric bootstrap methods to estimate confidence intervals for QTL location in a beef cattle QTL experimental population.

    Science.gov (United States)

    Jongjoo, Kim; Davis, Scott K; Taylor, Jeremy F

    2002-06-01

    Empirical confidence intervals (CIs) for the estimated quantitative trait locus (QTL) location from selective and non-selective non-parametric bootstrap resampling methods were compared for a genome scan involving an Angus x Brahman reciprocal fullsib backcross population. Genetic maps, based on 357 microsatellite markers, were constructed for 29 chromosomes using CRI-MAP V2.4. Twelve growth, carcass composition and beef quality traits (n = 527-602) were analysed to detect QTLs utilizing (composite) interval mapping approaches. CIs were investigated for 28 likelihood ratio test statistic (LRT) profiles for the one-QTL-per-chromosome model. The CIs from the non-selective bootstrap method were largest (87.7 cM average, or 79.2% coverage of test chromosomes). The Selective II procedure produced the smallest CI size (42.3 cM average). However, CI sizes from the Selective II procedure were more variable than those produced by the two-LOD drop method. CI ranges from the Selective II procedure were also asymmetrical (relative to the most likely QTL position) due to the bias caused by the tendency for the estimated QTL position to be at a marker position in the bootstrap samples and due to monotonicity and asymmetry of the LRT curve in the original sample.
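
    Stripped of the QTL machinery, the non-selective bootstrap is plain percentile resampling: re-estimate the statistic on datasets resampled with replacement and read off empirical quantiles. The following hypothetical example applies it to a sample median rather than a likelihood-ratio profile:

    ```python
    import random
    from statistics import median

    def bootstrap_ci(data, estimator, n_boot=2000, alpha=0.05, seed=0):
        """Percentile bootstrap confidence interval: the estimator is
        recomputed on n_boot resamples drawn with replacement, and the
        alpha/2 and 1 - alpha/2 empirical quantiles are returned."""
        rng = random.Random(seed)
        stats = sorted(estimator(rng.choices(data, k=len(data)))
                       for _ in range(n_boot))
        lo = stats[int(n_boot * alpha / 2)]
        hi = stats[int(n_boot * (1 - alpha / 2))]
        return lo, hi

    # Hypothetical measurements; the CI brackets their median (6.55).
    sample = [4.1, 5.0, 5.2, 5.9, 6.3, 6.8, 7.4, 8.0, 9.1, 10.2]
    lo, hi = bootstrap_ci(sample, median)
    ```

    The asymmetry the record describes for QTL position arises for the same reason it can arise here: the bootstrap distribution of the estimator need not be symmetric about the point estimate.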

  12. Multiatlas segmentation as nonparametric regression.

    Science.gov (United States)

    Awate, Suyash P; Whitaker, Ross T

    2014-09-01

    This paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases; such methods are termed as label fusion or multiatlas segmentation. We model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches. We analyze the nonparametric estimator's convergence behavior that characterizes expected segmentation error as a function of the size of the multiatlas database. We show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem (determined by the chosen anatomical structure, imaging modality, registration algorithm, and label-fusion algorithm). We describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically. We use these parameter estimates to optimize the regression estimator. We show that the expected error for large database sizes is well predicted by models learned on small databases. Thus, a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level. Such cost-benefit analysis is crucial for deploying clinical multiatlas segmentation systems.

  13. Recent Advances and Trends in Nonparametric Statistics

    CERN Document Server

    Akritas, MG

    2003-01-01

    The advent of high-speed, affordable computers in the last two decades has given a new boost to the nonparametric way of thinking. Classical nonparametric procedures, such as function smoothing, suddenly lost their abstract flavour as they became practically implementable. In addition, many previously unthinkable possibilities became mainstream; prime examples include the bootstrap and resampling methods, wavelets and nonlinear smoothers, graphical methods, data mining, bioinformatics, as well as the more recent algorithmic approaches such as bagging and boosting. This volume is a collection o

  14. A nonparametric dynamic additive regression model for longitudinal data

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas H.

    2000-01-01

    dynamic linear models, estimating equations, least squares, longitudinal data, nonparametric methods, partly conditional mean models, time-varying-coefficient models

  15. Investigation of Flux-Linkage Profile Measurement Methods for Switched-Reluctance Motors and Permanent-Magnet Motors

    DEFF Research Database (Denmark)

    Lu, Kaiyuan; Rasmussen, Peter Omand; Ritchie, Ewen

    2009-01-01

    Each existing measurement method has its own limitations and constraints. In this paper, an investigation and comparison of existing measurement methods is first reported. Next, a simple pure-AC flux-linkage measurement method, which does not require a search coil, is discussed and evaluated. Experimental results are given, obtained using...

  16. Core requirements for successful data linkage: an example of a triangulation method.

    Science.gov (United States)

    Hopf, Y M; Francis, J; Helms, P J; Haughney, J; Bond, C

    2016-10-21

    The aim was to explore the views of professional stakeholders and healthcare professionals (HCPs) on the linkage of UK National Health Service (NHS) data for paediatric pharmacovigilance purposes and to make recommendations for such a system. A mixed methods approach including a literature review, interviews, focus groups and a three-round Delphi survey with HCPs in Scotland was followed by a triangulation process using a systematic protocol. The survey was structured using the Theoretical Domains Framework of behaviour change. Items retained after applying the matrix-based triangulation process were thematically coded. Ethical approval was granted by the North of Scotland Research Ethics Service. Results from 18 papers, 23 interviewees, 23 participants of focus groups and 61 completed questionnaires in the Delphi survey contributed to the triangulation process. A total of 25 key findings from all four studies were identified during triangulation. There was good convergence; 21 key findings were agreed and remained to inform recommendations. The items were coded as practical/technical (eg, decision about the unique patient identifier to use), mandatory (eg, governed by statute), essential (consistently mentioned in all studies and therefore needed to ensure professional support) or preferable. The development of a paediatric linked database has support from professional stakeholders and HCPs in Scotland. The triangulation identified three sets of core requirements for a new system of data linkage. An additional fourth set of 'preferable' requirements might increase engagement of HCPs and their support for the new system.

  17. Asymptotic estimation theory of multipoint linkage analysis under perfect marker information

    OpenAIRE

    Hössjer, Ola

    2003-01-01

    We consider estimation of a disease susceptibility locus τ on a chromosome. With perfect marker data available, the estimator of τ based on N pedigrees has a rate of convergence N^(-1) under mild regularity conditions. The limiting distribution is the arg max of a certain compound Poisson process. Our approach is conditional on observed phenotypes, and therefore treats parametric and nonparametric linkage, as well as quantitative trait loci methods, within a unified fra...

  18. An alternative approach to the ground motion prediction problem by a non-parametric adaptive regression method

    Science.gov (United States)

    Yerlikaya-Özkurt, Fatma; Askan, Aysegul; Weber, Gerhard-Wilhelm

    2014-12-01

    Ground Motion Prediction Equations (GMPEs) are empirical relationships which are used for determining the peak ground response at a particular distance from an earthquake source. They relate the peak ground responses as a function of earthquake source type, distance from the source, local site conditions where the data are recorded and finally the depth and magnitude of the earthquake. In this article, a new prediction algorithm, called Conic Multivariate Adaptive Regression Splines (CMARS), is employed on an available dataset for deriving a new GMPE. CMARS is based on a special continuous optimization technique, conic quadratic programming. These convex optimization problems are very well-structured, resembling linear programs and, hence, permitting the use of interior point methods. The CMARS method is performed on the strong ground motion database of Turkey. Results are compared with three other GMPEs. CMARS is found to be effective for ground motion prediction purposes.

  19. IMPROVED HOMOTOPY ITERATION METHOD AND APPLIED TO THE NINE-POINT PATH SYNTHESIS PROBLEM FOR FOUR-BAR LINKAGES

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A new algorithm called the homotopy iteration method, based on the homotopy function, is studied and improved. With the improved homotopy iteration method, deficient polynomial systems of high order can be solved quickly and efficiently compared to the original homotopy iteration method. Numerical examples for the nine-point path synthesis of four-bar linkages show the advantages and efficiency of the improved homotopy iteration method.
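
    The continuation idea behind homotopy methods can be illustrated on a single polynomial. This sketch is not the paper's algorithm (which handles deficient polynomial *systems*); it only shows the basic mechanism of tracking a root from an easy start system to the target:

    ```python
    def homotopy_solve(f, df, g, dg, x0, steps=100, newton_iters=5):
        """Track a root of H(x, t) = (1 - t) g(x) + t f(x) from a known
        root x0 of the start system g at t = 0 to a root of the target f
        at t = 1, correcting with a few Newton steps after each small
        increase in t."""
        x = x0
        for s in range(1, steps + 1):
            t = s / steps
            for _ in range(newton_iters):
                H = (1 - t) * g(x) + t * f(x)
                dH = (1 - t) * dg(x) + t * df(x)
                x -= H / dH
        return x

    # Target: real root of x^3 - 2 (i.e. 2**(1/3)); start system: x - 1.
    root = homotopy_solve(lambda x: x**3 - 2, lambda x: 3 * x**2,
                          lambda x: x - 1, lambda x: 1.0, x0=1.0)
    ```

    Here dH/dx > 0 along the whole path, so the root curve has no turning points and the simple predictor-corrector tracking succeeds.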

  20. Non-parametric linear regression of discrete Fourier transform convoluted chromatographic peak responses under non-ideal conditions of internal standard method.

    Science.gov (United States)

    Korany, Mohamed A; Maher, Hadir M; Galal, Shereen M; Fahmy, Ossama T; Ragab, Marwa A A

    2010-11-15

    This manuscript discusses the application of chemometrics to the handling of HPLC response data using the internal standard method (ISM). This was performed on a model mixture containing terbutaline sulphate, guaiphenesin, bromhexine HCl, sodium benzoate and propylparaben as an internal standard. Derivative treatment of the chromatographic response data of analyte and internal standard was followed by convolution of the resulting derivative curves using 8-point sin x(i) polynomials (discrete Fourier functions). The response of each analyte signal, its corresponding derivative and convoluted derivative data were divided by that of the internal standard to obtain the corresponding ratio data. This was found beneficial in eliminating different types of interference. It was successfully applied to handle some of the most common chromatographic problems and non-ideal conditions, namely overlapping chromatographic peaks and very low analyte concentrations. For example, in the case of overlapping peaks, the correlation coefficient of sodium benzoate improved from 0.9975 to 0.9998 on moving from the conventional peak-area method to the first-derivative-under-Fourier-functions method. A significant improvement in precision and accuracy for the determination of synthetic mixtures and dosage forms in non-ideal cases was also achieved. For example, in the case of overlapping peaks, the guaiphenesin mean recovery% and RSD% went from 91.57 and 9.83 to 100.04 and 0.78, respectively. This work also compares Theil's method, a non-parametric regression method, for handling the response ratio data with least-squares parametric regression, which is considered the de facto standard method for regression.
Theil's method was found to be superior to the method of least squares as it assumes that errors could occur in both x- and y-directions and
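
    Theil's estimator itself is compact: the slope is the median of all pairwise slopes, which is what makes it tolerant of errors in both variables and of the occasional wild response ratio. The calibration data below are hypothetical, not the paper's:

    ```python
    from statistics import median

    def theil_fit(x, y):
        """Theil's non-parametric line fit: slope = median of all pairwise
        slopes, intercept = median of the residual offsets y - slope * x."""
        slopes = [(y[j] - y[i]) / (x[j] - x[i])
                  for i in range(len(x)) for j in range(i + 1, len(x))]
        slope = median(slopes)
        intercept = median(yi - slope * xi for xi, yi in zip(x, y))
        return slope, intercept

    # Calibration-style data near y = 2x, with one corrupted point at x = 4.
    x = [1.0, 2.0, 3.0, 4.0, 5.0]
    y = [2.1, 4.0, 6.0, 11.5, 10.1]
    slope, intercept = theil_fit(x, y)
    ```

    The single outlier barely moves the Theil fit (slope stays near 2), whereas an ordinary least-squares slope on the same data is pulled well above 2.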

  1. Thirty years of nonparametric item response theory

    NARCIS (Netherlands)

    Molenaar, W.

    2001-01-01

    Relationships between a mathematical measurement model and its real-world applications are discussed. A distinction is made between large data matrices commonly found in educational measurement and smaller matrices found in attitude and personality measurement. Nonparametric methods are evaluated fo

  2. Nonparametric confidence intervals for monotone functions

    NARCIS (Netherlands)

    Groeneboom, P.; Jongbloed, G.

    2015-01-01

    We study nonparametric isotonic confidence intervals for monotone functions. In [Ann. Statist. 29 (2001) 1699–1731], pointwise confidence intervals, based on likelihood ratio tests using the restricted and unrestricted MLE in the current status model, are introduced. We extend the method to the trea

  4. Characterizing Ipomopsis rubra (Polemoniaceae) germination under various thermal scenarios with non-parametric and semi-parametric statistical methods.

    Science.gov (United States)

    Pérez, Hector E; Kettner, Keith

    2013-10-01

    Time-to-event analysis represents a collection of relatively new, flexible, and robust statistical techniques for investigating the incidence and timing of transitions from one discrete condition to another. Plant biology is replete with examples of such transitions occurring from the cellular to population levels. However, application of these statistical methods has been rare in botanical research. Here, we demonstrate the use of non- and semi-parametric time-to-event and categorical data analyses to address questions regarding seed to seedling transitions of Ipomopsis rubra propagules exposed to various doses of constant or simulated seasonal diel temperatures. Seeds were capable of germinating rapidly to >90 % at 15-25 or 22/11-29/19 °C. Optimum temperatures for germination occurred at 25 or 29/19 °C. Germination was inhibited and seed viability decreased at temperatures ≥30 or 33/24 °C. Kaplan-Meier estimates of survivor functions indicated highly significant differences in temporal germination patterns for seeds exposed to fluctuating or constant temperatures. Extended Cox regression models specified an inverse relationship between temperature and the hazard of germination. Moreover, temperature and the temperature × day interaction had significant effects on germination response. Comparisons to reference temperatures and linear contrasts suggest that summer temperatures (33/24 °C) play a significant role in differential germination responses. Similarly, simple and complex comparisons revealed that the effects of elevated temperatures predominate in terms of components of seed viability. In summary, the application of non- and semi-parametric analyses provides appropriate, powerful data analysis procedures to address various topics in seed biology and more widespread use is encouraged.
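
    The Kaplan-Meier estimate mentioned above has a short product-limit form. The germination data below are made up for illustration, not taken from the study:

    ```python
    def kaplan_meier(time, event):
        """Product-limit (Kaplan-Meier) estimate of the survivor function:
        here S(t) is read as the fraction of seeds still ungerminated
        after day t; seeds dormant when the trial ends are right-censored
        (event = 0) and contribute to the risk set but not to the drops."""
        times = sorted({t for t, e in zip(time, event) if e == 1})
        surv, s = [], 1.0
        for t in times:
            at_risk = sum(1 for u in time if u >= t)   # still under observation
            d = sum(1 for u, e in zip(time, event) if u == t and e == 1)
            s *= 1 - d / at_risk
            surv.append((t, s))
        return surv

    days  = [2, 3, 3, 4, 5, 7, 7, 7]   # hypothetical germination days
    event = [1, 1, 1, 1, 0, 1, 0, 0]   # 0 = still dormant at trial end
    km = kaplan_meier(days, event)
    ```

    Censored seeds shrink the risk set without forcing a drop in the curve, which is exactly why time-to-event methods handle seeds that never germinate more honestly than simple percentages do.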

  5. A Method for Achieving Constant Rotation Rates in a Micro-Orthogonal Linkage System

    Energy Technology Data Exchange (ETDEWEB)

    Dickey, F.M.; Holswade, S.C.; Romero, L.A.

    1999-05-12

    Silicon micromachine designs include engines that consist of orthogonally oriented linear comb drive actuators mechanically connected to a rotating gear. These gears are as small as 50 µm in diameter and can be driven at rotation rates exceeding 300,000 rpm. Generally, these engines will run with non-uniform rotation rates if the drive signals are not properly designed and maintained over a range of system parameters. We present a method for producing constant rotation rates in a micro-engine driven by an orthogonal linkage system. We show that provided the values of certain masses, springs, damping factors, and lever arms are in the right proportions, the system behaves as though it were symmetrical. We will refer to systems built in this way as being quasi-symmetrical. We show that if a system is built quasi-symmetrically, then it is possible to achieve constant rotation rates even if one does not know the form of the friction function, or the value of the friction. We analyze this case in some detail.

  6. Nonparametric tests for censored data

    CERN Document Server

    Bagdonavicus, Vilijandas; Nikulin, Mikhail

    2013-01-01

    This book concerns testing hypotheses in non-parametric models. Generalizations of many non-parametric tests to the case of censored and truncated data are considered. Most of the test results are proved, and real applications are illustrated using examples. Theory and exercises are provided. The incorrect use of many of these tests in popular statistical software is highlighted and discussed.

  7. Biometric Authentication using Nonparametric Methods

    CERN Document Server

    Sheela, S V; 10.5121/ijcsit.2010.2309

    2010-01-01

    The physiological and behavioral trait is employed to develop biometric authentication systems. The proposed work deals with the authentication of iris and signature based on minimum variance criteria. The iris patterns are preprocessed based on area of the connected components. The segmented image used for authentication consists of the region with large variations in the gray level values. The image region is split into quadtree components. The components with minimum variance are determined from the training samples. Hu moments are applied on the components. The summation of moment values corresponding to minimum variance components are provided as input vector to k-means and fuzzy k-means classifiers. The best performance was obtained for MMU database consisting of 45 subjects. The number of subjects with zero False Rejection Rate [FRR] was 44 and number of subjects with zero False Acceptance Rate [FAR] was 45. This paper addresses the computational load reduction in off-line signature verification based o...

  9. Irish study of high-density Schizophrenia families: Field methods and power to detect linkage

    Energy Technology Data Exchange (ETDEWEB)

    Kendler, K.S.; Straub, R.E.; MacLean, C.J. [Virginia Commonwealth Univ., Richmond, VA (United States)] [and others]

    1996-04-09

    Large samples of multiplex pedigrees will probably be needed to detect susceptibility loci for schizophrenia by linkage analysis. Standardized ascertainment of such pedigrees from culturally and ethnically homogeneous populations may improve the probability of detection and replication of linkage. The Irish Study of High-Density Schizophrenia Families (ISHDSF) was formed from standardized ascertainment of multiplex schizophrenia families in 39 psychiatric facilities covering over 90% of the population in Ireland and Northern Ireland. We here describe a phenotypic sample and a subset thereof, the linkage sample. Individuals were included in the phenotypic sample if adequate diagnostic information, based on personal interview and/or hospital record, was available. Only individuals with available DNA were included in the linkage sample. Inclusion of a pedigree into the phenotypic sample required at least two first, second, or third degree relatives with non-affective psychosis (NAP), one of whom had schizophrenia (S) or poor-outcome schizoaffective disorder (PO-SAD). Entry into the linkage sample required DNA samples on at least two individuals with NAP, of whom at least one had S or PO-SAD. Affection was defined by narrow, intermediate, and broad criteria. 75 refs., 6 tabs.

  10. Zenith Pass Problem of Inter-satellite Linkage Antenna Based on Program Guidance Method

    Institute of Scientific and Technical Information of China (English)

    Zhai Kun; Yang Di

    2008-01-01

    When an inter-satellite link antenna on a user satellite adopts an elevation-over-azimuth architecture, a zenith pass problem always occurs while the antenna is tracking the tracking and data relay satellite (TDRS). This paper deals with the problem by first introducing the movement laws of the inter-satellite link to predict the motion of the user satellite antenna, and then analyzing in detail the potential and actual moments of the zenith pass. A number of specific orbit altitudes for the user satellite that remove the blindness zone are obtained. Finally, on the basis of the results predicted from the movement laws of the inter-satellite link, zenith pass tracking strategies for the user satellite antenna are designed under program guidance using a trajectory preprocessor. Simulations have confirmed the reasonableness and feasibility of the strategies in dealing with the zenith pass problem.

  11. Mokken scale analysis of mental health and well-being questionnaire item responses: a non-parametric IRT method in empirical research for applied health researchers

    Directory of Open Access Journals (Sweden)

    Stochl Jan

    2012-06-01

    Background: Mokken scaling techniques are a useful tool for researchers who wish to construct unidimensional tests or use questionnaires that comprise multiple binary or polytomous items. The stochastic cumulative scaling model offered by this approach is ideally suited when the intention is to score an underlying latent trait by simple addition of the item response values. In our experience, the Mokken model appears to be less well-known than, for example, the (related) Rasch model, but is seeing increasing use in contemporary clinical research and public health. Mokken's method is a generalisation of Guttman scaling that can assist in the determination of the dimensionality of tests or scales, and enables consideration of reliability without reliance on Cronbach's alpha. This paper provides a practical guide to the application and interpretation of this non-parametric item response theory method in empirical research with health and well-being questionnaires. Methods: Scalability of data from 1) a cross-sectional health survey (the Scottish Health Education Population Survey) and 2) a general population birth cohort study (the National Child Development Study) illustrate the method and modeling steps for dichotomous and polytomous items respectively. The questionnaire data analyzed comprise responses to the 12-item General Health Questionnaire, under the binary recoding recommended for screening applications, and the ordinal/polytomous responses to the Warwick-Edinburgh Mental Well-being Scale. Results and conclusions: After an initial analysis example in which we select items by phrasing (six positively versus six negatively worded items) we show that all items from the 12-item General Health Questionnaire (GHQ-12), when binary scored, were scalable according to the double monotonicity model, in two short scales comprising six items each (Bech's "well-being" and "distress" clinical scales). An illustration of ordinal item analysis

  12. Non-parametric approach to the study of phenotypic stability.

    Science.gov (United States)

    Ferreira, D F; Fernandes, S B; Bruzi, A T; Ramalho, M A P

    2016-02-19

    The aim of this study was to undertake the theoretical derivation of non-parametric methods, which use linear regressions based on rank order, for stability analyses. These methods are extensions of different parametric methods used for stability analyses, and the results were compared with a standard non-parametric method. Intensive computational methods (e.g., bootstrap and permutation) were applied, and data from the plant-breeding program of the Biology Department of UFLA (Minas Gerais, Brazil) were used to illustrate and compare the tests. The non-parametric stability methods were effective for the evaluation of phenotypic stability. In the presence of variance heterogeneity, the non-parametric methods exhibited greater power of discrimination when determining the phenotypic stability of genotypes.
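
    One simple rank-based stability index of the kind this abstract describes can be sketched as follows: rank the genotypes within each environment, then regress each genotype's ranks on the environmental index (the environment mean yield). A slope near zero together with a small rank variance suggests a genotype whose relative performance is stable across environments. This is an illustrative sketch under assumed conventions, not the authors' derivation; the function name is ours.

    ```python
    import numpy as np

    def rank_stability(yields):
        """yields: (genotypes x environments) phenotype matrix.
        Returns, per genotype, the least-squares slope of its
        within-environment rank regressed on the environment mean yield,
        plus the variance of its ranks across environments."""
        y = np.asarray(yields, dtype=float)
        # Rank genotypes within each environment (1 = lowest yield).
        ranks = y.argsort(axis=0).argsort(axis=0) + 1.0
        env_mean = y.mean(axis=0)            # environmental index
        x = env_mean - env_mean.mean()       # centred regressor
        slopes = (ranks * x).sum(axis=1) / (x ** 2).sum()
        return slopes, ranks.var(axis=1)
    ```

    A genotype that is always best (or always worst) regardless of environment gets a zero slope and zero rank variance.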

  13. Mokken scale analysis of mental health and well-being questionnaire item responses: a non-parametric IRT method in empirical research for applied health researchers.

    Science.gov (United States)

    Stochl, Jan; Jones, Peter B; Croudace, Tim J

    2012-06-11

    Mokken scaling techniques are a useful tool for researchers who wish to construct unidimensional tests or use questionnaires that comprise multiple binary or polytomous items. The stochastic cumulative scaling model offered by this approach is ideally suited when the intention is to score an underlying latent trait by simple addition of the item response values. In our experience, the Mokken model appears to be less well-known than for example the (related) Rasch model, but is seeing increasing use in contemporary clinical research and public health. Mokken's method is a generalisation of Guttman scaling that can assist in the determination of the dimensionality of tests or scales, and enables consideration of reliability, without reliance on Cronbach's alpha. This paper provides a practical guide to the application and interpretation of this non-parametric item response theory method in empirical research with health and well-being questionnaires. Scalability of data from 1) a cross-sectional health survey (the Scottish Health Education Population Survey) and 2) a general population birth cohort study (the National Child Development Study) illustrate the method and modeling steps for dichotomous and polytomous items respectively. The questionnaire data analyzed comprise responses to the 12 item General Health Questionnaire, under the binary recoding recommended for screening applications, and the ordinal/polytomous responses to the Warwick-Edinburgh Mental Well-being Scale. After an initial analysis example in which we select items by phrasing (six positive versus six negatively worded items) we show that all items from the 12-item General Health Questionnaire (GHQ-12)--when binary scored--were scalable according to the double monotonicity model, in two short scales comprising six items each (Bech's "well-being" and "distress" clinical scales). An illustration of ordinal item analysis confirmed that all 14 positively worded items of the Warwick-Edinburgh Mental
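
    For dichotomous items, Mokken scalability is conventionally summarised by Loevinger's H coefficient: one minus the ratio of observed to expected Guttman errors, where the expectation assumes independent items. The following is a minimal sketch of the overall H for a binary response matrix, not the `mokken` package implementation; the function name is ours.

    ```python
    import numpy as np

    def loevinger_H(X):
        """X: (respondents x items) binary matrix. For each item pair,
        a Guttman error is endorsing the harder (less popular) item while
        failing the easier one. H = 1 - observed/expected errors, with
        the expectation computed under item independence."""
        X = np.asarray(X, dtype=float)
        n, k = X.shape
        p = X.mean(axis=0)                       # item popularities
        F = E = 0.0
        for i in range(k):
            for j in range(i + 1, k):
                easy, hard = (i, j) if p[i] >= p[j] else (j, i)
                F += np.sum((X[:, hard] == 1) & (X[:, easy] == 0))
                E += n * p[hard] * (1 - p[easy])
        return 1.0 - F / E
    ```

    A perfect Guttman pattern yields H = 1; Mokken's usual rule of thumb treats H ≥ 0.3 as minimally scalable.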

  14. Synthesis method for linkages with center of mass at an invariant link point - Pantograph-based mechanisms

    NARCIS (Netherlands)

    Wijk, van der V.; Herder, J.L.

    2012-01-01

    This paper deals with the synthesis of the motion of the center of mass (CoM) of linkages as being a stationary or invariant point at one of its links. This is of importance for the design of inherently shaking force balanced mechanisms, static balancing, and other branches of mechanical synthesis.

  15. Asset Market Linkages in Crisis Periods

    NARCIS (Netherlands)

    P. Hartmann; S. Straetmans; C.G. de Vries (Casper)

    2001-01-01

    textabstractWe characterize asset return linkages during periods of stress by an extremal dependence measure. Contrary to correlation analysis, this non-parametric measure is not predisposed towards the normal distribution and can account for non-linear relationships. Our estimates for the G-5 count

  17. A non-parametric method for automatic determination of P-wave and S-wave arrival times: application to local micro earthquakes

    Science.gov (United States)

    Rawles, Christopher; Thurber, Clifford

    2015-08-01

    We present a simple, fast, and robust method for automatic detection of P- and S-wave arrivals using a nearest neighbours-based approach. The nearest neighbour algorithm is one of the most popular time-series classification methods in the data mining community and has been applied to time-series problems in many different domains. Specifically, our method is based on the non-parametric time-series classification method developed by Nikolov. Instead of building a model by estimating parameters from the data, the method uses the data itself to define the model. Potential phase arrivals are identified based on their similarity to a set of reference data consisting of positive and negative sets, where the positive set contains examples of analyst identified P- or S-wave onsets and the negative set contains examples that do not contain P waves or S waves. Similarity is defined as the square of the Euclidean distance between vectors representing the scaled absolute values of the amplitudes of the observed signal and a given reference example in time windows of the same length. For both P waves and S waves, a single pass is done through the bandpassed data, producing a score function defined as the ratio of the sum of similarity to positive examples over the sum of similarity to negative examples for each window. A phase arrival is chosen as the centre position of the window that maximizes the score function. The method is tested on two local earthquake data sets, consisting of 98 known events from the Parkfield region in central California and 32 known events from the Alpine Fault region on the South Island of New Zealand. For P-wave picks, using a reference set containing two picks from the Parkfield data set, 98 per cent of Parkfield and 94 per cent of Alpine Fault picks are determined within 0.1 s of the analyst pick. For S-wave picks, 94 per cent and 91 per cent of picks are determined within 0.2 s of the analyst picks for the Parkfield and Alpine Fault data set
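
    The windowed scoring described above can be sketched as follows. The paper defines the squared Euclidean distance between scaled absolute amplitudes; this sketch turns that distance into a similarity via an assumed inverse-distance form, and picks the window centre that maximises the positive-to-negative similarity ratio. Function names and the similarity transform are ours, not the authors'.

    ```python
    import numpy as np

    def scaled_abs(window):
        """Scale absolute amplitudes of a window to a 0-1 range."""
        w = np.abs(np.asarray(window, dtype=float))
        m = w.max()
        return w / m if m > 0 else w

    def pick_phase(signal, pos_refs, neg_refs, wlen):
        """Slide a window over `signal`; score each position by the sum
        of similarities to positive (analyst-picked) reference windows
        divided by the sum of similarities to negative references.
        Returns the centre index of the best-scoring window."""
        eps = 1e-6
        pos = [scaled_abs(r) for r in pos_refs]
        neg = [scaled_abs(r) for r in neg_refs]
        best, best_score = 0, -np.inf
        for start in range(len(signal) - wlen + 1):
            w = scaled_abs(signal[start:start + wlen])
            s_pos = sum(1.0 / (eps + np.sum((w - r) ** 2)) for r in pos)
            s_neg = sum(1.0 / (eps + np.sum((w - r) ** 2)) for r in neg)
            score = s_pos / s_neg
            if score > best_score:
                best, best_score = start + wlen // 2, score
        return best
    ```

    In practice the reference sets would hold many bandpassed analyst-picked onsets (positive) and noise windows (negative), per the abstract.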

  18. A method for detecting recent changes in contemporary effective population size from linkage disequilibrium at linked and unlinked loci.

    Science.gov (United States)

    Hollenbeck, C M; Portnoy, D S; Gold, J R

    2016-10-01

    Estimation of contemporary effective population size (Ne) from linkage disequilibrium (LD) between unlinked pairs of genetic markers has become an important tool in the field of population and conservation genetics. If data pertaining to physical linkage or genomic position are available for genetic markers, estimates of recombination rate between loci can be combined with LD data to estimate contemporary Ne at various times in the past. We extend the well-known, LD-based method of estimating contemporary Ne to include linkage information and show via simulation that even relatively small, recent changes in Ne can be detected reliably with a modest number of single-nucleotide polymorphism (SNP) loci. We explore several issues important to interpretation of the results and quantify the bias in estimates of contemporary Ne associated with the assumption that all loci in a large SNP data set are unlinked. The approach is applied to an empirical data set of SNP genotypes from a population of a marine fish where a recent, temporary decline in Ne is known to have occurred.
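
    The moment-based estimators underlying this family of methods can be sketched with two textbook forms: for unlinked loci, E[r²] ≈ 1/(3·Ne) + 1/S (Hill/Waples-style, with S the sample size), and for loci with recombination fraction c, Sved's E[r²] ≈ 1/(1 + 4·Ne·c). These are generic approximations, not necessarily the exact corrections the authors use; the function names are ours.

    ```python
    def ne_unlinked(mean_r2, S):
        """Moment estimator for unlinked loci: E[r^2] ~ 1/(3*Ne) + 1/S.
        The 1/S term removes the sampling contribution to observed LD."""
        adj = mean_r2 - 1.0 / S
        return 1.0 / (3.0 * adj)

    def ne_linked(mean_r2, c, S):
        """Sved (1971)-style estimator for linked loci with recombination
        fraction c: E[r^2] ~ 1/(1 + 4*Ne*c), plus a 1/S sampling term."""
        adj = mean_r2 - 1.0 / S
        return (1.0 / adj - 1.0) / (4.0 * c)
    ```

    Because tightly linked pairs (small c) reflect LD generated further back in time, estimating Ne within bins of c is what lets such methods trace recent changes in Ne.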

  19. Nonparametric Bayes analysis of social science data

    Science.gov (United States)

    Kunihama, Tsuyoshi

    Social science data often contain complex characteristics that standard statistical methods fail to capture. Social surveys assign many questions to respondents, which often consist of mixed-scale variables. Each of the variables can follow a complex distribution outside parametric families, and associations among variables may have more complicated structures than standard linear dependence. Therefore, it is not straightforward to develop a statistical model which can approximate structures well in social science data. In addition, many social surveys have collected data over time, and therefore we need to incorporate dynamic dependence into the models. Also, it is standard to observe a massive number of missing values in social science data. To address these challenging problems, this thesis develops flexible nonparametric Bayesian methods for the analysis of social science data. Chapter 1 briefly explains the background and motivation of the projects in the following chapters. Chapter 2 develops a nonparametric Bayesian model of temporal dependence in large sparse contingency tables, relying on a probabilistic factorization of the joint pmf. Chapter 3 proposes nonparametric Bayes inference on conditional independence, with conditional mutual information used as a measure of the strength of conditional dependence. Chapter 4 proposes a novel Bayesian density estimation method for social surveys with complex designs where there is a gap between sample and population; we correct for the bias by adjusting mixture weights in Bayesian mixture models. Chapter 5 develops a nonparametric model for mixed-scale longitudinal surveys, in which various types of variables can be induced through latent continuous variables and dynamic latent factors lead to flexibly time-varying associations among variables.

  20. Nonparametric Bayesian Modeling for Automated Database Schema Matching

    Energy Technology Data Exchange (ETDEWEB)

    Ferragut, Erik M [ORNL; Laska, Jason A [ORNL

    2015-01-01

    The problem of merging databases arises in many government and commercial applications. Schema matching, a common first step, identifies equivalent fields between databases. We introduce a schema matching framework that builds nonparametric Bayesian models for each field and compares them by computing the probability that a single model could have generated both fields. Our experiments show that our method is more accurate and faster than the existing instance-based matching algorithms in part because of the use of nonparametric Bayesian models.
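
    The core comparison, the probability that a single model generated both fields versus one model per field, can be illustrated with a deliberately simple stand-in for the paper's nonparametric models: a Dirichlet-multinomial marginal likelihood over the union of observed values. This is an assumed simplification for illustration only; the function names are ours.

    ```python
    import math
    from collections import Counter

    def match_log_odds(field_a, field_b, alpha=1.0):
        """Log Bayes factor for 'one model generated both fields' versus
        'independent models', using a symmetric Dirichlet(alpha) prior
        over the union of observed values. Positive => likely a match."""
        vocab = sorted(set(field_a) | set(field_b))
        ca, cb = Counter(field_a), Counter(field_b)

        def lml(counter):
            # Dirichlet-multinomial log marginal likelihood of the counts.
            k, n = len(vocab), sum(counter.values())
            out = math.lgamma(k * alpha) - math.lgamma(k * alpha + n)
            for v in vocab:
                out += math.lgamma(alpha + counter[v]) - math.lgamma(alpha)
            return out

        return lml(ca + cb) - (lml(ca) + lml(cb))
    ```

    Fields with similar value distributions score positively; fields over disjoint values score strongly negative, which is the decision signal a schema matcher thresholds or ranks on.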

  1. Measuring Dynamic Extreme ES Risk for Stock Markets Based on Nonparametric and L-Moment Method

    Institute of Scientific and Technical Information of China (English)

    林宇; 谭斌; 黄登仕; 魏宇

    2011-01-01

    This paper applies a bandwidth nonparametric method and AR-GARCH models to model the conditional mean and conditional volatility and estimate the standardized residuals of conditional returns; L-moments and MLE (maximum likelihood estimation) are then used to estimate the GPD parameters of the residual tail, and dynamic VaR (Value at Risk) and ES (Expected Shortfall) are estimated. Finally, Back-Testing is applied to test the accuracy of the VaR and ES measurement models. Our results show that the nonparametric estimation is superior to GARCH-family models in the accuracy of ES measurement, and that the risk measurement model based on nonparametric estimation and the L-moment method can effectively measure the dynamic risks of the Shanghai and Shenzhen stock markets.
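
    Once GPD parameters (shape ξ, scale σ) have been fitted to the exceedances over a threshold u, by L-moments as here or by any other method, the standard peaks-over-threshold formulas give VaR and ES. The sketch below uses those textbook formulas (valid for ξ < 1), not the paper's full pipeline; the function name is ours.

    ```python
    import numpy as np

    def gpd_var_es(losses, u, xi, sigma, q=0.99):
        """Peaks-over-threshold tail risk: given GPD shape xi and scale
        sigma fitted to excesses over threshold u, return (VaR_q, ES_q).
        VaR_q = u + (sigma/xi) * (((1-q) * n / n_u)^(-xi) - 1)
        ES_q  = VaR_q/(1-xi) + (sigma - xi*u)/(1-xi)   (requires xi < 1)"""
        losses = np.asarray(losses, dtype=float)
        n = losses.size
        n_u = np.sum(losses > u)                    # number of exceedances
        var_q = u + (sigma / xi) * (((1 - q) * n / n_u) ** (-xi) - 1.0)
        es_q = var_q / (1 - xi) + (sigma - xi * u) / (1 - xi)
        return var_q, es_q
    ```

    ES always exceeds VaR at the same level, which is why back-testing both, as the paper does, probes the tail model more sharply than VaR alone.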

  2. NONPARAMETRIC ESTIMATION OF CHARACTERISTICS OF PROBABILITY DISTRIBUTIONS

    Directory of Open Access Journals (Sweden)

    Orlov A. I.

    2015-10-01

    The article is devoted to nonparametric point and interval estimation of the characteristics of a probability distribution (the expectation, median, variance, standard deviation, and coefficient of variation) from sample results. Sample values are regarded as realizations of independent and identically distributed random variables with an arbitrary distribution function possessing the required number of moments. The nonparametric analysis procedures are compared with parametric procedures based on the assumption that the sample values follow a normal distribution. Point estimators are constructed in the obvious way, using sample analogs of the theoretical characteristics. Interval estimators are based on the asymptotic normality of sample moments and functions of them. The nonparametric asymptotic confidence intervals are obtained through a technology for deriving asymptotic relations in applied statistics. In the first step, this technology applies the multidimensional central limit theorem to sums of vectors whose coordinates are powers of the initial random variables. The second step transforms the limiting multivariate normal vector into the vector of interest to the researcher, using linearization and discarding infinitesimal quantities. The third step is a rigorous justification of the results at the standard level of mathematical-statistical reasoning, which usually requires necessary and sufficient conditions for the inheritance of convergence. The article contains 10 numerical examples; the initial data are the operating times of 50 cutting tools up to the limit state. Methods developed under the assumption of a normal distribution can lead to noticeably distorted conclusions in situations where the normality hypothesis fails. The practical recommendation is: for the analysis of real data we should use nonparametric confidence limits
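
    The moment-based interval construction described here can be illustrated for the variance: by asymptotic normality of sample moments, the sampling variance of s² is approximately (m4 - m2²)/n, where m2 and m4 are central sample moments. This yields a distribution-free confidence interval, unlike the classical chi-square interval, which assumes normality. A generic textbook sketch, not the article's full derivation; the function name is ours.

    ```python
    import math

    def variance_ci(xs, z=1.96):
        """Distribution-free asymptotic CI for the variance:
        s^2 +/- z * sqrt((m4 - m2^2) / n), using only the existence of
        the fourth moment, with no normality assumption."""
        n = len(xs)
        mean = sum(xs) / n
        m2 = sum((x - mean) ** 2 for x in xs) / n   # sample variance
        m4 = sum((x - mean) ** 4 for x in xs) / n   # fourth central moment
        se = math.sqrt((m4 - m2 * m2) / n)
        return m2 - z * se, m2 + z * se
    ```

    For normal data m4 = 3·m2², so the two approaches roughly agree; for heavy-tailed data the nonparametric interval is wider, which is exactly the distortion the article warns about.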

  3. Glaucoma Monitoring in a Clinical Setting Glaucoma Progression Analysis vs Nonparametric Progression Analysis in the Groningen Longitudinal Glaucoma Study

    NARCIS (Netherlands)

    Wesselink, Christiaan; Heeg, Govert P.; Jansonius, Nomdo M.

    Objective: To compare prospectively 2 perimetric progression detection algorithms for glaucoma, the Early Manifest Glaucoma Trial algorithm (glaucoma progression analysis [GPA]) and a nonparametric algorithm applied to the mean deviation (MD) (nonparametric progression analysis [NPA]). Methods:

  4. Non-Parametric Estimation of Correlation Functions

    DEFF Research Database (Denmark)

    Brincker, Rune; Rytter, Anders; Krenk, Steen

    In this paper three methods of non-parametric correlation function estimation are reviewed and evaluated: the direct method, estimation by the Fast Fourier Transform, and finally estimation by the Random Decrement technique. The basic ideas of the techniques are reviewed, sources of bias are pointed out, and methods to prevent bias are presented. The techniques are evaluated by comparing their speed and accuracy on the simple case of estimating auto-correlation functions for the response of a single degree-of-freedom system loaded with white noise.
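
    The FFT route mentioned above can be sketched in a few lines: zero-pad to at least 2n-1 points so the circular convolution equals the linear one (avoiding wrap-around bias), and divide lag k by (n - k) for the unbiased normalisation. A generic sketch of the technique, not the paper's implementation; the function name is ours.

    ```python
    import numpy as np

    def autocorr_fft(x, max_lag):
        """Unbiased auto-correlation estimate via the FFT, normalised so
        that rho(0) = 1."""
        x = np.asarray(x, dtype=float)
        x = x - x.mean()
        n = x.size
        nfft = 1 << (2 * n - 1).bit_length()       # power of two >= 2n-1
        spec = np.fft.rfft(x, nfft)
        acov = np.fft.irfft(spec * np.conj(spec), nfft)[:max_lag + 1]
        acov /= (n - np.arange(max_lag + 1))       # unbiased at each lag
        return acov / acov[0]
    ```

    The O(n log n) cost is what makes this competitive with the direct O(n·max_lag) method for long response records.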

  5. The phenotypic difference discards sib-pair QTL linkage information

    Energy Technology Data Exchange (ETDEWEB)

    Wright, F.A. [Univ. of California, San Diego, CA (United States)]|[Univ. of Texas, El Paso, TX (United States)

    1997-03-01

    Kruglyak and Lander provide an important synthesis of methods for identity-by-descent (IBD) sib-pair linkage mapping, with an emphasis on the use of complete multipoint inheritance information for each sib pair. These procedures are implemented in the computer program MAPMAKER/SIBS, which performs interval mapping for dichotomous and quantitative traits. The authors present three methods for mapping quantitative trait loci (QTLs): a variant of the commonly used Haseman-Elston regression approach, a maximum-likelihood procedure involving variance components, and a rank-based nonparametric procedure. These approaches and related work use the magnitude of the difference in the sibling phenotype values for each sib pair as the observation for analysis. Linkage is detected if siblings sharing more alleles IBD have similar phenotypes (i.e., a small difference in the phenotype values), while siblings sharing fewer alleles IBD have less similar phenotypes. Such techniques have been used to detect linkage for a number of quantitative traits. However, the exclusive reliance on the phenotypic differences may be due in large part to historical inertia. A likelihood argument is presented here to show that, under certain classical assumptions, the phenotypic differences do not contain the full likelihood information for QTL mapping. Furthermore, considerable gains in power to detect linkage can be achieved with an expanded likelihood model. The development here is related to previous work, which incorporates the full set of phenotypic data using likelihood and robust quasi-likelihood methods. The purpose of this letter is not to endorse a particular approach but to spur research in alternative and perhaps more powerful linkage tests. 17 refs.
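
    The classical Haseman-Elston approach criticised here for relying only on phenotypic differences is a one-line regression: the squared sib-pair trait difference is regressed on the proportion of alleles shared IBD at a marker, and a significantly negative slope signals linkage. A minimal sketch of that classical form, not of the expanded likelihood model the letter advocates; the function name is ours.

    ```python
    def haseman_elston(sq_diffs, ibd_props):
        """Classical Haseman-Elston regression: squared sib-pair phenotype
        differences on the proportion of alleles shared IBD (0, 0.5, 1).
        Returns (slope, intercept); a negative slope suggests linkage."""
        n = len(sq_diffs)
        mx = sum(ibd_props) / n
        my = sum(sq_diffs) / n
        sxx = sum((x - mx) ** 2 for x in ibd_props)
        sxy = sum((x - mx) * (y - my) for x, y in zip(ibd_props, sq_diffs))
        slope = sxy / sxx
        return slope, my - slope * mx
    ```

    The letter's point is visible in the inputs: the squared difference discards the sib-pair sum, which carries additional likelihood information about the QTL.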

  6. Non-Parametric Inference in Astrophysics

    CERN Document Server

    Wasserman, L H; Nichol, R C; Genovese, C; Jang, W; Connolly, A J; Moore, A W; Schneider, J; Wasserman, Larry; Miller, Christopher J.; Nichol, Robert C.; Genovese, Chris; Jang, Woncheol; Connolly, Andrew J.; Moore, Andrew W.; Schneider, Jeff; group, the PICA

    2001-01-01

    We discuss non-parametric density estimation and regression for astrophysics problems. In particular, we show how to compute non-parametric confidence intervals for the location and size of peaks of a function. We illustrate these ideas with recent data on the Cosmic Microwave Background. We also briefly discuss non-parametric Bayesian inference.

  7. A novel non-parametric method for uncertainty evaluation of correlation-based molecular signatures: its application on PAM50 algorithm.

    Science.gov (United States)

    Fresno, Cristóbal; González, Germán Alexis; Merino, Gabriela Alejandra; Flesia, Ana Georgina; Podhajcer, Osvaldo Luis; Llera, Andrea Sabina; Fernández, Elmer Andrés

    2017-03-01

    The PAM50 classifier is used to assign patients to the highest-correlated breast cancer subtype, irrespective of the obtained correlation value. Nonetheless, all subtype correlations are required to build the risk of recurrence (ROR) score, currently used in therapeutic decisions. Present subtype uncertainty estimations are not accurate, are seldom considered, or require a population-based approach in this context. Here we present a novel single-subject non-parametric uncertainty estimation based on permutations of PAM50's gene labels. Simulation results (n = 5228) showed that only 61% of subjects can be reliably 'Assigned' to a PAM50 subtype, whereas 33% should be 'Not Assigned' (NA), leaving the rest with tight 'Ambiguous' correlations between subtypes. Excluding the NA subjects from the analysis improved discrimination of the survival subtype curves, yielding a higher proportion of low and high ROR values. Conversely, all NA subjects showed similar survival behaviour regardless of the original PAM50 assignment. We propose incorporating our PAM50 uncertainty estimation to support therapeutic decisions. Source code can be found in the 'pbcmc' R package at Bioconductor. cristobalfresno@gmail.com or efernandez@bdmg.com.ar. Supplementary data are available at Bioinformatics online.
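
    The permutation idea can be sketched for a single subtype centroid: shuffling the gene labels of the subject's expression profile breaks the gene-to-gene pairing with the centroid, giving a null distribution for the correlation; the p-value is the fraction of permuted correlations at least as large as the observed one. A simplified single-centroid sketch of the idea, not the `pbcmc` implementation; the function name and use of Spearman correlation are our assumptions.

    ```python
    import numpy as np

    def assignment_pvalue(profile, centroid, n_perm=1000, seed=0):
        """Single-subject gene-label permutation test for a
        correlation-based subtype assignment."""
        rng = np.random.default_rng(seed)

        def spearman(a, b):
            ra = np.argsort(np.argsort(a))
            rb = np.argsort(np.argsort(b))
            return np.corrcoef(ra, rb)[0, 1]

        obs = spearman(profile, centroid)
        perm = np.array([spearman(rng.permutation(profile), centroid)
                         for _ in range(n_perm)])
        # Add-one correction keeps the p-value away from exactly zero.
        return (1 + np.sum(perm >= obs)) / (1 + n_perm)
    ```

    Running this against every centroid and comparing the per-subtype p-values is what separates 'Assigned' from 'Not Assigned' and 'Ambiguous' calls in the paper's scheme.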

  8. Lottery spending: a non-parametric analysis.

    Science.gov (United States)

    Garibaldi, Skip; Frisoli, Kayla; Ke, Li; Lim, Melody

    2015-01-01

    We analyze the spending of individuals in the United States on lottery tickets in an average month, as reported in surveys. We view these surveys as sampling from an unknown distribution, and we use non-parametric methods to compare properties of this distribution for various demographic groups, as well as claims that some properties of this distribution are constant across surveys. We find that the observed higher spending by Hispanic lottery players can be attributed to differences in education levels, and we dispute previous claims that the top 10% of lottery players consistently account for 50% of lottery sales.
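
    The concentration claim examined here ("the top 10% of lottery players account for 50% of sales") reduces to a simple statistic on the empirical distribution: the share of total spending attributable to the top fraction of spenders. A minimal sketch of that statistic, not the authors' full survey-weighted analysis; the function name is ours.

    ```python
    def top_share(spending, frac=0.10):
        """Share of total spending attributable to the top `frac` of
        spenders, computed directly from the empirical distribution."""
        s = sorted(spending, reverse=True)
        k = max(1, int(round(frac * len(s))))
        total = sum(s)
        return sum(s[:k]) / total if total else 0.0
    ```

    Comparing this statistic across surveys, and across demographic groups, without assuming any parametric spending distribution is the kind of non-parametric check the paper performs.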

  9. Lottery spending: a non-parametric analysis.

    Directory of Open Access Journals (Sweden)

    Skip Garibaldi

    Full Text Available We analyze the spending of individuals in the United States on lottery tickets in an average month, as reported in surveys. We view these surveys as sampling from an unknown distribution, and we use non-parametric methods to compare properties of this distribution for various demographic groups, as well as claims that some properties of this distribution are constant across surveys. We find that the observed higher spending by Hispanic lottery players can be attributed to differences in education levels, and we dispute previous claims that the top 10% of lottery players consistently account for 50% of lottery sales.

  10. Nonparametric inferences for kurtosis and conditional kurtosis

    Institute of Scientific and Technical Information of China (English)

    XIE Xiao-heng; HE You-hua

    2009-01-01

    Under the assumption of a strictly stationary process, this paper proposes a nonparametric model to test the kurtosis and conditional kurtosis of risk time series. We apply this method to the daily returns of the S&P 500 index and the Shanghai Composite Index, and simulate GARCH data to verify the efficiency of the presented model. Our results indicate that the risk series distribution is heavy-tailed, but that historical information can make its future distribution light-tailed. However, the tails of the far-future distribution are little affected by the historical data.
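
    The quantity being tested is the sample excess kurtosis, m4/m2² - 3, which is zero for a normal distribution and positive for the heavy tails reported for the index returns. A minimal moment-based sketch (conditional kurtosis would apply the same statistic to standardized residuals within a conditioning window); the function name is ours.

    ```python
    def excess_kurtosis(xs):
        """Sample excess kurtosis m4/m2^2 - 3. Positive values indicate
        heavier-than-normal tails; negative values, lighter tails."""
        n = len(xs)
        mean = sum(xs) / n
        m2 = sum((x - mean) ** 2 for x in xs) / n   # variance
        m4 = sum((x - mean) ** 4 for x in xs) / n   # fourth central moment
        return m4 / (m2 * m2) - 3.0
    ```

    For example, a two-point +/-1 series attains the theoretical minimum of -2, while fat-tailed daily returns typically score well above zero.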

  11. Genomewide linkage scan for myopia susceptibility loci among Ashkenazi Jewish families shows evidence of linkage on chromosome 22q12.

    Science.gov (United States)

    Stambolian, Dwight; Ibay, Grace; Reider, Lauren; Dana, Debra; Moy, Chris; Schlifka, Melissa; Holmes, Taura; Ciner, Elise; Bailey-Wilson, Joan E

    2004-09-01

    Mild/moderate (common) myopia is a very common disorder, with both genetic and environmental influences. The environmental factors are related to near work and can be measured. There are no known genetic loci for common myopia. Our goal is to find evidence for a myopia susceptibility gene causing common myopia. Cycloplegic and manifest refraction were performed on 44 large American families of Ashkenazi Jewish descent, each with at least two affected siblings. Individuals with -1.00 diopter or lower in each meridian of both eyes were classified as myopic. Microsatellite genotyping with 387 markers was performed by the Center for Inherited Disease Research. Linkage analyses were conducted with parametric and nonparametric methods by use of 12 different penetrance models. The family-based association test was used for an association scan. A maximum multipoint parametric heterogeneity LOD (HLOD) score of 3.54 was observed at marker D22S685, and nonparametric linkage analyses gave consistent results, with a P value of .0002 at this marker. The parametric multipoint HLOD scores exceeded 3.0 for a 4-cM interval, and significant evidence of genetic heterogeneity was observed. This genomewide scan is the first step toward identifying a gene on chromosome 22 with an influence on common myopia. At present, we are following up our linkage results on chromosome 22 with a dense map of >1,500 single-nucleotide-polymorphism markers for fine mapping and association analyses. Identification of a susceptibility locus in this region may eventually lead to a better understanding of gene-environment interactions in the causation of this complex trait.

  12. A non-parametric CUSUM intrusion detection method based on an industrial control model

    Institute of Scientific and Technical Information of China (English)

    张云贵; 赵华; 王丽娜

    2012-01-01

    To deal with the increasingly serious information security problems of industrial control systems (ICS), this paper presents a non-parametric cumulative sum (CUSUM) intrusion detection method for industrial control networks. Using the output-input dependence characteristic of the ICS, a mathematical model of the ICS is established to predict the output of the system. Once the sensors of the control system are under attack, the actual output will change. At every moment, the difference between the predicted output of the industrial control model and the signal measured by the sensors is calculated, forming a time-based statistical sequence; the non-parametric CUSUM algorithm then performs online detection of intrusion attacks and raises alarms. Simulated detection experiments show that the proposed method has good real-time performance and a low false alarm rate. By choosing appropriate parameters r and β for the non-parametric CUSUM algorithm, the intrusion detection method can detect attacks before substantial damage is done to the control system, and it is also helpful for monitoring misoperation in the ICS.
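
    The detection statistic described above can be sketched as a one-sided non-parametric CUSUM on the model-vs-sensor residuals: S_t = max(0, S_{t-1} + |z_t| - beta), with an alarm when S_t exceeds a threshold tau. The parameter names beta/tau and the reset-on-alarm behaviour are our assumptions; the paper tunes its own parameter pair.

    ```python
    def cusum_detect(residuals, beta, tau):
        """One-sided non-parametric CUSUM over prediction residuals.
        beta (the drift) absorbs normal model/sensor noise; tau (the
        threshold) sets the detection-delay / false-alarm trade-off.
        Returns the indices at which alarms fire."""
        s, alarms = 0.0, []
        for t, z in enumerate(residuals):
            s = max(0.0, s + abs(z) - beta)
            if s > tau:
                alarms.append(t)
                s = 0.0            # reset the statistic after an alarm
        return alarms
    ```

    Because small residuals are continually drained by beta, isolated noise never accumulates, while a sustained attack-induced deviation crosses tau within a few samples.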

  13. Reclink: an application for database linkage implementing the probabilistic record linkage method

    Directory of Open Access Journals (Sweden)

    Kenneth R. de Camargo Jr.

    2000-06-01

    This paper presents a system for database linkage based on the probabilistic record linkage technique, developed in the C++ language with the Borland C++ Builder version 3.0 programming environment. The system was tested in the linkage of data sources of different sizes, evaluated both in terms of processing time and sensitivity for identifying true record pairs. Significantly less time was spent in record processing when the program was used, as compared to manual processing, especially in situations where larger databases were used. Manual and automatic processes had equivalent sensitivities in situations where we used databases with fewer records. However, as the number of records grew we noticed a clear reduction in the sensitivity of the manual process, but not in the automatic one. Although in its initial stage of
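
    The probabilistic record linkage technique such systems implement is, at its core, Fellegi-Sunter scoring: each compared field contributes log2(m/u) on agreement and log2((1-m)/(1-u)) on disagreement, where m = P(agree | true match) and u = P(agree | non-match), and the summed weight is thresholded into link / possible link / non-link. A textbook sketch of that scoring rule, not Reclink's implementation; the function name is ours.

    ```python
    import math

    def match_weight(agreements, m_probs, u_probs):
        """Fellegi-Sunter composite weight for one candidate record pair.
        agreements: per-field booleans; m_probs/u_probs: per-field
        P(agree | match) and P(agree | non-match)."""
        w = 0.0
        for agree, m, u in zip(agreements, m_probs, u_probs):
            w += math.log2(m / u) if agree else math.log2((1 - m) / (1 - u))
        return w
    ```

    Pairs above an upper threshold are declared links automatically; the band between the two thresholds is sent for the kind of clerical review whose declining sensitivity on large databases the paper documents.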

  14. Methods for attaching polymerizable ceragenins to water treatment membranes using amine and amide linkages

    Science.gov (United States)

    Hibbs, Michael; Altman, Susan J.; Jones, Howland D.T.; Savage, Paul B.

    2013-10-15

    This invention relates to methods for chemically grafting and attaching ceragenin molecules to polymer substrates; methods for synthesizing ceragenin-containing copolymers; methods for making ceragenin-modified water treatment membranes and spacers; and methods of treating contaminated water using ceragenin-modified treatment membranes and spacers. Ceragenins are synthetically produced antimicrobial peptide mimics that display broad-spectrum bactericidal activity. Alkene-functionalized ceragenins (e.g., acrylamide-functionalized ceragenins) can be attached to polyamide reverse osmosis membranes using amine-linking, amide-linking, UV-grafting, or silane-coating methods. In addition, silane-functionalized ceragenins can be directly attached to polymer surfaces that have free hydroxyls.

  15. Methods for attaching polymerizable ceragenins to water treatment membranes using amine and amide linkages

    Energy Technology Data Exchange (ETDEWEB)

    Hibbs, Michael; Altman, Susan J.; Jones, Howland D.T.; Savage, Paul B.

    2013-10-15

    This invention relates to methods for chemically grafting and attaching ceragenin molecules to polymer substrates; methods for synthesizing ceragenin-containing copolymers; methods for making ceragenin-modified water treatment membranes and spacers; and methods of treating contaminated water using ceragenin-modified treatment membranes and spacers. Ceragenins are synthetically produced antimicrobial peptide mimics that display broad-spectrum bactericidal activity. Alkene-functionalized ceragenins (e.g., acrylamide-functionalized ceragenins) can be attached to polyamide reverse osmosis membranes using amine-linking, amide-linking, UV-grafting, or silane-coating methods. In addition, silane-functionalized ceragenins can be directly attached to polymer surfaces that have free hydroxyls.

  16. Methods for attaching polymerizable ceragenins to water treatment membranes using silane linkages

    Energy Technology Data Exchange (ETDEWEB)

    Hibbs, Michael; Altman, Susan J.; Jones, Howland D. T.; Savage, Paul B.

    2013-09-10

    This invention relates to methods for chemically grafting and attaching ceragenin molecules to polymer substrates; methods for synthesizing ceragenin-containing copolymers; methods for making ceragenin-modified water treatment membranes and spacers; and methods of treating contaminated water using ceragenin-modified treatment membranes and spacers. Ceragenins are synthetically produced antimicrobial peptide mimics that display broad-spectrum bactericidal activity. Alkene-functionalized ceragenins (e.g., acrylamide-functionalized ceragenins) can be attached to polyamide reverse osmosis membranes using amine-linking, amide-linking, UV-grafting, or silane-coating methods. In addition, silane-functionalized ceragenins can be directly attached to polymer surfaces that have free hydroxyls.

  18. Local Component Analysis for Nonparametric Bayes Classifier

    CERN Document Server

    Khademi, Mahmoud; Safayani, Mehran

    2010-01-01

    The decision boundaries of the Bayes classifier are optimal in that they lead to the maximum probability of correct decision. This means that if we knew the prior probabilities and the class-conditional densities, we could design a classifier with the lowest probability of error. However, in classification based on nonparametric density estimation methods such as Parzen windows, the decision regions depend on the choice of parameters such as the window width. Moreover, these methods suffer from the curse of dimensionality of the feature space and the small-sample-size problem, which severely restrict their practical applications. In this paper, we address these problems by introducing a novel dimension reduction and classification method based on local component analysis. In this method, by adopting an iterative cross-validation algorithm, we simultaneously estimate the optimal transformation matrices (for dimension reduction) and the classifier parameters based on local information. The proposed method can classify the data with co...
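
The Parzen-window classification the abstract refers to can be sketched as follows. The Gaussian kernel, window width h, and the toy two-class data are illustrative assumptions, not the paper's method:

```python
# A sketch of a Parzen-window (kernel density) Bayes classifier.
# The Gaussian kernel, window width h, and the toy data are assumptions.
import numpy as np

def parzen_density(x, samples, h):
    """Gaussian Parzen-window density estimate at the point x."""
    d = samples.shape[1]
    diffs = (samples - x) / h
    kernel = np.exp(-0.5 * np.sum(diffs**2, axis=1))
    return kernel.mean() / ((2 * np.pi) ** (d / 2) * h**d)

def classify(x, class_samples, h):
    """Assign x to the class maximizing prior * estimated density."""
    n_total = sum(len(s) for s in class_samples.values())
    scores = {c: (len(s) / n_total) * parzen_density(x, s, h)
              for c, s in class_samples.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
data = {0: rng.normal(0.0, 1.0, size=(100, 2)),
        1: rng.normal(3.0, 1.0, size=(100, 2))}

print(classify(np.array([0.0, 0.0]), data, h=0.5))
print(classify(np.array([3.0, 3.0]), data, h=0.5))
```

The decision regions shift as h varies, which is exactly the window-width sensitivity the abstract points out.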

  19. Interactive Record Linkage

    Directory of Open Access Journals (Sweden)

    2000-12-01

    Full Text Available In order to carry out demographic analyses at individual and group levels, a manual method of linking individual event records from parish registers was developed in the late 1950s. To save time and to work with larger areas than small parishes, systems for automatic record linkage were developed a couple of decades later. A third method, interactive record linkage, named Demolink, has been developed more recently. Its main new feature is the possibility of linking records from more than two historical sources simultaneously, which improves the process of sorting out which events belong to which individual life courses. This paper discusses how Demolink was used for record linkage in a large Norwegian parish for the period 1801-1878.

  20. A Novel Method to Magnetic Flux Linkage Optimization of Direct-Driven Surface-Mounted Permanent Magnet Synchronous Generator Based on Nonlinear Dynamic Analysis

    Directory of Open Access Journals (Sweden)

    Qian Xie

    2016-07-01

    Full Text Available This paper addresses magnetic flux linkage optimization of a direct-driven surface-mounted permanent magnet synchronous generator (D-SPMSG). A new compact representation of the D-SPMSG nonlinear dynamic differential equations is established to reduce the number of system parameters. Furthermore, the nonlinear dynamic characteristics of the new D-SPMSG equations as the magnetic flux linkage varies are considered, illustrated by Lyapunov exponent spectra, phase orbits, Poincaré maps, time waveforms and bifurcation diagrams, and the stable region of the D-SPMSG magnetic flux linkage is obtained concurrently. Based on the above modeling and analyses, a novel method of magnetic flux linkage optimization is presented. In addition, a 2 MW D-SPMSG 2D/3D model is designed with the ANSYS software according to the practical design requirements. Finally, five cases of D-SPMSG models with different magnetic flux linkages are simulated using the finite element analysis (FEA) method. The nephograms of magnetic flux density are in agreement with the theoretical analysis, which confirms the correctness and effectiveness of the proposed approach.

  1. A Formalization of Linkage Analysis

    DEFF Research Database (Denmark)

    Ingolfsdottir, Anna; Christensen, A.I.; Hansen, Jens A.

    In this report a formalization of genetic linkage analysis is introduced. Linkage analysis is a computationally hard biomathematical method whose purpose is to locate genes on the human genome. It is rooted in the new area of bioinformatics and no formalization of the method has previously been ...

  3. A generalization of Kempe's linkages

    Institute of Scientific and Technical Information of China (English)

    MAO De-can; LUO Yao-zhi; YOU Zhong

    2007-01-01

    A new, general type of planar linkage is presented, which extends the classical linkages developed by Kempe and consists of two single-looped kinematic chains interconnected by revolute hinges. Together with a locking device, these new linkages have only one degree of freedom (DOF), which makes them ideal for serving as deployable structures for different purposes. Here, we start with a fresh matrix method of analysis for double-loop planar linkages, using 2D transformation matrices and a new symbolic notation. A further inspection of one case of Kempe's linkages is provided. Based on this inspection, by means of some novel algebraic and geometric techniques, one particularly fascinating solution was found. Physical models were built to show that the derivation in this paper is valid and the new mechanisms are correct.

  4. Examination of Quantitative Methods Used in Early Intervention Research: Linkages with Recommended Practices.

    Science.gov (United States)

    Snyder, Patricia; Thompson, Bruce; McLean, Mary E.; Smith, Barbara J.

    2002-01-01

    Findings are reported related to the research methods and statistical techniques used in 450 group quantitative studies examined by the Council for Exceptional Children's Division for Early Childhood Recommended Practices Project. Studies were analyzed across seven dimensions including sampling procedures, variable selection, variable definition,…

  5. A contingency table approach to nonparametric testing

    CERN Document Server

    Rayner, JCW

    2000-01-01

    Most texts on nonparametric techniques concentrate on location and linear-linear (correlation) tests, with less emphasis on dispersion effects and linear-quadratic tests. Tests for higher moment effects are virtually ignored. Using a fresh approach, A Contingency Table Approach to Nonparametric Testing unifies and extends the popular, standard tests by linking them to tests based on models for data that can be presented in contingency tables. This approach unifies popular nonparametric statistical inference and makes the traditional, most commonly performed nonparametric analyses much more comp...

  6. Candidate gene linkage approach to identify DNA variants that predispose to preterm birth

    DEFF Research Database (Denmark)

    Bream, Elise N A; Leppellere, Cara R; Cooper, Margaret E

    2013-01-01

    Background: The aim of this study was to identify genetic variants contributing to preterm birth (PTB) using a linkage candidate gene approach. Methods: We studied 99 single-nucleotide polymorphisms (SNPs) for 33 genes in 257 families with PTBs segregating. Nonparametric and parametric analyses were ... used. Premature infants and mothers of premature infants were defined as affected cases in independent analyses. Results: Analyses with the infant as the case identified two genes with evidence of linkage: CRHR1 (P = 0.0012) and CYP2E1 (P = 0.0011). Analyses with the mother as the case identified four ... through the infant and/or the mother in the etiology of PTB. ...

  7. Nonparametric dark energy reconstruction from supernova data.

    Science.gov (United States)

    Holsclaw, Tracy; Alam, Ujjaini; Sansó, Bruno; Lee, Herbert; Heitmann, Katrin; Habib, Salman; Higdon, David

    2010-12-10

    Understanding the origin of the accelerated expansion of the Universe poses one of the greatest challenges in physics today. Lacking a compelling fundamental theory to test, observational efforts are targeted at a better characterization of the underlying cause. If a new form of mass-energy, dark energy, is driving the acceleration, the redshift evolution of the equation of state parameter w(z) will hold essential clues as to its origin. To best exploit data from observations it is necessary to develop a robust and accurate reconstruction approach, with controlled errors, for w(z). We introduce a new, nonparametric method for solving the associated statistical inverse problem based on Gaussian process modeling and Markov chain Monte Carlo sampling. Applying this method to recent supernova measurements, we reconstruct the continuous history of w out to redshift z=1.5.
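
The idea of a Gaussian-process-based nonparametric reconstruction can be sketched in a few lines. The squared-exponential kernel and the synthetic w(z)-like data below are illustrative assumptions, not the authors' actual MCMC analysis:

```python
# A sketch of nonparametric reconstruction with a Gaussian process.
# The squared-exponential kernel, hyperparameters, and synthetic
# w(z)-like data are illustrative assumptions.
import numpy as np

def rbf(a, b, length=0.3, var=1.0):
    """Squared-exponential covariance matrix between point sets a and b."""
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

def gp_posterior_mean(x_train, y_train, x_test, noise=0.05):
    """Posterior mean of a zero-mean GP conditioned on noisy observations."""
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

rng = np.random.default_rng(1)
z = np.linspace(0.0, 1.5, 30)                   # redshifts of "observations"
w_true = -1.0 + 0.2 * z**2                      # smooth underlying w(z)
w_obs = w_true + 0.02 * rng.normal(size=z.size)

z_grid = np.linspace(0.0, 1.5, 50)
w_hat = gp_posterior_mean(z, w_obs, z_grid)     # continuous reconstruction
```

The GP prior enforces smoothness without assuming a parametric form for w(z); in the paper this is combined with MCMC sampling to propagate the observational errors.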

  8. Nonparametric Maximum Entropy Estimation on Information Diagrams

    CERN Document Server

    Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn

    2016-01-01

    Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...

  9. Nonparametric estimation of employee stock options

    Institute of Scientific and Technical Information of China (English)

    FU Qiang; LIU Li-an; LIU Qian

    2006-01-01

    We propose a new model to price employee stock options (ESOs). The model is based on nonparametric statistical methods with market data. It incorporates a kernel estimator and employs a three-step method to modify the Black-Scholes formula, overcoming the limits of the Black-Scholes formula in handling option prices with varying volatility. It accounts for the effects of the ESOs' own characteristics, such as non-tradability, the longer term to expiration, the early-exercise feature, the restriction on short selling, and the employee's risk aversion, on the risk-neutral pricing condition, and can be applied to ESO valuation whether the explanatory variable is deterministic or random.

  10. A non-parametric model for the cosmic velocity field

    NARCIS (Netherlands)

    Branchini, E; Teodoro, L; Frenk, CS; Schmoldt, [No Value; Efstathiou, G; White, SDM; Saunders, W; Sutherland, W; Rowan-Robinson, M; Keeble, O; Tadros, H; Maddox, S; Oliver, S

    1999-01-01

    We present a self-consistent non-parametric model of the local cosmic velocity field derived from the distribution of IRAS galaxies in the PSCz redshift survey. The survey has been analysed using two independent methods, both based on the assumptions of gravitational instability and linear biasing.

  11. A COMPARISON BETWEEN SINGLE LINKAGE AND COMPLETE LINKAGE IN AGGLOMERATIVE HIERARCHICAL CLUSTER ANALYSIS FOR IDENTIFYING TOURISTS SEGMENTS

    OpenAIRE

    Noor Rashidah Rashid

    2012-01-01

    Cluster analysis is a multivariate statistical method, and agglomerative hierarchical cluster analysis is one approach to it. Two linkage methods in agglomerative hierarchical cluster analysis are single linkage and complete linkage. The purpose of this study is to compare single linkage and complete linkage in agglomerative hierarchical cluster analysis. The comparison of the performance of these linkage methods was carried out using the Kruskal-Wallis tes...
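
Single and complete linkage of the kind compared in this study can be illustrated with scipy's agglomerative clustering. The two-group synthetic data below are an assumption for demonstration, not the study's tourist data:

```python
# Single vs. complete linkage on synthetic two-group "tourist" data.
# The features and group separation are assumptions for illustration.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 0.3, size=(20, 2))
group_b = rng.normal(5.0, 0.3, size=(20, 2))
X = np.vstack([group_a, group_b])

labels = {}
for method in ("single", "complete"):
    Z = linkage(X, method=method)                  # agglomerative merge tree
    labels[method] = fcluster(Z, t=2, criterion="maxclust")

# On well-separated groups both linkages recover the same two segments;
# they differ mainly on elongated or chained clusters.
print(labels["single"], labels["complete"])
```

Single linkage merges on the nearest pair of points between clusters, complete linkage on the farthest pair, which is why their behavior diverges on chained or overlapping segments.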

  12. Nonparametric inference of network structure and dynamics

    Science.gov (United States)

    Peixoto, Tiago P.

    The network structure of complex systems determines their function and serves as evidence for the evolutionary mechanisms that lie behind them. Despite considerable effort in recent years, it remains an open challenge to formulate general descriptions of the large-scale structure of network systems, and how to reliably extract such information from data. Although many approaches have been proposed, few methods attempt to gauge the statistical significance of the uncovered structures, and hence the majority cannot reliably separate actual structure from stochastic fluctuations. Due to the sheer size and high dimensionality of many networks, this represents a major limitation that prevents meaningful interpretations of the results obtained with such nonstatistical methods. In this talk, I will show how these issues can be tackled in a principled and efficient fashion by formulating appropriate generative models of network structure that can have their parameters inferred from data. By employing a Bayesian description of such models, the inference can be performed in a nonparametric fashion that does not require any a priori knowledge or ad hoc assumptions about the data. I will show how this approach can be used to perform model comparison, and how hierarchical models yield the most appropriate trade-off between model complexity and quality of fit based on the statistical evidence present in the data. I will also show how this general approach can be elegantly extended to networks with edge attributes, that are embedded in latent spaces, and that change in time. The latter is obtained via a fully dynamic generative network model, based on arbitrary-order Markov chains, that can also be inferred in a nonparametric fashion. Throughout the talk I will illustrate the application of the methods with many empirical networks such as the internet at the autonomous systems level, the global airport network, the network of actors and films, social networks, citations among ...

  13. Robust physical methods that enrich genomic regions identical by descent for linkage studies: confirmation of a locus for osteogenesis imperfecta

    Directory of Open Access Journals (Sweden)

    Cohen Nadine

    2009-03-01

    Full Text Available Abstract. Background: The monogenic disease osteogenesis imperfecta (OI) is due to single mutations in either of the collagen genes ColA1 or ColA2, but within the same family a given mutation is accompanied by a wide range of disease severity. Although this phenotypic variability implies the existence of modifier gene variants, genome-wide scanning of DNA from OI patients has not been reported. Promising genome-wide, marker-independent physical methods for identifying disease-related loci have lacked the robustness for widespread applicability. Therefore we sought to improve these methods and demonstrate their performance in identifying known and novel loci relevant to OI. Results: We have improved methods for enriching regions of identity-by-descent (IBD) shared between related, afflicted individuals. The extent of enrichment exceeds 10- to 50-fold for some loci. The efficiency of the new process is shown by confirmation of the identification of the Col1A2 locus in osteogenesis imperfecta patients from Amish families. Moreover, the analysis revealed additional candidate linkage loci that may harbour modifier genes for OI; a locus on chromosome 1q includes COX-2, a gene implicated in osteogenesis. Conclusion: Technology for physical enrichment of IBD loci is now robust and applicable for finding genes for monogenic diseases and genes for complex diseases. The data support the further investigation of genetic loci other than collagen gene loci to identify genes affecting the clinical expression of osteogenesis imperfecta. The discrimination of IBD mapping will be enhanced when the IBD enrichment procedure is coupled with deep resequencing.

  14. Nonparametric Bayesian inference in biostatistics

    CERN Document Server

    Müller, Peter

    2015-01-01

    As chapters in this book demonstrate, BNP has important uses in clinical sciences and inference for issues like unknown partitions in genomics. Nonparametric Bayesian approaches (BNP) play an ever expanding role in biostatistical inference from use in proteomics to clinical trials. Many research problems involve an abundance of data and require flexible and complex probability models beyond the traditional parametric approaches. As this book's expert contributors show, BNP approaches can be the answer. Survival Analysis, in particular survival regression, has traditionally used BNP, but BNP's potential is now very broad. This applies to important tasks like arrangement of patients into clinically meaningful subpopulations and segmenting the genome into functionally distinct regions. This book is designed to both review and introduce application areas for BNP. While existing books provide theoretical foundations, this book connects theory to practice through engaging examples and research questions. Chapters c...

  15. Nonparametric Regression with Common Shocks

    Directory of Open Access Journals (Sweden)

    Eduardo A. Souza-Rodrigues

    2016-09-01

    Full Text Available This paper considers a nonparametric regression model for cross-sectional data in the presence of common shocks. Common shocks are allowed to be very general in nature; they do not need to be finite dimensional with a known (small) number of factors. I investigate the properties of the Nadaraya-Watson kernel estimator and determine how general the common shocks can be while still obtaining meaningful kernel estimates. Restrictions on the common shocks are necessary because kernel estimators typically manipulate conditional densities, and conditional densities do not necessarily exist in the present case. By appealing to disintegration theory, I provide sufficient conditions for the existence of such conditional densities and show that the estimator converges in probability to the Kolmogorov conditional expectation given the sigma-field generated by the common shocks. I also establish the rate of convergence and the asymptotic distribution of the kernel estimator.
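
The Nadaraya-Watson kernel estimator studied in this paper can be sketched in a few lines. The Gaussian kernel, bandwidth, and the sine test function are illustrative choices, not the paper's setting with common shocks:

```python
# A sketch of the Nadaraya-Watson kernel regression estimator.
# Gaussian kernel, bandwidth h, and the sine test data are assumptions.
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Kernel-weighted local average of y at the evaluation point x0."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)  # Gaussian kernel weights
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=200)

# Estimate the regression function near its peak at x0 = 0.25
m_hat = nadaraya_watson(0.25, x, y, h=0.05)
print(m_hat)
```

The estimator is a ratio of kernel-smoothed quantities, which is why, as the abstract notes, its behavior hinges on the existence of the relevant conditional densities.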

  16. Nonparametric Bayesian Modeling of Complex Networks

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Mørup, Morten

    2013-01-01

    Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks: using ... for complex networks can be derived and point out relevant literature. ...

  17. An asymptotically optimal nonparametric adaptive controller

    Institute of Scientific and Technical Information of China (English)

    郭雷; 谢亮亮

    2000-01-01

    For discrete-time nonlinear stochastic systems with unknown nonparametric structure, a kernel-estimation-based nonparametric adaptive controller is constructed based on the truncated certainty equivalence principle. Global stability and asymptotic optimality of the closed-loop systems are established without resorting to any external excitation.

  18. Nonparametric estimation of location and scale parameters

    KAUST Repository

    Potgieter, C.J.

    2012-12-01

    Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal assumptions regarding the form of the distribution functions of X and Y. We discuss an approach to the estimation problem that is based on asymptotic likelihood considerations. Our results enable us to provide a methodology that can be implemented easily and which yields estimators that are often near optimal when compared to fully parametric methods. We evaluate the performance of the estimators in a series of Monte Carlo simulations. © 2012 Elsevier B.V. All rights reserved.
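
One simple way to estimate μ and σ under minimal distributional assumptions is to match medians and median absolute deviations. This is a robust illustrative sketch of the location-scale idea, not the asymptotic-likelihood estimator of the paper:

```python
# A robust sketch of location-scale estimation: if Y ~ mu + sigma * X,
# match medians and median absolute deviations (MADs). This is an
# illustrative alternative, not the paper's asymptotic-likelihood method.
import numpy as np

def location_scale_fit(x, y):
    """Estimate (mu, sigma) from samples of X and of Y ~ mu + sigma * X."""
    mad = lambda v: np.median(np.abs(v - np.median(v)))
    sigma = mad(y) / mad(x)                    # scale: ratio of spreads
    mu = np.median(y) - sigma * np.median(x)   # location: shift of centers
    return mu, sigma

rng = np.random.default_rng(3)
x = rng.standard_t(df=5, size=5000)            # unknown base distribution
y = 2.0 + 3.0 * rng.standard_t(df=5, size=5000)

mu_hat, sigma_hat = location_scale_fit(x, y)
print(mu_hat, sigma_hat)
```

Because medians and MADs are equivariant under location-scale changes, the estimator is consistent for any base distribution with a unique median, which matches the "minimal assumptions" spirit of the paper.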

  19. Genome scan for linkage to Gilles de la Tourette syndrome

    Energy Technology Data Exchange (ETDEWEB)

    Barr, C.L.; Livingston, J.; Williamson, R. [and others]

    1994-09-01

    Gilles de la Tourette Syndrome (TS) is a familial, neuropsychiatric disorder characterized by chronic, intermittent motor and vocal tics. In addition to tics, affected individuals frequently display symptoms such as attention-deficit hyperactivity disorder and/or obsessive compulsive disorder. Genetic analyses of family data have suggested that susceptibility to the disorder is most likely due to a single genetic locus with a dominant mode of transmission and reduced penetrance. In the search for genetic linkage for TS, we have collected well-characterized pedigrees with multiple affected individuals on whom extensive diagnostic evaluations have been done. The first stage of our study is to scan the genome systematically using a panel of uniformly spaced (10 to 20 cM), highly polymorphic microsatellite markers on 5 families segregating TS. To date, 290 markers have been typed and 3,660 non-overlapping cM of the genome have been excluded for possible linkage under the assumption of genetic homogeneity. Because of the possibility of locus heterogeneity, the overall summed exclusion is not considered tantamount to absolute exclusion of a disease locus in that region. The results from each family are carefully evaluated, and a positive lod score in a single family is followed up by typing closely linked markers. Linkage to TS was examined by two-point analysis using the following genetic model: a single autosomal dominant gene with gene frequency .003 and maximum penetrance of .99. An age-of-onset correction is included using a linear function increasing from age 2 years to 21 years. A small rate of phenocopies is also incorporated into the model. Only individuals with TS or CMT according to DSM III-R criteria were regarded as affected for the purposes of this summary. Additional markers are being tested to provide coverage at 5 cM intervals. Moreover, we are currently analyzing the data non-parametrically using the Affected-Pedigree-Member method of linkage analysis.

  20. Non-Parametric Statistical Methods and Data Transformations in Agricultural Pest Population Studies

    Directory of Open Access Journals (Sweden)

    Alcides Cabrera Campos

    2012-09-01

    Full Text Available Analysis of data from agricultural pest populations regularly shows that such data do not fulfill the theoretical requirements for applying classical ANOVA. Box-Cox transformations and nonparametric statistical methods are commonly used as alternatives to solve this problem. In this paper, we describe the results of applying these techniques to data from Thrips palmi Karny sampled in potato (Solanum tuberosum L.) plantations. The X² test was used for the goodness of fit of the negative binomial distribution and as a test of independence to investigate the relationship between plant strata and insect stages. Seven data transformations were also applied to meet the requirements of classical ANOVA, but failed to eliminate the relationship between mean and variance. Given this negative result, comparisons between insect population densities were made using the nonparametric Kruskal-Wallis ANOVA test. The results of this analysis allowed selecting the insect larval stage and the plant middle stratum as keys to designing pest sampling plans.
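
A Kruskal-Wallis comparison of the kind used in the study can be reproduced in outline with scipy. The negative-binomial counts per stratum below are simulated, hypothetical stand-ins for the Thrips palmi field data:

```python
# Kruskal-Wallis comparison of simulated insect counts across plant strata.
# The negative-binomial counts are hypothetical stand-ins for field data.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(4)
lower = rng.negative_binomial(2, 0.5, size=30)
middle = rng.negative_binomial(2, 0.2, size=30)   # denser middle stratum
upper = rng.negative_binomial(2, 0.5, size=30)

stat, p = kruskal(lower, middle, upper)  # rank-based one-way ANOVA
print(stat, p)
```

Because the test ranks the observations, it tolerates the mean-variance relationship that defeated the transformations described in the abstract.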

  1. Nonparametric Estimation of Mean and Variance and Pricing of Securities

    Directory of Open Access Journals (Sweden)

    Akhtar R. Siddique

    2000-03-01

    Full Text Available This paper develops a filtering-based framework of nonparametric estimation of the parameters of a diffusion process from the conditional moments of discrete observations of the process. The method is implemented for interest rate data in the Eurodollar and long-term bond markets. The resulting estimates are then used to form nonparametric univariate and bivariate interest rate models and to compute prices for short-term Eurodollar interest rate futures options and long-term discount bonds. The bivariate model produces prices substantially closer to the market prices.

  2. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference while making as few restrictive parametric assumptions as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  3. Linkage studies of bipolar disorder with chromosome 18 markers.

    Science.gov (United States)

    Bowen, T; Kirov, G; Gill, M; Spurlock, G; Vallada, H P; Murray, R M; McGuffin, P; Collier, D A; Owen, M J; Craddock, N

    1999-10-15

    Evidence consistent with the existence of genetic linkage between bipolar disorder and three regions on chromosome 18 (the pericentromeric region, 18q21, and 18q22-q23) has been reported. Some analyses indicated greater evidence for linkage in pedigrees in which paternal transmission of disease occurs. We have undertaken linkage analyses using 12 highly polymorphic markers spanning these three regions of interest in a sample of 48 U.K. bipolar pedigrees. The sample comprises predominantly nuclear families and includes 118 subjects with Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) bipolar I disorder and 147 subjects with a broadly defined phenotype. Our data do not provide support for linkage using either parametric or nonparametric analyses. Evidence for linkage was not significantly increased by analyses that allowed for heterogeneity, nor by analysing the subset of pedigrees consistent with paternal transmission.

  4. ANALYSIS OF TIED DATA: AN ALTERNATIVE NON-PARAMETRIC APPROACH

    Directory of Open Access Journals (Sweden)

    I. C. A. OYEKA

    2012-02-01

    Full Text Available This paper presents a non-parametric statistical method for analyzing two-sample data that makes provision for possible ties in the data. A test statistic is developed and shown to be free of the effect of any ties. An illustrative example is provided, and the method is shown to compare favourably with its competitor, the Mann-Whitney test, and to be more powerful than the latter when there are ties.
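
    As a point of reference, the competitor named above, the Mann-Whitney test, can be run in a few lines with scipy; the samples here are invented purely for illustration (scipy handles ties via midranks and a tie-corrected approximation):

    ```python
    import numpy as np
    from scipy.stats import mannwhitneyu

    # Two small illustrative samples with ties across groups (3 and 5 repeat).
    x = np.array([1, 3, 3, 5, 7, 9])
    y = np.array([2, 3, 5, 5, 8, 10])

    # With ties present, scipy assigns midranks and applies a tie correction
    # to the variance of the U statistic.
    stat, p = mannwhitneyu(x, y, alternative="two-sided")
    print(stat, p)
    ```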

  5. Nonparametric estimation for hazard rate monotonously decreasing system

    Institute of Scientific and Technical Information of China (English)

    Han Fengyan; Li Weisong

    2005-01-01

    Estimation of the density and hazard rate is very important to the reliability analysis of a system. In order to estimate the density and hazard rate of a system with monotonically decreasing hazard rate, a new nonparametric estimator is put forward. The estimator is based on the kernel function method and an optimisation algorithm. Numerical experiments show that the method is accurate enough and can be used in many cases.
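
    The generic kernel route to a hazard estimate, not the paper's constrained estimator, is to form h(t) = f(t)/S(t) from a kernel density estimate; a minimal sketch on simulated exponential lifetimes (true hazard is constant at 1):

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    t = rng.exponential(1.0, size=5000)   # lifetimes; true hazard rate is 1

    kde = gaussian_kde(t)                 # kernel density estimate f_hat

    def hazard(x, grid=np.linspace(0.0, 20.0, 2001)):
        """h(x) = f_hat(x) / S_hat(x), with S_hat from integrating the KDE."""
        f = kde(np.atleast_1d(x))[0]
        dens = kde(grid)
        mask = grid >= x
        # Survival S_hat(x): Riemann sum of the density from x onward.
        S = dens[mask].sum() * (grid[1] - grid[0])
        return f / S

    h = hazard(1.0)
    print(h)  # should be near 1 away from the boundary at t = 0
    ```

    A plain KDE ignores both the boundary at zero and the monotonicity constraint; the abstract's point is precisely that imposing the decreasing-hazard shape improves on this baseline.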

  6. Bayesian nonparametric duration model with censorship

    Directory of Open Access Journals (Sweden)

    Joseph Hakizamungu

    2007-10-01

    Full Text Available This paper is concerned with nonparametric i.i.d. durations models censored observations and we establish by a simple and unified approach the general structure of a bayesian nonparametric estimator for a survival function S. For Dirichlet prior distributions, we describe completely the structure of the posterior distribution of the survival function. These results are essentially supported by prior and posterior independence properties.

  7. Bootstrap Estimation for Nonparametric Efficiency Estimates

    OpenAIRE

    1995-01-01

    This paper develops a consistent bootstrap estimation procedure to obtain confidence intervals for nonparametric measures of productive efficiency. Although the methodology is illustrated in terms of technical efficiency measured by output distance functions, the technique can be easily extended to other consistent nonparametric frontier models. Variation in estimated efficiency scores is assumed to result from variation in empirical approximations to the true boundary of the production set. ...
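
    The bootstrap machinery underneath such procedures can be sketched generically; the percentile interval below uses synthetic stand-in "efficiency scores" (in a real frontier application the resampling scheme is more delicate, as the paper discusses):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Stand-in for estimated efficiency scores of 50 production units
    # (in a real DEA/distance-function study these come from the frontier fit).
    scores = rng.beta(5, 2, size=50)

    def percentile_ci(data, stat=np.mean, n_boot=2000, alpha=0.05, rng=rng):
        """Percentile bootstrap confidence interval for a statistic."""
        boots = np.array([stat(rng.choice(data, size=data.size, replace=True))
                          for _ in range(n_boot)])
        return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

    lo, hi = percentile_ci(scores)
    print(lo, hi)
    ```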

  8. Nonparametric estimation of Fisher information from real data

    Science.gov (United States)

    Har-Shemesh, Omri; Quax, Rick; Miñano, Borja; Hoekstra, Alfons G.; Sloot, Peter M. A.

    2016-02-01

    The Fisher information matrix (FIM) is a widely used measure for applications including statistical inference, information geometry, experiment design, and the study of criticality in biological systems. The FIM is defined for a parametric family of probability distributions and its estimation from data follows one of two paths: either the distribution is assumed to be known and the parameters are estimated from the data or the parameters are known and the distribution is estimated from the data. We consider the latter case which is applicable, for example, to experiments where the parameters are controlled by the experimenter and a complicated relation exists between the input parameters and the resulting distribution of the data. Since we assume that the distribution is unknown, we use a nonparametric density estimation on the data and then compute the FIM directly from that estimate using a finite-difference approximation to estimate the derivatives in its definition. The accuracy of the estimate depends on both the method of nonparametric estimation and the difference Δθ between the densities used in the finite-difference formula. We develop an approach for choosing the optimal parameter difference Δθ based on large deviations theory and compare two nonparametric density estimation methods, the Gaussian kernel density estimator and a novel "density estimation using field theory" method. We also compare these two methods to a recently published approach that circumvents the need for density estimation by estimating a nonparametric f-divergence and using it to approximate the FIM. We use the Fisher information of the normal distribution to validate our method and as a more involved example we compute the temperature component of the FIM in the two-dimensional Ising model and show that it obeys the expected relation to the heat capacity and therefore peaks at the phase transition at the correct critical temperature.
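
    A toy version of the idea (KDE plus a finite difference in the parameter) can be written down for the mean of a normal distribution, where the true Fisher information is exactly 1. This is only a sketch of the general recipe, not the paper's optimal-Δθ procedure or its field-theory density estimator:

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)
    dtheta = 0.2                  # finite-difference step in the parameter
    n = 20000

    # Samples from the model at theta = 0 and theta = dtheta (here N(theta, 1)).
    s0 = rng.normal(0.0, 1.0, n)
    s1 = rng.normal(dtheta, 1.0, n)
    p0, p1 = gaussian_kde(s0), gaussian_kde(s1)

    # I(theta) ~ E[ ((log p1(x) - log p0(x)) / dtheta)^2 ] under p0,
    # evaluated by quadrature on a grid where p0 has appreciable mass.
    x = np.linspace(-4.0, 4.2, 2001)
    score = (np.log(p1(x)) - np.log(p0(x))) / dtheta
    fim = np.sum(score**2 * p0(x)) * (x[1] - x[0])
    print(fim)  # true Fisher information of N(theta, 1) is 1
    ```

    The estimate is biased by both the KDE bandwidth and the choice of dtheta, which is exactly the trade-off the paper's large-deviations argument addresses.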

  9. Non-parametric estimation of Fisher information from real data

    CERN Document Server

    Shemesh, Omri Har; Miñano, Borja; Hoekstra, Alfons G; Sloot, Peter M A

    2015-01-01

    The Fisher Information matrix is a widely used measure for applications ranging from statistical inference, information geometry, experiment design, to the study of criticality in biological systems. Yet there is no commonly accepted non-parametric algorithm to estimate it from real data. In this rapid communication we show how to accurately estimate the Fisher information in a nonparametric way. We also develop a numerical procedure to minimize the errors by choosing the interval of the finite difference scheme necessary to compute the derivatives in the definition of the Fisher information. Our method uses the recently published "Density Estimation using Field Theory" algorithm to compute the probability density functions for continuous densities. We use the Fisher information of the normal distribution to validate our method and as an example we compute the temperature component of the Fisher Information Matrix in the two dimensional Ising model and show that it obeys the expected relation to the heat capa...

  10. Estimation of Stochastic Volatility Models by Nonparametric Filtering

    DEFF Research Database (Denmark)

    Kanaya, Shin; Kristensen, Dennis

    2016-01-01

    A two-step estimation method of stochastic volatility models is proposed: In the first step, we nonparametrically estimate the (unobserved) instantaneous volatility process. In the second step, standard estimation methods for fully observed diffusion processes are employed, but with the filtered/estimated volatility process replacing the latent process. Our estimation strategy is applicable to both parametric and nonparametric stochastic volatility models, and can handle both jumps and market microstructure noise. The resulting estimators of the stochastic volatility model will carry additional biases and variances due to the first-step estimation, but under regularity conditions we show that these vanish asymptotically and our estimators inherit the asymptotic properties of the infeasible estimators based on observations of the volatility process. A simulation study examines the finite-sample properties...

  11. Nonparametric Regression Estimation for Multivariate Null Recurrent Processes

    Directory of Open Access Journals (Sweden)

    Biqing Cai

    2015-04-01

    Full Text Available This paper discusses nonparametric kernel regression with the regressor being a d-dimensional β-null recurrent process in the presence of conditional heteroscedasticity. We show that the mean function estimator is consistent with convergence rate √(n(T)h^d), where n(T) is the number of regenerations for a β-null recurrent process, and that the limiting distribution (with proper normalization) is normal. Furthermore, we show that the two-step estimator for the volatility function is consistent. The finite-sample performance of the estimate is quite reasonable when the leave-one-out cross-validation method is used for bandwidth selection. We apply the proposed method to study the relationship of the Federal funds rate with the 3-month and 5-year T-bill rates and discover a nonlinear relationship. Furthermore, the in-sample and out-of-sample performance of the nonparametric model is far better than that of the linear model.
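
    The kernel regression estimator at the core of such analyses is the Nadaraya-Watson weighted average; a minimal one-dimensional sketch on simulated data (bandwidth fixed by hand here, whereas the paper selects it by leave-one-out cross-validation):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    x = rng.uniform(0.0, 2.0 * np.pi, 400)
    y = np.sin(x) + rng.normal(0.0, 0.1, x.size)

    def nw(x0, x, y, h=0.3):
        """Nadaraya-Watson estimate of m(x0) with a Gaussian kernel."""
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)
        return np.sum(w * y) / np.sum(w)

    est = nw(np.pi / 2, x, y)
    print(est)  # true regression function value is sin(pi/2) = 1
    ```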

  12. Bayesian nonparametric adaptive control using Gaussian processes.

    Science.gov (United States)

    Chowdhary, Girish; Kingravi, Hassan A; How, Jonathan P; Vela, Patricio A

    2015-03-01

    Most current model reference adaptive control (MRAC) methods rely on parametric adaptive elements, in which the number of parameters of the adaptive element is fixed a priori, often through expert judgment. An example of such an adaptive element is the radial basis function network (RBFN), with RBF centers preallocated based on the expected operating domain. If the system operates outside of the expected operating domain, this adaptive element can become ineffective in capturing and canceling the uncertainty, thus rendering the adaptive controller only semiglobal in nature. This paper investigates a Gaussian process-based Bayesian MRAC architecture (GP-MRAC), which leverages the power and flexibility of GP Bayesian nonparametric models of uncertainty. GP-MRAC does not require the centers to be preallocated, can inherently handle measurement noise, and enables MRAC to handle a broader set of uncertainties, including those that are defined as distributions over functions. We use stochastic stability arguments to show that GP-MRAC guarantees good closed-loop performance with no prior domain knowledge of the uncertainty. Online implementable GP inference methods are compared in numerical simulations against RBFN-MRAC with preallocated centers and are shown to provide better tracking and improved long-term learning.
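
    The GP regression machinery underneath such an architecture can be sketched in a few lines; this is a generic GP posterior mean with a squared-exponential kernel and a known noise level, not the flight-control architecture itself (all hyperparameter values are illustrative):

    ```python
    import numpy as np

    def rbf(a, b, ell=1.0, sf=1.0):
        """Squared-exponential (RBF) kernel matrix for 1-D inputs."""
        d = a[:, None] - b[None, :]
        return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

    rng = np.random.default_rng(3)
    X = np.linspace(-3, 3, 25)
    y = np.sin(X) + rng.normal(0.0, 0.1, X.size)   # noisy observations

    sn = 0.1                                       # assumed-known noise std
    K = rbf(X, X) + sn**2 * np.eye(X.size)
    alpha = np.linalg.solve(K, y)

    Xs = np.array([0.5])
    mu = rbf(Xs, X) @ alpha                        # posterior mean at test point
    print(mu[0])  # close to the true value sin(0.5)
    ```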

  13. Right-Censored Nonparametric Regression: A Comparative Simulation Study

    Directory of Open Access Journals (Sweden)

    Dursun Aydın

    2016-11-01

    Full Text Available This paper examines the operation of selection criteria for right-censored nonparametric regression using smoothing splines. In order to transform the response variable into a variable that incorporates the right-censorship, we used the Kaplan-Meier weights proposed by [1] and [2]. The major problem in the smoothing spline method is determining a smoothing parameter to obtain nonparametric estimates of the regression function. In this study, this parameter is chosen based on censored data by means of criteria such as the improved Akaike information criterion (AICc), the Bayesian (or Schwarz) information criterion (BIC), and generalized cross-validation (GCV). For this purpose, a Monte Carlo simulation study is carried out to illustrate which selection criterion gives the best estimation for censored data.

  14. Nonparametric statistical tests for the continuous data: the basic concept and the practical use.

    Science.gov (United States)

    Nahm, Francis Sahngun

    2016-02-01

    Conventional statistical tests are usually called parametric tests. Parametric tests are used more frequently than nonparametric tests in many medical articles, because most medical researchers are familiar with them and statistical software packages strongly support them. Parametric tests require an important assumption, the assumption of normality, which means that the distribution of sample means is normally distributed. However, parametric tests can be misleading when this assumption is not satisfied. In this circumstance, nonparametric tests are the available alternatives, because they do not require the normality assumption. Nonparametric tests are statistical methods based on signs and ranks. In this article, we discuss the basic concepts and practical use of nonparametric tests as a guide to their proper use.
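
    A concrete illustration of a rank-based test alongside its parametric counterpart, using scipy and invented paired data (e.g. pain scores before and after a treatment):

    ```python
    import numpy as np
    from scipy.stats import ttest_rel, wilcoxon

    # Illustrative paired scores before/after treatment (not real patient data).
    before = np.array([7.0, 6.5, 8.0, 5.5, 7.5, 6.0, 9.0, 7.0, 6.5, 8.5])
    after  = np.array([5.0, 6.0, 6.5, 5.0, 6.0, 5.5, 7.0, 6.0, 5.5, 7.0])

    # Wilcoxon signed-rank: a rank-based alternative to the paired t-test
    # that does not assume normality of the differences.
    w_stat, w_p = wilcoxon(before, after)
    t_stat, t_p = ttest_rel(before, after)
    print(w_p, t_p)
    ```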

  15. Analysis of Xq27-28 linkage in the international consortium for prostate cancer genetics (ICPCG) families

    Directory of Open Access Journals (Sweden)

    Bailey-Wilson Joan E

    2012-06-01

    Full Text Available Abstract Background Genetic variants are likely to contribute to a portion of prostate cancer risk. Full elucidation of the genetic etiology of prostate cancer is difficult because of incomplete penetrance and genetic and phenotypic heterogeneity. Current evidence suggests that genetic linkage to prostate cancer has been found on several chromosomes including the X; however, identification of causative genes has been elusive. Methods Parametric and non-parametric linkage analyses were performed using 26 microsatellite markers in each of 11 groups of multiple-case prostate cancer families from the International Consortium for Prostate Cancer Genetics (ICPCG). Meta-analyses of the resultant family-specific linkage statistics across the entire 1,323 families and in several predefined subsets were then performed. Results Meta-analyses of linkage statistics resulted in a maximum parametric heterogeneity lod score (HLOD) of 1.28, and an allele-sharing lod score (LOD) of 2.0 in favor of linkage to Xq27-q28 at 138 cM. In subset analyses, families with average age at onset less than 65 years exhibited a maximum HLOD of 1.8 (at 138 cM) versus a maximum regional HLOD of only 0.32 in families with average age at onset of 65 years or older. Surprisingly, the subset of families with only 2–3 affected men and some evidence of male-to-male transmission of prostate cancer gave the strongest evidence of linkage to the region (HLOD = 3.24 at 134 cM). For this subset, the HLOD was slightly increased (HLOD = 3.47 at 134 cM) when families used in the original published report of linkage to Xq27-28 were excluded. Conclusions Although there was not strong support for linkage to the Xq27-28 region in the complete set of families, the subset of families with earlier age at onset exhibited more evidence of linkage than families with later onset of disease. A subset of families with 2–3 affected individuals and with some evidence of male-to-male disease transmission

  16. A novel nonparametric confidence interval for differences of proportions for correlated binary data.

    Science.gov (United States)

    Duan, Chongyang; Cao, Yingshu; Zhou, Lizhi; Tan, Ming T; Chen, Pingyan

    2016-11-16

    Various confidence interval estimators have been developed for differences in proportions resulting from correlated binary data. However, the width of the widely recommended Tango's score confidence interval tends to be large, and the computing burden of the exact methods recommended for small-sample data is intensive. A recently proposed rank-based nonparametric method, which treats proportions as special areas under receiver operating characteristic (ROC) curves, provided a new way to construct the confidence interval for a proportion difference on paired data, but its complex computation limits its application in practice. In this article, we develop a new nonparametric method utilizing the U-statistics approach for comparing two or more correlated areas under ROC curves. The new confidence interval has a simple analytic form with a new degrees-of-freedom estimate of n - 1. It demonstrates good coverage properties and has shorter widths than Tango's interval. This new confidence interval, with the new estimate of degrees of freedom, also improves on the coverage probabilities of the rank-based nonparametric confidence interval. Compared with the approximate exact unconditional method, the nonparametric confidence interval demonstrates good coverage properties even in small samples, and it is very easy to implement computationally. This nonparametric procedure is evaluated using simulation studies and illustrated with three real examples. The simplified nonparametric confidence interval is an appealing choice in practice for its ease of use and good performance. © The Author(s) 2016.

  17. Nonparametric correlation models for portfolio allocation

    DEFF Research Database (Denmark)

    Aslanidis, Nektarios; Casas, Isabel

    2013-01-01

    This article proposes time-varying nonparametric and semiparametric estimators of the conditional cross-correlation matrix in the context of portfolio allocation. Simulation results show that the nonparametric and semiparametric models are best in DGPs with substantial variability or structural breaks in correlations. Only when correlations are constant does the parametric DCC model deliver the best outcome. The methodologies are illustrated by evaluating two interesting portfolios. The first portfolio consists of the equity sector SPDRs and the S&P 500, while the second one contains major currencies. Results show the nonparametric model generally dominates the others when evaluating in-sample. However, the semiparametric model is best for out-of-sample analysis.

  18. Nonparametric Independence Screening in Sparse Ultra-High Dimensional Additive Models

    CERN Document Server

    Fan, Jianqing; Song, Rui

    2011-01-01

    A variable screening procedure via correlation learning was proposed by Fan and Lv (2008) to reduce dimensionality in sparse ultra-high dimensional models. Even when the true model is linear, the marginal regression can be highly nonlinear. To address this issue, we further extend correlation learning to marginal nonparametric learning. Our nonparametric independence screening, called NIS, is a specific member of the sure independence screening family. Several closely related variable screening procedures are proposed. Under nonparametric additive models, it is shown that, under some mild technical conditions, the proposed independence screening methods enjoy a sure screening property. The extent to which the dimensionality can be reduced by independence screening is also explicitly quantified. As a methodological extension, an iterative nonparametric independence screening (INIS) procedure is also proposed to enhance the finite sample performance for fitting sparse additive models. The simulation results and a real data a...
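
    The baseline that NIS generalizes, correlation-based sure independence screening, fits in a few lines; this sketch uses synthetic data with a linear sparse truth (NIS would replace the marginal correlation with a marginal nonparametric fit):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n, p = 200, 1000
    X = rng.normal(size=(n, p))
    # Sparse truth: only the first two of 1000 features affect the response.
    y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0.0, 0.5, n)

    # Sure independence screening: rank features by |marginal correlation|
    # and keep a small top set for downstream model fitting.
    corr = np.abs(np.corrcoef(X.T, y)[-1, :-1])
    top = np.argsort(corr)[::-1][:20]
    print(top[:5])
    ```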

  19. Correlated Non-Parametric Latent Feature Models

    CERN Document Server

    Doshi-Velez, Finale

    2012-01-01

    We are often interested in explaining data through a set of hidden factors or features. When the number of hidden features is unknown, the Indian Buffet Process (IBP) is a nonparametric latent feature model that does not bound the number of active features in a dataset. However, the IBP assumes that all latent features are uncorrelated, making it inadequate for many real-world problems. We introduce a framework for correlated nonparametric feature models, generalising the IBP. We use this framework to generate several specific models and demonstrate applications on real-world datasets.
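
    The IBP prior itself has a simple sequential construction (customers sample previously tried dishes in proportion to their popularity, then try a Poisson number of new ones); a minimal sampler, for intuition only:

    ```python
    import numpy as np

    def sample_ibp(n_customers, alpha, rng):
        """Draw a binary feature matrix Z from the IBP prior."""
        dishes = []                    # per-feature counts of adopters so far
        rows = []
        for i in range(1, n_customers + 1):
            # Existing dishes: adopt dish k with probability m_k / i.
            row = [rng.random() < m / i for m in dishes]
            for k, taken in enumerate(row):
                dishes[k] += int(taken)
            # New dishes: Poisson(alpha / i) of them.
            k_new = rng.poisson(alpha / i)
            dishes.extend([1] * k_new)
            row.extend([True] * k_new)
            rows.append(row)
        Z = np.zeros((n_customers, len(dishes)), dtype=int)
        for i, row in enumerate(rows):
            Z[i, :len(row)] = row
        return Z

    rng = np.random.default_rng(0)
    Z = sample_ibp(10, alpha=2.0, rng=rng)
    print(Z.shape)  # number of columns (features) is itself random
    ```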

  20. A Censored Nonparametric Software Reliability Model

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper analyses the effect of censoring on the estimation of the failure rate, and presents a framework for a censored nonparametric software reliability model. The model is based on nonparametric testing of a monotonically decreasing failure rate and weighted kernel failure rate estimation under the constraint that the failure rate is monotonically decreasing. Not only does the model make few assumptions and impose weak constraints, but the number of residual defects in the software system can also be estimated. The numerical experiment and real data analysis show that the model performs well with censored data.


  2. Linkage methods for connecting children with parents in electronic health record and state public health insurance data.

    Science.gov (United States)

    Angier, Heather; Gold, Rachel; Crawford, Courtney; P O'Malley, Jean; J Tillotson, Carrie; Marino, Miguel; DeVoe, Jennifer E

    2014-11-01

    The objective of this study was to develop methodologies for creating child-parent 'links' in two healthcare-related data sources. We linked children and parents who were patients in a network of Oregon clinics with a shared electronic health record (EHR), using data that reported the child's emergency contact information or the 'guarantor' for the child's visits. We also linked children and parents enrolled in the Oregon Health Plan (OHP; Oregon's public health insurance programs), using administrative data; here, we defined a 'child' by age and identified 'parents' from among adults sharing the same OHP household identification (ID) number. In both data sources, parents had to be 12-55 years older than the child. We used OHP individual client ID and EHR patient ID numbers to assess the quality of our linkages through cross-validation. Of the 249,079 children in the EHR dataset, we identified 62,967 who had a 'linkable' parent with patient information in the EHR. In the OHP data, 889,452 household IDs were assigned to at least one child; 525,578 of the children with a household ID had a 'linkable' parent (272,578 households). Cross-validation of linkages revealed that 99.8% of EHR links validated in OHP data and 97.7% of OHP links validated in EHR data. The ability to link children and their parents in healthcare-related datasets will be useful to inform efforts to improve children's health. Thus, we developed strategies for linking children with their parents in an EHR and a public health insurance administrative dataset.
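
    The household-ID plus age-gap rule can be sketched on toy records; the data below are invented, and the birth-year cutoff separating "children" from "adults" is an arbitrary stand-in (the study's exact age definition is not reproduced here), while the 12-55-year parent-child gap follows the abstract:

    ```python
    from collections import defaultdict

    # Toy records: (person_id, household_id, birth_year). The real study used
    # EHR guarantor/emergency-contact fields and OHP household IDs.
    people = [
        ("c1", "H1", 2008), ("c2", "H1", 2010), ("p1", "H1", 1980),
        ("c3", "H2", 2005), ("p2", "H2", 1975), ("x1", "H2", 2000),
    ]

    households = defaultdict(list)
    for pid, hid, by in people:
        households[hid].append((pid, by))

    links = []
    for hid, members in households.items():
        kids = [(pid, by) for pid, by in members if by >= 2000]   # toy cutoff
        adults = [(pid, by) for pid, by in members if by < 2000]
        for cid, cby in kids:
            for aid, aby in adults:
                # Plausible parent: 12-55 years older than the child.
                if 12 <= cby - aby <= 55:
                    links.append((cid, aid))
    print(sorted(links))
    ```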

  3. VT Wildlife Linkage Habitat

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) The Wildlife Linkage Habitat Analysis uses landscape scale data to identify or predict the location of potentially significant wildlife linkage...

  4. A Bayesian Nonparametric Approach to Test Equating

    Science.gov (United States)

    Karabatsos, George; Walker, Stephen G.

    2009-01-01

    A Bayesian nonparametric model is introduced for score equating. It is applicable to all major equating designs, and has advantages over previous equating models. Unlike the previous models, the Bayesian model accounts for positive dependence between distributions of scores from two tests. The Bayesian model and the previous equating models are…

  5. How Are Teachers Teaching? A Nonparametric Approach

    Science.gov (United States)

    De Witte, Kristof; Van Klaveren, Chris

    2014-01-01

    This paper examines which configuration of teaching activities maximizes student performance. For this purpose a nonparametric efficiency model is formulated that accounts for (1) self-selection of students and teachers in better schools and (2) complementary teaching activities. The analysis distinguishes both individual teaching (i.e., a…

  6. Decompounding random sums: A nonparametric approach

    DEFF Research Database (Denmark)

    Hansen, Martin Bøgsted; Pitts, Susan M.

    ...review a number of applications and consider the nonlinear inverse problem of inferring the cumulative distribution function of the components in the random sum. We review the existing literature on non-parametric approaches to the problem. The models amenable to the analysis are generalized considerably...

  7. A Nonparametric Analogy of Analysis of Covariance

    Science.gov (United States)

    Burnett, Thomas D.; Barr, Donald R.

    1977-01-01

    A nonparametric test of the hypothesis of no treatment effect is suggested for a situation where measures of the severity of the condition treated can be obtained and ranked both pre- and post-treatment. The test allows the pre-treatment rank to be used as a concomitant variable. (Author/JKS)

  8. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    ...parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...


  10. Generative Temporal Modelling of Neuroimaging - Decomposition and Nonparametric Testing

    DEFF Research Database (Denmark)

    Hald, Ditte Høvenhoff

    The goal of this thesis is to explore two improvements for functional magnetic resonance imaging (fMRI) analysis: our proposed decomposition method and an extension to the non-parametric testing framework. Analysis of fMRI allows researchers to investigate the functional processes of the brain, and provides insight into neuronal coupling during mental processes or tasks. The decomposition method is a Gaussian process-based independent components analysis (GPICA), which incorporates a temporal dependency in the sources. A hierarchical model specification is used, featuring both...

  11. Nonparametric Cognitive Diagnosis: A Cluster Diagnostic Method Based on Grade Response Items

    Institute of Scientific and Technical Information of China (English)

    康春花; 任平; 曾平飞

    2015-01-01

    Examinations help students learn more efficiently by revealing their learning gaps. To achieve this goal, cognitive diagnostic assessment must differentiate students who have mastered a set of attributes measured by a test from those who have not. K-means cluster analysis, a nonparametric cognitive diagnosis method, requires only the Q-matrix, which reflects the relationship between attributes and items. It does not require parameter estimation, so it is independent of sample size, simple to operate, and easy to understand. Previous research used sum-score vectors or capability-score vectors as the clustering objects; these methods are only suitable for dichotomous data. Structured-response items are, however, a main item type used in examinations, particularly as required in recent reforms. Building on previous research, this paper puts forward a method that calculates a capability matrix reflecting the mastery level on skills and is applicable to grade response items. Our study included four parts. First, we introduced the K-means cluster diagnosis method adapted for dichotomous data. Second, we expanded the K-means cluster diagnosis method to grade response data (GRCDM). Third, we investigated the performance of the introduced method in a simulation study. Fourth, we investigated its performance in an empirical study. The simulation study focused on three factors. First, the sample size was set to 100, 500, and 1000. Second, the percentage of random errors was manipulated to be 5%, 10%, and 20%. Third, four attribute hierarchies, as proposed by Leighton, were considered. All experimental conditions involved seven attributes, with different items according to the hierarchies. Simulation results showed that: (1) GRCDM had a high pattern match ratio (PMR) and high marginal match ratio (MMR), showing the method to be feasible in cognitive diagnostic assessment.
    (2) The classification accuracy (MMR and PMR
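
    The clustering step can be sketched with a plain K-means (Lloyd's algorithm) on toy graded capability vectors; this is only an illustration of the clustering idea, not the paper's GRCDM procedure or its capability-matrix construction:

    ```python
    import numpy as np

    def kmeans(X, k, init_idx, n_iter=50):
        """Plain Lloyd's algorithm (illustrative; k-means++ init would be better)."""
        centers = X[np.asarray(init_idx)].astype(float).copy()
        for _ in range(n_iter):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d2.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        return labels, centers

    rng = np.random.default_rng(2)
    # Toy capability vectors on 4 attributes, graded in [0, 1] (partial mastery).
    masters = rng.uniform(0.7, 1.0, size=(20, 4))
    non_masters = rng.uniform(0.0, 0.3, size=(20, 4))
    X = np.vstack([masters, non_masters])

    labels, centers = kmeans(X, 2, init_idx=[0, 39])
    print(labels)
    ```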

  12. Binary Classifier Calibration Using a Bayesian Non-Parametric Approach.

    Science.gov (United States)

    Naeini, Mahdi Pakdaman; Cooper, Gregory F; Hauskrecht, Milos

    Learning probabilistic predictive models that are well calibrated is critical for many prediction and decision-making tasks in data mining. This paper presents two new non-parametric methods for calibrating the outputs of binary classification models: a method based on Bayes optimal selection and a method based on Bayesian model averaging. The advantage of these methods is that they are independent of the algorithm used to learn the predictive model, and they can be applied in a post-processing step, after the model is learned. This makes them applicable to a wide variety of machine learning models and methods. These calibration methods, as well as other methods, are tested on a variety of datasets in terms of both discrimination and calibration performance. The results show the methods either outperform or are comparable in performance to the state-of-the-art calibration methods.
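
    A much simpler non-parametric baseline, histogram binning (calibrated probability = empirical positive rate in each score bin), shows the shape of the calibration problem; the Bayesian methods above can be viewed as principled refinements of this idea. Data here are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 5000
    # Synthetic miscalibrated scores: true P(y=1|s) = sqrt(s), not s.
    p_true = rng.uniform(0, 1, n)
    y = (rng.uniform(0, 1, n) < p_true).astype(int)
    s = p_true ** 2            # a distorted "classifier score"

    # Histogram binning: per-bin empirical positive rate.
    bins = np.linspace(0, 1, 11)
    idx = np.clip(np.digitize(s, bins) - 1, 0, 9)
    bin_rate = np.array([y[idx == b].mean() if np.any(idx == b) else 0.5
                         for b in range(10)])

    def calibrate(score):
        return bin_rate[np.clip(np.digitize(score, bins) - 1, 0, 9)]

    print(calibrate(np.array([0.04, 0.5])))
    ```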

  13. Linkage and association analysis of CACNG3 in childhood absence epilepsy

    DEFF Research Database (Denmark)

    Everett, Kate V; Chioza, Barry; Aicardi, Jean

    2007-01-01

    and association analysis. Assuming locus heterogeneity, a significant HLOD score (HLOD = 3.54, alpha = 0.62) was obtained for markers encompassing CACNG3 in 65 nuclear families with a proband with CAE. The maximum non-parametric linkage score was 2.87 (P ... and the 65 nuclear pedigrees. Evidence for transmission disequilibrium (P

  14. Nonparametric tests for pathwise properties of semimartingales

    CERN Document Server

    Cont, Rama; 10.3150/10-BEJ293

    2011-01-01

    We propose two nonparametric tests for investigating the pathwise properties of a signal modeled as the sum of a Lévy process and a Brownian semimartingale. Using a nonparametric threshold estimator for the continuous component of the quadratic variation, we design a test for the presence of a continuous martingale component in the process and a test for establishing whether the jumps have finite or infinite variation, based on observations on a discrete-time grid. We evaluate the performance of our tests using simulations of various stochastic models and use the tests to investigate the fine structure of the DM/USD exchange rate fluctuations and SPX futures prices. In both cases, our tests reveal the presence of a non-zero Brownian component and a finite variation jump component.
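
    The threshold idea, discarding increments larger than a cutoff to isolate the continuous part of the quadratic variation, can be seen on a simulated Brownian-plus-jumps path; the threshold constant here is an illustrative choice, not the paper's tuning:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 10_000
    dt = 1.0 / n
    sigma = 0.2

    # Brownian increments plus a handful of large jumps.
    dX = sigma * np.sqrt(dt) * rng.normal(size=n)
    jump_times = rng.choice(n, 5, replace=False)
    dX[jump_times] += rng.normal(0, 0.5, 5)

    rv = np.sum(dX**2)                        # realized variance: sigma^2 + jumps
    u = 4 * sigma * np.sqrt(dt)               # illustrative threshold ~ sqrt(dt)
    trv = np.sum(dX[np.abs(dX) <= u] ** 2)    # truncated: continuous part only
    print(rv, trv)  # trv should be near sigma^2 * T = 0.04
    ```

    In the theory, the threshold typically shrinks like a power of the step size slightly slower than sqrt(dt); the fixed multiple of sqrt(dt) above is enough for a demonstration.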

  15. Nonparametric Transient Classification using Adaptive Wavelets

    CERN Document Server

    Varughese, Melvin M; Stephanou, Michael; Bassett, Bruce A

    2015-01-01

    Classifying transients based on multi-band light curves is a challenging but crucial problem in the era of GAIA and LSST, since the sheer volume of transients will make spectroscopic classification unfeasible. Here we present a nonparametric classifier that uses a transient's light curve measurements to predict its class given training data. It implements two novel components: the first is the use of the BAGIDIS wavelet methodology, a characterization of functional data using hierarchical wavelet coefficients. The second is the introduction of a ranked probability classifier on the wavelet coefficients that handles both the heteroscedasticity of the data and the potential non-representativity of the training set. The ranked classifier is simple and quick to implement, while a major advantage of the BAGIDIS wavelets is that they are translation invariant; hence they do not need the light curves to be aligned to extract features. Further, BAGIDIS is nonparametric, so it can be used for blind ...

  16. A Bayesian nonparametric meta-analysis model.

    Science.gov (United States)

    Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G

    2015-03-01

    In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall effect size, such models may be adequate, but for prediction, they surely are not if the effect-size distribution exhibits non-normal behavior. To address this issue, we propose a Bayesian nonparametric meta-analysis model, which can describe a wider range of effect-size distributions, including unimodal symmetric distributions, as well as skewed and more multimodal distributions. We demonstrate our model through the analysis of real meta-analytic data arising from behavioral-genetic research. We compare the predictive performance of the Bayesian nonparametric model against various conventional and more modern normal fixed-effects and random-effects models.

  17. Linkage between passenger demand and surrounding land-use patterns at urban rail transit stations: A canonical correlation analysis method and case study in Chongqing

    Directory of Open Access Journals (Sweden)

    Xin Li

    2016-08-01

    Full Text Available This paper employs a canonical correlation analysis method to quantify the linkage between urban rail transit station demand and the surrounding land-use patterns. It is used to identify key land-use variables by evaluating their degrees of contribution to rail transit station demand. A full month of IC card data and a detailed regulatory land-use plan from Chongqing, China are collected for model development and validation. The proposed model offers the capability of targeting key land-use patterns and associating them with rail transit station boarding and alighting demand simultaneously. It can reveal underlying relationships between rail transit station demand and land-use variables and can be used to evaluate Transit Oriented Development (TOD) plans to improve land use and transit operational efficiency.
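
    The core of canonical correlation analysis, as used in the record above, can be sketched in a few lines of numpy via QR whitening and an SVD. This is a minimal illustration, not the authors' implementation; the "land-use" and "demand" matrices below are random stand-ins for the IC-card and land-use data:

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between the column sets X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))   # orthonormal basis for X-space
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))   # orthonormal basis for Y-space
    # Singular values of the cross-projection are the canonical correlations
    return np.clip(np.linalg.svd(Qx.T @ Qy, compute_uv=False), 0.0, 1.0)

rng = np.random.default_rng(0)
land_use = rng.normal(size=(200, 3))            # stand-in land-use variables
demand = land_use @ rng.normal(size=(3, 2)) + 0.1 * rng.normal(size=(200, 2))
rho = canonical_correlations(land_use, demand)  # near 1: demand ~ land use
```

    Because the demand columns here are (noisy) linear functions of the land-use columns, the leading canonical correlation comes out close to one; with real station data, the magnitude of each correlation indicates how strongly a land-use pattern tracks boarding/alighting demand.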

  18. Bayesian nonparametric estimation for Quantum Homodyne Tomography

    OpenAIRE

    Naulet, Zacharie; Barat, Eric

    2016-01-01

    We estimate the quantum state of a light beam from results of quantum homodyne tomography noisy measurements performed on identically prepared quantum systems. We propose two Bayesian nonparametric approaches. The first approach is based on mixture models and is illustrated through simulation examples. The second approach is based on random basis expansions. We study the theoretical performance of the second approach by quantifying the rate of contraction of the posterior distribution around ...

  19. A non-parametric framework for estimating threshold limit values

    Directory of Open Access Journals (Sweden)

    Ulm Kurt

    2005-11-01

    Full Text Available Background: To estimate a threshold limit value for a compound known to have harmful health effects, an 'elbow' threshold model is usually applied. We are interested in flexible non-parametric alternatives. Methods: We describe how a step-function model fitted by isotonic regression can be used to estimate threshold limit values. This method returns a set of candidate locations, and we discuss two algorithms to select the threshold among them: reduced isotonic regression and an algorithm considering the closed family of hypotheses. We assess the performance of these two alternative approaches under different scenarios in a simulation study. We illustrate the framework by analysing data from a study conducted by the German Research Foundation aiming to set a threshold limit value for exposure to total dust at the workplace, as a causal agent for developing chronic bronchitis. Results: We demonstrate the use and properties of the proposed methodology along with the results from an application. The method detects the threshold with satisfactory success; however, its performance can be compromised by low power to reject the constant-risk assumption when the true dose-response relationship is weak. Conclusion: Threshold estimation within the isotonic framework is conceptually simple and sufficiently powerful. Given that there is no gold-standard method for threshold value estimation, the proposed model provides a useful non-parametric alternative to the standard approaches and can corroborate or challenge their findings.
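
    The isotonic step-function fit that this framework builds on is the pool-adjacent-violators (PAV) algorithm, which is compact enough to sketch directly. This is an illustrative pure-Python version, not the authors' code; the dose-response numbers are hypothetical:

```python
def pav(y, w=None):
    """Pool-adjacent-violators: least-squares nondecreasing fit to y."""
    w = w or [1.0] * len(y)
    levels, weights, counts = [], [], []
    for yi, wi in zip(y, w):
        levels.append(yi); weights.append(wi); counts.append(1)
        # Merge adjacent blocks while the monotonicity constraint is violated
        while len(levels) > 1 and levels[-2] > levels[-1]:
            tw = weights[-2] + weights[-1]
            lv = (levels[-2] * weights[-2] + levels[-1] * weights[-1]) / tw
            levels[-2:], weights[-2:] = [lv], [tw]
            counts[-2:] = [counts[-2] + counts[-1]]
    fit = []
    for lv, c in zip(levels, counts):
        fit.extend([lv] * c)
    return fit

# Hypothetical dose-sorted response rates; candidate thresholds are the
# doses at which the fitted step function jumps upward.
doses = [1, 2, 3, 4, 5, 6, 7, 8]
risks = [0.10, 0.08, 0.12, 0.10, 0.30, 0.25, 0.40, 0.45]
fit = pav(risks)
candidates = [d for d, a, b in zip(doses[1:], fit, fit[1:]) if b > a]
```

    The record's two selection algorithms (reduced isotonic regression, closed testing) then choose among exactly this kind of candidate set.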

  20. Introduction to nonparametric statistics for the biological sciences using R

    CERN Document Server

    MacFarland, Thomas W

    2016-01-01

    This book contains a rich set of tools for nonparametric analyses, and the purpose of this supplemental text is to provide guidance to students and professional researchers on how R is used for nonparametric data analysis in the biological sciences: to introduce when nonparametric approaches to data analysis are appropriate; to introduce the leading nonparametric tests commonly used in biostatistics and how R is used to generate appropriate statistics for each test; and to introduce common figures typically associated with nonparametric data analysis and how R is used to generate appropriate figures in support of each data set. The book focuses on how R is used to distinguish between data that could be classified as nonparametric as opposed to data that could be classified as parametric, with both approaches to data classification covered extensively. Following an introductory lesson on nonparametric statistics for the biological sciences, the book is organized into eight self-contained lessons on various analyses a...

  1. Sensor-less Field Oriented Control of Wind Turbine Driven Permanent Magnet Synchronous Generator Using Flux Linkage and Back EMF Estimation Methods

    Directory of Open Access Journals (Sweden)

    Porselvi Thayumanavan

    2014-05-01

    Full Text Available The study aims at speed control of the wind-turbine-driven Permanent Magnet Synchronous Generator (PMSG) by the sensor-less Field Oriented Control (FOC) method. Two sensor-less FOC methods are proposed to control the speed and torque of the PMSG. The PMSG and the full-scale converter have an increasing market share in variable-speed Wind Energy Conversion Systems (WECS). Compared to Induction Generators (IGs), PMSGs are smaller, easier to control and more efficient. In addition, the PMSG can operate at variable speeds, so that maximum power can be extracted even at low or medium wind speeds. Wind turbines generally employ speed sensors or shaft position encoders to determine the speed and position of the rotor. In order to reduce the cost, maintenance and complexity associated with the sensor, the sensor-less approach has been developed. This study presents sensor-less control techniques using the flux-linkage and back-EMF estimation methods. Simulations for both methods are carried out in MATLAB/SIMULINK. The simulated waveforms of the reference speed, measured speed, reference torque, measured torque and rotor position are shown for both methods.

  2. Inflammatory bowel disease gene hunting by linkage analysis: rationale, methodology, and present status of the field.

    Science.gov (United States)

    Brant, Steven R; Shugart, Yin Yao

    2004-05-01

    Observed inflammatory bowel disease (IBD) familial clustering and increased monozygotic twin concordance have led to the hypothesis that genetic loci containing IBD susceptibility genes can be identified by whole-genome linkage mapping approaches. A methodology has been established that includes collecting carefully phenotyped multiplex pedigrees, genotyping with highly informative microsatellite markers, and linkage analysis by non-parametric allele-sharing methods. Eleven published genome-wide screens (GWS) have studied more than 1,200 multiplex IBD pedigrees. Two-thirds of affected relative pairs were Crohn's disease (CD), 20% ulcerative colitis (UC), and the remainder mixed. Seven loci (IBD1-7) on chromosomes 16q, 12, 6p, 14q, 5q, 19, and 1p have been identified with genome-wide significant and independently replicated linkage. Risk alleles/haplotypes have been defined for the IBD1 (CARD15/NOD2), IBD3 (HLA) and IBD5 (5q cytokine cluster) loci. There is evidence for a second chromosome 16 locus (IBD8), independent of NOD2, that overlaps IBD1 on the pericentromeric p-arm. Several other regions show great promise for containing additional IBD loci, particularly chromosome 3p, with genome-wide evidence in one study at 3p26 and more centromeric evidence in several other studies, and chromosomes 2q, 3q, 4q, 7, 11p, and Xp, each with suggestive evidence of linkage in one study and additional evidence in two or more others. Single GWSs and fine-mapping studies containing very large sets of pedigrees, in particular more UC pedigrees, together with creative analytic and disease-stratification schemes, are required to identify, establish and refine weaker IBD loci.

  3. A high-density screen for linkage in multiple sclerosis.

    Science.gov (United States)

    Sawcer, Stephen; Ban, Maria; Maranian, Mel; Yeo, Tai Wai; Compston, Alastair; Kirby, Andrew; Daly, Mark J; De Jager, Philip L; Walsh, Emily; Lander, Eric S; Rioux, John D; Hafler, David A; Ivinson, Adrian; Rimmler, Jacqueline; Gregory, Simon G; Schmidt, Silke; Pericak-Vance, Margaret A; Akesson, Eva; Hillert, Jan; Datta, Pameli; Oturai, Annette; Ryder, Lars P; Harbo, Hanne F; Spurkland, Anne; Myhr, Kjell-Morten; Laaksonen, Mikko; Booth, David; Heard, Robert; Stewart, Graeme; Lincoln, Robin; Barcellos, Lisa F; Hauser, Stephen L; Oksenberg, Jorge R; Kenealy, Shannon J; Haines, Jonathan L

    2005-09-01

    To provide a definitive linkage map for multiple sclerosis, we have genotyped the Illumina BeadArray linkage mapping panel (version 4) in a data set of 730 multiplex families of Northern European descent. After the application of stringent quality thresholds, data from 4,506 markers in 2,692 individuals were included in the analysis. Multipoint nonparametric linkage analysis revealed highly significant linkage in the major histocompatibility complex (MHC) on chromosome 6p21 (maximum LOD score [MLS] 11.66) and suggestive linkage on chromosomes 17q23 (MLS 2.45) and 5q33 (MLS 2.18). This set of markers achieved a mean information extraction of 79.3% across the genome, with a Mendelian inconsistency rate of only 0.002%. Stratification based on carriage of the multiple sclerosis-associated DRB1*1501 allele failed to identify any other region of linkage with genomewide significance. However, ordered-subset analysis suggested that there may be an additional locus on chromosome 19p13 that acts independent of the main MHC locus. These data illustrate the substantial increase in power that can be achieved with use of the latest tools emerging from the Human Genome Project and indicate that future attempts to systematically identify susceptibility genes for multiple sclerosis will have to involve large sample sizes and an association-based methodology.

  4. Aspects of record linkage

    NARCIS (Netherlands)

    Schraagen, Marijn Paul

    2014-01-01

    This thesis is an exploration of the subject of historical record linkage. The general goal of historical record linkage is to discover relations between historical entities in a database, for any specific definition of relation, entity and database. Although this task originates from historical

  5. Subsidiary Linkage Patterns

    DEFF Research Database (Denmark)

    Andersson, Ulf; Perri, Alessandra; Nell, Phillip C.

    2012-01-01

    This paper investigates the pattern of subsidiaries' local vertical linkages under varying levels of competition and subsidiary capabilities. Contrary to most previous literature, we explicitly account for the double role of such linkages as conduits of learning prospects as well as potential...

  6. Stochastic Earthquake Rupture Modeling Using Nonparametric Co-Regionalization

    Science.gov (United States)

    Lee, Kyungbook; Song, Seok Goo

    2016-10-01

    Accurate predictions of the intensity and variability of ground motions are essential in simulation-based seismic hazard assessment. Advanced simulation-based ground motion prediction methods have been proposed to complement the empirical approach, which suffers from the lack of observed ground motion data, especially in the near-source region for large events. It is important to quantify the variability of the earthquake rupture process for future events and to produce a number of rupture scenario models to capture the variability in simulation-based ground motion predictions. In this study, we improved the previously developed stochastic earthquake rupture modeling method by applying the nonparametric co-regionalization, which was proposed in geostatistics, to the correlation models estimated from dynamically derived earthquake rupture models. The nonparametric approach adopted in this study is computationally efficient and, therefore, enables us to simulate numerous rupture scenarios, including large events (M > 7.0). It also gives us an opportunity to check the shape of true input correlation models in stochastic modeling after being deformed for permissibility. We expect that this type of modeling will improve our ability to simulate a wide range of rupture scenario models and thereby predict ground motions and perform seismic hazard assessment more accurately.

  7. Wavelet Estimators in Nonparametric Regression: A Comparative Simulation Study

    Directory of Open Access Journals (Sweden)

    Anestis Antoniadis

    2001-06-01

    Full Text Available Wavelet analysis has been found to be a powerful tool for the nonparametric estimation of spatially-variable objects. We discuss in detail wavelet methods in nonparametric regression, where the data are modelled as observations of a signal contaminated with additive Gaussian noise, and provide an extensive review of the vast literature of wavelet shrinkage and wavelet thresholding estimators developed to denoise such data. These estimators arise from a wide range of classical and empirical Bayes methods treating either individual or blocks of wavelet coefficients. We compare various estimators in an extensive simulation study on a variety of sample sizes, test functions, signal-to-noise ratios and wavelet filters. Because there is no single criterion that can adequately summarise the behaviour of an estimator, we use various criteria to measure performance in finite sample situations. Insight into the performance of these estimators is obtained from graphical outputs and numerical tables. In order to provide some hints of how these estimators should be used to analyse real data sets, a detailed practical step-by-step illustration of a wavelet denoising analysis on electrical consumption is provided. Matlab codes are provided so that all figures and tables in this paper can be reproduced.
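
    As a self-contained taste of the wavelet shrinkage idea this record surveys, here is a single-level Haar transform with universal soft thresholding - far simpler than the estimators compared in the study, and with an invented step signal and noise level:

```python
import math, random

def haar(x):
    """One level of the Haar wavelet transform (len(x) must be even)."""
    s2 = math.sqrt(2.0)
    approx = [(a + b) / s2 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / s2 for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def ihaar(approx, detail):
    """Inverse of the single-level Haar transform."""
    s2 = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x += [(a + d) / s2, (a - d) / s2]
    return x

def soft(coeffs, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

# Denoise a noisy step signal: threshold the detail coefficients only
random.seed(0)
clean = [0.0] * 32 + [5.0] * 32
noisy = [c + random.gauss(0.0, 0.5) for c in clean]
a, d = haar(noisy)
sigma = 0.5                                        # noise level, known here
t = sigma * math.sqrt(2.0 * math.log(len(noisy)))  # universal threshold
denoised = ihaar(a, soft(d, t))
```

    The clean step signal has (almost) no energy in the Haar detail coefficients, so shrinking them removes mostly noise; real wavelet denoising iterates this over several decomposition levels and smoother wavelet filters.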

  8. Bayesian nonparametric dictionary learning for compressed sensing MRI.

    Science.gov (United States)

    Huang, Yue; Paisley, John; Lin, Qin; Ding, Xinghao; Fu, Xueyang; Zhang, Xiao-Ping

    2014-12-01

    We develop a Bayesian nonparametric model for reconstructing magnetic resonance images (MRIs) from highly undersampled k-space data. We perform dictionary learning as part of the image reconstruction process. To this end, we use the beta process as a nonparametric dictionary learning prior for representing an image patch as a sparse combination of dictionary elements. The size of the dictionary and patch-specific sparsity pattern are inferred from the data, in addition to other dictionary learning variables. Dictionary learning is performed directly on the compressed image, and so is tailored to the MRI being considered. In addition, we investigate a total variation penalty term in combination with the dictionary learning model, and show how the denoising property of dictionary learning removes dependence on regularization parameters in the noisy setting. We derive a stochastic optimization algorithm based on Markov chain Monte Carlo for the Bayesian model, and use the alternating direction method of multipliers for efficiently performing total variation minimization. We present empirical results on several MRIs, which show that the proposed regularization framework can improve reconstruction accuracy over other methods.

  9. Multivariate nonparametric regression and visualization with R and applications to finance

    CERN Document Server

    Klemelä, Jussi

    2014-01-01

    A modern approach to statistical learning and its applications through visualization methods With a unique and innovative presentation, Multivariate Nonparametric Regression and Visualization provides readers with the core statistical concepts to obtain complete and accurate predictions when given a set of data. Focusing on nonparametric methods to adapt to the multiple types of data generatingmechanisms, the book begins with an overview of classification and regression. The book then introduces and examines various tested and proven visualization techniques for learning samples and functio

  10. Of River Linkage and Issue Linkage

    NARCIS (Netherlands)

    Warner, Jeroen Frank

    2016-01-01

    It is a truism in mainstream International Relations that issue linkage promotes regime formation and integration. The present article applies this idea to the transboundary lower river Meuse and finds its history of integration to be a tortuous one. Contextual political factors have at times

  11. Analyzing single-molecule time series via nonparametric Bayesian inference.

    Science.gov (United States)

    Hines, Keegan E; Bankston, John R; Aldrich, Richard W

    2015-02-03

    The ability to measure the properties of proteins at the single-molecule level offers an unparalleled glimpse into biological systems at the molecular scale. The interpretation of single-molecule time series has often been rooted in statistical mechanics and the theory of Markov processes. While existing analysis methods have been useful, they are not without significant limitations including problems of model selection and parameter nonidentifiability. To address these challenges, we introduce the use of nonparametric Bayesian inference for the analysis of single-molecule time series. These methods provide a flexible way to extract structure from data instead of assuming models beforehand. We demonstrate these methods with applications to several diverse settings in single-molecule biophysics. This approach provides a well-constrained and rigorously grounded method for determining the number of biophysical states underlying single-molecule data. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  12. Nonparametric estimation of population density for line transect sampling using FOURIER series

    Science.gov (United States)

    Crain, B.R.; Burnham, K.P.; Anderson, D.R.; Lake, J.L.

    1979-01-01

    A nonparametric, robust density estimation method is explored for the analysis of right-angle distances from a transect line to the objects sighted. The method is based on the FOURIER series expansion of a probability density function over an interval. With only mild assumptions, a general population density estimator of wide applicability is obtained.
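
    The cosine-series estimator described above is compact enough to sketch directly. Below is an illustrative pure-Python version; the sighting distances, half-width `w`, and truncation point `m` are invented for the example, not taken from the record:

```python
import math

def fourier_density(xs, w, m=4):
    """FOURIER-series density estimate on [0, w] from sighting distances xs.

    m is the number of cosine terms and acts as a smoothing parameter.
    Returns a function f(x) estimating the probability density.
    """
    n = len(xs)
    # Series coefficients: a_k = (2 / (n * w)) * sum_i cos(k * pi * x_i / w)
    a = [2.0 / (n * w) * sum(math.cos(k * math.pi * x / w) for x in xs)
         for k in range(1, m + 1)]

    def f(x):
        return 1.0 / w + sum(a[k - 1] * math.cos(k * math.pi * x / w)
                             for k in range(1, m + 1))
    return f

# In line-transect work, the density at the line, f(0), drives the
# abundance estimate. Distances here are made up for illustration.
distances = [0.1, 0.3, 0.2, 0.8, 0.5, 0.05, 1.2, 0.4, 0.6, 0.15]
f = fourier_density(distances, w=1.5, m=2)
```

    Because the cosine terms integrate to zero over [0, w], the estimate integrates to one by construction, and with sightings clustered near the line f(0) exceeds the uniform density 1/w.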

  13. A nonparametric and diversified portfolio model

    Science.gov (United States)

    Shirazi, Yasaman Izadparast; Sabiruzzaman, Md.; Hamzah, Nor Aishah

    2014-07-01

    Traditional portfolio models, like mean-variance (MV), suffer from estimation error and lack of diversity. Alternatives, like mean-entropy (ME) or mean-variance-entropy (MVE) portfolio models, focus independently on the issue of either a proper risk measure or diversity. In this paper, we propose an asset allocation model that compromises between the risk of historical data and future uncertainty. In the new model, entropy serves as a nonparametric risk measure as well as an index of diversity. Our empirical evaluation with a variety of performance measures shows that this model has better out-of-sample performance and lower portfolio turnover than its competitors.
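
    The simplest way entropy enters models of this kind is as a diversity index on the portfolio weights. A minimal sketch (the weight vectors are invented, and this is only the diversity side of the record's dual use of entropy):

```python
import math

def weight_entropy(w):
    """Shannon entropy of portfolio weights: higher means more diversified."""
    return -sum(wi * math.log(wi) for wi in w if wi > 0)

uniform      = [0.25, 0.25, 0.25, 0.25]   # maximally diversified over 4 assets
concentrated = [0.70, 0.10, 0.10, 0.10]   # most weight in one asset
```

    Uniform weights attain the maximum entropy log(n), while a fully concentrated portfolio scores zero; an entropy-aware objective therefore penalizes concentration.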

  14. Parametric versus non-parametric simulation

    OpenAIRE

    Dupeux, Bérénice; Buysse, Jeroen

    2014-01-01

    Most ex-ante impact-assessment policy models have been based on a parametric approach. We develop a novel non-parametric approach, called Inverse DEA. We use non-parametric efficiency analysis to determine the farm's technology and behaviour. Then, we compare the parametric approach and the Inverse DEA models against a known data-generating process. We use a bio-economic model as the data-generating process, reflecting a real-world situation where non-linear relationships often exist. Results s...

  15. Preliminary results on nonparametric facial occlusion detection

    Directory of Open Access Journals (Sweden)

    Daniel LÓPEZ SÁNCHEZ

    2016-10-01

    Full Text Available The problem of face recognition has been extensively studied in the available literature; however, some aspects of this field require further research. The design and implementation of face recognition systems that can efficiently handle unconstrained conditions (e.g. pose variations, illumination, partial occlusion, ...) is still an area under active research. This work focuses on the design of a new nonparametric occlusion detection technique. In addition, we present preliminary results indicating that the proposed technique might be useful to face recognition systems, allowing them to dynamically discard occluded face parts.

  16. Genomic breeding value estimation using nonparametric additive regression models

    Directory of Open Access Journals (Sweden)

    Solberg Trygve

    2009-01-01

    Full Text Available Abstract Genomic selection refers to the use of genome-wide dense markers for breeding value estimation and, subsequently, for selection. The main challenge of genomic breeding value estimation is estimating many effects from a limited number of observations. Bayesian methods have been proposed to cope successfully with these challenges. As an alternative class of models, non- and semiparametric models were recently introduced. The present study investigated the ability of nonparametric additive regression models to predict genomic breeding values. The genotypes were modelled for each marker or pair of flanking markers (i.e. the predictors) separately. The nonparametric functions for the predictors were estimated simultaneously using additive model theory, applying a binomial kernel. The optimal degree of smoothing was determined by bootstrapping. A mutation-drift-balance simulation was carried out. The breeding values of the last generation (genotyped) were predicted using data from the next-to-last generation (genotyped and phenotyped). The results show moderate to high accuracies of the predicted breeding values. Determining a predictor-specific degree of smoothing increased the accuracy.
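
    The kernel-smoothing idea underlying such models can be shown in its simplest univariate form: Nadaraya-Watson regression with a Gaussian kernel (the study itself uses a binomial kernel within an additive model, and the "genotype score" data below are synthetic):

```python
import math

def nadaraya_watson(xs, ys, h):
    """Kernel-weighted local average: y_hat(x) = sum_i w_i(x) y_i / sum_i w_i(x)."""
    def m(x):
        w = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
        return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    return m

# Synthetic predictor -> phenotype data with a nonlinear trend
xs = [i / 20.0 for i in range(21)]
ys = [x * x for x in xs]
m = nadaraya_watson(xs, ys, h=0.05)
```

    An additive model of the kind the record describes fits one such smooth per predictor (marker) and sums them; the bandwidth `h` plays the role of the "degree of smoothing" chosen by bootstrapping.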

  17. Nonparametric Analyses of Log-Periodic Precursors to Financial Crashes

    Science.gov (United States)

    Zhou, Wei-Xing; Sornette, Didier

    We apply two nonparametric methods to further test the hypothesis that log-periodicity characterizes the detrended price trajectory of large financial indices prior to financial crashes or strong corrections. The term "parametric" refers here to the use of the log-periodic power law formula to fit the data; in contrast, "nonparametric" refers to the use of general tools such as Fourier transform, and in the present case the Hilbert transform and the so-called (H, q)-analysis. The analysis using the (H, q)-derivative is applied to seven time series ending with the October 1987 crash, the October 1997 correction and the April 2000 crash of the Dow Jones Industrial Average (DJIA), the Standard & Poor 500 and Nasdaq indices. The Hilbert transform is applied to two detrended price time series in terms of the ln(tc-t) variable, where tc is the time of the crash. Taking all results together, we find strong evidence for a universal fundamental log-frequency f=1.02±0.05 corresponding to the scaling ratio λ=2.67±0.12. These values are in very good agreement with those obtained in earlier works with different parametric techniques. This note is extracted from a long unpublished report with 58 figures available at , which extensively describes the evidence we have accumulated on these seven time series, in particular by presenting all relevant details so that the reader can judge for himself or herself the validity and robustness of the results.

  18. Bayesian nonparametric centered random effects models with variable selection.

    Science.gov (United States)

    Yang, Mingan

    2013-03-01

    In a linear mixed effects model, it is common practice to assume that the random effects follow a parametric distribution such as a normal distribution with mean zero. However, in the case of variable selection, substantial violation of the normality assumption can potentially impact the subset selection and result in poor interpretation and even incorrect results. In nonparametric random effects models, the random effects generally have a nonzero mean, which causes an identifiability problem for the fixed effects that are paired with the random effects. In this article, we focus on a Bayesian method for variable selection. We characterize the subject-specific random effects nonparametrically with a Dirichlet process and resolve the bias simultaneously. In particular, we propose flexible modeling of the conditional distribution of the random effects with changes across the predictor space. The approach is implemented using a stochastic search Gibbs sampler to identify subsets of fixed effects and random effects to be included in the model. Simulations are provided to evaluate and compare the performance of our approach to the existing ones. We then apply the new approach to a real data example, cross-country and interlaboratory rodent uterotrophic bioassay.

  19. An estimating function approach to linkage heterogeneity

    Indian Academy of Sciences (India)

    He Gao; Ying Zhou; Weijun Ma; Haidong Liu; Linan Zhao

    2013-12-01

    Testing linkage heterogeneity between two loci is an important issue in genetics. Currently, there are four methods (K-test, A-test, B-test and D-test) for testing linkage heterogeneity in linkage analysis, all based on the likelihood-ratio test. Among them, the most commonly used are the K-test and A-test. In this paper, we present a novel test, called the G-test, which differs from the above four. The new test statistic is based on an estimating function and possesses a known asymptotic distribution, which gives it advantages of its own. The proposed test is applied to analyse a real pedigree dataset. Our simulation results also indicate that the G-test performs well in terms of power for testing linkage heterogeneity and outperforms the current methods to some degree.

  20. Local kernel nonparametric discriminant analysis for adaptive extraction of complex structures

    Science.gov (United States)

    Li, Quanbao; Wei, Fajie; Zhou, Shenghan

    2017-05-01

    Linear discriminant analysis (LDA) is one of the most popular methods for linear feature extraction. It usually performs well when the global data structure is consistent with the local data structure. Other frequently used feature-extraction approaches usually require linearity, independence, or large-sample conditions. However, in real-world applications, these assumptions are not always satisfied and cannot always be tested. In this paper, we introduce an adaptive method, local kernel nonparametric discriminant analysis (LKNDA), which integrates conventional discriminant analysis with nonparametric statistics. LKNDA is adept at identifying both complex nonlinear structures and ad hoc rules. Six simulation cases demonstrate that LKNDA combines the advantages of parametric and nonparametric algorithms and achieves higher classification accuracy. A quartic unilateral kernel function may provide better prediction robustness than other functions. LKNDA offers an alternative solution for discriminant cases involving complex nonlinear feature extraction or unknown features. Finally, an application of LKNDA to the complex feature extraction of financial market activities is proposed.

  1. Non-parametric seismic hazard analysis in the presence of incomplete data

    Science.gov (United States)

    Yazdani, Azad; Mirzaei, Sajjad; Dadkhah, Koroush

    2017-01-01

    The distribution of earthquake magnitudes plays a crucial role in the estimation of seismic hazard parameters. Due to the complexity of earthquake magnitude distributions, non-parametric approaches are recommended over classical parametric methods. The main deficiency of the non-parametric approach is the lack of complete magnitude data in almost all cases. This study introduces an imputation procedure for completing earthquake catalog data so that the catalog can be used for non-parametric density estimation. Using a Monte Carlo simulation, the efficiency of the introduced approach is investigated. This study indicates that when a magnitude catalog is incomplete, the imputation procedure can provide an appropriate tool for seismic hazard assessment. As an illustration, the imputation procedure was applied to estimate the earthquake magnitude distribution in Tehran, the capital city of Iran.
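
    The density-estimation step that the completed catalog feeds into can be illustrated with a basic Gaussian kernel density estimator. This is a generic sketch, not the study's estimator; the magnitude sample and the Silverman bandwidth rule are illustrative choices:

```python
import math

def gaussian_kde(data, h):
    """Gaussian kernel density estimate with bandwidth h."""
    n = len(data)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    def f(x):
        return c * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data)
    return f

# Toy magnitude sample; Silverman's rule of thumb sets the bandwidth
mags = [4.1, 4.3, 4.2, 4.8, 5.0, 5.5, 4.4, 4.6, 5.2, 4.9]
n = len(mags)
mean = sum(mags) / n
sd = (sum((x - mean) ** 2 for x in mags) / (n - 1)) ** 0.5
h = 1.06 * sd * n ** (-1 / 5)
f = gaussian_kde(mags, h)
```

    With an incomplete catalog, the imputation procedure in the record supplies the missing magnitudes before an estimator like this is applied.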

  2. Nonparametric estimation of stochastic differential equations with sparse Gaussian processes

    Science.gov (United States)

    García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.

    2017-08-01

    The application of stochastic differential equations (SDEs) to the analysis of temporal data has attracted increasing attention, due to their ability to describe complex dynamics with physically interpretable equations. In this paper, we introduce a nonparametric method for estimating the drift and diffusion terms of SDEs from a densely observed discrete time series. The use of Gaussian processes as priors permits working directly in a function-space view and thus the inference takes place directly in this space. To cope with the computational complexity that requires the use of Gaussian processes, a sparse Gaussian process approximation is provided. This approximation permits the efficient computation of predictions for the drift and diffusion terms by using a distribution over a small subset of pseudosamples. The proposed method has been validated using both simulated data and real data from economy and paleoclimatology. The application of the method to real data demonstrates its ability to capture the behavior of complex systems.
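
    A much simpler cousin of the GP approach in this record is a binned conditional-moment estimator of the drift, which already conveys the idea of estimating SDE terms nonparametrically from a discrete time series. The Ornstein-Uhlenbeck example below is synthetic and not from the paper:

```python
import math
import random

random.seed(1)
# Simulate an Ornstein-Uhlenbeck path: dX = -theta * X dt + sigma dW
theta, sigma, dt, n = 2.0, 0.5, 0.01, 200_000
x, path = 0.0, []
for _ in range(n):
    path.append(x)
    x += -theta * x * dt + sigma * dt ** 0.5 * random.gauss(0.0, 1.0)

def drift_estimate(path, dt, lo=-0.8, hi=0.8, nbins=16):
    """Binned drift estimate: a(x) ~ E[X(t+dt) - X(t) | X(t) = x] / dt."""
    width = (hi - lo) / nbins
    sums, counts = [0.0] * nbins, [0] * nbins
    for x0, x1 in zip(path, path[1:]):
        b = math.floor((x0 - lo) / width)
        if 0 <= b < nbins:
            sums[b] += x1 - x0
            counts[b] += 1
    centers = [lo + (b + 0.5) * width for b in range(nbins)]
    drifts = [s / c / dt if c else float("nan") for s, c in zip(sums, counts)]
    return centers, drifts

centers, drifts = drift_estimate(path, dt)   # should recover a(x) ~ -2x
```

    The GP prior in the record replaces these hard bins with a smooth function-space posterior (and adds a diffusion estimate), but the target quantity is the same conditional moment.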

  3. Indoor Positioning Using Nonparametric Belief Propagation Based on Spanning Trees

    Directory of Open Access Journals (Sweden)

    Savic Vladimir

    2010-01-01

    Full Text Available Nonparametric belief propagation (NBP) is one of the best-known methods for cooperative localization in sensor networks. It is capable of providing location estimates with appropriate uncertainty and of accommodating non-Gaussian distance measurement errors. However, the accuracy of NBP is questionable in loopy networks. Therefore, in this paper, we propose a novel approach, NBP based on spanning trees (NBP-ST), created by the breadth-first search (BFS) method. In addition, we propose a reliable indoor model based on measurements obtained in our lab. According to our simulation results, NBP-ST performs better than NBP in terms of accuracy and communication cost in networks with high connectivity (i.e., highly loopy networks). Furthermore, the computational and communication costs are nearly constant with respect to the transmission radius. However, the drawbacks of the proposed method are a slightly higher computational cost and poor performance in sparsely connected networks.

  4. Evaluation of Nonparametric Probabilistic Forecasts of Wind Power

    DEFF Research Database (Denmark)

    Pinson, Pierre; Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg

    Predictions of wind power production for horizons up to 48-72 hours ahead comprise a highly valuable input to the methods for the daily management or trading of wind generation. Today, users of wind power predictions are not only provided with point predictions, which are estimates of the most likely outcome for each look-ahead time, but also with uncertainty estimates given by probabilistic forecasts. In order to avoid assumptions on the shape of predictive distributions, these probabilistic predictions are produced from nonparametric methods, and then take the form of a single or a set of quantile forecasts. The required and desirable properties of such probabilistic forecasts are defined and a framework for their evaluation is proposed. This framework is applied for evaluating the quality of two statistical methods producing full predictive distributions from point predictions of wind…
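
    Two standard ingredients of such an evaluation framework are the pinball (quantile) loss and the empirical coverage of quantile forecasts. The sketch below shows both on mock data; it is a generic illustration, not the specific skill scores defined in the paper:

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    """Mean quantile (pinball) loss of a tau-quantile forecast; in
    expectation it is minimised by the true tau-quantile of y."""
    y = np.asarray(y, dtype=float)
    q_pred = np.asarray(q_pred, dtype=float)
    diff = y - q_pred
    return float(np.mean(np.where(diff >= 0.0, tau * diff, (tau - 1.0) * diff)))

def empirical_coverage(y, q_pred):
    """Fraction of outcomes at or below the quantile forecast; for a
    reliable tau-quantile forecast this should be close to tau."""
    return float(np.mean(np.asarray(y) <= np.asarray(q_pred)))

# Mock evaluation: standard-normal outcomes, forecasting the 0.9 quantile.
rng = np.random.default_rng(42)
y = rng.normal(0.0, 1.0, 20_000)
good = np.full_like(y, 1.2816)   # the true N(0,1) 0.9-quantile
bad = np.full_like(y, 0.0)       # a mis-calibrated forecast (the median)
```

    A reliable forecast attains coverage close to its nominal level and a lower pinball loss than any mis-calibrated alternative.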

  5. Subsidiary Linkage Patterns

    DEFF Research Database (Denmark)

    Perri, Alessandra; Andersson, Ulf; Nell, Phillip C.;

    This paper investigates local vertical linkages of foreign subsidiaries and the dual role of such linkages as conduits for learning as well as potential channels for spillovers to competitors. On the basis of data from 97 subsidiaries, we analyze the quality of such linkages under varying levels of competition and subsidiary capabilities. Our theoretical development and the results from the analysis document a far more complex and dynamic relationship between levels of competition and MNCs’ local participation in knowledge intensive activities, i.e. learning and spillovers, than previous studies do. We…

  6. Parametrically guided estimation in nonparametric varying coefficient models with quasi-likelihood.

    Science.gov (United States)

    Davenport, Clemontina A; Maity, Arnab; Wu, Yichao

    2015-04-01

    Varying coefficient models allow us to generalize standard linear regression models to incorporate complex covariate effects by modeling the regression coefficients as functions of another covariate. For nonparametric varying coefficients, we can borrow the idea of parametrically guided estimation to improve asymptotic bias. In this paper, we develop a guided estimation procedure for the nonparametric varying coefficient models. Asymptotic properties are established for the guided estimators and a method of bandwidth selection via bias-variance tradeoff is proposed. We compare the performance of the guided estimator with that of the unguided estimator via both simulation and real data examples.

  7. Bayesian Nonparametric Clustering for Positive Definite Matrices.

    Science.gov (United States)

    Cherian, Anoop; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2016-05-01

    Symmetric Positive Definite (SPD) matrices emerge as data descriptors in several applications of computer vision, such as object tracking, texture recognition, and diffusion tensor imaging. Clustering these data matrices forms an integral part of these applications, for which soft-clustering algorithms (K-Means, expectation maximization, etc.) are generally used. As is well known, these algorithms need the number of clusters to be specified, which is difficult when the dataset scales. To address this issue, we resort to the classical nonparametric Bayesian framework by modeling the data as a mixture model using the Dirichlet process (DP) prior. Since these matrices do not conform to Euclidean geometry, but rather belong to a curved Riemannian manifold, existing DP models cannot be directly applied. Thus, in this paper, we propose a novel DP mixture model framework for SPD matrices. Using the log-determinant divergence as the underlying dissimilarity measure to compare these matrices, and further using the connection between this measure and the Wishart distribution, we derive a novel DPM model based on the Wishart-Inverse-Wishart conjugate pair. We apply this model to several applications in computer vision. Our experiments demonstrate that our model is scalable to the dataset size and at the same time achieves superior accuracy compared to several state-of-the-art parametric and nonparametric clustering algorithms.
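
    The log-determinant (Burg) divergence used as the dissimilarity measure has a short closed form; the sketch below computes it for small SPD matrices (a standalone illustration of the measure, not of the DP mixture model built on it):

```python
import numpy as np

def logdet_divergence(X, Y):
    """Log-determinant (Burg) matrix divergence between SPD matrices:
    D(X, Y) = tr(X Y^-1) - log det(X Y^-1) - n.
    Nonnegative, and zero exactly when X == Y."""
    n = X.shape[0]
    M = X @ np.linalg.inv(Y)
    sign, logdet = np.linalg.slogdet(M)   # stable log-determinant
    return float(np.trace(M) - logdet - n)

A = np.array([[2.0, 0.0], [0.0, 1.0]])
I = np.eye(2)
```

    For A = diag(2, 1) against the identity, D = 3 - log 2 - 2, a small positive value, while D(I, I) = 0.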

  8. A nonparametric real option decision model based on the minimum relative entropy method

    Institute of Scientific and Technical Information of China (English)

    吕世瑜; 刘北上; 邱菀华

    2011-01-01

    Based on the polynomial option pricing model proposed by Copeland et al., a multi-stage nonparametric real option model for venture capital valuation is established by introducing the minimum relative entropy principle. Empirical analysis shows that the model can effectively reduce subjective influences, since it draws its conclusions from the information collected on the risk project rather than from the parameter assumptions on which most previously proposed pricing models rely, thereby improving the model's practical utility.
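
    The minimum relative entropy principle invoked here has a classical one-period form: among all distributions satisfying a moment (e.g. martingale) constraint, the one closest in KL divergence to the prior is an exponential tilt. The sketch below solves this for a three-state payoff lattice; it is an illustrative assumption-laden toy, not the paper's multi-stage model:

```python
import math

def min_rel_entropy_dist(p, x, target):
    """Distribution q minimising KL(q || p) subject to E_q[x] = target.
    The solution is the exponential tilt q_i proportional to p_i*exp(lam*x_i);
    the multiplier lam is found by bisection (E_q[x] is increasing in lam)."""
    def tilted_mean(lam):
        w = [pi * math.exp(lam * xi) for pi, xi in zip(p, x)]
        s = sum(w)
        return sum(wi * xi for wi, xi in zip(w, x)) / s
    lo, hi = -60.0, 60.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tilted_mean(mid) < target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [pi * math.exp(lam * xi) for pi, xi in zip(p, x)]
    s = sum(w)
    return [wi / s for wi in w]

# One-period sketch: three payoff states, uniform prior, risk-neutral mean 0.
q = min_rel_entropy_dist([1 / 3, 1 / 3, 1 / 3], [-1.0, 0.0, 2.0], 0.0)
```

    Enforcing a zero risk-neutral drift with the smallest possible departure from the prior is exactly the kind of information-based (rather than parameter-based) step the abstract describes.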

  9. Robust Depth-Weighted Wavelet for Nonparametric Regression Models

    Institute of Scientific and Technical Information of China (English)

    Lu LIN

    2005-01-01

    In nonparametric regression models, the original regression estimators, including the kernel estimator, Fourier series estimator, and wavelet estimator, are constructed as weighted sums of the data, where the weights depend only on the distance between the design points and the estimation points. As a result, these estimators are not robust to perturbations in the data. To avoid this problem, a new nonparametric regression model, called the depth-weighted regression model, is introduced, and the depth-weighted wavelet estimator is then defined. The new estimator is robust to perturbations in the data, attaining a very high breakdown value close to 1/2. On the other hand, some asymptotic behaviours, such as asymptotic normality, are obtained. Simulations illustrate that the proposed wavelet estimator is more robust than the original wavelet estimator and, as a price to pay for the robustness, the new method is slightly less efficient than the original method.

  10. Analyzing multiple spike trains with nonparametric Granger causality.

    Science.gov (United States)

    Nedungadi, Aatira G; Rangarajan, Govindan; Jain, Neeraj; Ding, Mingzhou

    2009-08-01

    Simultaneous recordings of spike trains from multiple single neurons are becoming commonplace. Understanding the interaction patterns among these spike trains remains a key research area. A question of interest is the evaluation of information flow between neurons through the analysis of whether one spike train exerts causal influence on another. For continuous-valued time series data, Granger causality has proven an effective method for this purpose. However, the basis for Granger causality estimation is autoregressive data modeling, which is not directly applicable to spike trains. Various filtering options distort the properties of spike trains as point processes. Here we propose a new nonparametric approach to estimate Granger causality directly from the Fourier transforms of spike train data. We validate the method on synthetic spike trains generated by model networks of neurons with known connectivity patterns and then apply it to neurons simultaneously recorded from the thalamus and the primary somatosensory cortex of a squirrel monkey undergoing tactile stimulation.

  11. DYNAMIC DESIGN OF VARIABLE SPEED PLANAR LINKAGES

    Institute of Scientific and Technical Information of China (English)

    Yao Yanan; Yan Hongsen; Zou Huijun

    2005-01-01

    A method for improving the dynamic characteristics of planar linkages by actively varying the speed function of the input link is presented. Design criteria and constraints for the dynamic design of variable-speed planar linkages are developed. Both analytical and optimization approaches are derived for determining suitable input speed functions that minimize the driving torque, the shaking moment, or both simultaneously, subject to various design requirements and constraints. Finally, some examples are given to illustrate the design procedure and to verify its feasibility.

  12. Nonparametric estimation of the stationary M/G/1 workload distribution function

    DEFF Research Database (Denmark)

    Hansen, Martin Bøgsted

    2005-01-01

    In this paper it is demonstrated how a nonparametric estimator of the stationary workload distribution function of the M/G/1 queue can be obtained by systematically sampling the workload process. Weak convergence results and bootstrap methods for empirical distribution functions for stationary associ…

  13. Non-parametric system identification from non-linear stochastic response

    DEFF Research Database (Denmark)

    Rüdinger, Finn; Krenk, Steen

    2001-01-01

    An estimation method is proposed for identification of non-linear stiffness and damping of single-degree-of-freedom systems under stationary white noise excitation. Non-parametric estimates of the stiffness and damping along with an estimate of the white noise intensity are obtained by suitable p...

  14. Nonparametric Independence Screening in Sparse Ultra-High Dimensional Varying Coefficient Models.

    Science.gov (United States)

    Fan, Jianqing; Ma, Yunbei; Dai, Wei

    2014-01-01

    The varying-coefficient model is an important class of nonparametric statistical model that allows us to examine how the effects of covariates vary with exposure variables. When the number of covariates is large, the issue of variable selection arises. In this paper, we propose and investigate marginal nonparametric screening methods to screen variables in sparse ultra-high dimensional varying-coefficient models. The proposed nonparametric independence screening (NIS) selects variables by ranking a measure of the nonparametric marginal contributions of each covariate given the exposure variable. The sure independent screening property is established under some mild technical conditions when the dimensionality is of nonpolynomial order, and the dimensionality reduction of NIS is quantified. To enhance the practical utility and finite sample performance, two data-driven iterative NIS methods are proposed for selecting thresholding parameters and variables: conditional permutation and greedy methods, resulting in Conditional-INIS and Greedy-INIS. The effectiveness and flexibility of the proposed methods are further illustrated by simulation studies and real data applications.

  15. Nonparametric k-nearest-neighbor entropy estimator.

    Science.gov (United States)

    Lombardi, Damiano; Pant, Sanjay

    2016-01-01

    A nonparametric k-nearest-neighbor-based entropy estimator is proposed. It improves on the classical Kozachenko-Leonenko estimator by considering nonuniform probability densities in the region of k-nearest neighbors around each sample point. It aims to improve the classical estimators in three situations: first, when the dimensionality of the random variable is large; second, when near-functional relationships leading to high correlation between components of the random variable are present; and third, when the marginal variances of random variable components vary significantly with respect to each other. Heuristics on the error of the proposed and classical estimators are presented. Finally, the proposed estimator is tested for a variety of distributions in successively increasing dimensions and in the presence of a near-functional relationship. Its performance is compared with a classical estimator, and a significant improvement is demonstrated.
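
    The classical Kozachenko-Leonenko estimator that this paper improves upon can be written compactly; the brute-force sketch below (function name and the N(0,1) test are illustrative assumptions, and pairwise distances make it O(N^2)) shows the baseline, not the proposed nonuniform-density refinement:

```python
import math
import numpy as np

def kl_entropy(x, k=3):
    """Kozachenko-Leonenko k-NN differential entropy estimate (in nats):
    H ~= psi(N) - psi(k) + log V_d + (d/N) * sum_i log eps_i,
    where eps_i is the distance to the k-th nearest neighbor of x_i and
    V_d is the volume of the d-dimensional unit ball."""
    x = np.asarray(x, dtype=float)
    n, d = x.shape
    dists = np.sqrt(((x[:, None, :] - x[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)              # exclude self-distance
    eps = np.sort(dists, axis=1)[:, k - 1]       # k-th neighbor distance

    def digamma_int(m):                          # psi(m) for integer m >= 1
        return -0.5772156649015329 + sum(1.0 / i for i in range(1, m))

    log_vd = (d / 2.0) * math.log(math.pi) - math.lgamma(d / 2.0 + 1.0)
    return digamma_int(n) - digamma_int(k) + log_vd + d * float(np.mean(np.log(eps)))

# Standard normal in 1-D: true differential entropy is 0.5*log(2*pi*e) ~ 1.419.
rng = np.random.default_rng(1)
h = kl_entropy(rng.normal(0.0, 1.0, size=(1000, 1)), k=3)
```

    The failure modes listed in the abstract (high dimension, near-functional dependence, wildly differing marginal scales) are precisely where this uniform-density assumption inside the k-NN ball breaks down.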

  16. On Parametric (and Non-Parametric Variation

    Directory of Open Access Journals (Sweden)

    Neil Smith

    2009-11-01

    Full Text Available This article raises the issue of the correct characterization of ‘Parametric Variation’ in syntax and phonology. After specifying their theoretical commitments, the authors outline the relevant parts of the Principles–and–Parameters framework, and draw a three-way distinction among Universal Principles, Parameters, and Accidents. The core of the contribution then consists of an attempt to provide identity criteria for parametric, as opposed to non-parametric, variation. Parametric choices must be antecedently known, and it is suggested that they must also satisfy seven individually necessary and jointly sufficient criteria. These are that they be cognitively represented, systematic, dependent on the input, deterministic, discrete, mutually exclusive, and irreversible.

  17. Assessing Point and Interval Estimation for the Mediating Effect: Distribution of the Product, Nonparametric Bootstrap and Markov Chain Monte Carlo Methods

    Institute of Scientific and Technical Information of China (English)

    方杰; 张敏强

    2012-01-01

    Because the sampling distribution of the mediated effect ab is rarely normal, three classes of methods have been proposed in recent years that place no restriction on the sampling distribution of ab and are suitable for small and medium samples: the distribution-of-the-product method, the nonparametric bootstrap, and Markov chain Monte Carlo (MCMC) methods. These supplement the classic approaches to assessing mediation (Baron & Kenny, 1986; Sobel, 1982), yet little is known about how they perform relative to each other. This study extends the work of MacKinnon and colleagues (MacKinnon, Lockwood & Williams, 2004; Yuan & MacKinnon, 2009) by conducting a simulation in R. Three factors were considered in the design: (a) sample size (N = 25, 50, 100, 200, 1000); (b) parameter combinations (a = b = 0; a = 0.39, b = 0; a = 0, b = 0.59; a = b = 0.14; a = b = 0.39; a = b = 0.59); and (c) method for assessing mediation (the distribution-of-the-product method, the nonparametric percentile bootstrap, the bias-corrected nonparametric percentile bootstrap, and MCMC methods). The results showed that: (1) MCMC with informative priors gave the most accurate point estimates of ab; (2) MCMC with informative priors had the highest statistical power, at the cost of underestimating the Type I error rate, while the bias-corrected nonparametric percentile bootstrap ranked second in power, at the cost of overestimating the Type I error rate; and (3) MCMC with informative priors gave the most accurate interval estimates. Accordingly, MCMC with informative priors is recommended when prior information is available; otherwise, the bias-corrected nonparametric percentile bootstrap is recommended.
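
    The bias-corrected percentile bootstrap recommended above can be sketched for the simple x -> m -> y mediation model. This is a generic OLS-based illustration under assumed mock data, not the simulation code of the study:

```python
import numpy as np
from statistics import NormalDist

def bc_bootstrap_mediation(x, m, y, n_boot=2000, alpha=0.05, seed=0):
    """Bias-corrected percentile bootstrap CI for the mediated effect a*b
    in the simple mediation model x -> m -> y (ordinary least squares)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    nd = NormalDist()

    def ab(idx):
        a = np.polyfit(x[idx], m[idx], 1)[0]                   # slope of m ~ x
        design = np.column_stack([np.ones(len(idx)), m[idx], x[idx]])
        b = np.linalg.lstsq(design, y[idx], rcond=None)[0][1]  # slope of y ~ m | x
        return a * b

    est = ab(np.arange(n))
    boots = np.array([ab(rng.integers(0, n, n)) for _ in range(n_boot)])
    z0 = nd.inv_cdf(float(np.mean(boots < est)))               # bias correction
    lo = nd.cdf(2.0 * z0 + nd.inv_cdf(alpha / 2.0))            # adjusted percentiles
    hi = nd.cdf(2.0 * z0 + nd.inv_cdf(1.0 - alpha / 2.0))
    return est, float(np.quantile(boots, lo)), float(np.quantile(boots, hi))

# Mock data with true a = b = 0.5, so the true mediated effect is 0.25.
rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.5 * m + 0.1 * x + rng.normal(size=n)
est, ci_lo, ci_hi = bc_bootstrap_mediation(x, m, y)
```

    The bias correction shifts the raw percentile endpoints by twice the normal quantile of the fraction of bootstrap replicates below the full-sample estimate, which is what distinguishes this interval from the plain percentile bootstrap.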

  18. Processing Method of Curved Surface Based on Spatial Linkage Mechanism

    Institute of Scientific and Technical Information of China (English)

    肖丽萍; 魏文军

    2011-01-01

    Compared with free-form surfaces, a ruled surface is a one-parameter family of straight lines, generated by a straight line moving through space along a curved trajectory; modern methods for machining such surfaces are technically complex and costly. A spatial linkage mechanism offers a simple structure, easy control, and continuous motion. As the driving crank rotates continuously and the link length, joint length, or twist angle of the mechanism is varied, a cutter fixed at a point on the coupler link performs the primary motion along a given curve while feeding linearly along the link itself, thereby machining the ruled surface. The method is simple in principle and low in cost, is easy to operate, and achieves efficient, high-precision finishing of curved-surface parts in continuous line contact.

  19. Nonparametric Bayesian inference for multidimensional compound Poisson processes

    NARCIS (Netherlands)

    S. Gugushvili; F. van der Meulen; P. Spreij

    2015-01-01

    Given a sample from a discretely observed multidimensional compound Poisson process, we study the problem of nonparametric estimation of its jump size density r0 and intensity λ0. We take a nonparametric Bayesian approach to the problem and determine posterior contraction rates in this context, which…

  20. Nonparametric Bayesian inference of the microcanonical stochastic block model

    CERN Document Server

    Peixoto, Tiago P

    2016-01-01

    A principled approach to characterize the hidden modular structure of networks is to formulate generative models, and then infer their parameters from data. When the desired structure is composed of modules or "communities", a suitable choice for this task is the stochastic block model (SBM), where nodes are divided into groups, and the placement of edges is conditioned on the group memberships. Here, we present a nonparametric Bayesian method to infer the modular structure of empirical networks, including the number of modules and their hierarchical organization. We focus on a microcanonical variant of the SBM, where the structure is imposed via hard constraints. We show how this simple model variation allows simultaneously for two important improvements over more traditional inference approaches: 1. Deeper Bayesian hierarchies, with noninformative priors replaced by sequences of priors and hyperpriors, that not only remove limitations that seriously degrade the inference on large networks, but also reveal s...

  1. Nonparametric reconstruction of the Om diagnostic to test LCDM

    CERN Document Server

    Escamilla-Rivera, Celia

    2015-01-01

    Cosmic acceleration is usually attributed to an unknown dark energy, whose equation of state, w(z), is constrained and numerically confronted with independent astrophysical data. In order to make a diagnostic of w(z), a null test of dark energy can be introduced using a diagnostic function of redshift, Om. In this work we present a nonparametric reconstruction of this diagnostic using the so-called Loess-Simex factory to test the concordance model, with the advantage that this approach offers an alternative way to relax the use of priors and find a possible w that reliably describes the data with no previous knowledge of a cosmological model. Our results demonstrate that the method applied to the dynamical Om diagnostic finds a preference for a dark energy model with equation of state w = -2/3, which corresponds to a static domain-wall network.
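
    The Om diagnostic itself has a simple algebraic definition; the sketch below evaluates it on mock expansion histories (the parameter values are illustrative, and this is the diagnostic only, not the Loess-Simex reconstruction):

```python
import numpy as np

def om_diagnostic(z, E):
    """Om(z) = (E(z)^2 - 1) / ((1+z)^3 - 1), with E = H(z)/H0.
    For flat LCDM this is constant and equal to Omega_m; any z-dependence
    signals a departure from a cosmological constant (w = -1)."""
    z = np.asarray(z, dtype=float)
    E = np.asarray(E, dtype=float)
    return (E ** 2 - 1.0) / ((1.0 + z) ** 3 - 1.0)

z = np.linspace(0.1, 2.0, 20)
# Flat LCDM with Omega_m = 0.3: E^2 = 0.3*(1+z)^3 + 0.7.
E_lcdm = np.sqrt(0.3 * (1.0 + z) ** 3 + 0.7)
# Constant w = -2/3: dark energy density scales as (1+z)^(3(1+w)) = (1+z).
E_w = np.sqrt(0.3 * (1.0 + z) ** 3 + 0.7 * (1.0 + z))
om_lcdm = om_diagnostic(z, E_lcdm)
om_w = om_diagnostic(z, E_w)
```

    A reconstructed Om(z) that stays flat supports LCDM, while a drifting Om(z), as for the w = -2/3 mock above, indicates an evolving equation of state.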

  2. Comparing linkage designs based on land facets to linkage designs based on focal species.

    Science.gov (United States)

    Brost, Brian M; Beier, Paul

    2012-01-01

    Least-cost modeling for focal species is the most widely used method for designing conservation corridors and linkages. However, these designs depend on today's land covers, which will be altered by climate change. We recently proposed an alternative approach based on land facets (recurring landscape units of relatively uniform topography and soils). The rationale is that corridors with high continuity of individual land facets will facilitate movement of species associated with each facet today and in the future. Conservation practitioners might like to know whether a linkage design based on land facets is likely to provide continuity of modeled breeding habitat for species needing connectivity today, and whether a linkage for focal species provides continuity and interspersion of land facets. To address these questions, we compared linkages designed for focal species and land facets in three landscapes in Arizona, USA. We used two variables to measure linkage utility, namely distances between patches of modeled breeding habitat for 5-16 focal species in each linkage, and resistance profiles for focal species and land facets between patches connected by the linkage. Compared to focal species designs, linkage designs based on land facets provided as much or more modeled habitat connectivity for 25 of 28 species-landscape combinations, failing only for the three species with the most narrowly distributed habitat. Compared to land facets designs, focal species linkages provided lower connectivity for about half the land facets in two landscapes. In areas where a focal species approach to linkage design is not possible, our results suggest that conservation practitioners may be able to implement a land facets approach with some confidence that the linkage design would serve most potential focal species. In areas where focal species designs are possible, we recommend using the land facet approach to complement, rather than replace, focal species approaches.
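
    Least-cost modeling, the baseline both linkage-design approaches rest on, amounts to a shortest-path search over a resistance raster. The sketch below uses Dijkstra's algorithm on a tiny grid with 4-neighbour moves and cell-entry costs; the grid values are illustrative assumptions, not data from the study:

```python
import heapq

def least_cost_path(grid, start, goal):
    """Dijkstra least-cost route across a resistance raster; the cost of a
    step is the resistance of the cell entered (start cell cost included)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: grid[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:                  # walk predecessors back to start
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# High-resistance cells (9) in the middle row force a detour along the edge.
grid = [[1, 1, 1],
        [9, 9, 1],
        [1, 1, 1]]
path, cost = least_cost_path(grid, (0, 0), (2, 0))
```

    Corridors designed for focal species use species-specific resistance values; the land-facets approach replaces them with topographic/soil-based resistances, but the routing machinery is the same.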

  3. Asymptotic theory of nonparametric regression estimates with censored data

    Institute of Scientific and Technical Information of China (English)

    施沛德; 王海燕; 张利华

    2000-01-01

    For regression analysis, some useful information may have been lost when the responses are right censored. To estimate nonparametric functions, several estimates based on censored data have been proposed and their consistency and convergence rates have been studied in the literature, but the optimal rates of global convergence have not been obtained yet. Because of the possible information loss, one may think that it is impossible for an estimate based on censored data to achieve the optimal rates of global convergence for nonparametric regression, which were established by Stone based on complete data. This paper constructs a regression spline estimate of a general nonparametric regression function based on right-censored response data, and proves, under some regularity conditions, that this estimate achieves the optimal rates of global convergence for nonparametric regression. Since the parameters for the nonparametric regression estimate have to be chosen based on a data-driven criterion, we also obtain…

  4. 2nd Conference of the International Society for Nonparametric Statistics

    CERN Document Server

    Manteiga, Wenceslao; Romo, Juan

    2016-01-01

    This volume collects selected, peer-reviewed contributions from the 2nd Conference of the International Society for Nonparametric Statistics (ISNPS), held in Cádiz (Spain) between June 11–16 2014, and sponsored by the American Statistical Association, the Institute of Mathematical Statistics, the Bernoulli Society for Mathematical Statistics and Probability, the Journal of Nonparametric Statistics and Universidad Carlos III de Madrid. The 15 articles are a representative sample of the 336 contributed papers presented at the conference. They cover topics such as high-dimensional data modelling, inference for stochastic processes and for dependent data, nonparametric and goodness-of-fit testing, nonparametric curve estimation, object-oriented data analysis, and semiparametric inference. The aim of the ISNPS 2014 conference was to bring together recent advances and trends in several areas of nonparametric statistics in order to facilitate the exchange of research ideas, promote collaboration among researchers...

  5. Testing for constant nonparametric effects in general semiparametric regression models with interactions

    KAUST Repository

    Wei, Jiawei

    2011-07-01

    We consider the problem of testing for a constant nonparametric effect in a general semi-parametric regression model when there is the potential for interaction between the parametrically and nonparametrically modeled variables. The work was originally motivated by a unique testing problem in genetic epidemiology (Chatterjee, et al., 2006) that involved a typical generalized linear model but with an additional term reminiscent of the Tukey one-degree-of-freedom formulation, and their interest was in testing for main effects of the genetic variables, while gaining statistical power by allowing for a possible interaction between genes and the environment. Later work (Maity, et al., 2009) involved the possibility of modeling the environmental variable nonparametrically, but they focused on whether there was a parametric main effect for the genetic variables. In this paper, we consider the complementary problem, where the interest is in testing for the main effect of the nonparametrically modeled environmental variable. We derive a generalized likelihood ratio test for this hypothesis, show how to implement it, and provide evidence that our method can improve statistical power when compared to standard partially linear models with main effects only. We use the method for the primary purpose of analyzing data from a case-control study of colorectal adenoma.

  6. Nonparametric Independence Screening in Sparse Ultra-High Dimensional Additive Models.

    Science.gov (United States)

    Fan, Jianqing; Feng, Yang; Song, Rui

    2011-06-01

    A variable screening procedure via correlation learning was proposed in Fan and Lv (2008) to reduce dimensionality in sparse ultra-high dimensional models. Even when the true model is linear, the marginal regression can be highly nonlinear. To address this issue, we further extend the correlation learning to marginal nonparametric learning. Our nonparametric independence screening is called NIS, a specific member of the sure independence screening. Several closely related variable screening procedures are proposed. Under general nonparametric models, it is shown that under some mild technical conditions, the proposed independence screening methods enjoy a sure screening property. The extent to which the dimensionality can be reduced by independence screening is also explicitly quantified. As a methodological extension, a data-driven thresholding and an iterative nonparametric independence screening (INIS) are also proposed to enhance the finite sample performance for fitting sparse additive models. The simulation results and a real data analysis demonstrate that the proposed procedure works well with moderate sample size and large dimension and performs better than competing methods.
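
    The core NIS step, ranking covariates by the strength of a marginal nonparametric fit, can be sketched with a Nadaraya-Watson smoother. This is a stripped-down illustration with an assumed fixed bandwidth, not the paper's spline-based procedure or its iterative/thresholded variants:

```python
import numpy as np

def marginal_utility(xj, y, bw=0.1):
    """Variance explained by a Nadaraya-Watson marginal fit of y on one
    covariate; screening ranks covariates by this quantity."""
    w = np.exp(-0.5 * ((xj[:, None] - xj[None, :]) / bw) ** 2)
    fhat = (w @ y) / w.sum(axis=1)        # fitted marginal regression values
    return float(np.var(fhat))

def nis_ranking(X, y, bw=0.1):
    """Covariate indices sorted by decreasing marginal utility."""
    utils = [marginal_utility(X[:, j], y, bw) for j in range(X.shape[1])]
    return np.argsort(utils)[::-1]

# Sparse additive mock: only covariate 0 matters, and nonlinearly, so a
# plain marginal-correlation screen could miss it.
rng = np.random.default_rng(7)
n, p = 300, 50
X = rng.uniform(0.0, 1.0, size=(n, p))
y = np.sin(2.0 * np.pi * X[:, 0]) + 0.3 * rng.normal(size=n)
rank = nis_ranking(X, y)
```

    Keeping only the top-ranked covariates reduces the dimension before a full sparse additive model is fitted, which is the screening-then-fitting workflow the abstract describes.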

  7. Testing for Constant Nonparametric Effects in General Semiparametric Regression Models with Interactions.

    Science.gov (United States)

    Wei, Jiawei; Carroll, Raymond J; Maity, Arnab

    2011-07-01

    We consider the problem of testing for a constant nonparametric effect in a general semi-parametric regression model when there is the potential for interaction between the parametrically and nonparametrically modeled variables. The work was originally motivated by a unique testing problem in genetic epidemiology (Chatterjee, et al., 2006) that involved a typical generalized linear model but with an additional term reminiscent of the Tukey one-degree-of-freedom formulation, and their interest was in testing for main effects of the genetic variables, while gaining statistical power by allowing for a possible interaction between genes and the environment. Later work (Maity, et al., 2009) involved the possibility of modeling the environmental variable nonparametrically, but they focused on whether there was a parametric main effect for the genetic variables. In this paper, we consider the complementary problem, where the interest is in testing for the main effect of the nonparametrically modeled environmental variable. We derive a generalized likelihood ratio test for this hypothesis, show how to implement it, and provide evidence that our method can improve statistical power when compared to standard partially linear models with main effects only. We use the method for the primary purpose of analyzing data from a case-control study of colorectal adenoma.

  8. A genome-wide linkage scan for distinct subsets of schizophrenia characterized by age at onset and neurocognitive deficits.

    Directory of Open Access Journals (Sweden)

    Yin-Ju Lien

    Full Text Available BACKGROUND: As schizophrenia is genetically and phenotypically heterogeneous, targeting genetically informative phenotypes may help identify greater linkage signals. The aim of the study is to evaluate the genetic linkage evidence for schizophrenia in subsets of families with earlier age at onset or greater neurocognitive deficits. METHODS: Patients with schizophrenia (n = 1,207) and their first-degree relatives (n = 1,035) from 557 families with schizophrenia were recruited from six data collection field research centers throughout Taiwan. Subjects completed a face-to-face semi-structured interview, the Continuous Performance Test (CPT), and the Wisconsin Card Sorting Test, and were genotyped with 386 microsatellite markers across the genome. RESULTS: A maximum nonparametric logarithm of odds (LOD) score of 4.17 at 2q22.1 was found in 295 families ranked by increasing age at onset, a significant increase in the maximum LOD score compared with that obtained in the initial linkage analyses using all available families. Based on this subset, further subsetting by false alarm rate on the undegraded and degraded CPT yielded further increases in the nested subset-based LOD on 2q22.1, with scores of 7.36 in 228 families and 7.71 in 243 families, respectively. CONCLUSION: We found possible evidence of linkage on chromosome 2q22.1 in families of schizophrenia patients with higher CPT false alarm rates nested within the families with younger age at onset. These results highlight the importance of incorporating genetically informative phenotypes in unraveling the complex genetics of schizophrenia.

  9. Non-parametric Tuning of PID Controllers A Modified Relay-Feedback-Test Approach

    CERN Document Server

    Boiko, Igor

    2013-01-01

    The relay feedback test (RFT) has become a popular and efficient tool used in process identification and automatic controller tuning. Non-parametric Tuning of PID Controllers couples new modifications of the classical RFT with application-specific optimal tuning rules to form a non-parametric method of test-and-tuning. Test and tuning are coordinated through a set of common parameters so that a PID controller can obtain the desired gain or phase margins in a system exactly, even with unknown process dynamics. The concept of process-specific optimal tuning rules in the nonparametric setup, with corresponding tuning rules for flow, level, pressure, and temperature control loops, is presented in the text. Common problems of tuning accuracy based on parametric and non-parametric approaches are addressed. In addition, the text treats the parametric approach to tuning based on the modified RFT approach and the exact model of oscillations in the system under test, using the locus of a perturbed relay system (LPRS) method…
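
    The classical RFT that the book modifies rests on a simple describing-function relation: a relay of amplitude d that drives the loop into a limit cycle of amplitude a and period Tu identifies the ultimate gain Ku ≈ 4d/(πa). A minimal sketch, using the textbook Ziegler-Nichols rules rather than the book's own tuning rules:

```python
import math

def relay_ultimate(d, a, period):
    """Describing-function estimates from a standard relay feedback test:
    relay amplitude d and limit-cycle amplitude a give ultimate gain
    Ku ~= 4*d / (pi*a); the limit-cycle period is the ultimate period Tu."""
    ku = 4.0 * d / (math.pi * a)
    return ku, period

def ziegler_nichols_pid(ku, tu):
    """Classic Ziegler-Nichols PID settings from the ultimate point."""
    return {"Kp": 0.6 * ku, "Ti": 0.5 * tu, "Td": 0.125 * tu}

# Hypothetical test outcome: d = 1.0, oscillation amplitude 0.5, period 8 s.
ku, tu = relay_ultimate(d=1.0, a=0.5, period=8.0)
pid = ziegler_nichols_pid(ku, tu)
```

    The book's contribution is to replace this fixed recipe with modified tests and loop-type-specific optimal rules, but the measured quantities (amplitude and period of the relay-induced oscillation) are the same.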

  10. A New Method for Constructing Linkage Maps of Molecular Markers: Three-Point Selfcross Method

    Institute of Scientific and Technical Information of China (English)

    谭远德

    2001-01-01

    The method used for constructing linkage maps in higher organisms has so far almost always been the three-point testcross. This method is, however, considerably limited by its requirement for a parent or line carrying three recessive genes, which can be obtained only by cross breeding. In this paper, the three-point selfcross method for mapping is proposed. This method provides the same mapping information as the three-point testcross method, but without the limitation of requiring a triple-recessive parent. It is theoretically proved that the method can be used to map molecular marker loci in small samples of an F2 population. To test both mapping methods exactly, the Fisher information content was applied to show that the three-point selfcross method is powerful for mapping. As a further test, data for the first 6 of the 12 RFLP loci scored in 333 F2 individuals in the SAMPLE.RAW file of MAPMAKER/EXP version 3.0, provided by Lander et al. (1987), were analyzed. The results showed that, like the MAPMAKER program, the three-point selfcross method is powerful in detecting linkage relationships among loci, ordering them on linkage groups, and calculating distances between nearest-neighbor loci. Moreover, this method can provide further mapping information, such as positive or negative interference and coupling or repulsion configuration.

  11. A Linkage Learning Genetic Algorithm with Linkage Matrix

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The goal of linkage learning, or building-block identification, is the creation of a more effective genetic algorithm (GA). This paper proposes a new linkage learning genetic algorithm, named m-LLGA. With its linkage learning module and linkage-based genetic operation, m-LLGA is not only able to learn and record the linkage information among genes without any prior knowledge of the function being optimized; it can also use the linkage information stored in the linkage matrix to guide the selection of crossover points. Preliminary experiments on two kinds of bounded-difficulty problems and a TSP problem validated the performance of m-LLGA. The m-LLGA learns the linkage of different building blocks in parallel and therefore solves these problems effectively; it also reasonably reduces the probability of building blocks being disrupted by crossover while still helping the search escape local minima.

  12. Cosmopolitan linkage disequilibrium maps

    Directory of Open Access Journals (Sweden)

    Gibson Jane

    2005-03-01

    Full Text Available Abstract Linkage maps have been invaluable for the positional cloning of many genes involved in severe human diseases. Standard genetic linkage maps have been constructed for this purpose from the Centre d'Etude du Polymorphisme Humain and other panels, and have been widely used. Now that attention has shifted towards identifying genes predisposing to common disorders using linkage disequilibrium (LD) and maps of single nucleotide polymorphisms (SNPs), it is of interest to consider a standard LD map which is somewhat analogous to the corresponding map for linkage. We have constructed and evaluated a cosmopolitan LD map by combining samples from a small number of populations, using published data from a 10-megabase region on chromosome 20. In support of a pilot study, which examined a number of small genomic regions with a lower density of markers, we have found that a cosmopolitan map, which serves all populations when appropriately scaled, recovers 91 to 95 per cent of the information within population-specific maps. Recombination hot spots appear to have a dominant role in shaping patterns of LD. The success of the cosmopolitan map might be attributed to the co-localisation of hot spots in all populations. Although there must be finer-scale differences between populations due to other processes (mutation, drift, selection), the results suggest that a whole-genome standard LD map would indeed be a useful resource for disease gene mapping.
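The pairwise LD measures underlying such maps can be illustrated with the classical statistics D, D' and r² for two biallelic loci. A minimal sketch from haplotype and allele frequencies — standard definitions, not the authors' map construction:

```python
def ld_stats(pAB, pA, pB):
    """Pairwise linkage disequilibrium between two biallelic loci.
    pAB: frequency of the A-B haplotype; pA, pB: allele frequencies.
    Returns D (raw disequilibrium), D' (Lewontin's normalized D) and r^2."""
    D = pAB - pA * pB
    if D >= 0:
        Dmax = min(pA * (1 - pB), (1 - pA) * pB)
    else:
        Dmax = min(pA * pB, (1 - pA) * (1 - pB))
    Dprime = 0.0 if Dmax == 0 else abs(D) / Dmax
    r2 = D * D / (pA * (1 - pA) * pB * (1 - pB))
    return D, Dprime, r2

# Complete LD: pA = pB = 0.5 with only AB and ab haplotypes present.
D, Dp, r2 = ld_stats(0.5, 0.5, 0.5)   # D = 0.25, D' = 1, r^2 = 1
```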

  13. Nonparametric Detection of Geometric Structures Over Networks

    Science.gov (United States)

    Zou, Shaofeng; Liang, Yingbin; Poor, H. Vincent

    2017-10-01

    Nonparametric detection of existence of an anomalous structure over a network is investigated. Nodes corresponding to the anomalous structure (if one exists) receive samples generated by a distribution q, which is different from a distribution p generating samples for other nodes. If an anomalous structure does not exist, all nodes receive samples generated by p. It is assumed that the distributions p and q are arbitrary and unknown. The goal is to design statistically consistent tests with probability of errors converging to zero as the network size becomes asymptotically large. Kernel-based tests are proposed based on maximum mean discrepancy that measures the distance between mean embeddings of distributions into a reproducing kernel Hilbert space. Detection of an anomalous interval over a line network is first studied. Sufficient conditions on minimum and maximum sizes of candidate anomalous intervals are characterized in order to guarantee the proposed test to be consistent. It is also shown that certain necessary conditions must hold to guarantee any test to be universally consistent. Comparison of sufficient and necessary conditions yields that the proposed test is order-level optimal and nearly optimal respectively in terms of minimum and maximum sizes of candidate anomalous intervals. Generalization of the results to other networks is further developed. Numerical results are provided to demonstrate the performance of the proposed tests.
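A kernel two-sample test of the kind described can be sketched with a biased MMD² estimate under an RBF kernel, calibrated by permutation. The kernel bandwidth, sample sizes and permutation count below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    """Biased squared-MMD estimate with RBF kernel k(x, y) = exp(-gamma * |x - y|^2)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

def mmd_perm_test(X, Y, n_perm=200, seed=0):
    """Permutation p-value for the MMD-based two-sample test."""
    rng = np.random.default_rng(seed)
    obs = mmd2_rbf(X, Y)
    Z, n, hits = np.vstack([X, Y]), len(X), 0
    for _ in range(n_perm):
        idx = rng.permutation(len(Z))
        if mmd2_rbf(Z[idx[:n]], Z[idx[n:]]) >= obs:
            hits += 1
    return obs, (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(60, 1))
Y = rng.normal(1.5, 1.0, size=(60, 1))   # distribution q differs from p by a shift
mmd2, p = mmd_perm_test(X, Y)            # small p-value: the samples differ
```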

  14. Nonparametric Bayesian drift estimation for multidimensional stochastic differential equations

    NARCIS (Netherlands)

    Gugushvili, S.; Spreij, P.

    2014-01-01

    We consider nonparametric Bayesian estimation of the drift coefficient of a multidimensional stochastic differential equation from discrete-time observations on the solution of this equation. Under suitable regularity conditions, we establish posterior consistency in this context.
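As a frequentist counterpart to the Bayesian approach above (not the authors' method), the drift can be sketched nonparametrically from discrete-time observations via a Nadaraya-Watson estimate of E[(X_{t+Δ} − X_t)/Δ | X_t = x], illustrated on a simulated Ornstein-Uhlenbeck path; the bandwidth and step size are arbitrary choices:

```python
import numpy as np

def simulate_ou(theta=1.0, sigma=0.5, dt=0.01, n=20000, seed=0):
    """Euler-Maruyama path of the OU equation dX = -theta * X dt + sigma dW."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.0
    for i in range(1, n):
        x[i] = x[i-1] - theta * x[i-1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

def nw_drift(x, dt, grid, h=0.2):
    """Kernel (Nadaraya-Watson) drift estimate from scaled increments."""
    incr = (x[1:] - x[:-1]) / dt
    w = np.exp(-0.5 * ((grid[:, None] - x[:-1][None, :]) / h) ** 2)
    return (w * incr).sum(1) / w.sum(1)

x = simulate_ou()
grid = np.array([-0.5, 0.0, 0.5])
bhat = nw_drift(x, 0.01, grid)   # roughly [0.5, 0.0, -0.5] for theta = 1
```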

  15. Homothetic Efficiency and Test Power: A Non-Parametric Approach

    NARCIS (Netherlands)

    J. Heufer (Jan); P. Hjertstrand (Per)

    2015-01-01

    Abstract: We provide a nonparametric revealed preference approach to demand analysis based on homothetic efficiency. Homotheticity is a useful restriction, but data rarely satisfy testable conditions. To overcome this we provide a way to estimate homothetic efficiency of

  16. A non-parametric approach to investigating fish population dynamics

    National Research Council Canada - National Science Library

    Cook, R.M; Fryer, R.J

    2001-01-01

    Using a non-parametric model for the stock-recruitment relationship, it is possible to avoid defining specific functions relating recruitment to stock size, while also providing a natural framework to model process error...

  17. From Enclave to Linkage Economies?

    DEFF Research Database (Denmark)

    Hansen, Michael W.

    and local enterprises by themselves will indeed produce linkages, the scope, depth and development impacts of linkages eventually depend on government intervention. Resource-rich African countries’ governments are aware of this and linkage promotion is increasingly becoming a key element...

  18. Applications of non-parametric statistics and analysis of variance on sample variances

    Science.gov (United States)

    Myers, R. H.

    1981-01-01

    Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made to survey what can be used, to offer recommendations as to when each method is applicable, and to compare the methods, where possible, with the usual normal-theory procedures available for the Gaussian analog. It is important to point out the hypotheses being tested, the assumptions being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied; this procedure is followed in several NASA simulation projects. On the surface it would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis-of-variance problems. These difficulties are discussed and guidelines are given for using the methods.
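A two-sample permutation test is a typical example of the nonparametric procedures surveyed: it tests a difference in means without the normality assumption. The distributions and sample sizes below are illustrative:

```python
import numpy as np

def perm_test_mean_diff(a, b, n_perm=2000, seed=0):
    """Two-sample permutation test on the absolute difference of means.
    Makes no normality assumption; the null reference distribution is built
    by randomly reassigning the pooled observations to the two groups."""
    rng = np.random.default_rng(seed)
    obs = abs(a.mean() - b.mean())
    z = np.concatenate([a, b])
    n, hits = len(a), 0
    for _ in range(n_perm):
        rng.shuffle(z)
        if abs(z[:n].mean() - z[n:].mean()) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # permutation p-value

rng = np.random.default_rng(1)
a = rng.exponential(1.0, 60)    # skewed, clearly non-Gaussian samples
b = rng.exponential(3.0, 60)
p = perm_test_mean_diff(a, b)   # small p-value: the means differ
```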

  19. On Wasserstein Two-Sample Testing and Related Families of Nonparametric Tests

    Directory of Open Access Journals (Sweden)

    Aaditya Ramdas

    2017-01-01

    Full Text Available Nonparametric two-sample or homogeneity testing is a decision-theoretic problem that involves identifying differences between two random variables without making parametric assumptions about their underlying distributions. The literature is old and rich, with a wide variety of statistics having been designed and analyzed, both for the unidimensional and the multivariate setting. In this short survey, we focus on test statistics that involve the Wasserstein distance. Using an entropic smoothing of the Wasserstein distance, we connect these to very different tests including multivariate methods involving energy statistics and kernel-based maximum mean discrepancy, and univariate methods like the Kolmogorov–Smirnov test, probability or quantile (PP/QQ) plots and receiver operating characteristic or ordinal dominance (ROC/ODC) curves. Some observations are implicit in the literature, while others seem not to have been noticed thus far. Given nonparametric two-sample testing's classical and continued importance, we aim to provide useful connections for theorists and practitioners familiar with one subset of methods but not others.
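In one dimension, the empirical Wasserstein-1 distance between equal-size samples reduces to the mean absolute difference of the sorted samples (the quantile coupling). A minimal sketch:

```python
import numpy as np

def wasserstein1(x, y):
    """Empirical 1-D Wasserstein-1 distance for equal-size samples:
    the optimal coupling matches i-th order statistic to i-th order statistic."""
    x, y = np.sort(x), np.sort(y)
    return np.abs(x - y).mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 1000)
y = rng.normal(0.7, 1.0, 1000)
d = wasserstein1(x, y)   # close to the true mean shift of 0.7
```

For two Gaussians with equal variance, the population W1 distance is exactly the absolute difference of the means, which makes the example easy to sanity-check.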

  20. Nonparametric discriminant model

    Institute of Scientific and Technical Information of China (English)

    谢斌锋; 梁飞豹

    2011-01-01

    This paper puts forth a new class of discriminant method. The main idea is to extend the nonparametric regression model to discriminant analysis, forming a corresponding nonparametric discriminant model. A comparison with traditional discriminant methods on a worked example shows that the nonparametric discriminant method has wider applicability and a higher correct rate of back-substitution.

  1. Nonparametric feature extraction for classification of hyperspectral images with limited training samples

    Science.gov (United States)

    Kianisarkaleh, Azadeh; Ghassemian, Hassan

    2016-09-01

    Feature extraction plays a crucial role in improving hyperspectral image classification. Nonparametric feature extraction methods show better performance than parametric ones when the class distributions are not normal-like; moreover, they can extract more features than parametric methods do. In this paper, a new nonparametric linear feature extraction method is introduced for classification of hyperspectral images. The proposed method has no free parameters and its novelty lies in two parts. First, neighbor samples are specified by using the Parzen window idea to determine the local mean. Second, two new weighting functions are used: samples close to class boundaries receive more weight in forming the between-class scatter matrix, and samples close to the class mean receive more weight in forming the within-class scatter matrix. Experimental results on three real hyperspectral data sets, Indian Pines, Salinas and Pavia University, demonstrate that the proposed method performs better than several other nonparametric and parametric feature extraction methods.

  2. PV power forecast using a nonparametric PV model

    OpenAIRE

    Almeida, Marcelo Pinho; Perpiñan Lamigueiro, Oscar; Narvarte Fernández, Luis

    2015-01-01

    Forecasting the AC power output of a PV plant accurately is important both for plant owners and electric system operators. Two main categories of PV modeling are available: the parametric and the nonparametric. In this paper, a methodology using a nonparametric PV model is proposed, using as inputs several forecasts of meteorological variables from a Numerical Weather Forecast model, and actual AC power measurements of PV plants. The methodology was built upon the R environment and uses Quant...

  3. A genome-wide search for linkage of estimated glomerular filtration rate (eGFR) in the Family Investigation of Nephropathy and Diabetes (FIND).

    Directory of Open Access Journals (Sweden)

    Farook Thameem

    Full Text Available OBJECTIVE: Estimated glomerular filtration rate (eGFR), a measure of kidney function, is heritable, suggesting that genes influence renal function. Genes that influence eGFR have been identified through genome-wide association studies. However, family-based linkage approaches may identify loci that explain a larger proportion of the heritability. This study used genome-wide linkage and association scans to identify quantitative trait loci (QTL) that influence eGFR. METHODS: Genome-wide linkage and sparse association scans of eGFR were performed in families ascertained by probands with advanced diabetic nephropathy (DN) from the multi-ethnic Family Investigation of Nephropathy and Diabetes (FIND) study. This study included 954 African Americans (AA), 781 American Indians (AI), 614 European Americans (EA) and 1,611 Mexican Americans (MA). A total of 3,960 FIND participants were genotyped for 6,000 single nucleotide polymorphisms (SNPs) using the Illumina Linkage IVb panel. GFR was estimated by the Modification of Diet in Renal Disease (MDRD) formula. RESULTS: The non-parametric linkage analysis, accounting for the effects of diabetes duration and BMI, identified the strongest evidence for linkage of eGFR on chromosome 20q11 (log of the odds [LOD] = 3.34; P = 4.4 × 10^(-5)) in MA and chromosome 15q12 (LOD = 2.84; P = 1.5 × 10^(-4)) in EA. In all subjects, the strongest linkage signal for eGFR was detected on chromosome 10p12 (P = 5.5 × 10^(-4)) at 44 cM near marker rs1339048. A subsequent association scan in both the ancestry-specific groups and the entire population identified several SNPs significantly associated with eGFR across the genome. CONCLUSION: The present study describes the localization of QTL influencing eGFR on 20q11 in MA, 15q21 in EA and 10p12 in the combined ethnic groups participating in the FIND study. Identification of causal genes/variants influencing eGFR, within these linkage and association loci, will open new avenues for functional analyses

  4. Nonparametric predictive inference for combining diagnostic tests with parametric copula

    Science.gov (United States)

    Muhammad, Noryanti; Coolen, F. P. A.; Coolen-Maturi, T.

    2017-09-01

    Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine and health care. The receiver operating characteristic (ROC) curve is a popular statistical tool for describing the performance of diagnostic tests, and the area under the ROC curve (AUC) is often used as a measure of a test's overall performance. In this paper, we are interested in developing strategies for combining test results in order to increase diagnostic accuracy. We introduce nonparametric predictive inference (NPI) for combining two diagnostic test results while modelling their dependence structure with a parametric copula. NPI is a frequentist statistical framework for inference on a future observation based on past data observations; it uses lower and upper probabilities to quantify uncertainty and is based on only a few modelling assumptions. A copula is a joint distribution function whose marginals are all uniformly distributed; it can be used to model dependence separately from the marginal distributions. In this research, we estimate the copula density parametrically by maximum likelihood estimation (MLE). We investigate the performance of the proposed method on data sets from the literature and discuss how it performs for different families of copulas. Finally, we briefly outline related challenges and opportunities for future research.
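The nonparametric AUC estimate mentioned above is the Mann-Whitney statistic: the proportion of diseased-healthy pairs in which the diseased score is higher. The sketch below uses simulated independent tests and a naive sum combination, not the NPI-with-copula method of the paper:

```python
import numpy as np

def auc_mann_whitney(neg, pos):
    """Nonparametric AUC: probability that a diseased score exceeds a healthy
    one, with ties counted as 1/2 (equivalent to the Mann-Whitney U statistic)."""
    neg, pos = np.asarray(neg), np.asarray(pos)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(0)
healthy1, disease1 = rng.normal(0, 1, 200), rng.normal(1, 1, 200)
healthy2, disease2 = rng.normal(0, 1, 200), rng.normal(1, 1, 200)

auc_single = auc_mann_whitney(healthy1, disease1)
# Naive combination by summing the two scores; assumes independence and
# ignores the dependence modelling that motivates the copula approach.
auc_sum = auc_mann_whitney(healthy1 + healthy2, disease1 + disease2)
```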

  5. Bayesian nonparametric clustering in phylogenetics: modeling antigenic evolution in influenza.

    Science.gov (United States)

    Cybis, Gabriela B; Sinsheimer, Janet S; Bedford, Trevor; Rambaut, Andrew; Lemey, Philippe; Suchard, Marc A

    2017-01-18

    Influenza is responsible for up to 500,000 deaths every year, and antigenic variability represents much of its epidemiological burden. To visualize antigenic differences across many viral strains, antigenic cartography methods use multidimensional scaling on binding assay data to map influenza antigenicity onto a low-dimensional space. Analysis of such assay data ideally leads to natural clustering of influenza strains of similar antigenicity that correlate with sequence evolution. To understand the dynamics of these antigenic groups, we present a framework that jointly models genetic and antigenic evolution by combining multidimensional scaling of binding assay data, Bayesian phylogenetic machinery and nonparametric clustering methods. We propose a phylogenetic Chinese restaurant process that extends the current process to incorporate the phylogenetic dependency structure between strains in the modeling of antigenic clusters. With this method, we are able to use the genetic information to better understand the evolution of antigenicity throughout epidemics, as shown in applications of this model to H1N1 influenza. Copyright © 2017 John Wiley & Sons, Ltd.

  6. Nonparametric estimation of quantum states, processes and measurements

    Science.gov (United States)

    Lougovski, Pavel; Bennink, Ryan

    Quantum state, process, and measurement estimation methods traditionally use parametric models, in which the number and role of relevant parameters is assumed to be known. When such an assumption cannot be justified, a common approach in many disciplines is to fit the experimental data to multiple models with different sets of parameters and utilize an information criterion to select the best fitting model. However, it is not always possible to assume a model with a finite (countable) number of parameters. This typically happens when there are unobserved variables that stem from hidden correlations that can only be unveiled after collecting experimental data. How does one perform quantum characterization in this situation? We present a novel nonparametric method of experimental quantum system characterization based on the Dirichlet Process (DP) that addresses this problem. Using DP as a prior in conjunction with Bayesian estimation methods allows us to increase model complexity (number of parameters) adaptively as the number of experimental observations grows. We illustrate our approach for the one-qubit case and show how a probability density function for an unknown quantum process can be estimated.

  7. CURRENCY LINKAGES AMONG ASEAN

    OpenAIRE

    CHIN LEE; M. Azali

    2010-01-01

    The purpose of this study is to examine the potential linkages among ASEAN-5 currencies, in particular the possibility of a Singapore dollar bloc during the pre- and post-crisis periods by using the Johansen multivariate cointegration test and the Granger causality test. Significant nonstationarity and the presence of unit roots were documented for each currency under both study periods. Using ASEAN-4 exchange rates against the Singapore dollar, the Johansen cointegration test showed that the...

  8. Non-parametric Bayesian human motion recognition using a single MEMS tri-axial accelerometer.

    Science.gov (United States)

    Ahmed, M Ejaz; Song, Ju Bin

    2012-09-27

    In this paper, we propose a non-parametric clustering method to recognize the number of human motions using features which are obtained from a single microelectromechanical system (MEMS) accelerometer. Since the number of human motions under consideration is not known a priori and because of the unsupervised nature of the proposed technique, there is no need to collect training data for the human motions. The infinite Gaussian mixture model (IGMM) and collapsed Gibbs sampler are adopted to cluster the human motions using extracted features. From the experimental results, we show that the unanticipated human motions are detected and recognized with significant accuracy, as compared with the parametric Fuzzy C-Mean (FCM) technique, the unsupervised K-means algorithm, and the non-parametric mean-shift method.

  9. Non-Parametric Bayesian Human Motion Recognition Using a Single MEMS Tri-Axial Accelerometer

    Directory of Open Access Journals (Sweden)

    M. Ejaz Ahmed

    2012-09-01

    Full Text Available In this paper, we propose a non-parametric clustering method to recognize the number of human motions using features which are obtained from a single microelectromechanical system (MEMS) accelerometer. Since the number of human motions under consideration is not known a priori and because of the unsupervised nature of the proposed technique, there is no need to collect training data for the human motions. The infinite Gaussian mixture model (IGMM) and collapsed Gibbs sampler are adopted to cluster the human motions using extracted features. From the experimental results, we show that the unanticipated human motions are detected and recognized with significant accuracy, as compared with the parametric Fuzzy C-Means (FCM) technique, the unsupervised K-means algorithm, and the non-parametric mean-shift method.
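The prior behind the infinite Gaussian mixture model is the Dirichlet process, whose partition structure is the Chinese restaurant process (CRP). A minimal prior-only sampler (no Gibbs inference, illustrative concentration parameter) showing that the number of clusters need not be fixed in advance:

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n items from the Chinese restaurant process prior.
    Item i joins an existing cluster with probability proportional to its size,
    or opens a new cluster with probability alpha / (i + alpha)."""
    rng = random.Random(seed)
    counts = []                          # items per cluster ("table")
    for i in range(n):
        r = rng.random() * (i + alpha)
        if r < alpha:
            counts.append(1)             # open a new table
        else:
            r -= alpha
            for t, c in enumerate(counts):
                if r < c:
                    counts[t] += 1       # join table t, proportional to size
                    break
                r -= c
            else:
                counts[-1] += 1          # guard against float rounding
    return counts

tables = crp_partition(1000, alpha=2.0)
# The expected cluster count grows like alpha * log(n): about 2 * log(1000) ~ 12.
```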

  10. Functional-Coefficient Spatial Durbin Models with Nonparametric Spatial Weights: An Application to Economic Growth

    Directory of Open Access Journals (Sweden)

    Mustafa Koroglu

    2016-02-01

    Full Text Available This paper considers a functional-coefficient spatial Durbin model with nonparametric spatial weights. Applying the series approximation method, we estimate the unknown functional coefficients and spatial weighting functions via a nonparametric two-stage least squares (2SLS) estimation method. To further improve estimation accuracy, we also construct a second-step estimator of the unknown functional coefficients by a local linear regression approach. Some Monte Carlo simulation results are reported to assess the finite sample performance of our proposed estimators. We then apply the proposed model to re-examine national economic growth by augmenting the conventional Solow economic growth convergence model with unknown spatial interactive structures of the national economy, as well as country-specific Solow parameters, where the spatial weighting functions and Solow parameters are allowed to be a function of geographical distance and the countries’ openness to trade, respectively.

  11. A Bayesian nonparametric approach to reconstruction and prediction of random dynamical systems

    Science.gov (United States)

    Merkatas, Christos; Kaloudis, Konstantinos; Hatjispyros, Spyridon J.

    2017-06-01

    We propose a Bayesian nonparametric mixture model for the reconstruction and prediction, from observed time series data, of discretized stochastic dynamical systems, based on Markov chain Monte Carlo methods. Our results can be used by researchers in physical modeling interested in a fast and accurate estimation of low-dimensional stochastic models when the size of the observed time series is small and the noise process (perhaps) is non-Gaussian. The inference procedure is demonstrated specifically in the case of polynomial maps of arbitrary degree, with a Geometric Stick Breaking mixture process prior over the space of densities applied to the additive errors. Our method is parsimonious, flexible and general compared to Bayesian nonparametric techniques based on Dirichlet process mixtures. Simulations based on synthetic time series are presented.

  12. Testing Equality of Nonparametric Functions in Two Partially Linear Models

    Institute of Scientific and Technical Information of China (English)

    施三支; 宋立新; 杨华

    2008-01-01

    We propose a test statistic to check whether the nonparametric functions in two partially linear models are equal. We estimate the nonparametric function, under both the null hypothesis and the alternative, by the local linear method, ignoring the parametric components, and then estimate the parameters by the two-stage method. The test statistic is derived and shown to be asymptotically normal under the null hypothesis.

  13. Nonparametric identification of structural modifications in Laplace domain

    Science.gov (United States)

    Suwała, G.; Jankowski, Ł.

    2017-02-01

    This paper proposes and experimentally verifies a Laplace-domain method for identification of structural modifications, which (1) unlike time-domain formulations, allows the identification to be focused on these parts of the frequency spectrum that have a high signal-to-noise ratio, and (2) unlike frequency-domain formulations, decreases the influence of numerical artifacts related to the particular choice of the FFT exponential window decay. In comparison to the time-domain approach proposed earlier, advantages of the proposed method are smaller computational cost and higher accuracy, which leads to reliable performance in more difficult identification cases. Analytical formulas for the first- and second-order sensitivity analysis are derived. The approach is based on a reduced nonparametric model, which has the form of a set of selected structural impulse responses. Such a model can be collected purely experimentally, which obviates the need for design and laborious updating of a parametric model, such as a finite element model. The approach is verified experimentally using a 26-node lab 3D truss structure and 30 identification cases of a single mass modification or two concurrent mass modifications.

  14. Are cranial biomechanical simulation data linked to known diets in extant taxa? A method for applying diet-biomechanics linkage models to infer feeding capability of extinct species.

    Science.gov (United States)

    Tseng, Zhijie Jack; Flynn, John J

    2015-01-01

    Performance of the masticatory system directly influences feeding and survival, so adaptive hypotheses often are proposed to explain craniodental evolution via functional morphology changes. However, the prevalence of "many-to-one" association of cranial forms and functions in vertebrates suggests a complex interplay of ecological and evolutionary histories, resulting in redundant morphology-diet linkages. Here we examine the link between cranial biomechanical properties for taxa with different dietary preferences in crown clade Carnivora, the most diverse clade of carnivorous mammals. We test whether hypercarnivores and generalists can be distinguished based on cranial mechanical simulation models, and how such diet-biomechanics linkages relate to morphology. Comparative finite element and geometric morphometrics analyses document that predicted bite force is positively allometric relative to skull strain energy; this is achieved in part by increased stiffness in larger skull models and shape changes that resist deformation and displacement. Size-standardized strain energy levels do not reflect feeding preferences; instead, caniform models have higher strain energy than feliform models. This caniform-feliform split is reinforced by a sensitivity analysis using published models for six additional taxa. Nevertheless, combined bite force-strain energy curves distinguish hypercarnivorous versus generalist feeders. These findings indicate that the link between cranial biomechanical properties and carnivoran feeding preference can be clearly defined and characterized, despite phylogenetic and allometric effects. Application of this diet-biomechanics linkage model to an analysis of an extinct stem carnivoramorphan and an outgroup creodont species provides biomechanical evidence for the evolution of taxa into distinct hypercarnivorous and generalist feeding styles prior to the appearance of crown carnivoran clades with similar feeding preferences.

  15. Are cranial biomechanical simulation data linked to known diets in extant taxa? A method for applying diet-biomechanics linkage models to infer feeding capability of extinct species.

    Directory of Open Access Journals (Sweden)

    Zhijie Jack Tseng

    Full Text Available Performance of the masticatory system directly influences feeding and survival, so adaptive hypotheses often are proposed to explain craniodental evolution via functional morphology changes. However, the prevalence of "many-to-one" association of cranial forms and functions in vertebrates suggests a complex interplay of ecological and evolutionary histories, resulting in redundant morphology-diet linkages. Here we examine the link between cranial biomechanical properties for taxa with different dietary preferences in crown clade Carnivora, the most diverse clade of carnivorous mammals. We test whether hypercarnivores and generalists can be distinguished based on cranial mechanical simulation models, and how such diet-biomechanics linkages relate to morphology. Comparative finite element and geometric morphometrics analyses document that predicted bite force is positively allometric relative to skull strain energy; this is achieved in part by increased stiffness in larger skull models and shape changes that resist deformation and displacement. Size-standardized strain energy levels do not reflect feeding preferences; instead, caniform models have higher strain energy than feliform models. This caniform-feliform split is reinforced by a sensitivity analysis using published models for six additional taxa. Nevertheless, combined bite force-strain energy curves distinguish hypercarnivorous versus generalist feeders. These findings indicate that the link between cranial biomechanical properties and carnivoran feeding preference can be clearly defined and characterized, despite phylogenetic and allometric effects. Application of this diet-biomechanics linkage model to an analysis of an extinct stem carnivoramorphan and an outgroup creodont species provides biomechanical evidence for the evolution of taxa into distinct hypercarnivorous and generalist feeding styles prior to the appearance of crown carnivoran clades with similar feeding preferences.

  16. Privacy-preserving record linkage on large real world datasets.

    Science.gov (United States)

    Randall, Sean M; Ferrante, Anna M; Boyd, James H; Bauer, Jacqueline K; Semmens, James B

    2014-08-01

    Record linkage typically involves the use of dedicated linkage units who are supplied with personally identifying information to determine individuals from within and across datasets. The personally identifying information supplied to linkage units is separated from clinical information prior to release by data custodians. While this substantially reduces the risk of disclosure of sensitive information, some residual risks still exist and remain a concern for some custodians. In this paper we trial a method of record linkage which reduces privacy risk still further on large real world administrative data. The method uses encrypted personal identifying information (bloom filters) in a probability-based linkage framework. The privacy preserving linkage method was tested on ten years of New South Wales (NSW) and Western Australian (WA) hospital admissions data, comprising in total over 26 million records. No difference in linkage quality was found when the results were compared to traditional probabilistic methods using full unencrypted personal identifiers. This presents as a possible means of reducing privacy risks related to record linkage in population level research studies. It is hoped that through adaptations of this method or similar privacy preserving methods, risks related to information disclosure can be reduced so that the benefits of linked research taking place can be fully realised. Copyright © 2013 Elsevier Inc. All rights reserved.
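The Bloom-filter encoding used in such privacy-preserving linkage can be sketched as hashing each name's character bigrams into a bit set and comparing filters with a Dice similarity. The filter size, hash count and padding below are illustrative choices, not the parameters of the study:

```python
import hashlib

def bigrams(name):
    """Character bigrams of a name, padded so first/last letters form bigrams."""
    s = f"_{name.lower()}_"
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom(name, m=200, k=4):
    """Encode a name's bigram set into an m-bit Bloom filter with k hashes.
    Only the bit positions leave the data custodian, not the name itself."""
    bits = set()
    for g in bigrams(name):
        for i in range(k):
            h = hashlib.sha256(f"{i}:{g}".encode()).hexdigest()
            bits.add(int(h, 16) % m)
    return bits

def dice(a, b):
    """Dice similarity of two bit sets; near 1 for likely matches."""
    return 2 * len(a & b) / (len(a) + len(b))

sim_close = dice(bloom("catherine"), bloom("katherine"))   # high: likely match
sim_far = dice(bloom("catherine"), bloom("smith"))         # low: non-match
```

In probability-based linkage, such similarities feed the match weights in place of exact or string comparisons on the unencrypted identifiers.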

  17. Stahel-Donoho kernel estimation for fixed design nonparametric regression models

    Institute of Scientific and Technical Information of China (English)

    LIN; Lu

    2006-01-01

    This paper reports a robust kernel estimation for fixed-design nonparametric regression models. A Stahel-Donoho kernel estimation is introduced, in which the weight functions depend on both the depths of the data and the distances between the design points and the estimation points. Based on a local approximation, a computational technique is given to approximate the incomputable depths of the errors; as a result, the new estimator is computationally efficient. The proposed estimator attains a high breakdown point and has good asymptotic behavior, including asymptotic normality and convergence in mean squared error. Unlike the depth-weighted estimator for parametric regression models, this depth-weighted nonparametric estimator has a simple variance structure, so its efficiency can be compared with that of the original estimator. Simulations show that the new method smooths the regression estimate and achieves a desirable balance between robustness and efficiency.
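The Stahel-Donoho depth weights themselves are not reproduced here, but the robustness idea can be illustrated with a simpler local-median smoother for a fixed design, which also resists gross outliers; the bandwidth and data are illustrative:

```python
import numpy as np

def local_median_smoother(x, y, grid, h):
    """Robust fixed-design smoother: local median of responses within
    bandwidth h. A simple robust alternative to mean-based kernel smoothing
    (the paper's estimator instead uses depth-based Stahel-Donoho weights)."""
    out = np.empty(len(grid))
    for j, g in enumerate(grid):
        mask = np.abs(x - g) <= h
        out[j] = np.median(y[mask])
    return out

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)                      # fixed design points
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)
y[::20] += 5.0                                  # inject gross outliers
grid = np.array([0.25, 0.75])
fit = local_median_smoother(x, y, grid, h=0.1)  # near [1, -1] despite outliers
```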

  18. DPpackage: Bayesian Semi- and Nonparametric Modeling in R

    Directory of Open Access Journals (Sweden)

    Alejandro Jara

    2011-04-01

    Full Text Available Data analysis sometimes requires the relaxation of parametric assumptions in order to gain modeling flexibility and robustness against mis-specification of the probability model. In the Bayesian context, this is accomplished by placing a prior distribution on a function space, such as the space of all probability distributions or the space of all regression functions. Unfortunately, posterior distributions ranging over function spaces are highly complex and hence sampling methods play a key role. This paper provides an introduction to a simple, yet comprehensive, set of programs for the implementation of some Bayesian nonparametric and semiparametric models in R, DPpackage. Currently, DPpackage includes models for marginal and conditional density estimation, receiver operating characteristic curve analysis, interval-censored data, binary regression data, item response data, longitudinal and clustered data using generalized linear mixed models, and regression data using generalized additive models. The package also contains functions to compute pseudo-Bayes factors for model comparison and for eliciting the precision parameter of the Dirichlet process prior, and a general purpose Metropolis sampling algorithm. To maximize computational efficiency, the actual sampling for each model is carried out using compiled C, C++ or Fortran code.

  19. The Utility of Nonparametric Transformations for Imputation of Survey Data

    Directory of Open Access Journals (Sweden)

    Robbins Michael W.

    2014-12-01

    Full Text Available Missing values present a prevalent problem in the analysis of establishment survey data. Multivariate imputation algorithms (which are used to fill in missing observations) tend to have the common limitation that imputations for continuous variables are sampled from Gaussian distributions. This limitation is addressed here through the use of robust marginal transformations. Specifically, kernel-density and empirical distribution-type transformations are discussed and are shown to have favorable properties when used for imputation of complex survey data. Although such techniques have wide applicability (i.e., they may be easily applied in conjunction with a wide array of imputation techniques), the proposed methodology is applied here with an algorithm for imputation in the USDA’s Agricultural Resource Management Survey. Data analysis and simulation results are used to illustrate the specific advantages of the robust methods when compared to the fully parametric techniques and to other relevant techniques such as predictive mean matching. To summarize, transformations based upon parametric densities are shown to distort several data characteristics in circumstances where the parametric model is ill fit; however, no circumstances are found in which the transformations based upon parametric models outperform the nonparametric transformations. As a result, the transformation based upon the empirical distribution (which is the most computationally efficient) is recommended over the other transformation procedures in practice.
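
    The empirical-distribution transformation recommended above can be sketched simply: map each observed value to a rank-based probability, impute on that scale, and map back through the empirical quantile function. Function names here are illustrative assumptions, not the ARMS implementation.

```python
def to_uniform(values):
    """Empirical-distribution transform: value -> rank / (n + 1), in (0, 1)."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    u = [0.0] * n
    for rank, i in enumerate(order, start=1):
        u[i] = rank / (n + 1)          # avoids exact 0 and 1
    return u

def from_uniform(u, observed):
    """Empirical quantile: map a probability back to the observed scale."""
    srt = sorted(observed)
    idx = min(int(u * len(srt)), len(srt) - 1)
    return srt[idx]
```

    A Gaussian-based imputation applied on the transformed scale then respects the marginal shape of the raw data, which is the property the article exploits.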

  20. A New Non-Parametric Approach to Galaxy Morphological Classification

    CERN Document Server

    Lotz, J M; Madau, P; Lotz, Jennifer M.; Primack, Joel; Madau, Piero

    2003-01-01

    We present two new non-parametric methods for quantifying galaxy morphology: the relative distribution of the galaxy pixel flux values (the Gini coefficient or G) and the second-order moment of the brightest 20% of the galaxy's flux (M20). We test the robustness of G and M20 to decreasing signal-to-noise and spatial resolution, and find that both measures are reliable to within 10% at average signal-to-noise per pixel greater than 3 and resolutions better than 1000 pc and 500 pc, respectively. We have measured G and M20, as well as concentration (C), asymmetry (A), and clumpiness (S), in the rest-frame near-ultraviolet/optical wavelengths for 150 bright local "normal" Hubble-type (E-Sd) galaxies and 104 0.05 < z < 0.25 ultra-luminous infrared galaxies (ULIRGs). We find that most local galaxies follow a tight sequence in G-M20-C, where early types have high G and C and low M20 and late-type spirals have lower G and C and higher M20. The majority of ULIRGs lie above the normal galaxy G-M20 sequence...
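
    The Gini coefficient of the pixel flux distribution can be computed directly from the sorted absolute fluxes. A minimal sketch following the usual galaxy-morphology convention (G = 0 when all pixels are equally bright, G = 1 when a single pixel carries all the flux):

```python
def gini(fluxes):
    """Gini coefficient of a set of pixel flux values."""
    x = sorted(abs(f) for f in fluxes)    # |flux| tolerates noisy negative pixels
    n = len(x)
    mean = sum(x) / n
    # G = sum_i (2i - n - 1) |x_i| / (mean * n * (n - 1)), x sorted ascending
    return sum((2 * i - n - 1) * xi for i, xi in enumerate(x, start=1)) / (
        mean * n * (n - 1))
```

    Unlike concentration, this statistic does not require a well-defined galaxy center, which is why it remains meaningful for the disturbed morphologies of ULIRGs.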

  1. Nonparametric Bayes modeling for case control studies with many predictors.

    Science.gov (United States)

    Zhou, Jing; Herring, Amy H; Bhattacharya, Anirban; Olshan, Andrew F; Dunson, David B

    2016-03-01

    It is common in biomedical research to run case-control studies involving high-dimensional predictors, with the main goal being detection of the sparse subset of predictors having a significant association with disease. Usual analyses rely on independent screening, considering each predictor one at a time, or in some cases on logistic regression assuming no interactions. We propose a fundamentally different approach based on a nonparametric Bayesian low rank tensor factorization model for the retrospective likelihood. Our model allows a very flexible structure in characterizing the distribution of multivariate variables as unknown and without any linear assumptions as in logistic regression. Predictors are excluded only if they have no impact on disease risk, either directly or through interactions with other predictors. Hence, we obtain an omnibus approach for screening for important predictors. Computation relies on an efficient Gibbs sampler. The methods are shown to have high power and low false discovery rates in simulation studies, and we consider an application to an epidemiology study of birth defects.

  2. Adaptive Neural Network Nonparametric Identifier With Normalized Learning Laws.

    Science.gov (United States)

    Chairez, Isaac

    2016-04-05

    This paper addresses the design of a normalized convergent learning law for neural networks (NNs) with continuous dynamics. The NN is used here to obtain a nonparametric model for uncertain systems described by a set of ordinary differential equations. The sources of uncertainty are external perturbations and poor knowledge of the nonlinear function describing the system dynamics. A new adaptive algorithm based on normalized algorithms was used to adjust the weights of the NN, derived by means of a nonstandard logarithmic Lyapunov function (LLF). Two identifiers were designed using two variations of LLFs, leading to a normalized learning law for the first identifier and a variable-gain normalized learning law for the second. For the second identifier, the inclusion of normalized learning laws reduces the size of the convergence region obtained as the solution of the practical stability analysis. Moreover, the convergence speed of the learning laws depends inversely on the norm of the errors, which avoids peaking transient behavior in the time evolution of the weights and accelerates the convergence of the identification error. A numerical example demonstrates the improvements achieved by the proposed algorithm compared with classical schemes using non-normalized continuous learning methods; a comparison of the identification performance of the non-normalized identifier and the identifiers developed here shows the benefits of the proposed learning law.

  3. Bayesian nonparametric meta-analysis using Polya tree mixture models.

    Science.gov (United States)

    Branscum, Adam J; Hanson, Timothy E

    2008-09-01

    Summary. A common goal in meta-analysis is estimation of a single effect measure using data from several studies that are each designed to address the same scientific inquiry. Because studies are typically conducted in geographically dispersed locations, recent developments in the statistical analysis of meta-analytic data involve the use of random effects models that account for study-to-study variability attributable to differences in environments, demographics, genetics, and other sources that lead to heterogeneity in populations. Stemming from asymptotic theory, study-specific summary statistics are modeled according to normal distributions with means representing latent true effect measures. A parametric approach subsequently models these latent measures using a normal distribution, which is strictly a convenient modeling assumption absent of theoretical justification. To eliminate the influence of overly restrictive parametric models on inferences, we consider a broader class of random effects distributions. We develop a novel hierarchical Bayesian nonparametric Polya tree mixture (PTM) model. We present methodology for testing the PTM versus a normal random effects model. These methods provide researchers a straightforward approach for conducting a sensitivity analysis of the normality assumption for random effects. An application involving meta-analysis of epidemiologic studies designed to characterize the association between alcohol consumption and breast cancer is presented, which together with results from simulated data highlight the performance of PTMs in the presence of nonnormality of effect measures in the source population.

  4. Nonparametric variance estimation in the analysis of microarray data: a measurement error approach.

    Science.gov (United States)

    Carroll, Raymond J; Wang, Yuedong

    2008-01-01

    This article investigates the effects of measurement error on the estimation of nonparametric variance functions. We show that either ignoring measurement error or direct application of the simulation extrapolation, SIMEX, method leads to inconsistent estimators. Nevertheless, the direct SIMEX method can reduce bias relative to a naive estimator. We further propose a permutation SIMEX method which leads to consistent estimators in theory. The performance of both SIMEX methods depends on approximations to the exact extrapolants. Simulations show that both SIMEX methods perform better than ignoring measurement error. The methodology is illustrated using microarray data from colon cancer patients.
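
    The SIMEX idea can be illustrated on the simplest measurement-error problem: estimating var(X) when only W = X + U is observed, with known error variance. The sketch below is an illustrative assumption (plain SIMEX, not the authors' permutation SIMEX): it adds extra noise at increasing levels lambda, fits the naive estimate as a linear function of lambda, and extrapolates back to lambda = -1.

```python
import random

def sample_var(v):
    """Unbiased sample variance."""
    mu = sum(v) / len(v)
    return sum((x - mu) ** 2 for x in v) / (len(v) - 1)

def simex_variance(w, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), reps=200, seed=0):
    """SIMEX estimate of var(X) from W = X + U, U ~ N(0, sigma_u^2).
    E[var(W + extra noise at level lam)] = var(X) + (1 + lam) * sigma_u^2,
    which is linear in lam, so extrapolating the fitted line to lam = -1
    removes the measurement-error contribution."""
    rng = random.Random(seed)
    pts = [(0.0, sample_var(w))]                  # naive estimate at lam = 0
    for lam in lambdas:
        sims = [sample_var([wi + rng.gauss(0.0, sigma_u * lam ** 0.5)
                            for wi in w])
                for _ in range(reps)]
        pts.append((lam, sum(sims) / reps))
    m = len(pts)                                  # least-squares line fit
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts); sxy = sum(p[0] * p[1] for p in pts)
    slope = (m * sxy - sx * sy) / (m * sxx - sx ** 2)
    intercept = (sy - slope * sx) / m
    return intercept - slope                      # extrapolated value at lam = -1
```

    The article's point is that for nonparametric variance *functions* this direct extrapolation is only approximate (the exact extrapolant is not linear), which motivates the permutation variant.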

  5. Asymptotic theory of nonparametric regression estimates with censored data

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    For regression analysis, some useful information may have been lost when the responses are right-censored. To estimate nonparametric functions, several estimates based on censored data have been proposed and their consistency and convergence rates have been studied in the literature, but the optimal rates of global convergence have not been obtained yet. Because of the possible information loss, one may think that it is impossible for an estimate based on censored data to achieve the optimal rates of global convergence for nonparametric regression, which were established by Stone based on complete data. This paper constructs a regression spline estimate of a general nonparametric regression function based on right-censored response data, and proves, under some regularity conditions, that this estimate achieves the optimal rates of global convergence for nonparametric regression. Since the parameters for the nonparametric regression estimate have to be chosen based on a data-driven criterion, we also obtain the asymptotic optimality of the AIC, AICC, GCV, Cp and FPE criteria in the process of selecting the parameters.

  6. Rediscovery of Good-Turing estimators via Bayesian nonparametrics.

    Science.gov (United States)

    Favaro, Stefano; Nipoti, Bernardo; Teh, Yee Whye

    2016-03-01

    The problem of estimating discovery probabilities originated in the context of statistical ecology, and in recent years it has become popular due to its frequent appearance in challenging applications arising in genetics, bioinformatics, linguistics, design of experiments, machine learning, etc. A full range of statistical approaches, parametric and nonparametric as well as frequentist and Bayesian, has been proposed for estimating discovery probabilities. In this article, we investigate the relationships between the celebrated Good-Turing approach, a frequentist nonparametric approach developed in the 1940s, and a Bayesian nonparametric approach recently introduced in the literature. Specifically, under the assumption of a two-parameter Poisson-Dirichlet prior, we show that Bayesian nonparametric estimators of discovery probabilities are asymptotically equivalent, for a large sample size, to suitably smoothed Good-Turing estimators. As a by-product of this result, we introduce and investigate a methodology for deriving exact and asymptotic credible intervals to be associated with the Bayesian nonparametric estimators of discovery probabilities. The proposed methodology is illustrated through a comprehensive simulation study and the analysis of Expressed Sequence Tags data generated by sequencing a benchmark complementary DNA library.
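
    The basic Good-Turing discovery-probability estimator referenced above is strikingly simple: the probability that the next observation is a previously unseen species is estimated by the fraction of the sample made up of singletons.

```python
from collections import Counter

def good_turing_new(sample):
    """Good-Turing estimate of the discovery probability, i.e. the chance
    that the next draw is a species not yet observed: m1 / n, where m1 is
    the number of species seen exactly once and n is the sample size."""
    counts = Counter(sample)
    m1 = sum(1 for c in counts.values() if c == 1)
    return m1 / len(sample)
```

    The Bayesian nonparametric estimators discussed in the article behave, for large samples, like smoothed versions of exactly this quantity under a two-parameter Poisson-Dirichlet prior.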

  7. A comparison of nonparametric estimators of survival under left-truncation and right-censoring motivated by a case study

    Directory of Open Access Journals (Sweden)

    Mauro Gasparini

    2013-05-01

    Full Text Available We present an application of nonparametric estimation of survival in the presence of left-truncated and right-censored data. We confirm the well-known unstable behavior of the survival estimates when the risk set is small and there are too few early deaths. However, in our real scenario, where only a few death times are available, the proper nonparametric maximum likelihood estimator, and its usual modification, behave less badly than alternative methods proposed in the literature. The relative merits of the different estimators are discussed in a simulation study extending the settings of the case study to more general scenarios.
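
    The nonparametric maximum likelihood estimator for left-truncated, right-censored data is a product-limit estimator in which a subject enters the risk set only after its truncation (entry) time. A minimal sketch, illustrative rather than the case study's code:

```python
def product_limit_lt_rc(entry, time, event):
    """Product-limit survival estimate for left-truncated (entry) and
    right-censored data: subject i is at risk at t iff entry[i] < t <= time[i].
    event[i] is True for a death, False for censoring.
    Returns a list of (death_time, S(t)) pairs."""
    death_times = sorted({t for t, e in zip(time, event) if e})
    surv, curve = 1.0, []
    for t in death_times:
        at_risk = sum(1 for en, ti in zip(entry, time) if en < t <= ti)
        deaths = sum(1 for ti, e in zip(time, event) if e and ti == t)
        surv *= 1.0 - deaths / at_risk
        curve.append((t, surv))
    return curve
```

    With small early risk sets (the unstable regime noted above), a single early death can drag S(t) sharply toward zero, which is exactly the instability discussed in the article.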

  8. Bayesian Nonparametric Estimation for Dynamic Treatment Regimes with Sequential Transition Times.

    Science.gov (United States)

    Xu, Yanxun; Müller, Peter; Wahed, Abdus S; Thall, Peter F

    2016-01-01

    We analyze a dataset arising from a clinical trial involving multi-stage chemotherapy regimes for acute leukemia. The trial design was a 2 × 2 factorial for frontline therapies only. Motivated by the idea that subsequent salvage treatments affect survival time, we model therapy as a dynamic treatment regime (DTR), that is, an alternating sequence of adaptive treatments or other actions and transition times between disease states. These sequences may vary substantially between patients, depending on how the regime plays out. To evaluate the regimes, mean overall survival time is expressed as a weighted average of the means of all possible sums of successive transition times. We assume a Bayesian nonparametric survival regression model for each transition time, with a dependent Dirichlet process prior and Gaussian process base measure (DDP-GP). Posterior simulation is implemented by Markov chain Monte Carlo (MCMC) sampling. We provide general guidelines for constructing a prior using empirical Bayes methods. The proposed approach is compared with inverse probability of treatment weighting, including a doubly robust augmented version of this approach, for both single-stage and multi-stage regimes with treatment assignment depending on baseline covariates. The simulations show that the proposed nonparametric Bayesian approach can substantially improve inference compared to existing methods. An R program for implementing the DDP-GP-based Bayesian nonparametric analysis is freely available at https://www.ma.utexas.edu/users/yxu/.

  9. On the Choice of Difference Sequence in a Unified Framework for Variance Estimation in Nonparametric Regression

    KAUST Repository

    Dai, Wenlin

    2017-09-01

    Difference-based methods do not require estimating the mean function in nonparametric regression and are therefore popular in practice. In this paper, we propose a unified framework for variance estimation that systematically combines the linear regression method with higher-order difference estimators. The unified framework greatly enriches the existing literature on variance estimation and includes most existing estimators as special cases. More importantly, it provides a principled way to solve the challenging difference-sequence selection problem that has remained a long-standing controversial issue in nonparametric regression for several decades. Using both theory and simulations, we recommend using the ordinary difference sequence in the unified framework, regardless of whether the sample size is small or the signal-to-noise ratio is large. Finally, to cater for the demands of applications, we have developed a unified R package, named VarED, that integrates the existing difference-based estimators and the unified estimators in nonparametric regression and have made it freely available in the R statistical program http://cran.r-project.org/web/packages/.
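
    The simplest member of the difference-based family (built on the ordinary first-order difference sequence) estimates the residual variance directly from first differences, with no estimate of the mean function at all:

```python
def diff_variance(y):
    """First-order difference-based estimate of sigma^2 in y_i = f(x_i) + e_i:
    sigma^2 ~= sum_i (y_{i+1} - y_i)^2 / (2 * (n - 1)).
    The factor 2 appears because each difference of two independent errors has
    variance 2 * sigma^2; a smooth f contributes only a small bias per term."""
    n = len(y)
    return sum((y[i + 1] - y[i]) ** 2 for i in range(n - 1)) / (2 * (n - 1))
```

    The unified framework described in the abstract generalizes this by combining longer difference sequences with a regression step; this first-order version is the minimal special case.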

  10. Non-parametric combination and related permutation tests for neuroimaging.

    Science.gov (United States)

    Winkler, Anderson M; Webster, Matthew A; Brooks, Jonathan C; Tracey, Irene; Smith, Stephen M; Nichols, Thomas E

    2016-04-01

    In this work, we show how permutation methods can be applied to combination analyses such as those that include multiple imaging modalities, multiple data acquisitions of the same modality, or simply multiple hypotheses on the same data. Using the well-known definition of union-intersection tests and closed testing procedures, we use synchronized permutations to correct for such multiplicity of tests, allowing flexibility to integrate imaging data with different spatial resolutions and surface- and/or volume-based representations of the brain, as well as non-imaging data. For the problem of joint inference, we propose and evaluate a modification of the recently introduced non-parametric combination (NPC) methodology, such that instead of a two-phase algorithm with large data storage requirements, the inference can be performed in a single phase, with reasonable computational demands. The method compares favorably to classical multivariate tests (such as MANCOVA), even when the latter is assessed using permutations. We also evaluate, in the context of permutation tests, various combining methods that have been proposed in the past decades, and identify those that provide the best control over error rate and power across a range of situations. We show that one of these, the method of Tippett, provides a link between correction for the multiplicity of tests and their combination. Finally, we discuss how the correction can solve certain problems of multiple comparisons in one-way ANOVA designs, and how the combination is distinguished from conjunctions, even though both can be assessed using permutation tests. We also provide a common algorithm that accommodates combination and correction.
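
    A toy version of the synchronized-permutation machinery with the Tippett (min-p) combining function can be written compactly. The sketch below is an illustrative assumption (real NPC implementations such as PALM are far more general): it jointly tests a two-group difference across two outcomes, using one shuffle per permutation to drive every outcome, so the dependence between outcomes is preserved.

```python
import random

def npc_tippett(group_a, group_b, n_perm=499, seed=0):
    """Single-phase non-parametric combination with Tippett's combining
    function. group_a and group_b are lists of per-subject outcome tuples.
    Returns the joint permutation p-value."""
    rng = random.Random(seed)
    n_a = len(group_a)
    pooled = group_a + group_b
    n_out = len(pooled[0])

    def stats(sample):
        a, b = sample[:n_a], sample[n_a:]
        return [abs(sum(r[k] for r in a) / len(a) - sum(r[k] for r in b) / len(b))
                for k in range(n_out)]

    perms = [stats(pooled)]                  # identity permutation first
    for _ in range(n_perm):
        shuffled = pooled[:]
        rng.shuffle(shuffled)                # one shuffle drives every outcome
        perms.append(stats(shuffled))

    m = len(perms)
    # partial permutation p-value of each permutation, per outcome
    pvals = [[sum(1 for q in perms if q[k] >= perms[j][k]) / m
              for k in range(n_out)]
             for j in range(m)]
    combined = [min(p) for p in pvals]       # Tippett combining function
    return sum(1 for c in combined if c <= combined[0]) / m
```

    Because every permutation's combined statistic is computed from the same set of shuffles, the null distribution of the combined statistic comes for free, which is what makes the single-phase algorithm possible.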

  11. Kinematic Analysis of the Planar Four-Bar Linkage Mechanism Based on the Analytic Method and ADAMS Simulation

    Institute of Scientific and Technical Information of China (English)

    何伟; 李震; 王建彬; 张建华

    2011-01-01

    At present, the main methods for kinematic analysis of the planar four-bar linkage mechanism are the analytic method and ADAMS simulation; both can recover the motion law of the mechanism. This paper first uses the analytic method (the complex-vector approach, with the crank angular velocity held constant) to derive the motion law of the mechanism, solves the resulting equations in MATLAB, and plots the motion of the coupler and rocker; it then runs an ADAMS simulation and compares the results of the two methods.
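
    The analytic position solution closes the vector loop at each crank angle. A compact Python sketch (an illustration under assumed conventions, not the paper's MATLAB complex-vector code) finds the coupler and rocker pins by circle intersection:

```python
import math

def fourbar_position(a, b, c, d, theta2, open_branch=True):
    """Planar four-bar position analysis by loop closure.
    a = crank, b = coupler, c = rocker, d = ground link length;
    ground pivots at (0, 0) and (d, 0); theta2 = crank angle in radians.
    Returns the crank pin A and the rocker pin B."""
    ax, ay = a * math.cos(theta2), a * math.sin(theta2)
    # B lies on a circle of radius b about A and radius c about (d, 0)
    dx, dy = d - ax, -ay
    dist = math.hypot(dx, dy)
    if dist > b + c or dist < abs(b - c):
        raise ValueError("linkage cannot be assembled at this crank angle")
    h = (b * b - c * c + dist * dist) / (2.0 * dist)   # along-axis offset
    k = math.sqrt(max(b * b - h * h, 0.0))             # off-axis offset
    sign = 1.0 if open_branch else -1.0                # choose assembly branch
    bx = ax + (h * dx - sign * k * dy) / dist
    by = ay + (h * dy + sign * k * dx) / dist
    return (ax, ay), (bx, by)
```

    Sweeping theta2 through a full revolution and recording the coupler and rocker angles reproduces the motion curves that the paper plots in MATLAB and verifies in ADAMS.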

  12. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Directory of Open Access Journals (Sweden)

    Saerom Park

    Full Text Available Market impact cost is the most significant portion of implicit transaction costs and can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary costs directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.

  13. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Science.gov (United States)

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs and can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary costs directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.

  14. A genome-wide linkage study of bipolar disorder and co-morbid migraine

    DEFF Research Database (Denmark)

    Oedegaard, K. J.; Greenwood, T. A.; Lunde, Asger

    2010-01-01

    ...Genetics Initiative wave 4 data set. In this analysis we selected only those families in which at least two members were diagnosed with migraine by a doctor according to the patients' reports. Nonparametric linkage analysis performed on 31 families segregating both BPAD and migraine identified a linkage signal on chromosome 4q24 for migraine (but not BPAD) with a peak LOD of 2.26. This region has previously been implicated in two independent migraine linkage studies. In addition, we identified a locus on chromosome 20p11 with overlapping elevated LOD scores for both the migraine (LOD=1.95) and BPAD (LOD=1.67) phenotypes... These findings support a migraine locus on chromosome 4 (not co-segregating with BPAD) in a sample of BPAD families with comorbid migraine, and suggest a susceptibility locus on chromosome 20, harboring a gene for the migraine/BPAD phenotype. Together these data suggest that some genes may predispose to both bipolar disorder and migraine.

  15. Nonparametric estimation of a convex bathtub-shaped hazard function.

    Science.gov (United States)

    Jankowski, Hanna K; Wellner, Jon A

    2009-11-01

    In this paper, we study the nonparametric maximum likelihood estimator (MLE) of a convex hazard function. We show that the MLE is consistent and converges at a local rate of n^(2/5) at points x_0 where the true hazard function is positive and strictly convex. Moreover, we establish the pointwise asymptotic distribution theory of our estimator under these same assumptions. One notable feature of the nonparametric MLE studied here is that no arbitrary choice of tuning parameter (or complicated data-adaptive selection of the tuning parameter) is required.

  16. t-tests, non-parametric tests, and large studies—a paradox of statistical practice?

    Directory of Open Access Journals (Sweden)

    Fagerland Morten W

    2012-06-01

    Full Text Available Abstract. Background: During the last 30 years, the median sample size of research studies published in high-impact medical journals has increased manyfold, while the use of non-parametric tests has increased at the expense of t-tests. This paper explores this paradoxical practice and illustrates its consequences. Methods: A simulation study is used to compare the rejection rates of the Wilcoxon-Mann-Whitney (WMW) test and the two-sample t-test for increasing sample size. Samples are drawn from skewed distributions with equal means and medians but with a small difference in spread. A hypothetical case study is used for illustration and motivation. Results: The WMW test produces, on average, smaller p-values than the t-test. This discrepancy increases with increasing sample size, skewness, and difference in spread. For heavily skewed data, the proportion of p ... Conclusions: Non-parametric tests are most useful for small studies. Using non-parametric tests in large studies may provide answers to the wrong question, thus confusing readers. For studies with a large sample size, t-tests and their corresponding confidence intervals can and should be used even for heavily skewed data.

  17. Non-parametric foreground subtraction for 21cm epoch of reionization experiments

    CERN Document Server

    Harker, Geraint; Bernardi, Gianni; Brentjens, Michiel A; De Bruyn, A G; Ciardi, Benedetta; Jelic, Vibor; Koopmans, Leon V E; Labropoulos, Panagiotis; Mellema, Garrelt; Offringa, Andre; Pandey, V N; Schaye, Joop; Thomas, Rajat M; Yatawatta, Sarod

    2009-01-01

    An obstacle to the detection of redshifted 21cm emission from the epoch of reionization (EoR) is the presence of foregrounds which exceed the cosmological signal in intensity by orders of magnitude. We argue that in principle it would be better to fit the foregrounds non-parametrically - allowing the data to determine their shape - rather than selecting some functional form in advance and then fitting its parameters. Non-parametric fits often suffer from other problems, however. We discuss these before suggesting a non-parametric method, Wp smoothing, which seems to avoid some of them. After outlining the principles of Wp smoothing we describe an algorithm used to implement it. We then apply Wp smoothing to a synthetic data cube for the LOFAR EoR experiment. The performance of Wp smoothing, measured by the extent to which it is able to recover the variance of the cosmological signal and to which it avoids leakage of power from the foregrounds, is compared to that of a parametric fit, and to another non-parame...

  18. Nonparametric estimation of the heterogeneity of a random medium using compound Poisson process modeling of wave multiple scattering

    Science.gov (United States)

    Le Bihan, Nicolas; Margerin, Ludovic

    2009-07-01

    In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.

  19. Nonparametric estimation of the heterogeneity of a random medium using Compound Poisson Process modeling of wave multiple scattering

    CERN Document Server

    Bihan, Nicolas Le

    2009-01-01

    In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using Compound Poisson Processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.

  20. Application of a nonparametric approach to analyze delta-pCO2 data from the Southern Ocean

    CSIR Research Space (South Africa)

    Pretorius, WB

    2011-11-01

    Full Text Available In this paper the authors discuss the application of a classical nonparametric inference approach to analysing delta-pCO2 measurements from the Southern Ocean, a novel approach to analysing data in this area, as well as comparing results...

  1. Measuring the price responsiveness of gasoline demand: economic shape restrictions and nonparametric demand estimation

    OpenAIRE

    Blundell, Richard; Horowitz, Joel L.; Parey, Matthias

    2011-01-01

    This paper develops a new method for estimating a demand function and the welfare consequences of price changes. The method is applied to gasoline demand in the U.S. and is applicable to other goods. The method uses shape restrictions derived from economic theory to improve the precision of a nonparametric estimate of the demand function. Using data from the U.S. National Household Travel Survey, we show that the restrictions are consistent with the data on gasoline demand and remove the anom...

  2. Linkage analysis of quantitative refraction and refractive errors in the Beaver Dam Eye Study.

    Science.gov (United States)

    Klein, Alison P; Duggal, Priya; Lee, Kristine E; Cheng, Ching-Yu; Klein, Ronald; Bailey-Wilson, Joan E; Klein, Barbara E K

    2011-07-13

    Refraction, as measured by spherical equivalent, is the need for an external lens to focus images on the retina. While genetic factors play an important role in the development of refractive errors, few susceptibility genes have been identified. However, several regions of linkage have been reported for myopia (2q, 4q, 7q, 12q, 17q, 18p, 22q, and Xq) and for quantitative refraction (1p, 3q, 4q, 7p, 8p, and 11p). To replicate previously identified linkage peaks and to identify novel loci that influence quantitative refraction and refractive errors, linkage analysis of spherical equivalent, myopia, and hyperopia in the Beaver Dam Eye Study was performed. Nonparametric, sibling-pair, genome-wide linkage analyses of refraction (spherical equivalent adjusted for age, education, and nuclear sclerosis), myopia, and hyperopia in 834 sibling pairs within 486 extended pedigrees were performed. Suggestive evidence of linkage was found for hyperopia on chromosome 3, region q26 (empiric P = 5.34 × 10^-4), a region that had shown significant genome-wide evidence of linkage to refraction and some evidence of linkage to hyperopia. In addition, the analysis replicated previously reported genome-wide significant linkages to 22q11 of adjusted refraction and myopia (empiric P = 4.43 × 10^-3 and 1.48 × 10^-3, respectively) and to 7p15 of refraction (empiric P = 9.43 × 10^-4). Evidence was also found of linkage to refraction on 7q36 (empiric P = 2.32 × 10^-3), a region previously linked to high myopia. The findings provide further evidence that genes controlling refractive errors are located on 3q26, 7p15, 7q36, and 22q11.

  3. Nonparametric Cointegration Analysis of Fractional Systems With Unknown Integration Orders

    DEFF Research Database (Denmark)

    Nielsen, Morten Ørregaard

    2009-01-01

    In this paper a nonparametric variance ratio testing approach is proposed for determining the number of cointegrating relations in fractionally integrated systems. The test statistic is easily calculated without prior knowledge of the integration order of the data, the strength of the cointegrating...

  4. Non-parametric analysis of rating transition and default data

    DEFF Research Database (Denmark)

    Fledelius, Peter; Lando, David; Perch Nielsen, Jens

    2004-01-01

    We demonstrate the use of non-parametric intensity estimation - including construction of pointwise confidence sets - for analyzing rating transition data. We find that transition intensities away from the class studied here for illustration strongly depend on the direction of the previous move, but that this dependence vanishes after 2-3 years.

  5. Influence of test and person characteristics on nonparametric appropriateness measurement

    NARCIS (Netherlands)

    Meijer, Rob R.; Molenaar, Ivo W.; Sijtsma, Klaas

    1994-01-01

    Appropriateness measurement in nonparametric item response theory modeling is affected by the reliability of the items, the test length, the type of aberrant response behavior, and the percentage of aberrant persons in the group. The percentage of simulees defined a priori as aberrant responders tha

  7. Estimation of Spatial Dynamic Nonparametric Durbin Models with Fixed Effects

    Science.gov (United States)

    Qian, Minghui; Hu, Ridong; Chen, Jianwei

    2016-01-01

    Spatial panel data models have been widely studied and applied in both scientific and social science disciplines, especially in the analysis of spatial influence. In this paper, we consider the spatial dynamic nonparametric Durbin model (SDNDM) with fixed effects, which takes nonlinear factors into account based on the spatial dynamic panel…

  8. Uniform Consistency for Nonparametric Estimators in Null Recurrent Time Series

    DEFF Research Database (Denmark)

    Gao, Jiti; Kanaya, Shin; Li, Degui

    2015-01-01

    This paper establishes uniform consistency results for nonparametric kernel density and regression estimators when time series regressors concerned are nonstationary null recurrent Markov chains. Under suitable regularity conditions, we derive uniform convergence rates of the estimators. Our results can be viewed as a nonstationary extension of some well-known uniform consistency results for stationary time series.

  9. Non-parametric Bayesian inference for inhomogeneous Markov point processes

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    With reference to a specific data set, we consider how to perform a flexible non-parametric Bayesian analysis of an inhomogeneous point pattern modelled by a Markov point process, with a location dependent first order term and pairwise interaction only. A priori we assume that the first order term...

  10. Investigating the cultural patterns of corruption: A nonparametric analysis

    OpenAIRE

    Halkos, George; Tzeremes, Nickolaos

    2011-01-01

    By using a sample of 77 countries our analysis applies several nonparametric techniques in order to reveal the link between national culture and corruption. Based on Hofstede’s cultural dimensions and the corruption perception index, the results reveal that countries with higher levels of corruption tend to have higher power distance and collectivism values in their society.

  11. Coverage Accuracy of Confidence Intervals in Nonparametric Regression

    Institute of Scientific and Technical Information of China (English)

    Song-xi Chen; Yong-song Qin

    2003-01-01

    Point-wise confidence intervals for a nonparametric regression function with random design points are considered. The confidence intervals are those based on the traditional normal approximation and the empirical likelihood. Their coverage accuracy is assessed by developing the Edgeworth expansions for the coverage probabilities. It is shown that the empirical likelihood confidence intervals are Bartlett correctable.
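
    Of the two interval types compared in this record, the normal-approximation one is simple enough to sketch. Below is a rough pointwise CI for the Nadaraya-Watson estimator with a plug-in variance; the fixed bandwidth and the neglect of smoothing bias are simplifying assumptions, not part of the paper:

    ```python
    import numpy as np

    def nw_ci(x0, X, Y, h, z=1.96):
        """Nadaraya-Watson estimate at x0 with a rough normal-approximation
        pointwise CI (plug-in variance; smoothing bias is ignored)."""
        K = np.exp(-0.5 * ((x0 - X) / h) ** 2)   # Gaussian kernel
        w = K / K.sum()                          # normalized weights
        m = w @ Y                                # regression estimate
        sigma2 = w @ (Y - m) ** 2                # local residual variance
        se = np.sqrt(sigma2 * (w ** 2).sum())
        return m - z * se, m, m + z * se

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, 300)
    Y = np.sin(2 * np.pi * X) + rng.normal(0, 0.2, 300)
    lo, m, hi = nw_ci(0.25, X, Y, h=0.05)        # true value is sin(pi/2) = 1
    ```

    The empirical likelihood intervals studied in the paper replace the normal pivot with a likelihood-ratio calibration, which is what makes them Bartlett correctable.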

  12. Homothetic Efficiency and Test Power: A Non-Parametric Approach

    NARCIS (Netherlands)

    J. Heufer (Jan); P. Hjertstrand (Per)

    2015-01-01

    We provide a nonparametric revealed preference approach to demand analysis based on homothetic efficiency. Homotheticity is a useful restriction but data rarely satisfy testable conditions. To overcome this we provide a way to estimate homothetic efficiency of consump

  14. Nonparametric Bayesian inference of the microcanonical stochastic block model

    Science.gov (United States)

    Peixoto, Tiago P.

    2017-01-01

    A principled approach to characterize the hidden modular structure of networks is to formulate generative models and then infer their parameters from data. When the desired structure is composed of modules or "communities," a suitable choice for this task is the stochastic block model (SBM), where nodes are divided into groups, and the placement of edges is conditioned on the group memberships. Here, we present a nonparametric Bayesian method to infer the modular structure of empirical networks, including the number of modules and their hierarchical organization. We focus on a microcanonical variant of the SBM, where the structure is imposed via hard constraints, i.e., the generated networks are not allowed to violate the patterns imposed by the model. We show how this simple model variation allows simultaneously for two important improvements over more traditional inference approaches: (1) deeper Bayesian hierarchies, with noninformative priors replaced by sequences of priors and hyperpriors, which not only remove limitations that seriously degrade the inference on large networks but also reveal structures at multiple scales; (2) a very efficient inference algorithm that scales well not only for networks with a large number of nodes and edges but also with an unlimited number of modules. We show also how this approach can be used to sample modular hierarchies from the posterior distribution, as well as to perform model selection. We discuss and analyze the differences between sampling from the posterior and simply finding the single parameter estimate that maximizes it. Furthermore, we expose a direct equivalence between our microcanonical approach and alternative derivations based on the canonical SBM.

  15. Nonparametric Coupled Bayesian Dictionary and Classifier Learning for Hyperspectral Classification.

    Science.gov (United States)

    Akhtar, Naveed; Mian, Ajmal

    2017-10-03

    We present a principled approach to learn a discriminative dictionary along with a linear classifier for hyperspectral classification. Our approach places Gaussian Process priors over the dictionary to account for the relative smoothness of the natural spectra, whereas the classifier parameters are sampled from multivariate Gaussians. We employ two Beta-Bernoulli processes to jointly infer the dictionary and the classifier. These processes are coupled under the same sets of Bernoulli distributions. In our approach, these distributions signify the frequency of dictionary-atom usage in representing class-specific training spectra, which also makes the dictionary discriminative. Due to the coupling between the dictionary and the classifier, the popularity of the atoms for representing different classes gets encoded into the classifier. This helps in predicting the class labels of test spectra, which are first represented over the dictionary by solving a simultaneous sparse optimization problem; the labels are then predicted by feeding the resulting representations to the classifier. Our approach exploits the nonparametric Bayesian framework to automatically infer the dictionary size, the key parameter in discriminative dictionary learning. Moreover, it also has the desirable property of adaptively learning the association between the dictionary atoms and the class labels by itself. We use Gibbs sampling to infer the posterior probability distributions over the dictionary and the classifier under the proposed model, for which we derive analytical expressions. To establish the effectiveness of our approach, we test it on benchmark hyperspectral images. The classification performance is compared with state-of-the-art dictionary-learning-based classification methods.

  16. Nonparametric model reconstruction for stochastic differential equations from discretely observed time-series data.

    Science.gov (United States)

    Ohkubo, Jun

    2011-12-01

    A scheme is developed for estimating state-dependent drift and diffusion coefficients in a stochastic differential equation from time-series data. The scheme does not require parametric forms for the drift and diffusion coefficients to be specified in advance. To perform the nonparametric estimation, a maximum likelihood method is combined with a concept based on kernel density estimation. To deal with discrete observation or sparsity of the time-series data, a local linearization method is employed, which enables fast estimation.
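
    The conditional-moment idea behind such estimators can be illustrated with a simple kernel (Kramers-Moyal-style) estimate of drift and squared diffusion on a simulated Ornstein-Uhlenbeck path. This is a crude stand-in for the record's likelihood-based scheme, and the bandwidth and grid are illustrative choices:

    ```python
    import numpy as np

    def drift_diffusion(x, dt, grid, h):
        """Kernel estimates of the drift a(x) = E[dX|X=x]/dt and squared
        diffusion b2(x) = Var[dX|X=x]/dt from a discretely observed path."""
        dx = np.diff(x)
        xs = x[:-1]
        a, b2 = [], []
        for g in grid:
            w = np.exp(-0.5 * ((xs - g) / h) ** 2)   # Gaussian kernel weights
            w /= w.sum()
            m1 = w @ dx                              # first conditional moment
            m2 = w @ dx**2                           # second conditional moment
            a.append(m1 / dt)
            b2.append((m2 - m1**2) / dt)
        return np.array(a), np.array(b2)

    # simulated Ornstein-Uhlenbeck path: dX = -X dt + 0.5 dW
    rng = np.random.default_rng(0)
    dt, n = 0.01, 200_000
    x = np.empty(n)
    x[0] = 0.0
    for i in range(n - 1):
        x[i + 1] = x[i] - x[i] * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()

    grid = np.array([-0.5, 0.0, 0.5])
    a, b2 = drift_diffusion(x, dt, grid, h=0.1)   # expect a ≈ -grid, b2 ≈ 0.25
    ```

    This naive estimator needs densely observed data; the local linearization in the paper is precisely what relaxes that requirement.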

  17. Two new non-parametric tests to the distance duality relation with galaxy clusters

    CERN Document Server

    Costa, S S; Holanda, R F L

    2015-01-01

    The cosmic distance duality relation is a milestone of cosmology involving the luminosity and angular diameter distances. Any departure from the relation points to new physics or systematic errors in the observations; therefore, tests of the relation are extremely important for building a consistent cosmological framework. Here, two new tests are proposed based on galaxy cluster observations (angular diameter distance and gas mass fraction) and $H(z)$ measurements. By applying Gaussian Processes, a non-parametric method, we are able to derive constraints on departures from the relation; no evidence of deviation is found in either method, reinforcing the cosmological and astrophysical hypotheses adopted so far.
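
    Gaussian Process regression, the non-parametric tool used here, fits in a few lines of NumPy. The RBF kernel, the hyperparameter values, and the toy sine data below are illustrative assumptions, not the paper's $H(z)$ setup:

    ```python
    import numpy as np

    def gp_posterior(x_train, y_train, x_test, length=1.0, amp=1.0, noise=0.1):
        """Gaussian Process regression with an RBF kernel: posterior mean and
        pointwise variance at the test inputs."""
        def k(a, b):
            return amp**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
        K = k(x_train, x_train) + noise**2 * np.eye(len(x_train))
        L = np.linalg.cholesky(K)
        alpha_ = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
        Ks = k(x_test, x_train)
        mean = Ks @ alpha_                         # posterior mean
        v = np.linalg.solve(L, Ks.T)
        var = np.diag(k(x_test, x_test)) - np.sum(v**2, axis=0)   # posterior variance
        return mean, var

    x = np.linspace(0, 2, 30)
    y = np.sin(np.pi * x) + np.random.default_rng(0).normal(0, 0.05, 30)
    mean, var = gp_posterior(x, y, np.array([0.5, 1.5]), length=0.5, noise=0.05)
    ```

    In practice the kernel hyperparameters are themselves optimized or marginalized; they are fixed here for brevity.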

  18. A sequential nonparametric pattern classification algorithm based on the Wald SPRT. [Sequential Probability Ratio Test

    Science.gov (United States)

    Poage, J. L.

    1975-01-01

    A sequential nonparametric pattern classification procedure is presented. The method presented is an estimated version of the Wald sequential probability ratio test (SPRT). This method utilizes density function estimates, and the density estimate used is discussed, including a proof of convergence in probability of the estimate to the true density function. The classification procedure proposed makes use of the theory of order statistics, and estimates of the probabilities of misclassification are given. The procedure was tested on discriminating between two classes of Gaussian samples and on discriminating between two kinds of electroencephalogram (EEG) responses.
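
    The core SPRT loop with estimated densities can be sketched as follows. The Gaussian classes and the KDE plug-in below stand in for the record's order-statistics-based density estimate; the thresholds are Wald's standard approximations:

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    def sprt_classify(stream, f0, f1, alpha=0.01, beta=0.01):
        """Wald SPRT with plug-in density estimates: accumulate log(f1/f0)
        until one of Wald's error-rate thresholds is crossed."""
        lower = np.log(beta / (1 - alpha))
        upper = np.log((1 - beta) / alpha)
        llr, n = 0.0, 0
        for n, obs in enumerate(stream, 1):
            llr += float(np.log(f1(obs)[0]) - np.log(f0(obs)[0]))
            if llr >= upper:
                return 1, n            # decide class 1
            if llr <= lower:
                return 0, n            # decide class 0
        return (1 if llr > 0 else 0), n  # forced decision if data run out

    rng = np.random.default_rng(0)
    f0 = gaussian_kde(rng.normal(0, 1, 500))   # estimated class-0 density
    f1 = gaussian_kde(rng.normal(3, 1, 500))   # estimated class-1 density
    decision, n_used = sprt_classify(rng.normal(3, 1, 200), f0, f1)
    ```

    Well-separated classes, as here, are typically resolved within a handful of observations; the attraction of the sequential design is exactly this data efficiency.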

  19. BOOTSTRAP WAVELET IN THE NONPARAMETRIC REGRESSION MODEL WITH WEAKLY DEPENDENT PROCESSES

    Institute of Scientific and Technical Information of China (English)

    林路; 张润楚

    2004-01-01

    This paper introduces a method of bootstrap wavelet estimation in a nonparametric regression model with weakly dependent processes for both fixed and random designs. The asymptotic bounds for the bias and variance of the bootstrap wavelet estimators are given in the fixed design model. The conditional normality for a modified version of the bootstrap wavelet estimators is obtained in the fixed model. The consistency for the bootstrap wavelet estimator is also proved in the random design model. These results show that the bootstrap wavelet method is valid for the model with weakly dependent processes.

  20. Groebner bases via linkage

    CERN Document Server

    Gorla, Elisa; Nagel, Uwe

    2010-01-01

    In this paper, we give a sufficient condition for a set $\\mathcal G$ of polynomials to be a Gr\\"obner basis with respect to a given term-order for the ideal $I$ that it generates. Our criterion depends on the linkage pattern of the ideal $I$ and of the ideal generated by the initial terms of the elements of $\\mathcal G$. We then apply this criterion to ideals generated by minors and pfaffians. More precisely, we consider large families of ideals generated by minors or pfaffians in a matrix or a ladder, where the size of the minors or pfaffians is allowed to vary in different regions of the matrix or the ladder. We use the sufficient condition that we established to prove that the minors or pfaffians form a reduced Gr\\"obner basis for the ideal that they generate, with respect to any diagonal or anti-diagonal term-order. We also show that the corresponding initial ideal is Cohen-Macaulay. Our proof relies on known results in liaison theory, combined with a simple Hilbert function computation. In particular, our...

  1. A Genome-Wide SNP Linkage Analysis Suggests a Susceptibility Locus on 6p21 for Ankylosing Spondylitis and Inflammatory Back Pain Trait

    Science.gov (United States)

    Zhang, Yanli; Liao, Zetao; Wei, Qiujing; Pan, Yunfeng; Wang, Xinwei; Cao, Shuangyan; Guo, Zishi; Wu, Yuqiong; Rong, Ju; Jin, Ou; Xu, Manlong; Gu, Jieruo

    2016-01-01

    Objectives To screen susceptibility loci for ankylosing spondylitis (AS) using an affected-only linkage analysis based on high-density single nucleotide polymorphisms (SNPs) in a genome-wide manner. Patients and Methods AS patients from ten families of Cantonese origin in China were enrolled in the study. Blood samples were genotyped using genomic DNA derived from peripheral blood leukocytes with the Illumina HumanHap 610-Quad SNP Chip. Genotype data were generated using the Illumina BeadStudio 3.2 software. The PLINK package was used to remove non-autosomal SNPs and to further eliminate markers with typing errors. An affected-only linkage analysis was carried out using both non-parametric and parametric linkage analyses, as implemented in MERLIN. Result Seventy-eight AS patients (48 males and 30 females, mean age: 39±16 years) were enrolled in the study. The mean age of onset was 23±10 years and the mean duration of disease was 16.7±12.2 years. Iritis (2/76, 2.86%), dactylitis (5/78, 6.41%), hip joint involvement (9/78, 11.54%), peripheral arthritis (22/78, 28.21%), inflammatory back pain (IBP) (69/78, 88.46%) and HLA-B27 positivity (70/78, 89.74%) were observed in these patients. Using non-parametric linkage analysis, we found one susceptibility locus each for AS, IBP, and HLA-B27 on 6p21, spanning about 13.5Mb, 20.9Mb, and 21.2Mb, respectively. No significant results were found in the other clinical trait groups, including dactylitis, hip involvement, and arthritis. An identical susceptibility locus region spanning about 9.44Mb was detected for AS, IBP, and HLA-B27 by the parametric linkage analysis. Conclusion Our genome-wide SNP linkage analysis in ten families with ankylosing spondylitis suggests a susceptibility locus on 6p21 in AS, which is a risk locus for IBP in AS patients. PMID:27973620

  2. Comparison of Rank Analysis of Covariance and Nonparametric Randomized Blocks Analysis.

    Science.gov (United States)

    Porter, Andrew C.; McSweeney, Maryellen

    The relative power of three possible experimental designs under the condition that data is to be analyzed by nonparametric techniques; the comparison of the power of each nonparametric technique to its parametric analogue; and the comparison of relative powers using nonparametric and parametric techniques are discussed. The three nonparametric…

  3. Nonparametric Bayes inference for concave distribution functions

    DEFF Research Database (Denmark)

    Hansen, Martin Bøgsted; Lauritzen, Steffen Lilholt

    2002-01-01

    Bayesian inference for concave distribution functions is investigated. This is made by transforming a mixture of Dirichlet processes on the space of distribution functions to the space of concave distribution functions. We give a method for sampling from the posterior distribution using a Pólya urn...

  4. Kinematic and Dynamic Characteristics Analysis of Bennett's Linkage

    Institute of Scientific and Technical Information of China (English)

    Jianfeng Li

    2015-01-01

    Bennett's linkage is a spatial four-link linkage with extensive application prospects in deployable linkages, so analysis of its kinematic and dynamic characteristics is of great significance for its synthesis and application. According to the geometrical conditions of Bennett's linkage, the motion equations are established, and expressions are derived for the angular displacement, angular velocity, and angular acceleration of the followers, and for the displacement, velocity, and acceleration of the mass center of each link. Based on Lagrange's equation, a multi-rigid-body dynamic model of Bennett's linkage is established. To solve for the joint reaction forces and moments, screw theory and the reciprocal screw method are combined to establish the computing method; by adding link deformation equations, the number of equations is made equal to the number of unknown joint reaction forces and moments. The influence of the included angle of adjacent axes on Bennett's linkage's kinematic and dynamic characteristics and on the joint reaction forces and moments is analyzed. Results show that the included angle of adjacent axes has a great effect on the velocity, acceleration, and reaction forces and moments of Bennett's linkage, and that changes in the joint reaction forces and moments are apparent near the singularity configuration.

  5. Emergency Linkage Mode of Power Enterprise

    Directory of Open Access Journals (Sweden)

    Feng Jie

    2016-01-01

    Power emergency response needs to take full advantage of emergency forces and resources both inside and outside the power enterprise. Based on analysis and summary of relevant domestic and foreign experience with emergency linkage, this paper identifies three key elements of power enterprise Emergency Linkage: linkage subjects, linkage contents, and linkage levels. Linkage subjects are divided into two types, internal subjects and external bodies; linkage contents follow the four phases of prevention, preparedness, response, and recovery; linkage levels comprise three tiers: enterprise headquarters, provincial enterprise, and incident unit. Combining this framework with power enterprise emergency management practice, the paper studies the internal Emergency Linkage modes (including horizontal and vertical modes), the external Emergency Linkage mode, and the comprehensive Emergency Linkage mode of the power enterprise, based on the Fishbone Diagram and Process Management Technology.

  6. The Method of Selecting tagSNPs Based on Linkage Disequilibrium

    Institute of Scientific and Technical Information of China (English)

    王钧峰; 王新赠

    2016-01-01

    In genome-wide association studies (GWAS), a sufficiently dense set of single-nucleotide polymorphism (SNP) markers is needed to explain part of the genetic risk of common diseases. The number of SNPs in candidate genes is limited, but directly analyzing all existing SNPs is inefficient: the genotypes at these loci are strongly correlated, which creates a large amount of redundant information, increases genotyping cost, and consumes considerable time. It is therefore unnecessary to genotype all SNPs for association testing; it suffices to select a small, representative subset of SNPs, genotype those, and carry out the association tests on them. The selected SNPs are called tag SNPs (tagSNPs): a small subset of SNPs sufficient to capture the haplotype information within each haplotype block. Many tagSNP selection methods exist. In this paper we propose a new tagSNP selection method that partitions haplotypes into disjoint groups using a pairwise r² criterion based on linkage disequilibrium (LD) and selects tag SNPs within each group. Compared with testing based on the original SNP set, our method yields fewer tagSNPs, maximizing the information content of the selected markers while reducing genotyping cost and improving efficiency.
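
    The grouping idea described in this record, clustering SNPs by a pairwise r² threshold and keeping one tag per group, can be sketched greedily. The genotype matrix below is synthetic, and using the squared Pearson correlation of genotype dosages as the r² measure is a common proxy rather than the paper's exact haplotype-based algorithm:

    ```python
    import numpy as np

    def select_tag_snps(G, r2_threshold=0.8):
        """Greedy tagSNP selection: SNPs whose pairwise r^2 exceeds the threshold
        form a group, and one tag is kept per group. Here r^2 is the squared
        Pearson correlation of genotype dosages, a common proxy for haplotype LD."""
        n_snps = G.shape[1]
        r2 = np.corrcoef(G, rowvar=False) ** 2
        untagged = set(range(n_snps))
        tags = []
        while untagged:
            # pick the SNP that tags the most remaining SNPs
            best = max(untagged,
                       key=lambda j: sum(r2[j, k] >= r2_threshold for k in untagged))
            tags.append(best)
            untagged -= {k for k in untagged if r2[best, k] >= r2_threshold}
        return sorted(tags)

    rng = np.random.default_rng(0)
    block = rng.integers(0, 3, size=(100, 1)).astype(float)
    indep = rng.integers(0, 3, size=(100, 1)).astype(float)
    # SNPs 0-2 form a perfect-LD block; SNP 3 is independent
    G = np.hstack([block, block, block, indep])
    tags = select_tag_snps(G)      # one tag for the LD block, plus SNP 3 itself
    ```

    Real pipelines (e.g., the r²-based binning in Carlson et al.'s approach) add phase information and minor-allele-frequency filters on top of this basic greedy loop.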

  7. STAKEHOLDER LINKAGES FOR SUSTAINABLE LAND ...

    African Journals Online (AJOL)

    Osondu

    …stakeholder interactions for SLM in the study areas. Key words: stakeholders; farmer-expert linkages; resource management; Ethiopia.

  8. Recombination patterns reveal information about centromere location on linkage maps

    DEFF Research Database (Denmark)

    Limborg, Morten T.; McKinney, Garrett J.; Seeb, Lisa W.

    2016-01-01

    …approximate centromere placement is possible by phasing the same data used to generate linkage maps. Assuming one obligate crossover per chromosome arm, information about centromere location can be revealed by tracking the accumulated recombination frequency along linkage groups, similar to half… …mykiss) characterized by low and unevenly distributed recombination, a general feature of male meiosis in many species. Further, a high frequency of double crossovers along chromosome arms in barley reduced resolution for locating centromeric regions on most linkage groups. Despite these limitations, our method should work well for high-density maps in species with strong recombination interference and will enrich many existing and future mapping resources.

  9. Applications of Parametric and Nonparametric Tests for Event Studies on ISE

    OpenAIRE

    Handan YOLSAL

    2011-01-01

    In this study, we used the event study method to investigate whether stock splits on the ISE-ON Index of the Istanbul Stock Exchange affected share returns between 2005 and 2011. The study employs parametric tests as well as nonparametric tests developed as an alternative to them. It has been observed that, when cross-sectional variance adjustment is applied to the data set, such null hypotheses as "there is no average abnormal return at d...

  10. Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data.

    Science.gov (United States)

    Tan, Qihua; Thomassen, Mads; Burton, Mark; Mose, Kristian Fredløv; Andersen, Klaus Ejner; Hjelmborg, Jacob; Kruse, Torben

    2017-06-06

    Modeling complex time-course patterns is a challenging issue in microarray studies due to complex gene expression patterns in response to the time-course experiment. We introduce the generalized correlation coefficient and propose a combinatory approach for detecting, testing, and clustering the heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature of the generalized correlation analysis makes it a useful and efficient tool for analyzing microarray time-course data and for exploring the complex relationships in omics data in studies of their association with disease and health.

  11. Nonparametric bootstrap analysis with applications to demographic effects in demand functions.

    Science.gov (United States)

    Gozalo, P L

    1997-12-01

    "A new bootstrap proposal, labeled smooth conditional moment (SCM) bootstrap, is introduced for independent but not necessarily identically distributed data, where the classical bootstrap procedure fails.... A good example of the benefits of using nonparametric and bootstrap methods is the area of empirical demand analysis. In particular, we will be concerned with their application to the study of two important topics: what are the most relevant effects of household demographic variables on demand behavior, and to what extent present parametric specifications capture these effects." excerpt

  12. Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data

    DEFF Research Database (Denmark)

    Tan, Qihua; Thomassen, Mads; Burton, Mark

    2017-01-01

    Modeling complex time-course patterns is a challenging issue in microarray studies due to complex gene expression patterns in response to the time-course experiment. We introduce the generalized correlation coefficient and propose a combinatory approach for detecting, testing, and clustering the heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature of the generalized correlation analysis makes it a useful and efficient tool for analyzing microarray time-course data and for exploring the complex relationships in omics data in studies of their association with disease and health.

  13. Non-parametric trend analysis of water quality data of rivers in Kansas

    Science.gov (United States)

    Yu, Y.-S.; Zou, S.; Whittemore, D.

    1993-01-01

    Surface water quality data for 15 sampling stations in the Arkansas, Verdigris, Neosho, and Walnut river basins inside the state of Kansas were analyzed to detect trends (or lack of trends) in 17 major constituents by using four different non-parametric methods. The results show that concentrations of specific conductance, total dissolved solids, calcium, total hardness, sodium, potassium, alkalinity, sulfate, chloride, total phosphorus, ammonia plus organic nitrogen, and suspended sediment generally have downward trends. Some of the downward trends are related to increases in discharge, while others could be caused by decreases in pollution sources. Homogeneity tests show that both station-wide trends and basinwide trends are non-homogeneous. © 1993.
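
    The record does not name the four methods, but the Mann-Kendall test is the standard nonparametric trend test for water-quality series of this kind. A minimal sketch on synthetic data, without the tie or serial-correlation corrections a production analysis would add:

    ```python
    import numpy as np
    from scipy.stats import norm

    def mann_kendall(x):
        """Mann-Kendall trend test: S statistic with the normal approximation
        (no correction for ties or autocorrelation) and a two-sided p-value."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0
        if s > 0:
            z = (s - 1) / np.sqrt(var_s)       # continuity correction
        elif s < 0:
            z = (s + 1) / np.sqrt(var_s)
        else:
            z = 0.0
        return s, z, 2 * (1 - norm.cdf(abs(z)))

    rng = np.random.default_rng(0)
    series = -0.05 * np.arange(100) + rng.normal(0, 1, 100)   # noisy downward trend
    s, z, p = mann_kendall(series)
    ```

    A negative S with a small p-value corresponds to the "downward trends" reported in the record; discharge-related trends are usually handled by testing flow-adjusted concentrations instead.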

  14. A NONPARAMETRIC PROCEDURE OF THE SAMPLE SIZE DETERMINATION FOR SURVIVAL RATE TEST

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Objective This paper proposes a nonparametric procedure of the sample size determination for survival rate test. Methods Using the classical asymptotic normal procedure yields the required homogenetic effective sample size and using the inverse operation with the prespecified value of the survival function of censoring times yields the required sample size. Results It is matched with the rate test for censored data, does not involve survival distributions, and reduces to its classical counterpart when there is no censoring. The observed power of the test coincides with the prescribed power under usual clinical conditions. Conclusion It can be used for planning survival studies of chronic diseases.

  16. Nonparametric inference procedures for multistate life table analysis.

    Science.gov (United States)

    Dow, M M

    1985-01-01

    Recent generalizations of the classical single-state life table procedures to the multistate case provide the means to analyze simultaneously the mobility and mortality experience of one or more cohorts. This paper examines fairly general nonparametric combinatorial matrix procedures, known as quadratic assignment, as a technique for analyzing various transitional patterns commonly generated by cohorts over the life cycle. To some degree, the output from a multistate life table analysis suggests inference procedures. In his discussion of multistate life table construction features, the author focuses on the matrix formulation of the problem. He then presents several examples of the proposed nonparametric procedures. Data for the mobility and life-expectancies-at-birth matrices come from the 458-member Cayo Santiago rhesus monkey colony. The author's matrix combinatorial approach to hypothesis testing may prove to be a useful inferential strategy in several multidimensional demographic areas.
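
    Quadratic assignment procedures assess the association between two square matrices by jointly permuting rows and columns of one of them. A minimal sketch using a Mantel-style correlation statistic on synthetic symmetric matrices (one of several possible QAP statistics, not necessarily the one used in the record):

    ```python
    import numpy as np

    def qap_correlation_test(A, B, n_perm=2000, seed=0):
        """Quadratic assignment procedure: Pearson correlation between the
        off-diagonal entries of two square matrices, with significance assessed
        by jointly permuting the rows and columns of one matrix."""
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        off = ~np.eye(n, dtype=bool)                 # ignore the diagonal
        obs = np.corrcoef(A[off], B[off])[0, 1]
        hits = 0
        for _ in range(n_perm):
            p = rng.permutation(n)
            hits += abs(np.corrcoef(A[off], B[p][:, p][off])[0, 1]) >= abs(obs)
        return obs, (hits + 1) / (n_perm + 1)

    rng = np.random.default_rng(1)
    A = rng.random((12, 12)); A = (A + A.T) / 2                 # symmetric matrix
    B = A + rng.normal(0, 0.05, (12, 12)); B = (B + B.T) / 2    # noisy copy of A
    obs, p = qap_correlation_test(A, B)
    ```

    Permuting rows and columns together preserves the dependence structure within each matrix, which is what distinguishes QAP from naively permuting individual cells.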

  17. Nonparametric instrumental regression with non-convex constraints

    Science.gov (United States)

    Grasmair, M.; Scherzer, O.; Vanhems, A.

    2013-03-01

    This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition.

  18. Combined parametric-nonparametric identification of block-oriented systems

    CERN Document Server

    Mzyk, Grzegorz

    2014-01-01

    This book considers a problem of block-oriented nonlinear dynamic system identification in the presence of random disturbances. This class of systems includes various interconnections of linear dynamic blocks and static nonlinear elements, e.g., the Hammerstein system, the Wiener system, the Wiener-Hammerstein ("sandwich") system and additive NARMAX systems with feedback. Interconnecting signals are not accessible for measurement. The combined parametric-nonparametric algorithms proposed in the book can be selected depending on the prior knowledge of the system and signals. Most of them are based on the decomposition of the complex system identification task into simpler local sub-problems by using non-parametric (kernel or orthogonal) regression estimation. In the parametric stage, the generalized least squares or the instrumental variables technique is commonly applied to cope with correlated excitations. Limit properties of the algorithms have been shown analytically and illustrated in simple experiments.

  19. Nonlinear Parameter Identification of Spindle-Holder Interface by Nonparametric Identification Technique and Finite Element Method

    Institute of Scientific and Technical Information of China (English)

    张正旺; 李爱平; 刘雪梅; 谢楠

    2013-01-01

    Based on a nonparametric identification technique and the finite element method, a nonlinear dynamic parameter identification method for the spindle-holder interface is presented, in which the double contact part between spindle and holder is treated as a nonlinear element described by system state variables. A finite element model of the spindle-holder system is constructed in HyperMesh, and its nonlinear transient response analysis is conducted with the Radioss solver, yielding the transient responses of displacement, velocity, and acceleration at the interface. Using Newton's second law of motion, the time-history data of the nonlinear contact force at the interface are then deduced. Least-squares estimates with a Chebyshev polynomial basis are employed to approximate the sample data, yielding an analytic model of the nonlinear contact force of the interface, and the correctness of the model is verified.
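
    The final step of this record, fitting sampled force data by least squares in a Chebyshev basis, can be sketched with NumPy's polynomial module. The cubic "contact force" below is a synthetic stand-in for the paper's transient-response data, and the degree is an illustrative choice:

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    # synthetic stand-in for sampled contact-force data: cubic stiffness + noise
    x = np.linspace(-1, 1, 400)                   # normalized relative displacement
    rng = np.random.default_rng(0)
    f = 5.0 * x + 2.0 * x**3 + rng.normal(0, 0.01, 400)

    coef = C.chebfit(x, f, deg=5)                 # least-squares fit in Chebyshev basis
    f_hat = C.chebval(x, coef)
    rms_err = np.sqrt(np.mean((f - f_hat) ** 2))  # should be near the noise level
    ```

    Chebyshev bases are preferred over raw monomials here because they are well conditioned on a normalized interval, which matters when the fitted coefficients are reported as an analytic force model.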

  20. Parametric and Non-Parametric System Modelling

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg

    1999-01-01

    …networks is included. In this paper, neural networks are used for predicting the electricity production of a wind farm. The results are compared with results obtained using an adaptively estimated ARX-model. Finally, two papers on stochastic differential equations are included. In the first paper, among other aspects, the properties of a method for parameter estimation in stochastic differential equations are considered within the field of heat dynamics of buildings. In the second paper a lack-of-fit test for stochastic differential equations is presented. The test can be applied to both linear and non-linear stochastic differential equations. Some applications are presented in the papers. In the summary report, references are made to a number of other applications. [Danish summary, translated: The present thesis consists of ten articles published in the period 1996-1999, together with a summary and a perspective on them. …]

  1. Nonparametric additive regression for repeatedly measured data

    KAUST Repository

    Carroll, R. J.

    2009-05-20

    We develop an easily computed smooth backfitting algorithm for additive model fitting in repeated measures problems. Our methodology easily copes with various settings, such as when some covariates are the same over repeated response measurements. We allow for a working covariance matrix for the regression errors, showing that our method is most efficient when the correct covariance matrix is used. The component functions achieve the known asymptotic variance lower bound for the scalar argument case. Smooth backfitting also leads directly to design-independent biases in the local linear case. Simulations show our estimator has smaller variance than the usual kernel estimator. This is also illustrated by an example from nutritional epidemiology. © 2009 Biometrika Trust.
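The backfitting idea behind additive model fitting can be sketched in a few lines. The following is a plain backfitting loop with Nadaraya-Watson smoothers, not the authors' smooth backfitting estimator for repeated measures, and the simulated data are purely illustrative:

```python
import math
import random

def nw_smooth(x_data, r, x0, h):
    # Gaussian-kernel smoother of residuals r against covariate x_data, at x0.
    w = [math.exp(-0.5 * ((x0 - xi) / h) ** 2) for xi in x_data]
    return sum(wi * ri for wi, ri in zip(w, r)) / sum(w)

def backfit(x1, x2, y, h=0.3, iters=20):
    """Fit y ~ mean + f1(x1) + f2(x2) by alternating smoothing of residuals."""
    n = len(y)
    mean_y = sum(y) / n
    f1 = [0.0] * n
    f2 = [0.0] * n
    for _ in range(iters):
        # update f1 on the partial residuals, then center it
        res = [y[i] - mean_y - f2[i] for i in range(n)]
        f1 = [nw_smooth(x1, res, x1[i], h) for i in range(n)]
        c = sum(f1) / n
        f1 = [v - c for v in f1]
        # update f2 symmetrically
        res = [y[i] - mean_y - f1[i] for i in range(n)]
        f2 = [nw_smooth(x2, res, x2[i], h) for i in range(n)]
        c = sum(f2) / n
        f2 = [v - c for v in f2]
    fitted = [mean_y + f1[i] + f2[i] for i in range(n)]
    return f1, f2, fitted

# simulated additive data: y = sin(x1) + x2
random.seed(0)
x1 = [random.uniform(0, 3) for _ in range(80)]
x2 = [random.uniform(0, 3) for _ in range(80)]
y = [math.sin(a) + b for a, b in zip(x1, x2)]
f1, f2, fitted = backfit(x1, x2, y)
sse = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
```

Smooth backfitting replaces these raw smoothers with projected versions that remove the design-dependent bias the plain loop incurs.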

  2. Poverty and life cycle effects: A nonparametric analysis for Germany

    OpenAIRE

    Stich, Andreas

    1996-01-01

    Most empirical studies on poverty consider the extent of poverty either for the entire society or for separate groups such as elderly people. However, these papers do not show what the situation looks like for persons of a certain age. In this paper poverty measures depending on age are derived using the joint density of income and age. The density is nonparametrically estimated by weighted Gaussian kernel density estimation. Applying the conditional density of income to several poverty measures ...
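The weighted Gaussian kernel density estimation used above can be sketched as follows; the income values and survey weights here are hypothetical, and the bandwidth is fixed by hand rather than chosen by a data-driven rule:

```python
import math

def weighted_gaussian_kde(points, weights, h):
    """Return a function estimating a density from weighted samples."""
    total = sum(weights)
    def density(x):
        s = sum(w * math.exp(-0.5 * ((x - p) / h) ** 2)
                for p, w in zip(points, weights))
        return s / (total * h * math.sqrt(2 * math.pi))
    return density

# hypothetical income sample with survey weights
incomes = [1.2, 1.5, 1.7, 2.0, 2.1, 2.4, 3.0, 3.3, 4.1, 5.0]
weights = [1.0, 1.2, 0.8, 1.0, 1.1, 0.9, 1.0, 1.3, 0.7, 1.0]
f = weighted_gaussian_kde(incomes, weights, h=0.5)

# numerically integrate to check the estimate behaves like a density
grid = [i * 0.01 for i in range(-500, 1500)]
mass = sum(f(x) * 0.01 for x in grid)
```

The paper works with the joint density of income and age; the same construction extends to two dimensions with a product kernel.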

  3. Nonparametric test for detecting change in distribution with panel data

    CERN Document Server

    Pommeret, Denys; Ghattas, Badih

    2011-01-01

    This paper considers the problem of comparing two processes with panel data. A nonparametric test is proposed for detecting a monotone change in the link between the two process distributions. The test statistic is of CUSUM type, based on the empirical distribution functions. The asymptotic distribution of the proposed statistic is derived and its finite sample property is examined by bootstrap procedures through Monte Carlo simulations.
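A CUSUM-type statistic built from empirical distribution functions can be sketched as below. This is a generic cumulative-difference statistic for intuition only, not the authors' panel-data test, and the samples are made up:

```python
def ecdf(sample, x):
    """Empirical distribution function of a sample evaluated at x."""
    return sum(1 for v in sample if v <= x) / len(sample)

def cusum_ecdf_stat(a, b):
    """Maximum absolute cumulative difference between the two empirical
    distribution functions, accumulated over the pooled sample points."""
    pooled = sorted(set(a) | set(b))
    cum, stat = 0.0, 0.0
    for x in pooled:
        cum += ecdf(a, x) - ecdf(b, x)
        stat = max(stat, abs(cum))
    return stat

same = cusum_ecdf_stat([1, 2, 3, 4, 5], [1, 2, 3, 4, 5])
shift = cusum_ecdf_stat([1, 2, 3, 4, 5], [6, 7, 8, 9, 10])
```

In practice the null distribution of such a statistic is approximated by bootstrap resampling, as the abstract describes.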

  4. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    Science.gov (United States)

    2015-06-10

    Fusion of Hard and Soft Information in Nonparametric Density Estimation. Johannes O. Royset and Roger J-B Wets, Department of Operations Research. The report addresses univariate density estimation in situations when the sample (hard information) is supplemented by "soft" information about the random phenomenon, with estimation exploiting, in concert, hard and soft information. The development, theoretical and numerical, makes no distinction based on sample…

  5. Gaussian process-based Bayesian nonparametric inference of population size trajectories from gene genealogies.

    Science.gov (United States)

    Palacios, Julia A; Minin, Vladimir N

    2013-03-01

    Changes in population size influence genetic diversity of the population and, as a result, leave a signature of these changes in individual genomes in the population. We are interested in the inverse problem of reconstructing past population dynamics from genomic data. We start with a standard framework based on the coalescent, a stochastic process that generates genealogies connecting randomly sampled individuals from the population of interest. These genealogies serve as a glue between the population demographic history and genomic sequences. It turns out that only the times of genealogical lineage coalescences contain information about population size dynamics. Viewing these coalescent times as a point process, estimating population size trajectories is equivalent to estimating a conditional intensity of this point process. Therefore, our inverse problem is similar to estimating an inhomogeneous Poisson process intensity function. We demonstrate how recent advances in Gaussian process-based nonparametric inference for Poisson processes can be extended to Bayesian nonparametric estimation of population size dynamics under the coalescent. We compare our Gaussian process (GP) approach to one of the state-of-the-art Gaussian Markov random field (GMRF) methods for estimating population trajectories. Using simulated data, we demonstrate that our method has better accuracy and precision. Next, we analyze two genealogies reconstructed from real sequences of hepatitis C and human Influenza A viruses. In both cases, we recover more believed aspects of the viral demographic histories than the GMRF approach. We also find that our GP method produces more reasonable uncertainty estimates than the GMRF method.

  6. A non-parametric approach to estimate the total deviation index for non-normal data.

    Science.gov (United States)

    Perez-Jaume, Sara; Carrasco, Josep L

    2015-11-10

    Concordance indices are used to assess the degree of agreement between different methods that measure the same characteristic. In this context, the total deviation index (TDI) is an unscaled concordance measure that quantifies to which extent the readings from the same subject obtained by different methods may differ with a certain probability. Common approaches to estimate the TDI assume data are normally distributed and linearity between response and effects (subjects, methods and random error). Here, we introduce a new non-parametric methodology for estimation and inference of the TDI that can deal with any kind of quantitative data. The present study introduces this non-parametric approach and compares it with the already established methods in two real case examples that represent situations of non-normal data (more specifically, skewed data and count data). The performance of the already established methodologies and our approach in these contexts is assessed by means of a simulation study. Copyright © 2015 John Wiley & Sons, Ltd.
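The core of a non-parametric TDI estimate is an empirical quantile of the absolute paired differences. A minimal sketch follows (the published methodology adds inference on top of this, and other quantile rules are possible); the readings from the two measurement methods are made up:

```python
def tdi_nonparametric(x, y, p=0.9):
    """Empirical p-quantile of absolute paired differences: with probability
    about p, readings from the two methods differ by at most this amount."""
    diffs = sorted(abs(a - b) for a, b in zip(x, y))
    # simple order-statistic quantile; other interpolation rules exist
    k = max(0, min(len(diffs) - 1, int(p * len(diffs) + 0.5) - 1))
    return diffs[k]

method_a = [10, 12, 11, 14, 9, 13, 12, 15, 10, 11]
method_b = [11, 12, 13, 13, 9, 15, 12, 14, 12, 11]
tdi90 = tdi_nonparametric(method_a, method_b, p=0.9)   # -> 2
```

Because nothing here assumes normality or linearity, the estimate applies directly to skewed or count data, which is the setting the paper targets.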

  7. Non-parametric iterative model constraint graph min-cut for automatic kidney segmentation.

    Science.gov (United States)

    Freiman, M; Kronman, A; Esses, S J; Joskowicz, L; Sosna, J

    2010-01-01

    We present a new non-parametric model constraint graph min-cut algorithm for automatic kidney segmentation in CT images. The segmentation is formulated as a maximum a-posteriori estimation of a model-driven Markov random field. A non-parametric hybrid shape and intensity model is treated as a latent variable in the energy functional. The latent model and labeling map that minimize the energy functional are then simultaneously computed with an expectation maximization approach. The main advantages of our method are that it does not assume a fixed parametric prior model, which is subject to inter-patient variability and registration errors, and that it combines both the model and the image information into a unified graph min-cut based segmentation framework. We evaluated our method on 20 kidneys from 10 CT datasets with and without contrast agent, for which ground-truth segmentations were generated by averaging three manual segmentations. Our method yields an average volumetric overlap error of 10.95% and an average symmetric surface distance of 0.79 mm. These results indicate that our method is accurate and robust for kidney segmentation.

  8. International Environmental Problems, Issue Linkage and the European Union

    OpenAIRE

    Kroeze-Gil, J.

    2003-01-01

    This thesis explores the circumstances under which issue linkage can be applied to achieve cooperation on international environmental problems in general and on environmental problems in the European Union in particular. A major topic in this thesis is the development and analysis of cooperative and non-cooperative game theoretical methods of issue linkage that include imperfectly reversed interests. The relevance of the models is shown in the context of EU environmental decision making.

  9. Multipoint linkage detection in the presence of heterogeneity.

    Science.gov (United States)

    Chiu, Yen-Feng; Liang, Kung-Yee; Beaty, Terri H

    2002-06-01

    Linkage heterogeneity is common for complex diseases. It is well known that loss of statistical power for detecting linkage will result if one assumes complete homogeneity in the presence of linkage heterogeneity. To this end, Smith (1963, Annals of Human Genetics 27, 175-182) proposed an admixture model to account for linkage heterogeneity. It is well known that for this model, the conventional chi-squared approximation to the likelihood ratio test for no linkage does not apply even when the sample size is large. By dealing with nuclear families and one marker at a time for genetic diseases with simple modes of inheritance, score-based test statistics (Liang and Rathouz, 1999, Biometrics 55, 65-74) and likelihood-ratio-based test statistics (Lemdani and Pons, 1995, Biometrics 51, 1033-1041) have been proposed which have a simple large-sample distribution under the null hypothesis of linkage. In this paper, we extend their work to more practical situations that include information from multiple markers and multi-generational pedigrees while allowing for a class of general genetic models. Three different approaches are proposed to eliminate the nuisance parameters in these test statistics. We show that all three approaches lead to the same asymptotic distribution under the null hypothesis of no linkage. Simulation results show that the proposed test statistics have adequate power to detect linkage and that the performances of these two classes of test statistics are quite comparable. We have applied the proposed method to a family study of asthma (Barnes et al., 1996), in which the score-based test shows evidence of linkage with p-value <0.0001 in the region of interest on chromosome 12. Additionally, we have implemented this score-based test within the frequently used computer package GENEHUNTER.

  10. Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution

    Science.gov (United States)

    He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun

    2016-05-01

    Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer Law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD: the optimal wavelength set is chosen so as to make the measurement signals sensitive to wavelength and to reduce the ill-conditioning of the coefficient matrix of the linear systems, thereby enhancing the anti-interference ability of the retrieval results. Two common kinds of monomodal and bimodal ASDs, the log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of the distributions. Finally, the experimentally measured ASD over Harbin, China is recovered reasonably. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
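The linear-retrieval step can be illustrated with conjugate gradients on the normal equations (CGLS), which is algebraically equivalent to LSQR in exact arithmetic and serves here as a compact stand-in for a production routine such as `scipy.sparse.linalg.lsqr`. The toy forward matrix and "measurements" are invented for illustration:

```python
def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose_matvec(A, y):
    n = len(A[0])
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(n)]

def cgls(A, b, iters=50):
    """Minimize ||Ax - b|| by conjugate gradients on the normal equations."""
    n = len(A[0])
    x = [0.0] * n
    r = list(b)
    s = transpose_matvec(A, r)
    p = list(s)
    gamma = sum(v * v for v in s)
    for _ in range(iters):
        q = matvec(A, p)
        qq = sum(v * v for v in q)
        if qq == 0.0 or gamma == 0.0:
            break
        alpha = gamma / qq
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        s = transpose_matvec(A, r)
        gamma_new = sum(v * v for v in s)
        p = [si + (gamma_new / gamma) * pi for si, pi in zip(s, p)]
        gamma = gamma_new
    return x

# toy "retrieval": extinction at 3 wavelengths from a 2-bin size distribution
A = [[2.0, 1.0], [1.0, 3.0], [1.0, 1.0]]
true_x = [0.5, 1.5]
b = matvec(A, true_x)          # noise-free synthetic measurements
x_hat = cgls(A, b)
```

LSQR is preferred in practice for its better numerical stability on ill-conditioned systems, which is precisely the regime the wavelength-selection step is designed to mitigate.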

  11. Nonparametric reconstruction of the cosmic expansion with local regression smoothing and simulation extrapolation

    CERN Document Server

    Montiel, Ariadna; Sendra, Irene; Escamilla-Rivera, Celia; Salzano, Vincenzo

    2014-01-01

    In this work we present a nonparametric approach, which works on minimal assumptions, to reconstruct the cosmic expansion of the Universe. We propose to combine a locally weighted scatterplot smoothing method and a simulation-extrapolation method. The first one (Loess) is a nonparametric approach that allows one to obtain smoothed curves with no prior knowledge of the functional relationship between variables nor of the cosmological quantities. The second one (Simex) takes into account the effect of measurement errors on a variable via a simulation process. For the reconstructions we use as raw data the Union2.1 Type Ia Supernovae compilation, as well as recent Hubble parameter measurements. This work aims to illustrate the approach, which turns out to be a self-sufficient technique in the sense that we do not have to choose anything by hand. We examine the details of the method, among them the amount of observational data needed to perform the locally weighted fit, which will define the robustness of our reconstruction...
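The locally weighted fit (Loess) at a single query point can be sketched as below. This minimal version omits the robustness iterations and the Simex error-simulation step, and the noise-free toy data are for illustration only:

```python
def loess_point(xs, ys, x0, span=0.5):
    """Locally weighted linear fit (a minimal Loess) evaluated at x0."""
    n = len(xs)
    k = max(2, int(span * n))
    # take the k nearest neighbours of x0 and tricube-weight them
    idx = sorted(range(n), key=lambda i: abs(xs[i] - x0))[:k]
    d_max = max(abs(xs[i] - x0) for i in idx) or 1.0
    w, x, y = [], [], []
    for i in idx:
        u = abs(xs[i] - x0) / d_max
        w.append((1 - u ** 3) ** 3)
        x.append(xs[i])
        y.append(ys[i])
    # weighted least-squares line through the neighbourhood
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    slope = sxy / sxx if sxx else 0.0
    return ybar + slope * (x0 - xbar)

xs = [i / 10.0 for i in range(0, 31)]
ys = [2.0 * xi + 1.0 for xi in xs]      # exact line, so the fit is exact
smoothed = loess_point(xs, ys, 1.5)
```

The `span` parameter plays the role of the "amount of observational data needed to perform the locally weighted fit" that the abstract identifies as governing robustness.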

  12. Underreporting of the tuberculosis and AIDS comorbidity: an application of the linkage method

    Directory of Open Access Journals (Sweden)

    Carolina Novaes Carvalho

    2011-06-01

    OBJECTIVE: To analyze the underreporting of the tuberculosis (TB) and AIDS comorbidity. METHODS: Surveillance study using records from the Notifiable Diseases Information System - Tuberculosis and AIDS in Brazil from 2000 to 2005. Records of TB without information on the presence of AIDS were considered underreporting of the comorbidity when paired with AIDS records in which the year of diagnosis of AIDS was the same as or prior to the year of reporting of TB, as well as records from the same patient whose previous records had this information. An indicator was created: recognized TB-AIDS comorbidity, based on the TB records that had information on the presence of AIDS. RESULTS: The underreporting of TB-AIDS was 17.7%. This percentage varied between states. The incorporation of the underreported records into the previously recognized ones increased the proportion of TB-AIDS in Brazil from 6.9% to 8.4%. The highest proportions of underreporting were noted in Acre (Northern), Alagoas, Maranhão and Piauí (Northeastern) (more than 35% each) and the lowest in São Paulo (Southeastern) and Goiás (Central-western) (around 10% each). CONCLUSIONS: The underreporting of the TB-AIDS comorbidity found in Brazil should trigger modifications in the surveillance system in order to provide information for the national programs.

  13. Data Linkage: A powerful research tool with potential problems

    Directory of Open Access Journals (Sweden)

    Scott Ian

    2010-12-01

    Full Text Available Abstract Background Policy makers, clinicians and researchers are demonstrating increasing interest in using data linked from multiple sources to support measurement of clinical performance and patient health outcomes. However, the utility of data linkage may be compromised by sub-optimal or incomplete linkage, leading to systematic bias. In this study, we synthesize the evidence identifying participant or population characteristics that can influence the validity and completeness of data linkage and may be associated with systematic bias in reported outcomes. Methods A narrative review, using structured search methods, was undertaken. Key words "data linkage" and MeSH term "medical record linkage" were applied to the Medline, EMBASE and CINAHL databases between 1991 and 2007. Abstract inclusion criteria were: the article attempted an empirical evaluation of methodological issues relating to data linkage and reported on patient characteristics, the study design included analysis of matched versus unmatched records, and the report was in English. Included articles were grouped thematically according to patient characteristics that were compared between matched and unmatched records. Results The search identified 1810 articles, of which 33 (1.8%) met inclusion criteria. There was marked heterogeneity in study methods and factors investigated. Characteristics that were unevenly distributed among matched and unmatched records were: age (72% of studies), sex (50% of studies), race (64% of studies), geographical/hospital site (93% of studies), socio-economic status (82% of studies) and health status (72% of studies). Conclusion A number of relevant patient or population factors may be associated with incomplete data linkage, resulting in systematic bias in reported clinical outcomes. Readers should consider these factors in interpreting the reported results of data linkage studies.

  14. Genome-wide linkage and copy number variation analysis reveals 710 kb duplication on chromosome 1p31.3 responsible for autosomal dominant omphalocele

    Science.gov (United States)

    Radhakrishna, Uppala; Nath, Swapan K; McElreavey, Ken; Ratnamala, Uppala; Sun, Celi; Maiti, Amit K; Gagnebin, Maryline; Béna, Frédérique; Newkirk, Heather L; Sharp, Andrew J; Everman, David B; Murray, Jeffrey C; Schwartz, Charles E; Antonarakis, Stylianos E; Butler, Merlin G

    2017-01-01

    Background Omphalocele is a congenital birth defect characterised by the presence of internal organs located outside of the ventral abdominal wall. The purpose of this study was to identify the underlying genetic mechanisms of a large autosomal dominant Caucasian family with omphalocele. Methods and findings A genetic linkage study was conducted in a large family with an autosomal dominant transmission of an omphalocele using a genome-wide single nucleotide polymorphism (SNP) array. The analysis revealed significant evidence of linkage (non-parametric NPL = 6.93, p=0.0001; parametric logarithm of odds (LOD) = 2.70 under a fully penetrant dominant model) at chromosome band 1p31.3. Haplotype analysis narrowed the locus to a 2.74 Mb region between markers rs2886770 (63014807 bp) and rs1343981 (65757349 bp). Molecular characterisation of this interval using array comparative genomic hybridisation followed by quantitative microsphere hybridisation analysis revealed a 710 kb duplication located at 63.5–64.2 Mb. All affected individuals who had an omphalocele and shared the haplotype were positive for this duplicated region, while the duplication was absent from all normal individuals of this family. Multipoint linkage analysis using the duplication as a marker yielded a maximum LOD score of 3.2 at 1p31.3 under a dominant model. The 710 kb duplication at 1p31.3 band contains seven known genes including FOXD3, ALG6, ITGB3BP, KIAA1799, DLEU2L, PGM1, and the proximal portion of ROR1. Importantly, this duplication is absent from the database of genomic variants. Conclusions The present study suggests that development of an omphalocele in this family is controlled by overexpression of one or more genes in the duplicated region. To the authors’ knowledge, this is the first reported association of an inherited omphalocele condition with a chromosomal rearrangement. PMID:22499347

  15. LICORS: Light Cone Reconstruction of States for Non-parametric Forecasting of Spatio-Temporal Systems

    CERN Document Server

    Goerg, Georg M

    2012-01-01

    We present a new, non-parametric forecasting method for data where continuous values are observed discretely in space and time. Our method, "light-cone reconstruction of states" (LICORS), uses physical principles to identify predictive states which are local properties of the system, both in space and time. LICORS discovers the number of predictive states and their predictive distributions automatically, and consistently, under mild assumptions on the data source. We provide an algorithm to implement our method, along with a cross-validation scheme to pick control settings. Simulations show that CV-tuned LICORS outperforms standard methods in forecasting challenging spatio-temporal dynamics. Our work provides applied researchers with a new, highly automatic method to analyze and forecast spatio-temporal data.

  16. Comparison between scaling law and nonparametric Bayesian estimate for the recurrence time of strong earthquakes

    Science.gov (United States)

    Rotondi, R.

    2009-04-01

    According to the unified scaling theory the probability distribution function of the recurrence time T is a scaled version of a base function, and the average value of T can be used as a scale parameter for the distribution. The base function must belong to the scale family of distributions: tested on different catalogues and at different scale levels, for Corral (2005) the (truncated) generalized gamma distribution is the best model, for German (2006) the Weibull distribution. The scaling approach should overcome the difficulty of estimating distribution functions over small areas, but theoretical limitations and partial instability of the estimated distributions have been pointed out in the literature. Our aim is to analyze the recurrence time of strong earthquakes that occurred in the Italian territory. To satisfy the hypotheses of independence and identical distribution we have evaluated the times between events that occurred in each area of the Database of Individual Seismogenic Sources and then gathered them into eight tectonically coherent regions, each dominated by a well characterized geodynamic process. To address problems such as paucity of data, presence of outliers and uncertainty in the choice of the functional expression for the distribution of T, we have followed a nonparametric approach (Rotondi (2009)) in which: (a) maximum flexibility is obtained by assuming that the probability distribution is a random function belonging to a large function space, distributed as a stochastic process; (b) the nonparametric estimation method is robust when the data contain outliers; (c) the Bayesian methodology allows one to exploit different information sources, so that the model fitting may be good also for scarce samples. We have compared the hazard rates evaluated through the parametric and nonparametric approaches. References Corral A. (2005). Mixing of rescaled data and Bayesian inference for earthquake recurrence times, Nonlin. Proces. Geophys., 12, 89

  17. Bayesian Nonparametric Regression Analysis of Data with Random Effects Covariates from Longitudinal Measurements

    KAUST Repository

    Ryu, Duchwan

    2010-09-28

    We consider nonparametric regression analysis in a generalized linear model (GLM) framework for data with covariates that are the subject-specific random effects of longitudinal measurements. The usual assumption that the effects of the longitudinal covariate processes are linear in the GLM may be unrealistic and if this happens it can cast doubt on the inference of observed covariate effects. Allowing the regression functions to be unknown, we propose to apply Bayesian nonparametric methods including cubic smoothing splines or P-splines for the possible nonlinearity and use an additive model in this complex setting. To improve computational efficiency, we propose the use of data-augmentation schemes. The approach allows flexible covariance structures for the random effects and within-subject measurement errors of the longitudinal processes. The posterior model space is explored through a Markov chain Monte Carlo (MCMC) sampler. The proposed methods are illustrated and compared to other approaches, the "naive" approach and the regression calibration, via simulations and by an application that investigates the relationship between obesity in adulthood and childhood growth curves. © 2010, The International Biometric Society.

  18. Bayesian nonparametric regression analysis of data with random effects covariates from longitudinal measurements.

    Science.gov (United States)

    Ryu, Duchwan; Li, Erning; Mallick, Bani K

    2011-06-01

    We consider nonparametric regression analysis in a generalized linear model (GLM) framework for data with covariates that are the subject-specific random effects of longitudinal measurements. The usual assumption that the effects of the longitudinal covariate processes are linear in the GLM may be unrealistic and if this happens it can cast doubt on the inference of observed covariate effects. Allowing the regression functions to be unknown, we propose to apply Bayesian nonparametric methods including cubic smoothing splines or P-splines for the possible nonlinearity and use an additive model in this complex setting. To improve computational efficiency, we propose the use of data-augmentation schemes. The approach allows flexible covariance structures for the random effects and within-subject measurement errors of the longitudinal processes. The posterior model space is explored through a Markov chain Monte Carlo (MCMC) sampler. The proposed methods are illustrated and compared to other approaches, the "naive" approach and the regression calibration, via simulations and by an application that investigates the relationship between obesity in adulthood and childhood growth curves.

  19. Non-parametric kernel density estimation of species sensitivity distributions in developing water quality criteria of metals.

    Science.gov (United States)

    Wang, Ying; Wu, Fengchang; Giesy, John P; Feng, Chenglian; Liu, Yuedan; Qin, Ning; Zhao, Yujie

    2015-09-01

    Due to use of different parametric models for establishing species sensitivity distributions (SSDs), comparison of water quality criteria (WQC) for metals of the same group or period in the periodic table is uncertain and results can be biased. To address this inadequacy, a new probabilistic model, based on non-parametric kernel density estimation was developed and optimal bandwidths and testing methods are proposed. Zinc (Zn), cadmium (Cd), and mercury (Hg) of group IIB of the periodic table are widespread in aquatic environments, mostly at small concentrations, but can exert detrimental effects on aquatic life and human health. With these metals as target compounds, the non-parametric kernel density estimation method and several conventional parametric density estimation methods were used to derive acute WQC of metals for protection of aquatic species in China that were compared and contrasted with WQC for other jurisdictions. HC5 values for protection of different types of species were derived for three metals by use of non-parametric kernel density estimation. The newly developed probabilistic model was superior to conventional parametric density estimations for constructing SSDs and for deriving WQC for these metals. HC5 values for the three metals were inversely proportional to atomic number, which means that the heavier atoms were more potent toxicants. The proposed method provides a novel alternative approach for developing SSDs that could have wide application prospects in deriving WQC and use in assessment of risks to ecosystems.
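The HC5 derivation from a kernel-density SSD can be sketched as follows: fit a Gaussian-kernel density to the species sensitivity values and invert its CDF at the 5th percentile by bisection. The toxicity values and bandwidth below are hypothetical, and the bandwidth is fixed rather than optimized as in the paper:

```python
import math

def kde_cdf(data, h, x):
    """CDF of a Gaussian kernel density estimate, evaluated at x."""
    def phi(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sum(phi((x - d) / h) for d in data) / len(data)

def hc5(toxicity_values, h=0.3, p=0.05):
    """p-th percentile (HC5 for p=0.05) of the fitted SSD, by bisection."""
    lo = min(toxicity_values) - 10 * h
    hi = max(toxicity_values) + 10 * h
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if kde_cdf(toxicity_values, h, mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# hypothetical log10 LC50 values for a set of species
log_lc50 = [0.2, 0.5, 0.8, 1.0, 1.1, 1.4, 1.7, 2.0, 2.3, 2.8]
hazard = hc5(log_lc50)
```

The water quality criterion is then derived from this HC5, typically after applying an assessment factor.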

  20. Transit Timing Observations From Kepler: Ii. Confirmation of Two Multiplanet Systems via a Non-Parametric Correlation Analysis

    OpenAIRE

    Ford, Eric B.; Fabrycky, Daniel C.; Steffen, Jason H.; Carter, Joshua A.; Fressin, Francois; Holman, Matthew Jon; Lissauer, Jack J.; Moorhead, Althea V.; Morehead, Robert C.; Ragozzine, Darin; Rowe, Jason F.; Welsh, William F.; Allen, Christopher; Batalha, Natalie M.; Borucki, William J.

    2012-01-01

    We present a new method for confirming transiting planets based on the combination of transit timing variations (TTVs) and dynamical stability. Correlated TTVs provide evidence that the pair of bodies is in the same physical system. Orbital stability provides upper limits for the masses of the transiting companions that are in the planetary regime. This paper describes a non-parametric technique for quantifying the statistical significance of TTVs based on the correlation of two TTV data sets...

  1. Striking trends in the incidence of health problems in the Netherlands (2002-05): findings from a new method for record linkage in general practice.

    NARCIS (Netherlands)

    Biermans, M.C.J.; Spreeuwenberg, P.; Verheij, R.A.; Bakker, D.H. de; Vries Robbé, P.F. de; Zielhuis, G.A.

    2009-01-01

    BACKGROUND: This study aimed to detect striking trends based on a new strategy for monitoring public health. METHODS: We used data over 4 years from electronic medical records of a large, nationally representative network of general practices. Episodes were either directly recorded by general practitioners...

  2. Developing two non-parametric performance models for higher learning institutions

    Science.gov (United States)

    Kasim, Maznah Mat; Kashim, Rosmaini; Rahim, Rahela Abdul; Khan, Sahubar Ali Muhamed Nadhar

    2016-08-01

    Measuring the performance of higher learning institutions (HLIs) is essential if these institutions are to improve their excellence. This paper focuses on the formation of two performance models, an efficiency model and an effectiveness model, by utilizing a non-parametric method, Data Envelopment Analysis (DEA). The proposed models are validated by measuring the performance of 16 public universities in Malaysia for the year 2008. However, since data for one of the variables were unavailable, an estimate was used as a proxy for the real data. The results show that the average efficiency and effectiveness scores were 0.817 and 0.900 respectively, while six universities were fully efficient and eight universities were fully effective. A total of six universities were both efficient and effective. It is suggested that the two proposed performance models could work as complementary methods to the existing performance appraisal method, or as alternative methods for monitoring the performance of HLIs, especially in Malaysia.

  3. Statistical inference based on the nonparametric maximum likelihood estimator under double-truncation.

    Science.gov (United States)

    Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi

    2015-07-01

    Doubly truncated data consist of samples whose observed values fall between the right- and left-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE), which is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using the childhood cancer dataset.
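For intuition, the bootstrap and jackknife standard errors that the paper's closed-form covariance estimator is designed to replace can be sketched generically. This ignores the double-truncation structure entirely (resampling a plain sample rather than an NPMLE), and the data are made up:

```python
import random
import statistics

def bootstrap_se(sample, stat, n_boot=500, seed=42):
    """Bootstrap standard error of a statistic of the sample."""
    rng = random.Random(seed)
    n = len(sample)
    reps = []
    for _ in range(n_boot):
        resample = [sample[rng.randrange(n)] for _ in range(n)]
        reps.append(stat(resample))
    return statistics.stdev(reps)

def jackknife_se(sample, stat):
    """Jackknife (leave-one-out) standard error of a statistic."""
    n = len(sample)
    leave_one_out = [stat(sample[:i] + sample[i + 1:]) for i in range(n)]
    m = sum(leave_one_out) / n
    return ((n - 1) / n * sum((v - m) ** 2 for v in leave_one_out)) ** 0.5

data = [2.1, 3.4, 1.9, 4.2, 2.8, 3.1, 2.5, 3.9, 2.2, 3.6]
se_boot = bootstrap_se(data, statistics.median)
se_jack = jackknife_se(data, statistics.median)
```

The appeal of a closed-form covariance estimator is that it avoids the hundreds of refits these resampling schemes require, which matters when each refit is itself a self-consistency iteration.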

  4. Inference of Gene Regulatory Networks Using Bayesian Nonparametric Regression and Topology Information

    Science.gov (United States)

    2017-01-01

    Gene regulatory networks (GRNs) play an important role in cellular systems and are important for understanding biological processes. Many algorithms have been developed to infer the GRNs. However, most algorithms only pay attention to the gene expression data but do not consider the topology information in their inference process, while incorporating this information can partially compensate for the lack of reliable expression data. Here we develop a Bayesian group lasso with spike and slab priors to perform gene selection and estimation for nonparametric models. B-spline basis functions are used to capture the nonlinear relationships flexibly and penalties are used to avoid overfitting. Further, we incorporate the topology information into the Bayesian method as a prior. We present the application of our method on DREAM3 and DREAM4 datasets and two real biological datasets. The results show that our method performs better than existing methods and the topology information prior can improve the result. PMID:28133490
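    The nonlinear-regression ingredient of the method above can be illustrated with a spline smoother. This is a hedged sketch only: it uses SciPy's smoothing spline rather than the paper's B-spline basis with Bayesian group lasso and spike-and-slab priors, and the penalty here plays the role the paper assigns to its priors in avoiding overfitting.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)   # nonlinear "regulatory" response

# Cubic smoothing spline; s ~ n * sigma^2 controls the roughness penalty
spl = UnivariateSpline(x, y, k=3, s=200 * 0.1 ** 2)

grid = np.linspace(0.5, 2 * np.pi - 0.5, 50)         # interior evaluation points
err = np.max(np.abs(spl(grid) - np.sin(grid)))
print(err)  # small: the penalised spline recovers the nonlinear relationship
```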

  5. Inference of Gene Regulatory Networks Using Bayesian Nonparametric Regression and Topology Information

    Directory of Open Access Journals (Sweden)

    Yue Fan

    2017-01-01

    Full Text Available Gene regulatory networks (GRNs) play an important role in cellular systems and are important for understanding biological processes. Many algorithms have been developed to infer the GRNs. However, most algorithms only pay attention to the gene expression data but do not consider the topology information in their inference process, while incorporating this information can partially compensate for the lack of reliable expression data. Here we develop a Bayesian group lasso with spike and slab priors to perform gene selection and estimation for nonparametric models. B-spline basis functions are used to capture the nonlinear relationships flexibly and penalties are used to avoid overfitting. Further, we incorporate the topology information into the Bayesian method as a prior. We present the application of our method on DREAM3 and DREAM4 datasets and two real biological datasets. The results show that our method performs better than existing methods and the topology information prior can improve the result.

  6. Linkage to chromosome 2q32.2-q33.3 in familial serrated neoplasia (Jass syndrome).

    Science.gov (United States)

    Roberts, Aedan; Nancarrow, Derek; Clendenning, Mark; Buchanan, Daniel D; Jenkins, Mark A; Duggan, David; Taverna, Darin; McKeone, Diane; Walters, Rhiannon; Walsh, Michael D; Young, Bruce W; Jass, Jeremy R; Rosty, Christophe; Gattas, Michael; Pelzer, Elise; Hopper, John L; Goldblatt, Jack; George, Jill; Suthers, Graeme K; Phillips, Kerry; Parry, Susan; Woodall, Sonja; Arnold, Julie; Tucker, Kathy; Muir, Amanda; Drini, Musa; Macrae, Finlay; Newcomb, Polly; Potter, John D; Pavluk, Erika; Lindblom, Annika; Young, Joanne P

    2011-06-01

    Causative genetic variants have to date been identified for only a small proportion of familial colorectal cancer (CRC). While conditions such as Familial Adenomatous Polyposis and Lynch syndrome have well defined genetic causes, the search for variants underlying the remainder of familial CRC is plagued by genetic heterogeneity. The recent identification of families with a heritable predisposition to malignancies arising through the serrated pathway (familial serrated neoplasia or Jass syndrome) provides an opportunity to study a subset of familial CRC in which heterogeneity may be greatly reduced. A genome-wide linkage screen was performed on a large family displaying a dominantly-inherited predisposition to serrated neoplasia genotyped using the Affymetrix GeneChip Human Mapping 10 K SNP Array. Parametric and nonparametric analyses were performed and resulting regions of interest, as well as previously reported CRC susceptibility loci at 3q22, 7q31 and 9q22, were followed up by finemapping in 10 serrated neoplasia families. Genome-wide linkage analysis revealed regions of interest at 2p25.2-p25.1, 2q24.3-q37.1 and 8p21.2-q12.1. Finemapping linkage and haplotype analyses identified 2q32.2-q33.3 as the region most likely to harbour linkage, with heterogeneity logarithm of the odds (HLOD) 2.09 and nonparametric linkage (NPL) score 2.36 (P = 0.004). Five primary candidate genes (CFLAR, CASP10, CASP8, FZD7 and BMPR2) were sequenced and no segregating variants identified. There was no evidence of linkage to previously reported loci on chromosomes 3, 7 and 9.

  7. a Multivariate Downscaling Model for Nonparametric Simulation of Daily Flows

    Science.gov (United States)

    Molina, J. M.; Ramirez, J. A.; Raff, D. A.

    2011-12-01

    A multivariate, stochastic nonparametric framework for stepwise disaggregation of seasonal runoff volumes to daily streamflow is presented. The downscaling process is conditional on volumes of spring runoff and large-scale ocean-atmosphere teleconnections and includes a two-level cascade scheme: seasonal-to-monthly disaggregation first, followed by monthly-to-daily disaggregation. The non-parametric and assumption-free character of the framework allows consideration of the random nature and nonlinearities of daily flows, which parametric models are unable to account for adequately. This paper examines statistical links between decadal/interannual climatic variations in the Pacific Ocean and hydrologic variability in the US northwest region, and includes a periodicity analysis of climate patterns to detect coherences of their cyclic behavior in the frequency domain. We explore the use of such relationships and selected signals (e.g., north Pacific gyre oscillation, southern oscillation, and Pacific decadal oscillation indices, NPGO, SOI and PDO, respectively) in the proposed data-driven framework by means of a combinatorial approach, with the aim of simulating improved streamflow sequences when compared with disaggregated series generated from flows alone. A nearest neighbor time series bootstrapping approach is integrated with principal component analysis to resample from the empirical multivariate distribution. A volume-dependent scaling transformation is implemented to guarantee the summability condition. In addition, we present a new and simple algorithm, based on nonparametric resampling, that overcomes the common limitation of lack of preservation of historical correlation between daily flows across months. The downscaling framework presented here is parsimonious in parameters and model assumptions, does not generate negative values, and produces synthetic series that are statistically indistinguishable from the observations. We present evidence showing that both
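    The nearest-neighbor resampling and summability ideas above can be sketched compactly. This is a simplified, hypothetical illustration (seasonal-to-monthly only, invented data, not the authors' full cascade): the k nearest historical seasons by total volume are found, one neighbour's monthly pattern is resampled, and scaling by the target volume enforces summability by construction.

```python
import numpy as np

rng = np.random.default_rng(2)

def knn_disaggregate(target_total, hist_totals, hist_fractions, k=3):
    """Disaggregate a seasonal volume into monthly flows by k-NN resampling."""
    # k nearest historical seasons by total volume
    idx = np.argsort(np.abs(hist_totals - target_total))[:k]
    pick = rng.choice(idx)                 # bootstrap one neighbour's pattern
    return target_total * hist_fractions[pick]

hist_totals = np.array([90.0, 100.0, 110.0, 150.0, 200.0])
raw = rng.uniform(size=(5, 3))
hist_fractions = raw / raw.sum(axis=1, keepdims=True)   # monthly shares sum to 1

months = knn_disaggregate(120.0, hist_totals, hist_fractions)
print(months.sum())  # 120.0: the summability condition holds exactly
```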

  8. Linkage disequilibrium in wild mice.

    Directory of Open Access Journals (Sweden)

    Cathy C Laurie

    2007-08-01

    Full Text Available Crosses between laboratory strains of mice provide a powerful way of detecting quantitative trait loci for complex traits related to human disease. Hundreds of these loci have been detected, but only a small number of the underlying causative genes have been identified. The main difficulty is the extensive linkage disequilibrium (LD) in intercross progeny and the slow process of fine-scale mapping by traditional methods. Recently, new approaches have been introduced, such as association studies with inbred lines and multigenerational crosses. These approaches are very useful for interval reduction, but generally do not provide single-gene resolution because of strong LD extending over one to several megabases. Here, we investigate the genetic structure of a natural population of mice in Arizona to determine its suitability for fine-scale LD mapping and association studies. There are three main findings: (1) Arizona mice have a high level of genetic variation, which includes a large fraction of the sequence variation present in classical strains of laboratory mice; (2) they show clear evidence of local inbreeding but appear to lack stable population structure across the study area; and (3) LD decays with distance at a rate similar to human populations, which is considerably more rapid than in laboratory populations of mice. Strong associations in Arizona mice are limited primarily to markers less than 100 kb apart, which provides the possibility of fine-scale association mapping at the level of one or a few genes. Although other considerations, such as sample size requirements and marker discovery, are serious issues in the implementation of association studies, the genetic variation and LD results indicate that wild mice could provide a useful tool for identifying genes that cause variation in complex traits.
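    The LD statistic behind such decay-with-distance analyses is the squared allelic correlation r². A minimal sketch from textbook definitions (not the authors' pipeline), computing r² from allele and haplotype frequencies at two biallelic loci:

```python
def ld_r2(pA, pB, pAB):
    """r^2 between two biallelic loci, from allele freqs pA, pB and
    haplotype freq pAB. D = pAB - pA*pB is the LD coefficient."""
    D = pAB - pA * pB
    return D * D / (pA * (1 - pA) * pB * (1 - pB))

r_perfect = ld_r2(0.5, 0.5, 0.5)    # alleles always co-occur
r_equilib = ld_r2(0.5, 0.5, 0.25)   # haplotype freq = product of allele freqs
print(r_perfect, r_equilib)  # 1.0 0.0
```

Plotting r² against inter-marker distance is what reveals the roughly 100 kb decay scale reported for the Arizona mice.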

  9. Panel data nonparametric estimation of production risk and risk preferences

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    We apply nonparametric panel data kernel regression to investigate production risk, output price uncertainty, and risk attitudes of Polish dairy farms based on a firm-level unbalanced panel data set that covers the period 2004–2010. We compare different model specifications and different approaches for obtaining firm-specific measures of risk attitudes. We found that Polish dairy farmers are risk averse regarding production risk and price uncertainty. According to our results, Polish dairy farmers perceive the production risk as being more significant than the risk related to output price uncertainty.
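    The kernel regression referred to above can be illustrated with the Nadaraya-Watson estimator, the basic building block of such analyses. This sketch is generic (a Gaussian kernel on simulated cross-sectional data), not the authors' panel specification:

```python
import numpy as np

def nw_regression(x_train, y_train, x_eval, bandwidth):
    """Gaussian-kernel Nadaraya-Watson estimate of E[Y | X = x]."""
    d = (x_eval[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * d ** 2)                 # kernel weights
    return (w @ y_train) / w.sum(axis=1)      # locally weighted average

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 500)
y = x ** 2 + rng.normal(scale=0.05, size=x.size)   # unknown nonlinear "technology"

grid = np.array([-0.5, 0.0, 0.5])
est = nw_regression(x, y, grid, bandwidth=0.1)
print(est)  # approximately [0.25, 0.0, 0.25], with no functional form assumed
```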

  10. Nonparametric statistics a step-by-step approach

    CERN Document Server

    Corder, Gregory W

    2014-01-01

    "…a very useful resource for courses in nonparametric statistics in which the emphasis is on applications rather than on theory. It also deserves a place in libraries of all institutions where introductory statistics courses are taught." -CHOICE This Second Edition presents a practical and understandable approach that enhances and expands the statistical toolset for readers. This book includes: new coverage of the sign test and the Kolmogorov-Smirnov two-sample test in an effort to offer a logical and natural progression to statistical power; SPSS® (Version 21) software and updated screen captures

  11. Categorical and nonparametric data analysis choosing the best statistical technique

    CERN Document Server

    Nussbaum, E Michael

    2014-01-01

    Featuring in-depth coverage of categorical and nonparametric statistics, this book provides a conceptual framework for choosing the most appropriate type of test in various research scenarios. Class-tested at the University of Nevada, the book's clear explanations of the underlying assumptions, computer simulations, and Exploring the Concept boxes help reduce reader anxiety. Problems inspired by actual studies provide meaningful illustrations of the techniques. The underlying assumptions of each test and the factors that impact validity and statistical power are reviewed so readers can explain

  12. Nonparametric statistical structuring of knowledge systems using binary feature matches

    DEFF Research Database (Denmark)

    Mørup, Morten; Glückstad, Fumiko Kano; Herlau, Tue

    2014-01-01

    Structuring knowledge systems with binary features is often based on imposing a similarity measure and clustering objects according to this similarity. Unfortunately, such analyses can be heavily influenced by the choice of similarity measure. Furthermore, it is unclear at which level clusters have statistical support and how this approach generalizes to the structuring and alignment of knowledge systems. We propose a non-parametric Bayesian generative model for structuring binary feature data that does not depend on a specific choice of similarity measure. We jointly model all combinations of binary feature matches.

  13. Testing for a constant coefficient of variation in nonparametric regression

    OpenAIRE

    Dette, Holger; Marchlewski, Mareen; Wagener, Jens

    2010-01-01

    In the common nonparametric regression model Y_i = m(X_i) + sigma(X_i) epsilon_i we consider the problem of testing the hypothesis that the coefficient of variation, i.e. the quotient of the scale and location functions, is constant. The test is based on a comparison of the observations Y_i / \hat{sigma}(X_i) with their mean by a smoothed empirical process, where \hat{sigma} denotes the local linear estimate of the scale function. We show weak convergence of a centered version of this process to a Gaussian process under the null hypothesis.

  14. Distributed Nonparametric and Semiparametric Regression on SPARK for Big Data Forecasting

    Directory of Open Access Journals (Sweden)

    Jelena Fiosina

    2017-01-01

    Full Text Available Forecasting in big datasets is a common but complicated task, which cannot be executed using the well-known parametric linear regression. However, nonparametric and semiparametric methods, which enable forecasting by building nonlinear data models, are computationally intensive and lack sufficient scalability to cope with big datasets and extract successful results in a reasonable time. We present distributed parallel versions of some nonparametric and semiparametric regression models. We used the MapReduce paradigm and describe the algorithms in terms of SPARK data structures to parallelize the calculations. The forecasting accuracy of the proposed algorithms is compared with the linear regression model, which is the only forecasting model currently having a parallel distributed realization within the SPARK framework to address big data problems. The advantages of the parallelization of the algorithm are also provided. We validate our models by conducting various numerical experiments: evaluating the goodness of fit, analyzing how increasing dataset size influences time consumption, and analyzing time consumption by varying the degree of parallelism (number of workers in the distributed realization.
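    The key to parallelizing kernel regression in a MapReduce style is that its numerator and denominator are sums, which combine associatively across partitions. A minimal sketch without Spark (plain Python standing in for RDD map/reduce; the partitioning and data are invented):

```python
import numpy as np
from functools import reduce

def map_partial(chunk, x0, h):
    # "map" step: each partition returns its kernel-weighted partial sums
    x, y = chunk
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return w @ y, w.sum()

def reduce_sums(a, b):
    # "reduce" step: partial sums combine associatively, as in Spark's reduce()
    return a[0] + b[0], a[1] + b[1]

rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, 1000)
y = 2 * x + rng.normal(scale=0.1, size=x.size)
chunks = [(x[i::4], y[i::4]) for i in range(4)]       # 4 "partitions"

num, den = reduce(reduce_sums, [map_partial(c, 0.5, 0.2) for c in chunks])
estimate = num / den
print(estimate)  # Nadaraya-Watson estimate at x=0.5, close to 2 * 0.5 = 1.0
```

The distributed result is identical to the single-machine estimate, since only the order of summation changes.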

  15. Nonparametric Identification of Glucose-Insulin Process in IDDM Patient with Multi-meal Disturbance

    Science.gov (United States)

    Bhattacharjee, A.; Sutradhar, A.

    2012-12-01

    Modern closed-loop control for blood glucose level in a diabetic patient necessarily uses an explicit model of the process. A fixed-parameter full-order or reduced-order model does not characterize the inter-patient and intra-patient parameter variability. This paper deals with a frequency domain nonparametric identification of the nonlinear glucose-insulin process in an insulin dependent diabetes mellitus patient that captures the process dynamics in presence of uncertainties and parameter variations. An online frequency domain kernel estimation method has been proposed that uses the input-output data from the 19th order first principle model of the patient in intravenous route. Volterra equations up to second order kernels with extended input vector for a Hammerstein model are solved online by the adaptive recursive least square (ARLS) algorithm. The frequency domain kernels are estimated using the harmonic excitation input data sequence from the virtual patient model. A short filter memory length of M = 2 was found sufficient to yield acceptable accuracy with lesser computation time. The nonparametric models are useful for closed-loop control, where the frequency domain kernels can be directly used as the transfer function. The validation results show good fit both in frequency and time domain responses with the nominal patient as well as with parameter variations.
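    The extended-input-vector idea behind Hammerstein identification with recursive least squares can be shown on a toy system. This is a simplified sketch (a small invented Hammerstein system in the time domain, not the 19th-order patient model or the frequency-domain kernels of the paper): the regressor stacks the input and its square, so the static nonlinearity becomes linear in the parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy Hammerstein system: static nonlinearity u -> u + 0.5*u^2, then FIR [1.0, 0.4]
u = rng.normal(size=400)
z = u + 0.5 * u ** 2
y = 1.0 * z + 0.4 * np.concatenate([[0.0], z[:-1]])

# Recursive least squares on the extended regressor [u_t, u_t^2, u_{t-1}, u_{t-1}^2]
theta = np.zeros(4)
P = np.eye(4) * 1e4                     # large initial covariance
for t in range(1, len(u)):
    phi = np.array([u[t], u[t] ** 2, u[t - 1], u[t - 1] ** 2])
    k = P @ phi / (1.0 + phi @ P @ phi)  # gain (forgetting factor = 1)
    theta = theta + k * (y[t] - phi @ theta)
    P = P - np.outer(k, phi @ P)
print(theta)  # converges to the true parameters [1.0, 0.5, 0.4, 0.2]
```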

  16. Non-parametric transformation for data correlation and integration: From theory to practice

    Energy Technology Data Exchange (ETDEWEB)

    Datta-Gupta, A.; Xue, Guoping; Lee, Sang Heon [Texas A&M Univ., College Station, TX (United States)

    1997-08-01

    The purpose of this paper is two-fold. First, we introduce the use of non-parametric transformations for correlating petrophysical data during reservoir characterization. Such transformations are completely data driven and do not require an a priori functional relationship between response and predictor variables, which is the case with traditional multiple regression. The transformations are very general, computationally efficient, and can easily handle mixed data types, for example continuous variables such as porosity and permeability, and categorical variables such as rock type and lithofacies. The power of the non-parametric transformation techniques for data correlation has been illustrated through synthetic and field examples. Second, we utilize these transformations to propose a two-stage approach for data integration during heterogeneity characterization. The principal advantages of our approach over traditional cokriging or cosimulation methods are: (1) it does not require a linear relationship between primary and secondary data, (2) it exploits the secondary information to its fullest potential by maximizing the correlation between the primary and secondary data, (3) it can be easily applied to cases where several types of secondary or soft data are involved, and (4) it significantly reduces variance function calculations and thus, greatly facilitates non-Gaussian cosimulation. We demonstrate the data integration procedure using synthetic and field examples. The field example involves estimation of pore-footage distribution using well data and multiple seismic attributes.

  17. Bayesian Nonparametric Measurement of Factor Betas and Clustering with Application to Hedge Fund Returns

    Directory of Open Access Journals (Sweden)

    Urbi Garay

    2016-03-01

    Full Text Available We define a dynamic and self-adjusting mixture of Gaussian Graphical Models to cluster financial returns, and provide a new method for extraction of nonparametric estimates of dynamic alphas (excess return) and betas (to a choice set of explanatory factors) in a multivariate setting. This approach, as well as the outputs, has a dynamic, nonstationary and nonparametric form, which circumvents the problem of model risk and parametric assumptions that the Kalman filter and other widely used approaches rely on. The by-product of clusters, used for shrinkage and information borrowing, can be of use to determine relationships around specific events. This approach exhibits a smaller Root Mean Squared Error than traditionally used benchmarks in financial settings, which we illustrate through simulation. As an illustration, we use hedge fund index data, and find that our estimated alphas are, on average, 0.13% per month higher (1.6% per year) than alphas estimated through Ordinary Least Squares. The approach exhibits fast adaptation to abrupt changes in the parameters, as seen in our estimated alphas and betas, which exhibit high volatility, especially in periods which can be identified as times of stressful market events, a reflection of the dynamic positioning of hedge fund portfolio managers.

  18. A non-parametric Bayesian approach for clustering and tracking non-stationarities of neural spikes.

    Science.gov (United States)

    Shalchyan, Vahid; Farina, Dario

    2014-02-15

    Neural spikes from multiple neurons recorded in a multi-unit signal are usually separated by clustering. Drifts in the position of the recording electrode relative to the neurons over time cause gradual changes in the position and shapes of the clusters, challenging the clustering task. By dividing the data into short time intervals, Bayesian tracking of the clusters based on a Gaussian cluster model has been previously proposed. However, the Gaussian cluster model is often not verified for neural spikes. We present a Bayesian clustering approach that makes no assumptions on the distribution of the clusters and uses kernel-based density estimation of the clusters in every time interval as a prior for Bayesian classification of the data in the subsequent time interval. The proposed method was tested and compared to the Gaussian model-based approach for cluster tracking by using both simulated and experimental datasets. The results showed that the proposed non-parametric kernel-based density estimation of the clusters outperformed the sequential Gaussian model fitting in both simulated and experimental data tests. Using non-parametric kernel density-based clustering that makes no assumptions on the distribution of the clusters enhances the ability to track cluster non-stationarity over time relative to the Gaussian cluster modeling approach. Copyright © 2013 Elsevier B.V. All rights reserved.
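    The core classification step can be sketched with SciPy's kernel density estimator. This is a hedged, static illustration (two invented 2-D spike clusters, no drift tracking): a new spike is assigned to the cluster under whose kernel density it is more likely.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)

# Two spike "clusters" in a 2-D feature space (e.g. first two PCA components)
a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(300, 2))
b = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(300, 2))

kde_a = gaussian_kde(a.T)   # nonparametric density estimate per cluster
kde_b = gaussian_kde(b.T)   # (gaussian_kde expects shape (dim, n_samples))

new_spike = np.array([[0.1], [0.2]])
label = 'A' if kde_a(new_spike)[0] > kde_b(new_spike)[0] else 'B'
print(label)  # 'A': classified by comparing kernel densities, no Gaussian cluster model
```

In the tracking setting of the paper, the densities estimated in one time interval would serve as the prior for classifying spikes in the next interval.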

  19. Genome-wide linkage scan identifies two novel genetic loci for coronary artery disease: in GeneQuest families.

    Science.gov (United States)

    Gao, Hanxiang; Li, Lin; Rao, Shaoqi; Shen, Gongqing; Xi, Quansheng; Chen, Shenghan; Zhang, Zheng; Wang, Kai; Ellis, Stephen G; Chen, Qiuyun; Topol, Eric J; Wang, Qing K

    2014-01-01

    Coronary artery disease (CAD) is the leading cause of death worldwide. Recent genome-wide association studies (GWAS) identified >50 common variants associated with CAD or its complication myocardial infarction (MI), but collectively they account for only a fraction of heritability ("missing heritability"). Rare variants with large effects may account for a large portion of missing heritability. Genome-wide linkage studies of large families and follow-up fine mapping and deep sequencing are particularly effective in identifying rare variants with large effects. Here we show results from a genome-wide linkage scan for CAD in multiplex GeneQuest families with early onset CAD and MI. Whole genome genotyping was carried out with 408 markers that span the human genome every 10 cM, and linkage analyses were performed using the affected relative pair analysis implemented in GENEHUNTER. Affected-only nonparametric linkage (NPL) analysis identified two novel CAD loci with highly significant evidence of linkage on chromosome 3p25.1 (peak NPL = 5.49) and 3q29 (NPL = 6.84). We also identified four loci with suggestive linkage on 9q22.33, 9q34.11, 17p12, and 21q22.3 (NPL = 3.18-4.07). These results identify novel loci for CAD and provide a framework for fine mapping and deep sequencing to identify new susceptibility genes and novel variants associated with risk of CAD.
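    The intuition behind an NPL score is excess allele sharing among affected relatives. This is a deliberately simplified sketch, not the GENEHUNTER statistic: for affected sib pairs with known IBD status, sharing is 0/1/2 alleles with probabilities 1/4, 1/2, 1/4 under no linkage (mean 1, variance 1/2), and a Z score measures the excess.

```python
import numpy as np

def sharing_z(ibd_counts):
    """Z score for mean IBD sharing among affected sib pairs.
    Under the null (no linkage): mean 1, variance 1/2 per pair."""
    n = len(ibd_counts)
    return (np.mean(ibd_counts) - 1.0) / np.sqrt(0.5 / n)

# 100 affected pairs at a marker with excess sharing (45 share 2, 45 share 1, 10 share 0)
ibd = np.array([2] * 45 + [1] * 45 + [0] * 10)
z = sharing_z(ibd)
print(z)  # clearly positive, suggesting linkage at this marker
```

Real NPL analysis additionally handles larger pedigrees, incomplete IBD information, and multipoint marker data, which is why dedicated software is used.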

  20. Non-parametric PSF estimation from celestial transit solar images using blind deconvolution

    Science.gov (United States)

    González, Adriana; Delouille, Véronique; Jacques, Laurent

    2016-01-01

    Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. The measured image in a real optical instrument is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Additionally, the image acquisition process is also contaminated by other sources of noise (read-out, photon-counting). The problem of estimating both the PSF and a denoised image is called blind deconvolution and is ill-posed. Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrary to most methods presented in the literature, our method does not assume a parametric model of the PSF and can thus be applied to any telescope. Methods: Our scheme uses a wavelet analysis prior model on the image and weak assumptions on the PSF. We use observations from a celestial transit, where the occulting body can be assumed to be a black disk. These constraints allow us to retain meaningful solutions for the filter and the image, eliminating trivial, translated, and interchanged solutions. Under an additive Gaussian noise assumption, they also enforce noise canceling and avoid reconstruction artifacts by promoting the whiteness of the residual between the blurred observations and the cleaned data. Results: Our method is applied to synthetic and experimental data. The PSF is estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA using the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a similar quality to parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.

  1. Non-parametric PSF estimation from celestial transit solar images using blind deconvolution

    Directory of Open Access Journals (Sweden)

    González Adriana

    2016-01-01

    Full Text Available Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. The measured image in a real optical instrument is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Additionally, the image acquisition process is also contaminated by other sources of noise (read-out, photon-counting). The problem of estimating both the PSF and a denoised image is called blind deconvolution and is ill-posed. Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrary to most methods presented in the literature, our method does not assume a parametric model of the PSF and can thus be applied to any telescope. Methods: Our scheme uses a wavelet analysis prior model on the image and weak assumptions on the PSF. We use observations from a celestial transit, where the occulting body can be assumed to be a black disk. These constraints allow us to retain meaningful solutions for the filter and the image, eliminating trivial, translated, and interchanged solutions. Under an additive Gaussian noise assumption, they also enforce noise canceling and avoid reconstruction artifacts by promoting the whiteness of the residual between the blurred observations and the cleaned data. Results: Our method is applied to synthetic and experimental data. The PSF is estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA using the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a similar quality to parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.

  2. Dubin's Minimal Linkage Construct Revisited.

    Science.gov (United States)

    Rogers, Donald P.

    This paper contains a theoretical analysis and empirical study that support the major premise of Robert Dubin's minimal-linkage construct: that restricting communication links increases organizational stability. The theoretical analysis shows that fewer communication links are associated with less uncertainty, more redundancy, and greater…

  3. Constructing dense genetic linkage maps

    NARCIS (Netherlands)

    Jansen, J.; Jong, de A.G.; Ooijen, van J.W.

    2001-01-01

    This paper describes a novel combination of techniques for the construction of dense genetic linkage maps. The construction of such maps is hampered by the occurrence of even small proportions of typing errors. Simulated annealing is used to obtain the best map according to the optimality criterion:

  4. North-South Business Linkages

    DEFF Research Database (Denmark)

    Sørensen, Olav Jull; Kuada, John

    2006-01-01

    Based on empirical studies of linkages between TNCs and local firms in India, Malaysia, Vietnam, Ghana and South Africa, five themes are discussed and related to present theoretical perspectives. The themes are (1) Linkage Governance; (2) Globalisation and the dynamics in developing countries (the...

  5. A note on the use of the non-parametric Wilcoxon-Mann-Whitney test in the analysis of medical studies

    Directory of Open Access Journals (Sweden)

    Kühnast, Corinna

    2008-04-01

    Full Text Available Background: Although non-normal data are widespread in biomedical research, parametric tests unnecessarily predominate in statistical analyses. Methods: We surveyed five biomedical journals and, for all studies which contain at least the unpaired t-test or the non-parametric Wilcoxon-Mann-Whitney test, investigated the relationship between the choice of a statistical test and other variables such as type of journal, sample size, randomization, sponsoring etc. Results: The non-parametric Wilcoxon-Mann-Whitney test was used in 30% of the studies. In a multivariable logistic regression the type of journal, the test object, the scale of measurement and the statistical software were significant. The non-parametric test was more common in case of non-continuous data, in high-impact journals, in studies in humans, and when the statistical software was specified, in particular when SPSS was used.
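    For readers unfamiliar with the test discussed above, here is a minimal SciPy example on simulated skewed (log-normal) data, the kind of non-normal setting where the rank-based test is the natural choice; the data are invented.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)

# Two skewed samples differing by a shift on the log scale
control = rng.lognormal(mean=0.0, size=40)
treated = rng.lognormal(mean=0.8, size=40)

stat, p = mannwhitneyu(control, treated, alternative='two-sided')
print(p < 0.05)  # True: the difference is detected without any normality assumption
```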

  6. The application of non-parametric statistical techniques to an ALARA programme.

    Science.gov (United States)

    Moon, J H; Cho, Y H; Kang, C S

    2001-01-01

    For the cost-effective reduction of occupational radiation dose (ORD) at nuclear power plants, it is necessary to identify which processes incur repetitive high ORD during maintenance and repair operations. To identify these processes, point values such as the mean and median are generally used, but they sometimes lead to misjudgment since they cannot show other important characteristics such as dose distributions and frequencies of radiation jobs. As an alternative, the non-parametric analysis method is proposed, which effectively identifies the processes of repetitive high ORD. As a case study, the method is applied to ORD data of maintenance and repair processes at Kori Units 3 and 4, pressurised water reactors with 950 MWe capacity that have been operating in Korea since 1986 and 1987 respectively. The method is demonstrated to be an efficient way of analysing the data.

  7. Nonparametric analysis of the time structure of seismicity in a geographic region

    Directory of Open Access Journals (Sweden)

    A. Quintela-del-Río

    2002-06-01

    Full Text Available As an alternative to traditional parametric approaches, we suggest nonparametric methods for analyzing temporal data on earthquake occurrences. In particular, the kernel method for estimating the hazard function and the intensity function are presented. One novelty of our approach is that we take into account the possible dependence of the data to estimate the distribution of time intervals between earthquakes, which has not been considered in most statistical studies on seismicity. Kernel estimation of the hazard function has been used to study the occurrence process of cluster centers (main shocks). Kernel intensity estimation, on the other hand, has helped to describe the occurrence process of cluster members (aftershocks). Similar studies in two geographic areas of Spain (Granada and Galicia) have been carried out to illustrate the estimation methods suggested.
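    A kernel intensity estimate of the kind mentioned above is just a sum of kernels placed at the event times. A minimal sketch with a Gaussian kernel and invented event times (not the authors' dependent-data estimator):

```python
import numpy as np

def kernel_intensity(event_times, grid, bandwidth):
    """Gaussian-kernel estimate of the occurrence intensity lambda(t).
    Integrated over the whole line, it equals the number of events."""
    d = (grid[:, None] - event_times[None, :]) / bandwidth
    return np.exp(-0.5 * d ** 2).sum(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

events = np.array([1.0, 1.2, 1.5, 5.0, 5.1, 9.0])   # clustered "aftershock" times
grid = np.linspace(-5, 15, 4001)
lam = kernel_intensity(events, grid, bandwidth=0.5)

integral = lam.sum() * (grid[1] - grid[0])          # numerical integral
print(integral)  # ~ 6, the number of events
```

Peaks of lambda(t) mark episodes of clustered activity, which is what makes the estimate useful for describing aftershock sequences.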

  8. Non-parametric probabilistic forecasts of wind power: required properties and evaluation

    DEFF Research Database (Denmark)

    Pinson, Pierre; Nielsen, Henrik Aalborg; Møller, Jan Kloppenborg;

    2007-01-01

    Predictions of wind power production for horizons up to 48-72 hours ahead comprise a highly valuable input to the methods for the daily management or trading of wind generation. Today, users of wind power predictions are not only provided with point predictions, which are estimates of the conditional expectation of future generation for each look-ahead time, but also with uncertainty estimates given by probabilistic forecasts. In order to avoid assumptions on the shape of predictive distributions, these probabilistic predictions are produced from nonparametric methods, and then take the form of a single or a set of quantile forecasts. The required and desirable properties of such probabilistic forecasts are defined and a framework for their evaluation is proposed. This framework is applied for evaluating the quality of two statistical methods producing full predictive distributions from point predictions.
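    Quantile forecasts of the kind described above are commonly scored with the pinball (quantile) loss, which is minimised in expectation by the true quantile. A minimal sketch with invented numbers (the paper's evaluation framework covers more properties, such as reliability and sharpness):

```python
import numpy as np

def pinball_loss(y_true, q_forecast, tau):
    """Pinball loss of a tau-quantile forecast: asymmetric absolute error
    that penalises under-prediction with weight tau, over-prediction with 1-tau."""
    diff = y_true - q_forecast
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

y = np.array([10.0, 12.0, 8.0])            # observed wind generation
q90 = np.array([13.0, 13.0, 13.0])         # a 0.9-quantile forecast
loss = pinball_loss(y, q90, 0.9)
print(loss)  # 0.3: all observations fall below the 0.9 quantile, light penalty
```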

  9. Fine mapping of multiple interacting quantitative trait loci using combined linkage disequilibrium and linkage information

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Quantitative trait loci (QTL) and their additive, dominance and epistatic effects play a critical role in complex trait variation. It is often infeasible to detect multiple interacting QTL due to main effects often being confounded by interaction effects. Positioning interacting QTL within a small region is even more difficult. We present a variance component approach nested in an empirical Bayesian method, which simultaneously takes into account additive, dominance and epistatic effects due to multiple interacting QTL. The covariance structure used in the variance component approach is based on combined linkage disequilibrium and linkage (LDL) information. In a simulation study where there are complex epistatic interactions between QTL, it is possible to simultaneously fine map interacting QTL using the proposed approach. The present method combined with LDL information can efficiently detect QTL and their dominance and epistatic effects, making it possible to simultaneously fine map main and epistatic QTL.

  10. A novel nonparametric approach for estimating cut-offs in continuous risk indicators with application to diabetes epidemiology

    Directory of Open Access Journals (Sweden)

    Ferger Dietmar

    2009-09-01

    Full Text Available Abstract Background Epidemiological and clinical studies, often including anthropometric measures, have established obesity as a major risk factor for the development of type 2 diabetes. Appropriate cut-off values for anthropometric parameters are necessary for prediction or decision purposes. The cut-off corresponding to the Youden Index is often applied in the epidemiological and biomedical literature for dichotomizing a continuous risk indicator. Methods Using data from a representative large multistage longitudinal epidemiological study in a primary care setting in Germany, this paper explores a novel approach for estimating optimal cut-offs of anthropometric parameters for predicting type 2 diabetes, based on a discontinuity of a regression function in a nonparametric regression framework. Results The resulting cut-off corresponded to values obtained by the Youden Index (maximum of the sum of sensitivity and specificity, minus one), often considered the optimal cut-off in epidemiological and biomedical research. The nonparametric regression-based estimator was compared, in various simulation scenarios, with results obtained by the established Receiver Operating Characteristic plot methods and, in terms of bias and root mean square error, yielded excellent finite-sample properties. Conclusion It is thus recommended that this nonparametric regression approach be considered as a valuable alternative when a continuous indicator has to be dichotomized at the Youden Index for prediction or decision purposes.
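The Youden-Index cut-off referenced in this record (the maximum of sensitivity + specificity − 1 over candidate thresholds) is simple to compute directly; the case/control values below are hypothetical, purely for illustration.

```python
def youden_cutoff(cases, controls, thresholds):
    """Return the threshold maximizing Youden's J = sensitivity + specificity - 1,
    treating values >= threshold as a 'positive' classification."""
    best_t, best_j = None, -1.0
    for t in thresholds:
        sens = sum(x >= t for x in cases) / len(cases)
        spec = sum(x < t for x in controls) / len(controls)
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Toy BMI-like values for diabetic cases and non-diabetic controls
cases = [31, 29, 35, 28, 33]
controls = [22, 24, 27, 23, 25]
cut, j = youden_cutoff(cases, controls, sorted(cases + controls))
```

With these toy groups the cases and controls separate perfectly at 28, so J reaches its maximum of 1; real anthropometric data would of course overlap and give J well below 1.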

  11. Using Mathematica to build Non-parametric Statistical Tables

    Directory of Open Access Journals (Sweden)

    Gloria Perez Sainz de Rozas

    2003-01-01

    Full Text Available In this paper, I present computational procedures to obtain statistical tables: the tables of the asymptotic distribution and the exact distribution of the Kolmogorov-Smirnov statistic Dn for one population, the table of the distribution of the runs statistic R, the table of the distribution of the Wilcoxon signed-rank statistic W+, and the table of the distribution of the Mann-Whitney statistic Ux, using Mathematica, Version 3.9 under Windows 98. This is an interesting question because many statistical packages give only the asymptotic significance level in statistical tests, and with these procedures one can easily calculate the exact significance levels and the left-tail and right-tail probabilities of non-parametric distributions. I have used Mathematica for these calculations because its symbolic language can solve recursion relations. It is easy to generate the format of the tables, and it is possible to obtain any table of the mentioned non-parametric distributions with any precision, not only for the standard parameters most used in statistics, and without transcription mistakes. Furthermore, using similar procedures, we can generate tables for the following distribution functions: Binomial, Poisson, Hypergeometric, Normal, Chi-Square, T-Student, F-Snedecor, Geometric, Gamma and Beta.
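The recursion-based construction of exact tables described above can be illustrated for the Wilcoxon signed-rank statistic W+: the number of rank subsets of {1..n} summing to w satisfies c(n, w) = c(n−1, w) + c(n−1, w−n). The record uses Mathematica; the sketch below is only a Python analogue of the same recursion.

```python
def wilcoxon_counts(n):
    """Exact null distribution of W+ for sample size n: counts[w] is the
    number of subsets of ranks {1..n} summing to w, built by the recursion
    c(n, w) = c(n-1, w) + c(n-1, w-n)."""
    counts = [1]  # n = 0: only the empty subset, summing to 0
    for k in range(1, n + 1):
        new = [0] * (len(counts) + k)
        for w, c in enumerate(counts):
            new[w] += c      # rank k excluded
            new[w + k] += c  # rank k included
        counts = new
    return counts

# Exact right-tail probability P(W+ >= 45) for n = 10
n = 10
counts = wilcoxon_counts(n)
p_right = sum(counts[45:]) / 2 ** n
```

Because each of the 2^n subsets is equally likely under the null, exact left- and right-tail probabilities fall out directly, which is precisely the advantage over asymptotic approximations that the abstract emphasizes.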

  12. 1st Conference of the International Society for Nonparametric Statistics

    CERN Document Server

    Lahiri, S; Politis, Dimitris

    2014-01-01

    This volume is composed of peer-reviewed papers that have developed from the First Conference of the International Society for NonParametric Statistics (ISNPS). This inaugural conference took place in Chalkidiki, Greece, June 15-19, 2012. It was organized with the co-sponsorship of the IMS, the ISI, and other organizations. M. G. Akritas, S. N. Lahiri, and D. N. Politis are the first executive committee members of ISNPS, and the editors of this volume. ISNPS has a distinguished Advisory Committee that includes Professors R. Beran, P. Bickel, R. Carroll, D. Cook, P. Hall, R. Johnson, B. Lindsay, E. Parzen, P. Robinson, M. Rosenblatt, G. Roussas, T. SubbaRao, and G. Wahba. The Charting Committee of ISNPS consists of more than 50 prominent researchers from all over the world. The chapters in this volume bring forth recent advances and trends in several areas of nonparametric statistics. In this way, the volume facilitates the exchange of research ideas, promotes collaboration among researchers from all over the wo...

  13. Non-parametric Morphologies of Mergers in the Illustris Simulation

    CERN Document Server

    Bignone, Lucas A; Sillero, Emanuel; Pedrosa, Susana E; Pellizza, Leonardo J; Lambas, Diego G

    2016-01-01

    We study non-parametric morphologies of merger events in a cosmological context using the Illustris project. We produce mock g-band images, comparable to observational surveys, from the publicly available Illustris idealized mock images at $z=0$. We then measure non-parametric indicators: asymmetry, Gini, $M_{20}$, clumpiness and concentration for a set of galaxies with $M_* >10^{10}$ M$_\\odot$. We correlate these automatic statistics with the recent merger history of galaxies and with the presence of close companions. Our main contribution is to assess, in a cosmological framework, the empirically derived non-parametric demarcation line and average time-scales used to determine the merger rate observationally. We found that 98 per cent of galaxies above the demarcation line have a close companion or have experienced a recent merger event. On average, merger signatures obtained from the $G-M_{20}$ criteria anticorrelate clearly with the elapsed time since the last merger event. We also find that the a...

  14. The Retrospect and Prospect of Non-parametric Item Response Theory

    Institute of Scientific and Technical Information of China (English)

    陈婧; 康春花; 钟晓玲

    2013-01-01

      Compared with parametric item response theory, non-parametric item response theory provides a theoretical framework that better fits practical situations. Current research on non-parametric item response theory focuses mainly on parameter estimation methods and their comparison, and on verifying data-model fit; its applied research concentrates on scale revision and on the analysis of personalized data and differential item functioning. Non-parametric cognitive diagnostic theory, developed on the basis of cognitive diagnostic theory, further highlights the practical advantages of the non-parametric approach. To give full play to these advantages, future studies should emphasize the practical application of non-parametric item response theory; research on non-parametric cognitive diagnostic theory also deserves attention.

  15. Nonparametric Estimation of Cumulative Incidence Functions for Competing Risks Data with Missing Cause of Failure

    DEFF Research Database (Denmark)

    Effraimidis, Georgios; Dahl, Christian Møller

    In this paper, we develop a fully nonparametric approach for the estimation of the cumulative incidence function with Missing At Random right-censored competing risks data. We obtain results on the pointwise asymptotic normality as well as the uniform convergence rate of the proposed nonparametric…… estimator. A simulation study that serves two purposes is provided. First, it illustrates in detail how to implement our proposed nonparametric estimator. Secondly, it facilitates a comparison of the nonparametric estimator to a parametric counterpart based on the estimator of Lu and Liang (2008…

  16. Exploitation of linkage learning in evolutionary algorithms

    CERN Document Server

    Chen, Ying-ping

    2010-01-01

    The exploitation of linkage learning is enhancing the performance of evolutionary algorithms. This monograph examines recent progress in linkage learning, with a series of focused technical chapters that cover developments and trends in the field.

  17. Non-parametric frequency analysis of extreme values for integrated disaster management considering probable maximum events

    Science.gov (United States)

    Takara, K. T.

    2015-12-01

    This paper describes a non-parametric frequency analysis method for hydrological extreme-value samples with a size larger than 100, verifying the estimation accuracy with computer-intensive statistics (CIS) resampling such as the bootstrap. Probable maximum values are also incorporated into the analysis for extreme events larger than the design level of flood control. Traditional parametric frequency analysis of extreme values includes the following steps: Step 1: collecting and checking extreme-value data; Step 2: enumerating probability distributions that would fit the data well; Step 3: parameter estimation; Step 4: testing goodness of fit; Step 5: checking the variability of quantile (T-year event) estimates by the jackknife resampling method; and Step 6: selecting the best distribution (final model). The non-parametric method (NPM) proposed here can skip Steps 2, 3, 4 and 6. Comparing traditional parametric methods (PM) with the NPM, this paper shows that PM often underestimates 100-year quantiles for annual maximum rainfall samples with records of more than 100 years; overestimation examples are also demonstrated. Bootstrap resampling can correct the bias of the NPM and can also express the estimation accuracy as the bootstrap standard error. The NPM thus avoids various difficulties in the above-mentioned steps of the traditional PM. Probable maximum events are also incorporated into the NPM as an upper bound of the hydrological variable: probable maximum precipitation (PMP) and probable maximum flood (PMF) can serve as a new parameter value combined with the NPM. An idea for incorporating these values into frequency analysis is proposed for better management of disasters that exceed the design level. The idea stimulates a more integrated approach by geoscientists and statisticians, and encourages practitioners to consider the worst cases of disasters in their disaster management planning and practices.
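The core NPM idea described above, an empirical quantile with bootstrap-resampled accuracy, can be sketched as follows. The quantile convention and the synthetic "rainfall" series are illustrative assumptions, not the paper's data or exact estimator.

```python
import random

def np_quantile(sample, p):
    """Plain empirical p-quantile (no distribution fitted), using a simple
    order-statistic convention."""
    s = sorted(sample)
    return s[min(int(p * len(s)), len(s) - 1)]

def bootstrap_se(sample, p, n_boot=500, seed=0):
    """Bootstrap standard error of the empirical p-quantile: resample with
    replacement, recompute the quantile, take the standard deviation."""
    rng = random.Random(seed)
    est = []
    for _ in range(n_boot):
        resample = [rng.choice(sample) for _ in sample]
        est.append(np_quantile(resample, p))
    mean = sum(est) / n_boot
    return (sum((e - mean) ** 2 for e in est) / (n_boot - 1)) ** 0.5

# Synthetic "annual maximum rainfall" series, 120 years (illustrative only)
rng = random.Random(1)
rainfall = [100.0 * (1.0 - rng.random()) ** -0.25 for _ in range(120)]
q100 = np_quantile(rainfall, 0.99)  # roughly a 100-year event from the sample
se = bootstrap_se(rainfall, 0.99)
```

The bootstrap standard error plays the role of the jackknife variability check of Step 5 while skipping the distribution-selection steps, which is the advantage the abstract claims for the NPM.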

  18. Synthesis of Intermittent-Motion Linkages with Slight Difference in Length Between Links

    Institute of Scientific and Technical Information of China (English)

    CHEN Xin-bo; YU Zhen

    2006-01-01

    The design of intermittent-motion linkages using traditional methods is usually complicated. In this paper, (1) a new method for realizing intermittent motion, using linkages with a slight difference in length between links, is studied; (2) some new types of intermittent-motion linkages, and software for their visual analysis and design, are developed; and (3) the influences of some design parameters on intermittent-motion characteristics are clarified. The results confirm that the theory and the software are useful, making the synthesis of intermittent-motion linkages clear and easy.

  19. Bicoid Signal Extraction with a Selection of Parametric and Nonparametric Signal Processing Techniques

    Institute of Scientific and Technical Information of China (English)

    Zara Ghodsi; Emmanuel Sirimal Silva; Hossein Hassani

    2015-01-01

    The maternal segmentation coordinate gene bicoid plays a significant role during Drosophila embryogenesis. The gradient of Bicoid, the protein encoded by this gene, determines most aspects of head and thorax development. This paper seeks to explore the applicability of a variety of signal processing techniques to extracting the bicoid expression signal, and whether these methods can outperform the current model. We evaluate the use of six different powerful and widely-used models representing both parametric and nonparametric signal processing techniques to determine the most efficient method for signal extraction in bicoid. The results are evaluated using both real and simulated data. Our findings show that the Singular Spectrum Analysis technique proposed in this paper outperforms the synthesis diffusion degradation model for filtering the noisy protein profile of bicoid, whilst the exponential smoothing technique was found to be the next best alternative, followed by the autoregressive integrated moving average.
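Of the techniques compared in this record, exponential smoothing is the simplest to state; a minimal sketch follows, with an arbitrary smoothing constant and a toy stand-in for a noisy intensity trace (not the study's data or settings).

```python
def exp_smooth(signal, alpha=0.3):
    """Simple exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1}.
    Smaller alpha means heavier smoothing (more weight on past values)."""
    out = [signal[0]]
    for x in signal[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

# A noisy step-like profile stands in for a bicoid-like intensity trace
noisy = [10, 12, 9, 11, 3, 2, 4, 1]
smooth = exp_smooth(noisy, alpha=0.5)
```

Each smoothed value is a convex combination of past observations, so the output stays within the range of the input while damping high-frequency noise.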

  20. Non-parametric star formation histories for 5 dwarf spheroidal galaxies of the local group

    CERN Document Server

    Hernández, X; Valls-Gabaud, D; Gilmore, Gerard; Valls-Gabaud, David

    2000-01-01

    We use recent HST colour-magnitude diagrams of the resolved stellar populations of a sample of local dSph galaxies (Carina, Leo I, Leo II, Ursa Minor and Draco) to infer the star formation histories of these systems, $SFR(t)$. Applying a new variational calculus maximum likelihood method, which includes a full Bayesian analysis and allows a non-parametric estimate of the function one is solving for, we infer the star formation histories of the systems studied. This method has the advantage of yielding an objective answer, as one need not assume a priori the form of the function one is trying to recover. The results are checked independently using Saha's $W$ statistic. The total luminosities of the systems are used to normalize the results into physical units and to derive SN type II rates. We derive the luminosity-weighted mean star formation history of this sample of galaxies.

  1. Nonparametric Signal Extraction and Measurement Error in the Analysis of Electroencephalographic Activity During Sleep.

    Science.gov (United States)

    Crainiceanu, Ciprian M; Caffo, Brian S; Di, Chong-Zhi; Punjabi, Naresh M

    2009-06-01

    We introduce methods for signal and associated variability estimation based on hierarchical nonparametric smoothing with application to the Sleep Heart Health Study (SHHS). SHHS is the largest electroencephalographic (EEG) collection of sleep-related data, which contains, at each visit, two quasi-continuous EEG signals for each subject. The signal features extracted from EEG data are then used in second level analyses to investigate the relation between health, behavioral, or biometric outcomes and sleep. Using subject specific signals estimated with known variability in a second level regression becomes a nonstandard measurement error problem. We propose and implement methods that take into account cross-sectional and longitudinal measurement error. The research presented here forms the basis for EEG signal processing for the SHHS.

  2. Bicoid signal extraction with a selection of parametric and nonparametric signal processing techniques.

    Science.gov (United States)

    Ghodsi, Zara; Silva, Emmanuel Sirimal; Hassani, Hossein

    2015-06-01

    The maternal segmentation coordinate gene bicoid plays a significant role during Drosophila embryogenesis. The gradient of Bicoid, the protein encoded by this gene, determines most aspects of head and thorax development. This paper seeks to explore the applicability of a variety of signal processing techniques to extracting the bicoid expression signal, and whether these methods can outperform the current model. We evaluate the use of six different powerful and widely-used models representing both parametric and nonparametric signal processing techniques to determine the most efficient method for signal extraction in bicoid. The results are evaluated using both real and simulated data. Our findings show that the Singular Spectrum Analysis technique proposed in this paper outperforms the synthesis diffusion degradation model for filtering the noisy protein profile of bicoid, whilst the exponential smoothing technique was found to be the next best alternative, followed by the autoregressive integrated moving average.

  3. Non-parametric Estimation approach in statistical investigation of nuclear spectra

    CERN Document Server

    Jafarizadeh, M A; Sabri, H; Maleki, B Rashidian

    2011-01-01

    In this paper, Kernel Density Estimation (KDE), a non-parametric estimation method, is used to investigate the statistical properties of nuclear spectra. The deviation toward regular or chaotic dynamics is exhibited by closer distances to the Poisson or Wigner limits, respectively, as evaluated by the Kullback-Leibler Divergence (KLD) measure. The spectral statistics of different sequences prepared from nuclei corresponding to the three dynamical symmetry limits of the Interacting Boson Model (IBM), from oblate and prolate nuclei, and also the pairing effect on nuclear level statistics, are analyzed (with pure experimental data). The KDE-based estimated density function confirms previous predictions with minimum uncertainty (evaluated with the Integrated Absolute Error (IAE)) compared to the Maximum Likelihood (ML)-based method. Also, the increase in the regularity of spectra due to the pairing effect is revealed.
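The Kullback-Leibler distance to the Poisson and Wigner spacing limits used above can be sketched with a discrete KL divergence over binned densities; the grid and binning below are illustrative choices, not the paper's setup.

```python
import math

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence D(p || q); terms with p_i = 0
    contribute zero by convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Binned Poisson and Wigner nearest-neighbour spacing densities on [0, 3]
grid = [0.05 + 0.1 * k for k in range(30)]
poisson = [math.exp(-s) for s in grid]                                  # e^{-s}
wigner = [(math.pi / 2) * s * math.exp(-math.pi * s * s / 4) for s in grid]
zp, zw = sum(poisson), sum(wigner)
poisson = [v / zp for v in poisson]  # normalize to probability vectors
wigner = [v / zw for v in wigner]

d = kl_divergence(wigner, poisson)  # distance of Wigner statistics from Poisson
```

A spectrum's estimated spacing density would be binned the same way and compared against both limits: the smaller KLD indicates whether the dynamics lean regular (Poisson) or chaotic (Wigner).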

  4. Identification of quantitative trait loci underlying milk traits in Spanish dairy sheep using linkage plus combined linkage disequilibrium and linkage analysis approaches.

    Science.gov (United States)

    Garcia-Gámez, E; Gutiérrez-Gil, B; Suarez-Vega, A; de la Fuente, L F; Arranz, J J

    2013-09-01

    In this study, 2 procedures were used to analyze a data set from a whole-genome scan, one based on linkage analysis information and the other combining linkage disequilibrium and linkage analysis (LDLA), to determine the quantitative trait loci (QTL) influencing milk production traits in sheep. A total of 1,696 animals from 16 half-sib families were genotyped using the OvineSNP50 BeadChip (Illumina Inc., San Diego, CA) and analysis was performed using a daughter design. Moreover, the same data set has been previously investigated through a genome-wide association (GWA) analysis and a comparison of results from the 3 methods has been possible. The linkage analysis and LDLA methodologies yielded different results, although some significantly associated regions were common to both procedures. The linkage analysis detected 3 overlapping genome-wise significant QTL on sheep chromosome (OAR) 2 influencing milk yield, protein yield, and fat yield, whereas 34 genome-wise significant QTL regions were detected using the LDLA approach. The most significant QTL for protein and fat percentages was detected on OAR3, which was reported in a previous GWA analysis. Both the linkage analysis and LDLA identified many other chromosome-wise significant associations across different sheep autosomes. Additional analyses were performed on OAR2 and OAR3 to determine the possible causality of the most significant polymorphisms identified for these genetic effects by the previously reported GWA analysis. For OAR3, the analyses demonstrated additional genetic proof of the causality previously suggested by our group for a single nucleotide polymorphism located in the α-lactalbumin gene (LALBA). In summary, although the results shown here suggest that in commercial dairy populations, the LDLA method exhibits a higher efficiency to map QTL than the simple linkage analysis or linkage disequilibrium methods, we believe that comparing the 3 analysis methods is the best approach to obtain a global

  5. Fault spacing in the El Teniente Mine, Central Chile, the fold style inversion method, fold segmentation and fault linkage of the Barrancas/Lunlunta-Carrizal anticlinal complex, Mendoza, Argentina

    Science.gov (United States)

    Brooks, Benjamin Armstead

    1999-11-01

    An interval counting technique and standard cumulative statistics, in concert with residual and differential slope analysis, are employed on multiple parallel scanlines to test the applicability of fractal fault spacing at the El Teniente Mine, Central Chile. A negative exponential distribution best describes fault spatial distribution at the mine, while the interval counting method gives deceptively good fits to a fractal distribution. The results are consistent for the majority of the scanlines over thousands of square meters. These data provide an important counterexample to previously studied fractal spacing distributions and suggest that faulting is not a uniquely self-similar process and/or that faulting is not a consistently self-similar process through time. The "Fold Style Inversion" (FSI) method is developed to place quantitative bounds on balanced cross-sections used in the analysis of blind thrust faults. The method employs a discretized dip isogon construction, in combination with Monte Carlo simulations of seismic reflection depth-conversion errors, to assess a data set's goodness of fit to bulk hangingwall similar or parallel fold geometry. This enables an objective choice to be made between the Arbitrarily Inclined Simple Shear (AISS) and Constant Bed Length (CBL) fault inversion routines, which are specific to similar and parallel fold geometry, respectively. The method performs successfully for a variety of synthetic examples including a synthetic seismic line. The FSI method is applied to seismic reflection lines crossing the Barrancas and Lunlunta-Carrizal anticlines, active fault-bend folds in the Andean foreland of Mendoza Province, Argentina, and the proposed site of the 1985 Mw 5.9 Mendoza earthquake. For the Barrancas anticline, FSI analysis establishes a preference for similar fold style whereas no preference can be established for the Lunlunta-Carrizal anticline. With FSI-constrained cross-sections, it is shown that the earthquake most
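The spacing-distribution test described above can be sketched simply: under a negative-exponential model the empirical log-survival function is linear in the spacing (slope = −rate), whereas a fractal (power-law) model would instead be linear on a log-log scale. The spacing values and the MLE formula below are illustrative, not from the study.

```python
import math

def exp_mle_rate(spacings):
    """MLE of the rate for a negative-exponential spacing model: 1 / mean."""
    return len(spacings) / sum(spacings)

def log_survival(spacings):
    """(spacing, log empirical survival) pairs: under a negative-exponential
    model these points fall on a straight line with slope -rate."""
    s = sorted(spacings)
    n = len(s)
    return [(x, math.log((n - i) / n)) for i, x in enumerate(s[:-1])]

# Hypothetical fault spacings (metres) along a single scanline
spacings = [0.5, 1.1, 0.2, 2.3, 0.8, 0.4, 1.7, 0.3]
rate = exp_mle_rate(spacings)
pairs = log_survival(spacings)
```

Plotting `pairs` (or regressing the second coordinate on the first) gives a quick visual or quantitative check of the exponential hypothesis before resorting to interval counting.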

  6. Analysis of intravenous glucose tolerance test data using parametric and nonparametric modeling: application to a population at risk for diabetes.

    Science.gov (United States)

    Marmarelis, Vasilis Z; Shin, Dae C; Zhang, Yaping; Kautzky-Willer, Alexandra; Pacini, Giovanni; D'Argenio, David Z

    2013-07-01

    Modeling studies of the insulin-glucose relationship have mainly utilized parametric models, most notably the minimal model (MM) of glucose disappearance. This article presents results from the comparative analysis of the parametric MM and a nonparametric Laguerre based Volterra Model (LVM) applied to the analysis of insulin modified (IM) intravenous glucose tolerance test (IVGTT) data from a clinical study of gestational diabetes mellitus (GDM). An IM IVGTT study was performed 8 to 10 weeks postpartum in 125 women who were diagnosed with GDM during their pregnancy [population at risk of developing diabetes (PRD)] and in 39 control women with normal pregnancies (control subjects). The measured plasma glucose and insulin from the IM IVGTT in each group were analyzed via a population analysis approach to estimate the insulin sensitivity parameter of the parametric MM. In the nonparametric LVM analysis, the glucose and insulin data were used to calculate the first-order kernel, from which a diagnostic scalar index representing the integrated effect of insulin on glucose was derived. Both the parametric MM and nonparametric LVM describe the glucose concentration data in each group with good fidelity, with an improved measured versus predicted r² value for the LVM of 0.99 versus 0.97 for the MM analysis in the PRD. However, application of the respective diagnostic indices of the two methods does result in a different classification of 20% of the individuals in the PRD. It was found that the data based nonparametric LVM revealed additional insights about the manner in which infused insulin affects blood glucose concentration. © 2013 Diabetes Technology Society.

  7. Use of land facets to design linkages for climate change.

    Science.gov (United States)

    Brost, Brian M; Beier, Paul

    2012-01-01

    Least-cost modeling for focal species is the most widely used method for designing conservation corridors and linkages. However, these linkages have been based on current species' distributions and land cover, both of which will change with large-scale climate change. One method to develop corridors that facilitate species' shifting distributions is to incorporate climate models into their design. But this approach is enormously complex and prone to error propagation. It also produces outputs at a grain size (km²) coarser than the grain at which conservation decisions are made. One way to avoid these problems is to design linkages for the continuity and interspersion of land facets, or recurring landscape units of relatively uniform topography and soils. This coarse-filter approach aims to conserve the arenas of biological activity rather than the temporary occupants of those arenas. In this paper, we demonstrate how land facets can be defined in a rule-based and adaptable way, and how they can be used for linkage design in the face of climate change. We used fuzzy c-means cluster analysis to define land facets with respect to four topographic variables (elevation, slope angle, solar insolation, and topographic position), and least-cost analysis to design linkages that include one corridor per land facet. To demonstrate the flexibility of our procedures, we designed linkages using land facets in three topographically diverse landscapes in Arizona, USA. Our procedures can use other variables, including soil variables, to define land facets. We advocate using land facets to complement, rather than replace, existing focal species approaches to linkage design. This approach can be used even in regions lacking land cover maps and is not affected by the bias and patchiness common in species occurrence data.
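The land-facet definition step above relies on fuzzy c-means clustering of topographic variables. A minimal pure-Python sketch on toy 2-D data follows; the variable pair, deterministic initialization, and iteration count are illustrative assumptions, not the authors' settings.

```python
def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def fuzzy_cmeans(points, k=2, m=2.0, iters=30):
    """Minimal fuzzy c-means: alternate membership updates and weighted
    center updates. Centers start from the sorted data (a toy choice)."""
    srt = sorted(points)
    centers = [srt[i * (len(srt) - 1) // (k - 1)] for i in range(k)]
    u = []
    for _ in range(iters):
        u = []
        for p in points:
            d = [max(dist(p, c), 1e-12) for c in centers]  # avoid /0
            u.append([
                1.0 / sum((d[j] / d[l]) ** (2.0 / (m - 1.0)) for l in range(k))
                for j in range(k)
            ])
        centers = []
        for j in range(k):
            w = [u[i][j] ** m for i in range(len(points))]
            tot = sum(w)
            centers.append(tuple(
                sum(wi * p[a] for wi, p in zip(w, points)) / tot for a in range(2)
            ))
    return centers, u

# Toy "facet" data: (elevation, slope) pairs forming two terrain types
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, memberships = fuzzy_cmeans(pts, k=2)
```

Unlike hard k-means, each cell retains a graded membership in every facet, which suits the "relatively uniform" (rather than sharply bounded) landscape units the paper describes.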

  8. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error

    KAUST Repository

    Carroll, Raymond J.

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y, is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.
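The paper's estimators are based on tilting; for intuition about what a monotonicity-constrained fit looks like in the error-free case, the classical pool-adjacent-violators algorithm (a different, standard technique, not the paper's method) gives the least-squares nondecreasing fit:

```python
def pava(y):
    """Pool Adjacent Violators: least-squares nondecreasing fit to y.
    Violating adjacent blocks are repeatedly merged into their weighted mean."""
    means, counts = [], []
    for v in y:
        means.append(float(v))
        counts.append(1)
        # merge blocks while monotonicity is violated
        while len(means) > 1 and means[-2] > means[-1]:
            m2, c2 = means.pop(), counts.pop()
            m1, c1 = means.pop(), counts.pop()
            means.append((m1 * c1 + m2 * c2) / (c1 + c2))
            counts.append(c1 + c2)
    fit = []
    for m, c in zip(means, counts):
        fit.extend([m] * c)
    return fit

fit = pava([1, 3, 2, 4])  # the 3, 2 violation is pooled to 2.5, 2.5
```

A tilting-based test would instead reweight an unconstrained estimate until it becomes monotone and use the size of that reweighting as the test statistic; PAVA is shown only as the simplest constrained-fit baseline.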

  9. Views of healthcare professionals to linkage of routinely collected healthcare data: a systematic literature review

    OpenAIRE

    Hopf, Y. M.; Bond, C.; Francis, J.; Haughney, J.; Helms, P. J.

    2014-01-01

    Objective: To review the literature on the views of healthcare professionals to the linkage of healthcare data and to identify any potential barriers and/or facilitators to participation in a data linkage system.

    Methods: Published papers describing the views of healthcare professionals (HCPs) to data sharing and linkage were identified by searches of Medline, EMBASE, SCOPUS, CINAHL, and PsychINFO. The searches were limited to papers published in the English language from 2001 to 2011....

  10. A Non-Parametric Spatial Independence Test Using Symbolic Entropy

    Directory of Open Access Journals (Sweden)

    López Hernández, Fernando

    2008-01-01

    Full Text Available In the present paper, we construct a new, simple, consistent and powerful test for spatial independence, called the SG test, by using symbolic dynamics and symbolic entropy as a measure of spatial dependence. We also give a standard asymptotic distribution of an affine transformation of the symbolic entropy under the null hypothesis of independence in the spatial process. The test statistic and its standard limit distribution, with the proposed symbolization, are invariant to any monotonic transformation of the data. The test applies to discrete or continuous distributions. Given that the test is based on entropy measures, it avoids smoothed nonparametric estimation. We include a Monte Carlo study of our test, together with the well-known Moran's I and the SBDS (de Graaff et al., 2001) and (Brett and Pinkse, 1997) non-parametric tests, in order to illustrate our approach.
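The test above rests on symbolizing the data and computing the Shannon entropy of the symbols. As a minimal 1-D analogue, the sketch below symbolizes a series by ordinal patterns; this symbolization scheme is an illustrative stand-in, not the paper's spatial symbolization.

```python
import math
from collections import Counter

def symbolic_entropy(series, m=3):
    """Shannon entropy of length-m ordinal patterns: each window of m values
    is mapped to the permutation that sorts it, then entropy is computed
    over the pattern frequencies."""
    symbols = [
        tuple(sorted(range(m), key=lambda j: series[i + j]))
        for i in range(len(series) - m + 1)
    ]
    freq = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log(c / n) for c in freq.values())

low = symbolic_entropy(list(range(50)))        # perfectly ordered series
high = symbolic_entropy([0, 2, 1, 3, 2, 4, 3, 5])  # mixed up-down structure
```

A strongly dependent (here, monotone) series concentrates on one symbol and gives zero entropy, while more varied structure spreads mass over symbols and raises the entropy, which is the intuition behind using entropy as a dependence measure.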

  11. Prior processes and their applications nonparametric Bayesian estimation

    CERN Document Server

    Phadia, Eswar G

    2016-01-01

    This book presents a systematic and comprehensive treatment of various prior processes that have been developed over the past four decades for dealing with the Bayesian approach to solving selected nonparametric inference problems. This revised edition has been substantially expanded to reflect the current interest in this area. After an overview of different prior processes, it examines the now pre-eminent Dirichlet process and its variants including hierarchical processes, then addresses new processes such as dependent Dirichlet, local Dirichlet, time-varying and spatial processes, all of which exploit the countable mixture representation of the Dirichlet process. It subsequently discusses various neutral to right type processes, including gamma and extended gamma, beta and beta-Stacy processes, and then describes the Chinese Restaurant, Indian Buffet and infinite gamma-Poisson processes, which prove to be very useful in areas such as machine learning, information retrieval and featural modeling. Tailfree and P...

  12. Nonparametric Estimation of Distributions in Random Effects Models

    KAUST Repository

    Hart, Jeffrey D.

    2011-01-01

    We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.

  13. Curve registration by nonparametric goodness-of-fit testing

    CERN Document Server

    Dalalyan, Arnak

    2011-01-01

    The problem of curve registration appears in many different areas of application, ranging from neuroscience to road traffic modeling. In the present work, we propose a nonparametric testing framework in which we develop a generalized likelihood ratio test to perform curve registration. We first prove that, under the null hypothesis, the resulting test statistic is asymptotically distributed as a chi-squared random variable. This result, often referred to as Wilks' phenomenon, provides a natural threshold for the test of a prescribed asymptotic significance level and a natural measure of lack-of-fit in terms of the p-value of the chi-squared test. We also prove that the proposed test is consistent, i.e., its power is asymptotically equal to 1. Some numerical experiments on synthetic datasets are reported as well.

  14. Nonparametric forecasting of low-dimensional dynamical systems.

    Science.gov (United States)

    Berry, Tyrus; Giannakis, Dimitrios; Harlim, John

    2015-03-01

    This paper presents a nonparametric modeling approach for forecasting stochastic dynamical systems on low-dimensional manifolds. The key idea is to represent the discrete shift maps on a smooth basis which can be obtained by the diffusion maps algorithm. In the limit of large data, this approach converges to a Galerkin projection of the semigroup solution to the underlying dynamics on a basis adapted to the invariant measure. This approach allows one to quantify uncertainties (in fact, evolve the probability distribution) for nontrivial dynamical systems with equation-free modeling. We verify our approach on various examples, including an inhomogeneous anisotropic stochastic differential equation on a torus, the chaotic three-dimensional Lorenz model, and the Niño-3.4 data set, which is used as a proxy of the El Niño Southern Oscillation.

  15. Nonparametric Model of Smooth Muscle Force Production During Electrical Stimulation.

    Science.gov (United States)

    Cole, Marc; Eikenberry, Steffen; Kato, Takahide; Sandler, Roman A; Yamashiro, Stanley M; Marmarelis, Vasilis Z

    2017-03-01

    A nonparametric model of smooth muscle tension response to electrical stimulation was estimated using the Laguerre expansion technique of nonlinear system kernel estimation. The experimental data consisted of force responses of smooth muscle to energy-matched alternating single pulse and burst current stimuli. The burst stimuli led to at least a 10-fold increase in peak force in smooth muscle from Mytilus edulis, despite the constant energy constraint. A linear model did not fit the data, but a second-order model fit it accurately, so higher-order models were not required. Results showed that smooth muscle force response is not linearly related to the stimulation power.

  16. Revealing components of the galaxy population through nonparametric techniques

    CERN Document Server

    Bamford, Steven P; Nichol, Robert C; Miller, Christopher J; Wasserman, Larry; Genovese, Christopher R; Freeman, Peter E

    2008-01-01

    The distributions of galaxy properties vary with environment, and are often multimodal, suggesting that the galaxy population may be a combination of multiple components. The behaviour of these components versus environment holds details about the processes of galaxy development. To extract this information, we apply a novel, nonparametric statistical technique, identifying four components present in the distribution of galaxy H$\alpha$ emission-line equivalent-widths. We interpret these components as passive, star-forming, and two varieties of active galactic nuclei. Independent of this interpretation, the properties of each component are remarkably constant as a function of environment. Only their relative proportions display substantial variation. The galaxy population thus appears to comprise distinct components which are individually independent of environment, with galaxies rapidly transitioning between components as they move into denser environments.

  17. Multi-Directional Non-Parametric Analysis of Agricultural Efficiency

    DEFF Research Database (Denmark)

    Balezentis, Tomas

    This thesis seeks to develop methodologies for assessment of agricultural efficiency and employ them to Lithuanian family farms. In particular, we focus on three particular objectives throughout the research: (i) to perform a fully non-parametric analysis of efficiency effects, (ii) to extend...... relative to labour, intermediate consumption and land (in some cases land was not treated as a discretionary input). These findings call for further research on relationships among financial structure, investment decisions, and efficiency in Lithuanian family farms. Application of different techniques...... of stochasticity associated with Lithuanian family farm performance. The former technique showed that the farms differed in terms of the mean values and variance of the efficiency scores over time with some clear patterns prevailing throughout the whole research period. The fuzzy Free Disposal Hull showed...

  18. Parametric or nonparametric? A parametricness index for model selection

    CERN Document Server

    Liu, Wei; 10.1214/11-AOS899

    2012-01-01

    In model selection literature, two classes of criteria perform well asymptotically in different situations: the Bayesian information criterion (BIC) (as a representative) is consistent in selection when the true model is finite dimensional (parametric scenario); Akaike's information criterion (AIC) performs well in terms of asymptotic efficiency when the true model is infinite dimensional (nonparametric scenario). But there is little work that addresses whether and how one can detect which situation a specific model selection problem is in. In this work, we differentiate the two scenarios theoretically under some conditions. We develop a measure, the parametricness index (PI), to assess whether a model selected by a potentially consistent procedure can be practically treated as the true model, which also hints at whether AIC or BIC is better suited to the data for the goal of estimating the regression function. A consequence is that by switching between AIC and BIC based on the PI, the resulting regression estimator is si...

  19. Absence of a significant linkage between Na(+),K(+)-ATPase subunit (ATP1A3 and ATP1B3) genotypes and bipolar affective disorder in the old-order Amish.

    Science.gov (United States)

    Philibert, R A; Cheung, D; Welsh, N; Damschroder-Williams, P; Thiel, B; Ginns, E I; Gershenfeld, H K

    2001-04-08

    Previous studies provide evidence for a genetic component for susceptibility to bipolar affective disorder (BPAD) in the old-order Amish population. El-Mallakh and Wyatt [1995: Biol Psychiatry 37:235-244] have suggested that the Na(+),K(+)-ATPase may be a candidate gene for BPAD. This study examines the relationship between BPAD in the old-order Amish cohort and the Na(+),K(+)-ATPase alpha1 and beta3 subunit genes (ATP1A3, ATP1B3). A total of 166 sibling pairs were analyzed for linkage via nonparametric methods. Suggestive levels of statistical significance were not reached in any stratification model for affective illness. Overall, the results do not support linkage of bipolar disorder to the Na(+),K(+)-ATPase alpha subunit gene (ATP1A3) and beta subunit gene (ATP1B3) in these old-order Amish families and they show that these Na(+),K(+)-ATPase subunit genes are not major effect genes (>or=fourfold increased genetic risk of disease) for BPAD in the old-order Amish pedigrees. We cannot exclude other genetic variants of the Na(+),K(+)-ATPase hypothesis for BPAD, whereby other loci may modify Na(+),K(+)-ATPase activity. Copyright 2001 Wiley-Liss, Inc.

  20. Estimation of recombination frequency in genetic linkage studies.

    Science.gov (United States)

    Nordheim, E V; O'Malley, D M; Guries, R P

    1983-09-01

    A binomial-like model is developed that may be used in genetic linkage studies when data are generated by a testcross with parental phase unknown. Four methods of estimation for the recombination frequency are compared for data from a single group and also from several groups; these methods are maximum likelihood, two Bayesian procedures, and an ad hoc technique. The Bayes estimator using a noninformative prior usually has a lower mean squared error than the other estimators and because of this it is the recommended estimator. This estimator appears particularly useful for estimation of recombination frequencies indicative of weak linkage from samples of moderate size. Interval estimates corresponding to this estimator can be obtained numerically by discretizing the posterior distribution, thereby providing researchers with a range of plausible recombination values. Data from a linkage study on pitch pine are used as an example.
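
The discretized-posterior idea is easy to sketch. The code below assumes a known-phase testcross (a simplification of the paper's phase-unknown model) and a flat prior on the recombination fraction r over (0, 0.5], then computes the posterior mean on a grid; the counts are hypothetical.

```python
def discretized_posterior(k, n, grid_size=500):
    """Posterior over the recombination fraction r on (0, 0.5], given k
    recombinants among n testcross progeny, with a flat prior and phase
    assumed known (a simplification of the phase-unknown model)."""
    grid = [(i + 0.5) * 0.5 / grid_size for i in range(grid_size)]
    lik = [r ** k * (1 - r) ** (n - k) for r in grid]  # binomial kernel
    total = sum(lik)
    return grid, [l / total for l in lik]

# Hypothetical counts: 12 recombinants out of 100 progeny.
grid, post = discretized_posterior(12, 100)
mean_r = sum(r * p for r, p in zip(grid, post))
assert 0.10 < mean_r < 0.15
```

Interval estimates of the kind recommended in the abstract can be read off the same grid by accumulating the posterior probabilities until the desired coverage is reached.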

  1. Equity and efficiency in private and public education: a nonparametric comparison

    NARCIS (Netherlands)

    L. Cherchye; K. de Witte; E. Ooghe; I. Nicaise

    2007-01-01

    We present a nonparametric approach for the equity and efficiency evaluation of (private and public) primary schools in Flanders. First, we use a nonparametric (Data Envelopment Analysis) model that is specially tailored to assess educational efficiency at the pupil level. The model accounts for the

  2. Parametric and non-parametric modeling of short-term synaptic plasticity. Part II: Experimental study.

    Science.gov (United States)

    Song, Dong; Wang, Zhuo; Marmarelis, Vasilis Z; Berger, Theodore W

    2009-02-01

    This paper presents a synergistic parametric and non-parametric modeling study of short-term plasticity (STP) in the Schaffer collateral to hippocampal CA1 pyramidal neuron (SC) synapse. Parametric models in the form of sets of differential and algebraic equations have been proposed on the basis of the current understanding of biological mechanisms active within the system. Non-parametric Poisson-Volterra models are obtained herein from broadband experimental input-output data. The non-parametric model is shown to provide better prediction of the experimental output than a parametric model with a single facilitation/depression (FD) process. The parametric model is then validated in terms of its input-output transformational properties using the non-parametric model since the latter constitutes a canonical and more complete representation of the synaptic nonlinear dynamics. Furthermore, discrepancies between the experimentally-derived non-parametric model and the equivalent non-parametric model of the parametric model suggest the presence of multiple FD processes in the SC synapses. Inclusion of an additional FD process in the parametric model enables it to better replicate the characteristics of the experimentally-derived non-parametric model. This improved parametric model in turn provides the requisite biological interpretability that the non-parametric model lacks.

  3. Non-parametric tests of productive efficiency with errors-in-variables

    NARCIS (Netherlands)

    Kuosmanen, T.K.; Post, T.; Scholtes, S.

    2007-01-01

    We develop a non-parametric test of productive efficiency that accounts for errors-in-variables, following the approach of Varian. [1985. Nonparametric analysis of optimizing behavior with measurement error. Journal of Econometrics 30(1/2), 445-458]. The test is based on the general Pareto-Koopmans

  5. Hierarchical Cluster Analysis: Comparison of Three Linkage Measures and Application to Psychological Data

    Directory of Open Access Journals (Sweden)

    Odilia Yim

    2015-02-01

    Full Text Available Cluster analysis refers to a class of data reduction methods used for sorting cases, observations, or variables of a given dataset into homogeneous groups that differ from each other. The present paper focuses on hierarchical agglomerative cluster analysis, a statistical technique where groups are sequentially created by systematically merging similar clusters together, as dictated by the distance and linkage measures chosen by the researcher. Specific distance and linkage measures are reviewed, including a discussion of how these choices can influence the clustering process by comparing three common linkage measures (single linkage, complete linkage, average linkage). The tutorial guides researchers in performing a hierarchical cluster analysis using the SPSS statistical software. Through an example, we demonstrate how cluster analysis can be used to detect meaningful subgroups in a sample of bilinguals by examining various language variables.
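
The three linkage measures compared in the tutorial differ only in how a cluster-to-cluster distance is aggregated from pairwise point distances. A minimal sketch, with made-up 2-D clusters:

```python
from math import dist  # Euclidean distance, Python 3.8+

def _pairwise(a, b):
    return [dist(x, y) for x in a for y in b]

def single_linkage(a, b):
    return min(_pairwise(a, b))    # distance between the closest members

def complete_linkage(a, b):
    return max(_pairwise(a, b))    # distance between the farthest members

def average_linkage(a, b):
    d = _pairwise(a, b)
    return sum(d) / len(d)         # mean of all between-cluster distances

cluster1 = [(0, 0), (0, 1)]
cluster2 = [(3, 0), (5, 0)]
assert single_linkage(cluster1, cluster2) == 3.0
assert complete_linkage(cluster1, cluster2) == dist((0, 1), (5, 0))
```

Single linkage tends to chain clusters together, complete linkage favors compact clusters, and average linkage sits between the two, which is why the choice influences the merge order in agglomerative clustering.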

  6. Further analysis of previously implicated linkage regions for Alzheimer's disease in affected relative pairs

    Directory of Open Access Journals (Sweden)

    Lannfelt Lars

    2009-12-01

    Full Text Available Abstract Background Genome-wide linkage studies for Alzheimer's disease have implicated several chromosomal regions as potential loci for susceptibility genes. Methods In the present study, we have combined a selection of affected relative pairs (ARPs) from the UK and the USA included in a previous linkage study by Myers et al. (Am J Med Genet, 2002) with ARPs from Sweden and Washington University. In this total sample collection of 397 ARPs, we have analyzed linkage to chromosomes 1, 9, 10, 12, 19 and 21, implicated in the previous scan. Results The analysis revealed that linkage to chromosome 19q13, close to the APOE locus, increased considerably as compared to the earlier scan. However, linkage to chromosome 10q21, which provided the strongest linkage in the previous scan, could not be detected. Conclusion The present investigation provides yet further evidence that 19q13 is the only chromosomal region consistently linked to Alzheimer's disease.

  7. Forecasting of Households Consumption Expenditure with Nonparametric Regression: The Case of Turkey

    Directory of Open Access Journals (Sweden)

    Aydin Noyan

    2016-11-01

    Full Text Available The relationship between household income and expenditure is important for understanding the economic dynamics of households. In this study, the relationship between household consumption expenditure and household disposable income was analyzed by Locally Weighted Scatterplot Smoothing regression, a nonparametric method, using R programming. This study aimed to determine the relationship between the variables directly, without making the assumptions commonly imposed in conventional parametric regression. According to the findings, expenditure at first rose rapidly as income and household size increased together, and then the rate of increase slowed. This pattern can be explained by compulsory consumption expenditure taking a relatively greater share in small households. Besides, expenditure is relatively higher at the middle and high income levels compared to the low income level. However, the change in expenditure with household size is limited at the middle income level and most limited at the high income level.
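
For readers unfamiliar with LOWESS-style smoothing: the sketch below computes a tricube-weighted local average at a query point. Real LOWESS fits a weighted linear regression within each window, so this is a simplified stand-in, and the data are synthetic.

```python
def tricube(u):
    u = abs(u)
    return (1 - u ** 3) ** 3 if u < 1 else 0.0

def loess_point(x0, xs, ys, frac=0.4):
    """Tricube-weighted local average at x0 over the nearest frac of the data
    (real LOWESS fits a weighted linear regression in each window)."""
    n = max(2, int(frac * len(xs)))
    nearest = sorted(zip(xs, ys), key=lambda p: abs(p[0] - x0))[:n]
    h = max(abs(x - x0) for x, _ in nearest) or 1.0  # local bandwidth
    weights = [tricube((x - x0) / h) for x, _ in nearest]
    return sum(w * y for w, (_, y) in zip(weights, nearest)) / sum(weights)

# Synthetic linear data: the local estimate tracks the trend.
xs = list(range(10))
ys = [2 * x for x in xs]
assert abs(loess_point(4.5, xs, ys) - 9.0) < 1e-9
```

Because the fit is recomputed locally at every query point, no global functional form (linear, log-linear, etc.) has to be assumed, which is exactly the appeal of the method for income-expenditure data.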

  8. Using nonparametrics to specify a model to measure the value of travel time

    DEFF Research Database (Denmark)

    Fosgerau, Mogens

    2007-01-01

    Using a range of nonparametric methods, the paper examines the specification of a model to evaluate the willingness-to-pay (WTP) for travel time changes from binomial choice data from a simple time-cost trading experiment. The analysis favours a model with random WTP as the only source...... of randomness over a model with fixed WTP which is linear in time and cost and has an additive random error term. Results further indicate that the distribution of log WTP can be described as a sum of a linear index fixing the location of the log WTP distribution and an independent random variable representing...... unobserved heterogeneity. This formulation is useful for parametric modelling. The index indicates that the WTP varies systematically with income and other individual characteristics. The WTP varies also with the time difference presented in the experiment which is in contradiction of standard utility theory....

  9. Nonparametric Multivariate CUSUM Chart and EWMA Chart Based on Simplicial Data Depth

    Institute of Scientific and Technical Information of China (English)

    Yanting Li; Zhaojun Wang

    2003-01-01

    Simplicial data depth is a useful tool for describing how central a vector is in a multivariate distribution. If the average simplicial depth of a subgroup of observations from a multivariate distribution is too small, it may indicate that a shift in its location and/or scale has occurred. In this paper, we propose two new types of nonparametric control charts, namely one-sided CUSUM and EWMA control schemes based on simplicial data depth. We also compute the Average Run Length of the CUSUM chart and the EWMA chart by the Markov chain method. Recommendations on how to choose the optimal reference value and the smoothing parameter are also given. Comparisons between these two proposed control schemes and the multivariate EWMA are presented.
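
Simplicial depth itself is straightforward to compute in two dimensions: it is the fraction of triangles formed by sample points that contain the query point. A brute-force sketch (the control charts described above then monitor the average depth of each subgroup):

```python
from itertools import combinations

def _orient(o, a, b):
    # Twice the signed area of triangle (o, a, b)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _in_triangle(p, a, b, c):
    s = (_orient(a, b, p), _orient(b, c, p), _orient(c, a, p))
    return not (min(s) < 0 < max(s))  # boundary counts as inside

def simplicial_depth(p, sample):
    """Fraction of triangles spanned by sample points that contain p."""
    tris = list(combinations(sample, 3))
    return sum(_in_triangle(p, *t) for t in tris) / len(tris)

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
assert simplicial_depth((1, 1), square) == 1.0   # the center is maximally deep
assert simplicial_depth((10, 10), square) == 0.0
```

A shift in location or scale moves new observations toward the fringe of the reference sample, driving their average depth down, which is the signal the CUSUM and EWMA schemes accumulate.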

  10. Non-parametric Reconstruction of Cluster Mass Distribution from Strong Lensing: Modelling Abell 370

    CERN Document Server

    Abdel-Salam, H M; Williams, L L R

    1997-01-01

    We describe a new non-parametric technique for reconstructing the mass distribution in galaxy clusters with strong lensing, i.e., from multiple images of background galaxies. The observed positions and redshifts of the images are considered as rigid constraints and through the lens (ray-trace) equation they provide us with linear constraint equations. These constraints confine the mass distribution to some allowed region, which is then found by linear programming. Within this allowed region we study in detail the mass distribution with minimum mass-to-light variation; also some others, such as the smoothest mass distribution. The method is applied to the extensively studied cluster Abell 370, which hosts a giant luminous arc and several other multiply imaged background galaxies. Our mass maps are constrained by the observed positions and redshifts (spectroscopic or model-inferred by previous authors) of the giant arc and multiple image systems. The reconstructed maps obtained for A370 reveal a detailed mass d...

  11. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    CERN Document Server

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  12. Depth Transfer: Depth Extraction from Video Using Non-Parametric Sampling.

    Science.gov (United States)

    Karsch, Kevin; Liu, Ce; Kang, Sing Bing

    2014-11-01

    We describe a technique that automatically generates plausible depth maps from videos using non-parametric depth sampling. We demonstrate our technique in cases where past methods fail (non-translating cameras and dynamic scenes). Our technique is applicable to single images as well as videos. For videos, we use local motion cues to improve the inferred depth maps, while optical flow is used to ensure temporal depth consistency. For training and evaluation, we use a Kinect-based system to collect a large data set containing stereoscopic videos with known depths. We show that our depth estimation technique outperforms the state-of-the-art on benchmark databases. Our technique can be used to automatically convert a monoscopic video into stereo for 3D visualization, and we demonstrate this through a variety of visually pleasing results for indoor and outdoor scenes, including results from the feature film Charade.

  13. A Nonparametric Shape Prior Constrained Active Contour Model for Segmentation of Coronaries in CTA Images

    Science.gov (United States)

    Wang, Yin; Jiang, Han

    2014-01-01

    We present a nonparametric shape constrained algorithm for segmentation of coronary arteries in computed tomography images within the framework of active contours. An adaptive scale selection scheme, based on the global histogram information of the image data, is employed to determine the appropriate window size for each point on the active contour, which improves the performance of the active contour model in the low contrast local image regions. The possible leakage, which cannot be identified by using intensity features alone, is reduced through the application of the proposed shape constraint, where the shape of circular sampled intensity profile is used to evaluate the likelihood of current segmentation being considered vascular structures. Experiments on both synthetic and clinical datasets have demonstrated the efficiency and robustness of the proposed method. The results on clinical datasets have shown that the proposed approach is capable of extracting more detailed coronary vessels with subvoxel accuracy. PMID:24803950

  14. A multitemporal and non-parametric approach for assessing the impacts of drought on vegetation greenness

    DEFF Research Database (Denmark)

    Carrao, Hugo; Sepulcre, Guadalupe; Horion, Stéphanie Marie Anne F;

    2013-01-01

    for the period between 1998 and 2010. The time-series analysis of vegetation greenness is performed during the growing season with a non-parametric method, namely the seasonal Relative Greenness (RG) of spatially accumulated fAPAR. The Global Land Cover map of 2000 and the GlobCover maps of 2005/2006 and 2009......This study evaluates the relationship between the frequency and duration of meteorological droughts and the subsequent temporal changes on the quantity of actively photosynthesizing biomass (greenness) estimated from satellite imagery on rainfed croplands in Latin America. An innovative non...... Full Data Reanalysis precipitation time-series product, which ranges from January 1901 to December 2010 and is interpolated at the spatial resolution of 1° (decimal degree, DD). Vegetation greenness composites are derived from 10-daily SPOT-VEGETATION images at the spatial resolution of 1/112° DD...

  16. Semi-parametric regression: Efficiency gains from modeling the nonparametric part

    CERN Document Server

    Yu, Kyusang; Park, Byeong U; 10.3150/10-BEJ296

    2011-01-01

    It is widely acknowledged that structured nonparametric modeling that circumvents the curse of dimensionality is important in nonparametric estimation. In this paper we show that the same holds for semi-parametric estimation. We argue that estimation of the parametric component of a semi-parametric model can be improved substantially when more structure is put into the nonparametric part of the model. We illustrate this for the partially linear model, and investigate efficiency gains when the nonparametric part of the model has an additive structure. We present the semi-parametric Fisher information bound for estimating the parametric part of the partially linear additive model and provide semi-parametric efficient estimators, for which we use a smooth backfitting technique to deal with the additive nonparametric part. We also present the finite sample performances of the proposed estimators and analyze Boston housing data as an illustration.
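
A plain (not smooth) backfitting loop conveys the idea behind fitting the additive nonparametric part: cycle over the components, smoothing the partial residuals of each against its own covariate. This sketch uses a Nadaraya-Watson smoother and synthetic data, not the paper's smooth backfitting estimator:

```python
import random
from math import exp, sin

def nw_smooth(x_train, resid, x0, h=0.4):
    # Nadaraya-Watson smoother with a Gaussian kernel, evaluated at x0
    w = [exp(-0.5 * ((x - x0) / h) ** 2) for x in x_train]
    return sum(wi * r for wi, r in zip(w, resid)) / sum(w)

def backfit(x1, x2, y, h=0.4, iters=15):
    """Fit y ~ mu + f1(x1) + f2(x2) by cycling over the two components,
    smoothing partial residuals and recentering each component."""
    n = len(y)
    mu = sum(y) / n
    f1, f2 = [0.0] * n, [0.0] * n
    for _ in range(iters):
        for f, x, other in ((f1, x1, f2), (f2, x2, f1)):
            resid = [y[i] - mu - other[i] for i in range(n)]
            new = [nw_smooth(x, resid, x[i], h) for i in range(n)]
            center = sum(new) / n
            f[:] = [v - center for v in new]
    return mu, f1, f2

# Synthetic additive data: y = x1 + sin(2 * x2)
random.seed(1)
x1 = [random.uniform(0, 3) for _ in range(120)]
x2 = [random.uniform(0, 3) for _ in range(120)]
y = [a + sin(2 * b) for a, b in zip(x1, x2)]
mu, f1, f2 = backfit(x1, x2, y)
fit_sse = sum((y[i] - mu - f1[i] - f2[i]) ** 2 for i in range(120))
assert fit_sse < 0.5 * sum((yi - mu) ** 2 for yi in y)
```

Estimating each one-dimensional component separately is what sidesteps the curse of dimensionality relative to smoothing over (x1, x2) jointly.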

  17. Forecasting turbulent modes with nonparametric diffusion models: Learning from noisy data

    Science.gov (United States)

    Berry, Tyrus; Harlim, John

    2016-04-01

    In this paper, we apply a recently developed nonparametric modeling approach, the "diffusion forecast", to predict the time-evolution of Fourier modes of turbulent dynamical systems. While the diffusion forecasting method assumes the availability of a noise-free training data set observing the full state space of the dynamics, in real applications we often have only partial observations which are corrupted by noise. To alleviate these practical issues, following the theory of embedology, the diffusion model is built using the delay-embedding coordinates of the data. We show that this delay embedding biases the geometry of the data in a way which extracts the most stable component of the dynamics and reduces the influence of independent additive observation noise. The resulting diffusion forecast model approximates the semigroup solutions of the generator of the underlying dynamics in the limit of large data and when the observation noise vanishes. As in any standard forecasting problem, the forecasting skill depends crucially on the accuracy of the initial conditions. We introduce a novel Bayesian method for filtering the discrete-time noisy observations which works with the diffusion forecast to determine the forecast initial densities. Numerically, we compare this nonparametric approach with standard stochastic parametric models on a wide range of well-studied turbulent modes, including the Lorenz-96 model in weakly chaotic to fully turbulent regimes and the barotropic modes of a quasi-geostrophic model with baroclinic instabilities. We show that when the only available data is the low-dimensional set of noisy modes that are being modeled, the diffusion forecast is indeed competitive to the perfect model.
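
Delay embedding, the preprocessing step described above, is simple to implement; this generic sketch maps a scalar series into dim-dimensional delay coordinates (the diffusion-maps machinery built on top of it is beyond a few lines):

```python
def delay_embed(series, dim=3, tau=1):
    """Map a scalar series to delay coordinates:
    row t is (x_t, x_{t-tau}, ..., x_{t-(dim-1)*tau})."""
    start = (dim - 1) * tau
    return [tuple(series[t - j * tau] for j in range(dim))
            for t in range(start, len(series))]

emb = delay_embed(list(range(6)), dim=3, tau=1)
assert emb[0] == (2, 1, 0)
assert len(emb) == 6 - 2
```

By Takens-style embedding arguments, these coordinates can recover the geometry of the underlying attractor from a single observed component, which is why the diffusion model is trained on them rather than on the raw noisy series.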

  18. Genome-wide linkage scan for primary open angle glaucoma: influences of ancestry and age at diagnosis.

    Directory of Open Access Journals (Sweden)

    Kristy R Crooks

    Full Text Available Primary open-angle glaucoma (POAG) is the most common form of glaucoma and one of the leading causes of vision loss worldwide. The genetic etiology of POAG is complex and poorly understood. The purpose of this work is to identify genomic regions of interest linked to POAG. This study is the largest genetic linkage study of POAG performed to date: genomic DNA samples from 786 subjects (538 Caucasian ancestry, 248 African ancestry) were genotyped using either the Illumina GoldenGate Linkage 4 Panel or the Illumina Infinium Human Linkage-12 Panel. A total of 5233 SNPs was analyzed in 134 multiplex POAG families (89 Caucasian ancestry, 45 African ancestry). Parametric and non-parametric linkage analyses were performed on the overall dataset and within race-specific datasets (Caucasian ancestry and African ancestry). Ordered subset analysis was used to stratify the data on the basis of age of glaucoma diagnosis. Novel linkage regions were identified on chromosomes 1 and 20, and two previously described loci, GLC1D on chromosome 8 and GLC1I on chromosome 15, were replicated. These data will prove valuable in the context of interpreting results from genome-wide association studies for POAG.

  19. The first genetic linkage map of Eucommia ulmoides

    Indian Academy of Sciences (India)

    Dawei Wang; Yu Li; Long Li; Yongcheng Wei; Zhouqi Li

    2014-04-01

    In accordance with pseudo-testcross strategy, the first genetic linkage map of Eucommia ulmoides Oliv. was constructed by an F1 population of 122 plants using amplified fragment length polymorphism (AFLP) markers. A total of 22 AFLP primer combinations generated 363 polymorphic markers. We selected 289 markers segregating as 1:1 and used them for constructing the parent-specific linkage maps. Among the candidate markers, 127 markers were placed on the maternal map LF and 108 markers on the paternal map Q1. The maternal map LF spanned 1116.1 cM in 14 linkage groups with a mean map distance of 8.78 cM; the paternal map Q1 spanned 929.6 cM in 12 linkage groups with an average spacing of 8.61 cM. The estimated coverage of the genome through two methods was 78.5 and 73.9% for LF, and 76.8 and 71.2% for Q1, respectively. This map is the first linkage map of E. ulmoides and provides a basis for mapping quantitative-trait loci and breeding applications.
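
Map distances in centimorgans, as reported above, are typically obtained from recombination fractions via a mapping function. A sketch of the Haldane function (which assumes no crossover interference; Kosambi is a common alternative, and the abstract does not state which was used):

```python
from math import exp, log

def haldane_cM(r):
    """Map distance in centimorgans from a recombination fraction r < 0.5,
    assuming crossovers occur without interference (Haldane)."""
    return -50.0 * log(1.0 - 2.0 * r)

def haldane_r(d_cm):
    """Inverse mapping: expected recombination fraction at d_cm centimorgans."""
    return 0.5 * (1.0 - exp(-d_cm / 50.0))

assert abs(haldane_cM(0.1) - 11.157) < 1e-3
assert abs(haldane_r(haldane_cM(0.2)) - 0.2) < 1e-12
```

For tightly linked markers the map distance is close to 100 * r, but the correction grows quickly as r approaches 0.5, which matters for the roughly 8-9 cM average spacings reported for the LF and Q1 maps.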

  20. Adaptive Particle Filter for Nonparametric Estimation with Measurement Uncertainty in Wireless Sensor Networks.

    Science.gov (United States)

    Li, Xiaofan; Zhao, Yubin; Zhang, Sha; Fan, Xiaopeng

    2016-05-30

    Particle filters (PFs) are widely used for nonlinear signal processing in wireless sensor networks (WSNs). However, measurement uncertainty makes the WSN observations deviate from the actual case and also degrades the estimation accuracy of the PFs. Beyond algorithm design, few works focus on improving the likelihood calculation method, since it is usually pre-assumed to follow a given distribution model. In this paper, we propose a novel PF method, which is based on a new likelihood fusion method for WSNs and can further improve the estimation performance. We first use a dynamic Gaussian model to describe the nonparametric features of the measurement uncertainty. Then, we propose a likelihood adaptation method that employs the prior information and a belief factor to reduce the measurement noise. The optimal belief factor is attained by deriving the minimum Kullback-Leibler divergence. The likelihood adaptation method can be integrated into any PF, and we use our method to develop three versions of adaptive PFs for a target tracking system using wireless sensor networks. The simulation and experimental results demonstrate that our likelihood adaptation method greatly improves the estimation performance of PFs in a high-noise environment. In addition, the adaptive PFs are highly adaptable to the environment without imposing additional computational complexity.
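
The belief-factor idea can be caricatured as likelihood tempering in a standard particle-filter measurement update: raising the likelihood to a power beta < 1 flattens it when measurements are unreliable. This is a loose sketch, not the paper's KL-optimal factor; the Gaussian likelihood and all numbers are hypothetical.

```python
import random
from math import exp, pi, sqrt

def gaussian_lik(z, x, sigma):
    return exp(-0.5 * ((z - x) / sigma) ** 2) / (sigma * sqrt(2.0 * pi))

def pf_update(particles, weights, z, sigma, beta=1.0):
    """One measurement update; beta < 1 tempers (flattens) the likelihood,
    loosely mimicking a belief factor for unreliable measurements."""
    new_w = [w * gaussian_lik(z, x, sigma) ** beta
             for x, w in zip(particles, weights)]
    total = sum(new_w)
    return [w / total for w in new_w]

random.seed(0)
particles = [random.gauss(0.0, 2.0) for _ in range(1000)]
weights = [1.0 / 1000] * 1000
post_w = pf_update(particles, weights, z=1.0, sigma=0.5)
estimate = sum(x * w for x, w in zip(particles, post_w))
assert 0.5 < estimate < 1.5  # pulled toward the measurement z = 1.0
```

Because the update only touches the weight computation, a tempering factor of this kind can be bolted onto any PF variant, which is the property the paper exploits when building its three adaptive PFs.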

  1. Mapping multiple QTL using linkage disequilibrium and linkage analysis information and multitrait data

    Directory of Open Access Journals (Sweden)

    Goddard Mike E

    2004-05-01

    Full Text Available Abstract A multi-locus QTL mapping method is presented, which combines linkage and linkage disequilibrium (LD) information and uses multitrait data. The method assumed a putative QTL at the midpoint of each marker bracket. Whether the putative QTL had an effect or not was sampled using Markov chain Monte Carlo (MCMC) methods. The method was tested on dairy cattle data for chromosome 14, where the DGAT1 gene was known to be segregating. The DGAT1 gene was mapped to a region of 0.04 cM, and the effects of the gene were accurately estimated. Fitting multiple QTL gave a much sharper indication of the QTL position than a single-QTL model using multitrait data, probably because the multi-locus QTL mapping reduced the carry-over effect of the large DGAT1 gene on adjacent putative QTL positions. This suggests that the method could detect secondary QTL that would, in single-point analyses, remain hidden under the broad peak of the dominant QTL. However, no indications of a second QTL affecting dairy traits were found on chromosome 14.

  2. Directed Graphs, Decompositions, and Spatial Linkages

    CERN Document Server

    Shai, Offer; Whiteley, Walter

    2010-01-01

    The decomposition of a system of constraints into small basic components is an important tool of design and analysis. Specifically, the decomposition of a linkage into minimal components is a central tool of analysis and synthesis of linkages. In this paper we prove that every pinned 3-isostatic (minimally rigid) graph (grounded linkage) has a unique decomposition into minimal strongly connected components (in the sense of directed graphs) which we call 3-Assur graphs. This analysis extends the Assur decompositions of plane linkages previously studied in the mathematical and the mechanical engineering literature. These 3-Assur graphs are the central building blocks for all kinematic linkages in 3-space. They share a number of key combinatorial and geometric properties with the 2-Assur graphs, including an associated lower block-triangular decomposition of the pinned rigidity matrix which provides a format for extending the motion induced by inserting one driver in a bottom Assur linkage to the joints of the e...

  3. An improved recommendation algorithm via weakening indirect linkage effect

    Science.gov (United States)

    Chen, Guang; Qiu, Tian; Shen, Xiao-Quan

    2015-07-01

    We propose an indirect-link-weakened mass diffusion method (IMD) that accounts for indirect linkages and the source-object heterogeneity effect in the mass diffusion (MD) recommendation method. Experimental results on the MovieLens, Netflix, and RYM datasets show that the IMD method greatly improves both recommendation accuracy and diversity compared with a heterogeneity-weakened MD method (HMD), which considers only source-object heterogeneity. Moreover, the recommendation accuracy for cold objects is improved more by the IMD method than by the HMD method. This suggests that eliminating the redundancy induced by indirect linkages can have a prominent effect on the recommendation efficiency of the MD method. Project supported by the National Natural Science Foundation of China (Grant No. 11175079) and the Young Scientist Training Project of Jiangxi Province, China (Grant No. 20133BCB23017).
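For context, the baseline mass-diffusion (MD, also called ProbS) scoring that the IMD method modifies can be sketched on a user-object bipartite network. The indirect-link weakening and heterogeneity corrections are omitted here, so this is only the plain two-step MD resource spreading:

```python
import numpy as np

def mass_diffusion_scores(A, user):
    """Plain mass-diffusion (ProbS) recommendation scores.

    A: binary user-object adjacency matrix, shape (n_users, n_objects).
    A unit of resource starts on the target user's collected objects,
    spreads to users (divided by object degree), then back to objects
    (divided by user degree). The IMD variant in the paper additionally
    weakens indirect links; that correction is not implemented here.
    """
    k_obj = A.sum(axis=0)          # object degrees
    k_user = A.sum(axis=1)         # user degrees
    f0 = A[user].astype(float)     # initial resource on collected objects
    # object -> user diffusion step
    to_users = A @ (f0 / np.where(k_obj > 0, k_obj, 1))
    # user -> object diffusion step
    scores = A.T @ (to_users / np.where(k_user > 0, k_user, 1))
    scores[A[user] > 0] = 0.0      # never re-recommend collected objects
    return scores
```

Objects with high final resource are recommended first; the IMD paper's point is that part of this resource arrives via redundant indirect paths and should be discounted.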

  4. Bayesian linkage analysis of categorical traits for arbitrary pedigree designs.

    Directory of Open Access Journals (Sweden)

    Abra Brisbin

    Full Text Available BACKGROUND: Pedigree studies of complex heritable diseases often feature nominal or ordinal phenotypic measurements and missing genetic marker or phenotype data. METHODOLOGY: We have developed a Bayesian method for Linkage analysis of Ordinal and Categorical traits (LOCate that can analyze complex genealogical structure for family groups and incorporate missing data. LOCate uses a Gibbs sampling approach to assess linkage, incorporating a simulated tempering algorithm for fast mixing. While our treatment is Bayesian, we develop a LOD (log of odds score estimator for assessing linkage from Gibbs sampling that is highly accurate for simulated data. LOCate is applicable to linkage analysis for ordinal or nominal traits, a versatility which we demonstrate by analyzing simulated data with a nominal trait, on which LOCate outperforms LOT, an existing method which is designed for ordinal traits. We additionally demonstrate our method's versatility by analyzing a candidate locus (D2S1788 for panic disorder in humans, in a dataset with a large amount of missing data, which LOT was unable to handle. CONCLUSION: LOCate's accuracy and applicability to both ordinal and nominal traits will prove useful to researchers interested in mapping loci for categorical traits.

  5. Biological parametric mapping with robust and non-parametric statistics.

    Science.gov (United States)

    Yang, Xue; Beason-Held, Lori; Resnick, Susan M; Landman, Bennett A

    2011-07-15

    Mapping the quantitative relationship between structure and function in the human brain is an important and challenging problem. Numerous volumetric, surface, region-of-interest and voxelwise image processing techniques have been developed to statistically assess potential correlations between imaging and non-imaging metrics. Recently, biological parametric mapping has extended the widely popular statistical parametric mapping approach to enable application of the general linear model to multiple image modalities (both for regressors and regressands) along with scalar valued observations. This approach offers great promise for direct, voxelwise assessment of structural and functional relationships with multiple imaging modalities. However, as presented, the biological parametric mapping approach is not robust to outliers and may lead to invalid inferences (e.g., artifactual low p-values) due to slight mis-registration or variation in anatomy between subjects. To enable widespread application of this approach, we introduce robust regression and non-parametric regression in the neuroimaging context of application of the general linear model. Through simulation and empirical studies, we demonstrate that our robust approach reduces sensitivity to outliers without substantial degradation in power. The robust approach and associated software package provide a reliable way to quantitatively assess voxelwise correlations between structural and functional neuroimaging modalities. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. Pivotal Estimation of Nonparametric Functions via Square-root Lasso

    CERN Document Server

    Belloni, Alexandre; Wang, Lie

    2011-01-01

    In a nonparametric linear regression model we study a variant of LASSO, called square-root LASSO, which does not require knowledge of the scaling parameter $\sigma$ of the noise or bounds for it. This work derives new finite sample upper bounds for the prediction norm rate of convergence, the $\ell_1$-rate of convergence, the $\ell_\infty$-rate of convergence, and the sparsity of the square-root LASSO estimator. A lower bound for the prediction norm rate of convergence is also established. In many non-Gaussian noise cases, we rely on moderate deviation theory for self-normalized sums and on new data-dependent empirical process inequalities to achieve Gaussian-like results provided log p = o(n^{1/3}), improving upon results derived in the parametric case that required log p = O(log n). In addition, we derive finite sample bounds on the performance of ordinary least squares (OLS) applied to the model selected by square-root LASSO, accounting for possible misspecification of the selected model. In particular, we provide mild con...

  7. Separating environmental efficiency into production and abatement efficiency. A nonparametric model with application to U.S. power plants

    Energy Technology Data Exchange (ETDEWEB)

    Hampf, Benjamin

    2011-08-15

    In this paper we present a new approach to evaluate the environmental efficiency of decision making units. We propose a model that describes a two-stage process consisting of a production and an end-of-pipe abatement stage with the environmental efficiency being determined by the efficiency of both stages. Taking the dependencies between the two stages into account, we show how nonparametric methods can be used to measure environmental efficiency and to decompose it into production and abatement efficiency. For an empirical illustration we apply our model to an analysis of U.S. power plants.

  8. The Barley Chromosome 5 Linkage Map

    DEFF Research Database (Denmark)

    Jensen, J.; Jørgensen, Jørgen Helms

    1975-01-01

    The literature is surveyed for data on recombination between loci on chromosome 5 of barley; 13 loci fall into the category “mapped” loci, more than 20 into the category “associated” loci and nine into the category “loci once suggested to be on chromosome 5”. A procedure was developed for estimating a linkage map; it involves (1) transformation by the Kosambi mapping function of the available recombination percentages to additive map distances, (2) calculation of a set of map distances from the transformed recombination percentages by a maximum likelihood method in which all the available data are utilized jointly, and (3) omission of inconsistent data and determination of the most likely order of the loci. This procedure was applied to the 42 recombination percentages available for the 13 “mapped” loci. Due to inconsistencies, 14 of the recombination percentages and, therefore, two...
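Step (1) uses the Kosambi mapping function, which has a simple closed form; a minimal sketch, with distances in centimorgans (the inverse follows from solving for the recombination fraction r):

```python
import math

def kosambi_cM(r):
    """Kosambi map distance in cM from recombination fraction r (0 <= r < 0.5).

    d = (1/4) * ln((1 + 2r) / (1 - 2r)) Morgans, scaled by 100 to cM.
    """
    return 25.0 * math.log((1.0 + 2.0 * r) / (1.0 - 2.0 * r))

def kosambi_r(d_cM):
    """Inverse Kosambi: recombination fraction from map distance in cM."""
    return 0.5 * math.tanh(2.0 * d_cM / 100.0)
```

For small r the distance is approximately 100·r cM, and unlike raw recombination percentages the transformed distances are (approximately) additive, which is why the transformation precedes the joint maximum likelihood step.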

  9. Zero- vs. one-dimensional, parametric vs. non-parametric, and confidence interval vs. hypothesis testing procedures in one-dimensional biomechanical trajectory analysis.

    Science.gov (United States)

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2015-05-01

    Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories.
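The difference between 0D and 1D CIs can be made concrete with a simultaneous bootstrap band built from the maximum absolute deviation across the whole trajectory. This is a generic sketch of that construction, not the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_1d_ci(trajectories, n_boot=1000, alpha=0.05):
    """Simultaneous (1D) bootstrap confidence band for a mean trajectory.

    trajectories: array (n_subjects, n_time). The band half-width comes
    from the bootstrap distribution of the *maximum* absolute deviation
    across time, so it covers the entire trajectory at once -- unlike a
    0D CI computed independently at each time node, which is too narrow
    for whole-trajectory inference.
    """
    n, t = trajectories.shape
    mean = trajectories.mean(axis=0)
    max_dev = np.empty(n_boot)
    for b in range(n_boot):
        sample = trajectories[rng.integers(0, n, n)]   # resample subjects
        max_dev[b] = np.max(np.abs(sample.mean(axis=0) - mean))
    h = np.quantile(max_dev, 1.0 - alpha)
    return mean - h, mean + h
```

A constant half-width band is the simplest variant; pointwise-studentized versions are common too, but the key feature is the max statistic, which plays the same role as the RFT correction for multiple comparisons.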

  10. Further Empirical Results on Parametric Versus Non-Parametric IRT Modeling of Likert-Type Personality Data.

    Science.gov (United States)

    Maydeu-Olivares, Albert

    2005-04-01

    Chernyshenko, Stark, Chan, Drasgow, and Williams (2001) investigated the fit of Samejima's logistic graded model and Levine's non-parametric MFS model to the scales of two personality questionnaires and found that the graded model did not fit well. We attribute the poor fit of the graded model to small amounts of multidimensionality present in their data. To verify this conjecture, we compare the fit of these models to the Social Problem Solving Inventory-Revised, whose scales were designed to be unidimensional. A calibration and a cross-validation sample of new observations were used. We also included the following parametric models in the comparison: Bock's nominal model, Masters' partial credit model, and Thissen and Steinberg's extension of the latter. All models were estimated using full information maximum likelihood. We also included in the comparison a normal ogive model version of Samejima's model estimated using limited information estimation. We found that for all scales Samejima's model outperformed all other parametric IRT models in both samples, regardless of the estimation method employed. The non-parametric model outperformed all parametric models in the calibration sample. However, the graded model outperformed MFS in the cross-validation sample in some of the scales. We advocate employing the graded model estimated using limited information methods in modeling Likert-type data, as these methods are more versatile than full information methods to capture the multidimensionality that is generally present in personality data.

  11. Linkages between NAMA - LEDS - MRV

    DEFF Research Database (Denmark)

    Agyemang-Bonsu, William; Benioff, Ron; Cox, Sadie

    Low Emission Development Strategies (LEDS), Nationally Appropriate Mitigation Actions (NAMAs) and Monitoring, Reporting and Verification (MRV) are three of the key conceptual components emerging as part of the global architecture for a new climate agreement by 2015. The three components are devel... ...how the three components are conceptually interlinked. Identifying the linkages can inform the work on each component and strengthen coordination of work in the context of the three big partnerships: the International Partnership on Mitigation and MRV, the LEDS Global Partnership and the NAMA...

  12. Analysis of Linkage Effects among Currency Networks Using REER Data

    Directory of Open Access Journals (Sweden)

    Haishu Qiao

    2015-01-01

    Full Text Available We modeled currency networks through the use of REER (real effective exchange rate) data instead of bilateral exchange rates in order to overcome the confusion in selecting base currencies. Based on the MST (minimum spanning tree) approach and the rolling-window method, we constructed time-varying, correlation-based networks with which we investigate the linkage effects among different currencies. As the source of empirical data, we chose the monthly REER data for a set of 61 major currencies during the period from 1994 to 2014. The study demonstrated that obvious linkage effects existed among currency networks, and the euro (EUR) was confirmed as the predominant world currency. Additionally, we used the rolling-window method to investigate the stability of linkage effects by calculating the mean correlations and mean distances as well as the normalized tree length and degrees of those currencies. The results showed that financial crises during the study period had a great effect on the topology of the currency network and led to more clustered currency networks. Our results suggest that it is more appropriate to estimate the linkage effects among currency networks through the use of REER data.
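A minimal sketch of the MST construction from return correlations, using the standard distance transform d = sqrt(2(1 − ρ)) and Prim's algorithm; the paper's rolling-window and REER preprocessing steps are omitted, so this shows only the core network-building step:

```python
import numpy as np

def currency_mst_edges(returns):
    """Minimum spanning tree of a currency network via Prim's algorithm.

    returns: array (n_periods, n_currencies) of (log-)returns.
    Correlations rho are mapped to the metric distance
    d = sqrt(2 * (1 - rho)) commonly used for financial MSTs, so highly
    correlated currencies are close and end up directly linked.
    """
    rho = np.corrcoef(returns, rowvar=False)
    dist = np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))
    n = dist.shape[0]
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j in in_tree:
                    continue
                if best is None or dist[i, j] < best[2]:
                    best = (i, j, float(dist[i, j]))
        edges.append(best)          # (node_in_tree, new_node, distance)
        in_tree.add(best[1])
    return edges                    # n - 1 edges of the MST
```

Repeating this on a sliding window of periods gives the time-varying trees; the normalized tree length mentioned in the abstract is simply the mean of the n − 1 edge distances.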

  13. Validation of an instrument to measure inter-organisational linkages in general practice

    Directory of Open Access Journals (Sweden)

    Cheryl Amoroso

    2007-11-01

    Full Text Available Purpose: Linkages between general medical practices and external services are important for high quality chronic disease care. The purpose of this research is to describe the development, evaluation and use of a brief tool that measures the comprehensiveness and quality of a general practice’s linkages with external providers for the management of patients with chronic disease. In this study, clinical linkages are defined as the communication, support, and referral arrangements between services for the care and assistance of patients with chronic disease. Methods: An interview to measure surgery-level (rather than individual clinician-level) clinical linkages was developed, piloted, reviewed, and evaluated with 97 Australian general practices. Two validated survey instruments were posted to patients, and a survey of locally available services was developed and posted to participating Divisions of General Practice (support organisations). Hypotheses regarding internal validity, association with local services, and patient satisfaction were tested using factor analysis, logistic regression and multilevel regression models. Results: The resulting General Practice Clinical Linkages Interview (GP-CLI) is a nine-item tool with three underlying factors: referral and advice linkages, shared care and care planning linkages, and community access and awareness linkages. Local availability of chronic disease services has no effect on the comprehensiveness of services with which practices link; however, comprehensiveness of clinical linkages is associated with patient assessment of access, receptionist services, and continuity of care in their general practice. Conclusions: The GP-CLI may be useful to researchers examining comparable health care systems for measuring the comprehensiveness and quality of linkages at a general practice-level with related services, possessing both internal and external validity. The tool can be used with large samples

  14. Strong Convergence of Partitioning Estimation for Nonparametric Regression Function under Dependence Samples

    Institute of Scientific and Technical Information of China (English)

    LING Neng-xiang; DU Xue-qiao

    2005-01-01

    In this paper, we study the strong consistency of the partitioning estimate of a regression function under samples that are φ-mixing sequences with identical distributions. Key words: nonparametric regression function; partitioning estimation; strong convergence; φ-mixing sequences.
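The partitioning (regressogram) estimate studied here has a very simple form: average the responses that fall in the cell of a fixed partition containing the query point. A minimal sketch with equal-width cells (an illustrative choice; the theory allows general partitions):

```python
import numpy as np

def regressogram(x, y, x0, n_bins=10):
    """Partitioning (regressogram) estimate of m(x0) = E[Y | X = x0].

    The covariate range is split into n_bins equal-width cells; the
    estimate at x0 is the mean response of the sample points that fall
    in x0's cell. Returns NaN for an empty cell.
    """
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    cell = int(np.clip(np.searchsorted(edges, x0, side="right") - 1,
                       0, n_bins - 1))
    in_cell = (x >= edges[cell]) & (x <= edges[cell + 1])
    if not in_cell.any():
        return float("nan")
    return float(y[in_cell].mean())
```

Consistency results of the kind proved in the paper say that, as the sample grows and the cells shrink appropriately, this local average converges to the true regression function even under φ-mixing dependence.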

  15. Kernel bandwidth estimation for non-parametric density estimation: a comparative study

    CSIR Research Space (South Africa)

    Van der Walt, CM

    2013-12-01

    Full Text Available We investigate the performance of conventional bandwidth estimators for non-parametric kernel density estimation on a number of representative pattern-recognition tasks, to gain a better understanding of the behaviour of these estimators in high...
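Two conventional bandwidth estimators of the kind compared in such studies can be sketched as follows: Silverman's rule of thumb, and a leave-one-out likelihood score used for data-driven selection. These are generic textbook examples, not necessarily the specific estimators benchmarked in the paper:

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule-of-thumb bandwidth for a 1D Gaussian KDE."""
    n = len(x)
    iqr = np.quantile(x, 0.75) - np.quantile(x, 0.25)
    sigma = min(x.std(ddof=1), iqr / 1.349)   # robust scale estimate
    return 0.9 * sigma * n ** (-1.0 / 5.0)

def loo_log_likelihood(x, h):
    """Leave-one-out log-likelihood of a Gaussian KDE with bandwidth h.

    Maximizing this over h is a simple cross-validation criterion for
    bandwidth selection.
    """
    n = len(x)
    d = (x[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * d ** 2) / np.sqrt(2.0 * np.pi)
    np.fill_diagonal(k, 0.0)                  # exclude each point itself
    return float(np.sum(np.log(k.sum(axis=1) / ((n - 1) * h))))
```

The rule of thumb is tuned to near-Gaussian data; likelihood cross-validation adapts to the sample, which is why comparative studies like this one evaluate both families across varied tasks.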

  16. Bayesian nonparametric estimation and consistency of mixed multinomial logit choice models

    CERN Document Server

    De Blasi, Pierpaolo; Lau, John W; 10.3150/09-BEJ233

    2011-01-01

    This paper develops nonparametric estimation for discrete choice models based on the mixed multinomial logit (MMNL) model. It has been shown that MMNL models encompass all discrete choice models derived under the assumption of random utility maximization, subject to the identification of an unknown distribution $G$. Noting the mixture model description of the MMNL, we employ a Bayesian nonparametric approach, using nonparametric priors on the unknown mixing distribution $G$, to estimate choice probabilities. We provide an important theoretical support for the use of the proposed methodology by investigating consistency of the posterior distribution for a general nonparametric prior on the mixing distribution. Consistency is defined according to an $L_1$-type distance on the space of choice probabilities and is achieved by extending to a regression model framework a recent approach to strong consistency based on the summability of square roots of prior probabilities. Moving to estimation, slightly different te...

  17. Nonparametric Monitoring for Geotechnical Structures Subject to Long-Term Environmental Change

    Directory of Open Access Journals (Sweden)

    Hae-Bum Yun

    2011-01-01

    Full Text Available A nonparametric, data-driven methodology of monitoring for geotechnical structures subject to long-term environmental change is discussed. Avoiding physical assumptions or excessive simplification of the monitored structures, the nonparametric monitoring methodology presented in this paper provides reliable performance-related information particularly when the collection of sensor data is limited. For the validation of the nonparametric methodology, a field case study was performed using a full-scale retaining wall, which had been monitored for three years using three tilt gauges. Using the very limited sensor data, it is demonstrated that important performance-related information, such as drainage performance and sensor damage, could be disentangled from significant daily, seasonal and multiyear environmental variations. Extensive literature review on recent developments of parametric and nonparametric data processing techniques for geotechnical applications is also presented.

  18. Nonparametric TOA estimators for low-resolution IR-UWB digital receiver

    Institute of Scientific and Technical Information of China (English)

    Yanlong Zhang; Weidong Chen

    2015-01-01

    Nonparametric time-of-arrival (TOA) estimators for impulse radio ultra-wideband (IR-UWB) signals are proposed. Nonparametric detection is obviously useful in situations where detailed information about the statistics of the noise is unavailable or not accurate. Such TOA estimators are obtained based on conditional statistical tests with only a symmetry distribution assumption on the noise probability density function. The nonparametric estimators are attractive choices for low-resolution IR-UWB digital receivers which can be implemented by fast comparators or high sampling rate low resolution analog-to-digital converters (ADCs), in place of high sampling rate high resolution ADCs which may not be available in practice. Simulation results demonstrate that nonparametric TOA estimators provide more effective and robust performance than typical energy detection (ED) based estimators.

  19. Nonparametric Stochastic Model for Uncertainty Quantification of Short-term Wind Speed Forecasts

    Science.gov (United States)

    AL-Shehhi, A. M.; Chaouch, M.; Ouarda, T.

    2014-12-01

    Wind energy is increasing in importance as a renewable energy source due to its potential role in reducing carbon emissions. It is a safe, clean, and inexhaustible source of energy. The amount of wind energy generated by wind turbines is closely related to the wind speed. Wind speed forecasting plays a vital role in the wind energy sector in terms of optimal wind turbine operation, wind energy dispatch and scheduling, efficient energy harvesting, etc. It is also considered during planning, design, and assessment of any proposed wind project. Therefore, accurate prediction of wind speed carries particular importance and plays a significant role in the wind industry. Many methods have been proposed in the literature for short-term wind speed forecasting. These methods are usually based on modeling historical fixed time intervals of the wind speed data and using them for future prediction. They mainly include statistical models such as ARMA and ARIMA models, physical models such as numerical weather prediction, and artificial intelligence techniques such as support vector machines and neural networks. In this paper, we are interested in estimating hourly wind speed measures in the United Arab Emirates (UAE). More precisely, we predict hourly wind speed using a nonparametric kernel estimation of the regression and volatility functions of a nonlinear autoregressive model with an ARCH component, which includes an unknown nonlinear regression function and volatility function already discussed in the literature. The unknown nonlinear regression function describes the dependence between the value of the wind speed at time t and its historical values at times t-1, t-2, …, t-d. This function plays a key role in predicting the hourly wind speed process. The volatility function, i.e., the conditional variance given the past, measures the risk associated with this prediction. Since the regression and volatility functions are supposed to be unknown, they are estimated using
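For a first-order version of such a model, both the regression function and the volatility function can be estimated with Nadaraya-Watson kernel smoothing; a minimal sketch (the Gaussian kernel, bandwidth, and AR order d = 1 are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

def nw_autoregressive_forecast(series, x_now, h):
    """Nadaraya-Watson kernel estimates for a first-order nonlinear AR model.

    The forecast m(x_now) ~ E[X_t | X_{t-1} = x_now] is a kernel-weighted
    average of historical next values whose predecessors resemble the
    current wind speed x_now. The conditional variance v (the ARCH-type
    risk term) is estimated the same way from squared deviations.
    """
    x_prev, x_next = series[:-1], series[1:]
    w = np.exp(-0.5 * ((x_prev - x_now) / h) ** 2)   # Gaussian kernel weights
    m = np.sum(w * x_next) / np.sum(w)               # regression function
    v = np.sum(w * (x_next - m) ** 2) / np.sum(w)    # volatility function
    return float(m), float(v)
```

Extending to lag d simply replaces the scalar kernel with a product kernel over the last d values; the bandwidth h controls the usual bias-variance trade-off.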

  20. Extent of linkage disequilibrium in chicken

    NARCIS (Netherlands)

    Aerts, J.; Megens, H.J.W.C.; Veenendaal, T.; Ovcharenko, I.; Crooijmans, R.P.M.A.; Gordon, L.; Stubbs, L.; Groenen, M.A.M.; Rodoinov, A.; Gaginskaya, E.

    2007-01-01

    Many of the economically important traits in chicken are multifactorial and governed by multiple genes located at different quantitative trait loci (QTLs). The optimal marker density to identify these QTLs in linkage and association studies is largely determined by the extent of linkage

  1. Examples of the Application of Nonparametric Information Geometry to Statistical Physics

    Directory of Open Access Journals (Sweden)

    Giovanni Pistone

    2013-09-01

    Full Text Available We review a nonparametric version of Amari’s information geometry in which the set of positive probability densities on a given sample space is endowed with an atlas of charts to form a differentiable manifold modeled on Orlicz Banach spaces. This nonparametric setting is used to discuss the setting of typical problems in machine learning and statistical physics, such as black-box optimization, Kullback-Leibler divergence, Boltzmann-Gibbs entropy and the Boltzmann equation.

  2. Economic decision making and the application of nonparametric prediction models

    Science.gov (United States)

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2008-01-01

    Sustained increases in energy prices have focused attention on gas resources in low-permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are often large. Planning and development decisions for extraction of such resources must be areawide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm, the decision to enter such plays depends on reconnaissance-level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional-scale cost functions. The context of the worked example is the Devonian Antrim-shale gas play in the Michigan basin. One finding relates to selection of the resource prediction model to be used with economic models. Models chosen because they can best predict aggregate volume over larger areas (many hundreds of sites) smooth out granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined arbitrarily by extraneous factors. The analysis shows a 15-20% gain in gas volume when these simple models are applied to order drilling prospects strategically rather than to choose drilling locations randomly. Copyright © 2008 Society of Petroleum Engineers.

  3. The Effects of Sample Size on Expected Value, Variance and Fraser Efficiency for Nonparametric Independent Two Sample Tests

    Directory of Open Access Journals (Sweden)

    Ismet DOGAN

    2015-10-01

    Full Text Available Objective: Choosing the most efficient statistical test is one of the essential problems of statistics. Asymptotic relative efficiency is a notion which enables the quantitative comparison, in large samples, of two different tests used for testing the same statistical hypothesis. The notion of asymptotic efficiency of tests is more complicated than that of asymptotic efficiency of estimates. This paper discusses the effect of sample size on the expected values and variances of non-parametric tests for two independent samples and determines the most effective test for different sample sizes using the Fraser efficiency value. Material and Methods: Since calculating the power value when comparing tests is often impractical, using the asymptotic relative efficiency value is favorable. Asymptotic relative efficiency is an indispensable technique for comparing and ordering statistical tests in large samples. It is especially useful in nonparametric statistics, where there exist numerous heuristic tests such as the linear rank tests. In this study, the sample size is set to 2 ≤ n ≤ 50. Results: In both balanced and unbalanced cases, it is found that, as the sample size increases, the expected values and variances of all the tests discussed in this paper increase as well. Additionally, considering the Fraser efficiency, the Mann-Whitney U test is found to be the most efficient among the non-parametric tests used for comparing two independent samples, regardless of their sizes. Conclusion: According to Fraser efficiency, the Mann-Whitney U test is the most efficient test.
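The Mann-Whitney U statistic singled out here is computed from rank sums; a minimal sketch using midranks so that ties are handled by averaging:

```python
import numpy as np

def mann_whitney_u(x, y):
    """Mann-Whitney U statistics for two independent samples.

    Ranks the pooled sample (averaging ranks over ties), then converts
    the rank sum of x into U via U_x = R_x - n_x (n_x + 1) / 2.
    Returns (U_x, U_y); note U_x + U_y = n_x * n_y.
    """
    combined = np.concatenate([x, y])
    order = combined.argsort()
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(combined) + 1)
    # replace ranks of tied values by their midrank
    for v in np.unique(combined):
        mask = combined == v
        ranks[mask] = ranks[mask].mean()
    r_x = ranks[: len(x)].sum()
    u_x = r_x - len(x) * (len(x) + 1) / 2.0
    return u_x, len(x) * len(y) - u_x
```

The smaller of the two U values is compared against critical values (or a normal approximation for larger n) to test whether the two samples come from the same distribution.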

  4. Preliminary genetic linkage map of Indian major carp, Labeo rohita (Hamilton 1822) based on microsatellite markers

    Indian Academy of Sciences (India)

    L. Sahoo; A. Patel; B. P. Sahu; S. Mitra; P. K. Meher; K. D. Mahapatra; S. K. Dash; P. Jayasankar; P. Das

    2015-06-01

    Linkage map with wide marker coverage is an essential resource for genetic improvement study for any species. Sex-averaged genetic linkage map of Labeo rohita, popularly known as ‘rohu’, widely cultured in the Indian subcontinent, was developed by placing 68 microsatellite markers generated by a simplified method. The parents and their F1 progeny (92 individuals) were used as segregating populations. The genetic linkage map spans a sex-averaged total length of 1462.2 cM, in 25 linkage groups. The genome length of rohu was estimated to be 3087.9 cM. This genetic linkage map may facilitate systematic searches of the genome to identify genes associated with commercially important characters and marker-assisted selection programmes of this species.

  5. THE DARK MATTER PROFILE OF THE MILKY WAY: A NON-PARAMETRIC RECONSTRUCTION

    Energy Technology Data Exchange (ETDEWEB)

    Pato, Miguel [The Oskar Klein Centre for Cosmoparticle Physics, Department of Physics, Stockholm University, AlbaNova, SE-106 91 Stockholm (Sweden); Iocco, Fabio [ICTP South American Institute for Fundamental Research, and Instituto de Física Teórica—Universidade Estadual Paulista (UNESP), Rua Dr. Bento Teobaldo Ferraz 271, 01140-070 São Paulo, SP (Brazil)

    2015-04-10

    We present the results of a new, non-parametric method to reconstruct the Galactic dark matter profile directly from observations. Using the latest kinematic data to track the total gravitational potential and the observed distribution of stars and gas to set the baryonic component, we infer the dark matter contribution to the circular velocity across the Galaxy. The radial derivative of this dynamical contribution is then estimated to extract the dark matter profile. The innovative feature of our approach is that it makes no assumption on the functional form or shape of the profile, thus allowing for a clean determination with no theoretical bias. We illustrate the power of the method by constraining the spherical dark matter profile between 2.5 and 25 kpc away from the Galactic center. The results show that the proposed method, free of widely used assumptions, can already be applied to pinpoint the dark matter distribution in the Milky Way with competitive accuracy, and paves the way for future developments.
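The inversion step can be sketched generically: subtract the baryonic contribution from the circular velocity in quadrature, form the enclosed mass of a spherical halo, and take the radial derivative numerically. This is a textbook rotation-curve inversion under a spherical-halo assumption, not the paper's full pipeline:

```python
import numpy as np

def dark_matter_density_profile(r_kpc, v_circ, v_baryon):
    """Non-parametric spherical DM density from rotation-curve residuals.

    v_dm^2 = v_circ^2 - v_baryon^2 gives the dark matter contribution to
    the circular velocity; for a spherical halo, M_dm(<r) = r * v_dm^2 / G,
    and the local density follows from the numerical radial derivative
    of the enclosed mass. Units: kpc and km/s, with G chosen to match.
    """
    G = 4.30091e-6                             # kpc (km/s)^2 / Msun
    v_dm_sq = v_circ ** 2 - v_baryon ** 2
    m_enc = r_kpc * v_dm_sq / G                # enclosed DM mass [Msun]
    dm_dr = np.gradient(m_enc, r_kpc)          # numerical radial derivative
    rho = dm_dr / (4.0 * np.pi * r_kpc ** 2)   # local density [Msun / kpc^3]
    return rho
```

Because only a numerical derivative is involved, no functional form (NFW, Einasto, etc.) is imposed on the profile, which is the sense in which such reconstructions are non-parametric.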

  6. Nonparametric Feature Matching Based Conditional Random Fields for Gesture Recognition from Multi-Modal Video.

    Science.gov (United States)

    Chang, Ju Yong

    2016-08-01

    We present a new gesture recognition method that is based on the conditional random field (CRF) model using multiple feature matching. Our approach solves the labeling problem, determining gesture categories and their temporal ranges at the same time. A generative probabilistic model is formalized and probability densities are nonparametrically estimated by matching input features with a training dataset. In addition to the conventional skeletal joint-based features, the appearance information near the active hand in an RGB image is exploited to capture the detailed motion of fingers. The estimated likelihood function is then used as the unary term for our CRF model. The smoothness term is also incorporated to enforce the temporal coherence of our solution. Frame-wise recognition results can then be obtained by applying an efficient dynamic programming technique. To estimate the parameters of the proposed CRF model, we incorporate the structured support vector machine (SSVM) framework, which can perform efficient structured learning on large-scale datasets. Experimental results demonstrate that our method provides effective gesture recognition results for challenging real gesture datasets. With a mean Jaccard index of 0.8563, our method achieved state-of-the-art results for the gesture recognition track of the 2014 ChaLearn Looking at People (LAP) Challenge.
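    The dynamic-programming step that combines a unary likelihood with a temporal smoothness term can be illustrated with a small Viterbi-style sketch. This is a generic Potts-penalty frame labeler, not the authors' CRF; the scores and switch cost are made-up toy values:

```python
import numpy as np

def viterbi_segment(unary, switch_cost):
    """Dynamic program for frame-wise labeling: maximize the sum of unary
    scores minus a constant penalty whenever consecutive frames change
    label (a simple Potts smoothness term)."""
    T, L = unary.shape
    score = unary[0].copy()
    back = np.zeros((T, L), dtype=int)
    for t in range(1, T):
        stay = score                        # same label, no penalty
        best_prev = score.max() - switch_cost
        trans = np.maximum(stay, best_prev) # per-label best predecessor score
        back[t] = np.where(stay >= best_prev, np.arange(L), score.argmax())
        score = trans + unary[t]
    labels = np.empty(T, dtype=int)
    labels[-1] = score.argmax()
    for t in range(T - 1, 0, -1):
        labels[t - 1] = back[t, labels[t]]
    return labels

# Toy unary scores for a sequence that is class 0 then class 1:
unary = np.array([[2.0, 0.0],
                  [2.0, 0.0],
                  [0.0, 2.0],
                  [2.1, 2.0],   # noisy frame that weakly favors class 0
                  [0.0, 2.0]])
print(viterbi_segment(unary, switch_cost=1.0).tolist())  # [0, 0, 1, 1, 1]
```

The smoothness penalty overrides the single noisy frame, which is exactly the role of the pairwise term in the CRF.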

  7. Nonparametric signal processing validation in T-wave alternans detection and estimation.

    Science.gov (United States)

    Goya-Esteban, R; Barquero-Pérez, O; Blanco-Velasco, M; Caamaño-Fernández, A J; García-Alberola, A; Rojo-Álvarez, J L

    2014-04-01

    Although a number of methods have been proposed for T-Wave Alternans (TWA) detection and estimation, their performance strongly depends on their signal processing stages and on their free-parameter tuning. The dependence of system quality on the main signal processing stages in TWA algorithms has not yet been studied. This study seeks to optimize the final performance of the system by successive comparisons of pairs of TWA analysis systems, with one single processing difference between them. For this purpose, a set of decision statistics are proposed to evaluate the performance, and a nonparametric hypothesis test (from Bootstrap resampling) is used to make systematic decisions. Both the temporal method (TM) and the spectral method (SM) are analyzed in this study. The experiments were carried out on two datasets: first, on semisynthetic signals with artificial alternant waves and added noise; second, on two public Holter databases with different documented risk of sudden cardiac death. For semisynthetic signals (SNR = 15 dB), after the optimization procedure, a reduction of 34.0% (TM) and 5.2% (SM) of the power of TWA amplitude estimation errors was achieved, and the power of error probability was reduced by 74.7% (SM). For Holter databases, appropriate tuning of several processing blocks led to a larger intergroup separation between the two populations for TWA amplitude estimation. Our proposal can be used as a systematic procedure for signal processing block optimization in TWA algorithmic implementations.
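    The bootstrap hypothesis test used to compare two processing configurations can be sketched as follows. This is a generic paired bootstrap on synthetic error samples, not the paper's exact decision statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pvalue(errors_a, errors_b, n_boot=5000):
    """Two-sided bootstrap test for a difference in mean error between two
    processing configurations, resampling the paired differences under the
    null hypothesis that the mean difference is zero."""
    d = np.asarray(errors_a) - np.asarray(errors_b)
    observed = d.mean()
    centred = d - observed                 # enforce the null: mean diff = 0
    boots = np.array([rng.choice(centred, size=d.size, replace=True).mean()
                      for _ in range(n_boot)])
    return np.mean(np.abs(boots) >= abs(observed))

# Toy example: configuration B has systematically lower error than A.
a = rng.normal(1.0, 0.3, 100)
b = rng.normal(0.7, 0.3, 100)
p = bootstrap_pvalue(a, b)
print(p < 0.05)  # True: the processing difference is detected
```

In the paper's setting, the per-record TWA estimation errors of the two candidate systems would play the roles of `a` and `b`.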

  8. COLOR IMAGE RETRIEVAL BASED ON NON-PARAMETRIC STATISTICAL TESTS OF HYPOTHESIS

    Directory of Open Access Journals (Sweden)

    R. Shekhar

    2016-09-01

    A novel method for color image retrieval, based on non-parametric statistical tests such as the two-sample Wald test for equality of variance and the Mann-Whitney U test, is proposed in this paper. The proposed method tests the deviation, i.e. the distance in terms of variance, between the query and target images; if the images pass the test, the method proceeds to test the spectrum of energy, i.e. the distance between the mean values of the two images; otherwise, the test is dropped. If the query and target images pass both tests, it is inferred that the two images belong to the same class, i.e. the images are the same; otherwise, it is assumed that the images belong to different classes, i.e. the images are different. The proposed method is robust to scaling and rotation, since it adjusts itself to treat either the query image or the target image as a sample of the other.
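    A minimal sketch of such a two-stage screen is shown below. Note that Levene's test stands in here for the paper's two-sample Wald variance test, and the raw pixel-level comparison is an illustrative simplification of the paper's feature extraction:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def images_match(query, target, alpha=0.05):
    """Two-stage nonparametric screen: first compare spreads (Levene's
    test, standing in for the Wald variance test), then locations
    (Mann-Whitney U). Returns True when neither test rejects, i.e. the
    pixel-value distributions are statistically indistinguishable."""
    q, t = query.ravel(), target.ravel()
    if stats.levene(q, t).pvalue < alpha:   # spreads differ -> drop
        return False
    return stats.mannwhitneyu(q, t).pvalue >= alpha

same = rng.integers(0, 256, (32, 32)).astype(float)
shifted = same + 40.0                       # same spread, different mean
print(images_match(same, same.copy()))      # True
print(images_match(same, shifted))          # False
```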

  9. Estimation of Subpixel Snow-Covered Area by Nonparametric Regression Splines

    Science.gov (United States)

    Kuter, S.; Akyürek, Z.; Weber, G.-W.

    2016-10-01

    Measurement of the areal extent of snow cover with high accuracy plays an important role in hydrological and climate modeling. Remotely-sensed data acquired by earth-observing satellites offer great advantages for timely monitoring of snow cover. However, the main obstacle is the tradeoff between temporal and spatial resolution of satellite imagery. Soft or subpixel classification of low or moderate resolution satellite images is a preferred technique to overcome this problem. The most frequently employed snow cover fraction methods applied to Moderate Resolution Imaging Spectroradiometer (MODIS) data have evolved from spectral unmixing and empirical Normalized Difference Snow Index (NDSI) methods to the latest machine-learning-based artificial neural networks (ANNs). This study demonstrates the implementation of subpixel snow-covered area estimation based on the state-of-the-art nonparametric spline regression method, namely, Multivariate Adaptive Regression Splines (MARS). MARS models were trained by using MODIS top of atmospheric reflectance values of bands 1-7 as predictor variables. Reference percentage snow cover maps were generated from higher spatial resolution Landsat ETM+ binary snow cover maps. A multilayer feed-forward ANN with one hidden layer trained with backpropagation was also employed to estimate the percentage snow-covered area on the same data set. The results indicated that the developed MARS model performed better than th
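    The hinge ("reflected pair") basis functions that underlie MARS can be illustrated with a fixed-knot least-squares fit. Real MARS selects knots and terms by greedy forward/backward passes, so this is only a sketch of the basis expansion; the kinked "snow fraction" data are synthetic:

```python
import numpy as np

def hinge(x, knot, sign=+1):
    """A MARS basis function: max(0, +/-(x - knot))."""
    return np.maximum(0.0, sign * (x - knot))

# Fit a piecewise-linear model with two mirrored hinges at a fixed knot.
rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 300)
y = np.where(x < 0.5, 0.2, 0.2 + 1.6 * (x - 0.5))  # kinked toy response
y += rng.normal(0, 0.02, x.size)

# Design matrix: intercept + the two reflected hinge terms at knot 0.5.
X = np.column_stack([np.ones_like(x), hinge(x, 0.5, +1), hinge(x, 0.5, -1)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef[:2], 1))  # close to [0.2, 1.6], the true intercept/slope
```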

  10. Hereditary spastic paraplegia: LOD-score considerations for confirmation of linkage in a heterogeneous trait

    Energy Technology Data Exchange (ETDEWEB)

    Dube, M.P.; Kibar, Z.; Rouleau, G.A. [McGill Univ., Quebec (Canada)] [and others]

    1997-03-01

    Hereditary spastic paraplegia (HSP) is a degenerative disorder of the motor system, defined by progressive weakness and spasticity of the lower limbs. HSP may be inherited as an autosomal dominant (AD), autosomal recessive, or an X-linked trait. AD HSP is genetically heterogeneous, and three loci have been identified so far: SPG3 maps to chromosome 14q, SPG4 to 2p, and SPG4a to 15q. We have undertaken linkage analysis with 21 uncomplicated AD families to the three AD HSP loci. We report significant linkage for three of our families to the SPG4 locus and exclude several families by multipoint linkage. We used linkage information from several different research teams to evaluate the statistical probability of linkage to the SPG4 locus for uncomplicated AD HSP families and established the critical LOD-score value necessary for confirmation of linkage to the SPG4 locus from Bayesian statistics. In addition, we calculated the empirical P-values for the LOD scores obtained with all families with computer simulation methods. Power to detect significant linkage, as well as type I error probabilities, were evaluated. This combined analytical approach permitted conclusive linkage analyses on small to medium-size families, under the restrictions of genetic heterogeneity. 19 refs., 1 fig., 1 tab.
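    The LOD score at the center of such analyses is straightforward to compute in the simplest fully informative, phase-known setting. The counts below are hypothetical; real pedigrees require likelihoods summed over all possible phases, as linkage software does:

```python
import numpy as np

def lod_score(recombinants, nonrecombinants, theta):
    """Two-point LOD score for a fully informative, phase-known cross:
    LOD(theta) = log10[L(theta) / L(0.5)], with
    L(theta) = theta^R * (1 - theta)^NR."""
    n = recombinants + nonrecombinants
    log_l_theta = (recombinants * np.log10(theta)
                   + nonrecombinants * np.log10(1 - theta))
    log_l_null = n * np.log10(0.5)          # theta = 0.5: no linkage
    return log_l_theta - log_l_null

# 2 recombinants in 20 informative meioses, evaluated at theta = 0.1:
print(round(lod_score(2, 18, 0.1), 2))  # 3.2, above the classic threshold of 3
```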

  11. Transmission test for linkage disequilibrium: The insulin gene region and insulin-dependent diabetes mellitus (IDDM)

    Energy Technology Data Exchange (ETDEWEB)

    Spielman, R.S.; McGinnis, R.E. (Univ. of Pennsylvania School of Medicine, Philadelphia (United States)); Ewens, W.J. (Univ. of Pennsylvania, Philadelphia (United States))

    1993-03-01

    A population association has consistently been observed between insulin-dependent diabetes mellitus (IDDM) and the class 1 alleles of the region of tandem-repeat DNA (5′ flanking polymorphism [5′FP]) adjacent to the insulin gene on chromosome 11p. This finding suggests that the insulin gene region contains a gene or genes contributing to IDDM susceptibility. However, several studies that have sought to show linkage with IDDM by testing for cosegregation in affected sib pairs have failed to find evidence for linkage. As means for identifying genes for complex diseases, both the association and the affected-sib-pairs approaches have limitations. It is well known that population association between a disease and a genetic marker can arise as an artifact of population structure, even in the absence of linkage. On the other hand, linkage studies with modest numbers of affected sib pairs may fail to detect linkage, especially if there is linkage heterogeneity. The authors consider an alternative method to test for linkage with a genetic marker when population association has been found. Using data from families with at least one affected child, they evaluate the transmission of the associated marker allele from a heterozygous parent to an affected offspring. This approach has been used by several investigators, but the statistical properties of the method as a test for linkage have not been investigated. In the present paper they describe the statistical basis for this transmission test for linkage disequilibrium (transmission/disequilibrium test [TDT]). They then show the relationship of this test to tests of cosegregation that are based on the proportion of haplotypes or genes identical by descent in affected sibs. The TDT provides strong evidence for linkage between the 5′FP and susceptibility to IDDM. 27 refs., 6 tabs.
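    The TDT statistic itself is a McNemar-type chi-square on transmission counts from heterozygous parents, and can be computed directly. The counts below are hypothetical, not the paper's 5′FP data:

```python
from scipy import stats

def tdt(transmitted, untransmitted):
    """Transmission/disequilibrium test: b = number of times the associated
    allele was transmitted from a heterozygous parent to an affected child,
    c = number of times it was not.
    chi2 = (b - c)^2 / (b + c), with 1 degree of freedom."""
    b, c = transmitted, untransmitted
    chi2 = (b - c) ** 2 / (b + c)
    p = stats.chi2.sf(chi2, df=1)
    return chi2, p

# Hypothetical counts: allele transmitted 78 times, untransmitted 46 times.
chi2, p = tdt(78, 46)
print(round(chi2, 2), p < 0.01)  # 8.26 True
```

Under the null hypothesis of no linkage, transmission and non-transmission are equally likely, so a significant excess of transmissions is evidence for linkage in the presence of association.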

  12. Genome-wide linkage scan identifies two novel genetic loci for coronary artery disease: in GeneQuest families.

    Directory of Open Access Journals (Sweden)

    Hanxiang Gao

    Coronary artery disease (CAD) is the leading cause of death worldwide. Recent genome-wide association studies (GWAS) identified >50 common variants associated with CAD or its complication myocardial infarction (MI), but collectively they account for <20% of heritability, a phenomenon known as "missing heritability". Rare variants with large effects may account for a large portion of missing heritability. Genome-wide linkage studies of large families, with follow-up fine mapping and deep sequencing, are particularly effective in identifying rare variants with large effects. Here we show results from a genome-wide linkage scan for CAD in multiplex GeneQuest families with early-onset CAD and MI. Whole-genome genotyping was carried out with 408 markers that span the human genome at every 10 cM, and linkage analyses were performed using the affected relative pair analysis implemented in GENEHUNTER. Affected-only nonparametric linkage (NPL) analysis identified two novel CAD loci with highly significant evidence of linkage on chromosomes 3p25.1 (peak NPL = 5.49) and 3q29 (NPL = 6.84). We also identified four loci with suggestive linkage on 9q22.33, 9q34.11, 17p12, and 21q22.3 (NPL = 3.18-4.07). These results identify novel loci for CAD and provide a framework for fine mapping and deep sequencing to identify new susceptibility genes and novel variants associated with risk of CAD.

  13. On The Robustness of z=0-1 Galaxy Size Measurements Through Model and Non-Parametric Fits

    CERN Document Server

    Mosleh, Moein; Franx, Marijn

    2013-01-01

    We present the size-stellar mass relations of nearby (z=0.01-0.02) SDSS galaxies, for samples selected by color, morphology, Sersic index n, and specific star formation rate. Several commonly-employed size measurement techniques are used, including single Sersic fits, two-component Sersic models and a non-parametric method. Through simple simulations we show that the non-parametric and two-component Sersic methods provide the most robust effective radius measurements, while those based on single Sersic profiles are often overestimates, especially for massive red/early-type galaxies. Using our robust sizes, we show that for all sub-samples, the mass-size relations are shallow at low stellar masses and steepen above ~3-4 x 10^10 M_sun. The mass-size relations for galaxies classified as late-type, low-n, and star-forming are consistent with each other, while blue galaxies follow a somewhat steeper relation. The mass-size relations of early-type, high-n, red, and quiescent galaxies all agree with each other but ...

  14. Genomewide linkage analysis of stature in multiple populations reveals several regions with evidence of linkage to adult height.

    Science.gov (United States)

    Hirschhorn, J N; Lindgren, C M; Daly, M J; Kirby, A; Schaffner, S F; Burtt, N P; Altshuler, D; Parker, A; Rioux, J D; Platko, J; Gaudet, D; Hudson, T J; Groop, L C; Lander, E S

    2001-07-01

    Genomewide linkage analysis has been extremely successful at identification of the genetic variation underlying single-gene disorders. However, linkage analysis has been less successful for common human diseases and other complex traits in which multiple genetic and environmental factors interact to influence disease risk. We hypothesized that a highly heritable complex trait, in which the contribution of environmental factors was relatively limited, might be more amenable to linkage analysis. We therefore chose to study stature (adult height), for which heritability is approximately 75%-90% (Phillips and Matheny 1990; Carmichael and McGue 1995; Preece 1996; Silventoinen et al. 2000). We reanalyzed genomewide scans from four populations for which genotype and height data were available, using a variance-components method implemented in GENEHUNTER 2.0 (Pratt et al. 2000). The populations consisted of 408 individuals in 58 families from the Botnia region of Finland, 753 individuals in 183 families from other parts of Finland, 746 individuals in 179 families from Southern Sweden, and 420 individuals in 63 families from the Saguenay-Lac-St.-Jean region of Quebec. Four regions showed evidence of linkage to stature: 6q24-25, multipoint LOD score 3.85 at marker D6S1007 in Botnia (genomewide P [...]) [...] genetically tractable and provide insight into the genetic architecture of complex traits.

  15. Detection of QTL for Carcass Quality on Chromosome 6 by Exploiting Linkage and Linkage Disequilibrium in Hanwoo

    Directory of Open Access Journals (Sweden)

    J.-H. Lee

    2012-01-01

    The purpose of this study was to improve mapping power and resolution for the QTL influencing carcass quality in Hanwoo, which was previously detected on bovine chromosome (BTA) 6. A sample of 427 steers was chosen, the progeny of 45 Korean proven sires in the Hanwoo Improvement Center, Seosan, Korea. The samples were genotyped with the set of 2,535 SNPs on BTA6 embedded in the Illumina bovine 50k chip. A linkage disequilibrium variance component mapping (LDVCM) method, which exploits both linkage between sires and their steers and population-wide linkage disequilibrium, was applied to detect QTL for four carcass quality traits. Fifteen QTL were detected at the 0.1% comparison-wise level, of which five, three, five, and two QTL were associated with carcass weight (CWT), backfat thickness (BFT), longissimus dorsi muscle area (LMA), and marbling score (Marb), respectively. The number of QTL was greater than in our previous results, in which twelve QTL for carcass quality were detected on BTA6 in the same population by applying other linkage disequilibrium mapping approaches. One QTL for LMA was detected in the distal region (110,285,672 to 110,633,096 bp) with the most significant evidence for linkage (p < 10−5). Another QTL, detected in the proximal region (33,596,515 to 33,897,434 bp), was pleiotropic, i.e. influencing CWT, BFT, and LMA. Our results suggest that LDVCM is a good alternative method for fine-mapping in the detection and characterization of QTL.

  16. Non-parametric three-way mixed ANOVA with aligned rank tests.

    Science.gov (United States)

    Oliver-Rodríguez, Juan C; Wang, X T

    2015-02-01

    Research problems that require a non-parametric analysis of multifactor designs with repeated measures arise in the behavioural sciences. There is, however, a lack of available procedures in commonly used statistical packages. In the present study, a generalization of the aligned rank test for the two-way interaction is proposed for the analysis of the typical sources of variation in a three-way analysis of variance (ANOVA) with repeated measures. It can be implemented in the usual statistical packages. Its statistical properties are tested by using simulation methods with two sample sizes (n = 30 and n = 10) and three distributions (normal, exponential and double exponential). Results indicate substantial increases in power for non-normal distributions in comparison with the usual parametric tests. Similar levels of Type I error for both parametric and aligned rank ANOVA were obtained with non-normal distributions and large sample sizes. Degrees-of-freedom adjustments for Type I error control in small samples are proposed. The procedure is applied to a case study with 30 participants per group where it detects gender differences in linguistic abilities in blind children not shown previously by other methods.
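    The align-and-rank step that the three-way procedure generalizes can be sketched for a two-way interaction: the estimated main effects are stripped from the responses, and the residuals are ranked before the usual ANOVA is run on them. The 2×2 data below are a toy example with a pure crossover interaction; the full procedure repeats this alignment for each source of variation:

```python
import numpy as np
from scipy import stats

def aligned_ranks_interaction(y, a, b):
    """Align-and-rank step for an A x B interaction: strip the estimated
    main effects of factors A and B, then rank what remains, so that only
    interaction and error survive into the subsequent ANOVA on ranks."""
    y = np.asarray(y, dtype=float)
    grand = y.mean()
    a_eff = np.array([y[a == i].mean() - grand for i in np.unique(a)])
    b_eff = np.array([y[b == j].mean() - grand for j in np.unique(b)])
    aligned = y - a_eff[a] - b_eff[b]
    return stats.rankdata(aligned)

# 2x2 toy design with a crossover interaction and no main effects.
a = np.repeat([0, 0, 1, 1], 5)
b = np.tile(np.repeat([0, 1], 5), 2)
rng = np.random.default_rng(3)
y = np.where(a == b, 1.0, -1.0) + rng.normal(0, 0.1, 20)
r = aligned_ranks_interaction(y, a, b)
# Diagonal cells carry the high ranks, off-diagonal cells the low ranks:
print(r[a == b].mean() > r[a != b].mean())  # True
```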

  17. Non-parametric Bayesian mixture of sparse regressions with application towards feature selection for statistical downscaling

    Directory of Open Access Journals (Sweden)

    D. Das

    2014-04-01

    Climate projections simulated by Global Climate Models (GCMs) are often used for assessing the impacts of climate change. However, the relatively coarse resolution of GCM outputs often precludes their application to accurately assessing the effects of climate change on finer, regional-scale phenomena. Downscaling of climate variables from coarser to finer regional scales using statistical methods is often performed for regional climate projections. Statistical downscaling (SD) is based on the understanding that the regional climate is influenced by two factors: the large-scale climatic state and the regional or local features. A transfer-function approach to SD involves learning a regression model which relates these features (predictors) to a climatic variable of interest (the predictand), based on past observations. However, a single regression model is often not sufficient to describe complex dynamic relationships between the predictors and predictand. We focus on the covariate selection part of the transfer-function approach and propose a nonparametric Bayesian mixture of sparse regression models based on the Dirichlet process (DP) for simultaneous clustering and discovery of covariates within the clusters, while automatically finding the number of clusters. Sparse linear models are parsimonious and hence relatively more generalizable than non-sparse alternatives, and lend themselves to domain-relevant interpretation. Applications to synthetic data demonstrate the value of the new approach, and preliminary results on feature selection for statistical downscaling show that our method can lead to new insights.

  18. Nonparametric Problem-Space Clustering: Learning Efficient Codes for Cognitive Control Tasks

    Directory of Open Access Journals (Sweden)

    Domenico Maisto

    2016-02-01

    We present an information-theoretic method permitting one to find structure in a problem space (here, a spatial navigation domain) and cluster it in ways that are convenient for solving different classes of control problems, including planning a path to a goal from a known or an unknown location, achieving multiple goals, and exploring a novel environment. Our generative nonparametric approach, called the generative embedded Chinese restaurant process (geCRP), extends the family of Chinese restaurant process (CRP) models by introducing a parameterizable notion of distance (or kernel) between the states to be clustered together. By using different kernels, such as the conditional probability or joint probability of two states, the same geCRP method clusters the environment in ways that are more sensitive to different control-related information, such as goal, sub-goal and path information. We perform a series of simulations in three scenarios (an open space, a grid world with four rooms, and a maze with the same structure as the Tower of Hanoi) in order to illustrate the characteristics of the different clusters (obtained using different kernels) and their relative benefits for solving planning and control problems.
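    The plain CRP that geCRP extends is easy to simulate: each new item joins an existing cluster with probability proportional to the cluster's size, or opens a new cluster with probability proportional to a concentration parameter α. The kernel-based reweighting of existing clusters that defines geCRP is omitted in this sketch:

```python
import numpy as np

def crp_partition(n, alpha, rng):
    """Sample a partition of n items from a Chinese restaurant process:
    item i joins an existing cluster with probability proportional to its
    size, or opens a new one with probability proportional to alpha."""
    assignments = [0]
    counts = [1]
    for _ in range(1, n):
        probs = np.array(counts + [alpha], dtype=float)
        k = rng.choice(len(probs), p=probs / probs.sum())
        if k == len(counts):               # open a new cluster
            counts.append(1)
        else:                              # join an existing cluster
            counts[k] += 1
        assignments.append(k)
    return assignments, counts

rng = np.random.default_rng(4)
labels, sizes = crp_partition(100, alpha=2.0, rng=rng)
# The expected number of clusters grows like alpha * log(n), here around 8.
print(sum(sizes) == 100, len(sizes) >= 1)  # True True
```

The "rich get richer" seating rule is what lets the model infer the number of clusters from the data rather than fixing it in advance.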

  19. A Non-Parametric Delphi Approach to Foster Innovation Policy Debate in Spain

    Directory of Open Access Journals (Sweden)

    Juan Carlos Salazar-Elena

    2016-05-01

    The aim of this paper is to identify changes needed in Spain’s innovation policy to close the gap between its innovation results and those of other European countries, in pursuit of sustainable leadership. To do this, we apply the Delphi methodology to experts from academia, business, and government. To overcome the shortcomings of traditional descriptive methods, we develop an inferential analysis following a non-parametric bootstrap method, which enables us to identify important changes that should be implemented. Particularly interesting is the support found for improving the interconnections among the relevant agents of the innovation system (instead of focusing exclusively on the provision of knowledge and technological inputs through R&D activities), and the support found for “soft” policy instruments aimed at providing a homogeneous framework to assess the innovation capabilities of firms (e.g., for funding purposes). Attention to potential innovators among small and medium enterprises (SMEs) and traditional industries is particularly encouraged by the experts.

  20. Nonparametric 3D map of the IGM using the Lyman-alpha forest

    CERN Document Server

    Cisewski, Jessi; Freeman, Peter E; Genovese, Christopher R; Khandai, Nishikanta; Ozbek, Melih; Wasserman, Larry

    2014-01-01

    Visualizing the high-redshift Universe is difficult due to the dearth of available data; however, the Lyman-alpha forest provides a means to map the intergalactic medium at redshifts not accessible to large galaxy surveys. Large-scale structure surveys, such as the Baryon Oscillation Spectroscopic Survey (BOSS), have collected quasar (QSO) spectra that enable the reconstruction of HI density fluctuations. The data fall on a collection of lines defined by the lines of sight (LOS) of the QSOs, and a major issue with producing a 3D reconstruction is determining how to model the regions between the LOS. We present a method that produces a 3D map of this relatively uncharted portion of the Universe by employing local polynomial smoothing, a nonparametric methodology. The performance of the method is analyzed on simulated data that mimics the varying number of LOS expected in real data, and is then applied to a sample region selected from BOSS. The reconstruction is assessed by considering various feat...
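    Local polynomial smoothing in its simplest form (local linear, one dimension) amounts to a kernel-weighted least-squares line fit at each query point. The sketch below uses synthetic data and a Gaussian kernel, whereas the paper works in three dimensions with sparse LOS sampling:

```python
import numpy as np

def local_linear(x0, x, y, bandwidth):
    """Local linear smoother: at each query point, fit a weighted straight
    line with a Gaussian kernel and return its value at that point."""
    out = np.empty_like(np.atleast_1d(x0), dtype=float)
    for i, xq in enumerate(np.atleast_1d(x0)):
        w = np.exp(-0.5 * ((x - xq) / bandwidth) ** 2)
        X = np.column_stack([np.ones_like(x), x - xq])
        WX = X * w[:, None]                # weighted design matrix
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)
        out[i] = beta[0]                   # intercept = fitted value at xq
    return out

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 2 * np.pi, 400))
y = np.sin(x) + rng.normal(0, 0.2, x.size)
grid = np.linspace(0.5, 5.5, 50)           # interior points, away from edges
fit = local_linear(grid, x, y, bandwidth=0.3)
print(np.max(np.abs(fit - np.sin(grid))) < 0.2)  # True: the sine is recovered
```

Local linear fits are preferred over plain kernel averages because they correct the boundary bias that arises where sampling is one-sided, the common situation between sparse lines of sight.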