WorldWideScience

Sample records for nonparametric frontier model

  1. Stochastic semi-nonparametric frontier estimation of electricity distribution networks: Application of the StoNED method in the Finnish regulatory model

    International Nuclear Information System (INIS)

    Kuosmanen, Timo

    2012-01-01

    An electricity distribution network is a prime example of a natural local monopoly. In many countries, electricity distribution is regulated by the government. Many regulators apply frontier estimation techniques such as data envelopment analysis (DEA) or stochastic frontier analysis (SFA) as an integral part of their regulatory framework. While more advanced methods that combine a nonparametric frontier with a stochastic error term are known in the literature, in practice regulators continue to apply simplistic methods. This paper reports the main results of a project commissioned by the Finnish regulator to further develop the cost frontier estimation in its regulatory framework. The key objectives of the project were to integrate a stochastic SFA-style noise term into the nonparametric, axiomatic DEA-style cost frontier, and to better take into account the heterogeneity of firms and their operating environments. To achieve these objectives, a new method called stochastic nonparametric envelopment of data (StoNED) was examined. Based on the insights and experience gained in the empirical analysis using real data on the regulated networks, the Finnish regulator adopted the StoNED method from 2012 onwards.
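
The DEA-style nonparametric frontier mentioned above can be illustrated with its simplest relative, the free disposal hull (FDH). The sketch below is a minimal, pure-Python illustration with made-up data; it is not the StoNED method of the paper, only the basic benchmarking idea.

```python
# Input-oriented FDH efficiency: a minimal sketch of the nonparametric
# frontier idea underlying DEA-style benchmarking. Data are illustrative.

def fdh_efficiency(units, i):
    """Input-oriented FDH score for unit i: the largest feasible
    equiproportionate input contraction given the observed peers."""
    xi, yi = units[i]
    scores = []
    for xj, yj in units:
        if yj >= yi:  # a peer must produce at least as much output
            # contraction of i's inputs needed to dominate peer j in every input
            scores.append(max(a / b for a, b in zip(xj, xi)))
    return min(scores)  # best (smallest) contraction over dominating peers

# (inputs, output) for three hypothetical distribution operators
units = [((4.0, 2.0), 10.0), ((2.0, 3.0), 10.0), ((6.0, 6.0), 9.0)]
scores = [fdh_efficiency(units, i) for i in range(len(units))]
```

A score of 1.0 marks an efficient unit; a score of 0.5 means the unit could, by FDH logic, produce its output with half of every input.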

  2. Measuring energy performance with sectoral heterogeneity: A non-parametric frontier approach

    International Nuclear Information System (INIS)

    Wang, H.; Ang, B.W.; Wang, Q.W.; Zhou, P.

    2017-01-01

    Evaluating economy-wide energy performance is an integral part of assessing the effectiveness of a country's energy efficiency policy. The non-parametric frontier approach has been widely used for this purpose. This paper proposes an extended non-parametric frontier approach to studying economy-wide energy efficiency and productivity performance by accounting for sectoral heterogeneity. Relevant techniques from index number theory are incorporated to quantify the driving forces behind changes in the economy-wide energy productivity index. The proposed approach facilitates flexible modelling of different sectors' production processes and helps to examine the sectors' impact on aggregate energy performance. A case study of China's economy-wide energy efficiency and productivity performance in its 11th five-year plan period (2006–2010) is presented. It is found that sectoral heterogeneities in terms of energy performance are significant in China. Meanwhile, China's economy-wide energy productivity increased slightly during the study period, driven mainly by technical efficiency improvement. A number of other findings are also reported. - Highlights: • We model economy-wide energy performance by considering sectoral heterogeneity. • The proposed approach can identify sectors' impact on aggregate energy performance. • Obvious sectoral heterogeneities are identified in evaluating China's energy performance.

  3. European regional efficiency and geographical externalities: a spatial nonparametric frontier analysis

    Science.gov (United States)

    Ramajo, Julián; Cordero, José Manuel; Márquez, Miguel Ángel

    2017-10-01

    This paper analyses region-level technical efficiency in nine European countries over the 1995-2007 period. We propose the application of a nonparametric conditional frontier approach to account for the presence of heterogeneous conditions in the form of geographical externalities. Such environmental factors are beyond the control of regional authorities, but may affect the production function. Therefore, they need to be considered in the frontier estimation. Specifically, a spatial autoregressive term is included as an external conditioning factor in a robust order-m model. Thus we can test the hypothesis of non-separability (the external factor impacts both the input-output space and the distribution of efficiencies), demonstrating the existence of significant global interregional spillovers into the production process. Our findings show that geographical externalities affect both the frontier level and the probability of being more or less efficient. Specifically, the results indicate that the spatial lag variable has an inverted U-shaped non-linear impact on the performance of regions. This finding can be interpreted as a differential effect of interregional spillovers depending on the size of the neighboring economies: positive externalities for small values, possibly related to agglomeration economies, and negative externalities for high values, indicating the possibility of production congestion. Additionally, evidence of a strong geographic pattern of European regional efficiency is reported, and the levels of technical efficiency are found to have converged during the period under analysis.
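
The robust order-m frontier used above is typically approximated by Monte Carlo, following the original proposal of Cazals, Florens and Simar. The sketch below uses a hypothetical single-input, single-output sample; it illustrates the unconditional order-m benchmark only, not the authors' conditional estimator with the spatial lag.

```python
import random

def order_m_benchmark(data, x0, m=10, B=2000, seed=1):
    """Monte Carlo approximation of the order-m output benchmark:
    the expected maximum output among m peers (drawn with replacement)
    that use no more input than x0."""
    rng = random.Random(seed)
    peers = [y for x, y in data if x <= x0]  # feasible comparison set
    total = 0.0
    for _ in range(B):
        total += max(rng.choice(peers) for _ in range(m))
    return total / B

# hypothetical sample lying on the frontier y = sqrt(x)
data = [(x / 10, (x / 10) ** 0.5) for x in range(1, 101)]
bench = order_m_benchmark(data, x0=5.0)
```

Because the benchmark averages over finite samples of m peers, it sits below the full-frontier maximum, which is what makes order-m robust to outliers.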

  4. Frontier Analysis

    DEFF Research Database (Denmark)

    Assaf, A. George; Josiassen, Alexander

    2016-01-01

    This article presents a comprehensive review of frontier studies in the tourism literature. We discuss the main advantages and disadvantages of the various frontier approaches, in particular, the nonparametric and parametric frontier approaches. The study further differentiates between micro...

  5. The Use of Nonparametric Kernel Regression Methods in Econometric Production Analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard

    This PhD thesis addresses one of the fundamental problems in applied econometric analysis, namely the econometric estimation of regression functions. The conventional approach to regression analysis is the parametric approach, which requires the researcher to specify the form of the regression … and nonparametric estimations of production functions in order to evaluate the optimal firm size. The second paper discusses the use of parametric and nonparametric regression methods to estimate panel data regression models. The third paper analyses production risk, price uncertainty, and farmers' risk preferences within a nonparametric panel data regression framework. The fourth paper analyses the technical efficiency of dairy farms with environmental output using nonparametric kernel regression in a semiparametric stochastic frontier analysis. The results provided in this PhD thesis show that nonparametric …

  6. Quality frontier of electricity distribution: Supply security, best practices, and underground cabling in Finland

    International Nuclear Information System (INIS)

    Saastamoinen, Antti; Kuosmanen, Timo

    2016-01-01

    Electricity distribution is a prime example of a local monopoly. In most countries, the costs of electricity distribution operators are regulated by the government. However, cost regulation may create adverse incentives to compromise the quality of service. To avoid this, cost regulation is often amended with quality incentives. This study applies the theory and methods of productivity analysis to model the frontier of service quality. A semi-nonparametric estimation method is developed which does not assume any particular functional form for the quality frontier but can accommodate stochastic noise and heteroscedasticity. The empirical part of our paper examines how underground cabling and location affect interruption costs. As expected, a higher proportion of underground cabling decreases the level of interruption costs. The effects of cabling and location on the variance of performance are also considered. In particular, location is found to be a significant source of heteroscedasticity in the interruption costs. Finally, the proposed quality frontier benchmark is compared to the current practice of the Finnish regulation system. The proposed quality frontier is found to provide a more meaningful and stable basis for setting quality targets than the average-practice benchmarks currently in use. - Highlights: • Cost regulation may create adverse incentives to lower service quality. • We model the frontier of service quality using a semi-nonparametric approach. • Our nonparametric frontier accounts for stochastic noise and heteroscedasticity. • We estimate how underground cabling and location affect interruption costs. • We compare our quality frontier with the current quality regulation in Finland.

  7. Nonparametric Bayesian Modeling of Complex Networks

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Mørup, Morten

    2013-01-01

    Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks: Using an infinite mixture model as a running example, we go through the steps of deriving the model as an infinite limit of a finite parametric model, inferring the model parameters by Markov chain Monte Carlo, and checking the model's fit and predictive performance. We explain how advanced nonparametric models …
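
The infinite-limit construction of a mixture model induces a prior over partitions that can be sampled with the Chinese restaurant process. The sketch below is a minimal, illustrative sampler, not code from the article.

```python
import random

def chinese_restaurant_process(n, alpha, seed=0):
    """Sample cluster assignments from a CRP: the partition implied by the
    infinite limit of a finite mixture model with concentration alpha."""
    rng = random.Random(seed)
    counts = []   # number of customers at each table (cluster sizes)
    labels = []
    for i in range(n):
        # existing table k with prob counts[k]/(i+alpha); new table with prob alpha/(i+alpha)
        r = rng.random() * (i + alpha)
        acc, table = 0.0, len(counts)
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                table = k
                break
        if table == len(counts):
            counts.append(0)   # open a new table
        counts[table] += 1
        labels.append(table)
    return labels

labels = chinese_restaurant_process(100, alpha=2.0)
```

Larger alpha yields more clusters on average; the number of clusters is inferred rather than fixed in advance, which is the point of the nonparametric approach.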

  8. Nonparametric correlation models for portfolio allocation

    DEFF Research Database (Denmark)

    Aslanidis, Nektarios; Casas, Isabel

    2013-01-01

    This article proposes time-varying nonparametric and semiparametric estimators of the conditional cross-correlation matrix in the context of portfolio allocation. Simulation results show that the nonparametric and semiparametric models are best in DGPs with substantial variability or structural … currencies. Results show the nonparametric model generally dominates the others when evaluated in-sample. However, the semiparametric model is best for out-of-sample analysis.

  9. Using Spline Regression in Semi-Parametric Stochastic Frontier Analysis: An Application to Polish Dairy Farms

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    of specifying an unsuitable functional form and thus, model misspecification and biased parameter estimates. Given these problems of the DEA and the SFA, Fan, Li and Weersink (1996) proposed a semi-parametric stochastic frontier model that estimates the production function (frontier) by non… ), Kumbhakar et al. (2007), and Henningsen and Kumbhakar (2009). The aim of this paper and its main contribution to the existing literature is the estimation of semi-parametric stochastic frontier models using a different non-parametric estimation technique: spline regression (Ma et al. 2011). We apply … efficiency of Polish dairy farms contributes to the insight into this dynamic process. Furthermore, we compare and evaluate the results of this spline-based semi-parametric stochastic frontier model with results of other semi-parametric stochastic frontier models and of traditional parametric stochastic …

  10. Nonparametric Transfer Function Models

    Science.gov (United States)

    Liu, Jun M.; Chen, Rong; Yao, Qiwei

    2009-01-01

    In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between ‘input’ and ‘output’ time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modeling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example. PMID:20628584
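
The smooth transfer function with unknown functional form can be estimated by kernel smoothing. Below is a minimal Nadaraya-Watson sketch on synthetic, noiseless data; the paper's joint estimation with the ARMA noise parameters is omitted for brevity.

```python
import math

def nw_smooth(x_in, y_out, grid, h):
    """Nadaraya-Watson estimate of a smooth transfer function f at the
    points in `grid`, using a Gaussian kernel with bandwidth h."""
    out = []
    for g in grid:
        w = [math.exp(-0.5 * ((g - x) / h) ** 2) for x in x_in]
        s = sum(w)
        out.append(sum(wi * yi for wi, yi in zip(w, y_out)) / s)
    return out

# synthetic 'input' series on [0, 2] and a nonlinear noiseless response
x = [i / 20 for i in range(41)]
y = [math.sin(xi) for xi in x]
fhat = nw_smooth(x, y, grid=[1.0], h=0.1)
```

With correlated noise, weighting by the inverse noise covariance (as the paper's joint ARMA estimation effectively does) improves efficiency over this plain smoother.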

  11. A nonparametric mixture model for cure rate estimation.

    Science.gov (United States)

    Peng, Y; Dear, K B

    2000-03-01

    Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.
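
A simple nonparametric view of the cure fraction is the plateau of the Kaplan-Meier survival curve. The sketch below, with made-up data, illustrates that idea only; it is not the authors' EM-based mixture estimator with proportional-hazards covariate effects.

```python
def km_survival(times, events):
    """Kaplan-Meier survival curve (event=1, censored=0); the height of
    its final plateau is a crude nonparametric cure-fraction estimate."""
    data = sorted(zip(times, events))
    s, curve = 1.0, []
    at_risk = len(data)
    for t, d in data:
        if d:  # an observed event reduces the survival estimate
            s *= (at_risk - 1) / at_risk
        at_risk -= 1
        curve.append((t, s))
    return curve

# illustrative data: all events occur early, heavy late censoring
times  = [1, 2, 2, 3, 5, 8, 9, 10, 10, 10]
events = [1, 1, 1, 1, 1, 0, 0,  0,  0,  0]
plateau = km_survival(times, events)[-1][1]
```

Here half the cohort never experiences the event during follow-up, so the curve levels off at 0.5, the naive cure-rate estimate.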

  12. Bayesian nonparametric hierarchical modeling.

    Science.gov (United States)

    Dunson, David B

    2009-04-01

    In biomedical research, hierarchical models are very widely used to accommodate dependence in multivariate and longitudinal data and for borrowing of information across data from different sources. A primary concern in hierarchical modeling is sensitivity to parametric assumptions, such as linearity and normality of the random effects. Parametric assumptions on latent variable distributions can be challenging to check and are typically unwarranted, given available prior knowledge. This article reviews some recent developments in Bayesian nonparametric methods motivated by complex, multivariate and functional data collected in biomedical studies. The author provides a brief review of flexible parametric approaches relying on finite mixtures and latent class modeling. Dirichlet process mixture models are motivated by the need to generalize these approaches to avoid assuming a fixed finite number of classes. Focusing on an epidemiology application, the author illustrates the practical utility and potential of nonparametric Bayes methods.
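
The Dirichlet process mixtures reviewed above rest on the stick-breaking construction of the random mixing measure. A minimal, illustrative sampler of the stick-breaking weights:

```python
import random

def stick_breaking(alpha, tol=1e-8, seed=0):
    """Stick-breaking weights of a Dirichlet process:
    w_k = v_k * prod_{j<k}(1 - v_j) with v_k ~ Beta(1, alpha).
    Truncated once the remaining stick length falls below tol."""
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    while remaining > tol:
        v = rng.betavariate(1.0, alpha)
        weights.append(remaining * v)  # break off a piece of the stick
        remaining *= (1.0 - v)
    return weights

w = stick_breaking(alpha=3.0)
```

Small alpha concentrates mass on a few components; large alpha spreads it over many, which is how the DP avoids fixing the number of classes in advance.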

  13. Measuring economy-wide energy efficiency performance: A parametric frontier approach

    International Nuclear Information System (INIS)

    Zhou, P.; Ang, B.W.; Zhou, D.Q.

    2012-01-01

    This paper proposes a parametric frontier approach to estimating economy-wide energy efficiency performance from a production efficiency point of view. It uses the Shephard energy distance function to define an energy efficiency index and adopts the stochastic frontier analysis technique to estimate the index. A case study of measuring the economy-wide energy efficiency performance of a sample of OECD countries using the proposed approach is presented. It is found that the proposed parametric frontier approach has higher discriminating power in energy efficiency performance measurement compared to its nonparametric frontier counterparts.

  14. A nonparametric dynamic additive regression model for longitudinal data

    DEFF Research Database (Denmark)

    Martinussen, T.; Scheike, T. H.

    2000-01-01

    dynamic linear models, estimating equations, least squares, longitudinal data, nonparametric methods, partly conditional mean models, time-varying-coefficient models

  15. Parametric and Non-Parametric System Modelling

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg

    1999-01-01

    the focus is on combinations of parametric and non-parametric methods of regression. This combination can be in terms of additive models where e.g. one or more non-parametric term is added to a linear regression model. It can also be in terms of conditional parametric models where the coefficients...... considered. It is shown that adaptive estimation in conditional parametric models can be performed by combining the well known methods of local polynomial regression and recursive least squares with exponential forgetting. The approach used for estimation in conditional parametric models also highlights how...... networks is included. In this paper, neural networks are used for predicting the electricity production of a wind farm. The results are compared with results obtained using an adaptively estimated ARX-model. Finally, two papers on stochastic differential equations are included. In the first paper, among...

  16. A Structural Labor Supply Model with Nonparametric Preferences

    NARCIS (Netherlands)

    van Soest, A.H.O.; Das, J.W.M.; Gong, X.

    2000-01-01

    Nonparametric techniques are usually seen as a statistical device for data description and exploration, not as a tool for estimating models with a richer economic structure, which are often required for policy analysis. This paper presents an example where nonparametric flexibility can be attained

  17. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  18. Nonparametric Mixture Models for Supervised Image Parcellation.

    Science.gov (United States)

    Sabuncu, Mert R; Yeo, B T Thomas; Van Leemput, Koen; Fischl, Bruce; Golland, Polina

    2009-09-01

    We present a nonparametric, probabilistic mixture model for the supervised parcellation of images. The proposed model yields segmentation algorithms conceptually similar to the recently developed label fusion methods, which register a new image with each training image separately. Segmentation is achieved via the fusion of transferred manual labels. We show that in our framework various settings of a model parameter yield algorithms that use image intensity information differently in determining the weight of a training subject during fusion. One particular setting computes a single, global weight per training subject, whereas another setting uses locally varying weights when fusing the training data. The proposed nonparametric parcellation approach capitalizes on recently developed fast and robust pairwise image alignment tools. The use of multiple registrations allows the algorithm to be robust to occasional registration failures. We report experiments on 39 volumetric brain MRI scans with expert manual labels for the white matter, cerebral cortex, ventricles and subcortical structures. The results demonstrate that the proposed nonparametric segmentation framework yields significantly better segmentation than state-of-the-art algorithms.
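
The locally varying weights described above can be sketched as an intensity-weighted vote at a single voxel. The atlas data, the intensity bandwidth (10.0), and the tissue labels below are all hypothetical.

```python
import math

def fuse_labels(target_intensity, atlas):
    """Intensity-weighted label fusion at one voxel: each registered
    training image votes for its label, weighted by a Gaussian similarity
    between its intensity and the target's (a locally varying weight)."""
    votes = {}
    for intensity, label in atlas:
        w = math.exp(-((target_intensity - intensity) ** 2) / (2 * 10.0 ** 2))
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

# hypothetical atlas: (intensity, tissue label) pairs after registration
atlas = [(30, "csf"), (35, "csf"), (80, "wm"), (85, "wm"), (82, "wm")]
label = fuse_labels(78, atlas)
```

A single global weight per training subject corresponds to computing one similarity over the whole image instead of per voxel; the model parameter in the paper interpolates between these regimes.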

  19. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Science.gov (United States)

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed the state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural network, Gaussian process, and support vector regression, to predict market impact cost accurately and to provide the predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data from the US stock market from Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model such as the I-star model in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives in reducing transaction costs by considerably improving in prediction performance.
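
As a minimal stand-in for the nonparametric regressors used in the paper (neural networks, Gaussian processes, SVR), here is a k-nearest-neighbour regression sketch. The (order size, impact cost) pairs are hypothetical, not from the Bloomberg data.

```python
def knn_predict(train, x, k=3):
    """k-nearest-neighbour regression: average the responses of the k
    training points closest to x. A simple nonparametric predictor."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

# hypothetical (order size, market impact cost in bps) pairs
train = [(100, 1.0), (200, 1.4), (400, 2.1), (800, 3.0), (1600, 4.4)]
pred = knn_predict(train, 500)
```

Like the models in the paper, k-NN imposes no functional form on the size-impact relation; unlike them, it does not extrapolate beyond the observed range.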

  20. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Directory of Open Access Journals (Sweden)

    Saerom Park

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed the state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural network, Gaussian process, and support vector regression, to predict market impact cost accurately and to provide the predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data from the US stock market from Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model such as the I-star model in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives in reducing transaction costs by considerably improving in prediction performance.

  1. Network structure exploration via Bayesian nonparametric models

    International Nuclear Information System (INIS)

    Chen, Y; Wang, X L; Xiang, X; Tang, B Z; Bu, J Z

    2015-01-01

    Complex networks provide a powerful mathematical representation of complex systems in nature and society. To understand complex networks, it is crucial to explore their internal structures, also called structural regularities. The task of network structure exploration is to determine how many groups there are in a complex network and how to group the nodes of the network. Most existing structure exploration methods need to specify either a group number or a certain type of structure when they are applied to a network. In the real world, however, the group number and the type of structure that a network has are usually unknown in advance. To explore structural regularities in complex networks automatically, without any prior knowledge of the group number or the type of structure, we use Bayesian nonparametric theory to extend a probabilistic mixture model that can handle networks with any type of structure but requires a pre-specified group number. We thus propose a novel Bayesian nonparametric model, called the Bayesian nonparametric mixture (BNPM) model. Experiments conducted on a large number of networks with different structures show that the BNPM model is able to explore structural regularities in networks automatically with a stable, state-of-the-art performance. (paper)

  2. Quantal Response: Nonparametric Modeling

    Science.gov (United States)

    2017-01-01

    … capture the behavior of observed phenomena. Higher-order polynomial and finite-dimensional spline basis models allow for more complicated responses … flexibility, as these are nonparametric (not constrained to any particular functional form). These should be useful in identifying nonstandard behavior via … the deviance Δ = −2 log(L_reduced / L_full) is defined in terms of the likelihood function L. For normal error, L_full = 1, and based on Eq. A-2, we have log …
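
The deviance statistic quoted above can be made concrete with a toy binomial quantal-response (dose-response) example. The data and the full-model coefficients below are made up, and the full-model coefficients are not maximum-likelihood estimates, so the resulting deviance is only indicative.

```python
import math

def binom_loglik(doses, n, deaths, b0, b1):
    """Binomial log-likelihood of a logistic dose-response model
    P(response | dose d) = 1 / (1 + exp(-(b0 + b1*d)))."""
    ll = 0.0
    for d, ni, yi in zip(doses, n, deaths):
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * d)))
        ll += yi * math.log(p) + (ni - yi) * math.log(1 - p)
    return ll

# deviance Delta = -2 log(L_reduced / L_full): no-dose-effect model (b1 = 0,
# intercept at the pooled response rate 19/40) vs. a dose-effect model
doses, n, deaths = [0, 1, 2, 3], [10, 10, 10, 10], [1, 3, 6, 9]
ll_full = binom_loglik(doses, n, deaths, -2.0, 1.4)       # crude guesses
ll_reduced = binom_loglik(doses, n, deaths, math.log(19 / 21), 0.0)
deviance = -2 * (ll_reduced - ll_full)
```

A large deviance indicates the dose term materially improves the fit; under the reduced model it is asymptotically chi-squared with one degree of freedom.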

  3. Nonparametric estimation in models for unobservable heterogeneity

    OpenAIRE

    Hohmann, Daniel

    2014-01-01

    Nonparametric models which allow for data with unobservable heterogeneity are studied. The first publication introduces new estimators and their asymptotic properties for conditional mixture models. The second publication considers estimation of a function from noisy observations of its Radon transform in a Gaussian white noise model.

  4. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    Science.gov (United States)

    Zhao, Zhibiao

    2011-06-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.

  5. Nonparametric Bayes Modeling of Multivariate Categorical Data.

    Science.gov (United States)

    Dunson, David B; Xing, Chuanhua

    2012-01-01

    Modeling of multivariate unordered categorical (nominal) data is a challenging problem, particularly in high dimensions and cases in which one wishes to avoid strong assumptions about the dependence structure. Commonly used approaches rely on the incorporation of latent Gaussian random variables or parametric latent class models. The goal of this article is to develop a nonparametric Bayes approach, which defines a prior with full support on the space of distributions for multiple unordered categorical variables. This support condition ensures that we are not restricting the dependence structure a priori. We show this can be accomplished through a Dirichlet process mixture of product multinomial distributions, which is also a convenient form for posterior computation. Methods for nonparametric testing of violations of independence are proposed, and the methods are applied to model positional dependence within transcription factor binding motifs.

  6. Testing for constant nonparametric effects in general semiparametric regression models with interactions

    KAUST Repository

    Wei, Jiawei; Carroll, Raymond J.; Maity, Arnab

    2011-01-01

    We consider the problem of testing for a constant nonparametric effect in a general semi-parametric regression model when there is the potential for interaction between the parametrically and nonparametrically modeled variables. The work

  7. Bioprocess iterative batch-to-batch optimization based on hybrid parametric/nonparametric models.

    Science.gov (United States)

    Teixeira, Ana P; Clemente, João J; Cunha, António E; Carrondo, Manuel J T; Oliveira, Rui

    2006-01-01

    This paper presents a novel method for iterative batch-to-batch dynamic optimization of bioprocesses. The relationship between process performance and control inputs is established by means of hybrid grey-box models combining parametric and nonparametric structures. The bioreactor dynamics are defined by material balance equations, whereas the cell population subsystem is represented by an adjustable mixture of nonparametric and parametric models. Thus optimizations are possible without detailed mechanistic knowledge concerning the biological system. A clustering technique is used to supervise the reliability of the nonparametric subsystem during the optimization. Whenever the nonparametric outputs are unreliable, the objective function is penalized. The technique was evaluated with three simulation case studies. The overall results suggest that the convergence to the optimal process performance may be achieved after a small number of batches. The model unreliability risk constraint along with sampling scheduling are crucial to minimize the experimental effort required to attain a given process performance. In general terms, it may be concluded that the proposed method broadens the application of the hybrid parametric/nonparametric modeling technique to "newer" processes with higher potential for optimization.

  8. Nonparametric Identification and Estimation of Finite Mixture Models of Dynamic Discrete Choices

    OpenAIRE

    Hiroyuki Kasahara; Katsumi Shimotsu

    2006-01-01

    In dynamic discrete choice analysis, controlling for unobserved heterogeneity is an important issue, and finite mixture models provide flexible ways to account for unobserved heterogeneity. This paper studies nonparametric identifiability of type probabilities and type-specific component distributions in finite mixture models of dynamic discrete choices. We derive sufficient conditions for nonparametric identification for various finite mixture models of dynamic discrete choices used in appli...

  9. Alternative Approaches to Technical Efficiency Estimation in the Stochastic Frontier Model

    OpenAIRE

    Acquah, H. de-Graft; Onumah, E. E.

    2014-01-01

    Estimating the stochastic frontier model and calculating the technical efficiency of decision-making units are of great importance in applied production economics. This paper estimates technical efficiency from the stochastic frontier model using the Jondrow et al. and the Battese and Coelli approaches. In order to compare the alternative methods, simulated data with sample sizes of 60 and 200 are generated from a stochastic frontier model commonly applied to agricultural firms. Simulated data is employed to co...
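
The Jondrow et al. (JLMS) approach mentioned above gives a closed-form point estimate of inefficiency in the normal/half-normal stochastic frontier model. The sketch below uses hypothetical parameter values; in practice the variance parameters come from a maximum-likelihood fit.

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def jlms(eps, sigma_u, sigma_v):
    """JLMS point estimator E[u | eps] for the normal/half-normal
    production frontier with composed error eps = v - u."""
    sigma = math.sqrt(sigma_u ** 2 + sigma_v ** 2)
    lam = sigma_u / sigma_v
    sigma_star = sigma_u * sigma_v / sigma
    z = eps * lam / sigma
    return sigma_star * (norm_pdf(z) / (1 - norm_cdf(z)) - z)

# hypothetical residual and variance parameters
u_hat = jlms(eps=-0.3, sigma_u=0.4, sigma_v=0.2)
te = math.exp(-u_hat)  # common shortcut for technical efficiency in (0, 1)
```

The Battese-Coelli estimator instead computes E[exp(-u) | eps] directly; exp(-E[u|eps]) used here is a widespread approximation to it.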

  10. Testing for constant nonparametric effects in general semiparametric regression models with interactions

    KAUST Repository

    Wei, Jiawei

    2011-07-01

    We consider the problem of testing for a constant nonparametric effect in a general semi-parametric regression model when there is the potential for interaction between the parametrically and nonparametrically modeled variables. The work was originally motivated by a unique testing problem in genetic epidemiology (Chatterjee, et al., 2006) that involved a typical generalized linear model but with an additional term reminiscent of the Tukey one-degree-of-freedom formulation, and their interest was in testing for main effects of the genetic variables, while gaining statistical power by allowing for a possible interaction between genes and the environment. Later work (Maity, et al., 2009) involved the possibility of modeling the environmental variable nonparametrically, but they focused on whether there was a parametric main effect for the genetic variables. In this paper, we consider the complementary problem, where the interest is in testing for the main effect of the nonparametrically modeled environmental variable. We derive a generalized likelihood ratio test for this hypothesis, show how to implement it, and provide evidence that our method can improve statistical power when compared to standard partially linear models with main effects only. We use the method for the primary purpose of analyzing data from a case-control study of colorectal adenoma.

  11. Nonparametric Mixture of Regression Models.

    Science.gov (United States)

    Huang, Mian; Li, Runze; Wang, Shaoli

    2013-07-01

    Motivated by an analysis of US house price index data, we propose nonparametric finite mixture of regression models. We study the identifiability issue of the proposed models, and develop an estimation procedure by employing kernel regression. We further systematically study the sampling properties of the proposed estimators, and establish their asymptotic normality. A modified EM algorithm is proposed to carry out the estimation procedure. We show that our algorithm preserves the ascent property of the EM algorithm in an asymptotic sense. Monte Carlo simulations are conducted to examine the finite sample performance of the proposed estimation procedure. The proposed methodology is illustrated with an empirical analysis of the US house price index data.
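
    The kernel-weighted EM step at a single grid point can be sketched as follows (a minimal two-component illustration in the spirit of the abstract, not the authors' exact algorithm; the function name, initialisation, and bandwidth choice are ours):

```python
import numpy as np

def local_em_mixture(x, y, x0, h, n_comp=2, iters=200):
    """Kernel-weighted EM at a grid point x0, estimating the local
    mixing proportions, component means, and component scales of a
    nonparametric finite mixture of regressions (illustrative sketch)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)              # Gaussian kernel weights
    mu = np.quantile(y, np.linspace(0.1, 0.9, n_comp))  # spread-out initial means
    pi = np.full(n_comp, 1.0 / n_comp)
    sig = np.full(n_comp, np.std(y))
    for _ in range(iters):
        # E-step: posterior responsibilities (normal densities up to a constant)
        dens = pi * np.exp(-0.5 * ((y[:, None] - mu) / sig) ** 2) / sig
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: kernel-weighted parameter updates
        wr = w[:, None] * r
        pi = wr.sum(axis=0) / w.sum()
        mu = (wr * y[:, None]).sum(axis=0) / wr.sum(axis=0)
        sig = np.sqrt((wr * (y[:, None] - mu) ** 2).sum(axis=0) / wr.sum(axis=0))
    return pi, mu, sig
```

    Repeating this over a grid of x0 values traces out the component mean functions; the paper's modified EM addresses the label-switching and ascent issues that this naive per-point version ignores.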

  12. Bayesian Non-Parametric Mixtures of GARCH(1,1) Models

    Directory of Open Access Journals (Sweden)

    John W. Lau

    2012-01-01

    Traditional GARCH models describe volatility levels that evolve smoothly over time, generated by a single GARCH regime. However, nonstationary time series data may exhibit abrupt changes in volatility, suggesting changes in the underlying GARCH regimes. Further, the number and times of regime changes are not always obvious. This article outlines a nonparametric mixture of GARCH models that is able to estimate the number and time of volatility regime changes by mixing over the Poisson-Kingman process. This process, a generalisation of the Dirichlet process typically used in nonparametric models for time-dependent data, provides a richer clustering structure, and its application to time series data is novel. Inference is Bayesian, and a Markov chain Monte Carlo algorithm to explore the posterior distribution is described. The methodology is illustrated on the Standard and Poor's 500 financial index.

  13. Nonparametric combinatorial sequence models.

    Science.gov (United States)

    Wauthier, Fabian L; Jordan, Michael I; Jojic, Nebojsa

    2011-11-01

    This work considers biological sequences that exhibit combinatorial structures in their composition: groups of positions of the aligned sequences are "linked" and covary as one unit across sequences. If multiple such groups exist, complex interactions can emerge between them. Sequences of this kind arise frequently in biology but methodologies for analyzing them are still being developed. This article presents a nonparametric prior on sequences which allows combinatorial structures to emerge and which induces a posterior distribution over factorized sequence representations. We carry out experiments on three biological sequence families which indicate that combinatorial structures are indeed present and that combinatorial sequence models can more succinctly describe them than simpler mixture models. We conclude with an application to MHC binding prediction which highlights the utility of the posterior distribution over sequence representations induced by the prior. By integrating out the posterior, our method compares favorably to leading binding predictors.

  14. The nonparametric bootstrap for the current status model

    NARCIS (Netherlands)

    Groeneboom, P.; Hendrickx, K.

    2017-01-01

    It has been proved that direct bootstrapping of the nonparametric maximum likelihood estimator (MLE) of the distribution function in the current status model leads to inconsistent confidence intervals. We show that bootstrapping of functionals of the MLE can however be used to produce valid

  15. Nonparametric modeling of dynamic functional connectivity in fmri data

    DEFF Research Database (Denmark)

    Nielsen, Søren Føns Vind; Madsen, Kristoffer H.; Røge, Rasmus

    2015-01-01

    ...dynamic changes. The existing approaches modeling dynamic connectivity have primarily been based on time-windowing the data and k-means clustering. We propose a nonparametric generative model for dynamic FC in fMRI that does not rely on specifying window lengths and number of dynamic states. Rooted...

  16. Nonparametric NAR-ARCH Modelling of Stock Prices by the Kernel Methodology

    Directory of Open Access Journals (Sweden)

    Mohamed Chikhi

    2018-02-01

    This paper analyses the cyclical behaviour of the Orange stock price listed on the French stock exchange from 01/03/2000 to 02/02/2017 by testing for nonlinearities through a class of conditional heteroscedastic nonparametric models. The linearity and Gaussianity assumptions are rejected for Orange stock returns, and informational shocks have transitory effects on returns and volatility. The forecasting results show that Orange stock prices are short-term predictable and that the nonparametric NAR-ARCH model outperforms the parametric MA-APARCH model for short horizons. The estimates of this model are also better than the predictions of the random walk model. This finding provides evidence for a weak form of inefficiency in the Paris stock market with limited rationality, giving rise to arbitrage opportunities.
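
    As a rough illustration of the kernel machinery such NAR models build on, a one-lag Nadaraya-Watson estimate of the conditional mean might look like this (a minimal sketch, not the paper's specification; the function name and Gaussian kernel are our choices):

```python
import numpy as np

def nar_forecast(returns, h):
    """One-step-ahead Nadaraya-Watson forecast of r_{t+1} given r_t:
    a kernel-weighted average of past one-step transitions,
    with bandwidth h controlling the degree of smoothing."""
    r = np.asarray(returns, dtype=float)
    x, y = r[:-1], r[1:]                          # (lagged value, next value) pairs
    w = np.exp(-0.5 * ((x - r[-1]) / h) ** 2)     # kernel weights around the last obs
    return float((w * y).sum() / w.sum())
```

    The full NAR-ARCH model applies the same idea to both the conditional mean and the conditional variance of the returns.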

  17. Genomic breeding value estimation using nonparametric additive regression models

    Directory of Open Access Journals (Sweden)

    Solberg Trygve

    2009-01-01

    Genomic selection refers to the use of genomewide dense markers for breeding value estimation and subsequently for selection. The main challenge of genomic breeding value estimation is the estimation of many effects from a limited number of observations. Bayesian methods have been proposed to successfully cope with these challenges. As an alternative class of models, non- and semiparametric models were recently introduced. The present study investigated the ability of nonparametric additive regression models to predict genomic breeding values. The genotypes were modelled for each marker or pair of flanking markers (i.e. the predictors) separately. The nonparametric functions for the predictors were estimated simultaneously using additive model theory, applying a binomial kernel. The optimal degree of smoothing was determined by bootstrapping. A mutation-drift-balance simulation was carried out. The breeding values of the last generation (genotyped) were predicted using data from the next-to-last generation (genotyped and phenotyped). The results show moderate to high accuracies of the predicted breeding values. A predictor-specific determination of the degree of smoothing increased the accuracy.

  18. Modeling stochastic frontier based on vine copulas

    Science.gov (United States)

    Constantino, Michel; Candido, Osvaldo; Tabak, Benjamin M.; da Costa, Reginaldo Brito

    2017-11-01

    This article models a production function and analyzes the technical efficiency of listed companies in the United States, Germany and England between 2005 and 2012 based on the vine copula approach. Traditional estimates of the stochastic frontier assume that the data are multivariate normally distributed and that there is no source of asymmetry. The proposed method based on vine copulas allows us to explore different types of asymmetry and multivariate distributions. Using data on product, capital and labor, we measure the relative efficiency of the vine production function and estimate the coefficient used in the stochastic frontier literature for comparison purposes. This production vine copula predicts the value added by firms with given capital and labor in a probabilistic way. It thereby stands in sharp contrast to the production function, where the output of firms is completely deterministic. The results show that, on average, S&P500 companies are more efficient than companies listed in England and Germany, which presented similar average efficiency coefficients. For comparative purposes, the traditional stochastic frontier was estimated; the results showed discrepancies between the coefficients obtained by the two methods, traditional and frontier-vine, opening new paths of non-linear research.
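
    The traditional stochastic frontier that the paper benchmarks against can be estimated by maximum likelihood under the standard normal/half-normal assumption. A hedged sketch (not the authors' code; function name, optimiser, and starting values are our choices):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def sfa_halfnormal(x, y):
    """Normal/half-normal production frontier y = b0 + b1*x + v - u,
    estimated by maximum likelihood (Aigner-Lovell-Schmidt form),
    with sigma^2 = var(v) + var-scale of u and lambda = sigma_u/sigma_v."""
    X = np.column_stack([np.ones(len(y)), x])

    def nll(theta):
        b, sig, lam = theta[:2], np.exp(theta[2]), np.exp(theta[3])
        e = y - X @ b
        # log f(e) = log(2/sig) + log phi(e/sig) + log Phi(-e*lam/sig)
        ll = np.log(2.0 / sig) + norm.logpdf(e / sig) + norm.logcdf(-e * lam / sig)
        return -ll.sum()

    b0 = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS starting values
    resid = y - X @ b0
    start = np.r_[b0, np.log(resid.std()), 0.0]
    res = minimize(nll, start, method="Nelder-Mead",
                   options={"maxiter": 10000, "maxfev": 10000,
                            "xatol": 1e-8, "fatol": 1e-10})
    return res.x[:2], np.exp(res.x[2]), np.exp(res.x[3])
```

    The vine-copula approach of the paper replaces the rigid distributional assumptions baked into this likelihood with a flexible multivariate dependence structure.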

  19. Promotion time cure rate model with nonparametric form of covariate effects.

    Science.gov (United States)

    Chen, Tianlei; Du, Pang

    2018-05-10

    Survival data with a cured portion are commonly seen in clinical trials. Motivated by a biological interpretation of cancer metastasis, the promotion time cure model is a popular alternative to the mixture cure rate model for analyzing such data. The existing promotion time cure models all assume a restrictive parametric form of covariate effects, which can be incorrectly specified especially at the exploratory stage. In this paper, we propose a nonparametric approach to modeling the covariate effects under the framework of the promotion time cure model. The covariate effect function is estimated by smoothing splines via the optimization of a penalized profile likelihood. Point-wise interval estimates are also derived from the Bayesian interpretation of the penalized profile likelihood. Asymptotic convergence rates are established for the proposed estimates. Simulations show excellent performance of the proposed nonparametric method, which is then applied to a melanoma study. Copyright © 2018 John Wiley & Sons, Ltd.

  20. NONPARAMETRIC FIXED EFFECT PANEL DATA MODELS: RELATIONSHIP BETWEEN AIR POLLUTION AND INCOME FOR TURKEY

    Directory of Open Access Journals (Sweden)

    Rabia Ece OMAY

    2013-06-01

    In this study, the relationship between gross domestic product (GDP) per capita and sulfur dioxide (SO2) and particulate matter (PM10) per capita is modeled for Turkey. Nonparametric fixed effect panel data analysis is used for the modeling. The panel data cover 12 territories, at the first level of the Nomenclature of Territorial Units for Statistics (NUTS), for the period 1990-2001. In modeling the relationship between GDP and SO2 and PM10 for Turkey, the nonparametric models gave good results.

  1. Bayesian nonparametric data analysis

    CERN Document Server

    Müller, Peter; Jara, Alejandro; Hanson, Tim

    2015-01-01

    This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in on-line software pages.

  2. Impossible Frontiers

    OpenAIRE

    Brennan, Thomas J.; Lo, Andrew W.

    2009-01-01

    A key result of the Capital Asset Pricing Model (CAPM) is that the market portfolio---the portfolio of all assets in which each asset's weight is proportional to its total market capitalization---lies on the mean-variance efficient frontier, the set of portfolios having mean-variance characteristics that cannot be improved upon. Therefore, the CAPM cannot be consistent with efficient frontiers for which every frontier portfolio has at least one negative weight or short position. We call such ...

  3. Estimation of Stochastic Volatility Models by Nonparametric Filtering

    DEFF Research Database (Denmark)

    Kanaya, Shin; Kristensen, Dennis

    2016-01-01

    ...estimated volatility process replacing the latent process. Our estimation strategy is applicable to both parametric and nonparametric stochastic volatility models, and can handle both jumps and market microstructure noise. The resulting estimators of the stochastic volatility model will carry additional biases and variances due to the first-step estimation, but under regularity conditions we show that these vanish asymptotically and our estimators inherit the asymptotic properties of the infeasible estimators based on observations of the volatility process. A simulation study examines the finite-sample properties...

  4. A Bayesian approach to the analysis of quantal bioassay studies using nonparametric mixture models.

    Science.gov (United States)

    Fronczyk, Kassandra; Kottas, Athanasios

    2014-03-01

    We develop a Bayesian nonparametric mixture modeling framework for quantal bioassay settings. The approach is built upon modeling dose-dependent response distributions. We adopt a structured nonparametric prior mixture model, which induces a monotonicity restriction for the dose-response curve. Particular emphasis is placed on the key risk assessment goal of calibration for the dose level that corresponds to a specified response. The proposed methodology yields flexible inference for the dose-response relationship as well as for other inferential objectives, as illustrated with two data sets from the literature. © 2013, The International Biometric Society.

  5. A non-parametric hierarchical model to discover behavior dynamics from tracks

    NARCIS (Netherlands)

    Kooij, J.F.P.; Englebienne, G.; Gavrila, D.M.

    2012-01-01

    We present a novel non-parametric Bayesian model to jointly discover the dynamics of low-level actions and high-level behaviors of tracked people in open environments. Our model represents behaviors as Markov chains of actions which capture high-level temporal dynamics. Actions may be shared by

  6. Nonparametric tests for censored data

    CERN Document Server

    Bagdonavicus, Vilijandas; Nikulin, Mikhail

    2013-01-01

    This book concerns testing hypotheses in non-parametric models. Generalizations of many non-parametric tests to the case of censored and truncated data are considered. Most of the test results are proved and real applications are illustrated using examples. Theories and exercises are provided. The incorrect use of many tests applying most statistical software is highlighted and discussed.

  7. Nonparametric Bayesian models through probit stick-breaking processes.

    Science.gov (United States)

    Rodríguez, Abel; Dunson, David B

    2011-03-01

    We describe a novel class of Bayesian nonparametric priors based on stick-breaking constructions where the weights of the process are constructed as probit transformations of normal random variables. We show that these priors are extremely flexible, allowing us to generate a great variety of models while preserving computational simplicity. Particular emphasis is placed on the construction of rich temporal and spatial processes, which are applied to two problems in finance and ecology.

  8. A contingency table approach to nonparametric testing

    CERN Document Server

    Rayner, JCW

    2000-01-01

    Most texts on nonparametric techniques concentrate on location and linear-linear (correlation) tests, with less emphasis on dispersion effects and linear-quadratic tests. Tests for higher moment effects are virtually ignored. Using a fresh approach, A Contingency Table Approach to Nonparametric Testing unifies and extends the popular, standard tests by linking them to tests based on models for data that can be presented in contingency tables.This approach unifies popular nonparametric statistical inference and makes the traditional, most commonly performed nonparametric analyses much more comp

  9. A comparative study of non-parametric models for identification of ...

    African Journals Online (AJOL)

    However, the frequency response method using random binary signals was good for unpredicted white noise characteristics and considered the best method for non-parametric system identification. The autoregressive external input (ARX) model was very useful for system identification, but on application, few input ...

  10. A semi-nonparametric mixture model for selecting functionally consistent proteins.

    Science.gov (United States)

    Yu, Lianbo; Doerge, RW

    2010-09-28

    High-throughput technologies have led to a new era of proteomics. Although protein microarray experiments are becoming more common place there are a variety of experimental and statistical issues that have yet to be addressed, and that will carry over to new high-throughput technologies unless they are investigated. One of the largest of these challenges is the selection of functionally consistent proteins. We present a novel semi-nonparametric mixture model for classifying proteins as consistent or inconsistent while controlling the false discovery rate and the false non-discovery rate. The performance of the proposed approach is compared to current methods via simulation under a variety of experimental conditions. We provide a statistical method for selecting functionally consistent proteins in the context of protein microarray experiments, but the proposed semi-nonparametric mixture model method can certainly be generalized to solve other mixture data problems. The main advantage of this approach is that it provides the posterior probability of consistency for each protein.

  11. Scalable Bayesian nonparametric regression via a Plackett-Luce model for conditional ranks

    Science.gov (United States)

    Gray-Davies, Tristan; Holmes, Chris C.; Caron, François

    2018-01-01

    We present a novel Bayesian nonparametric regression model for covariates X and continuous response variable Y ∈ ℝ. The model is parametrized in terms of marginal distributions for Y and X and a regression function which tunes the stochastic ordering of the conditional distributions F (y|x). By adopting an approximate composite likelihood approach, we show that the resulting posterior inference can be decoupled for the separate components of the model. This procedure can scale to very large datasets and allows for the use of standard, existing, software from Bayesian nonparametric density estimation and Plackett-Luce ranking estimation to be applied. As an illustration, we show an application of our approach to a US Census dataset, with over 1,300,000 data points and more than 100 covariates. PMID:29623150

  12. Simple nonparametric checks for model data fit in CAT

    NARCIS (Netherlands)

    Meijer, R.R.

    2005-01-01

    In this paper, the usefulness of several nonparametric checks is discussed in a computerized adaptive testing (CAT) context. Although there is no tradition of nonparametric scalability in CAT, it can be argued that scalability checks can be useful to investigate, for example, the quality of item

  13. A Bayesian Beta-Mixture Model for Nonparametric IRT (BBM-IRT)

    Science.gov (United States)

    Arenson, Ethan A.; Karabatsos, George

    2017-01-01

    Item response models typically assume that the item characteristic (step) curves follow a logistic or normal cumulative distribution function, which are strictly monotone functions of person test ability. Such assumptions can be overly-restrictive for real item response data. We propose a simple and more flexible Bayesian nonparametric IRT model…

  14. Using multinomial and imprecise probability for non-parametric modelling of rainfall in Manizales (Colombia

    Directory of Open Access Journals (Sweden)

    Ibsen Chivatá Cárdenas

    2008-05-01

    This article presents a rainfall model constructed by applying non-parametric modelling and imprecise probabilities; these tools were used because there was not enough homogeneous information in the study area. The area's hydrological information regarding rainfall was scarce and the existing hydrological time series were not uniform. A distributed extended rainfall model was constructed from so-called probability boxes (p-boxes), multinomial probability distributions and confidence intervals (a friendly algorithm was constructed for non-parametric modelling by combining the last two tools). This model confirmed the high level of uncertainty involved in local rainfall modelling. Uncertainty encompassed the whole range (domain) of probability values, showing the severe limitations on information and leading to the conclusion that a detailed estimation of probability would lead to significant error. Nevertheless, relevant information was extracted; it was estimated that the maximum daily rainfall threshold (70 mm) would be surpassed at least once every three years, as was the magnitude of uncertainty affecting hydrological parameter estimation. This paper's conclusions may be of interest to non-parametric modellers and decision-makers, as such modelling and imprecise probability represent an alternative for hydrological variable assessment and may be an obligatory procedure in the future. Their potential lies in treating scarce information, and they represent a robust modelling strategy for non-seasonal stochastic modelling conditions.

  15. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
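
    The dominance check at the heart of this approach is simple to state: a candidate input set is kept only if no other set fits every calibration target at least as well and at least one target strictly better. A minimal sketch (our own naming; rows are candidate input sets, columns are per-target goodness-of-fit distances, lower is better):

```python
import numpy as np

def pareto_frontier(errors):
    """Return a boolean mask marking the non-dominated rows of an
    (n_candidates, n_targets) array of calibration-target errors.
    Row i is dominated if some row j is <= on every target and < on one."""
    e = np.asarray(errors, dtype=float)
    n = len(e)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        dominated = np.all(e <= e[i], axis=1) & np.any(e < e[i], axis=1)
        keep[i] = not dominated.any()
    return keep
```

    Unlike a weighted-sum score, this filter requires no weights at all, which is exactly the property the paper exploits.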

  16. Nonparametric functional mapping of quantitative trait loci.

    Science.gov (United States)

    Yang, Jie; Wu, Rongling; Casella, George

    2009-03-01

    Functional mapping is a useful tool for mapping quantitative trait loci (QTL) that control dynamic traits. It incorporates mathematical aspects of biological processes into the mixture model-based likelihood setting for QTL mapping, thus increasing the power of QTL detection and the precision of parameter estimation. However, in many situations there is no obvious functional form and, in such cases, this strategy will not be optimal. Here we propose to use nonparametric function estimation, typically implemented with B-splines, to estimate the underlying functional form of phenotypic trajectories, and then construct a nonparametric test to find evidence of existing QTL. Using the representation of a nonparametric regression as a mixed model, the final test statistic is a likelihood ratio test. We consider two types of genetic maps: dense maps and general maps, and the power of nonparametric functional mapping is investigated through simulation studies and demonstrated by examples.
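
    The B-spline building block mentioned above can be set up with standard tools. A minimal least-squares fit of a single phenotypic trajectory (knot placement, names, and defaults are our choices, not the authors'):

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

def fit_trajectory(t, y, n_knots=8, k=3):
    """Least-squares cubic B-spline fit of a phenotypic trajectory.
    Interior knots are placed at quantiles of the (sorted) time points;
    boundary knots are repeated k+1 times as usual."""
    order = np.argsort(t)
    t, y = np.asarray(t, float)[order], np.asarray(y, float)[order]
    interior = np.quantile(t, np.linspace(0, 1, n_knots)[1:-1])
    knots = np.r_[[t[0]] * (k + 1), interior, [t[-1]] * (k + 1)]
    return make_lsq_spline(t, y, knots, k)
```

    In the paper, such spline fits enter a mixed-model representation so that the presence of a QTL can be assessed with a likelihood ratio test.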

  17. Nonparametric statistical inference

    CERN Document Server

    Gibbons, Jean Dickinson

    2010-01-01

    Overall, this remains a very fine book suitable for a graduate-level course in nonparametric statistics. I recommend it for all people interested in learning the basic ideas of nonparametric statistical inference.-Eugenia Stoimenova, Journal of Applied Statistics, June 2012… one of the best books available for a graduate (or advanced undergraduate) text for a theory course on nonparametric statistics. … a very well-written and organized book on nonparametric statistics, especially useful and recommended for teachers and graduate students.-Biometrics, 67, September 2011This excellently presente

  18. Neoclassical versus Frontier Production Models ? Testing for the Skewness of Regression Residuals

    DEFF Research Database (Denmark)

    Kuosmanen, T; Fosgerau, Mogens

    2009-01-01

    The empirical literature on production and cost functions is divided into two strands. The neoclassical approach concentrates on model parameters, while the frontier approach decomposes the disturbance term into a symmetric noise term and a positively skewed inefficiency term. We propose a theoretical justification for the skewness of the inefficiency term, arguing that this skewness is the key testable hypothesis of the frontier approach. We propose to test the regression residuals for skewness in order to distinguish the two competing approaches. Our test builds directly upon the asymmetry...
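
    In its simplest form, such a diagnostic amounts to checking the skewness of OLS residuals. A hedged sketch using the D'Agostino skewness test as a stand-in for the authors' exact statistic:

```python
import numpy as np
from scipy.stats import skew, skewtest

def residual_skewness_test(x, y):
    """Fit OLS, then test the residuals for skewness: symmetric residuals
    are consistent with the neoclassical reading, while negative skew is
    consistent with a production-frontier inefficiency term e = v - u."""
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    stat, pval = skewtest(resid)   # D'Agostino test of zero skewness
    return float(skew(resid)), float(pval)
```

    For a cost frontier the sign flips: inefficiency enters with a positive sign, so positive residual skewness is the frontier-consistent direction.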

  19. Frontier models for evaluating environmental efficiency: an overview

    NARCIS (Netherlands)

    Oude Lansink, A.G.J.M.; Wall, A.

    2014-01-01

    Our aim in this paper is to provide a succinct overview of frontier-based models used to evaluate environmental efficiency, with a special emphasis on agricultural activity. We begin by providing a brief, up-to-date review of the main approaches used to measure environmental efficiency, with

  20. Hierarchical Bayesian nonparametric mixture models for clustering with variable relevance determination.

    Science.gov (United States)

    Yau, Christopher; Holmes, Chris

    2011-07-01

    We propose a hierarchical Bayesian nonparametric mixture model for clustering when some of the covariates are assumed to be of varying relevance to the clustering problem. This can be thought of as an issue in variable selection for unsupervised learning. We demonstrate that by defining a hierarchical population based nonparametric prior on the cluster locations scaled by the inverse covariance matrices of the likelihood we arrive at a 'sparsity prior' representation which admits a conditionally conjugate prior. This allows us to perform full Gibbs sampling to obtain posterior distributions over parameters of interest including an explicit measure of each covariate's relevance and a distribution over the number of potential clusters present in the data. This also allows for individual cluster specific variable selection. We demonstrate improved inference on a number of canonical problems.

  1. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    Science.gov (United States)

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.

  2. A nonparametric spatial scan statistic for continuous data.

    Science.gov (United States)

    Jung, Inkyung; Cho, Ho Jin

    2015-10-20

    Spatial scan statistics are widely used for spatial cluster detection, and several parametric models exist. For continuous data, a normal-based scan statistic can be used. However, the performance of the model has not been fully evaluated for non-normal data. We propose a nonparametric spatial scan statistic based on the Wilcoxon rank-sum test statistic and compared the performance of the method with parametric models via a simulation study under various scenarios. The nonparametric method outperforms the normal-based scan statistic in terms of power and accuracy in almost all cases under consideration in the simulation study. The proposed nonparametric spatial scan statistic is therefore an excellent alternative to the normal model for continuous data and is especially useful for data following skewed or heavy-tailed distributions.
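
    The scan itself can be sketched as a search over candidate circular zones, scoring each with the Wilcoxon rank-sum statistic (a minimal illustration of the idea; in practice the significance of the maximal zone is assessed by Monte Carlo permutation, which this sketch omits, and the names are ours):

```python
import numpy as np
from scipy.stats import ranksums

def wilcoxon_scan(coords, values, radii):
    """Scan circular zones centred at each observed location, comparing
    values inside vs. outside the zone with the Wilcoxon rank-sum
    statistic; returns the most extreme zone found and its z-statistic."""
    best_z, best_zone = 0.0, None
    for c in coords:
        d = np.linalg.norm(coords - c, axis=1)
        for r in radii:
            inside = d <= r
            k = inside.sum()
            if 1 < k < len(values) - 1:          # need data on both sides
                z = ranksums(values[inside], values[~inside]).statistic
                if abs(z) > abs(best_z):
                    best_z, best_zone = z, (tuple(c), r)
    return best_z, best_zone
```

    Because ranks are invariant to monotone transformations, the same scan works unchanged for skewed or heavy-tailed data, which is the property driving the simulation results reported above.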

  3. Prediction of a CRS Frontier Function and a Transformation Function for a CCR DEA Using Embedded PCA

    Directory of Open Access Journals (Sweden)

    Subhadip Sarkar

    2013-08-01

    Data Envelopment Analysis is a nonparametric tool for measuring the performance of a number of homogeneous Decision Making Units. In this paper, Principal Component Analysis is used as an alternative tool to estimate the frontier in a Data Envelopment Analysis under the assumption of Constant Returns to Scale. Apart from this, in the context of multiple inputs and a single output, a transformation function is developed here using the Most Productive Scale Size condition stated by Starrett. This function complies with all postulates of a frontier function and is very similar to the formula given by Aigner and Chu. Moreover, it is capable of defining the threshold value for any resource.
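
    For reference, the constant-returns CCR scores that any such frontier estimate must reproduce come from the standard input-oriented envelopment LP. A minimal sketch (our naming, using scipy's linprog; not the paper's PCA-based procedure):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y):
    """Input-oriented CCR DEA scores under constant returns to scale.
    X is (n_units, n_inputs), Y is (n_units, n_outputs). For each unit o,
    solve: min theta  s.t.  X^T lam <= theta * x_o,  Y^T lam >= y_o, lam >= 0."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                    # objective: minimise theta
        A_ub = np.block([[-X[o][:, None], X.T],        # X^T lam - theta*x_o <= 0
                         [np.zeros((s, 1)), -Y.T]])    # -Y^T lam <= -y_o
        b_ub = np.r_[np.zeros(m), -Y[o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
        scores.append(res.x[0])
    return np.array(scores)
```

    A unit with a score of 1 lies on the CRS frontier; scores below 1 give the proportional input contraction needed to reach it.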

  4. Biomembrane Frontiers Nanostructures, Models, and the Design of Life

    CERN Document Server

    Faller, Roland; Risbud, Subhash H; Jue, Thomas

    2009-01-01

    HANDBOOK OF MODERN BIOPHYSICS Series Editor Thomas Jue, PhD Handbook of Modern Biophysics brings current biophysics topics into focus, so that biology, medical, engineering, mathematics, and physical-science students or researchers can learn fundamental concepts and the application of new techniques in addressing biomedical challenges. Chapters explicate the conceptual framework of the physics formalism and illustrate the biomedical applications. With the addition of problem sets, guides to further study, and references, the interested reader can continue to explore independently the ideas presented. Volume II: Biomembrane Frontiers: Nanostructures, Models, and the Design of Life Editors: Roland Faller, PhD, Thomas Jue, PhD, Marjorie L. Longo, PhD, and Subhash H. Risbud, PhD In Biomembrane Frontiers: Nanostructures, Models, and the Design of Life, prominent researchers have established a foundation for the study of biophysics related to the following topics: Perspectives: Complexes in Liquids, 1900–2008 Mol...

  5. Nonparametric Change Point Diagnosis Method of Concrete Dam Crack Behavior Abnormality

    Directory of Open Access Journals (Sweden)

    Zhanchao Li

    2013-01-01

    Full Text Available The diagnosis of concrete dam crack behavior abnormality has long been a hot spot and a difficulty in the safety monitoring of hydraulic structures. Based on the performance of concrete dam crack behavior abnormality in parametric and nonparametric statistical models, the internal relation between concrete dam crack behavior abnormality and statistical change point theory is analyzed in depth, from the model structure instability of the parametric statistical model and the change of the sequence distribution law of the nonparametric statistical model. On this basis, through reduction of the change point problem, establishment of a basic nonparametric change point model, and asymptotic analysis of the test method for the basic change point problem, a nonparametric change point diagnosis method for concrete dam crack behavior abnormality is created, allowing for the fact that in practice concrete dam crack behavior may have multiple abnormality points. The method is applied in an actual project, demonstrating its effectiveness and scientific soundness. It has a complete theoretical basis and strong practicality, with broad application prospects in actual projects.
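
The core idea of nonparametric change point detection can be sketched by scanning candidate split points and maximizing a rank-based (Mann-Whitney-type) statistic. This is a toy illustration under simplifying assumptions (a single change point, no significance calibration), not the dam-monitoring method itself:

```python
def rank_changepoint(x):
    """Locate a single change point by maximizing a Mann-Whitney-type statistic
    over all candidate splits of the sequence x."""
    n = len(x)
    best_k, best_stat = 1, -1.0
    for k in range(2, n - 1):            # candidate split: x[:k] vs x[k:]
        left, right = x[:k], x[k:]
        # U statistic: count of (left, right) pairs with the right value larger
        u = sum(1 for a in left for b in right if b > a)
        stat = abs(u / (len(left) * len(right)) - 0.5)  # deviation from no-change expectation
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k

# Hypothetical monitoring sequence with a level shift at index 6
series = [0.1, -0.2, 0.0, 0.2, -0.1, 0.1, 1.9, 2.1, 2.0, 2.2, 1.8, 2.1]
cp = rank_changepoint(series)
```

Because the statistic uses only ranks, no distributional assumption on the sequence is needed, which is the appeal of the nonparametric approach.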

  6. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    Science.gov (United States)

    Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.

    2018-03-01

    We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  7. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    Directory of Open Access Journals (Sweden)

    Jinchao Feng

    2018-03-01

    Full Text Available We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.
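
A gradient-based local sensitivity index can be sketched for the single-site (non-competitive) Langmuir isotherm with a central finite difference. This is a one-parameter, uncorrelated illustration only; the paper's correlated, small-data machinery is far richer:

```python
def coverage(K, c):
    """Single-site Langmuir isotherm: fractional surface coverage theta = K*c / (1 + K*c)."""
    return K * c / (1.0 + K * c)

def normalized_sensitivity(K, c, h=1e-6):
    """Gradient-based local sensitivity index d(ln theta)/d(ln K),
    estimated by a central finite difference in K."""
    theta = coverage(K, c)
    dtheta = (coverage(K + h, c) - coverage(K - h, c)) / (2.0 * h)
    return K * dtheta / theta

# Analytically the index equals 1 / (1 + K*c): sensitivity to the
# equilibrium constant K fades as the surface saturates.
K, c = 2.0, 0.5   # hypothetical equilibrium constant and concentration
s = normalized_sensitivity(K, c)
```

Comparing the finite-difference value against the analytic form 1/(1 + K·c) is a quick correctness check before moving to models without closed forms.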

  8. Bayesian Bandwidth Selection for a Nonparametric Regression Model with Mixed Types of Regressors

    Directory of Open Access Journals (Sweden)

    Xibin Zhang

    2016-04-01

    Full Text Available This paper develops a sampling algorithm for bandwidth estimation in a nonparametric regression model with continuous and discrete regressors under an unknown error density. The error density is approximated by the kernel density estimator of the unobserved errors, while the regression function is estimated using the Nadaraya-Watson estimator admitting continuous and discrete regressors. We derive an approximate likelihood and posterior for bandwidth parameters, followed by a sampling algorithm. Simulation results show that the proposed approach typically leads to better accuracy of the resulting estimates than cross-validation, particularly for smaller sample sizes. This bandwidth estimation approach is applied to a nonparametric regression model of the Australian All Ordinaries returns and to kernel density estimation of gross domestic product (GDP) growth rates among Organisation for Economic Co-operation and Development (OECD) and non-OECD countries.
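
For a single continuous regressor, the Nadaraya-Watson estimator mentioned above is a kernel-weighted local average. A minimal sketch with a Gaussian kernel and a fixed bandwidth (not the paper's Bayesian-estimated one), on noiseless hypothetical data:

```python
import math

def nw_estimate(x0, xs, ys, h):
    """Nadaraya-Watson regression estimate at x0 with a Gaussian kernel of bandwidth h."""
    weights = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

xs = [i / 10 for i in range(21)]        # grid on [0, 2]
ys = [x * x for x in xs]                # noiseless y = x^2 for illustration
yhat = nw_estimate(1.0, xs, ys, h=0.1)  # local average around x = 1
```

The bandwidth h controls the bias-variance trade-off: a larger h averages over more points (smoother, more biased), which is exactly why data-driven bandwidth selection, Bayesian or cross-validated, matters.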

  9. Exploring gravitational lensing model variations in the Frontier Fields galaxy clusters

    Science.gov (United States)

    Harris James, Nicholas John; Raney, Catie; Brennan, Sean; Keeton, Charles

    2018-01-01

    Multiple groups have been working on modeling the mass distributions of the six lensing galaxy clusters in the Hubble Space Telescope Frontier Fields data set. The magnification maps produced from these mass models will be important for the future study of the lensed background galaxies, but there exists significant variation in the different groups’ models and magnification maps. We explore the use of two-dimensional histograms as a tool for visualizing these magnification map variations. Using a number of simple, one- or two-halo singular isothermal sphere models, we explore the features that are produced in 2D histogram model comparisons when parameters such as halo mass, ellipticity, and location are allowed to vary. Our analysis demonstrates the potential of 2D histograms as a means of observing the full range of differences between the Frontier Fields groups’ models. This work has been supported by funding from National Science Foundation grants PHY-1560077 and AST-1211385, and from the Space Telescope Science Institute.

  10. Nonparametric methods for volatility density estimation

    NARCIS (Netherlands)

    Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.

    2009-01-01

    Stochastic volatility modelling of financial processes has become increasingly popular. The proposed models usually contain a stationary volatility process. We will motivate and review several nonparametric methods for estimation of the density of the volatility process. Both models based on

  11. DPpackage: Bayesian Semi- and Nonparametric Modeling in R

    Directory of Open Access Journals (Sweden)

    Alejandro Jara

    2011-04-01

    Full Text Available Data analysis sometimes requires the relaxation of parametric assumptions in order to gain modeling flexibility and robustness against mis-specification of the probability model. In the Bayesian context, this is accomplished by placing a prior distribution on a function space, such as the space of all probability distributions or the space of all regression functions. Unfortunately, posterior distributions ranging over function spaces are highly complex and hence sampling methods play a key role. This paper provides an introduction to a simple, yet comprehensive, set of programs for the implementation of some Bayesian nonparametric and semiparametric models in R, DPpackage. Currently, DPpackage includes models for marginal and conditional density estimation, receiver operating characteristic curve analysis, interval-censored data, binary regression data, item response data, longitudinal and clustered data using generalized linear mixed models, and regression data using generalized additive models. The package also contains functions to compute pseudo-Bayes factors for model comparison and for eliciting the precision parameter of the Dirichlet process prior, and a general purpose Metropolis sampling algorithm. To maximize computational efficiency, the actual sampling for each model is carried out using compiled C, C++ or Fortran code.

  12. Robustifying Bayesian nonparametric mixtures for count data.

    Science.gov (United States)

    Canale, Antonio; Prünster, Igor

    2017-03-01

    Our motivating application stems from surveys of natural populations and is characterized by large spatial heterogeneity in the counts, which makes parametric approaches to modeling local animal abundance too restrictive. We adopt a Bayesian nonparametric approach based on mixture models and innovate with respect to the popular Dirichlet process mixture of Poisson kernels by increasing the model flexibility at the level both of the kernel and the nonparametric mixing measure. This allows us to derive accurate and robust estimates of the distribution of local animal abundance and of the corresponding clusters. The application and a simulation study for different scenarios also yield some general methodological implications. Adding flexibility solely at the level of the mixing measure does not improve inferences, since its impact is severely limited by the rigidity of the Poisson kernel, with considerable consequences in terms of bias. However, once a kernel more flexible than the Poisson is chosen, inferences can be robustified by choosing a prior more general than the Dirichlet process. Therefore, to improve the performance of Bayesian nonparametric mixtures for count data one has to enrich the model simultaneously at both levels, the kernel and the mixing measure. © 2016, The International Biometric Society.

  13. Further Empirical Results on Parametric Versus Non-Parametric IRT Modeling of Likert-Type Personality Data

    Science.gov (United States)

    Maydeu-Olivares, Albert

    2005-01-01

    Chernyshenko, Stark, Chan, Drasgow, and Williams (2001) investigated the fit of Samejima's logistic graded model and Levine's non-parametric MFS model to the scales of two personality questionnaires and found that the graded model did not fit well. We attribute the poor fit of the graded model to small amounts of multidimensionality present in…

  14. The Support Reduction Algorithm for Computing Non-Parametric Function Estimates in Mixture Models

    OpenAIRE

    GROENEBOOM, PIET; JONGBLOED, GEURT; WELLNER, JON A.

    2008-01-01

    In this paper, we study an algorithm (which we call the support reduction algorithm) that can be used to compute non-parametric M-estimators in mixture models. The algorithm is compared with natural competitors in the context of convex regression and the ‘Aspect problem’ in quantum physics.

  15. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.

    Science.gov (United States)

    Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen

    2011-04-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.

  16. Estimating the shadow prices of SO2 and NOx for U.S. coal power plants: A convex nonparametric least squares approach

    International Nuclear Information System (INIS)

    Mekaroonreung, Maethee; Johnson, Andrew L.

    2012-01-01

    Weak disposability between outputs and pollutants, defined as a simultaneous proportional reduction of both outputs and pollutants, assumes that pollutants are byproducts of the output generation process and that a firm can “freely dispose” of both by scaling down production levels, leaving some inputs idle. Based on the production axioms of monotonicity, convexity and weak disposability, we formulate a convex nonparametric least squares (CNLS) quadratic optimization problem to estimate a frontier production function assuming either a deterministic disturbance term consisting only of inefficiency, or a composite disturbance term composed of both inefficiency and noise. The suggested methodology extends the stochastic semi-nonparametric envelopment of data (StoNED) described in Kuosmanen and Kortelainen (2011). Applying the method to estimate the shadow prices of SO2 and NOx generated by U.S. coal power plants, we conclude that the weak disposability StoNED method provides more consistent estimates of market prices. - Highlights: ► Develops methodology to estimate shadow prices for SO2 and NOx in U.S. coal power plants. ► Extends CNLS and StoNED methods to include the weak disposability assumption. ► Estimates the range of SO2 and NOx shadow prices as 201–343 $/ton and 409–1352 $/ton. ► StoNED method provides more accurate estimates of shadow prices than deterministic frontier.

  17. A NONPARAMETRIC HYPOTHESIS TEST VIA THE BOOTSTRAP RESAMPLING

    OpenAIRE

    Temel, Tugrul T.

    2001-01-01

    This paper adapts an existing nonparametric hypothesis test to the bootstrap framework. The test utilizes the nonparametric kernel regression method to estimate a measure of distance between the models stated under the null hypothesis. The bootstrapped version of the test allows approximation of the errors involved in the asymptotic hypothesis test. The paper also develops Mathematica code for the test algorithm.
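
The generic bootstrap testing recipe, resample under the null and compare against the observed statistic, can be sketched for a simple two-sample mean comparison. This is an illustration of the resampling framework only, not the paper's kernel-regression distance statistic:

```python
import random

def bootstrap_pvalue(x, y, n_boot=2000, seed=42):
    """Two-sided bootstrap test of equal means: resample both groups from the
    pooled sample (the null world) and count how often the resampled mean
    difference is at least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = x + y
    count = 0
    for _ in range(n_boot):
        bx = [rng.choice(pooled) for _ in x]
        by = [rng.choice(pooled) for _ in y]
        if abs(sum(bx) / len(bx) - sum(by) / len(by)) >= observed:
            count += 1
    return count / n_boot

# Hypothetical samples with clearly different means
x = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8]
y = [6.0, 6.2, 5.9, 6.1, 6.3, 5.8]
p = bootstrap_pvalue(x, y)
```

The same skeleton works for any statistic: replace the mean difference with the kernel-based distance and the resampling scheme with one appropriate to the null model.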

  18. Bayesian nonparametric modeling for comparison of single-neuron firing intensities.

    Science.gov (United States)

    Kottas, Athanasios; Behseta, Sam

    2010-03-01

    We propose a fully inferential model-based approach to the problem of comparing the firing patterns of a neuron recorded under two distinct experimental conditions. The methodology is based on nonhomogeneous Poisson process models for the firing times of each condition with flexible nonparametric mixture prior models for the corresponding intensity functions. We demonstrate posterior inferences from a global analysis, which may be used to compare the two conditions over the entire experimental time window, as well as from a pointwise analysis at selected time points to detect local deviations of firing patterns from one condition to another. We apply our method on two neurons recorded from the primary motor cortex area of a monkey's brain while performing a sequence of reaching tasks.
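
Firing times from a nonhomogeneous Poisson process, the building block of each condition's model above, can be simulated by Lewis-Shedler thinning. A sketch with a hypothetical intensity function, not the paper's nonparametric mixture prior:

```python
import random

def simulate_nhpp(intensity, t_max, lam_max, seed=7):
    """Simulate a nonhomogeneous Poisson process on (0, t_max] by thinning:
    propose events from a homogeneous process of rate lam_max, then accept
    each proposal with probability intensity(t) / lam_max."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam_max)       # next candidate event time
        if t > t_max:
            return events
        if rng.random() < intensity(t) / lam_max:
            events.append(t)

# Hypothetical firing intensity: a baseline rate plus a mid-trial bump
rate = lambda t: 5.0 + 15.0 * (1.0 if 1.0 <= t <= 2.0 else 0.0)
spikes = simulate_nhpp(rate, 3.0, lam_max=20.0)
```

Thinning requires only an upper bound lam_max on the intensity, which makes it convenient when the intensity itself is a flexible nonparametric function.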

  19. Nonparametric Estimation of Distributions in Random Effects Models

    KAUST Repository

    Hart, Jeffrey D.

    2011-01-01

    We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.

  20. Evaluation of model-based versus non-parametric monaural noise-reduction approaches for hearing aids.

    Science.gov (United States)

    Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker

    2012-08-01

    Single channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular for non-stationary noise conditions. Two prototype model-based algorithms, a speech-model-based and an auditory-model-based algorithm, were compared to a state-of-the-art non-parametric minimum statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of all algorithms, particular attention was given to the usage of a similar Wiener-based gain rule. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory-model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms perform better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. Data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.

  1. Comparison of Parametric and Nonparametric Methods for Analyzing the Bias of a Numerical Model

    Directory of Open Access Journals (Sweden)

    Isaac Mugume

    2016-01-01

    Full Text Available Numerical models are presently applied in many fields for simulation and prediction, operation, or research. The output from these models normally has both systematic and random errors. The study compared January 2015 temperature data for Uganda as simulated using the Weather Research and Forecast model with actual observed station temperature data to analyze the bias using parametric methods (the root mean square error (RMSE), the mean absolute error (MAE), the mean error (ME), skewness, and the bias easy estimate (BES)) and a nonparametric method (the sign test, STM). The RMSE normally overestimates the error compared to MAE. The RMSE and MAE are not sensitive to the direction of bias. The ME gives both direction and magnitude of bias but can be distorted by extreme values, while the BES is insensitive to extreme values. The STM is robust for giving the direction of bias; it is not sensitive to extreme values but it does not give the magnitude of bias. The graphical tools (such as time series and cumulative curves) show the performance of the model with time. It is recommended to integrate parametric and nonparametric methods along with graphical methods for a comprehensive analysis of the bias of a numerical model.
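
The parametric measures (RMSE, MAE, ME) and the sign-test count described above can be computed directly from forecast/observation pairs. A sketch with hypothetical temperature data:

```python
import math

def bias_metrics(forecast, observed):
    """Return parametric bias measures (ME, MAE, RMSE) plus the count of
    positive errors used by the nonparametric sign test."""
    errors = [f - o for f, o in zip(forecast, observed)]
    n = len(errors)
    me = sum(errors) / n                              # signed: shows bias direction
    mae = sum(abs(e) for e in errors) / n             # magnitude only
    rmse = math.sqrt(sum(e * e for e in errors) / n)  # penalizes large errors more
    n_pos = sum(1 for e in errors if e > 0)           # sign-test statistic: overforecasts
    return me, mae, rmse, n_pos

# Hypothetical simulated vs. observed temperatures (deg C)
forecast = [24.0, 26.0, 23.0, 27.0, 25.0]
observed = [23.0, 24.0, 24.0, 25.0, 24.0]
me, mae, rmse, n_pos = bias_metrics(forecast, observed)
```

Note that RMSE ≥ MAE always holds, and the gap between them widens with outliers, which is the overestimation behavior the abstract mentions; the sign count gives direction but, as noted, no magnitude.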

  2. Construction of a smoothed DEA frontier

    Directory of Open Access Journals (Sweden)

    Mello João Carlos Correia Baptista Soares de

    2002-01-01

    Full Text Available It is known that the DEA multipliers model does not allow a unique solution for the weights. This is due to the absence of unique derivatives at the extreme-efficient points, which is a consequence of the piecewise linear nature of the frontier. In this paper we propose a method to solve this problem, consisting of replacing the original DEA frontier with a new one that is smooth (with continuous derivatives at every point) and closest to the original frontier. We present the theoretical development for the general case, exemplified with the particular case of the BCC model with one input and one output. The 3-dimensional problem is briefly discussed. Some uses of the model are summarised, and one of them, a new Cross-Evaluation model, is presented.

  3. Parametric, nonparametric and parametric modelling of a chaotic circuit time series

    Science.gov (United States)

    Timmer, J.; Rust, H.; Horbelt, W.; Voss, H. U.

    2000-09-01

    The determination of a differential equation underlying a measured time series is a frequently arising task in nonlinear time series analysis. In the validation of a proposed model one often faces the dilemma that it is hard to decide whether possible discrepancies between the time series and model output are caused by an inappropriate model or by bad estimates of parameters in a correct type of model, or both. We propose a combination of parametric modelling based on Bock's multiple shooting algorithm and nonparametric modelling based on optimal transformations as a strategy to test proposed models and if rejected suggest and test new ones. We exemplify this strategy on an experimental time series from a chaotic circuit where we obtain an extremely accurate reconstruction of the observed attractor.

  4. Nonparametric test of consistency between cosmological models and multiband CMB measurements

    Energy Technology Data Exchange (ETDEWEB)

    Aghamousa, Amir [Asia Pacific Center for Theoretical Physics, Pohang, Gyeongbuk 790-784 (Korea, Republic of); Shafieloo, Arman, E-mail: amir@apctp.org, E-mail: shafieloo@kasi.re.kr [Korea Astronomy and Space Science Institute, Daejeon 305-348 (Korea, Republic of)

    2015-06-01

    We present a novel approach to test the consistency of cosmological models with multiband CMB data using a nonparametric approach. In our analysis we calibrate the REACT (Risk Estimation and Adaptation after Coordinate Transformation) confidence levels associated with distances in function space (confidence distances) based on Monte Carlo simulations in order to test the consistency of an assumed cosmological model with observation. To show the applicability of our algorithm, we confront Planck 2013 temperature data with the concordance model of cosmology, considering two different Planck spectra combinations. In order to have an accurate quantitative statistical measure to compare the data and the theoretical expectations, we calibrate REACT confidence distances and perform a bias control using many realizations of the data. Our results in this work using Planck 2013 temperature data put the best fit ΛCDM model at 95% (∼ 2σ) confidence distance from the center of the nonparametric confidence set, while repeating the analysis excluding the Planck 217 × 217 GHz spectrum data, the best fit ΛCDM model shifts to 70% (∼ 1σ) confidence distance. The most prominent features in the data deviating from the best fit ΛCDM model seem to be at low multipoles 18 < ℓ < 26 at greater than 2σ, ℓ ∼ 750 at ∼1 to 2σ, and ℓ ∼ 1800 at greater than 2σ level. Excluding the 217×217 GHz spectrum, the feature at ℓ ∼ 1800 becomes substantially less significant, at the ∼1 to 2σ confidence level. The results of our analysis based on the new approach we propose in this work are in agreement with other analyses done using alternative methods.

  5. Essays on nonparametric econometrics of stochastic volatility

    NARCIS (Netherlands)

    Zu, Y.

    2012-01-01

    Volatility is a concept that describes the variation of financial returns. Measuring and modelling volatility dynamics is an important aspect of financial econometrics. This thesis is concerned with nonparametric approaches to volatility measurement and volatility model validation.

  6. THE IMPACT OF COMPETITIVENESS ON TRADE EFFICIENCY: THE ASIAN EXPERIENCE BY USING THE STOCHASTIC FRONTIER GRAVITY MODEL

    Directory of Open Access Journals (Sweden)

    Memduh Alper Demir

    2017-12-01

    Full Text Available The purpose of this study is to examine the bilateral machinery and transport equipment trade efficiency of fourteen selected Asian countries by applying the stochastic frontier gravity model. These selected countries have the top machinery and transport equipment trade volumes (both export and import) in Asia. The model we use includes variables such as income, market size of trading partners, distance, common culture, common border, common language and global economic crisis, similar to earlier studies using stochastic frontier gravity models. Our work, however, includes an additional variable, the normalized revealed comparative advantage (NRCA) index. The NRCA index is comparable across commodity, country and time. Thus, the NRCA index is calculated and then included in our stochastic frontier gravity model to see the impact of competitiveness (here measured by the NRCA index) on the efficiency of trade.

  7. Assessing Goodness of Fit in Item Response Theory with Nonparametric Models: A Comparison of Posterior Probabilities and Kernel-Smoothing Approaches

    Science.gov (United States)

    Sueiro, Manuel J.; Abad, Francisco J.

    2011-01-01

    The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…

  8. 2nd Conference of the International Society for Nonparametric Statistics

    CERN Document Server

    Manteiga, Wenceslao; Romo, Juan

    2016-01-01

    This volume collects selected, peer-reviewed contributions from the 2nd Conference of the International Society for Nonparametric Statistics (ISNPS), held in Cádiz (Spain) between June 11–16 2014, and sponsored by the American Statistical Association, the Institute of Mathematical Statistics, the Bernoulli Society for Mathematical Statistics and Probability, the Journal of Nonparametric Statistics and Universidad Carlos III de Madrid. The 15 articles are a representative sample of the 336 contributed papers presented at the conference. They cover topics such as high-dimensional data modelling, inference for stochastic processes and for dependent data, nonparametric and goodness-of-fit testing, nonparametric curve estimation, object-oriented data analysis, and semiparametric inference. The aim of the ISNPS 2014 conference was to bring together recent advances and trends in several areas of nonparametric statistics in order to facilitate the exchange of research ideas, promote collaboration among researchers...

  9. Evaluation of parametric and nonparametric models to predict water flow; Avaliacao entre modelos parametricos e nao parametricos para previsao de vazoes afluentes

    Energy Technology Data Exchange (ETDEWEB)

    Marques, T.C.; Cruz Junior, G.; Vinhal, C. [Universidade Federal de Goias (UFG), Goiania, GO (Brazil). Escola de Engenharia Eletrica e de Computacao], Emails: thyago@eeec.ufg.br, gcruz@eeec.ufg.br, vinhal@eeec.ufg.br

    2009-07-01

    The goal of this paper is to present a methodology for seasonal stream flow forecasting using databases of average monthly inflows at Brazilian hydroelectric plants located on the Grande, Tocantins, Paranaiba, Sao Francisco and Iguacu rivers. The model is based on the Adaptive Network Based Fuzzy Inference System (ANFIS), a non-parametric model. Its performance was compared with that of a periodic autoregressive model, a parametric model. The results show that the forecasting errors of the non-parametric model are significantly lower than those of the parametric model. (author)

  10. Robust variable selection method for nonparametric differential equation models with application to nonlinear dynamic gene regulatory network analysis.

    Science.gov (United States)

    Lu, Tao

    2016-01-01

    A gene regulatory network (GRN) describes the interactions between genes, and GRN models seek to characterize gene expression behavior. These models have many applications; for instance, by characterizing the gene expression mechanisms that cause certain disorders, it would be possible to target those genes to block the progress of the disease. Many biological processes are driven by nonlinear dynamic GRNs. In this article, we propose a nonparametric ordinary differential equation (ODE) model for nonlinear dynamic GRNs. Specifically, we address the following questions simultaneously: (i) extract information from noisy time course gene expression data; (ii) model the nonlinear ODE through a nonparametric smoothing function; (iii) identify the important regulatory gene(s) through a group smoothly clipped absolute deviation (SCAD) approach; (iv) test the robustness of the model against possible shortening of experimental duration. We illustrate the usefulness of the model and associated statistical methods through simulation and a real application example.

  11. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    CERN Document Server

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  12. A local non-parametric model for trade sign inference

    Science.gov (United States)

    Blazejewski, Adam; Coggins, Richard

    2005-03-01

    We investigate a regularity in market order submission strategies for 12 stocks with large market capitalization on the Australian Stock Exchange. The regularity is evidenced by a predictable relationship between the trade sign (trade initiator), size of the trade, and the contents of the limit order book before the trade. We demonstrate this predictability by developing an empirical inference model to classify trades into buyer-initiated and seller-initiated. The model employs a local non-parametric method, k-nearest neighbor, which in the past was used successfully for chaotic time series prediction. The k-nearest neighbor with three predictor variables achieves an average out-of-sample classification accuracy of 71.40%, compared to 63.32% for the linear logistic regression with seven predictor variables. The result suggests that a non-linear approach may produce a more parsimonious trade sign inference model with a higher out-of-sample classification accuracy. Furthermore, for most of our stocks the observed regularity in market order submissions seems to have a memory of at least 30 trading days.
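
A bare-bones k-nearest-neighbor trade-sign classifier can be sketched in a few lines; the two features below (normalized trade size, bid-ask imbalance) are hypothetical stand-ins for the paper's limit-order-book predictors:

```python
def knn_classify(train, query, k=3):
    """Classify query by majority vote of its k nearest training examples.
    train is a list of (feature_vector, label) pairs with labels +1/-1."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))  # squared Euclidean
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes > 0 else -1   # +1 buyer-initiated, -1 seller-initiated

# Hypothetical training trades: (normalized size, bid-ask imbalance) -> sign
train = [((0.9, 0.8), 1), ((0.8, 0.7), 1), ((0.7, 0.9), 1),
         ((0.1, -0.6), -1), ((0.2, -0.8), -1), ((0.3, -0.7), -1)]
sign = knn_classify(train, (0.85, 0.75))
```

Using an odd k with binary labels avoids tie votes; in practice features should be rescaled so no single predictor dominates the distance.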

  13. Improving salt marsh digital elevation model accuracy with full-waveform lidar and nonparametric predictive modeling

    Science.gov (United States)

    Rogers, Jeffrey N.; Parrish, Christopher E.; Ward, Larry G.; Burdick, David M.

    2018-03-01

    Salt marsh vegetation tends to increase vertical uncertainty in light detection and ranging (lidar) derived elevation data, often causing the data to become ineffective for analysis of topographic features governing tidal inundation or vegetation zonation. Previous attempts at improving lidar data collected in salt marsh environments range from simply computing and subtracting the global elevation bias to more complex methods such as computing vegetation-specific, constant correction factors. The vegetation specific corrections can be used along with an existing habitat map to apply separate corrections to different areas within a study site. It is hypothesized here that correcting salt marsh lidar data by applying location-specific, point-by-point corrections, which are computed from lidar waveform-derived features, tidal-datum based elevation, distance from shoreline and other lidar digital elevation model based variables, using nonparametric regression will produce better results. The methods were developed and tested using full-waveform lidar and ground truth for three marshes in Cape Cod, Massachusetts, U.S.A. Five different model algorithms for nonparametric regression were evaluated, with TreeNet's stochastic gradient boosting algorithm consistently producing better regression and classification results. Additionally, models were constructed to predict the vegetative zone (high marsh and low marsh). The predictive modeling methods used in this study estimated ground elevation with a mean bias of 0.00 m and a standard deviation of 0.07 m (0.07 m root mean square error). These methods appear very promising for correction of salt marsh lidar data and, importantly, do not require an existing habitat map, biomass measurements, or image based remote sensing data such as multi/hyperspectral imagery.

  14. Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution

    Directory of Open Access Journals (Sweden)

    Emmanuel Kidando

    2017-01-01

    Multistate models, that is, models with more than two distributions, are preferred over single-state probability models in modeling the distribution of travel time. A literature review indicated that finite multistate modeling of travel time using the lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of estimating the travel time distribution to an unbounded lognormal distribution. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM) with a stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. Then, the Markov Chain Monte Carlo (MCMC) sampling technique was employed to estimate the parameters’ posterior distribution. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrated that this model offers significant flexibility in modeling to account for complex mixture distributions of the travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of onset and offset of congestion periods.
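The stick-breaking DPMM described above can be approximated with scikit-learn's truncated Dirichlet-process mixture, which uses variational inference rather than the paper's MCMC. A sketch on synthetic two-state travel times, with the same six-component cap; all numbers are invented:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(42)
# Two-state synthetic corridor travel times (minutes): free flow vs. congestion.
free_flow = rng.lognormal(mean=np.log(8.0), sigma=0.08, size=700)
congested = rng.lognormal(mean=np.log(14.0), sigma=0.15, size=300)
log_tt = np.log(np.concatenate([free_flow, congested])).reshape(-1, 1)

# Truncated stick-breaking DP mixture, capped at six components as in the study.
dpmm = BayesianGaussianMixture(
    n_components=6,
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
).fit(log_tt)

# Components with non-negligible weight are the "states" the data supports.
effective = int(np.sum(dpmm.weights_ > 0.05))
print(effective)
```

The DP prior shrinks the weights of unneeded components toward zero, so the number of travel-time states is inferred rather than fixed in advance.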

  15. A Frontier Model for Landscape Ecology: The Tapir in Honduras

    OpenAIRE

    Kevin Flesher; Eduardo Ley

    1995-01-01

    We borrow a frontier specification from the econometrics literature to make inferences about the tolerance of the tapir to human settlements. We estimate the width of an invisible band surrounding human settlements which would act as a frontier or exclusion zone to the tapir to be around 290 meters.

  16. Generative Temporal Modelling of Neuroimaging - Decomposition and Nonparametric Testing

    DEFF Research Database (Denmark)

    Hald, Ditte Høvenhoff

    The goal of this thesis is to explore two improvements for functional magnetic resonance imaging (fMRI) analysis; namely our proposed decomposition method and an extension to the non-parametric testing framework. Analysis of fMRI allows researchers to investigate the functional processes...... of the brain, and provides insight into neuronal coupling during mental processes or tasks. The decomposition method is a Gaussian process-based independent components analysis (GPICA), which incorporates a temporal dependency in the sources. A hierarchical model specification is used, featuring both...... instantaneous and convolutive mixing, and the inferred temporal patterns. Spatial maps are seen to capture smooth and localized stimuli-related components, and often identifiable noise components. The implementation is freely available as a GUI/SPM plugin, and we recommend using GPICA as an additional tool when...

  17. Parametric vs. Nonparametric Regression Modelling within Clinical Decision Support

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan; Zvárová, Jana

    2017-01-01

    Vol. 5, No. 1 (2017), pp. 21-27. ISSN 1805-8698 R&D Projects: GA ČR GA17-01251S Institutional support: RVO:67985807 Keywords: decision support systems * decision rules * statistical analysis * nonparametric regression Subject RIV: IN - Informatics, Computer Science OBOR OECD: Statistics and probability

  18. A BAYESIAN NONPARAMETRIC MIXTURE MODEL FOR SELECTING GENES AND GENE SUBNETWORKS.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Yu, Tianwei

    2014-06-01

    It is very challenging to select informative features from tens of thousands of measured features in high-throughput data analysis. Recently, several parametric/regression models have been developed utilizing the gene network information to select genes or pathways strongly associated with a clinical/biological outcome. Alternatively, in this paper, we propose a nonparametric Bayesian model for gene selection incorporating network information. In addition to identifying genes that have a strong association with a clinical outcome, our model can select genes with particular expressional behavior, in which case the regression models are not directly applicable. We show that our proposed model is equivalent to an infinite mixture model for which we develop a posterior computation algorithm based on Markov chain Monte Carlo (MCMC) methods. We also propose two fast computing algorithms that approximate the posterior simulation with good accuracy but relatively low computational cost. We illustrate our methods on simulation studies and the analysis of Spellman yeast cell cycle microarray data.

  19. Screen Wars, Star Wars, and Sequels: Nonparametric Reanalysis of Movie Profitability

    OpenAIRE

    W. D. Walls

    2012-01-01

    In this paper we use nonparametric statistical tools to quantify motion-picture profit. We quantify the unconditional distribution of profit, the distribution of profit conditional on stars and sequels, and we also model the conditional expectation of movie profits using a non-parametric data-driven regression model. The flexibility of the non-parametric approach accommodates the full range of possible relationships among the variables without prior specification of a functional form, thereb...

  20. Reference models and incentive regulation of electricity distribution networks: An evaluation of Sweden's Network Performance Assessment Model (NPAM)

    International Nuclear Information System (INIS)

    Jamasb, Tooraj; Pollitt, Michael

    2008-01-01

    Electricity sector reforms across the world have led to a search for innovative approaches to regulation that promote efficiency in the natural monopoly distribution networks and reduce their service charges. To this aim, a number of countries have adopted incentive regulation models based on efficiency benchmarking. While most regulators have used parametric and non-parametric frontier-based methods of benchmarking, some have adopted engineering-designed 'reference firm' or 'norm' models. This paper examines the incentive properties and related aspects of the reference firm model (NPAM) as used in Sweden and compares this with frontier-based benchmarking methods. We identify a number of important differences between the two approaches that are not readily apparent and discuss their ramifications for the regulatory objectives and process. We conclude that, on balance, the reference models are less appropriate as benchmarks than real firms. Also, the implementation framework based on annual ex-post reviews exacerbates the regulatory problems, mainly by increasing uncertainty and reducing the incentive for innovation.

  1. Nonparametric statistics for social and behavioral sciences

    CERN Document Server

    Kraska-MIller, M

    2013-01-01

    Contents: Introduction to Research in Social and Behavioral Sciences; Basic Principles of Research; Planning for Research; Types of Research Designs; Sampling Procedures; Validity and Reliability of Measurement Instruments; Steps of the Research Process; Introduction to Nonparametric Statistics; Data Analysis; Overview of Nonparametric Statistics and Parametric Statistics; Overview of Parametric Statistics; Overview of Nonparametric Statistics; Importance of Nonparametric Methods; Measurement Instruments; Analysis of Data to Determine Association and Agreement; Pearson Chi-Square Test of Association and Independence; Contingency

  2. Nonparametric e-Mixture Estimation.

    Science.gov (United States)

    Takano, Ken; Hino, Hideitsu; Akaho, Shotaro; Murata, Noboru

    2016-12-01

    This study considers the common situation in data analysis when there are few observations of the distribution of interest or the target distribution, while abundant observations are available from auxiliary distributions. In this situation, it is natural to compensate for the lack of data from the target distribution by using data sets from these auxiliary distributions - in other words, approximating the target distribution in a subspace spanned by a set of auxiliary distributions. Mixture modeling is one of the simplest ways to integrate information from the target and auxiliary distributions in order to express the target distribution as accurately as possible. There are two typical mixtures in the context of information geometry: the m- and e-mixtures. The m-mixture is applied in a variety of research fields because of the presence of the well-known expectation-maximization algorithm for parameter estimation, whereas the e-mixture is rarely used because of its difficulty of estimation, particularly for nonparametric models. The e-mixture, however, is a well-tempered distribution that satisfies the principle of maximum entropy. To model a target distribution with scarce observations accurately, this letter proposes a novel framework for a nonparametric modeling of the e-mixture and a geometrically inspired estimation algorithm. As numerical examples of the proposed framework, a transfer learning setup is considered. The experimental results show that this framework works well for three types of synthetic data sets, as well as an EEG real-world data set.

  3. Variable Selection for Nonparametric Gaussian Process Priors: Models and Computational Strategies.

    Science.gov (United States)

    Savitsky, Terrance; Vannucci, Marina; Sha, Naijun

    2011-02-01

    This paper presents a unified treatment of Gaussian process models that extends to data from the exponential dispersion family and to survival data. Our specific interest is in the analysis of data sets with predictors that have an a priori unknown form of possibly nonlinear associations to the response. The modeling approach we describe incorporates Gaussian processes in a generalized linear model framework to obtain a class of nonparametric regression models where the covariance matrix depends on the predictors. We consider, in particular, continuous, categorical and count responses. We also look into models that account for survival outcomes. We explore alternative covariance formulations for the Gaussian process prior and demonstrate the flexibility of the construction. Next, we focus on the important problem of selecting variables from the set of possible predictors and describe a general framework that employs mixture priors. We compare alternative MCMC strategies for posterior inference and achieve a computationally efficient and practical approach. We demonstrate performances on simulated and benchmark data sets.

  4. Bayesian nonparametric meta-analysis using Polya tree mixture models.

    Science.gov (United States)

    Branscum, Adam J; Hanson, Timothy E

    2008-09-01

    Summary. A common goal in meta-analysis is estimation of a single effect measure using data from several studies that are each designed to address the same scientific inquiry. Because studies are typically conducted in geographically disperse locations, recent developments in the statistical analysis of meta-analytic data involve the use of random effects models that account for study-to-study variability attributable to differences in environments, demographics, genetics, and other sources that lead to heterogeneity in populations. Stemming from asymptotic theory, study-specific summary statistics are modeled according to normal distributions with means representing latent true effect measures. A parametric approach subsequently models these latent measures using a normal distribution, which is strictly a convenient modeling assumption absent of theoretical justification. To eliminate the influence of overly restrictive parametric models on inferences, we consider a broader class of random effects distributions. We develop a novel hierarchical Bayesian nonparametric Polya tree mixture (PTM) model. We present methodology for testing the PTM versus a normal random effects model. These methods provide researchers a straightforward approach for conducting a sensitivity analysis of the normality assumption for random effects. An application involving meta-analysis of epidemiologic studies designed to characterize the association between alcohol consumption and breast cancer is presented, which together with results from simulated data highlight the performance of PTMs in the presence of nonnormality of effect measures in the source population.

  5. Fundamental Symmetries of the Early Universe and the Precision Frontier

    International Nuclear Information System (INIS)

    Ramsey-Musolf, Michael J.

    2009-01-01

    The search for the next Standard Model of fundamental interactions is being carried out at two frontiers: the high energy frontier involving the Tevatron and Large Hadron Collider, and the high precision frontier where the focus is largely on low energy experiments. I discuss the unique and powerful window on new physics provided by the precision frontier and its complementarity to the information we hope to gain from present and future colliders.

  6. Frontier constellations

    DEFF Research Database (Denmark)

    Eilenberg, Michael

    2014-01-01

    expansion, population resettlement and securitization, and the confluence of these dynamic processes creates special frontier constellations. Through the case of the Indonesian-Malaysian borderlands, I explore how processes of frontier colonization through agricultural expansion have been a recurrent...

  7. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  8. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang; Cheng, James; Xiao, Xiaokui; Fujimaki, Ryohei; Muraoka, Yusuke

    2017-01-01

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  9. Frontier spaces

    DEFF Research Database (Denmark)

    Rasmussen, Mattias Borg; Lund, Christian

    2018-01-01

    The global expansion of markets produces frontiers of contestation over the definition and control of resources. In a frontier context, new patterns of resource exploration, extraction, and commodification create new territories. A recently published collection (Rasmussen and Lund 2018) explores...

  10. Smooth semi-nonparametric (SNP) estimation of the cumulative incidence function.

    Science.gov (United States)

    Duc, Anh Nguyen; Wolbers, Marcel

    2017-08-15

    This paper presents a novel approach to estimation of the cumulative incidence function in the presence of competing risks. The underlying statistical model is specified via a mixture factorization of the joint distribution of the event type and the time to the event. The time to event distributions conditional on the event type are modeled using smooth semi-nonparametric densities. One strength of this approach is that it can handle arbitrary censoring and truncation while relying on mild parametric assumptions. A stepwise forward algorithm for model estimation and adaptive selection of smooth semi-nonparametric polynomial degrees is presented, implemented in the statistical software R, evaluated in a sequence of simulation studies, and applied to data from a clinical trial in cryptococcal meningitis. The simulations demonstrate that the proposed method frequently outperforms both parametric and nonparametric alternatives. They also support the use of 'ad hoc' asymptotic inference to derive confidence intervals. An extension to regression modeling is also presented, and its potential and challenges are discussed. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  11. Nonparametric instrumental regression with non-convex constraints

    International Nuclear Information System (INIS)

    Grasmair, M; Scherzer, O; Vanhems, A

    2013-01-01

    This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition. (paper)

  12. Nonparametric instrumental regression with non-convex constraints

    Science.gov (United States)

    Grasmair, M.; Scherzer, O.; Vanhems, A.

    2013-03-01

    This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition.
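Both records above estimate the constrained regression function via Tikhonov regularization. In the linear, finite-dimensional case the regularized estimator has the familiar closed form below; this toy sketch (invented dimensions and noise level, a stand-in for the operator equation behind nonparametric instrumental regression) checks two defining properties: the solution shrinks relative to least squares, and it recovers least squares as the penalty vanishes.

```python
import numpy as np

rng = np.random.default_rng(1)
# Ill-conditioned linear model A x = b with a rapidly decaying spectrum.
n, p = 50, 30
A = rng.normal(size=(n, p)) @ np.diag(1.0 / np.arange(1, p + 1) ** 2)
x_true = np.ones(p)
b = A @ x_true + rng.normal(scale=0.01, size=n)

def tikhonov(A, b, alpha):
    """Minimiser of ||A x - b||^2 + alpha * ||x||^2 (closed form)."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ b)

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]   # unregularised least squares
x_reg = tikhonov(A, b, alpha=1e-3)

shrinks = np.linalg.norm(x_reg) < np.linalg.norm(x_ls)   # ridge path shrinks
limits = np.allclose(tikhonov(A, b, 1e-14), x_ls, atol=1e-4)  # alpha -> 0 limit
print(shrinks, limits)
```

In the papers' infinite-dimensional setting the same penalty stabilizes the inversion of the conditional-expectation operator; the non-convex shape constraints are what require the additional source and smallness conditions.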

  13. Non-parametric Bayesian graph models reveal community structure in resting state fMRI

    DEFF Research Database (Denmark)

    Andersen, Kasper Winther; Madsen, Kristoffer H.; Siebner, Hartwig Roman

    2014-01-01

    Modeling of resting state functional magnetic resonance imaging (rs-fMRI) data using network models is of increasing interest. It is often desirable to group nodes into clusters to interpret the communication patterns between nodes. In this study we consider three different nonparametric Bayesian...... models for node clustering in complex networks. In particular, we test their ability to predict unseen data and their ability to reproduce clustering across datasets. The three generative models considered are the Infinite Relational Model (IRM), Bayesian Community Detection (BCD), and the Infinite...... between clusters. BCD restricts the between-cluster link probabilities to be strictly lower than within-cluster link probabilities to conform to the community structure typically seen in social networks. IDM only models a single between-cluster link probability, which can be interpreted as a background...

  14. Nonparametric factor analysis of time series

    OpenAIRE

    Rodríguez-Poo, Juan M.; Linton, Oliver Bruce

    1998-01-01

    We introduce a nonparametric smoothing procedure for nonparametric factor analysis of multivariate time series. The asymptotic properties of the proposed procedures are derived. We present an application based on the residuals from the Fair macromodel.

  15. Bayesian nonparametric clustering in phylogenetics: modeling antigenic evolution in influenza.

    Science.gov (United States)

    Cybis, Gabriela B; Sinsheimer, Janet S; Bedford, Trevor; Rambaut, Andrew; Lemey, Philippe; Suchard, Marc A

    2018-01-30

    Influenza is responsible for up to 500,000 deaths every year, and antigenic variability represents much of its epidemiological burden. To visualize antigenic differences across many viral strains, antigenic cartography methods use multidimensional scaling on binding assay data to map influenza antigenicity onto a low-dimensional space. Analysis of such assay data ideally leads to natural clustering of influenza strains of similar antigenicity that correlate with sequence evolution. To understand the dynamics of these antigenic groups, we present a framework that jointly models genetic and antigenic evolution by combining multidimensional scaling of binding assay data, Bayesian phylogenetic machinery and nonparametric clustering methods. We propose a phylogenetic Chinese restaurant process that extends the current process to incorporate the phylogenetic dependency structure between strains in the modeling of antigenic clusters. With this method, we are able to use the genetic information to better understand the evolution of antigenicity throughout epidemics, as shown in applications of this model to H1N1 influenza. Copyright © 2017 John Wiley & Sons, Ltd.

  16. A nonparametric empirical Bayes framework for large-scale multiple testing.

    Science.gov (United States)

    Martin, Ryan; Tokdar, Surya T

    2012-07-01

    We propose a flexible and identifiable version of the 2-groups model, motivated by hierarchical Bayes considerations, that features an empirical null and a semiparametric mixture model for the nonnull cases. We use a computationally efficient predictive recursion (PR) marginal likelihood procedure to estimate the model parameters, even the nonparametric mixing distribution. This leads to a nonparametric empirical Bayes testing procedure, which we call PRtest, based on thresholding the estimated local false discovery rates. Simulations and real data examples demonstrate that, compared to existing approaches, PRtest's careful handling of the nonnull density can give a much better fit in the tails of the mixture distribution which, in turn, can lead to more realistic conclusions.
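As a toy illustration of the two-groups idea behind PRtest, the sketch below computes local false discovery rates with a kernel density estimate standing in for the predictive recursion mixture estimate, and with the null proportion pi0 assumed known (0.9) rather than estimated as in the paper. All numbers are invented.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(7)
# Two-groups data: 90% null z-scores ~ N(0,1), 10% non-null ~ N(3,1).
z = np.concatenate([rng.normal(0, 1, 1800), rng.normal(3, 1, 200)])

pi0 = 0.9            # assumed known here; PR estimates it from the data
f = gaussian_kde(z)  # mixture density estimate (KDE stands in for PR)

def local_fdr(t):
    """lfdr(t) = pi0 * f0(t) / f(t), with a standard-normal null, capped at 1."""
    return min(1.0, pi0 * norm.pdf(t) / f(t)[0])

# Nulls near 0 get lfdr near 1; strong signals get small lfdr.
print(local_fdr(0.0), local_fdr(4.0))
```

Thresholding these lfdr values (e.g., flag cases with lfdr below 0.2) gives the testing procedure; the paper's point is that a careful nonparametric fit of the non-null density sharpens exactly the tail behaviour this cap-and-ratio computation depends on.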

  17. A non-parametric consistency test of the ΛCDM model with Planck CMB data

    Energy Technology Data Exchange (ETDEWEB)

    Aghamousa, Amir; Shafieloo, Arman [Korea Astronomy and Space Science Institute, Daejeon 305-348 (Korea, Republic of); Hamann, Jan, E-mail: amir@aghamousa.com, E-mail: jan.hamann@unsw.edu.au, E-mail: shafieloo@kasi.re.kr [School of Physics, The University of New South Wales, Sydney NSW 2052 (Australia)

    2017-09-01

    Non-parametric reconstruction methods, such as Gaussian process (GP) regression, provide a model-independent way of estimating an underlying function and its uncertainty from noisy data. We demonstrate how GP-reconstruction can be used as a consistency test between a given data set and a specific model by looking for structures in the residuals of the data with respect to the model's best-fit. Applying this formalism to the Planck temperature and polarisation power spectrum measurements, we test their global consistency with the predictions of the base ΛCDM model. Our results do not show any serious inconsistencies, lending further support to the interpretation of the base ΛCDM model as cosmology's gold standard.
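The residual-based consistency test described above can be mimicked in a few lines: fit the model under test, then GP-smooth its residuals and look for structure. This is a hand-rolled toy with a fixed kernel on invented data (the paper works with Planck power spectra and marginalizes over GP hyperparameters):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 60)
X = x.reshape(-1, 1)
noise = 0.05

def residual_structure(y):
    """Fit the assumed model (a straight line), then GP-smooth the residuals.
    A posterior mean far from zero signals structure the model missed."""
    coef = np.polyfit(x, y, deg=1)           # the "model" under test
    resid = y - np.polyval(coef, x)
    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=0.2) + WhiteKernel(noise_level=noise**2),
        optimizer=None,                      # fixed kernel keeps the sketch simple
    ).fit(X, resid)
    return np.max(np.abs(gp.predict(X)))

y_consistent = 1.0 + 2.0 * x + rng.normal(0, noise, x.size)      # model is true
y_inconsistent = y_consistent + 1.5 * np.sin(2 * np.pi * x)      # missed term

s_ok = residual_structure(y_consistent)
s_bad = residual_structure(y_inconsistent)
print(s_ok, s_bad)
```

When the model is correct the GP reconstruction of the residuals hugs zero; a missed component shows up as a large, coherent posterior mean, which is the structure the paper's test looks for.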

  18. A Bayesian Nonparametric Approach to Factor Analysis

    DEFF Research Database (Denmark)

    Piatek, Rémi; Papaspiliopoulos, Omiros

    2018-01-01

    This paper introduces a new approach for the inference of non-Gaussian factor models based on Bayesian nonparametric methods. It relaxes the usual normality assumption on the latent factors, widely used in practice, which is too restrictive in many settings. Our approach, on the contrary, does no...

  19. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...

  20. On the critical frontiers of Potts ferromagnets

    International Nuclear Information System (INIS)

    Magalhaes, A.C.N. de; Tsallis, C.

    1981-01-01

    A conjecture concerning the critical frontiers of q-state Potts ferromagnets on d-dimensional lattices (d > 1), which generalizes a recent one stated for planar lattices, is formulated. The present conjecture is verified within satisfactory accuracy (exactly in some cases) for all the lattices or arrays whose critical points are known. Its use leads to the prediction of: a) a considerable number of new approximate critical points (26 on non-planar regular lattices, some others on Husimi trees and cacti); b) approximate critical frontiers for some 3-dimensional lattices; c) the possibly asymptotically exact critical point on regular lattices in the limit d → ∞ for all q ≥ 1; d) the possibly exact critical frontier for the pure Potts model on fully anisotropic Bethe lattices; e) the possibly exact critical frontier for the general quenched random-bond Potts ferromagnet (any P(J)) on isotropic Bethe lattices. (Author)
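For comparison with the conjectured frontiers above, the one exactly known planar benchmark is worth recalling: by self-duality, the critical frontier of the anisotropic q-state Potts ferromagnet on the square lattice (a standard result, not taken from this abstract) is

```latex
\[
  \left(e^{K_1} - 1\right)\left(e^{K_2} - 1\right) = q,
  \qquad K_i = \frac{J_i}{k_B T_c},
\]
which in the isotropic case $K_1 = K_2 = K_c$ reduces to
\[
  e^{K_c} = 1 + \sqrt{q}.
\]
```

Conjectured frontiers of the kind discussed in the entry are typically required to reproduce this exact planar limit.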

  1. Nonparametric modeling of US interest rate term structure dynamics and implications on the prices of derivative securities

    NARCIS (Netherlands)

    Jiang, GJ

    1998-01-01

    This paper develops a nonparametric model of interest rate term structure dynamics based on a spot rate process that permits only positive interest rates and a market price of interest rate risk that precludes arbitrage opportunities. Both the spot rate process and the market price of interest rate

  2. Decision support using nonparametric statistics

    CERN Document Server

    Beatty, Warren

    2018-01-01

    This concise volume covers the nonparametric statistics topics most likely to be seen and used from a practical decision support perspective. While many degree programs require a course in parametric statistics, these methods are often inadequate for real-world decision making in business environments. Much of the data collected today by business executives (for example, customer satisfaction opinions) requires nonparametric statistics for valid analysis, and this book provides the reader with a set of tools that can be used to validly analyze all data, regardless of type. Through numerous examples and exercises, this book explains why nonparametric statistics will lead to better decisions and how they are used to reach a decision, with a wide array of business applications. Online resources include exercise data, spreadsheets, and solutions.

  3. Bayesian nonparametric system reliability using sets of priors

    NARCIS (Netherlands)

    Walter, G.M.; Aslett, L.J.M.; Coolen, F.P.A.

    2016-01-01

    An imprecise Bayesian nonparametric approach to system reliability with multiple types of components is developed. This allows modelling partial or imperfect prior knowledge on component failure distributions in a flexible way through bounds on the functioning probability. Given component level test

  4. Nonparametric predictive inference in statistical process control

    NARCIS (Netherlands)

    Arts, G.R.J.; Coolen, F.P.A.; Laan, van der P.

    2004-01-01

    Statistical process control (SPC) is used to decide when to stop a process as confidence in the quality of the next item(s) is low. Information to specify a parametric model is not always available, and as SPC is of a predictive nature, we present a control chart developed using nonparametric

  5. Observed and unobserved heterogeneity in stochastic frontier models: An application to the electricity distribution industry

    International Nuclear Information System (INIS)

    Kopsakangas-Savolainen, Maria; Svento, Rauli

    2011-01-01

    In this study we combine different possibilities to model firm-level heterogeneity in stochastic frontier analysis. We show that both observed and unobserved heterogeneity cause serious biases in inefficiency results. Modelling observed and unobserved heterogeneity treats individual firms in different ways, and even though the expected mean inefficiency scores diminish in both cases, the firm-level efficiency rank orders turn out to be very different. The best fit with the data is obtained by modelling unobserved heterogeneity through randomizing frontier parameters and at the same time explicitly modelling the observed heterogeneity into the inefficiency distribution. These results are obtained by using data from Finnish electricity distribution utilities and are relevant to electricity distribution pricing and regulation. -- Research Highlights: → We show that both observed and unobserved heterogeneity of firms cause biases in inefficiency results. → Different ways of accounting for firm-level heterogeneity end up with very different rank orders of firms. → The model which combines the characteristics of unobserved and observed heterogeneity fits the data best.
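Before heterogeneity enters, the baseline model in this literature is the normal/half-normal stochastic frontier. Below is a minimal maximum-likelihood sketch on synthetic data using the textbook Aigner-Lovell-Schmidt composed-error likelihood, not the randomized-parameters specification the paper ends up preferring; all numbers are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
# Synthetic production frontier: y = b0 + b1*x + v - u, with
# v ~ N(0, sigma_v^2) noise and u ~ |N(0, sigma_u^2)| inefficiency.
n = 500
x = rng.uniform(0, 2, n)
v = rng.normal(0, 0.1, n)
u = np.abs(rng.normal(0, 0.3, n))
y = 1.0 + 1.0 * x + v - u

def negll(theta):
    """Negative log-likelihood of the normal/half-normal frontier model:
    f(eps) = (2/sigma) * phi(eps/sigma) * Phi(-eps*lambda/sigma)."""
    b0, b1, log_sv, log_su = theta
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma = np.hypot(sv, su)
    lam = su / sv
    eps = y - b0 - b1 * x                    # composed error v - u
    ll = (np.log(2.0 / sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -np.sum(ll)

res = minimize(negll, x0=[0.5, 0.5, np.log(0.2), np.log(0.2)],
               method="Nelder-Mead", options={"maxiter": 5000})
b0_hat, b1_hat = res.x[:2]
print(b0_hat, b1_hat)
```

The paper's point is that this baseline attributes all firm differences to the one-sided term u; letting the frontier parameters vary across firms and pushing observed characteristics into the distribution of u changes both the mean inefficiency and the rank order of firms.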

  6. Bayesian Nonparametric Clustering for Positive Definite Matrices.

    Science.gov (United States)

    Cherian, Anoop; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2016-05-01

    Symmetric Positive Definite (SPD) matrices emerge as data descriptors in several applications of computer vision such as object tracking, texture recognition, and diffusion tensor imaging. Clustering these data matrices forms an integral part of these applications, for which soft-clustering algorithms (K-Means, expectation maximization, etc.) are generally used. As is well known, these algorithms need the number of clusters to be specified, which is difficult when the dataset scales. To address this issue, we resort to the classical nonparametric Bayesian framework by modeling the data as a mixture model using the Dirichlet process (DP) prior. Since these matrices do not conform to the Euclidean geometry, but rather belong to a curved Riemannian manifold, existing DP models cannot be directly applied. Thus, in this paper, we propose a novel DP mixture model framework for SPD matrices. Using the log-determinant divergence as the underlying dissimilarity measure to compare these matrices, and further using the connection between this measure and the Wishart distribution, we derive a novel DPM model based on the Wishart-Inverse-Wishart conjugate pair. We apply this model to several applications in computer vision. Our experiments demonstrate that our model is scalable to the dataset size and at the same time achieves superior accuracy compared to several state-of-the-art parametric and nonparametric clustering algorithms.
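The dissimilarity at the heart of the model above can be written down directly. A sketch of the symmetric log-determinant (Stein) divergence between SPD matrices, assuming the symmetrized form of the measure; the random test matrices are invented:

```python
import numpy as np

def stein_divergence(X, Y):
    """Symmetric log-determinant (Stein) divergence between SPD matrices:
    S(X, Y) = log det((X + Y) / 2) - (1/2) * log det(X Y)."""
    _, logdet_mid = np.linalg.slogdet((X + Y) / 2.0)
    _, logdet_x = np.linalg.slogdet(X)
    _, logdet_y = np.linalg.slogdet(Y)
    return logdet_mid - 0.5 * (logdet_x + logdet_y)

rng = np.random.default_rng(2)
A_ = rng.normal(size=(4, 4))
B_ = rng.normal(size=(4, 4))
X = A_ @ A_.T + 4 * np.eye(4)   # shift the spectrum to guarantee SPD
Y = B_ @ B_.T + 4 * np.eye(4)

print(stein_divergence(X, X))   # zero for identical matrices
print(stein_divergence(X, Y))   # positive, and symmetric in its arguments
```

It is this divergence's closed-form link to the Wishart density that makes the Wishart-Inverse-Wishart conjugate pair tractable inside the DP mixture.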

  7. Theory of nonparametric tests

    CERN Document Server

    Dickhaus, Thorsten

    2018-01-01

    This textbook provides a self-contained presentation of the main concepts and methods of nonparametric statistical testing, with a particular focus on the theoretical foundations of goodness-of-fit tests, rank tests, resampling tests, and projection tests. The substitution principle is employed as a unified approach to the nonparametric test problems discussed. In addition to mathematical theory, it also includes numerous examples and computer implementations. The book is intended for advanced undergraduate, graduate, and postdoc students as well as young researchers. Readers should be familiar with the basic concepts of mathematical statistics typically covered in introductory statistics courses.

  8. Does the high–tech industry consistently reduce CO{sub 2} emissions? Results from nonparametric additive regression model

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Bin [School of Statistics, Jiangxi University of Finance and Economics, Nanchang, Jiangxi 330013 (China); Research Center of Applied Statistics, Jiangxi University of Finance and Economics, Nanchang, Jiangxi 330013 (China); Lin, Boqiang, E-mail: bqlin@xmu.edu.cn [Collaborative Innovation Center for Energy Economics and Energy Policy, China Institute for Studies in Energy Policy, Xiamen University, Xiamen, Fujian 361005 (China)

    2017-03-15

    China is currently the world's largest carbon dioxide (CO{sub 2}) emitter. Moreover, total energy consumption and CO{sub 2} emissions in China will continue to increase due to the rapid growth of industrialization and urbanization. Therefore, vigorously developing the high–tech industry becomes an inevitable choice for reducing CO{sub 2} emissions now and in the future. However, ignoring the existing nonlinear links between economic variables, most scholars use traditional linear models to explore the impact of the high–tech industry on CO{sub 2} emissions from an aggregate perspective. Few studies have focused on nonlinear relationships and regional differences in China. Based on panel data for 1998–2014, this study uses the nonparametric additive regression model to explore the nonlinear effect of the high–tech industry from a regional perspective. The estimated results show that the residual sums of squares (SSR) of the nonparametric additive regression model in the eastern, central and western regions are 0.693, 0.054 and 0.085 respectively, which are much less than those of the traditional linear regression model (3.158, 4.227 and 7.196). This verifies that the nonparametric additive regression model has a better fitting effect. Specifically, the high–tech industry produces an inverted “U–shaped” nonlinear impact on CO{sub 2} emissions in the eastern region, but a positive “U–shaped” nonlinear effect in the central and western regions. Therefore, the nonlinear impact of the high–tech industry on CO{sub 2} emissions in the three regions should be given adequate attention in developing effective abatement policies. - Highlights: • The nonlinear effect of the high–tech industry on CO{sub 2} emissions was investigated. • The high–tech industry yields an inverted “U–shaped” effect in the eastern region. • The high–tech industry has a positive “U–shaped” nonlinear effect in other regions. • The linear impact
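The SSR comparison reported above can be illustrated in miniature: with a single nonlinearly-acting regressor, a nonparametric smoother should leave a much smaller residual sum of squares than a straight-line fit. The sketch below uses simulated data and a Gaussian kernel smoother as a stand-in for one component of an additive model; the data, bandwidth, and seed are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-2.0, 2.0, 200)
y = x**2 + rng.normal(0.0, 0.3, 200)  # U-shaped relationship plus noise

# Linear fit and its residual sum of squares (SSR)
b = np.polyfit(x, y, 1)
ssr_linear = np.sum((y - np.polyval(b, x)) ** 2)

# A simple Gaussian kernel smoother, standing in for one additive component
def smooth(x0, x, y, h=0.3):
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return w @ y / w.sum()

fitted = np.array([smooth(xi, x, y) for xi in x])
ssr_nonparametric = np.sum((y - fitted) ** 2)
# The smoother tracks the U-shape, so ssr_nonparametric is far below ssr_linear.
```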

  9. Uncertainty in decision models analyzing cost-effectiveness : The joint distribution of incremental costs and effectiveness evaluated with a nonparametric bootstrap method

    NARCIS (Netherlands)

    Hunink, Maria; Bult, J.R.; De Vries, J; Weinstein, MC

    1998-01-01

    Purpose. To illustrate the use of a nonparametric bootstrap method in the evaluation of uncertainty in decision models analyzing cost-effectiveness. Methods. The authors reevaluated a previously published cost-effectiveness analysis that used a Markov model comparing initial percutaneous
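A minimal version of the bootstrap described above resamples patients with replacement within each strategy and records the incremental cost and incremental effectiveness of every replicate, giving their joint distribution on the cost-effectiveness plane. The per-patient data below are simulated placeholders, not the study's Markov-model outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-patient (cost, effectiveness) samples for two strategies;
# in the study these would come from the decision model, not normal draws.
cost_a, eff_a = rng.normal(10_000, 2_000, 200), rng.normal(6.0, 1.0, 200)
cost_b, eff_b = rng.normal(12_000, 2_500, 200), rng.normal(6.5, 1.1, 200)

def bootstrap_joint(cost_a, eff_a, cost_b, eff_b, n_boot=2000, rng=rng):
    """Nonparametric bootstrap of the joint distribution of incremental
    cost and incremental effectiveness (strategy B minus strategy A)."""
    out = np.empty((n_boot, 2))
    for i in range(n_boot):
        ia = rng.integers(0, len(cost_a), len(cost_a))
        ib = rng.integers(0, len(cost_b), len(cost_b))
        out[i, 0] = cost_b[ib].mean() - cost_a[ia].mean()  # delta cost
        out[i, 1] = eff_b[ib].mean() - eff_a[ia].mean()    # delta effect
    return out

pairs = bootstrap_joint(cost_a, eff_a, cost_b, eff_b)
# Share of replicates in the "more costly, more effective" quadrant:
frac_ne = np.mean((pairs[:, 0] > 0) & (pairs[:, 1] > 0))
```

Plotting the replicate pairs gives the familiar scatter on the cost-effectiveness plane, from which quadrant probabilities and acceptability curves follow directly.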

  10. Examples of the Application of Nonparametric Information Geometry to Statistical Physics

    Directory of Open Access Journals (Sweden)

    Giovanni Pistone

    2013-09-01

    Full Text Available We review a nonparametric version of Amari’s information geometry in which the set of positive probability densities on a given sample space is endowed with an atlas of charts to form a differentiable manifold modeled on Orlicz Banach spaces. This nonparametric setting is then used to discuss typical problems in machine learning and statistical physics, such as black-box optimization, Kullback-Leibler divergence, Boltzmann-Gibbs entropy and the Boltzmann equation.

  11. Reconfiguring frontier spaces

    DEFF Research Database (Denmark)

    Rasmussen, Mattias Borg; Lund, Christian

    2018-01-01

    The expansion of capitalism produces contests over the definition and control of resources. On a global scale, new patterns of resource exploration, extraction, and commodification create new territories. This takes place within a dynamic of frontiers and territorialization. Frontier dynamics...

  12. On the Endogeneity of the Mean-Variance Efficient Frontier.

    Science.gov (United States)

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  13. Nonparametric statistics with applications to science and engineering

    CERN Document Server

    Kvam, Paul H

    2007-01-01

    A thorough and definitive book that fully addresses traditional and modern-day topics of nonparametric statistics This book presents a practical approach to nonparametric statistical analysis and provides comprehensive coverage of both established and newly developed methods. With the use of MATLAB, the authors present information on theorems and rank tests in an applied fashion, with an emphasis on modern methods in regression and curve fitting, bootstrap confidence intervals, splines, wavelets, empirical likelihood, and goodness-of-fit testing. Nonparametric Statistics with Applications to Science and Engineering begins with succinct coverage of basic results for order statistics, methods of categorical data analysis, nonparametric regression, and curve fitting methods. The authors then focus on nonparametric procedures that are becoming more relevant to engineering researchers and practitioners. The important fundamental materials needed to effectively learn and apply the discussed methods are also provide...

  14. Bayesian nonparametric dictionary learning for compressed sensing MRI.

    Science.gov (United States)

    Huang, Yue; Paisley, John; Lin, Qin; Ding, Xinghao; Fu, Xueyang; Zhang, Xiao-Ping

    2014-12-01

    We develop a Bayesian nonparametric model for reconstructing magnetic resonance images (MRIs) from highly undersampled k -space data. We perform dictionary learning as part of the image reconstruction process. To this end, we use the beta process as a nonparametric dictionary learning prior for representing an image patch as a sparse combination of dictionary elements. The size of the dictionary and patch-specific sparsity pattern are inferred from the data, in addition to other dictionary learning variables. Dictionary learning is performed directly on the compressed image, and so is tailored to the MRI being considered. In addition, we investigate a total variation penalty term in combination with the dictionary learning model, and show how the denoising property of dictionary learning removes dependence on regularization parameters in the noisy setting. We derive a stochastic optimization algorithm based on Markov chain Monte Carlo for the Bayesian model, and use the alternating direction method of multipliers for efficiently performing total variation minimization. We present empirical results on several MRIs, which show that the proposed regularization framework can improve reconstruction accuracy over other methods.

  15. Non-parametric Tuning of PID Controllers A Modified Relay-Feedback-Test Approach

    CERN Document Server

    Boiko, Igor

    2013-01-01

    The relay feedback test (RFT) has become a popular and efficient tool used in process identification and automatic controller tuning. Non-parametric Tuning of PID Controllers couples new modifications of classical RFT with application-specific optimal tuning rules to form a non-parametric method of test-and-tuning. Test and tuning are coordinated through a set of common parameters so that a PID controller can obtain the desired gain or phase margins in a system exactly, even with unknown process dynamics. The concept of process-specific optimal tuning rules in the nonparametric setup, with corresponding tuning rules for flow, level, pressure, and temperature control loops, is presented in the text. Common problems of tuning accuracy based on parametric and non-parametric approaches are addressed. In addition, the text treats the parametric approach to tuning based on the modified RFT approach and the exact model of oscillations in the system under test using the locus of a perturbed relay system (LPRS) meth...

  16. Nonparametric Regression Estimation for Multivariate Null Recurrent Processes

    Directory of Open Access Journals (Sweden)

    Biqing Cai

    2015-04-01

    Full Text Available This paper discusses nonparametric kernel regression with the regressor being a \(d\)-dimensional \(\beta\)-null recurrent process in the presence of conditional heteroscedasticity. We show that the mean function estimator is consistent with convergence rate \(\sqrt{n(T)h^{d}}\), where \(n(T)\) is the number of regenerations for a \(\beta\)-null recurrent process, and that the limiting distribution (with proper normalization) is normal. Furthermore, we show that the two-step estimator for the volatility function is consistent. The finite sample performance of the estimate is quite reasonable when the leave-one-out cross-validation method is used for bandwidth selection. We apply the proposed method to study the relationship of the Federal funds rate with 3-month and 5-year T-bill rates and discover the existence of nonlinearity in the relationship. Furthermore, the in-sample and out-of-sample performance of the nonparametric model is far better than that of the linear model.
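As a sketch of the two ingredients mentioned above, kernel regression and leave-one-out bandwidth selection, a Nadaraya-Watson estimator with a Gaussian kernel can be written as follows. The simulated sine-curve data and the candidate bandwidth grid are invented, and nothing here reproduces the paper's null-recurrent asymptotics.

```python
import numpy as np

def nw_estimate(x0, x, y, h):
    """Nadaraya-Watson kernel regression estimate at x0 (Gaussian kernel)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def loo_cv_bandwidth(x, y, grid):
    """Pick the bandwidth minimizing leave-one-out squared prediction error."""
    best_h, best_err = None, np.inf
    for h in grid:
        err = 0.0
        for i in range(len(x)):
            mask = np.arange(len(x)) != i  # drop observation i
            err += (y[i] - nw_estimate(x[i], x[mask], y[mask], h)) ** 2
        if err < best_err:
            best_h, best_err = h, err
    return best_h

rng = np.random.default_rng(1)
x = rng.uniform(0, 2 * np.pi, 100)
y = np.sin(x) + rng.normal(0, 0.2, 100)
h = loo_cv_bandwidth(x, y, [0.1, 0.2, 0.4, 0.8])
```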

  17. Using nonparametrics to specify a model to measure the value of travel time

    DEFF Research Database (Denmark)

    Fosgerau, Mogens

    2007-01-01

    Using a range of nonparametric methods, the paper examines the specification of a model to evaluate the willingness-to-pay (WTP) for travel time changes from binomial choice data from a simple time-cost trading experiment. The analysis favours a model with random WTP as the only source of randomness over a model with fixed WTP which is linear in time and cost and has an additive random error term. Results further indicate that the distribution of log WTP can be described as a sum of a linear index fixing the location of the log WTP distribution and an independent random variable representing unobserved heterogeneity. This formulation is useful for parametric modelling. The index indicates that the WTP varies systematically with income and other individual characteristics. The WTP varies also with the time difference presented in the experiment, which is in contradiction of standard utility theory.

  18. Weak Disposability in Nonparametric Production Analysis with Undesirable Outputs

    NARCIS (Netherlands)

    Kuosmanen, T.K.

    2005-01-01

    Environmental Economics and Natural Resources Group at Wageningen University in The Netherlands Weak disposability of outputs means that firms can abate harmful emissions by decreasing the activity level. Modeling weak disposability in nonparametric production analysis has caused some confusion.

  19. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    Science.gov (United States)

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of Bangladesh Stock Market that is the Dhaka Stock Exchange (DSE) market, using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas Stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that truncated normal distribution is preferable to half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in time-varying environment whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in time-invariant situation.
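A common back-of-the-envelope relative of the stochastic frontier above is corrected OLS (COLS), which shifts an OLS Cobb-Douglas fit upward to envelop the data and reads technical efficiency off the residuals. The sketch below is a simplified deterministic-frontier stand-in, not the paper's truncated-normal/half-normal SFA; all numbers are simulated.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Simulated Cobb-Douglas data in logs: log y = b0 + b1*log x + v - u,
# with symmetric noise v and half-normal technical inefficiency u >= 0.
log_x = rng.uniform(0.0, 3.0, n)
v = rng.normal(0.0, 0.1, n)
u = np.abs(rng.normal(0.0, 0.3, n))
log_y = 1.0 + 0.6 * log_x + v - u

# Corrected OLS: fit by least squares, then shift the intercept so the
# line envelops the data from above; efficiency = exp(shifted residual).
X = np.column_stack([np.ones(n), log_x])
beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)
resid = log_y - X @ beta
efficiency = np.exp(resid - resid.max())  # technical efficiency in (0, 1]
```

A stochastic frontier model would instead estimate the variance components of v and u by maximum likelihood and recover firm-level efficiency from the conditional distribution of u given the composed residual.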

  20. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    Directory of Open Access Journals (Sweden)

    Md Zobaer Hasan

    Full Text Available The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of Bangladesh Stock Market that is the Dhaka Stock Exchange (DSE) market, using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas Stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that truncated normal distribution is preferable to half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in time-varying environment whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in time-invariant situation.

  1. Non-parametric Estimation of Diffusion-Paths Using Wavelet Scaling Methods

    DEFF Research Database (Denmark)

    Høg, Esben

    In continuous time, diffusion processes have been used for modelling financial dynamics for a long time. For example the Ornstein-Uhlenbeck process (the simplest mean-reverting process) has been used to model non-speculative price processes. We discuss non-parametric estimation of these processes...

  2. Non-Parametric Estimation of Diffusion-Paths Using Wavelet Scaling Methods

    DEFF Research Database (Denmark)

    Høg, Esben

    2003-01-01

    In continuous time, diffusion processes have been used for modelling financial dynamics for a long time. For example the Ornstein-Uhlenbeck process (the simplest mean-reverting process) has been used to model non-speculative price processes. We discuss non-parametric estimation of these processes...

  3. Non-parametric estimation of the individual's utility map

    OpenAIRE

    Noguchi, Takao; Sanborn, Adam N.; Stewart, Neil

    2013-01-01

    Models of risky choice have attracted much attention in behavioural economics. Previous research has repeatedly demonstrated that individuals' choices are not well explained by expected utility theory, and a number of alternative models have been examined using carefully selected sets of choice alternatives. The model performance, however, can depend on which choice alternatives are being tested. Here we develop a non-parametric method for estimating the utility map over the wide range of choi...

  4. A Study on Neutrosophic Frontier and Neutrosophic Semi-frontier in Neutrosophic Topological Spaces

    Directory of Open Access Journals (Sweden)

    P. Iswarya

    2017-02-01

    Full Text Available In this paper neutrosophic frontier and neutrosophic semi-frontier in neutrosophic topology are introduced and several of their properties, characterizations and examples are established.

  5. Karakteristik Kurva Efisien Frontier dalam Menentukan Portofolio Optimal

    Directory of Open Access Journals (Sweden)

    epha diana supandi

    2016-06-01

    Full Text Available In this paper, the characteristics of the efficient frontier curve in the Markowitz portfolio model are examined mathematically. The optimal portfolio is obtained using the Lagrange method. This study also examines the characteristics of the optimal portfolio for the minimum-variance portfolio, the tangency portfolio and the mean-variance portfolio, as well as their positions on the efficient frontier curve. Furthermore, to give a more concrete illustration, a numerical example is presented for several stocks traded on the Indonesian capital market.
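The Lagrange derivation referred to in the abstract has a well-known closed form: for covariance matrix Sigma, the global minimum-variance weights are Sigma^{-1} 1 / (1' Sigma^{-1} 1), and the tangency portfolio is proportional to Sigma^{-1} (mu - rf * 1). The three-asset covariance matrix, mean returns, and risk-free rate below are invented for illustration.

```python
import numpy as np

# Hypothetical covariance matrix and mean returns for three assets.
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
mu = np.array([0.06, 0.09, 0.12])
ones = np.ones(3)
inv = np.linalg.inv(Sigma)

# Global minimum-variance portfolio from the Lagrangian first-order
# conditions: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
w_mv = inv @ ones / (ones @ inv @ ones)

# Tangency portfolio for risk-free rate rf: w proportional to
# Sigma^{-1} (mu - rf * 1), rescaled so the weights sum to one.
rf = 0.02
w_tan = inv @ (mu - rf * ones)
w_tan = w_tan / w_tan.sum()
```

Sweeping the target return between the minimum-variance and tangency solutions traces out the efficient frontier discussed in the paper.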

  6. Introduction to nonparametric statistics for the biological sciences using R

    CERN Document Server

    MacFarland, Thomas W

    2016-01-01

    This book contains a rich set of tools for nonparametric analyses, and the purpose of this supplemental text is to provide guidance to students and professional researchers on how R is used for nonparametric data analysis in the biological sciences: To introduce when nonparametric approaches to data analysis are appropriate To introduce the leading nonparametric tests commonly used in biostatistics and how R is used to generate appropriate statistics for each test To introduce common figures typically associated with nonparametric data analysis and how R is used to generate appropriate figures in support of each data set The book focuses on how R is used to distinguish between data that could be classified as nonparametric as opposed to data that could be classified as parametric, with both approaches to data classification covered extensively. Following an introductory lesson on nonparametric statistics for the biological sciences, the book is organized into eight self-contained lessons on various analyses a...

  7. Efficient Provision of Employment Service Outputs: A Production Frontier Analysis.

    Science.gov (United States)

    Cavin, Edward S.; Stafford, Frank P.

    1985-01-01

    This article develops a production frontier model for the Employment Service and assesses the relative efficiency of the 51 State Employment Security Agencies in attaining program outcomes close to that frontier. This approach stands in contrast to such established practices as comparing programs to their own previous performance. (Author/CT)

  8. International Conference on Robust Rank-Based and Nonparametric Methods

    CERN Document Server

    McKean, Joseph

    2016-01-01

    The contributors to this volume include many of the distinguished researchers in this area. Many of these scholars have collaborated with Joseph McKean to develop underlying theory for these methods, obtain small sample corrections, and develop efficient algorithms for their computation. The papers cover the scope of the area, including robust nonparametric rank-based procedures through Bayesian and big data rank-based analyses. Areas of application include biostatistics and spatial areas. Over the last 30 years, robust rank-based and nonparametric methods have developed considerably. These procedures generalize traditional Wilcoxon-type methods for one- and two-sample location problems. Research into these procedures has culminated in complete analyses for many of the models used in practice including linear, generalized linear, mixed, and nonlinear models. Settings are both multivariate and univariate. With the development of R packages in these areas, computation of these procedures is easily shared with r...

  9. Two-component mixture cure rate model with spline estimated nonparametric components.

    Science.gov (United States)

    Wang, Lu; Du, Pang; Liang, Hua

    2012-09-01

    In some survival analysis of medical studies, there are often long-term survivors who can be considered as permanently cured. The goals in these studies are to estimate the noncured probability of the whole population and the hazard rate of the susceptible subpopulation. When covariates are present as often happens in practice, to understand covariate effects on the noncured probability and hazard rate is of equal importance. The existing methods are limited to parametric and semiparametric models. We propose a two-component mixture cure rate model with nonparametric forms for both the cure probability and the hazard rate function. Identifiability of the model is guaranteed by an additive assumption that allows no time-covariate interactions in the logarithm of hazard rate. Estimation is carried out by an expectation-maximization algorithm on maximizing a penalized likelihood. For inferential purpose, we apply the Louis formula to obtain point-wise confidence intervals for noncured probability and hazard rate. Asymptotic convergence rates of our function estimates are established. We then evaluate the proposed method by extensive simulations. We analyze the survival data from a melanoma study and find interesting patterns for this study. © 2011, The International Biometric Society.

  10. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    Science.gov (United States)

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.

  11. The relationship between multilevel models and non-parametric multilevel mixture models: Discrete approximation of intraclass correlation, random coefficient distributions, and residual heteroscedasticity.

    Science.gov (United States)

    Rights, Jason D; Sterba, Sonya K

    2016-11-01

    Multilevel data structures are common in the social sciences. Often, such nested data are analysed with multilevel models (MLMs) in which heterogeneity between clusters is modelled by continuously distributed random intercepts and/or slopes. Alternatively, the non-parametric multilevel regression mixture model (NPMM) can accommodate the same nested data structures through discrete latent class variation. The purpose of this article is to delineate analytic relationships between NPMM and MLM parameters that are useful for understanding the indirect interpretation of the NPMM as a non-parametric approximation of the MLM, with relaxed distributional assumptions. We define how seven standard and non-standard MLM specifications can be indirectly approximated by particular NPMM specifications. We provide formulas showing how the NPMM can serve as an approximation of the MLM in terms of intraclass correlation, random coefficient means and (co)variances, heteroscedasticity of residuals at level 1, and heteroscedasticity of residuals at level 2. Further, we discuss how these relationships can be useful in practice. The specific relationships are illustrated with simulated graphical demonstrations, and direct and indirect interpretations of NPMM classes are contrasted. We provide an R function to aid in implementing and visualizing an indirect interpretation of NPMM classes. An empirical example is presented and future directions are discussed. © 2016 The British Psychological Society.

  12. International comparisons of the technical efficiency of the hospital sector: panel data analysis of OECD countries using parametric and non-parametric approaches.

    Science.gov (United States)

    Varabyova, Yauheniya; Schreyögg, Jonas

    2013-09-01

    There is a growing interest in the cross-country comparisons of the performance of national health care systems. The present work provides a comparison of the technical efficiency of the hospital sector using unbalanced panel data from OECD countries over the period 2000-2009. The estimation of the technical efficiency of the hospital sector is performed using nonparametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA). Internal and external validity of findings is assessed by estimating the Spearman rank correlations between the results obtained in different model specifications. The panel-data analyses using two-step DEA and one-stage SFA show that countries with higher health care expenditure per capita tend to have a more technically efficient hospital sector. Whether the expenditure is financed through private or public sources is not related to the technical efficiency of the hospital sector. On the other hand, the hospital sector in countries with higher income inequality and longer average hospital length of stay is less technically efficient. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  13. Frontier Scientists use Modern Media

    Science.gov (United States)

    O'connell, E. A.

    2013-12-01

    Engaging Americans and the international community in the excitement and value of Alaskan Arctic discovery is the goal of Frontier Scientists. With a changing climate, resources of polar regions are being eyed by many nations. Frontier Scientists brings the stories of field scientists in the Far North to the public. With a website, an app, short videos, and social media channels, FS is a model for making connections between the public and field scientists. FS will demonstrate how academia, web content, online communities, evaluation and marketing are brought together in a 21st century multi-media platform; how scientists can maintain their integrity while engaging in outreach; and how new forms of media such as short videos can entertain as well as inspire.

  14. Trade Performance and Potential of the Philippines: An Application of Stochastic Frontier Gravity Model

    OpenAIRE

    Deluna, Roperto Jr

    2013-01-01

    This study was conducted to investigate the issue of what Philippine merchandise trade flows would be if countries operated at the frontier of the gravity model. The study sought to estimate the coefficients of the gravity model. The estimated coefficients were used to estimate merchandise export potentials and technical efficiency of each country in the sample and these were also aggregated to measure impact of country groups, RTAs and inter-regional trading agreements. Result of the ...

  15. On Cooper's Nonparametric Test.

    Science.gov (United States)

    Schmeidler, James

    1978-01-01

    The basic assumption of Cooper's nonparametric test for trend (EJ 125 069) is questioned. It is contended that the proper assumption alters the distribution of the statistic and reduces its usefulness. (JKS)

  16. Bayesian nonparametric generative models for causal inference with missing at random covariates.

    Science.gov (United States)

    Roy, Jason; Lum, Kirsten J; Zeldow, Bret; Dworkin, Jordan D; Re, Vincent Lo; Daniels, Michael J

    2018-03-26

    We propose a general Bayesian nonparametric (BNP) approach to causal inference in the point treatment setting. The joint distribution of the observed data (outcome, treatment, and confounders) is modeled using an enriched Dirichlet process. The combination of the observed data model and causal assumptions allows us to identify any type of causal effect-differences, ratios, or quantile effects, either marginally or for subpopulations of interest. The proposed BNP model is well-suited for causal inference problems, as it does not require parametric assumptions about the distribution of confounders and naturally leads to a computationally efficient Gibbs sampling algorithm. By flexibly modeling the joint distribution, we are also able to impute (via data augmentation) values for missing covariates within the algorithm under an assumption of ignorable missingness, obviating the need to create separate imputed data sets. This approach for imputing the missing covariates has the additional advantage of guaranteeing congeniality between the imputation model and the analysis model, and because we use a BNP approach, parametric models are avoided for imputation. The performance of the method is assessed using simulation studies. The method is applied to data from a cohort study of human immunodeficiency virus/hepatitis C virus co-infected patients. © 2018, The International Biometric Society.

  17. Nonparametric estimation of the heterogeneity of a random medium using compound Poisson process modeling of wave multiple scattering.

    Science.gov (United States)

    Le Bihan, Nicolas; Margerin, Ludovic

    2009-07-01

    In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.

  18. Nonparametric Bayesian density estimation on manifolds with applications to planar shapes.

    Science.gov (United States)

    Bhattacharya, Abhishek; Dunson, David B

    2010-12-01

    Statistical analysis on landmark-based shape spaces has diverse applications in morphometrics, medical diagnostics, machine vision and other areas. These shape spaces are non-Euclidean quotient manifolds. To conduct nonparametric inferences, one may define notions of centre and spread on this manifold and work with their estimates. However, it is useful to consider full likelihood-based methods, which allow nonparametric estimation of the probability density. This article proposes a broad class of mixture models constructed using suitable kernels on a general compact metric space and then on the planar shape space in particular. Following a Bayesian approach with a nonparametric prior on the mixing distribution, conditions are obtained under which the Kullback-Leibler property holds, implying large support and weak posterior consistency. Gibbs sampling methods are developed for posterior computation, and the methods are applied to problems in density estimation and classification with shape-based predictors. Simulation studies show improved estimation performance relative to existing approaches.

  19. Genomic outlier profile analysis: mixture models, null hypotheses, and nonparametric estimation.

    Science.gov (United States)

    Ghosh, Debashis; Chinnaiyan, Arul M

    2009-01-01

    In most analyses of large-scale genomic data sets, differential expression analysis is typically assessed by testing for differences in the mean of the distributions between 2 groups. A recent finding by Tomlins and others (2005) is of a different type of pattern of differential expression in which a fraction of samples in one group have overexpression relative to samples in the other group. In this work, we describe a general mixture model framework for the assessment of this type of expression, called outlier profile analysis. We start by considering the single-gene situation and establishing results on identifiability. We propose 2 nonparametric estimation procedures that have natural links to familiar multiple testing procedures. We then develop multivariate extensions of this methodology to handle genome-wide measurements. The proposed methodologies are compared using simulation studies as well as data from a prostate cancer gene expression study.

  20. Stochastic Production Frontier Models to Explore Constraints on Household Travel Expenditures Considering Household Income Classes

    Directory of Open Access Journals (Sweden)

    Sofyan M. Saleh

    2016-04-01

    Full Text Available This paper explores the variation of household travel expenditure frontiers (HTEFs) prior to CC reform in Jakarta. This study incorporates the variation of household income classes into the modeling of HTEFs and investigates the degree to which various determinants influence levels of HTEF. The HTEF is defined as an unseen maximum (capacity) amount of money that a certain income class is willing to dedicate to its travel. A stochastic production frontier is applied to model and explore the upper bound of household travel expenditure (HTE). Using a comprehensive household travel survey (HTS) conducted in Jakarta in 2004, the observed monthly HTE spending is treated as an exogenous variable. The estimation results obtained using three proposed models, for low, medium, and high income classes, show that HTEFs are significantly associated with life stage structure attributes, socio-demographics, and life environment factors such as professional activity engagements, whose effects are found to vary across income classes. Findings further reveal considerable differences in average HTEFs across the models. This calls for the formulation of policies that address the needs of low- and medium-income groups in order to promote greater equity, thereby leading to more acceptable CC reform.
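
    The stochastic production frontier used above can be illustrated concretely. The sketch below assumes the standard normal/half-normal composed-error specification (Aigner-Lovell-Schmidt), not necessarily the exact model of the paper, and fits it by maximum likelihood to synthetic data; all variable names and parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 2, n)
v = rng.normal(0, 0.2, n)           # symmetric statistical noise
u = np.abs(rng.normal(0, 0.4, n))   # one-sided inefficiency term
y = 1.0 + 0.8 * x + v - u           # observed spending below the frontier

def neg_loglik(theta):
    b0, b1, log_sv, log_su = theta
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma = np.hypot(sv, su)        # sqrt(sv^2 + su^2)
    lam = su / sv
    eps = y - b0 - b1 * x
    # composed-error density: f(eps) = (2/sigma) * phi(eps/sigma) * Phi(-eps*lam/sigma)
    ll = (np.log(2.0 / sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

res = minimize(neg_loglik, x0=[0.5, 0.5, np.log(0.3), np.log(0.3)],
               method="Nelder-Mead")
b1_hat = res.x[1]                   # frontier slope estimate
```

    The fitted frontier gives the maximum (capacity) expenditure for each covariate value, with the one-sided term capturing how far each household falls below it.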

  1. A spatio-temporal nonparametric Bayesian variable selection model of fMRI data for clustering correlated time courses.

    Science.gov (United States)

    Zhang, Linlin; Guindani, Michele; Versace, Francesco; Vannucci, Marina

    2014-07-15

    In this paper we present a novel wavelet-based Bayesian nonparametric regression model for the analysis of functional magnetic resonance imaging (fMRI) data. Our goal is to provide a joint analytical framework that allows us to detect regions of the brain which exhibit neuronal activity in response to a stimulus and, simultaneously, infer the association, or clustering, of spatially remote voxels that exhibit fMRI time series with similar characteristics. We start by modeling the data with a hemodynamic response function (HRF) with a voxel-dependent shape parameter. We detect regions of the brain activated in response to a given stimulus by using mixture priors with a spike at zero on the coefficients of the regression model. We account for the complex spatial correlation structure of the brain by using a Markov random field (MRF) prior on the parameters guiding the selection of the activated voxels, therefore capturing correlation among nearby voxels. In order to infer association of the voxel time courses, we assume correlated errors, in particular long memory, and exploit the whitening properties of discrete wavelet transforms. Furthermore, we achieve clustering of the voxels by imposing a Dirichlet process (DP) prior on the parameters of the long memory process. For inference, we use Markov Chain Monte Carlo (MCMC) sampling techniques that combine Metropolis-Hastings schemes employed in Bayesian variable selection with sampling algorithms for nonparametric DP models. We explore the performance of the proposed model on simulated data, with both block- and event-related design, and on real fMRI data. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. A Bayesian nonparametric approach to reconstruction and prediction of random dynamical systems

    Science.gov (United States)

    Merkatas, Christos; Kaloudis, Konstantinos; Hatjispyros, Spyridon J.

    2017-06-01

    We propose a Bayesian nonparametric mixture model, based on Markov Chain Monte Carlo methods, for the reconstruction and prediction of discretized stochastic dynamical systems from observed time series data. Our results can be used by researchers in physical modeling interested in fast and accurate estimation of low-dimensional stochastic models when the size of the observed time series is small and the noise process is (perhaps) non-Gaussian. The inference procedure is demonstrated specifically in the case of polynomial maps of arbitrary degree, with a Geometric Stick Breaking mixture process prior over the space of densities applied to the additive errors. Our method is parsimonious, flexible, and general compared to Bayesian nonparametric techniques based on Dirichlet process mixtures. Simulations based on synthetic time series are presented.
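
    For readers unfamiliar with the Geometric Stick Breaking (GSB) construction mentioned above: its mixture weights decay geometrically, w_j = λ(1−λ)^(j−1), which is what makes the prior parsimonious relative to general Dirichlet process weights. Below is a minimal sketch of a truncated draw from a GSB mixture of normals; the truncation level and all component parameters are illustrative, not the authors' specification:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 0.3                        # geometric stick-breaking parameter
K = 50                           # truncation level
j = np.arange(1, K + 1)
w = lam * (1 - lam) ** (j - 1)   # geometric weights w_j = lam * (1 - lam)**(j - 1)
w = w / w.sum()                  # renormalize after truncation
mu = rng.normal(0, 3, K)         # hypothetical component locations
sd = 0.5                         # common component scale

def sample(n):
    # draw from the truncated GSB mixture of normals
    comp = rng.choice(K, size=n, p=w)
    return rng.normal(mu[comp], sd)

draws = sample(1000)
```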

  4. The Hubble Frontier Fields: Engaging Multiple Audiences in Exploring the Cosmic Frontier

    Science.gov (United States)

    Lawton, Brandon L.; Smith, Denise A.; Summers, Frank; Ryer, Holly; Slivinski, Carolyn; Lotz, Jennifer M.

    2017-06-01

    The Hubble Frontier Fields is a multi-cycle program of six deep-field observations of strong-lensing galaxy clusters taken in parallel with six deep “blank fields.” The three-year-long collaborative program began in late 2013 and is led by observations from NASA’s Great Observatories. The observations, now complete, allow astronomers to look deeper into the universe than ever before, and potentially uncover galaxies that are as much as 100 times fainter than what the telescopes can typically observe. The Frontier Fields science program is ideal for informing audiences about scientific advances and topics in STEM. The study of galaxy properties, statistics, optics, and Einstein’s theory of general relativity builds naturally on the science returns of the Frontier Fields program. As a result, the Space Telescope Science Institute’s Office of Public Outreach (OPO) has engaged multiple audiences over the past three years to follow the progress of the Frontier Fields. For over two decades, the STScI outreach program has sought to bring the wonders of the universe to the public and engage audiences in the adventure of scientific discovery. In addition, we are leveraging the reach of NASA’s new Universe of Learning education program to bring the science of the Frontier Fields to informal education audiences. The main underpinnings of the STScI outreach program and the Universe of Learning education program are scientist-educator development teams, partnerships, and an embedded program evaluation component. OPO is leveraging the infrastructure of these education and outreach programs to bring the Frontier Fields science program to the education community and the public in a cost-effective way. This talk will feature highlights of the past three years of the program. We will highlight OPO’s strategies and infrastructure that allow for the quick delivery of groundbreaking science to the education community and public.

  5. Economic decision making and the application of nonparametric prediction models

    Science.gov (United States)

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2008-01-01

    Sustained increases in energy prices have focused attention on gas resources in low-permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are often large. Planning and development decisions for extraction of such resources must be areawide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm, the decision to enter such plays depends on reconnaissance-level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional-scale cost functions. The context of the worked example is the Devonian Antrim-shale gas play in the Michigan basin. One finding relates to selection of the resource prediction model to be used with economic models. Models chosen because they can best predict aggregate volume over larger areas (many hundreds of sites) smooth out granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined arbitrarily by extraneous factors. The analysis shows a 15-20% gain in gas volume when these simple models are applied to order drilling prospects strategically rather than to choose drilling locations randomly. Copyright © 2008 Society of Petroleum Engineers.
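
    A simple nonparametric local regression of the kind described can be sketched with a Nadaraya-Watson kernel smoother; the site coordinates, response surface, and bandwidth below are illustrative, not the paper's Antrim-shale data:

```python
import numpy as np

rng = np.random.default_rng(6)
# hypothetical site coordinates and recovered gas volumes
X = rng.uniform(0, 10, (200, 2))
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] + rng.normal(0, 0.1, 200)

def nw_predict(x0, X, y, h=1.0):
    # Nadaraya-Watson local regression: kernel-weighted average of nearby sites
    d2 = ((X - x0) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * h ** 2))   # Gaussian kernel weights
    return (w @ y) / w.sum()

# predicted recoverable volume at an untested site
pred = nw_predict(np.array([5.0, 5.0]), X, y)
```

    Because the prediction is a convex combination of observed volumes, it always lies inside the observed range; choosing the bandwidth h trades off the granularity at individual sites against aggregate accuracy, the tension the paper discusses.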

  6. Where is the Efficient Frontier

    OpenAIRE

    Jing Chen

    2010-01-01

    Tremendous effort has been spent on the construction of reliable efficient frontiers. However, mean-variance efficient portfolios constructed using sample means and covariance often perform poorly out of sample. We prove that the capital market line is the efficient frontier for the risky assets in a financial market with liquid fixed income trading. This unified understanding of the riskless asset as the boundary of risky assets relieves the burden of constructing efficient frontiers in asset a...
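
    The claim that the capital market line (CML) becomes the efficient frontier once a riskless asset is available can be illustrated with the standard tangency-portfolio calculation; the expected returns, covariance matrix, and riskless rate below are hypothetical:

```python
import numpy as np

mu = np.array([0.08, 0.12, 0.10])          # hypothetical expected returns
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])     # hypothetical covariance matrix
rf = 0.03                                  # riskless rate

excess = mu - rf
w = np.linalg.solve(Sigma, excess)         # w proportional to Sigma^-1 (mu - rf)
w = w / w.sum()                            # tangency portfolio weights

port_mu = w @ mu
port_sd = np.sqrt(w @ Sigma @ w)
sharpe_tan = (port_mu - rf) / port_sd      # slope of the CML

asset_sharpes = excess / np.sqrt(np.diag(Sigma))
```

    Every point on the CML, E[r] = rf + sharpe_tan * sigma, weakly dominates portfolios of risky assets alone, since the tangency portfolio maximizes the Sharpe ratio.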

  7. AHP 21: Review: China's Last Imperial Frontier and The Sichuan Frontier and Tibet

    Directory of Open Access Journals (Sweden)

    Robert Entenmann

    2012-12-01

    Full Text Available Until recently, historians have not paid much attention to Qing China's Tibetan frontier, but two excellent new studies address this neglect. One is Yingcong Dai's The Sichuan Frontier and Tibet: Imperial Strategy in the Early Qing, which examines the Qing conquest of the Khams region up to the end of the eighteenth century and its effect on Sichuan. The other is China's Last Imperial Frontier: Late Qing Expansion in Sichuan's Tibetan Borderlands by Xiuyu Wang, in which he describes Qing efforts to impose direct administration on the Khams region in the last years of the dynasty. ...

  8. Network frontier as a metaphor and myth

    Directory of Open Access Journals (Sweden)

    N V Plotichkina

    2017-12-01

    Full Text Available This article considers spatial metaphors of the Internet and the possibility of extrapolating F. Turner's frontier thesis to the electronic space. The authors believe that information and communication technologies and the digital world have become new spaces for the expansion of states or individuals. That is why there are ongoing scientific debates on the limits and potential of the western and electronic frontier metaphors for the analytical description of the digital space. The metaphor of the Internet as a western frontier is quite controversial; many authors prefer the electronic frontier analogy as more heuristic and valid for constructing metaphors of the digital reality. The network frontier is defined as a dynamic, elastic and permeable border of social and cultural practices of the network society. The authors estimate the heuristic potential of the concept ‘network frontier’, developed by integrating frontier theory and the concept of the ‘network society’ while taking into account the effects of globalization, for the study of the elastic, permeable and movable border of the network landscape. In the digital world, spatiality is transformed: the geography of the Internet determines the metamorphosis of the frontier as a contact zone between online and offline spaces, which is dynamic and innovative, encourages mobility, and whose permeability depends on the digital competence of citizens. The authors explain the mythology of the western and electronic frontiers; name the main network frontier myths related to the rhetoric of the western frontier myth; describe the main components of the western frontier myth associated with the idea of American exceptionalism; and conclude by identifying present-day myths about frontier-men and the online space they master.

  9. Nonparametric Bayesian inference for multidimensional compound Poisson processes

    NARCIS (Netherlands)

    Gugushvili, S.; van der Meulen, F.; Spreij, P.

    2015-01-01

    Given a sample from a discretely observed multidimensional compound Poisson process, we study the problem of nonparametric estimation of its jump size density r0 and intensity λ0. We take a nonparametric Bayesian approach to the problem and determine posterior contraction rates in this context.
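
    A concrete entry point to this setting: when the compound Poisson process is observed on a grid of width Δ and the jump sizes cannot be zero, the probability of a zero increment is exp(−λ₀Δ), which yields a simple intensity estimator. The sketch below is a standard illustration, not the authors' Bayesian procedure; all values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
lam_true, delta, n = 1.0, 0.5, 4000

# increments of a compound Poisson process observed on a grid of width delta
counts = rng.poisson(lam_true * delta, n)                # jumps per interval
incr = np.array([rng.integers(1, 6, k).sum() for k in counts])  # positive integer jumps

# P(increment == 0) = exp(-lam * delta) when jumps cannot be zero
p0 = np.mean(incr == 0)
lam_hat = -np.log(p0) / delta
```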

  10. Particle Physics at the Cosmic, Intensity, and Energy Frontiers

    Energy Technology Data Exchange (ETDEWEB)

    Essig, Rouven

    2018-04-06

    Major efforts at the Intensity, Cosmic, and Energy frontiers of particle physics are rapidly furthering our understanding of the fundamental constituents of Nature and their interactions. The overall objectives of this research project are (1) to interpret and develop the theoretical implications of the data collected at these frontiers and (2) to provide the theoretical motivation, basis, and ideas for new experiments and for new analyses of experimental data. Within the Intensity Frontier, an experimental search for a new force mediated by a GeV-scale gauge boson will be carried out with the $A'$ Experiment (APEX) and the Heavy Photon Search (HPS), both at Jefferson Laboratory. Within the Cosmic Frontier, contributions are planned to the search for dark matter particles with the Fermi Gamma-ray Space Telescope and other instruments. A detailed exploration will also be performed of new direct detection strategies for dark matter particles with sub-GeV masses to facilitate the development of new experiments. In addition, the theoretical implications of existing and future dark matter-related anomalies will be examined. Within the Energy Frontier, the implications of the data from the Large Hadron Collider will be investigated. Novel search strategies will be developed to aid the search for new phenomena not described by the Standard Model of particle physics. By combining insights from all three particle physics frontiers, this research aims to increase our understanding of fundamental particle physics.

  11. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    Science.gov (United States)

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected

  12. Bayesian nonparametric estimation of continuous monotone functions with applications to dose-response analysis.

    Science.gov (United States)

    Bornkamp, Björn; Ickstadt, Katja

    2009-03-01

    In this article, we consider monotone nonparametric regression in a Bayesian framework. The monotone function is modeled as a mixture of shifted and scaled parametric probability distribution functions, and a general random probability measure is assumed as the prior for the mixing distribution. We investigate the choice of the underlying parametric distribution function and find that the two-sided power distribution function is well suited from both a computational and a mathematical point of view. The model is motivated by traditional nonlinear models for dose-response analysis, and makes it possible to elicit informative prior distributions on different aspects of the curve. The method is compared with other recent approaches to monotone nonparametric regression in a simulation study and is illustrated on a data set from dose-response analysis.
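
    The construction described, a monotone curve built as a weighted mixture of shifted and scaled distribution functions, can be sketched as follows. For simplicity the sketch uses normal CDFs as the kernel rather than the two-sided power distribution favored in the article; the weights, shifts, and scales are illustrative:

```python
import numpy as np
from scipy.stats import norm

# monotone dose-response curve: f(x) = f0 + span * sum_k w_k * F((x - loc_k) / scale_k)
w = np.array([0.3, 0.5, 0.2])       # mixture weights (sum to 1)
loc = np.array([1.0, 3.0, 6.0])     # hypothetical shift parameters
scale = np.array([0.5, 1.0, 2.0])   # hypothetical scale parameters

def f(x, f0=0.1, span=0.8):
    # mixture of CDFs is automatically nondecreasing in x
    x = np.atleast_1d(x)[:, None]
    return f0 + span * (w * norm.cdf((x - loc) / scale)).sum(axis=1)

x = np.linspace(-2, 10, 200)
y = f(x)
```

    Monotonicity is built in because each kernel CDF is nondecreasing and the weights are nonnegative; priors on f0, span, and the mixing distribution then control placebo response, maximum effect, and curve shape separately.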

  13. A Nonparametric Operational Risk Modeling Approach Based on Cornish-Fisher Expansion

    Directory of Open Access Journals (Sweden)

    Xiaoqian Zhu

    2014-01-01

    Full Text Available It is generally accepted that the choice of severity distribution in the loss distribution approach has a significant effect on the operational risk capital estimation. However, the usually used parametric approaches with a predefined distribution assumption might not be able to fit the severity distribution accurately. The objective of this paper is to propose a nonparametric operational risk modeling approach based on Cornish-Fisher expansion. In this approach, the samples of severity are generated by Cornish-Fisher expansion and then used in a Monte Carlo simulation to sketch the annual operational loss distribution. In the experiment, the proposed approach is employed to calculate the operational risk capital charge for the overall Chinese banking sector. The experiment dataset is the most comprehensive operational risk dataset in China as far as we know. The results show that the proposed approach is able to use the information of higher-order moments and might be more effective and stable than the usually used parametric approaches.
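
    The severity-generation step can be sketched with the third-order Cornish-Fisher adjustment of standard normal quantiles, combined with a Poisson frequency in a Monte Carlo loop. The moments and frequency parameter below are illustrative, not the Chinese banking dataset:

```python
import numpy as np

rng = np.random.default_rng(3)

def cornish_fisher_z(z, skew, exkurt):
    # third-order Cornish-Fisher adjustment of standard normal quantiles
    return (z + (z**2 - 1) * skew / 6
              + (z**3 - 3 * z) * exkurt / 24
              - (2 * z**3 - 5 * z) * skew**2 / 36)

def severity_sample(n, mean, std, skew, exkurt):
    # severities generated from the first four moments via Cornish-Fisher
    z = rng.standard_normal(n)
    return mean + std * cornish_fisher_z(z, skew, exkurt)

def annual_losses(n_years, lam=20, mean=1.0, std=0.5, skew=0.8, exkurt=1.5):
    # Monte Carlo sketch of the annual loss distribution: Poisson frequency,
    # Cornish-Fisher severities, aggregated per simulated year
    counts = rng.poisson(lam, n_years)
    return np.array([severity_sample(k, mean, std, skew, exkurt).sum()
                     for k in counts])

losses = annual_losses(5000)
var_999 = np.quantile(losses, 0.999)   # capital charge proxy (99.9% VaR)
```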

  14. Inference from concave stochastic frontiers and the covariance of firm efficiency measures across firms

    International Nuclear Information System (INIS)

    Dashti, Imad

    2003-01-01

    This paper uses a Bayesian stochastic frontier model to obtain confidence intervals on firm efficiency measures of electric utilities rather than the point estimates reported in most previous studies. Results reveal that the stochastic frontier model yields imprecise measures of firm efficiency. However, the application produces much more precise inference on pairwise efficiency comparisons of firms due to a sometimes strong positive covariance of efficiency measures across firms. In addition, we examine the sensitivity to functional form by repeating the analysis for Cobb-Douglas, translog and Fourier frontiers, with and without imposing monotonicity and concavity.

  15. Analysis about the development of mobile electronic commerce: An application of production possibility frontier model

    OpenAIRE

    Uesugi, Shiro; Okada, Hitoshi

    2012-01-01

    This study aims to further develop our previous research on the production possibility frontier model (PPFM). An application of the model to the analysis of a mobile commerce survey, with data collected in Japan and Thailand, is presented. PPFM looks into consumer behaviors as the result of the perception of the relationship between Convenience and Privacy Concerns of certain electronic commerce services. From the data of consumer surveys, PPFM is expected to provide practical sol...

  16. Decompounding random sums: A nonparametric approach

    DEFF Research Database (Denmark)

    Hansen, Martin Bøgsted; Pitts, Susan M.

    Observations from sums of random variables with a random number of summands, known as random, compound or stopped sums, arise within many areas of engineering and science. Quite often it is desirable to infer properties of the distribution of the terms in the random sum. In the present paper we review a number of applications and consider the nonlinear inverse problem of inferring the cumulative distribution function of the components in the random sum. We review the existing literature on non-parametric approaches to the problem. The models amenable to the analysis are generalized considerably...

  17. Scalable Bayesian nonparametric measures for exploring pairwise dependence via Dirichlet Process Mixtures.

    Science.gov (United States)

    Filippi, Sarah; Holmes, Chris C; Nieto-Barajas, Luis E

    2016-11-16

    In this article we propose novel Bayesian nonparametric methods using Dirichlet Process Mixture (DPM) models for detecting pairwise dependence between random variables while accounting for uncertainty in the form of the underlying distributions. A key criterion is that the procedures should scale to large data sets. In this regard we find that the formal calculation of the Bayes factor for a dependent-vs.-independent DPM joint probability measure is not feasible computationally. To address this we present Bayesian diagnostic measures for characterising evidence against a "null model" of pairwise independence. In simulation studies, as well as for a real data analysis, we show that our approach provides a useful tool for the exploratory nonparametric Bayesian analysis of large multivariate data sets.

  18. A Bayesian nonparametric estimation of distributions and quantiles

    International Nuclear Information System (INIS)

    Poern, K.

    1988-11-01

    The report describes a Bayesian, nonparametric method for the estimation of a distribution function and its quantiles. The method, presupposing random sampling, is nonparametric, so the user has to specify a prior distribution on a space of distributions (and not on a parameter space). In the current application, where the method is used to estimate the uncertainty of a parametric calculational model, the Dirichlet prior distribution is to a large extent determined by the first batch of Monte Carlo realizations. In this case the results of the estimation technique are very similar to the conventional empirical distribution function. The resulting posterior distribution is also Dirichlet, and thus facilitates the determination of probability (confidence) intervals at any given point in the space of interest. Another advantage is that the posterior distribution of a specified quantile can also be derived and utilized to determine a probability interval for that quantile. The method was devised for use in the PROPER code package for uncertainty and sensitivity analysis. (orig.)
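
    The conjugacy described above (Dirichlet prior, Dirichlet posterior) makes pointwise probability intervals straightforward: under a Dirichlet process prior DP(α, F0), the posterior of F(t) at any fixed t is Beta-distributed around a mean that blends the prior base measure with the empirical CDF. A minimal sketch with an illustrative base measure and concentration parameter:

```python
import numpy as np
from scipy.stats import beta, norm

rng = np.random.default_rng(4)
x = rng.normal(1.0, 2.0, 200)   # batch of Monte Carlo realizations
alpha = 5.0                     # prior concentration parameter
F0 = norm(0, 1).cdf             # hypothetical prior base measure

def posterior_cdf(t):
    n = len(x)
    Fn = np.mean(x <= t)        # empirical CDF
    # posterior mean blends prior base measure and empirical CDF
    g = (alpha * F0(t) + n * Fn) / (alpha + n)
    # pointwise, F(t) ~ Beta(M*g, M*(1-g)) with M = alpha + n
    M = alpha + n
    lo, hi = beta.ppf([0.025, 0.975], M * g, M * (1 - g))
    return g, lo, hi

g, lo, hi = posterior_cdf(1.0)  # posterior mean and 95% probability interval
```

    With α small relative to n, the posterior mean is close to the empirical distribution function, matching the behavior reported above.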

  19. A Nonparametric Bayesian Approach For Emission Tomography Reconstruction

    International Nuclear Information System (INIS)

    Barat, Eric; Dautremer, Thomas

    2007-01-01

    We introduce a PET reconstruction algorithm following a nonparametric Bayesian (NPB) approach. In contrast with Expectation Maximization (EM), the proposed technique does not rely on any space discretization. Namely, the activity distribution--normalized emission intensity of the spatial Poisson process--is considered as a spatial probability density and observations are the projections of random emissions whose distribution has to be estimated. This approach is nonparametric in the sense that the quantity of interest belongs to the set of probability measures on R^k (for reconstruction in k dimensions) and it is Bayesian in the sense that we define a prior directly on this spatial measure. In this context, we propose to model the nonparametric probability density as an infinite mixture of multivariate normal distributions. As a prior for this mixture we consider a Dirichlet Process Mixture (DPM) with a Normal-Inverse Wishart (NIW) model as base distribution of the Dirichlet Process. As in EM-family reconstruction, we use a data augmentation scheme where the set of hidden variables are the emission locations for each observed line of response in the continuous object space. Thanks to the data augmentation, we propose a Markov Chain Monte Carlo (MCMC) algorithm (Gibbs sampler) which is able to generate draws from the posterior distribution of the spatial intensity. A difference with EM is that one step of the Gibbs sampler corresponds to the generation of emission locations, while only the expected number of emissions per pixel/voxel is used in EM. Another key difference is that the estimated spatial intensity is a continuous function such that there is no need to compute a projection matrix. Finally, draws from the intensity posterior distribution allow the estimation of posterior functionals such as the variance or confidence intervals. Results are presented for simulated data based on a 2D brain phantom and compared to Bayesian MAP-EM.

  20. Nonparametric method for failures diagnosis in the actuating subsystem of aircraft control system

    Science.gov (United States)

    Terentev, M. N.; Karpenko, S. S.; Zybin, E. Yu; Kosyanchuk, V. V.

    2018-02-01

    In this paper we design a nonparametric method for failure diagnosis in the aircraft control system that uses measurements of the control signals and the aircraft states only. It doesn’t require a priori information about the aircraft model parameters, training, or statistical calculations, and is based on an analytical nonparametric one-step-ahead state prediction approach. This makes it possible to predict the behavior of unidentified and failed dynamic systems, to weaken the requirements on control signals, and to reduce the diagnostic time and problem complexity.

  1. Single versus mixture Weibull distributions for nonparametric satellite reliability

    International Nuclear Information System (INIS)

    Castet, Jean-Francois; Saleh, Joseph H.

    2010-01-01

    Long recognized as a critical design attribute for space systems, satellite reliability has not yet received the proper attention, as limited on-orbit failure data and statistical analyses can be found in the technical literature. To fill this gap, we recently conducted a nonparametric analysis of satellite reliability for 1584 Earth-orbiting satellites launched between January 1990 and October 2008. In this paper, we provide an advanced parametric fit, based on a mixture of Weibull distributions, and compare it with the single Weibull distribution model obtained with the Maximum Likelihood Estimation (MLE) method. We demonstrate that both parametric fits are good approximations of the nonparametric satellite reliability, but that the mixture Weibull distribution is significantly more accurate in capturing all the failure trends in the failure data, as evidenced by the analysis of the residuals and their quasi-normal dispersion.
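
    A minimal sketch of the single-Weibull side of such a comparison: fit a Weibull by MLE to failure times drawn from a hypothetical two-population mixture (infant mortality plus wear-out) and compare the fitted reliability against the nonparametric (empirical) one. All parameters are illustrative, not the satellite data:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(5)
# failure times from a hypothetical mixture: early failures (shape < 1)
# plus wear-out failures (shape > 1)
t = np.concatenate([weibull_min.rvs(0.7, scale=2.0, size=300, random_state=rng),
                    weibull_min.rvs(3.0, scale=10.0, size=700, random_state=rng)])

# single-Weibull MLE fit (location fixed at 0)
shape, loc, scale = weibull_min.fit(t, floc=0)

# compare fitted vs empirical (nonparametric) reliability on a grid
grid = np.quantile(t, [0.25, 0.5, 0.75])
R_emp = np.array([np.mean(t > g) for g in grid])   # empirical survival
R_fit = weibull_min.sf(grid, shape, loc, scale)    # fitted survival
max_gap = np.max(np.abs(R_emp - R_fit))
```

    A mixture fit would add a second (shape, scale) pair and a mixing weight, typically estimated by EM or direct likelihood maximization, and would shrink the reliability gap that the single-Weibull fit leaves.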

  2. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions.

    Science.gov (United States)

    Tokuda, Tomoki; Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data.

  3. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions.

    Directory of Open Access Journals (Sweden)

    Tomoki Tokuda

    Full Text Available We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data.

  4. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions

    Science.gov (United States)

    Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data. PMID:29049392
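    The "nonparametric Bayesian mixture models" underlying this method are typically built on a Dirichlet process prior. As a hedged illustration (not the authors' code; the truncation level and concentration value below are arbitrary), the stick-breaking construction that lets the number of clusters adapt to the data can be sketched as:

```python
import random

def stick_breaking_weights(alpha, n_atoms, seed=0):
    """Truncated stick-breaking construction of Dirichlet process weights:
    w_k = v_k * prod_{j<k}(1 - v_j), with v_k ~ Beta(1, alpha)."""
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(n_atoms):
        v = rng.betavariate(1.0, alpha)
        weights.append(v * remaining)  # break off a fraction of the remaining stick
        remaining *= 1.0 - v
    return weights

w = stick_breaking_weights(alpha=2.0, n_atoms=25)
```

    Smaller values of `alpha` concentrate mass on a few clusters; larger values spread it out, which is how such a prior sidesteps fixing the number of clusters in advance.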

  5. New frontiers for tomorrow's world

    International Nuclear Information System (INIS)

    Kassler, P.

    1994-01-01

    The conference paper deals with new frontiers and barricades in the global economic development and their influence on fuel consumption and energy source development. Topics discussed are incremental energy supply - new frontiers, world car population - new frontiers, OPEC crude production capacity vs call on OPEC, incremental world oil demand by region 1992-2000, oil resource cost curve, progress in seismic 1983-1991, Troll picture, cost reduction in renewables, sustained growth scenario, nuclear electricity capacity - France, OECD road transport fuels - barricades, and energy taxation. 18 figs

  6. Geostatistical radar-raingauge combination with nonparametric correlograms: methodological considerations and application in Switzerland

    Science.gov (United States)

    Schiemann, R.; Erdin, R.; Willi, M.; Frei, C.; Berenguer, M.; Sempere-Torres, D.

    2011-05-01

    Modelling spatial covariance is an essential part of all geostatistical methods. Traditionally, parametric semivariogram models are fit from available data. More recently, it has been suggested to use nonparametric correlograms obtained from spatially complete data fields. Here, both estimation techniques are compared. Nonparametric correlograms are shown to have a substantial negative bias. Nonetheless, when combined with the sample variance of the spatial field under consideration, they yield an estimate of the semivariogram that is unbiased for small lag distances. This justifies the use of this estimation technique in geostatistical applications. Various formulations of geostatistical combination (Kriging) methods are used here for the construction of hourly precipitation grids for Switzerland based on data from a sparse realtime network of raingauges and from a spatially complete radar composite. Two variants of Ordinary Kriging (OK) are used to interpolate the sparse gauge observations. In both OK variants, the radar data are only used to determine the semivariogram model. One variant relies on a traditional parametric semivariogram estimate, whereas the other variant uses the nonparametric correlogram. The variants are tested for three cases and the impact of the semivariogram model on the Kriging prediction is illustrated. For the three test cases, the method using nonparametric correlograms performs equally well or better than the traditional method, and at the same time offers great practical advantages. Furthermore, two variants of Kriging with external drift (KED) are tested, both of which use the radar data to estimate nonparametric correlograms, and as the external drift variable. The first KED variant has been used previously for geostatistical radar-raingauge merging in Catalonia (Spain). The second variant is newly proposed here and is an extension of the first. Both variants are evaluated for the three test cases as well as an extended evaluation
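    The nonparametric correlogram idea can be illustrated on a 1-D field. This is a simplified sketch, not the operational radar implementation; the field and lags are illustrative. Note that centring on the overall sample mean is exactly what introduces the small negative bias the abstract discusses:

```python
import math

def sample_correlogram(field, max_lag):
    """Nonparametric correlogram of a spatially complete 1-D field:
    lagged sample covariances normalised by the sample variance."""
    n = len(field)
    mean = sum(field) / n
    var = sum((x - mean) ** 2 for x in field) / n
    corr = []
    for h in range(1, max_lag + 1):
        cov = sum((field[i] - mean) * (field[i + h] - mean)
                  for i in range(n - h)) / (n - h)
        corr.append(cov / var)
    return corr
```

    Combined with the sample variance, `var * (1 - corr[h-1])` gives the semivariogram estimate at lag `h` that the abstract describes as unbiased for small lags.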

  7. Nonparametric Tree-Based Predictive Modeling of Storm Outages on an Electric Distribution Network.

    Science.gov (United States)

    He, Jichao; Wanik, David W; Hartman, Brian M; Anagnostou, Emmanouil N; Astitha, Marina; Frediani, Maria E B

    2017-03-01

    This article compares two nonparametric tree-based models, quantile regression forests (QRF) and Bayesian additive regression trees (BART), for predicting storm outages on an electric distribution network in Connecticut, USA. We evaluated point estimates and prediction intervals of outage predictions for both models using high-resolution weather, infrastructure, and land use data for 89 storm events (including hurricanes, blizzards, and thunderstorms). We found that, spatially, BART predicted more accurate point estimates than QRF. However, QRF produced better prediction intervals for high spatial resolutions (2-km grid cells and towns), while BART predictions aggregated to coarser resolutions (divisions and service territory) more effectively. We also found that the predictive accuracy was dependent on the season (e.g., tree-leaf condition, storm characteristics), and that the predictions were most accurate for winter storms. Given the merits of each individual model, we suggest that BART and QRF be implemented together to show the complete picture of a storm's potential impact on the electric distribution network, which would allow for a utility to make better decisions about allocating prestorm resources. © 2016 Society for Risk Analysis.
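    QRF derives prediction intervals from the spread of per-tree predictions rather than from a parametric error model. A minimal sketch of that quantile-from-ensemble step, assuming the ensemble members' point predictions for one query are already available (no actual forest is fit here):

```python
def ensemble_interval(member_preds, lower=0.05, upper=0.95):
    """Empirical prediction interval from an ensemble of point predictions,
    in the spirit of quantile regression forests: the per-member predictions
    stand in for the estimated conditional distribution."""
    s = sorted(member_preds)
    def quantile(p):
        # simple empirical quantile by order statistic
        return s[min(int(p * len(s)), len(s) - 1)]
    return quantile(lower), quantile(upper)
```

    A wide interval for a given grid cell signals high outage uncertainty there, which is the information a utility would use when allocating prestorm resources.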

  8. Ghana's cocoa frontier in transition

    DEFF Research Database (Denmark)

    Knudsen, Michael Helt; Agergaard, Jytte

    2015-01-01

    Since the first commercial planting of cocoa in Ghana more than a century ago, the production of cocoa has been a key factor in the redistribution of migrants and has played a pivotal role in the development of both sending and receiving communities. This process has been acknowledged in the literature for decades. However, how migration flows have changed in response to changing livelihoods dynamics of the frontier and how this has impacted on the development of the frontier has only attracted limited attention. Based on a study of immigration to Ghana's current cocoa frontier in the Western Region, this article aims to examine how immigration and frontier dynamics in the Western region are contributing to livelihood transitions and small town development, and how this process is gradually becoming delinked from the production of cocoa. The article focuses on how migration dynamics interlink...

  9. Frontier Fields: Engaging Educators, the Youth, and the Public in Exploring the Cosmic Frontier

    Science.gov (United States)

    Lawton, Brandon L.; Eisenhamer, Bonnie; Smith, Denise A.; Summers, Frank; Darnell, John A.; Ryer, Holly

    2015-01-01

    The Frontier Fields is a multi-cycle program of six deep-field observations of strong-lensing galaxy clusters that will be taken in parallel with six deep 'blank fields.' The three-year long collaborative program is led by observations from NASA's Great Observatories. The observations allow astronomers to look deeper into the universe than ever before, and potentially uncover galaxies that are as much as 100 times fainter than what the telescopes can typically observe. The Frontier Fields science program is ideal for informing audiences about scientific advances and topics in STEM. The study of galaxy properties, statistics, optics, and Einstein's theory of general relativity naturally leverages off of the science returns of the Frontier Fields program. As a result, the Space Telescope Science Institute's Office of Public Outreach (OPO) has initiated an education and public outreach (EPO) project to follow the progress of the Frontier Fields. For over two decades, the Hubble EPO program has sought to bring the wonders of the universe to the education community, the youth, and the public, and engage audiences in the adventure of scientific discovery. Program components include standards-based curriculum-support materials, exhibits and exhibit components, professional development workshops, and direct interactions with scientists. We are also leveraging our new social media strategy to bring the science program to the public in the form of an ongoing blog. The main underpinnings of the program's infrastructure are scientist-educator development teams, partnerships, and an embedded program evaluation component. OPO is leveraging this existing infrastructure to bring the Frontier Fields science program to the education community and the public in a cost-effective way. The Frontier Fields program has just completed its first year. This talk will feature the goals and current status of the Frontier Fields EPO program. We will highlight OPO's strategies and infrastructure

  10. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    Science.gov (United States)

    Kumar, Sricharan; Srivastava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
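    A minimal sketch of the resampling step, assuming a residual bootstrap (the paper's exact bootstrap scheme is not reproduced here; all names are illustrative): resample the model's historical residuals around the point prediction and read off empirical quantiles, then flag observations that fall outside the interval.

```python
import random

def bootstrap_prediction_interval(residuals, point_pred, n_boot=2000,
                                  alpha=0.10, seed=0):
    """Nonparametric bootstrap prediction interval: resample historical
    residuals, add them to the point prediction, take empirical quantiles."""
    rng = random.Random(seed)
    sims = sorted(point_pred + rng.choice(residuals) for _ in range(n_boot))
    lo = sims[int(alpha / 2 * n_boot)]
    hi = sims[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def is_anomalous(observed, lo, hi):
    """An output outside the interval is anomalous, conditioned on the input."""
    return observed < lo or observed > hi
```

    No distributional assumption on the noise is needed, which is the point of the bootstrap approach in the abstract.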

  11. Nonparametric Change Point Diagnosis Method of Concrete Dam Crack Behavior Abnormality

    OpenAIRE

    Li, Zhanchao; Gu, Chongshi; Wu, Zhongru

    2013-01-01

    Diagnosing abnormal crack behavior in concrete dams has long been a central and difficult problem in the safety monitoring of hydraulic structures. Based on how concrete dam crack behavior abnormality manifests in parametric and nonparametric statistical models, the internal relation between concrete dam crack behavior abnormality and statistical change point theory is analyzed in depth, starting from the model structure instability of the parametric statistical model ...

  12. Bayesian Nonparametric Hidden Markov Models with application to the analysis of copy-number-variation in mammalian genomes.

    Science.gov (United States)

    Yau, C; Papaspiliopoulos, O; Roberts, G O; Holmes, C

    2011-01-01

    We consider the development of Bayesian Nonparametric methods for product partition models such as Hidden Markov Models and change point models. Our approach uses a Mixture of Dirichlet Process (MDP) model for the unknown sampling distribution (likelihood) for the observations arising in each state and a computationally efficient data augmentation scheme to aid inference. The method uses novel MCMC methodology which combines recent retrospective sampling methods with the use of slice sampler variables. The methodology is computationally efficient, both in terms of MCMC mixing properties, and robustness to the length of the time series being investigated. Moreover, the method is easy to implement requiring little or no user-interaction. We apply our methodology to the analysis of genomic copy number variation.

  13. Short-term forecasting of meteorological time series using Nonparametric Functional Data Analysis (NPFDA)

    Science.gov (United States)

    Curceac, S.; Ternynck, C.; Ouarda, T.

    2015-12-01

    Over the past decades, a substantial amount of research has been conducted to model and forecast climatic variables. In this study, Nonparametric Functional Data Analysis (NPFDA) methods are applied to forecast air temperature and wind speed time series in Abu Dhabi, UAE. The dataset consists of hourly measurements recorded for a period of 29 years, 1982-2010. The novelty of the Functional Data Analysis approach is in expressing the data as curves. In the present work, the focus is on daily forecasting and the functional observations (curves) express the daily measurements of the above mentioned variables. We apply a non-linear regression model with a functional non-parametric kernel estimator. The computation of the estimator is performed using an asymmetrical quadratic kernel function for local weighting based on the bandwidth obtained by a cross-validation procedure. The proximities between functional objects are calculated by families of semi-metrics based on derivatives and Functional Principal Component Analysis (FPCA). Additionally, functional conditional mode and functional conditional median estimators are applied and the advantages of combining their results are analysed. A different approach employs a SARIMA model selected according to the minimum Akaike (AIC) and Bayesian (BIC) Information Criteria and based on the residuals of the model. The performance of the models is assessed by calculating error indices such as the root mean square error (RMSE), relative RMSE, BIAS and relative BIAS. The results indicate that the NPFDA models provide more accurate forecasts than the SARIMA models. Key words: Nonparametric functional data analysis, SARIMA, time series forecast, air temperature, wind speed
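    The functional kernel estimator has a compact form. A hedged sketch (a plain Gaussian kernel and L2 semi-metric stand in for the study's asymmetrical quadratic kernel and derivative/FPCA-based semi-metrics; all names are illustrative): past daily curves are weighted by their proximity to the query curve and their next-day targets averaged.

```python
import math

def kernel_forecast(train_curves, train_targets, query_curve, bandwidth):
    """Nadaraya-Watson-style functional kernel regression:
    prediction = sum_i K(d(X_i, x)/h) * Y_i / sum_i K(d(X_i, x)/h)."""
    def semi_metric(a, b):
        # plain L2 distance between sampled curves (simplified semi-metric)
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    weights = [math.exp(-(semi_metric(c, query_curve) / bandwidth) ** 2)
               for c in train_curves]
    total = sum(weights)
    return sum(w * t for w, t in zip(weights, train_targets)) / total
```

    In practice the bandwidth would be chosen by cross-validation, as the abstract describes.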

  14. A non-parametric method for correction of global radiation observations

    DEFF Research Database (Denmark)

    Bacher, Peder; Madsen, Henrik; Perers, Bengt

    2013-01-01

    in the observations are corrected. These are errors such as: tilt in the leveling of the sensor, shadowing from surrounding objects, clipping and saturation in the signal processing, and errors from dirt and wear. The method is based on a statistical non-parametric clear-sky model which is applied to both...

  15. The Climate Adaptation Frontier

    Energy Technology Data Exchange (ETDEWEB)

    Preston, Benjamin L [ORNL

    2013-01-01

    Climate adaptation has emerged as a mainstream risk management strategy for assisting in maintaining socio-ecological systems within the boundaries of a safe operating space. Yet, there are limits to the ability of systems to adapt. Here, we introduce the concept of an 'adaptation frontier', which is defined as a socio-ecological system's transitional adaptive operating space between safe and unsafe domains. A number of driving forces are responsible for determining the sustainability of systems on the frontier. These include path dependence, adaptation/development deficits, values conflicts and discounting of future loss and damage. The cumulative implications of these driving forces are highly uncertain. Nevertheless, the fact that a broad range of systems already persist at the edge of their frontiers suggests a high likelihood that some limits will eventually be exceeded. The resulting system transformation is likely to manifest as anticipatory modification of management objectives or loss and damage. These outcomes vary significantly with respect to their ethical implications. Successful navigation of the adaptation frontier will necessitate new paradigms of risk governance to elicit knowledge that encourages reflexive reevaluation of societal values that enable or constrain sustainability.

  16. Comparing nonparametric Bayesian tree priors for clonal reconstruction of tumors.

    Science.gov (United States)

    Deshwar, Amit G; Vembu, Shankar; Morris, Quaid

    2015-01-01

    Statistical machine learning methods, especially nonparametric Bayesian methods, have become increasingly popular to infer clonal population structure of tumors. Here we describe the treeCRP, an extension of the Chinese restaurant process (CRP), a popular construction used in nonparametric mixture models, to infer the phylogeny and genotype of major subclonal lineages represented in the population of cancer cells. We also propose new split-merge updates tailored to the subclonal reconstruction problem that improve the mixing time of Markov chains. In comparisons with the tree-structured stick breaking prior used in PhyloSub, we demonstrate superior mixing and running time using the treeCRP with our new split-merge procedures. We also show that given the same number of samples, TSSB and treeCRP have similar ability to recover the subclonal structure of a tumor…
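    The Chinese restaurant process that treeCRP extends can be sketched as a prior simulation. This draws a flat partition from the CRP prior only; it is not the authors' tree-structured sampler or their split-merge updates:

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n items from a Chinese restaurant process:
    item i joins an existing cluster with probability proportional to the
    cluster's size, or opens a new cluster with probability prop. to alpha."""
    rng = random.Random(seed)
    assignments, sizes = [], []
    for i in range(n):
        r = rng.uniform(0.0, i + alpha)
        acc = 0.0
        for k, sz in enumerate(sizes):
            acc += sz
            if r < acc:
                assignments.append(k)
                sizes[k] += 1
                break
        else:
            # open a new "table" (cluster)
            assignments.append(len(sizes))
            sizes.append(1)
    return assignments
```

    The rich-get-richer dynamics of this prior are what let the number of subclonal lineages be inferred from the data rather than fixed in advance.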

  17. Developing an immigration policy for Germany on the basis of a nonparametric labor market classification

    OpenAIRE

    Froelich, Markus; Puhani, Patrick

    2004-01-01

    Based on a nonparametrically estimated model of labor market classifications, this paper makes suggestions for immigration policy using data from western Germany in the 1990s. It is demonstrated that nonparametric regression is feasible in higher dimensions with only a few thousand observations. In sum, labor markets able to absorb immigrants are characterized by above average age and by professional occupations. On the other hand, labor markets for young workers in service occupations are id...

  18. Nonparametric identification of copula structures

    KAUST Repository

    Li, Bo; Genton, Marc G.

    2013-01-01

    We propose a unified framework for testing a variety of assumptions commonly made about the structure of copulas, including symmetry, radial symmetry, joint symmetry, associativity and Archimedeanity, and max-stability. Our test is nonparametric

  19. R-U policy frontiers for health data de-identification

    Science.gov (United States)

    Heatherly, Raymond; Ding, Xiaofeng; Li, Jiuyong; Malin, Bradley A

    2015-01-01

    Objective The Health Insurance Portability and Accountability Act Privacy Rule enables healthcare organizations to share de-identified data via two routes. They can either 1) show re-identification risk is small (e.g., via a formal model, such as k-anonymity) with respect to an anticipated recipient or 2) apply a rule-based policy (i.e., Safe Harbor) that enumerates attributes to be altered (e.g., dates to years). The latter is often invoked because it is interpretable, but it fails to tailor protections to the capabilities of the recipient. The paper shows rule-based policies can be mapped to a utility (U) and re-identification risk (R) space, which can be searched for a collection, or frontier, of policies that systematically trade off between these goals. Methods We extend an algorithm to efficiently compose an R-U frontier using a lattice of policy options. Risk is proportional to the number of patients to which a record corresponds, while utility is proportional to similarity of the original and de-identified distribution. We allow our method to search 20 000 rule-based policies (out of 2700) and compare the resulting frontier with k-anonymous solutions and Safe Harbor using the demographics of 10 U.S. states. Results The results demonstrate the rule-based frontier 1) consists, on average, of 5000 policies, 2% of which enable better utility with less risk than Safe Harbor and 2) the policies cover a broader spectrum of utility and risk than k-anonymity frontiers. Conclusions R-U frontiers of de-identification policies can be discovered efficiently, allowing healthcare organizations to tailor protections to anticipated needs and trustworthiness of recipients. PMID:25911674
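    Extracting a frontier from a set of evaluated policies is a Pareto filter over (risk, utility) pairs. A sketch, assuming risk is to be minimized and utility maximized; how each policy is scored is outside this snippet, and the function name is illustrative:

```python
def ru_frontier(policies):
    """Pareto frontier over (risk, utility) pairs: drop a policy if some
    other policy has no higher risk and no lower utility, and is strictly
    better in at least one of the two."""
    frontier = []
    for i, (r1, u1) in enumerate(policies):
        dominated = any(
            (r2 <= r1 and u2 >= u1) and (r2 < r1 or u2 > u1)
            for j, (r2, u2) in enumerate(policies) if j != i
        )
        if not dominated:
            frontier.append((r1, u1))
    return frontier
```

    A rule-based policy such as Safe Harbor can then be compared against the surviving set: any frontier policy with lower risk and higher utility strictly improves on it.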

  20. Frontier lands: Oil and gas statistical overview, 1992

    International Nuclear Information System (INIS)

    1993-01-01

    Canada's frontier lands consist of offshore and onshore areas outside the provinces which fall under federal authority. These lands cover some 10.2 million km² and include the Northwest Territories, Yukon Territory, and areas off the east and west coasts and in the far north. A statistical summary is presented of oil and gas activities in these frontier lands for 1992. Information provided includes activity status and wells drilled on frontier lands, a resource inventory, oil and gas production, land holdings and status, licenses concluded, petroleum-related employment on frontier lands, and petroleum expenditures on frontier lands. Highlights of activities include the first commercial production of crude oil from the Panuke oil field on the Scotian Shelf; a continued decrease in exploration activity on the frontier lands; the introduction of legislation to eliminate restrictions on foreign ownership of production licences on frontier lands; and the resolution to the Canada-France maritime boundary dispute by the International Court of Arbitration. 9 figs., 10 tabs

  1. Internet based benchmarking

    DEFF Research Database (Denmark)

    Bogetoft, Peter; Nielsen, Kurt

    2005-01-01

    We discuss the design of interactive, internet based benchmarking using parametric (statistical) as well as nonparametric (DEA) models. The user receives benchmarks and improvement potentials. The user is also given the possibility to search different efficiency frontiers and hereby to explore...

  2. Marginal Bayesian nonparametric model for time to disease arrival of threatened amphibian populations.

    Science.gov (United States)

    Zhou, Haiming; Hanson, Timothy; Knapp, Roland

    2015-12-01

    The global emergence of Batrachochytrium dendrobatidis (Bd) has caused the extinction of hundreds of amphibian species worldwide. It has become increasingly important to be able to precisely predict time to Bd arrival in a population. The data analyzed herein present a unique challenge in terms of modeling because there is a strong spatial component to Bd arrival time and the traditional proportional hazards assumption is grossly violated. To address these concerns, we develop a novel marginal Bayesian nonparametric survival model for spatially correlated right-censored data. This class of models assumes that the logarithm of survival times marginally follow a mixture of normal densities with a linear-dependent Dirichlet process prior as the random mixing measure, and their joint distribution is induced by a Gaussian copula model with a spatial correlation structure. To invert high-dimensional spatial correlation matrices, we adopt a full-scale approximation that can capture both large- and small-scale spatial dependence. An efficient Markov chain Monte Carlo algorithm with delayed rejection is proposed for posterior computation, and an R package spBayesSurv is provided to fit the model. This approach is first evaluated through simulations, then applied to threatened frog populations in Sequoia-Kings Canyon National Park. © 2015, The International Biometric Society.

  3. An adaptive distance measure for use with nonparametric models

    International Nuclear Information System (INIS)

    Garvey, D. R.; Hines, J. W.

    2006-01-01

    Distance measures perform a critical task in nonparametric, locally weighted regression. Locally weighted regression (LWR) models are a form of 'lazy learning' which construct a local model 'on the fly' by comparing a query vector to historical, exemplar vectors according to a three-step process. First, the distance of the query vector to each of the exemplar vectors is calculated. Next, these distances are passed to a kernel function, which converts the distances to similarities or weights. Finally, the model output or response is calculated by performing locally weighted polynomial regression. To date, traditional distance measures, such as the Euclidean, weighted Euclidean, and L1-norm, have been used as the first step in the prediction process. Since these measures do not take into consideration sensor failures and drift, they are inherently ill-suited for application to 'real world' systems. This paper describes one such LWR model, namely auto-associative kernel regression (AAKR), and describes a new, Adaptive Euclidean distance measure that can be used to dynamically compensate for faulty sensor inputs. In this new distance measure, the query observations that lie outside of the training range (i.e. outside the minimum and maximum input exemplars) are dropped from the distance calculation. This allows for the distance calculation to be robust to sensor drifts and failures, in addition to providing a method for managing inputs that exceed the training range. In this paper, AAKR models using the standard and Adaptive Euclidean distance are developed and compared for the pressure system of an operating nuclear power plant. It is shown that when the standard Euclidean distance is used for data with failed inputs, significant errors in the AAKR predictions can result. By using the Adaptive Euclidean distance, it is shown that high-fidelity predictions are possible, in spite of the input failure. In fact, it is shown that with the Adaptive Euclidean distance prediction
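    The Adaptive Euclidean distance described above admits a compact sketch (illustrative signature, not the authors' implementation): query dimensions falling outside the training range are suspected sensor failures and are simply dropped from the sum.

```python
def adaptive_euclidean(query, exemplar, train_min, train_max):
    """Adaptive Euclidean distance: query dimensions outside the training
    range [train_min, train_max] are excluded from the distance calculation,
    making it robust to sensor drift and failure."""
    sq = 0.0
    for q, e, lo, hi in zip(query, exemplar, train_min, train_max):
        if lo <= q <= hi:
            sq += (q - e) ** 2
        # else: out-of-range input, dropped from the sum
    return sq ** 0.5
```

    In an AAKR-style model these distances would then be passed through a kernel to weight the exemplars, exactly as in the three-step process the abstract outlines.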

  4. Security in the CernVM File System and the Frontier Distributed Database Caching System

    International Nuclear Information System (INIS)

    Dykstra, D; Blomer, J

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  5. Security in the CernVM File System and the Frontier Distributed Database Caching System

    Science.gov (United States)

    Dykstra, D.; Blomer, J.

    2014-06-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  6. Separating environmental efficiency into production and abatement efficiency. A nonparametric model with application to U.S. power plants

    Energy Technology Data Exchange (ETDEWEB)

    Hampf, Benjamin

    2011-08-15

    In this paper we present a new approach to evaluate the environmental efficiency of decision making units. We propose a model that describes a two-stage process consisting of a production and an end-of-pipe abatement stage with the environmental efficiency being determined by the efficiency of both stages. Taking the dependencies between the two stages into account, we show how nonparametric methods can be used to measure environmental efficiency and to decompose it into production and abatement efficiency. For an empirical illustration we apply our model to an analysis of U.S. power plants.

  7. Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification.

    Science.gov (United States)

    Fan, Jianqing; Feng, Yang; Jiang, Jiancheng; Tong, Xin

    We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing.
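    The core FANS transform replaces a raw feature with an estimated log marginal density ratio. A hedged sketch using histogram density estimates with Laplace smoothing (the paper uses more refined nonparametric density estimates; all names here are illustrative):

```python
import math

def log_density_ratio_transform(x_class0, x_class1, x_new, n_bins=10):
    """FANS-style augmentation for one feature: estimate the two
    class-conditional marginal densities and map a new measurement
    to the log density ratio log f1(x)/f0(x)."""
    lo = min(x_class0 + x_class1)
    hi = max(x_class0 + x_class1)
    width = (hi - lo) / n_bins or 1.0
    def density(sample, v):
        b = min(int((v - lo) / width), n_bins - 1)
        count = sum(1 for s in sample
                    if min(int((s - lo) / width), n_bins - 1) == b)
        return (count + 1) / (len(sample) + n_bins) / width  # Laplace-smoothed
    return math.log(density(x_class1, x_new) / density(x_class0, x_new))
```

    Penalized logistic regression on these transformed features then selects which of them matter, which is the "selection" half of FANS.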

  8. Recent Advances and Trends in Nonparametric Statistics

    CERN Document Server

    Akritas, MG

    2003-01-01

    The advent of high-speed, affordable computers in the last two decades has given a new boost to the nonparametric way of thinking. Classical nonparametric procedures, such as function smoothing, suddenly lost their abstract flavour as they became practically implementable. In addition, many previously unthinkable possibilities became mainstream; prime examples include the bootstrap and resampling methods, wavelets and nonlinear smoothers, graphical methods, data mining, bioinformatics, as well as the more recent algorithmic approaches such as bagging and boosting. This volume is a collection o

  9. Conventional - Frontier and east coast supply

    International Nuclear Information System (INIS)

    Morrell, G.R.

    1998-01-01

    An assessment of frontier basins in Canada with proven potential for petroleum resources was provided. A prediction of which frontier basin will become a major supplier of conventional light oil was made by examining where companies are investing in frontier exploration today. Frontier land values for five active frontier areas were discussed. These included the Grand Banks of Newfoundland, Nova Scotia Offshore, Western Newfoundland, the southern Northwest Territories and the Central Mackenzie Valley. The focus of this presentation was on three of these regions which are actually producing: Newfoundland's Grand Banks, offshore Nova Scotia and the Mackenzie Valley. Activities in each of these areas were reviewed. The Canada-Newfoundland Offshore Petroleum Board has listed Hibernia's reserves at 666 million barrels. The Sable Offshore Energy Project on the continental shelf offshore Nova Scotia proposes to develop 5.4 tcf of gas plus 75 million barrels of NGLs over a project life of 14 years. In the Mackenzie Valley there are at least three petroleum systems, including the 235 million barrel pool at Norman Wells. 8 refs., 1 tab., 3 figs

  10. Assessing pupil and school performance by non-parametric and parametric techniques

    NARCIS (Netherlands)

    de Witte, K.; Thanassoulis, E.; Simpson, G.; Battisti, G.; Charlesworth-May, A.

    2010-01-01

    This paper discusses the use of the non-parametric free disposal hull (FDH) and the parametric multi-level model (MLM) as alternative methods for measuring pupil and school attainment where hierarchical structured data are available. Using robust FDH estimates, we show how to decompose the overall
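    FDH needs no parametric frontier and no linear programming: a unit is benchmarked only against observed peers that dominate it. A minimal input-oriented sketch for a single unit (illustrative names; the robust-FDH refinements of the paper are not reproduced):

```python
def fdh_input_efficiency(unit, peers):
    """Free disposal hull input efficiency: the smallest uniform input
    contraction theta such that some observed peer produces at least as
    much of every output using at most theta times the unit's inputs."""
    x0, y0 = unit
    best = 1.0
    for x, y in peers:
        if all(yp >= y0p for yp, y0p in zip(y, y0)):  # peer dominates on outputs
            theta = max(xp / x0p for xp, x0p in zip(x, x0))
            best = min(best, theta)
    return best
```

    A score of 1.0 means no observed peer (the unit itself included among `peers`) does better; a score of 0.5 means some peer achieves the same outputs with half the inputs.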

  11. Impulse response identification with deterministic inputs using non-parametric methods

    International Nuclear Information System (INIS)

    Bhargava, U.K.; Kashyap, R.L.; Goodman, D.M.

    1985-01-01

    This paper addresses the problem of impulse response identification using non-parametric methods. Although the techniques developed herein apply to the truncated, untruncated, and the circulant models, we focus on the truncated model which is useful in certain applications. Two methods of impulse response identification will be presented. The first is based on the minimization of the C_L statistic, which is an estimate of the mean-square prediction error; the second is a Bayesian approach. For both of these methods, we consider the effects of using both the identity matrix and the Laplacian matrix as weights on the energy in the impulse response. In addition, we present a method for estimating the effective length of the impulse response. Estimating the length is particularly important in the truncated case. Finally, we develop a method for estimating the noise variance at the output. Often, prior information on the noise variance is not available, and a good estimate is crucial to the success of estimating the impulse response with a nonparametric technique
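The length-selection idea in this abstract can be sketched numerically. The toy below is an illustrative sketch under assumptions, not the authors' algorithm; all signal names and sizes are invented. It fits a truncated impulse response by least squares for a range of candidate lengths and picks the length that minimizes a Mallows-style C_L estimate of the mean-square prediction error, taking the output-noise variance as known.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a truncated (FIR) system: y = conv(h, u) + noise
h_true = 0.8 ** np.arange(6)           # true impulse response, length 6
n = 200
u = rng.standard_normal(n)
sigma2 = 0.25                          # output-noise variance (assumed known here)
y = np.convolve(u, h_true)[:n] + rng.normal(0.0, np.sqrt(sigma2), n)

def toeplitz_regressor(u, k):
    """Regression matrix whose j-th column is u delayed by j samples."""
    X = np.zeros((len(u), k))
    for j in range(k):
        X[j:, j] = u[:len(u) - j]
    return X

def cp_statistic(y, u, k, sigma2):
    """Mallows-style C_L: residual sum of squares penalized by model order."""
    X = toeplitz_regressor(u, k)
    h_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ h_hat) ** 2)
    return rss / sigma2 - len(y) + 2 * k

# Choose the impulse-response length that minimizes C_L
ks = range(1, 21)
k_best = min(ks, key=lambda k: cp_statistic(y, u, k, sigma2))
print("estimated length:", k_best)
```

With an unknown noise variance, `sigma2` would itself have to be estimated, which is exactly the complication the abstract highlights.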

  12. An Evaluation of Parametric and Nonparametric Models of Fish Population Response.

    Energy Technology Data Exchange (ETDEWEB)

    Haas, Timothy C.; Peterson, James T.; Lee, Danny C.

    1999-11-01

    Predicting the distribution or status of animal populations at large scales often requires the use of broad-scale information describing landforms, climate, vegetation, etc. These data, however, often consist of mixtures of continuous and categorical covariates and nonmultiplicative interactions among covariates, complicating statistical analyses. Using data from the interior Columbia River Basin, USA, we compared four methods for predicting the distribution of seven salmonid taxa using landscape information. Subwatersheds (mean size, 7800 ha) were characterized using a set of 12 covariates describing physiography, vegetation, and current land use. The techniques included generalized logit modeling, classification trees, a nearest neighbor technique, and a modular neural network. We evaluated model performance using out-of-sample prediction accuracy via leave-one-out cross-validation and introduced a computer-intensive Monte Carlo hypothesis testing approach for examining the statistical significance of landscape covariates with the non-parametric methods. We found the modular neural network and the nearest-neighbor techniques to be the most accurate, but they were difficult to summarize in ways that provided ecological insight. The modular neural network also required the most extensive computer resources for model fitting and hypothesis testing. The generalized logit models were readily interpretable, but were the least accurate, possibly due to nonlinear relationships and nonmultiplicative interactions among covariates. Substantial overlap among the statistically significant (P<0.05) covariates for each method suggested that each is capable of detecting similar relationships between responses and covariates. Consequently, we believe that employing one or more methods may provide greater biological insight without sacrificing prediction accuracy.
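The evaluation protocol described above (out-of-sample accuracy via leave-one-out cross-validation) can be sketched as follows. This is a toy illustration on synthetic data, not the study's analysis: a hand-rolled 1-nearest-neighbor classifier (standing in for the paper's nearest-neighbor technique) is scored against a majority-class baseline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "subwatersheds": 2 covariates, binary presence/absence response
n = 120
X = rng.standard_normal((n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(n) > 0).astype(int)

def loo_accuracy_1nn(X, y):
    """Leave-one-out accuracy of a 1-nearest-neighbor classifier."""
    correct = 0
    for i in range(len(y)):
        d = np.sum((X - X[i]) ** 2, axis=1)
        d[i] = np.inf                      # leave observation i out
        correct += y[np.argmin(d)] == y[i]
    return correct / len(y)

def loo_accuracy_majority(y):
    """Baseline: always predict the majority class of the remaining sample."""
    correct = 0
    for i in range(len(y)):
        rest = np.delete(y, i)
        correct += (rest.mean() > 0.5) == y[i]
    return correct / len(y)

acc_nn = loo_accuracy_1nn(X, y)
acc_base = loo_accuracy_majority(y)
print(f"1-NN: {acc_nn:.2f}  majority baseline: {acc_base:.2f}")
```

With real mixtures of continuous and categorical covariates, the distance metric and covariate scaling would matter, and standard machine-learning tooling would normally replace this hand-rolled loop.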

  13. Landscape of Future Accelerators at the Energy and Intensity Frontier

    Energy Technology Data Exchange (ETDEWEB)

    Syphers, M. J. [Northern Illinois U.]; Chattopadhyay, S. [Northern Illinois U.]

    2016-11-21

    An overview is provided of the currently envisaged landscape of charged particle accelerators at the energy and intensity frontiers to explore particle physics beyond the standard model via 1-100 TeV-scale lepton and hadron colliders and multi-Megawatt proton accelerators for short- and long-baseline neutrino experiments. The particle beam physics, associated technological challenges and progress to date for these accelerator facilities (LHC, HL-LHC, future 100 TeV p-p colliders, TeV-scale linear and circular electron-positron colliders, high intensity proton accelerator complex PIP-II for DUNE and future upgrade to PIP-III) are outlined. Potential and prospects for advanced “nonlinear dynamic techniques” at the multi-MW intensity frontier and advanced “plasma-wakefield-based techniques” at the TeV-scale energy frontier are also described.

  14. Frontier search to slow in Indonesia

    International Nuclear Information System (INIS)

    Land, R.

    1992-01-01

    This paper reports that oil and gas exploration in Indonesia likely will begin refocusing on proven areas after 1993, Arthur Andersen and Co.'s Far East oil analyst predicts. Arthur Andersen's James Sales said that disappointing exploration results, outdated production sharing contract (PSC) terms, and low oil prices are discouraging companies from exploring frontier acreage. But Indonesian frontier activity during 1992-93 is likely to remain high because a large number of PSCs awarded in the past 2 years by state-owned Pertamina cover high-risk frontier areas. PSC contractors have disclosed several discoveries on recently awarded frontier tracts. However, the discoveries have been relatively small and far from pipeline infrastructure. With prevailing low oil prices, Pertamina likely will find it difficult to entice companies to extend PSCs or joint operating agreements beyond minimum exploration commitments

  15. Historic Frontier Processes active in Future Space-Based Mineral Extraction

    Science.gov (United States)

    Gray, D. M.

    2000-01-01

    The forces that shaped historic mining frontiers are in many cases not bound by geographic or temporal limits. The forces that helped define historic frontiers are active in today's physical and virtual frontiers, and will be present in future space-based frontiers. While frontiers derived from position and technology are primarily economic in nature, non-economic conditions affect the success or failure of individual frontier endeavors, local "mining camps" and even entire frontiers. Frontiers can be defined as the line of activity that divides the established markets and infrastructure of civilization from the unclaimed resources and potential wealth of a wilderness. At the frontier line, ownership of resources is established. The resource can then be developed using capital, energy and information. In a mining setting, the resource is concentrated for economic shipment to the markets of civilization. Profits from the sale of the resource are then used to fund further development of the resource and/or pay investors. Both positional and technical frontiers develop as a series of generations. The profits from each generation of development provide the capital and/or investment incentive for the next round of development. Without profit, the self-replicating process of frontiers stops.

  16. Migrant decision-making in a frontier landscape

    Science.gov (United States)

    Salerno, Jonathan

    2016-04-01

    Across the tropics, rural farmers and livestock keepers use mobility as an adaptive livelihood strategy. Continued migration to and within frontier areas is widely viewed as a driver of environmental decline and biodiversity loss. Recent scholarship advances our understanding of migration decision-making in the context of changing climate and environments, and in doing so it highlights the variation in migration responses to primarily economic and environmental factors. Building on these insights, this letter investigates past and future migration decisions in a frontier landscape of Tanzania, East Africa. Combining field observations and household data within a multilevel modeling framework, the letter analyzes the explicit importance of social factors relative to economic and environmental factors in driving decisions to migrate or remain. Results indeed suggest that local community ties and non-local social networks drive immobility and anticipated migration, respectively. In addition, positive interactions with local protected natural resource areas promote longer-term residence. Findings shed new light on how frontier areas transition to human-dominated landscapes. This highlights critical links between migration behavior and the conservation of biodiversity and management of natural resources, as well as how migrants evolve to become integrated into communities.

  17. New frontiers for tomorrow's world

    Energy Technology Data Exchange (ETDEWEB)

    Kassler, P [Shell International Petroleum Co. Ltd., London (United Kingdom)]

    1994-12-31

    The conference paper deals with new frontiers and barricades in global economic development and their influence on fuel consumption and energy source development. Topics discussed are incremental energy supply - new frontiers, world car population - new frontiers, OPEC crude production capacity vs call on OPEC, incremental world oil demand by region 1992-2000, oil resource cost curve, progress in seismic 1983-1991, Troll picture, cost reduction in renewables, sustained growth scenario, nuclear electricity capacity - France, OECD road transport fuels - barricades, and energy taxation. 18 figs.

  18. The Distribution of the Sample Minimum-Variance Frontier

    OpenAIRE

    Raymond Kan; Daniel R. Smith

    2008-01-01

    In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...
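The bias result described in this abstract is easy to reproduce in a small Monte Carlo. The sketch below is illustrative only (the paper derives exact finite-sample distributions): it simulates multivariate normal returns and shows that the in-sample variance of the sample global-minimum-variance portfolio systematically understates the population value. All sizes and the diagonal covariance are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(3)

n_assets, n_obs, n_sims = 10, 60, 500
cov_true = 0.04 * np.eye(n_assets)                        # population covariance
ones = np.ones(n_assets)
var_pop = 1.0 / (ones @ np.linalg.inv(cov_true) @ ones)   # population GMV variance

def sample_gmv_variance(returns):
    """In-sample variance of the sample global-minimum-variance portfolio."""
    s = np.cov(returns, rowvar=False)
    s_inv = np.linalg.inv(s)
    w = s_inv @ ones / (ones @ s_inv @ ones)              # sample GMV weights
    return w @ s @ w

draws = [sample_gmv_variance(
             rng.multivariate_normal(np.zeros(n_assets), cov_true, n_obs))
         for _ in range(n_sims)]

print(f"population GMV variance: {var_pop:.4f}")
print(f"mean in-sample estimate: {np.mean(draws):.4f}")   # systematically too low
```

The downward bias shrinks as the number of observations grows relative to the number of assets.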

  19. Nonparametric analysis of blocked ordered categories data: some examples revisited

    Directory of Open Access Journals (Sweden)

    O. Thas

    2006-08-01

    Nonparametric analysis for general block designs can be carried out using Cochran-Mantel-Haenszel (CMH) statistics. We demonstrate this with four examples and note that several well-known nonparametric statistics are special cases of CMH statistics.
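For the special case of 2x2 tables stratified by blocks, the CMH statistic mentioned above can be computed directly. A minimal sketch with made-up block data and no continuity correction; the statistic is referred to a chi-squared distribution with 1 degree of freedom:

```python
import numpy as np
from scipy.stats import chi2

def cmh_test(tables):
    """Cochran-Mantel-Haenszel test for a set of stratified 2x2 tables.

    tables: array-like of shape (K, 2, 2); the K strata play the role of blocks.
    Returns the CMH statistic (1 df) and its p-value.
    """
    t = np.asarray(tables, dtype=float)
    a = t[:, 0, 0]                        # top-left cell in each stratum
    n = t.sum(axis=(1, 2))                # stratum totals
    row1 = t[:, 0, :].sum(axis=1)
    col1 = t[:, :, 0].sum(axis=1)
    expected = row1 * col1 / n            # E[a_k] under no association
    var = row1 * (n - row1) * col1 * (n - col1) / (n ** 2 * (n - 1))
    stat = (a.sum() - expected.sum()) ** 2 / var.sum()
    return stat, chi2.sf(stat, df=1)

# Two blocks with the same direction of association (hypothetical data)
blocks = [[[12, 4], [5, 11]],
          [[10, 6], [4, 12]]]
stat, p = cmh_test(blocks)
print(f"CMH = {stat:.2f}, p = {p:.4f}")
```

Pooling evidence across blocks this way is what lets CMH statistics handle block designs without parametric assumptions.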

  20. Nonparametric inference of network structure and dynamics

    Science.gov (United States)

    Peixoto, Tiago P.

    The network structure of complex systems determines their function and serves as evidence for the evolutionary mechanisms that lie behind them. Despite considerable effort in recent years, it remains an open challenge to formulate general descriptions of the large-scale structure of network systems, and how to reliably extract such information from data. Although many approaches have been proposed, few methods attempt to gauge the statistical significance of the uncovered structures, and hence the majority cannot reliably separate actual structure from stochastic fluctuations. Due to the sheer size and high-dimensionality of many networks, this represents a major limitation that prevents meaningful interpretations of the results obtained with such nonstatistical methods. In this talk, I will show how these issues can be tackled in a principled and efficient fashion by formulating appropriate generative models of network structure that can have their parameters inferred from data. By employing a Bayesian description of such models, the inference can be performed in a nonparametric fashion that does not require any a priori knowledge or ad hoc assumptions about the data. I will show how this approach can be used to perform model comparison, and how hierarchical models yield the most appropriate trade-off between model complexity and quality of fit based on the statistical evidence present in the data. I will also show how this general approach can be elegantly extended to networks with edge attributes, that are embedded in latent spaces, and that change in time. The latter is obtained via a fully dynamic generative network model, based on arbitrary-order Markov chains, that can also be inferred in a nonparametric fashion. Throughout the talk I will illustrate the application of the methods with many empirical networks such as the internet at the autonomous systems level, the global airport network, the network of actors and films, social networks, citations among

  1. Cost efficiency of Japanese steam power generation companies: A Bayesian comparison of random and fixed frontier models

    Energy Technology Data Exchange (ETDEWEB)

    Assaf, A. George [Isenberg School of Management, University of Massachusetts-Amherst, 90 Campus Center Way, Amherst 01002 (United States); Barros, Carlos Pestana [Instituto Superior de Economia e Gestao, Technical University of Lisbon, Rua Miguel Lupi, 20, 1249-078 Lisbon (Portugal); Managi, Shunsuke [Graduate School of Environmental Studies, Tohoku University, 6-6-20 Aramaki-Aza Aoba, Aoba-Ku, Sendai 980-8579 (Japan)

    2011-04-15

    This study analyses and compares the cost efficiency of Japanese steam power generation companies using the fixed and random Bayesian frontier models. We show that it is essential to account for heterogeneity in modelling the performance of energy companies. Results from the model estimation also indicate that restricting CO2 emissions can lead to a decrease in total cost. The study finally discusses the efficiency variations between the energy companies under analysis, and elaborates on the managerial and policy implications of the results. (author)

  2. Nonparametric Inference for Periodic Sequences

    KAUST Repository

    Sun, Ying

    2012-02-01

    This article proposes a nonparametric method for estimating the period and values of a periodic sequence when the data are evenly spaced in time. The period is estimated by a "leave-out-one-cycle" version of cross-validation (CV) and complements the periodogram, a widely used tool for period estimation. The CV method is computationally simple and implicitly penalizes multiples of the smallest period, leading to a "virtually" consistent estimator of integer periods. This estimator is investigated both theoretically and by simulation. We also propose a nonparametric test of the null hypothesis that the data have constant mean against the alternative that the sequence of means is periodic. Finally, our methodology is demonstrated on three well-known time series: the sunspots and lynx trapping data, and the El Niño series of sea surface temperatures. © 2012 American Statistical Association and the American Society for Quality.
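The "leave-out-one-cycle" idea can be illustrated with a simple numerical sketch (an assumption-laden toy, not the article's exact estimator): for each candidate integer period, each cycle is predicted by the average of the remaining cycles at the same phase, and the period minimizing the CV error is selected.

```python
import numpy as np

rng = np.random.default_rng(2)

# Evenly spaced data with true integer period 7
period, n = 7, 700
t = np.arange(n)
x = np.sin(2 * np.pi * t / period) + 0.2 * rng.standard_normal(n)

def cv_error(x, p):
    """Leave-out-one-cycle CV error for candidate period p."""
    m = len(x) // p                      # number of complete cycles
    if m < 2:
        return np.inf
    cycles = x[:m * p].reshape(m, p)
    # Predict each cycle from the mean of the other cycles at the same phase
    loo_mean = (cycles.sum(axis=0) - cycles) / (m - 1)
    return np.mean((cycles - loo_mean) ** 2)

candidates = range(2, 26)
p_hat = min(candidates, key=lambda p: cv_error(x, p))
print("estimated period:", p_hat)
```

Non-multiples of the true period fold the signal incoherently and incur large CV errors; among multiples, the smaller number of complete cycles tends to raise the CV error, which is the implicit penalty on multiples that the abstract mentions.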

  3. A tradeoff frontier for global nitrogen use and cereal production

    International Nuclear Information System (INIS)

    Mueller, Nathaniel D; West, Paul C; Gerber, James S; MacDonald, Graham K; Foley, Jonathan A; Polasky, Stephen

    2014-01-01

    Nitrogen fertilizer use across the world’s croplands enables high-yielding agricultural production, but does so at considerable environmental cost. Imbalances between nitrogen applied and nitrogen used by crops contribute to excess nitrogen in the environment, with negative consequences for water quality, air quality, and climate change. Here we utilize crop input-yield models to investigate how to minimize nitrogen application while achieving crop production targets. We construct a tradeoff frontier that estimates the minimum nitrogen fertilizer needed to produce a range of maize, wheat, and rice production levels. Additionally, we explore potential environmental consequences by calculating excess nitrogen along the frontier using a soil surface nitrogen balance model. We find considerable opportunity to achieve greater production and decrease both nitrogen application and post-harvest excess nitrogen. Our results suggest that current (circa 2000) levels of cereal production could be achieved with ∼50% less nitrogen application and ∼60% less excess nitrogen. If current global nitrogen application were held constant but spatially redistributed, production could increase ∼30%. If current excess nitrogen were held constant, production could increase ∼40%. Efficient spatial patterns of nitrogen use on the frontier involve substantial reductions in many high-use areas and moderate increases in many low-use areas. Such changes may be difficult to achieve in practice due to infrastructure, economic, or political constraints. Increases in agronomic efficiency would expand the frontier to allow greater production and environmental gains

  4. Analyzing cost efficient production behavior under economies of scope : A nonparametric methodology

    NARCIS (Netherlands)

    Cherchye, L.J.H.; de Rock, B.; Vermeulen, F.M.P.

    2008-01-01

    In designing a production model for firms that generate multiple outputs, we take as a starting point that such multioutput production refers to economies of scope, which in turn originate from joint input use and input externalities. We provide a nonparametric characterization of cost-efficient

  5. Bayesian Nonparametric Longitudinal Data Analysis.

    Science.gov (United States)

    Quintana, Fernando A; Johnson, Wesley O; Waetjen, Elaine; Gold, Ellen

    2016-01-01

    Practical Bayesian nonparametric methods have been developed across a wide variety of contexts. Here, we develop a novel statistical model that generalizes standard mixed models for longitudinal data that include flexible mean functions as well as combined compound symmetry (CS) and autoregressive (AR) covariance structures. AR structure is often specified through the use of a Gaussian process (GP) with covariance functions that allow longitudinal data to be more correlated if they are observed closer in time than if they are observed farther apart. We allow for AR structure by considering a broader class of models that incorporates a Dirichlet Process Mixture (DPM) over the covariance parameters of the GP. We are able to take advantage of modern Bayesian statistical methods in making full predictive inferences about characteristics of longitudinal profiles and their differences across covariate combinations. We also take advantage of the generality of our model, which provides for estimation of a variety of covariance structures. We observe that models that fail to incorporate CS or AR structure can result in very poor estimation of a covariance or correlation matrix. In our illustration using hormone data observed on women through the menopausal transition, biology dictates the use of a generalized family of sigmoid functions as a model for time trends across subpopulation categories.

  6. On the redistribution of existing inputs using the spherical frontier dea model

    Directory of Open Access Journals (Sweden)

    José Virgilio Guedes de Avellar

    2010-04-01

    The Spherical Frontier DEA Model (SFM) (Avellar et al., 2007) was developed to be used when one wants to fairly distribute a new and fixed input to a group of Decision Making Units (DMUs). SFM's basic idea is to distribute this new and fixed input in such a way that every DMU will be placed on an efficiency frontier with a spherical shape. We use SFM to analyze the problems that appear when one wants to redistribute an already existing input to a group of DMUs such that the total sum of this input will remain constant. We also analyze the case in which this total sum may vary.

  7. Non-parametric tests of productive efficiency with errors-in-variables

    NARCIS (Netherlands)

    Kuosmanen, T.K.; Post, T.; Scholtes, S.

    2007-01-01

    We develop a non-parametric test of productive efficiency that accounts for errors-in-variables, following the approach of Varian [1985. Nonparametric analysis of optimizing behavior with measurement error. Journal of Econometrics 30(1/2), 445-458]. The test is based on the general Pareto-Koopmans

  8. Fundamental Physics at the Intensity Frontier

    CERN Document Server

    Hewett, J.L.; Brock, R.; Butler, J.N.; Casey, B.C.K.; Collar, J.; de Gouvea, A.; Essig, R.; Grossman, Y.; Haxton, W.; Jaros, J.A.; Jung, C.K.; Lu, Z.T.; Pitts, K.; Ligeti, Z.; Patterson, J.R.; Ramsey-Musolf, M.; Ritchie, J.L.; Roodman, A.; Scholberg, K.; Wagner, C.E.M.; Zeller, G.P.; Aefsky, S.; Afanasev, A.; Agashe, K.; Albright, C.; Alonso, J.; Ankenbrandt, C.; Aoki, M.; Arguelles, C.A.; Arkani-Hamed, N.; Armendariz, J.R.; Armendariz-Picon, C.; Arrieta Diaz, E.; Asaadi, J.; Asner, D.M.; Babu, K.S.; Bailey, K.; Baker, O.; Balantekin, B.; Baller, B.; Bass, M.; Batell, B.; Beacham, J.; Behr, J.; Berger, N.; Bergevin, M.; Berman, E.; Bernstein, R.; Bevan, A.J.; Bishai, M.; Blanke, M.; Blessing, S.; Blondel, A.; Blum, T.; Bock, G.; Bodek, A.; Bonvicini, G.; Bossi, F.; Boyce, J.; Breedon, R.; Breidenbach, M.; Brice, S.J.; Briere, R.A.; Brodsky, S.; Bromberg, C.; Bross, A.; Browder, T.E.; Bryman, D.A.; Buckley, M.; Burnstein, R.; Caden, E.; Campana, P.; Carlini, R.; Carosi, G.; Castromonte, C.; Cenci, R.; Chakaberia, I.; Chen, Mu-Chun; Cheng, C.H.; Choudhary, B.; Christ, N.H.; Christensen, E.; Christy, M.E.; Chupp, T.E.; Church, E.; Cline, D.B.; Coan, T.E.; Coloma, P.; Comfort, J.; Coney, L.; Cooper, J.; Cooper, R.J.; Cowan, R.; Cowen, D.F.; Cronin-Hennessy, D.; Datta, A.; Davies, G.S.; Demarteau, M.; DeMille, D.P.; Denig, A.; Dermisek, R.; Deshpande, A.; Dewey, M.S.; Dharmapalan, R.; Dhooghe, J.; Dietrich, M.R.; Diwan, M.; Djurcic, Z.; Dobbs, S.; Duraisamy, M.; Dutta, Bhaskar; Duyang, H.; Dwyer, D.A.; Eads, M.; Echenard, B.; Elliott, S.R.; Escobar, C.; Fajans, J.; Farooq, S.; Faroughy, C.; Fast, J.E.; Feinberg, B.; Felde, J.; Feldman, G.; Fierlinger, P.; Fileviez Perez, P.; Filippone, B.; Fisher, P.; Flemming, B.T.; Flood, K.T.; Forty, R.; Frank, M.J.; Freyberger, A.; Friedland, A.; Gandhi, R.; Ganezer, K.S.; Garcia, A.; Garcia, F.G.; Gardner, S.; Garrison, L.; Gasparian, A.; Geer, S.; Gehman, V.M.; Gershon, T.; Gilchriese, M.; Ginsberg, C.; Gogoladze, I.; 
Gonderinger, M.; Goodman, M.; Gould, H.; Graham, M.; Graham, P.W.; Gran, R.; Grange, J.; Gratta, G.; Green, J.P.; Greenlee, H.; Group, R.C.; Guardincerri, E.; Gudkov, V.; Guenette, R.; Haas, A.; Hahn, A.; Han, T.; Handler, T.; Hardy, J.C.; Harnik, R.; Harris, D.A.; Harris, F.A.; Harris, P.G.; Hartnett, J.; He, B.; Heckel, B.R.; Heeger, K.M.; Henderson, S.; Hertzog, D.; Hill, R.; Hinds, E.A.; Hitlin, D.G.; Holt, R.J.; Holtkamp, N.; Horton-Smith, G.; Huber, P.; Huelsnitz, W.; Imber, J.; Irastorza, I.; Jaeckel, J.; Jaegle, I.; James, C.; Jawahery, A.; Jensen, D.; Jessop, C.P.; Jones, B.; Jostlein, H.; Junk, T.; Kagan, A.L.; Kalita, M.; Kamyshkov, Y.; Kaplan, D.M.; Karagiorgi, G.; Karle, A.; Katori, T.; Kayser, B.; Kephart, R.; Kettell, S.; Kim, Y.K.; Kirby, M.; Kirch, K.; Klein, J.; Kneller, J.; Kobach, A.; Kohl, M.; Kopp, J.; Kordosky, M.; Korsch, W.; Kourbanis, I.; Krisch, A.D.; Krizan, P.; Kronfeld, A.S.; Kulkarni, S.; Kumar, K.S.; Kuno, Y.; Kutter, T.; Lachenmaier, T.; Lamm, M.; Lancaster, J.; Lancaster, M.; Lane, C.; Lang, K.; Langacker, P.; Lazarevic, S.; Le, T.; Lee, K.; Lesko, K.T.; Li, Y.; Lindgren, M.; Lindner, A.; Link, J.; Lissauer, D.; Littenberg, L.S.; Littlejohn, B.; Liu, C.Y.; Loinaz, W.; Lorenzon, W.; Louis, W.C.; Lozier, J.; Ludovici, L.; Lueking, L.; Lunardini, C.; MacFarlane, D.B.; Machado, P.A.N.; Mackenzie, P.B.; Maloney, J.; Marciano, W.J.; Marsh, W.; Marshak, M.; Martin, J.W.; Mauger, C.; McFarland, K.S.; McGrew, C.; McLaughlin, G.; McKeen, D.; McKeown, R.; Meadows, B.T.; Mehdiyev, R.; Melconian, D.; Merkel, H.; Messier, M.; Miller, J.P.; Mills, G.; Minamisono, U.K.; Mishra, S.R.; Mocioiu, I.; Sher, S.Moed; Mohapatra, R.N.; Monreal, B.; Moore, C.D.; Morfin, J.G.; Mousseau, J.; Moustakas, L.A.; Mueller, G.; Mueller, P.; Muether, M.; Mumm, H.P.; Munger, C.; Murayama, H.; Nath, P.; Naviliat-Cuncin, O.; Nelson, J.K.; Neuffer, D.; Nico, J.S.; Norman, A.; Nygren, D.; Obayashi, Y.; O'Connor, T.P.; Okada, Y.; Olsen, J.; Orozco, L.; Orrell, J.L.; Osta, 
J.; Pahlka, B.; Paley, J.; Papadimitriou, V.; Papucci, M.; Parke, S.; Parker, R.H.; Parsa, Z.; Partyka, K.; Patch, A.; Pati, J.C.; Patterson, R.B.; Pavlovic, Z.; Paz, Gil; Perdue, G.N.; Perevalov, D.; Perez, G.; Petti, R.; Pettus, W.; Piepke, A.; Pivovaroff, M.; Plunkett, R.; Polly, C.C.; Pospelov, M.; Povey, R.; Prakesh, A.; Purohit, M.V.; Raby, S.; Raaf, J.L.; Rajendran, R.; Rajendran, S.; Rameika, G.; Ramsey, R.; Rashed, A.; Ratcliff, B.N.; Rebel, B.; Redondo, J.; Reimer, P.; Reitzner, D.; Ringer, F.; Ringwald, A.; Riordan, S.; Roberts, B.L.; Roberts, D.A.; Robertson, R.; Robicheaux, F.; Rominsky, M.; Roser, R.; Rosner, J.L.; Rott, C.; Rubin, P.; Saito, N.; Sanchez, M.; Sarkar, S.; Schellman, H.; Schmidt, B.; Schmitt, M.; Schmitz, D.W.; Schneps, J.; Schopper, A.; Schuster, P.; Schwartz, A.J.; Schwarz, M.; Seeman, J.; Semertzidis, Y.K.; Seth, K.K.; Shafi, Q.; Shanahan, P.; Sharma, R.; Sharpe, S.R.; Shiozawa, M.; Shiltsev, V.; Sigurdson, K.; Sikivie, P.; Singh, J.; Sivers, D.; Skwarnicki, T.; Smith, N.; Sobczyk, J.; Sobel, H.; Soderberg, M.; Song, Y.H.; Soni, A.; Souder, P.; Sousa, A.; Spitz, J.; Stancari, M.; Stavenga, G.C.; Steffen, J.H.; Stepanyan, S.; Stoeckinger, D.; Stone, S.; Strait, J.; Strassler, M.; Sulai, I.A.; Sundrum, R.; Svoboda, R.; Szczerbinska, B.; Szelc, A.; Takeuchi, T.; Tanedo, P.; Taneja, S.; Tang, J.; Tanner, D.B.; Tayloe, R.; Taylor, I.; Thomas, J.; Thorn, C.; Tian, X.; Tice, B.G.; Tobar, M.; Tolich, N.; Toro, N.; Towner, I.S.; Tsai, Y.; Tschirhart, R.; Tunnell, C.D.; Tzanov, M.; Upadhye, A.; Urheim, J.; Vahsen, S.; Vainshtein, A.; Valencia, E.; Van de Water, R.G.; Van de Water, R.S.; Velasco, M.; Vogel, J.; Vogel, P.; Vogelsang, W.; Wah, Y.W.; Walker, D.; Weiner, N.; Weltman, A.; Wendell, R.; Wester, W.; Wetstein, M.; White, C.; Whitehead, L.; Whitmore, J.; Widmann, E.; Wiedemann, G.; Wilkerson, J.; Wilkinson, G.; Wilson, P.; Wilson, R.J.; Winter, W.; Wise, M.B.; Wodin, J.; Wojcicki, S.; Wojtsekhowski, B.; Wongjirad, T.; Worcester, E.; 
Wurtele, J.; Xin, T.; Xu, J.; Yamanaka, T.; Yamazaki, Y.; Yavin, I.; Yeck, J.; Yeh, M.; Yokoyama, M.; Yoo, J.; Young, A.; Zimmerman, E.; Zioutas, K.; Zisman, M.; Zupan, J.; Zwaska, R.; Intensity Frontier Workshop

    2012-01-01

    The Proceedings of the 2011 workshop on Fundamental Physics at the Intensity Frontier. Science opportunities at the intensity frontier are identified and described in the areas of heavy quarks, charged leptons, neutrinos, proton decay, new light weakly-coupled particles, and nucleons, nuclei, and atoms.

  9. Analyzing Cost Efficient Production Behavior Under Economies of Scope : A Nonparametric Methodology

    NARCIS (Netherlands)

    Cherchye, L.J.H.; de Rock, B.; Vermeulen, F.M.P.

    2006-01-01

    In designing a production model for firms that generate multiple outputs, we take as a starting point that such multi-output production refers to economies of scope, which in turn originate from joint input use and input externalities. We provide a nonparametric characterization of cost efficient

  10. Nonparametric Monitoring for Geotechnical Structures Subject to Long-Term Environmental Change

    Directory of Open Access Journals (Sweden)

    Hae-Bum Yun

    2011-01-01

    A nonparametric, data-driven methodology of monitoring for geotechnical structures subject to long-term environmental change is discussed. Avoiding physical assumptions or excessive simplification of the monitored structures, the nonparametric monitoring methodology presented in this paper provides reliable performance-related information particularly when the collection of sensor data is limited. For the validation of the nonparametric methodology, a field case study was performed using a full-scale retaining wall, which had been monitored for three years using three tilt gauges. Using the very limited sensor data, it is demonstrated that important performance-related information, such as drainage performance and sensor damage, could be disentangled from significant daily, seasonal and multiyear environmental variations. Extensive literature review on recent developments of parametric and nonparametric data processing techniques for geotechnical applications is also presented.

  11. An Arbitrary Benchmark CAPM: One Additional Frontier Portfolio is Sufficient

    OpenAIRE

    Ekern, Steinar

    2008-01-01

    First draft: July 16, 2008. This version: October 7, 2008. The benchmark CAPM linearly relates the expected returns on an arbitrary asset, an arbitrary benchmark portfolio, and an arbitrary MV frontier portfolio. The benchmark is not required to be on the frontier and may be non-perfectly correlated with the frontier portfolio. The benchmark CAPM extends and generalizes previous CAPM formulations, including the zero beta, two correlated frontier portfolios, riskless augmented frontier, an...

  12. Notes on the Implementation of Non-Parametric Statistics within the Westinghouse Realistic Large Break LOCA Evaluation Model (ASTRUM)

    International Nuclear Information System (INIS)

    Frepoli, Cesare; Oriani, Luca

    2006-01-01

    In recent years, non-parametric or order statistics methods have been widely used to assess the impact of the uncertainties within Best-Estimate LOCA evaluation models. The bounding of the uncertainties is achieved with a direct Monte Carlo sampling of the uncertainty attributes, with the minimum trial number selected to 'stabilize' the estimation of the critical output values (peak cladding temperature (PCT), local maximum oxidation (LMO), and core-wide oxidation (CWO)). A non-parametric order statistics uncertainty analysis was recently implemented within the Westinghouse Realistic Large Break LOCA evaluation model, also referred to as 'Automated Statistical Treatment of Uncertainty Method' (ASTRUM). The implementation or interpretation of order statistics in safety analysis is not fully consistent within the industry. This has led to an extensive public debate among regulators and researchers which can be found in the open literature. The USNRC-approved Westinghouse method follows a rigorous implementation of the order statistics theory, which leads to the execution of 124 simulations within a Large Break LOCA analysis. This is a solid approach which guarantees that a bounding value (at 95% probability) of the 95th percentile for each of the three 10 CFR 50.46 ECCS design acceptance criteria (PCT, LMO and CWO) is obtained. The objective of this paper is to provide additional insights on the ASTRUM statistical approach, with a more in-depth analysis of pros and cons of the order statistics and of the Westinghouse approach in the implementation of this statistical methodology. (authors)
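The 124-run figure follows from first-order one-sided tolerance limits for three outputs at 95% probability / 95% confidence in the standard Wilks/Guba-Makai formulation. A short sketch of that textbook calculation (not Westinghouse's proprietary implementation) finds the smallest sample size with the binomial distribution:

```python
from scipy.stats import binom

def min_runs(p_outputs, coverage=0.95, confidence=0.95):
    """Smallest N such that the top order statistics of N runs jointly bound
    the `coverage` quantiles of `p_outputs` outputs with `confidence`
    (first-order one-sided tolerance limits, Wilks/Guba-Makai)."""
    n = p_outputs
    # Confidence = P(at most N - p observations fall below the coverage quantile
    # in every output's worst case) = Binom(N, coverage) CDF at N - p.
    while binom.cdf(n - p_outputs, n, coverage) < confidence:
        n += 1
    return n

for p in (1, 2, 3):
    print(p, "output(s):", min_runs(p))
```

This reproduces the familiar 59, 93, and 124 run counts for one, two, and three acceptance criteria, the last matching the 124 simulations cited above.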

  13. Report in the Energy and Intensity Frontiers, and Theoretical at Northwestern University

    Energy Technology Data Exchange (ETDEWEB)

    Velasco, Mayda [Northwestern Univ., Evanston, IL (United States); Schmitt, Michael [Northwestern Univ., Evanston, IL (United States); deGouvea, Andre [Northwestern Univ., Evanston, IL (United States); Low, Ian [Northwestern Univ., Evanston, IL (United States); Petriello, Frank [Northwestern Univ., Evanston, IL (United States); Schellman, Heidi [Northwestern Univ., Evanston, IL (United States)

    2016-03-31

    The Northwestern (NU) Particle Physics (PP) group involved in this report is active in all the following priority areas: the Energy and Intensity Frontiers. The group is led by two full professors in experimental physics (Schmitt and Velasco), three full professors in theoretical physics (de Gouvea, Low and Petriello), and Heidi Schellman, who is now at Oregon State. Low and Petriello hold joint appointments with the HEP Division at Argonne National Laboratory. The theoretical PP research focuses on different aspects of PP phenomenology. de Gouvea dedicates a large fraction of his research efforts to understanding the origin of neutrino masses and neutrino properties, uncovering other new phenomena, and investigating connections between neutrino physics and other aspects of PP. Low works on Higgs physics as well as new theories beyond the Standard Model. Petriello pursues a research program in precision QCD and its associated collider phenomenology. The main goal of this effort is to improve the Standard Model predictions for important LHC observables in order to enable discoveries of new physics. In recent years, the emphasis of experimental PP at NU has been on collider physics. NU is expanding its efforts in new directions in both the Intensity and the Cosmic Frontiers (not discussed in this report). In the Intensity Frontier, Schmitt has started a new effort on Mu2e. He was accepted as a collaborator in April 2015 and is identified with important projects. In the Energy Frontier, Hahn, Schmitt and Velasco continue to have a significant impact and have expanded their CMS program to include R&D for the real-time L1 tracking trigger and the high granularity calorimeter needed for the high-luminosity LHC. Hahn is supported by an independent DOE Career Award and his work will not be discussed in this document. The NU analysis effort includes searches for rare and forbidden decays of the Higgs boson, Z boson and top quark, dark matter, and other physics topics beyond the Standard Model. Four

  14. The Center for Frontiers of Subsurface Energy Security (A 'Life at the Frontiers of Energy Research' contest entry from the 2011 Energy Frontier Research Centers (EFRCs) Summit and Forum)

    International Nuclear Information System (INIS)

    Pope, Gary A.

    2011-01-01

    'The Center for Frontiers of Subsurface Energy Security (CFSES)' was submitted to the 'Life at the Frontiers of Energy Research' video contest at the 2011 Science for Our Nation's Energy Future: Energy Frontier Research Centers (EFRCs) Summit and Forum. Twenty-six EFRCs created short videos to highlight their mission and their work. CFSES is directed by Gary A. Pope at the University of Texas at Austin and partners with Sandia National Laboratories. The Office of Basic Energy Sciences in the U.S. Department of Energy's Office of Science established the 46 Energy Frontier Research Centers (EFRCs) in 2009. These collaboratively-organized centers conduct fundamental research focused on 'grand challenges' and use-inspired 'basic research needs' recently identified in major strategic planning efforts by the scientific community. The overall purpose is to accelerate scientific progress toward meeting the nation's critical energy challenges.

  15. Mass Modeling of Frontier Fields Cluster MACS J1149.5+2223 Using Strong and Weak Lensing

    Science.gov (United States)

    Finney, Emily Quinn; Bradač, Maruša; Huang, Kuang-Han; Hoag, Austin; Morishita, Takahiro; Schrabback, Tim; Treu, Tommaso; Borello Schmidt, Kasper; Lemaux, Brian C.; Wang, Xin; Mason, Charlotte

    2018-05-01

    We present a gravitational-lensing model of MACS J1149.5+2223 using ultra-deep Hubble Frontier Fields imaging data and spectroscopic redshifts from HST grism and Very Large Telescope (VLT)/MUSE spectroscopic data. We create total mass maps using 38 multiple images (13 sources) and 608 weak-lensing galaxies, as well as 100 multiple images of 31 star-forming regions in the galaxy that hosts supernova Refsdal. We find good agreement with a range of recent models within the HST field of view. We present a map of the ratio of projected stellar mass to total mass (f ⋆) and find that the stellar mass fraction for this cluster peaks on the primary BCG. Averaging within a radius of 0.3 Mpc, we obtain a value of f ⋆ = 0.012 (+0.004, −0.003), consistent with other recent results for this ratio in cluster environments, though with a large global error (up to δf ⋆ = 0.005) primarily due to the choice of IMF. We compare values of f ⋆ and measures of star formation efficiency for this cluster to other Hubble Frontier Fields clusters studied in the literature, finding that MACS1149 has a higher stellar mass fraction than these other clusters but a star formation efficiency typical of massive clusters.

  16. portfolio optimization based on nonparametric estimation methods

    Directory of Open Access Journals (Sweden)

    mahsa ghandehari

    2017-03-01

    One of the major issues investors face in capital markets is deciding which stocks to invest in and selecting an optimal portfolio. This process is carried out by assessing risk and expected return. In the portfolio selection problem, if asset returns are normally distributed, variance and standard deviation are used as the risk measure. However, expected asset returns are not necessarily normal and sometimes differ dramatically from the normal distribution. This paper introduces conditional value at risk (CVaR) as a risk measure in a nonparametric framework and, for a given expected return, derives the optimal portfolio; this method is compared with the linear programming method. The data used in this study consist of monthly returns of 15 companies selected during the winter of 1392 from the top 50 companies in the Tehran Stock Exchange, with returns considered from April of 1388 to June of 1393 (Iranian calendar). The results of this study show the superiority of the nonparametric method over the linear programming method, and the nonparametric method is much faster than the linear programming method.
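
The empirical (historical) CVaR that such a nonparametric framework relies on can be estimated directly from sample returns without any distributional assumption. A minimal sketch with hypothetical data; the portfolio-weight optimization step described in the abstract is omitted, and the function name is illustrative:

```python
import numpy as np

def empirical_cvar(returns, alpha=0.95):
    """Nonparametric (historical) CVaR: the average loss in the worst
    (1 - alpha) tail of the empirical return distribution."""
    losses = -np.asarray(returns, dtype=float)   # losses are negated returns
    var = np.quantile(losses, alpha)             # empirical Value-at-Risk
    tail = losses[losses >= var]                 # worst-case scenarios only
    return tail.mean()

rng = np.random.default_rng(0)
monthly_returns = rng.normal(0.01, 0.05, size=60)   # 5 years of monthly data
print(round(empirical_cvar(monthly_returns), 4))
```

Because CVaR averages over the whole tail rather than reading off a single quantile, it is a coherent risk measure and remains meaningful for heavy-tailed, non-normal return distributions.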

  17. MEASURING DARK MATTER PROFILES NON-PARAMETRICALLY IN DWARF SPHEROIDALS: AN APPLICATION TO DRACO

    International Nuclear Information System (INIS)

    Jardel, John R.; Gebhardt, Karl; Fabricius, Maximilian H.; Williams, Michael J.; Drory, Niv

    2013-01-01

    We introduce a novel implementation of orbit-based (or Schwarzschild) modeling that allows dark matter density profiles to be calculated non-parametrically in nearby galaxies. Our models require no assumptions to be made about velocity anisotropy or the dark matter profile. The technique can be applied to any dispersion-supported stellar system, and we demonstrate its use by studying the Local Group dwarf spheroidal galaxy (dSph) Draco. We use existing kinematic data at larger radii and also present 12 new radial velocities within the central 13 pc obtained with the VIRUS-W integral field spectrograph on the 2.7 m telescope at McDonald Observatory. Our non-parametric Schwarzschild models find strong evidence that the dark matter profile in Draco is cuspy for 20 ≤ r ≤ 700 pc. The profile for r ≥ 20 pc is well fit by a power law with slope α = –1.0 ± 0.2, consistent with predictions from cold dark matter simulations. Our models confirm that, despite its low baryon content relative to other dSphs, Draco lives in a massive halo.

  18. A Bayesian nonparametric approach to causal inference on quantiles.

    Science.gov (United States)

    Xu, Dandan; Daniels, Michael J; Winterstein, Almut G

    2018-02-25

    We propose a Bayesian nonparametric approach (BNP) for causal inference on quantiles in the presence of many confounders. In particular, we define relevant causal quantities and specify BNP models to avoid bias from restrictive parametric assumptions. We first use Bayesian additive regression trees (BART) to model the propensity score and then construct the distribution of potential outcomes given the propensity score using a Dirichlet process mixture (DPM) of normals model. We thoroughly evaluate the operating characteristics of our approach and compare it to Bayesian and frequentist competitors. We use our approach to answer an important clinical question involving acute kidney injury using electronic health records. © 2018, The International Biometric Society.

  19. Comparing parametric and nonparametric regression methods for panel data

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    We investigate and compare the suitability of parametric and non-parametric stochastic regression methods for analysing production technologies and the optimal firm size. Our theoretical analysis shows that the most commonly used functional forms in empirical production analysis, Cobb-Douglas and Translog, are unsuitable for analysing the optimal firm size. We show that the Translog functional form implies an implausible linear relationship between the (logarithmic) firm size and the elasticity of scale, where the slope is artificially related to the substitutability between the inputs. The practical applicability of the parametric and non-parametric regression methods is scrutinised and compared by an empirical example: we analyse the production technology and investigate the optimal size of Polish crop farms based on a firm-level balanced panel data set. A nonparametric specification test ...
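
The linear relationship claimed for the Translog form can be checked directly: differentiating the Translog specification gives an elasticity of scale that is linear in the log inputs, and hence in the (logarithmic) firm size along an input ray. The notation below is generic, not taken from the paper:

```latex
\ln y = \alpha_0 + \sum_i \beta_i \ln x_i
      + \tfrac{1}{2}\sum_i \sum_j \gamma_{ij} \ln x_i \ln x_j ,
\qquad
\varepsilon \;\equiv\; \sum_i \frac{\partial \ln y}{\partial \ln x_i}
      \;=\; \sum_i \beta_i + \sum_i \sum_j \gamma_{ij} \ln x_j .
```

Since the second expression is affine in each $\ln x_j$, the elasticity of scale $\varepsilon$ can only vary linearly with log firm size, which is the restriction the authors criticise.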

  20. Statistical analysis of water-quality data containing multiple detection limits II: S-language software for nonparametric distribution modeling and hypothesis testing

    Science.gov (United States)

    Lee, L.; Helsel, D.

    2007-01-01

    Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data, perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply censored data is the Kaplan-Meier (K-M) method. This method has seen widespread use in the medical sciences within a general framework termed "survival analysis", where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and related confidence limits computation. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation and interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
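
A minimal sketch of the product-limit (Kaplan-Meier) estimator, together with the standard trick of flipping left-censored concentrations about a constant larger than every value so that "below detection limit" becomes right-censoring. The function name, flip constant, and data are illustrative only, and ties are handled just by exact equality:

```python
import numpy as np

def kaplan_meier(times, observed):
    """Product-limit estimate S(t) at each distinct event time.
    times: observation times; observed: True if event, False if right-censored."""
    times = np.asarray(times, float)
    observed = np.asarray(observed, bool)
    s, surv = 1.0, {}
    for t in np.unique(times[observed]):
        at_risk = np.sum(times >= t)            # still under observation at t
        deaths = np.sum((times == t) & observed)
        s *= 1.0 - deaths / at_risk             # product-limit update
        surv[t] = s
    return surv

# Left-censored concentrations: flip about a constant M above every value,
# so "< detection limit" becomes right-censored, then apply K-M as usual.
conc = np.array([0.5, 1.0, 1.0, 2.0, 3.0])             # hypothetical data
detected = np.array([False, True, True, True, False])  # False = nondetect
M = 10.0
km = kaplan_meier(M - conc, detected)
print(km)
```

The resulting survival function of the flipped values is the empirical cumulative distribution function of the original concentrations, evaluated without any distributional assumption, which is exactly the ECDF machinery the software described above builds on.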

  1. Adaptive nonparametric estimation for Lévy processes observed at low frequency

    OpenAIRE

    Kappus, Johanna

    2013-01-01

    This article deals with adaptive nonparametric estimation for Lévy processes observed at low frequency. For general linear functionals of the Lévy measure, we construct kernel estimators, provide upper risk bounds and derive rates of convergence under regularity assumptions. Our focus lies on the adaptive choice of the bandwidth, using model selection techniques. We face here a non-standard problem of model selection with unknown variance. A new approach towards this problem is proposed, ...

  2. Frontiers in fusion research

    CERN Document Server

    Kikuchi, Mitsuru

    2011-01-01

    Frontiers in Fusion Research provides a systematic overview of the latest physical principles of fusion and plasma confinement. It is primarily devoted to the principle of magnetic plasma confinement, that has been systematized through 50 years of fusion research. Frontiers in Fusion Research begins with an introduction to the study of plasma, discussing the astronomical birth of hydrogen energy and the beginnings of human attempts to harness the Sun's energy for use on Earth. It moves on to chapters that cover a variety of topics such as: * charged particle motion, * plasma kinetic theory, *

  3. Nonparametric estimation in an "illness-death" model when all transition times are interval censored

    DEFF Research Database (Denmark)

    Frydman, Halina; Gerds, Thomas; Grøn, Randi

    2013-01-01

    We develop nonparametric maximum likelihood estimation for the parameters of an irreversible Markov chain on states {0,1,2} from the observations with interval censored times of 0 → 1, 0 → 2 and 1 → 2 transitions. The distinguishing aspect of the data is that, in addition to all transition times ...

  4. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists have questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations with small sample sizes. We used a pooled method in the nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling method to the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test while maintaining type I error probability for any conditions except for Cauchy and extreme variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than other alternatives. The nonparametric bootstrap test provided benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling method for comparing paired or unpaired means and for validating the one way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
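
The pooled-resampling idea can be sketched as follows: both bootstrap samples are drawn from the pooled data, so the null hypothesis of equal means holds by construction, and the observed t statistic is compared against this null distribution. This is an illustrative reconstruction under that assumption, not the authors' implementation:

```python
import numpy as np

def pooled_bootstrap_t_test(x, y, n_boot=10_000, seed=0):
    """Two-sample bootstrap test with pooled resampling: both resamples are
    drawn from the pooled data, which enforces the null of equal means."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])

    def t_stat(a, b):
        # Welch-type statistic: unequal variances allowed
        va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
        return (a.mean() - b.mean()) / np.sqrt(va + vb)

    t_obs = t_stat(x, y)
    t_null = np.empty(n_boot)
    for i in range(n_boot):
        bx = rng.choice(pooled, size=len(x), replace=True)
        by = rng.choice(pooled, size=len(y), replace=True)
        t_null[i] = t_stat(bx, by)
    return np.mean(np.abs(t_null) >= abs(t_obs))   # two-sided p-value

x = [4.1, 5.2, 6.3, 5.8, 4.9]          # hypothetical small samples
y = [7.4, 8.1, 6.9, 7.7, 8.5]
print(pooled_bootstrap_t_test(x, y))
```

Because each resample has the full original group size, the approach avoids the degenerate resamples that plague per-group bootstrapping when n is very small.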

  5. Energy Frontier Research Center Materials Science of Actinides (A 'Life at the Frontiers of Energy Research' contest entry from the 2011 Energy Frontier Research Centers (EFRCs) Summit and Forum)

    International Nuclear Information System (INIS)

    Burns, Peter

    2011-01-01

    'Energy Frontier Research Center Materials Science of Actinides' was submitted by the EFRC for Materials Science of Actinides (MSA) to the 'Life at the Frontiers of Energy Research' video contest at the 2011 Science for Our Nation's Energy Future: Energy Frontier Research Centers (EFRCs) Summit and Forum. Twenty-six EFRCs created short videos to highlight their mission and their work. MSA is directed by Peter Burns at the University of Notre Dame, and is a partnership of scientists from ten institutions. The Office of Basic Energy Sciences in the U.S. Department of Energy's Office of Science established the 46 Energy Frontier Research Centers (EFRCs) in 2009. These collaboratively-organized centers conduct fundamental research focused on 'grand challenges' and use-inspired 'basic research needs' recently identified in major strategic planning efforts by the scientific community. The overall purpose is to accelerate scientific progress toward meeting the nation's critical energy challenges.

  6. Estimation of the lifetime distribution of mechatronic systems in the presence of a covariate: A comparison among parametric, semiparametric and nonparametric models

    International Nuclear Information System (INIS)

    Bobrowski, Sebastian; Chen, Hong; Döring, Maik; Jensen, Uwe; Schinköthe, Wolfgang

    2015-01-01

    In practice, manufacturers may have large amounts of failure data for similar products built on the same technology basis and operated under different conditions. One can therefore try to predict the lifetime distribution of newly developed components, or of existing components in new application environments, from the existing data using regression models based on covariates. Three categories of such regression models are considered: a parametric, a semiparametric and a nonparametric approach. First, we assume that the lifetime is Weibull distributed, where its parameters are modelled as linear functions of the covariate. Second, the Cox proportional hazards model, well-known in Survival Analysis, is applied. Finally, a kernel estimator is used to interpolate between empirical distribution functions. In particular the last case is new in the context of reliability analysis. We propose a goodness of fit measure (GoF), which can be applied to all three types of regression models. Using this GoF measure we discuss a new model selection procedure. To illustrate this method of reliability prediction, the three classes of regression models are applied to real test data of motor experiments. Further, the performance of the approaches is investigated by Monte Carlo simulations. - Highlights: • We estimate the lifetime distribution in the presence of a covariate. • Three types of regression models are considered and compared. • A new nonparametric estimator based on our particular data structure is introduced. • We propose a goodness of fit measure and show a new model selection procedure. • A case study with real data and Monte Carlo simulations are performed

  7. Modeling the World Health Organization Disability Assessment Schedule II using non-parametric item response models.

    Science.gov (United States)

    Galindo-Garre, Francisca; Hidalgo, María Dolores; Guilera, Georgina; Pino, Oscar; Rojo, J Emilio; Gómez-Benito, Juana

    2015-03-01

    The World Health Organization Disability Assessment Schedule II (WHO-DAS II) is a multidimensional instrument developed for measuring disability. It comprises six domains (understanding and communicating, getting around, self-care, getting along with others, life activities and participation in society). The main purpose of this paper is the evaluation of the psychometric properties of each domain of the WHO-DAS II with parametric and non-parametric Item Response Theory (IRT) models. A secondary objective is to assess whether the WHO-DAS II items within each domain form a hierarchy of invariantly ordered severity indicators of disability. A sample of 352 patients with a schizophrenia spectrum disorder is used in this study. The 36-item WHO-DAS II was administered during the consultation. Partial Credit and Mokken scale models are used to study the psychometric properties of the questionnaire. The psychometric properties of the WHO-DAS II scale are satisfactory for all the domains. However, we identify a few items that do not discriminate satisfactorily between different levels of disability and cannot be invariantly ordered in the scale. In conclusion, the WHO-DAS II can be used to assess overall disability in patients with schizophrenia, but some domains are too general to assess functionality in these patients because they contain items that are not applicable to this pathology. Copyright © 2014 John Wiley & Sons, Ltd.

  8. Efficient Discovery of De-identification Policies Through a Risk-Utility Frontier.

    Science.gov (United States)

    Xia, Weiyi; Heatherly, Raymond; Ding, Xiaofeng; Li, Jiuyong; Malin, Bradley

    2013-01-01

    Modern information technologies enable organizations to capture large quantities of person-specific data while providing routine services. Many organizations hope, or are legally required, to share such data for secondary purposes (e.g., validation of research findings) in a de-identified manner. In previous work, it was shown that de-identification policy alternatives could be modeled on a lattice, which could be searched for policies that met a prespecified risk threshold (e.g., likelihood of re-identification). However, the search was limited in several ways. First, its definition of utility was syntactic (based on the level of the lattice) rather than semantic (based on the actual changes induced in the resulting data). Second, the threshold may not be known in advance. The goal of this work is to build the optimal set of policies that trade off between privacy risk (R) and utility (U), which we refer to as an R-U frontier. To model this problem, we introduce a semantic definition of utility, based on information theory, that is compatible with the lattice representation of policies. To solve the problem, we initially build a set of policies that define a frontier. We then use a probability-guided heuristic to search the lattice for policies likely to update the frontier. To demonstrate the effectiveness of our approach, we perform an empirical analysis with the Adult dataset of the UCI Machine Learning Repository. We show that our approach can construct a frontier closer to optimal than competitive approaches by searching a smaller number of policies. In addition, we show that a frequently followed de-identification policy (i.e., the Safe Harbor standard of the HIPAA Privacy Rule) is suboptimal in comparison to the frontier discovered by our approach.

  9. Testing a parametric function against a nonparametric alternative in IV and GMM settings

    DEFF Research Database (Denmark)

    Gørgens, Tue; Wurtz, Allan

    This paper develops a specification test for functional form for models identified by moment restrictions, including IV and GMM settings. The general framework is one where the moment restrictions are specified as functions of data, a finite-dimensional parameter vector, and a nonparametric real ...

  10. Nonparametric Efficiency Testing of Asian Stock Markets Using Weekly Data

    OpenAIRE

    CORNELIS A. LOS

    2004-01-01

    The efficiency of speculative markets, as represented by Fama's 1970 fair game model, is tested on weekly price index data of six Asian stock markets - Hong Kong, Indonesia, Malaysia, Singapore, Taiwan and Thailand - using Sherry's (1992) non-parametric methods. These scientific testing methods were originally developed to analyze the information processing efficiency of nervous systems. In particular, the stationarity and independence of the price innovations are tested over ten years, from ...

  11. kruX: matrix-based non-parametric eQTL discovery.

    Science.gov (United States)

    Qi, Jianlong; Asl, Hassan Foroughi; Björkegren, Johan; Michoel, Tom

    2014-01-14

    The Kruskal-Wallis test is a popular non-parametric statistical test for identifying expression quantitative trait loci (eQTLs) from genome-wide data due to its robustness against variations in the underlying genetic model and expression trait distribution, but testing billions of marker-trait combinations one-by-one can become computationally prohibitive. We developed kruX, an algorithm implemented in Matlab, Python and R that uses matrix multiplications to simultaneously calculate the Kruskal-Wallis test statistic for several millions of marker-trait combinations at once. KruX is more than ten thousand times faster than computing associations one-by-one on a typical human dataset. We used kruX and a dataset of more than 500k SNPs and 20k expression traits measured in 102 human blood samples to compare eQTLs detected by the Kruskal-Wallis test to eQTLs detected by the parametric ANOVA and linear model methods. We found that the Kruskal-Wallis test is more robust against data outliers and heterogeneous genotype group sizes and detects a higher proportion of non-linear associations, but is more conservative for calling additive linear associations. kruX enables the use of robust non-parametric methods for massive eQTL mapping without the need for a high-performance computing infrastructure and is freely available from http://krux.googlecode.com.
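
The matrix trick behind kruX can be sketched in a few lines of NumPy: rank each trait once, then obtain all group rank sums for a marker with a single matrix product. This is an illustrative reconstruction (the tie correction and p-value computation that kruX handles are omitted, and all names are hypothetical):

```python
import numpy as np

def kruskal_wallis_matrix(expr, genotype, n_groups=3):
    """Kruskal-Wallis H statistic for every expression trait against one
    marker at once, via matrix multiplication (no tie correction).
    expr: (traits, samples) array; genotype: (samples,) codes 0..n_groups-1."""
    n = expr.shape[1]
    # rank each trait across samples (average ranks for ties omitted)
    ranks = expr.argsort(axis=1).argsort(axis=1) + 1.0
    G = np.eye(n_groups)[genotype]        # (samples, groups) indicator matrix
    sizes = G.sum(axis=0)                 # genotype group sizes
    rank_sums = ranks @ G                 # all traits x groups in one matmul
    return 12.0 / (n * (n + 1)) * (rank_sums**2 / sizes).sum(axis=1) - 3.0 * (n + 1)

rng = np.random.default_rng(1)
expr = rng.normal(size=(1000, 102))       # e.g. 1000 traits, 102 samples
genotype = rng.integers(0, 3, size=102)   # one SNP, genotype codes 0/1/2
H = kruskal_wallis_matrix(expr, genotype)
print(H.shape)
```

Replacing the per-pair loop with one dense matrix product is what makes testing millions of marker-trait combinations tractable without special hardware.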

  12. Bootstrapping the economy -- a non-parametric method of generating consistent future scenarios

    OpenAIRE

    Müller, Ulrich A; Bürgi, Roland; Dacorogna, Michel M

    2004-01-01

    The fortune and the risk of a business venture depends on the future course of the economy. There is a strong demand for economic forecasts and scenarios that can be applied to planning and modeling. While there is an ongoing debate on modeling economic scenarios, the bootstrapping (or resampling) approach presented here has several advantages. As a non-parametric method, it directly relies on past market behaviors rather than debatable assumptions on models and parameters. Simultaneous dep...

  13. Evaluating Kuala Lumpur stock exchange oriented bank performance with stochastic frontiers

    International Nuclear Information System (INIS)

    Baten, M. A.; Maznah, M. K.; Razamin, R.; Jastini, M. J.

    2014-01-01

    Banks play an essential role in economic development, and banks need to be efficient; otherwise, they may create blockage in the process of development in any country. The efficiency of banks in Malaysia is important and should receive greater attention. This study formulated an appropriate stochastic frontier model to investigate the efficiency of banks which were traded on the Kuala Lumpur Stock Exchange (KLSE) market during the period 2005–2009. All data were analyzed using the maximum likelihood method to estimate the parameters of the stochastic production frontier. Unlike earlier studies, which use balance sheet and income statement data, this study used market data as the input and output variables. It was observed that banks listed on the KLSE exhibited a commendable overall efficiency level of 96.2% during 2005–2009, hence suggesting minimal input waste of 3.8%. Among the banks, the COMS (Cimb Group Holdings) bank is found to be highly efficient with a score of 0.9715 and BIMB (Bimb Holdings) bank is noted to have the lowest efficiency with a score of 0.9582. The results also show that the Cobb-Douglas stochastic frontier model with a truncated normal distributional assumption is preferable to the Translog stochastic frontier model.

  14. Evaluating Kuala Lumpur stock exchange oriented bank performance with stochastic frontiers

    Energy Technology Data Exchange (ETDEWEB)

    Baten, M. A.; Maznah, M. K.; Razamin, R.; Jastini, M. J. [School of Quantitative Sciences, Universiti Utara Malaysia 06010, Sintok, Kedah (Malaysia)

    2014-12-04

    Banks play an essential role in economic development, and banks need to be efficient; otherwise, they may create blockage in the process of development in any country. The efficiency of banks in Malaysia is important and should receive greater attention. This study formulated an appropriate stochastic frontier model to investigate the efficiency of banks which were traded on the Kuala Lumpur Stock Exchange (KLSE) market during the period 2005–2009. All data were analyzed using the maximum likelihood method to estimate the parameters of the stochastic production frontier. Unlike earlier studies, which use balance sheet and income statement data, this study used market data as the input and output variables. It was observed that banks listed on the KLSE exhibited a commendable overall efficiency level of 96.2% during 2005–2009, hence suggesting minimal input waste of 3.8%. Among the banks, the COMS (Cimb Group Holdings) bank is found to be highly efficient with a score of 0.9715 and BIMB (Bimb Holdings) bank is noted to have the lowest efficiency with a score of 0.9582. The results also show that the Cobb-Douglas stochastic frontier model with a truncated normal distributional assumption is preferable to the Translog stochastic frontier model.

  15. Evolution of grid-wide access to database resident information in ATLAS using Frontier

    CERN Document Server

    Barberis, D; The ATLAS collaboration; de Stefano, J; Dewhurst, A L; Dykstra, D; Front, D

    2012-01-01

    The ATLAS experiment deployed Frontier technology world-wide during the initial year of LHC collision data taking to enable user analysis jobs running on the World-wide LHC Computing Grid to access database resident data. Since that time, the deployment model has evolved to optimize resources, improve performance, and streamline maintenance of Frontier and related infrastructure. In this presentation we focus on the specific changes in the deployment and improvements undertaken, such as the optimization of cache and launchpad location, the use of RPMs for more uniform deployment of underlying Frontier related components, improvements in monitoring, optimization of fail-over, and an increasing use of a centrally managed database containing site-specific information (for configuration of services and monitoring). In addition, analysis of Frontier logs has allowed us a deeper understanding of problematic queries and understanding of use cases. Use of the system has grown beyond just user analysis and subsyste...

  16. Testing discontinuities in nonparametric regression

    KAUST Repository

    Dai, Wenlin

    2017-01-19

    In nonparametric regression, it is often needed to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method of Müller and Stadtmüller [H.-G. Müller and U. Stadtmüller, Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337. doi: 10.1214/aos/1018031100] ...

  17. Testing discontinuities in nonparametric regression

    KAUST Repository

    Dai, Wenlin; Zhou, Yuejin; Tong, Tiejun

    2017-01-01

    In nonparametric regression, it is often needed to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method of Müller and Stadtmüller [H.-G. Müller and U. Stadtmüller, Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337. doi: 10.1214/aos/1018031100] ...

  18. JILA Science | Exploring the frontiers of physics

    Science.gov (United States)

    JILA sites: JILA Physics Frontier Center. Research areas: Molecular Physics, Biophysics, Chemical Physics, Laser Physics, Nanoscience, Precision Measurement, Quantum ...

  19. Bayesian Nonparametric Regression Analysis of Data with Random Effects Covariates from Longitudinal Measurements

    KAUST Repository

    Ryu, Duchwan

    2010-09-28

    We consider nonparametric regression analysis in a generalized linear model (GLM) framework for data with covariates that are the subject-specific random effects of longitudinal measurements. The usual assumption that the effects of the longitudinal covariate processes are linear in the GLM may be unrealistic and if this happens it can cast doubt on the inference of observed covariate effects. Allowing the regression functions to be unknown, we propose to apply Bayesian nonparametric methods including cubic smoothing splines or P-splines for the possible nonlinearity and use an additive model in this complex setting. To improve computational efficiency, we propose the use of data-augmentation schemes. The approach allows flexible covariance structures for the random effects and within-subject measurement errors of the longitudinal processes. The posterior model space is explored through a Markov chain Monte Carlo (MCMC) sampler. The proposed methods are illustrated and compared to other approaches, the "naive" approach and the regression calibration, via simulations and by an application that investigates the relationship between obesity in adulthood and childhood growth curves. © 2010, The International Biometric Society.

  20. Digital spectral analysis parametric, non-parametric and advanced methods

    CERN Document Server

    Castanié, Francis

    2013-01-01

    Digital Spectral Analysis provides a single source that offers complete coverage of the spectral analysis domain. This self-contained work includes details on advanced topics that are usually presented in scattered sources throughout the literature. The theoretical principles necessary for the understanding of spectral analysis are discussed in the first four chapters: fundamentals, digital signal processing, estimation in spectral analysis, and time-series models. An entire chapter is devoted to the non-parametric methods most widely used in industry. High resolution methods a

  1. The Canadian experience in frontier environmental protection

    International Nuclear Information System (INIS)

    Jones, G.H.

    1991-01-01

    Early Canadian frontier exploration (from 1955 onshore and from 1966 for offshore drilling) caused insignificant public concern. The 1967-1968 Torrey Canyon Tanker and Santa Barbara disasters roused public opinion and governments. In Canada, 1969-1970 Arctic gas blowouts, a tanker disaster, and damage to the 'Manhattan' exacerbated concerns and resulted in new environmental regulatory constraints. From 1970, the Arctic Petroleum Operations Association learned to operate safely with environmental responsibility. It studied physical environment for design criteria, and the biological and human environment to ameliorate impact. APOA's research projects covered sea-ice, permafrost, sea-bottom, oil-spills, bird and mammal migration, fish habitat, food chains, oceanography, meteorology, hunters'/trappers' harvests, etc. In 1971 Eastcoast Petroleum Operators' Association and Alaska Oil and Gas Association followed APOA's cooperative research model. EPOA stressed icebergs and fisheries. Certain research was handled by the Canadian Offshore Oil Spill Research Association. By the mid-1980s these associations had undertaken $70,000,000 of environmental oriented research, with equivalent additional work by member companies on specific needs and similar sums by Federal agencies often working with industry on complementary research. The frontier associations then merged with the Canadian Petroleum Association, already active environmentally in western Canada. Working with government and informing environmental interest groups, the public, natives, and local groups, most Canadian frontier petroleum operations proceeded with minimal delay and environmental disturbance

  2. Speaker Linking and Applications using Non-Parametric Hashing Methods

    Science.gov (United States)

    2016-09-08

    Speaker Linking and Applications using Non-Parametric Hashing Methods, Douglas Sturim and William M. Campbell, MIT Lincoln Laboratory, Lexington, MA. ... with many approaches [1, 2]. For this paper, we focus on using i-vectors [2], but the methods apply to any embedding. For the task of speaker QBE and ... [Reference fragment: "nonparametric estimate of a multivariate density function," The Annals of Mathematical Statistics, vol. 36, no. 3, pp. 1049–1051, 1965. [9] E. A. Patrick...]

  3. Frontier differences and the global malmquist index

    DEFF Research Database (Denmark)

    Asmild, Mette

    2015-01-01

    This chapter reviews different ways of comparing the efficiency frontiers for subgroups within a data set, specifically program efficiency, the metatechnology (or technology gap) ratio and the global frontier difference index. The latter is subsequently used to define a global Malmquist index...

  4. Nonparametric statistical inference

    CERN Document Server

    Gibbons, Jean Dickinson

    2014-01-01

    Thoroughly revised and reorganized, the fourth edition presents in-depth coverage of the theory and methods of the most widely used nonparametric procedures in statistical analysis and offers example applications appropriate for all areas of the social, behavioral, and life sciences. The book presents new material on the quantiles, the calculation of exact and simulated power, multiple comparisons, additional goodness-of-fit tests, methods of analysis of count data, and modern computer applications using MINITAB, SAS, and STATXACT. It includes tabular guides for simplified applications of tests and finding P values and confidence interval estimates.

  5. Nonparametric Identification of Glucose-Insulin Process in IDDM Patient with Multi-meal Disturbance

    Science.gov (United States)

    Bhattacharjee, A.; Sutradhar, A.

    2012-12-01

    Modern closed-loop control of the blood glucose level in a diabetic patient necessarily uses an explicit model of the process. A fixed-parameter full-order or reduced-order model does not characterize the inter-patient and intra-patient parameter variability. This paper deals with a frequency-domain nonparametric identification of the nonlinear glucose-insulin process in an insulin-dependent diabetes mellitus patient that captures the process dynamics in the presence of uncertainties and parameter variations. An online frequency-domain kernel estimation method has been proposed that uses the input-output data from the 19th-order first-principles model of the patient in the intravenous route. Volterra equations up to second-order kernels with an extended input vector for a Hammerstein model are solved online by an adaptive recursive least squares (ARLS) algorithm. The frequency-domain kernels are estimated using the harmonic excitation input data sequence from the virtual patient model. A short filter memory length of M = 2 was found sufficient to yield acceptable accuracy with less computation time. The nonparametric models are useful for closed-loop control, where the frequency-domain kernels can be directly used as the transfer function. The validation results show good fit in both frequency- and time-domain responses with the nominal patient as well as with parameter variations.
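    The recursive least squares update at the core of such an identification scheme can be sketched as follows. The toy Hammerstein system, its coefficients, and the function name below are illustrative assumptions, not taken from the paper (which identifies a 19th-order patient model):

```python
import numpy as np

def rls(phi, y, theta, P, lam=1.0):
    """One step of recursive least squares (forgetting factor lam)."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)            # gain vector
    theta = theta + k * (y - phi @ theta)    # parameter update
    P = (P - np.outer(k, Pphi)) / lam        # covariance update
    return theta, P

# Illustrative Hammerstein-style identification: a static nonlinearity
# x = u + 0.5 u^2 followed by y_i = 0.6 x_{i+1} + 0.2 x_i; the extended
# regressor [u_{i+1}, u_{i+1}^2, u_i, u_i^2] is linear in the parameters.
rng = np.random.default_rng(3)
u = rng.normal(size=500)
x = u + 0.5 * u ** 2
y = 0.6 * x[1:] + 0.2 * x[:-1]

theta = np.zeros(4)
P = 1e3 * np.eye(4)
for i in range(len(y)):
    phi = np.array([u[i + 1], u[i + 1] ** 2, u[i], u[i] ** 2])
    theta, P = rls(phi, y[i], theta, P)
```

    For this noiseless toy system, theta converges to the true coefficient vector [0.6, 0.3, 0.2, 0.1].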

  6. A non-parametric framework for estimating threshold limit values

    Directory of Open Access Journals (Sweden)

    Ulm Kurt

    2005-11-01

    Abstract Background To estimate a threshold limit value for a compound known to have harmful health effects, an 'elbow' threshold model is usually applied. We are interested in flexible non-parametric alternatives. Methods We describe how a step function model fitted by isotonic regression can be used to estimate threshold limit values. This method returns a set of candidate locations, and we discuss two algorithms to select the threshold among them: the reduced isotonic regression and an algorithm considering the closed family of hypotheses. We assess the performance of these two alternative approaches under different scenarios in a simulation study. We illustrate the framework by analysing data from a study conducted by the German Research Foundation aiming to set a threshold limit value for exposure to total dust at the workplace, as a causal agent for developing chronic bronchitis. Results In the paper we demonstrate the use and the properties of the proposed methodology along with the results from an application. The method appears to detect the threshold with satisfactory success. However, its performance can be compromised by the low power to reject the constant-risk assumption when the true dose-response relationship is weak. Conclusion The estimation of thresholds based on the isotonic framework is conceptually simple and sufficiently powerful. Given that in the threshold value estimation context there is no gold standard method, the proposed model provides a useful non-parametric alternative to the standard approaches and can corroborate or challenge their findings.
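    The isotonic step-function fit behind this framework is computed by the classical pool-adjacent-violators algorithm. A minimal numpy sketch follows; the dose-response numbers are invented for illustration, and the paper's candidate-selection algorithms (reduced isotonic regression, closed family of hypotheses) are not reproduced:

```python
import numpy as np

def pava(y):
    """Pool Adjacent Violators: least-squares nondecreasing step fit."""
    vals, cnts = [], []
    for yi in np.asarray(y, dtype=float):
        vals.append(yi); cnts.append(1)
        # merge adjacent blocks while they violate monotonicity
        while len(vals) > 1 and vals[-2] > vals[-1]:
            total = cnts[-2] + cnts[-1]
            merged = (cnts[-2] * vals[-2] + cnts[-1] * vals[-1]) / total
            vals[-2:] = [merged]
            cnts[-2:] = [total]
    return np.repeat(vals, cnts)

# Candidate threshold locations are the doses where the step function jumps.
dose = np.array([1., 2., 3., 4., 5., 6.])
risk = np.array([0.10, 0.08, 0.11, 0.25, 0.24, 0.30])
fit = pava(risk)
jumps = dose[1:][np.diff(fit) > 1e-12]
```

    The fitted step function pools the two local decreases, and the remaining jump locations form the candidate set from which a threshold is selected.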

  7. Analysing the length of care episode after hip fracture: a nonparametric and a parametric Bayesian approach.

    Science.gov (United States)

    Riihimäki, Jaakko; Sund, Reijo; Vehtari, Aki

    2010-06-01

    Effective utilisation of limited resources is a challenge for health care providers. Accurate and relevant information extracted from the length of stay distributions is useful for management purposes. Patient care episodes can be reconstructed from the comprehensive health registers, and in this paper we develop a Bayesian approach to analyse the length of care episode after a fractured hip. We model the large scale data with a flexible nonparametric multilayer perceptron network and with a parametric Weibull mixture model. To assess the performances of the models, we estimate expected utilities using predictive density as a utility measure. Since the model parameters cannot be directly compared, we focus on observables, and estimate the relevances of patient explanatory variables in predicting the length of stay. To demonstrate how the use of the nonparametric flexible model is advantageous for this complex health care data, we also study joint effects of variables in predictions, and visualise nonlinearities and interactions found in the data.

  8. Nonparametric Bayesian inference in biostatistics

    CERN Document Server

    Müller, Peter

    2015-01-01

    As chapters in this book demonstrate, BNP has important uses in clinical sciences and inference for issues like unknown partitions in genomics. Nonparametric Bayesian approaches (BNP) play an ever expanding role in biostatistical inference from use in proteomics to clinical trials. Many research problems involve an abundance of data and require flexible and complex probability models beyond the traditional parametric approaches. As this book's expert contributors show, BNP approaches can be the answer. Survival Analysis, in particular survival regression, has traditionally used BNP, but BNP's potential is now very broad. This applies to important tasks like arrangement of patients into clinically meaningful subpopulations and segmenting the genome into functionally distinct regions. This book is designed to both review and introduce application areas for BNP. While existing books provide theoretical foundations, this book connects theory to practice through engaging examples and research questions. Chapters c...

  9. Transformation-invariant and nonparametric monotone smooth estimation of ROC curves.

    Science.gov (United States)

    Du, Pang; Tang, Liansheng

    2009-01-30

    When a new diagnostic test is developed, it is of interest to evaluate its accuracy in distinguishing diseased subjects from non-diseased subjects. The accuracy of the test is often evaluated by receiver operating characteristic (ROC) curves. Smooth ROC estimates are often preferable for continuous test results when the underlying ROC curves are in fact continuous. Nonparametric and parametric methods have been proposed by various authors to obtain smooth ROC curve estimates. However, there are certain drawbacks with the existing methods. Parametric methods need specific model assumptions. Nonparametric methods do not always satisfy the inherent properties of the ROC curves, such as monotonicity and transformation invariance. In this paper we propose a monotone spline approach to obtain smooth monotone ROC curves. Our method ensures important inherent properties of the underlying ROC curves, which include monotonicity, transformation invariance, and boundary constraints. We compare the finite sample performance of the newly proposed ROC method with other ROC smoothing methods in large-scale simulation studies. We illustrate our method through a real life example. Copyright (c) 2008 John Wiley & Sons, Ltd.

  10. Non-Parametric Estimation of Correlation Functions

    DEFF Research Database (Denmark)

    Brincker, Rune; Rytter, Anders; Krenk, Steen

    In this paper three methods of non-parametric correlation function estimation are reviewed and evaluated: the direct method, estimation by the Fast Fourier Transform and finally estimation by the Random Decrement technique. The basic ideas of the techniques are reviewed, sources of bias are point...

  11. Application of nonparametric statistic method for DNBR limit calculation

    International Nuclear Information System (INIS)

    Dong Bo; Kuang Bo; Zhu Xuenong

    2013-01-01

    Background: The nonparametric statistical method is a statistical inference method that does not depend on a particular distribution; it calculates tolerance limits at a given probability level and confidence through sampling. The DNBR margin is an important parameter of NPP design, which represents the safety level of the NPP. Purpose and Methods: This paper uses a nonparametric statistical method based on the Wilks formula and the VIPER-01 subchannel analysis code to calculate the DNBR design limits (DL) of a 300 MW NPP (Nuclear Power Plant) during the complete loss of flow accident, and compares them with the DNBR DL obtained by means of ITDP to quantify the DNBR margin. Results: The results indicate that this method gains 2.96% more DNBR margin than the ITDP methodology. Conclusions: Because of the reduced conservatism of the analysis process, the nonparametric statistical method can provide a greater DNBR margin, and the increased DNBR margin benefits the upgrading of the core refuelling scheme. (authors)
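    The sample-size side of the Wilks formula is simple to state. A minimal sketch for the one-sided, first-order case follows (the VIPER-01 coupling and the accident analysis itself are of course not reproduced here):

```python
import math

def wilks_n(coverage=0.95, confidence=0.95):
    """Smallest n such that the extreme order statistic of n runs is a
    one-sided, first-order tolerance limit: 1 - coverage**n >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

n = wilks_n()  # 59 runs for the classic 95%/95% criterion
# With n sampled DNBR results, the 95/95 lower tolerance limit is then
# simply the smallest DNBR value observed among the n code runs.
```

    Tightening the confidence to 99% raises the requirement to 90 runs, which is why the sampling burden grows quickly with the statistical demands placed on the limit.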

  12. Multi-sample nonparametric treatments comparison in medical ...

    African Journals Online (AJOL)

    Multi-sample nonparametric treatments comparison in medical follow-up study with unequal observation processes through simulation and bladder tumour case study. P. L. Tan, N.A. Ibrahim, M.B. Adam, J. Arasan ...

  13. Nonparametric regression using the concept of minimum energy

    International Nuclear Information System (INIS)

    Williams, Mike

    2011-01-01

    It has recently been shown that an unbinned distance-based statistic, the energy, can be used to construct an extremely powerful nonparametric multivariate two sample goodness-of-fit test. An extension to this method that makes it possible to perform nonparametric regression using multiple multivariate data sets is presented in this paper. The technique, which is based on the concept of minimizing the energy of the system, permits determination of parameters of interest without the need for parametric expressions of the parent distributions of the data sets. The application and performance of this new method is discussed in the context of some simple example analyses.
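    The energy of two samples can be computed directly from pairwise Euclidean distances. A small numpy sketch of the underlying statistic follows (the regression-by-energy-minimization machinery of the paper is not reproduced):

```python
import numpy as np

def energy_distance(x, y):
    """Szekely-Rizzo energy distance between two samples
    (1-D arrays, or 2-D arrays with rows as observations)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    if y.ndim == 1:
        y = y[:, None]

    def mean_dist(a, b):
        # mean pairwise Euclidean distance between rows of a and b
        diff = a[:, None, :] - b[None, :, :]
        return np.sqrt((diff ** 2).sum(axis=-1)).mean()

    return 2 * mean_dist(x, y) - mean_dist(x, x) - mean_dist(y, y)
```

    The statistic is zero when the two samples coincide and grows as the samples separate, which is what makes it usable as a goodness-of-fit objective.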

  14. A Note on the Kinks at the Mean Variance Frontier

    OpenAIRE

    Vörös, J.; Kriens, J.; Strijbosch, L.W.G.

    1997-01-01

    In this paper the standard portfolio case with short sales restrictions is analyzed. Dybvig pointed out that if there is a kink at a risky portfolio on the efficient frontier, then the securities in this portfolio have equal expected return, and the converse of this statement is false. A sufficient condition for the existence of kinks at the efficient frontier is given here, and a new procedure is used to derive the efficient frontier, i.e. the characteristics of the mean variance frontier.
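    For contrast with the short-sales-restricted case studied in the paper, the unrestricted mean-variance frontier is smooth (no kinks) and has a closed form in the standard frontier constants. A hedged numpy sketch, with invented example assets:

```python
import numpy as np

def frontier_sd(m, mu, sigma):
    """Standard deviation of the minimum-variance portfolio with target
    mean m, short sales allowed (closed form via the frontier constants)."""
    inv = np.linalg.inv(sigma)
    ones = np.ones(len(mu))
    a = ones @ inv @ mu
    b = mu @ inv @ mu
    c = ones @ inv @ ones
    d = b * c - a * a
    return np.sqrt((c * m * m - 2 * a * m + b) / d)

mu = np.array([0.05, 0.10, 0.15])          # invented expected returns
sigma = np.diag([0.04, 0.09, 0.16])        # invented covariance matrix
# The global minimum-variance portfolio sits at m = a / c with variance 1 / c.
```

    Adding short-sale constraints turns this single smooth hyperbola into piecewise segments, and kinks can appear where the active constraint set changes.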

  15. Bayesian Sensitivity Analysis of a Nonlinear Dynamic Factor Analysis Model with Nonparametric Prior and Possible Nonignorable Missingness.

    Science.gov (United States)

    Tang, Niansheng; Chow, Sy-Miin; Ibrahim, Joseph G; Zhu, Hongtu

    2017-12-01

    Many psychological concepts are unobserved and usually represented as latent factors apprehended through multiple observed indicators. When multiple-subject multivariate time series data are available, dynamic factor analysis models with random effects offer one way of modeling patterns of within- and between-person variations by combining factor analysis and time series analysis at the factor level. Using the Dirichlet process (DP) as a nonparametric prior for individual-specific time series parameters further allows the distributional forms of these parameters to deviate from commonly imposed (e.g., normal or other symmetric) functional forms, arising as a result of these parameters' restricted ranges. Given the complexity of such models, a thorough sensitivity analysis is critical but computationally prohibitive. We propose a Bayesian local influence method that allows for simultaneous sensitivity analysis of multiple modeling components within a single fitting of the model of choice. Five illustrations and an empirical example are provided to demonstrate the utility of the proposed approach in facilitating the detection of outlying cases and common sources of misspecification in dynamic factor analysis models, as well as identification of modeling components that are sensitive to changes in the DP prior specification.

  16. Snowmass Computing Frontier: Computing for the Cosmic Frontier, Astrophysics, and Cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Connolly, A. [Univ. of Washington, Seattle, WA (United States); Habib, S. [Argonne National Lab. (ANL), Lemont, IL (United States); Szalay, A. [Johns Hopkins Univ., Baltimore, MD (United States); Borrill, J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fuller, G. [Univ. of California, San Diego, CA (United States); Gnedin, N. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Heitmann, K. [Argonne National Lab. (ANL), Lemont, IL (United States); Jacobs, D. [Arizona State Univ., Tempe, AZ (United States); Lamb, D. [Univ. of Chicago, IL (United States); Mezzacappa, T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Messer, B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Myers, S. [National Radio Astronomy Observatory, Socorro, NM (United States); Nord, B. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Nugent, P. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); O' Shea, B. [Michigan State Univ., East Lansing, MI (United States); Ricker, P. [Univ. of Illinois, Urbana-Champaign, IL (United States); Schneider, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-11-12

    This document presents (off-line) computing requirements and challenges for Cosmic Frontier science, covering the areas of data management, analysis, and simulations. We invite contributions to extend the range of covered topics and to enhance the current descriptions.

  17. Estimating technical efficiency in the hospital sector with panel data: a comparison of parametric and non-parametric techniques.

    Science.gov (United States)

    Siciliani, Luigi

    2006-01-01

    Policy makers are increasingly interested in developing performance indicators that measure hospital efficiency. These indicators may give the purchasers of health services an additional regulatory tool to contain health expenditure. Using panel data, this study compares different parametric (econometric) and non-parametric (linear programming) techniques for the measurement of a hospital's technical efficiency. This comparison was made using a sample of 17 Italian hospitals in the years 1996-9. Highest correlations are found in the efficiency scores between the non-parametric data envelopment analysis under the constant returns to scale assumption (DEA-CRS) and several parametric models. Correlation reduces markedly when using more flexible non-parametric specifications such as data envelopment analysis under the variable returns to scale assumption (DEA-VRS) and the free disposal hull (FDH) model. Correlation also generally reduces when moving from one output to two-output specifications. This analysis suggests that there is scope for developing performance indicators at hospital level using panel data, but it is important that extensive sensitivity analysis is carried out if purchasers wish to make use of these indicators in practice.
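    A minimal version of the non-parametric DEA-CRS model mentioned above can be written as a linear program per decision-making unit (DMU). The sketch below uses scipy's linprog and a two-hospital toy data set (invented numbers, not the Italian sample):

```python
import numpy as np
from scipy.optimize import linprog

def dea_crs_input(X, Y, o):
    """Input-oriented CRS (CCR) efficiency score of DMU o.
    X: (n, p) inputs, Y: (n, q) outputs; LP variables are [theta, lambda_1..n]."""
    n, p = X.shape
    q = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]              # minimise theta
    # sum_j lambda_j x_j - theta * x_o <= 0   (peer inputs within scaled inputs)
    A_in = np.c_[-X[o][:, None], X.T]
    # y_o - sum_j lambda_j y_j <= 0           (peer outputs cover own outputs)
    A_out = np.c_[np.zeros((q, 1)), -Y.T]
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(p), -Y[o]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

X = np.array([[1.0], [2.0]])   # single input for two toy DMUs
Y = np.array([[1.0], [1.0]])   # single output
```

    In the toy data, the second DMU uses twice the input for the same output, so its score is 0.5 while the first is fully efficient (score 1). VRS and FDH variants add or change the convexity constraints on the lambdas.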

  18. Driving Style Analysis Using Primitive Driving Patterns With Bayesian Nonparametric Approaches

    OpenAIRE

    Wang, Wenshuo; Xi, Junqiang; Zhao, Ding

    2017-01-01

    Analysis and recognition of driving styles are profoundly important to intelligent transportation and vehicle calibration. This paper presents a novel driving style analysis framework using the primitive driving patterns learned from naturalistic driving data. In order to achieve this, first, a Bayesian nonparametric learning method based on a hidden semi-Markov model (HSMM) is introduced to extract primitive driving patterns from time series driving data without prior knowledge of the number...

  19. Using Stochastic Frontier Analysis to Analyze Adjustment Costs and Investment Utilization

    DEFF Research Database (Denmark)

    Olsen, Jakob Vesterlund; Henningsen, Arne

    of additional inputs and an initially incomplete investment utilization results in an output level that is temporarily not at its maximum. We estimate an output distance function as a stochastic "Efficiency Effects Frontier" model (Battese & Coelli 1995), where the estimated technical inefficiencies...... are explained with current and lagged investments, farm size, age of the farmer, and interaction terms between these variables. Furthermore, we derive the formula for calculating the marginal effects on technical efficiency for "Efficiency Effects Frontier" models so that we can calculate the (marginal) effect...... of current and past investments on technical efficiency, which we interpret as adjustment costs and temporary incomplete investment utilization. We apply this methodology to a large panel data set of Danish pig producers with 9,281 observations between 1996 and 2008. The results show that investments have...

  20. A panel data parametric frontier technique for measuring total-factor energy efficiency: An application to Japanese regions

    International Nuclear Information System (INIS)

    Honma, Satoshi; Hu, Jin-Li

    2014-01-01

    Using the stochastic frontier analysis model, we estimate TFEE (total-factor energy efficiency) scores for 47 regions across Japan during the years 1996–2008. We extend the cross-sectional stochastic frontier model proposed by Zhou et al. (2012) to panel data models and add environmental variables. The results provide not only the TFEE scores, in which statistical noise is taken into account, but also the determinants of inefficiency. The three stochastic TFEE scores are compared with a TFEE score derived using data envelopment analysis. The four TFEE scores are highly correlated with one another. For the inefficiency estimates, higher manufacturing industry shares and wholesale and retail trade shares correspond to lower TFEE scores. - Highlights: • This study estimates total-factor energy efficiency of Japanese regions using the stochastic frontier analysis model. • Determinants of inefficiency are also estimated. • The higher the manufacturing share and wholesale and retail trade share, the lower the energy efficiency

  1. Estimating cost efficiency of Turkish commercial banks under unobserved heterogeneity with stochastic frontier models

    Directory of Open Access Journals (Sweden)

    Hakan Gunes

    2016-12-01

    This study aims to investigate the cost efficiency of Turkish commercial banks over the restructuring period of the Turkish banking system, which coincides with the 2008 global financial crisis and the 2010 European sovereign debt crisis. To this end, within the stochastic frontier framework, we employ the true fixed effects model, where the unobserved bank heterogeneity is integrated into the inefficiency distribution at the mean level. To select the cost function with the most appropriate inefficiency correlates, we first adopt a search algorithm and then utilize the model averaging approach to verify that our results are not exposed to model selection bias. Overall, our empirical results reveal that the cost efficiencies of Turkish banks have improved over time, with the effects of the 2008 and 2010 crises remaining rather limited. Furthermore, not only the cost efficiency scores but also the impacts of the crises on those scores appear to vary with bank size and ownership structure, in accordance with much of the existing literature.

  2. Strategic Military Colonisation: The Cape Eastern Frontier 1806–1872

    African Journals Online (AJOL)

    The Cape Eastern Frontier of South Africa offers a fascinating insight into British military strategy as well as colonial development. The Eastern Frontier was for over 100 years a very turbulent frontier. It was the area where the four main population groups (the Dutch, the British, the Xhosa and the Khoikhoi) met, and in many ...

  3. [The Probabilistic Efficiency Frontier: A Value Assessment of Treatment Options in Hepatitis C].

    Science.gov (United States)

    Mühlbacher, Axel C; Sadler, Andrew

    2017-06-19

    Background The German Institute for Quality and Efficiency in Health Care (IQWiG) recommends the concept of the efficiency frontier to assess health care interventions. The efficiency frontier supports regulatory decisions on reimbursement prices for the appropriate allocation of health care resources. Until today this cost-benefit assessment framework has only been applied on the basis of individual patient-relevant endpoints. This contradicts the reality of a multi-dimensional patient benefit. Objective The objective of this study was to illustrate the operationalization of multi-dimensional benefit, considering the uncertainty in clinical effects and preference data, in order to calculate the efficiency of different treatment options for hepatitis C (HCV). This case study shows how methodological challenges could be overcome in order to use the efficiency frontier for economic analysis and health care decision-making. Method The operationalization of patient benefit was carried out on several patient-relevant endpoints. Preference data from a discrete choice experiment (DCE) study and clinical data based on clinical trials, which reflected the patient and the clinical perspective, respectively, were used for the aggregation of an overall benefit score. A probabilistic efficiency frontier was constructed in a Monte Carlo simulation with 10,000 random draws. Patient-relevant endpoints were modeled with a beta distribution and preference data with a normal distribution. The assessment of overall benefit and costs provided information about the adequacy of the treatment prices. The parameter uncertainty was illustrated by the price-acceptability curve and the net monetary benefit. Results Based on the clinical and preference data in Germany, the interferon-free treatment options proved to be efficient at the current price level. The interferon-free therapies of the latest generation achieved a positive net cost-benefit. Within the decision model, these therapies
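    The Monte Carlo construction behind a probabilistic net monetary benefit can be sketched in a few lines. Every number below (benefit distribution, willingness-to-pay, cost) is an invented illustration, not the study's HCV data:

```python
import numpy as np

rng = np.random.default_rng(7)
draws = 10_000

# Illustrative inputs: an overall benefit score on [0, 1] modelled as a Beta
# (mean 0.8), a willingness-to-pay lambda per benefit unit, and an uncertain
# treatment cost.
benefit = rng.beta(a=40, b=10, size=draws)
wtp = 60_000                                  # EUR per unit of overall benefit
cost = rng.normal(45_000, 2_000, size=draws)

nmb = wtp * benefit - cost                    # net monetary benefit per draw
p_positive = (nmb > 0).mean()                 # acceptability at this price
```

    Sweeping the assumed price (here folded into cost) and re-reading `p_positive` traces out a price-acceptability curve of the kind the study reports.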

  4. Frontiers, territoriality and tensions in bordering spaces

    Directory of Open Access Journals (Sweden)

    María Eugenia Comerci

    2012-01-01

    The expansion of the agricultural frontier in the Argentine pampas implied a re-valuation of "bordering" spaces, which were considered "marginal" by capital. This paper aims at interpreting the socio-territorial impact, on both a material and a symbolic level, caused by the expansion of the productive, business-profile (agricultural and oil) frontier in the center-west of the province of La Pampa. With the interpretative approach provided by qualitative methodologies, we intend to analyze, in a case study, how these frontier expansion processes altered and re-defined, between the years 2000 and 2010, the social arena, the social construction of the space, and the power relations in Chos Malal

  5. Non-parametric model selection for subject-specific topological organization of resting-state functional connectivity.

    Science.gov (United States)

    Ferrarini, Luca; Veer, Ilya M; van Lew, Baldur; Oei, Nicole Y L; van Buchem, Mark A; Reiber, Johan H C; Rombouts, Serge A R B; Milles, J

    2011-06-01

    In recent years, graph theory has been successfully applied to study functional and anatomical connectivity networks in the human brain. Most of these networks have shown small-world topological characteristics: high efficiency in long distance communication between nodes, combined with highly interconnected local clusters of nodes. Moreover, functional studies performed at high resolutions have presented convincing evidence that resting-state functional connectivity networks exhibit (exponentially truncated) scale-free behavior. Such evidence, however, was mostly presented qualitatively, in terms of linear regressions of the degree distributions on log-log plots. Even when quantitative measures were given, these were usually limited to the r(2) correlation coefficient. However, the r(2) statistic is not an optimal estimator of explained variance when dealing with (truncated) power-law models. Recent developments in statistics have introduced new non-parametric approaches, based on the Kolmogorov-Smirnov test, for the problem of model selection. In this work, we have built on this idea to statistically tackle the issue of model selection for the degree distribution of functional connectivity at rest. The analysis, performed at voxel level and in a subject-specific fashion, confirmed the superiority of a truncated power-law model, showing high consistency across subjects. Moreover, the most highly connected voxels were found to be consistently part of the default mode network. Our results provide statistically sound support to the evidence previously presented in literature for a truncated power-law model of resting-state functional connectivity. Copyright © 2010 Elsevier Inc. All rights reserved.
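    The fit-then-measure-KS step underlying such model selection can be sketched for a continuous power-law tail (Clauset-style MLE); the truncated power law and the voxel-level pipeline of the paper are not reproduced:

```python
import numpy as np

def fit_power_law(x, xmin):
    """MLE exponent and Kolmogorov-Smirnov distance for a continuous
    power-law tail, P(X > x) = (x / xmin) ** (1 - alpha) for x >= xmin."""
    x = np.sort(np.asarray(x, dtype=float))
    x = x[x >= xmin]
    n = len(x)
    alpha = 1.0 + n / np.log(x / xmin).sum()
    emp = np.arange(1, n + 1) / n                      # empirical CDF
    model = 1.0 - (x / xmin) ** (1.0 - alpha)          # fitted model CDF
    ks = np.abs(emp - model).max()
    return alpha, ks

# Synthetic check: draw from a power law with alpha = 2.5 by inverse CDF.
rng = np.random.default_rng(0)
u = rng.random(20_000)
sample = 1.0 * (1.0 - u) ** (-1.0 / (2.5 - 1.0))
alpha_hat, ks = fit_power_law(sample, xmin=1.0)
```

    Comparing the KS distances of competing fitted models (pure vs. truncated power law, exponential, ...) is the model-selection criterion the paper builds on.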

  6. Nonparametric predictive inference in statistical process control

    NARCIS (Netherlands)

    Arts, G.R.J.; Coolen, F.P.A.; Laan, van der P.

    2000-01-01

    New methods for statistical process control are presented, where the inferences have a nonparametric predictive nature. We consider several problems in process control in terms of uncertainties about future observable random quantities, and we develop inferences for these random quantities based on

  7. Nonparametric modeling of longitudinal covariance structure in functional mapping of quantitative trait loci.

    Science.gov (United States)

    Yap, John Stephen; Fan, Jianqing; Wu, Rongling

    2009-12-01

    Estimation of the covariance structure of longitudinal processes is a fundamental prerequisite for the practical deployment of functional mapping designed to study the genetic regulation and network of quantitative variation in dynamic complex traits. We present a nonparametric approach for estimating the covariance structure of a quantitative trait measured repeatedly at a series of time points. Specifically, we adopt Huang et al.'s (2006, Biometrika 93, 85-98) approach of invoking the modified Cholesky decomposition and converting the problem into modeling a sequence of regressions of responses. A regularized covariance estimator is obtained using a normal penalized likelihood with an L(2) penalty. This approach, embedded within a mixture likelihood framework, leads to enhanced accuracy, precision, and flexibility of functional mapping while preserving its biological relevance. Simulation studies are performed to reveal the statistical properties and advantages of the proposed method. A real example from a mouse genome project is analyzed to illustrate the utilization of the methodology. The new method will provide a useful tool for genome-wide scanning for the existence and distribution of quantitative trait loci underlying a dynamic trait important to agriculture, biology, and health sciences.
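    The regression-based construction (without the L2 penalty the authors add) can be sketched as: regress each response on its predecessors, collect the negated coefficients in a unit lower-triangular matrix T and the innovation variances in a diagonal D, then invert. Unpenalized, this reproduces the sample covariance exactly:

```python
import numpy as np

def modified_cholesky_cov(Y):
    """Unpenalised modified-Cholesky covariance estimate from longitudinal
    data Y (n subjects x T time points): Sigma = T^{-1} D T^{-T}."""
    Y = Y - Y.mean(axis=0)                    # centre each time point
    n, T = Y.shape
    T_mat = np.eye(T)
    d = np.empty(T)
    d[0] = (Y[:, 0] ** 2).mean()
    for t in range(1, T):
        # regress response at time t on all earlier responses
        phi, *_ = np.linalg.lstsq(Y[:, :t], Y[:, t], rcond=None)
        resid = Y[:, t] - Y[:, :t] @ phi
        T_mat[t, :t] = -phi                   # autoregressive coefficients
        d[t] = (resid ** 2).mean()            # innovation variance
    T_inv = np.linalg.inv(T_mat)
    return T_inv @ np.diag(d) @ T_inv.T
```

    The paper's regularized estimator replaces the plain least-squares regressions with an L2-penalized likelihood, which shrinks the autoregressive coefficients and stabilizes the estimate when T is large relative to n.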

  8. New Frontiers in Passive and Active Nanoantennas

    DEFF Research Database (Denmark)

    Arslanagic, Samel; Ziolkowski, Richard W.

    2017-01-01

    The articles included in this special section focus on several recent advances in the field of passive and active nanoantennas that employ not only traditional based realizations but also their new frontiers.

  9. Energy not the only frontier

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1987-11-15

While the push for big new machines to explore high energy frontiers makes the headlines, other avenues for physics progress are still being actively explored. To reflect these efforts, theorists and experimenters from the experiments committees for CERN's two major existing machines, the PS Proton Synchrotron and the SPS Super Proton Synchrotron, joined forces in study groups to look at long term physics perspectives. As one experimenter put it, 'there are frontiers of high complexity and high precision as well as high energy'. The groups' findings were aired at a special joint open meeting of the two committees at CERN on 31 August and 1 September.

  10. Nonparametric conditional predictive regions for time series

    NARCIS (Netherlands)

    de Gooijer, J.G.; Zerom Godefay, D.

    2000-01-01

    Several nonparametric predictors based on the Nadaraya-Watson kernel regression estimator have been proposed in the literature. They include the conditional mean, the conditional median, and the conditional mode. In this paper, we consider three types of predictive regions for these predictors — the
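The Nadaraya-Watson conditional-mean predictor mentioned above has a compact form: a kernel-weighted average of the observed responses. A minimal sketch, assuming a Gaussian kernel and a user-chosen bandwidth (both illustrative choices; in the time-series setting of the paper, the regressor would hold lagged values and the response the next value):

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x0, h):
    """Nadaraya-Watson kernel estimate of the conditional mean E[Y | X = x0],
    using a Gaussian kernel with bandwidth h."""
    w = np.exp(-0.5 * ((x_train - x0) / h) ** 2)   # kernel weights
    return np.sum(w * y_train) / np.sum(w)
```

On noise-free linear data the estimate reproduces the underlying line at interior points, since the kernel weights are symmetric about the evaluation point.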

  11. MUSE integral-field spectroscopy towards the Frontier Fields Cluster Abell S1063

    DEFF Research Database (Denmark)

    Karman, W.; Caputi, K. I.; Grillo, C.

    2015-01-01

    We present the first observations of the Frontier Fields Cluster Abell S1063 taken with the newly commissioned Multi Unit Spectroscopic Explorer (MUSE) integral field spectrograph. Because of the relatively large field of view (1 arcmin^2), MUSE is ideal to simultaneously target multiple galaxies...... the cluster, we find 17 galaxies at higher redshift, including three previously unknown Lyman-alpha emitters at z>3, and five multiply-lensed galaxies. We report the detection of a new z=4.113 multiply lensed galaxy, with images that are consistent with lensing model predictions derived for the Frontier...... of scientific topics that can be addressed with a single MUSE pointing. We conclude that MUSE is a very efficient instrument to observe galaxy clusters, enabling their mass modelling, and to perform a blind search for high-redshift galaxies....

  12. A Nonparametric Shape Prior Constrained Active Contour Model for Segmentation of Coronaries in CTA Images

    Directory of Open Access Journals (Sweden)

    Yin Wang

    2014-01-01

Full Text Available We present a nonparametric shape constrained algorithm for segmentation of coronary arteries in computed tomography images within the framework of active contours. An adaptive scale selection scheme, based on the global histogram information of the image data, is employed to determine the appropriate window size for each point on the active contour, which improves the performance of the active contour model in low contrast local image regions. The possible leakage, which cannot be identified by using intensity features alone, is reduced through the application of the proposed shape constraint, where the shape of the circularly sampled intensity profile is used to evaluate the likelihood that the current segmentation represents a vascular structure. Experiments on both synthetic and clinical datasets have demonstrated the efficiency and robustness of the proposed method. The results on clinical datasets have shown that the proposed approach is capable of extracting more detailed coronary vessels with subvoxel accuracy.

  13. A nonparametric approach to medical survival data: Uncertainty in the context of risk in mortality analysis

    International Nuclear Information System (INIS)

    Janurová, Kateřina; Briš, Radim

    2014-01-01

Medical survival right-censored data of about 850 patients are evaluated to analyze the uncertainty related to the risk of mortality on the one hand, and to compare two basic surgery techniques in the context of risk of mortality on the other. The colorectal data come from patients who underwent colectomy in the University Hospital of Ostrava. Two basic operating techniques are used for the colectomy: either traditional (open) or minimally invasive (laparoscopic). The basic question arising for the colectomy operation is which type of operation to choose to guarantee longer overall survival time. Two non-parametric approaches have been used to quantify the probability of mortality with uncertainties. In fact, the complement of the probability to one, i.e. the survival function with corresponding confidence levels, is calculated and evaluated. The first approach considers standard nonparametric estimators resulting from both the Kaplan–Meier estimator of the survival function in connection with Greenwood's formula and the Nelson–Aalen estimator of the cumulative hazard function, including a confidence interval for the survival function. The second, innovative approach, represented by Nonparametric Predictive Inference (NPI), uses lower and upper probabilities for quantifying uncertainty and provides a model of a predictive survival function instead of the population survival function. The traditional log-rank test on the one hand and the nonparametric predictive comparison of two groups of lifetime data on the other have been compared to evaluate the risk of mortality in the context of the mentioned surgery techniques. The size of the difference between the two groups of lifetime data has been considered and analyzed as well. Both nonparametric approaches led to the same conclusion: the minimally invasive operating technique guarantees the patient a significantly longer survival time in comparison with the traditional operating technique.

  14. The Final Frontier

    DEFF Research Database (Denmark)

    Baron, Christian

    2017-01-01

    in living conditions, where neglect or reckless behavior may have fatal consequences. Exploring the consequences of such behavior in Tom Godwin’s short story ‘The Cold Equations’ (1954) as well as Ridley Scott’s film, Alien (1979), it argues that such ‘frontier situations’ warrant a change in the general...

  15. Nonparametric autocovariance estimation from censored time series by Gaussian imputation.

    Science.gov (United States)

    Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K

    2009-02-01

    One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.
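For intuition, the nonparametric estimator that such a Gaussian imputation procedure would feed is simply the sample autocovariance function; a minimal sketch (the conditional-Gaussian imputation step for the censored values is omitted here):

```python
import numpy as np

def sample_autocov(x, max_lag):
    """Nonparametric (sample) autocovariance of a series x for lags 0..max_lag.
    In the paper's setting, censored values would first be replaced by draws
    from their conditional Gaussian distribution before applying this."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    return np.array([np.sum(xc[:n - k] * xc[k:]) / n for k in range(max_lag + 1)])
```

For the toy series [1, 2, 3, 4] the lag-0 and lag-1 values are 1.25 and 0.3125, matching the hand computation with deviations [-1.5, -0.5, 0.5, 1.5].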

  16. Nonparametric predictive inference in reliability

    International Nuclear Information System (INIS)

    Coolen, F.P.A.; Coolen-Schrijner, P.; Yan, K.J.

    2002-01-01

We introduce a recently developed statistical approach, called nonparametric predictive inference (NPI), to reliability. Bounds for the survival function for a future observation are presented. We illustrate how NPI can deal with right-censored data, and discuss aspects of competing risks. We present possible applications of NPI for Bernoulli data, and we briefly outline applications of NPI for replacement decisions. The emphasis is on introduction and illustration of NPI in reliability contexts; detailed mathematical justifications are presented elsewhere.

  17. Panel data nonparametric estimation of production risk and risk preferences

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

We apply nonparametric panel data kernel regression to investigate production risk, output price uncertainty, and risk attitudes of Polish dairy farms based on a firm-level unbalanced panel data set that covers the period 2004–2010. We compare different model specifications and different...... approaches for obtaining firm-specific measures of risk attitudes. We found that Polish dairy farmers are risk averse regarding production risk and price uncertainty. According to our results, Polish dairy farmers perceive the production risk as being more significant than the risk related to output price......

  18. Frontiers in Time Series and Financial Econometrics

    OpenAIRE

    Ling, S.; McAleer, M.J.; Tong, H.

    2015-01-01

    __Abstract__ Two of the fastest growing frontiers in econometrics and quantitative finance are time series and financial econometrics. Significant theoretical contributions to financial econometrics have been made by experts in statistics, econometrics, mathematics, and time series analysis. The purpose of this special issue of the journal on “Frontiers in Time Series and Financial Econometrics” is to highlight several areas of research by leading academics in which novel methods have contrib...

  19. Nonparametric estimation of location and scale parameters

    KAUST Repository

    Potgieter, C.J.; Lombard, F.

    2012-01-01

    Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal

  20. The determinants of cost efficiency of hydroelectric generating plants: A random frontier approach

    International Nuclear Information System (INIS)

    Barros, Carlos P.; Peypoch, Nicolas

    2007-01-01

This paper analyses the technical efficiency of the hydroelectric generating plants of the main Portuguese electricity enterprise, EDP (Electricity of Portugal), between 1994 and 2004, investigating the role played by increased competition and regulation. A random cost frontier method is adopted. A translog frontier model is used, and the maximum likelihood estimation technique is employed to estimate the empirical model. We estimate the efficiency scores and decompose the exogenous variables into homogeneous and heterogeneous components. It is concluded that production and capacity are heterogeneous, signifying that the hydroelectric generating plants are very distinct and that any energy policy should therefore take this heterogeneity into account. It is also concluded that competition, rather than regulation, plays the key role in increasing hydroelectric plant efficiency.

  1. A parametric interpretation of Bayesian Nonparametric Inference from Gene Genealogies: Linking ecological, population genetics and evolutionary processes.

    Science.gov (United States)

    Ponciano, José Miguel

    2017-11-22

Using a nonparametric Bayesian approach, Palacios and Minin (2013) dramatically improved the accuracy and precision of Bayesian inference of population size trajectories from gene genealogies. These authors proposed an extension of a Gaussian Process (GP) nonparametric inferential method for the intensity function of non-homogeneous Poisson processes. They found that not only were the statistical properties of the estimators improved with their method, but also that key aspects of the demographic histories were recovered. The authors' work represents the first Bayesian nonparametric solution to this inferential problem, because they specify a convenient prior belief without a particular functional form on the population trajectory. Their approach works so well, and provides such a profound understanding of the biological process, that the question arises as to how truly "biology-free" their approach really is. Using well-known concepts of stochastic population dynamics, here I demonstrate that, in fact, Palacios and Minin's GP model can be cast as a parametric population growth model with density dependence and environmental stochasticity. Making this link between population genetics and stochastic population dynamics modeling provides novel insights into eliciting biologically meaningful priors for the trajectory of the effective population size. The results presented here also bring a novel understanding of GPs as models for the evolution of a trait. Thus, the ecological principles underlying Palacios and Minin's (2013) prior add to the conceptual and scientific value of these authors' inferential approach. I conclude this note by listing a series of insights brought about by this connection with Ecology. Copyright © 2017 The Author. Published by Elsevier Inc. All rights reserved.

  2. Energy not the only frontier

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

While the push for big new machines to explore high energy frontiers makes the headlines, other avenues for physics progress are still being actively explored. To reflect these efforts, theorists and experimenters from the experiments committees for CERN's two major existing machines, the PS Proton Synchrotron and the SPS Super Proton Synchrotron, joined forces in study groups to look at long term physics perspectives. As one experimenter put it, 'there are frontiers of high complexity and high precision as well as high energy'. The groups' findings were aired at a special joint open meeting of the two committees at CERN on 31 August and 1 September.

  3. US residential energy demand and energy efficiency: A stochastic demand frontier approach

    International Nuclear Information System (INIS)

    Filippini, Massimo; Hunt, Lester C.

    2012-01-01

    This paper estimates a US frontier residential aggregate energy demand function using panel data for 48 ‘states’ over the period 1995 to 2007 using stochastic frontier analysis (SFA). Utilizing an econometric energy demand model, the (in)efficiency of each state is modeled and it is argued that this represents a measure of the inefficient use of residential energy in each state (i.e. ‘waste energy’). This underlying efficiency for the US is therefore observed for each state as well as the relative efficiency across the states. Moreover, the analysis suggests that energy intensity is not necessarily a good indicator of energy efficiency, whereas by controlling for a range of economic and other factors, the measure of energy efficiency obtained via this approach is. This is a novel approach to model residential energy demand and efficiency and it is arguably particularly relevant given current US energy policy discussions related to energy efficiency.
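Stochastic frontier analysis of the kind applied here is typically fitted by maximizing the normal/half-normal composed-error likelihood of Aigner, Lovell and Schmidt. Below is a hedged sketch of that standard estimator for a production frontier (for the paper's demand-frontier setting the sign of the inefficiency term would be reversed); the function name, starting values, and optimizer settings are illustrative choices, not the authors'.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def sfa_production(y, X):
    """Normal/half-normal stochastic production frontier:
    y = X b + v - u, v ~ N(0, s_v^2), u ~ |N(0, s_u^2)|,
    fitted by maximum likelihood. Returns (b_hat, s_v_hat, s_u_hat)."""
    n, k = X.shape

    def neg_loglik(theta):
        b, ls_v, ls_u = theta[:k], theta[k], theta[k + 1]
        s_v, s_u = np.exp(ls_v), np.exp(ls_u)     # log-parametrized for positivity
        sigma = np.sqrt(s_v ** 2 + s_u ** 2)
        lam = s_u / s_v
        eps = y - X @ b                            # composed error
        ll = (np.log(2.0) - np.log(sigma) + norm.logpdf(eps / sigma)
              + norm.logcdf(-eps * lam / sigma))
        return -np.sum(ll)

    ols = np.linalg.lstsq(X, y, rcond=None)[0]     # OLS starting values
    s0 = np.log(np.std(y - X @ ols))
    res = minimize(neg_loglik, np.r_[ols, s0, s0], method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-9, "fatol": 1e-9})
    return res.x[:k], np.exp(res.x[k]), np.exp(res.x[k + 1])
```

Unlike OLS, whose intercept absorbs the mean inefficiency, the MLE separates the noise and inefficiency variances, which is what makes state-level efficiency scores recoverable.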

  4. Bioactive glasses: Frontiers and challenges

    Directory of Open Access Journals (Sweden)

    Larry L. Hench

    2015-11-01

Full Text Available Bioactive glasses were discovered in 1969 and provided for the first time an alternative to nearly inert implant materials. Bioglass formed a rapid, strong and stable bond with host tissues. This article examines the frontiers of research crossed to achieve clinical use of bioactive glasses and glass-ceramics. In the 1980s it was discovered that bioactive glasses could be used in particulate form to stimulate osteogenesis, which thereby led to the concept of regeneration of tissues. Later, it was discovered that the dissolution ions from the glasses behaved like growth factors, providing signals to the cells. This article summarizes the frontiers of knowledge crossed during four eras of development of bioactive glasses that have led from the concept of bioactivity to widespread clinical and commercial use, with emphasis on the first composition, 45S5 Bioglass®. The four eras are: (a) discovery; (b) clinical application; (c) tissue regeneration; and (d) innovation. Questions still to be answered for the fourth era are included to stimulate innovation in the field and the exploration of new frontiers that can be the basis for a general theory of bioactive stimulation of tissue regeneration and its application to numerous clinical needs.

  5. Non-parametric smoothing of experimental data

    International Nuclear Information System (INIS)

    Kuketayev, A.T.; Pen'kov, F.M.

    2007-01-01

Full text: Rapid processing of experimental data samples in nuclear physics often requires differentiation in order to find extrema. Therefore, even at the preliminary stage of data analysis, a range of noise-reduction methods is used to smooth experimental data. There are many non-parametric smoothing techniques: interval averages, moving averages, exponential smoothing, etc. Nevertheless, it is more common to use a priori information about the behavior of the experimental curve in order to construct smoothing schemes based on least-squares techniques. The advantage of the latter methodology is that the area under the curve can be preserved, which is equivalent to conservation of the total counting rate. The disadvantage of this approach is the lack of a priori information. For example, during data processing the sums of peaks unresolved by a detector are very often replaced with a single peak, introducing uncontrolled errors into the determination of the physical quantities. The problem is solvable only with experienced personnel, whose skills far exceed the challenge. We propose a set of non-parametric techniques that allows the use of any additional information on the nature of the experimental dependence. The method is based on the construction of a functional that includes both the experimental data and the a priori information. The minimum of this functional is attained on a non-parametric smoothed curve. Euler (Lagrange) differential equations are constructed for these curves, and their solutions are obtained analytically or numerically. The proposed approach allows automated processing of nuclear physics data, eliminating the need for highly skilled laboratory personnel. It also makes it possible to obtain smoothing curves within a given confidence interval, e.g. according to the χ2 distribution. This approach is applicable when constructing smooth solutions of ill-posed problems, in particular when solving
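The functional-minimization idea described here can be illustrated with the simplest such functional, a discrete Tikhonov smoother: a data-fidelity term plus a roughness penalty, whose Euler-Lagrange (normal) equations reduce to a single linear solve. This is a toy version of the general scheme, not the authors' functional.

```python
import numpy as np

def smooth_penalized(y, lam):
    """Smoothing as functional minimization: minimize
    ||f - y||^2 + lam * ||D2 f||^2, where D2 is the discrete second
    difference. The minimizer solves (I + lam * D2' D2) f = y, the
    discrete analogue of the Euler-Lagrange equation."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)
```

Linear data passes through unchanged (zero roughness), while oscillatory noise is damped, which is exactly the a-priori-shape behavior the abstract describes.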

  6. Multivariate nonparametric regression and visualization with R and applications to finance

    CERN Document Server

    Klemelä, Jussi

    2014-01-01

A modern approach to statistical learning and its applications through visualization methods. With a unique and innovative presentation, Multivariate Nonparametric Regression and Visualization provides readers with the core statistical concepts to obtain complete and accurate predictions when given a set of data. Focusing on nonparametric methods to adapt to the multiple types of data generating mechanisms, the book begins with an overview of classification and regression. The book then introduces and examines various tested and proven visualization techniques for learning samples and functio

  7. Using non-parametric methods in econometric production analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    2012-01-01

Econometric estimation of production functions is one of the most common methods in applied economic production analysis. These studies usually apply parametric estimation techniques, which obligate the researcher to specify a functional form of the production function of which the Cobb...... parameter estimates, but also in biased measures which are derived from the parameters, such as elasticities. Therefore, we propose to use non-parametric econometric methods. First, these can be applied to verify the functional form used in parametric production analysis. Second, they can be directly used...... by investigating the relationship between the elasticity of scale and the farm size. We use a balanced panel data set of 371 specialised crop farms for the years 2004-2007. A non-parametric specification test shows that neither the Cobb-Douglas function nor the Translog function are consistent with the "true...

  8. Frontier commodification

    DEFF Research Database (Denmark)

    Bennike, Rune Bolding

    2017-01-01

    In the contemporary global imagination, Darjeeling typically figures on two accounts: as a unique tourism site replete with colonial heritage and picturesque nature, and as the productive origin for some of the world's most exclusive teas. In this commodified and consumable form, Darjeeling is part...... of material and representational interventions, I uncover the particular assemblage of government and capital that enabled this transformation and highlight its potential resonances with contemporary cases of frontier commodification in South Asia and beyond....

  9. Fermilab a laboratory at the frontier of research

    CERN Document Server

    Gillies, James D

    2002-01-01

    Since its foundation in 1967, creeping urbanization has taken away some of Fermilab's remoteness, but the famous buffalo still roam, and farm buildings evocative of frontier America dot the landscape - appropriately for a laboratory at the high-energy frontier of modern research. Topics discussed are the Tevatron, detector upgrades, the neutrino programme, Fermilab and the LHC and the non-accelerator programme.

  10. Multi-Directional Non-Parametric Analysis of Agricultural Efficiency

    DEFF Research Database (Denmark)

    Balezentis, Tomas

    This thesis seeks to develop methodologies for assessment of agricultural efficiency and employ them to Lithuanian family farms. In particular, we focus on three particular objectives throughout the research: (i) to perform a fully non-parametric analysis of efficiency effects, (ii) to extend...... to the Multi-Directional Efficiency Analysis approach when the proposed models were employed to analyse empirical data of Lithuanian family farm performance, we saw substantial differences in efficiencies associated with different inputs. In particular, assets appeared to be the least efficiently used input...... relative to labour, intermediate consumption and land (in some cases land was not treated as a discretionary input). These findings call for further research on relationships among financial structure, investment decisions, and efficiency in Lithuanian family farms. Application of different techniques...

  11. Urban frontiers in the global struggle for capital gains

    Directory of Open Access Journals (Sweden)

    Peter Mörtenböck

    2018-05-01

    Full Text Available This article examines different ways in which finance models have become the ruling mode of spatializing relationships, arguing that the ongoing convergence of economic and spatial investment has transformed our environments into heavily contested ‘financescapes’. First, it reflects upon architecture’s capacity to give both material and symbolic form to these processes and considers the impacts this has on the emergence of novel kinds of urban investment frontiers, including luxury brand real estate, free zones, private cities, and urban innovation hubs. Focusing on speculative urban developments in Morocco and the United Arab Emirates, the article then highlights the performative dimension of such building programs: how architectural capital is put to work by actively performing the frontiers of future development. Physically staking out future financial gains, this mode of operation is today becoming increasingly manifested in urban crowdfunding schemes. We argue that, far from promoting new models of civic participation, such schemes are functioning as a testbed for speculation around new patterns of spatial production in which architecture acts less as the flagstaff of capital than as a capital system in itself.

  12. Nonparametric Integrated Agrometeorological Drought Monitoring: Model Development and Application

    Science.gov (United States)

    Zhang, Qiang; Li, Qin; Singh, Vijay P.; Shi, Peijun; Huang, Qingzhong; Sun, Peng

    2018-01-01

Drought is a major natural hazard that has massive impacts on society. How to monitor drought is critical for its mitigation and early warning. This study proposed a modified version of the multivariate standardized drought index (MSDI) based on precipitation, evapotranspiration, and soil moisture, i.e., the modified multivariate standardized drought index (MMSDI), using nonparametric joint probability distribution analysis. Comparisons were made between the standardized precipitation evapotranspiration index (SPEI), the standardized soil moisture index (SSMI), MSDI, and MMSDI, and real-world observed drought regimes. Results indicated that MMSDI detected droughts that SPEI and/or SSMI failed to detect, and that MMSDI detected almost all droughts identified by SPEI and SSMI. Further, droughts detected by MMSDI were similar to real-world observed droughts in terms of drought intensity and drought-affected area. Compared to MMSDI, MSDI has the potential to overestimate drought intensity and drought-affected area across China, which should be attributed to the exclusion of the evapotranspiration component from the estimation of drought intensity. Therefore, MMSDI is proposed for monitoring agrometeorological droughts. Results of this study provide a framework for integrated drought monitoring in other regions of the world and can help to develop drought mitigation.
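The joint-probability construction behind indices of the MSDI family can be sketched as: estimate the joint nonexceedance probability of the indicators empirically, then map it through the standard normal quantile function. A minimal illustration using the Gringorten plotting position (a common nonparametric choice; the paper's exact estimator may differ):

```python
import numpy as np
from scipy.stats import norm

def joint_drought_index(*indicators):
    """Multivariate standardized index in the MSDI spirit: empirical joint
    nonexceedance probability P(X1 <= x1, ..., Xk <= xk), Gringorten
    plotting position, then the standard normal quantile transform."""
    Z = np.column_stack(indicators)
    n = Z.shape[0]
    # for each time step i, count time steps jointly <= it in all indicators
    m = np.array([np.sum(np.all(Z <= Z[i], axis=1)) for i in range(n)])
    p = (m - 0.44) / (n + 0.12)          # Gringorten plotting position
    return norm.ppf(p)                   # standardized (normal) index
```

Negative index values flag jointly dry conditions across all inputs, which is how a combined precipitation/evapotranspiration/soil-moisture deficit registers as one agrometeorological drought signal.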

  13. Teaching Nonparametric Statistics Using Student Instrumental Values.

    Science.gov (United States)

    Anderson, Jonathan W.; Diddams, Margaret

    Nonparametric statistics are often difficult to teach in introduction to statistics courses because of the lack of real-world examples. This study demonstrated how teachers can use differences in the rankings and ratings of undergraduate and graduate values to discuss: (1) ipsative and normative scaling; (2) uses of the Mann-Whitney U-test; and…

  14. Greek perceptions of frontier in Magna Graecia: literature and archaeology in dialogue

    Directory of Open Access Journals (Sweden)

    Airton POLLINI

    2013-07-01

    Full Text Available The paper deals with Greek perceptions of frontier in Magna Graecia, from a historical archaeological, contextual standpoint. Considering the complex relationship between literary and archaeological evidence, the paper uses as a case study the frontier in Southern Italy, discussing the subjective frontier perceptions by Greeks and Natives in interaction.

  15. A Bayesian stochastic frontier analysis of Chinese fossil-fuel electricity generation companies

    International Nuclear Information System (INIS)

    Chen, Zhongfei; Barros, Carlos Pestana; Borges, Maria Rosa

    2015-01-01

This paper analyses the technical efficiency of Chinese fossil-fuel electricity generation companies from 1999 to 2011, using a Bayesian stochastic frontier model. The results reveal that efficiency varies among the fossil-fuel electricity generation companies analysed. We also examine the effects of size, location, government ownership and mixed sources of electricity generation on the efficiency of these companies. Policy implications are derived. - Highlights: • We analyze the efficiency of 27 quoted Chinese fossil-fuel electricity generation companies during 1999–2011. • We adopt a Bayesian stochastic frontier model taking the identified heterogeneity into consideration. • Against the background of reform in the Chinese energy industry, we propose four hypotheses and check their influence on efficiency. • Large size, coastal location, government control and hydro energy sources have all increased costs

  16. Investigation of MLE in nonparametric estimation methods of reliability function

    International Nuclear Information System (INIS)

    Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo

    2001-01-01

There have been many attempts to estimate a reliability function. In the ESReDA 20th seminar, a new nonparametric method was proposed. The major point of that paper is how to use censored data efficiently. Generally, there are three kinds of approaches to estimating a reliability function in a nonparametric way, i.e., the Reduced Sample Method, the Actuarial Method and the Product-Limit (PL) Method. These three methods have some limits, so we suggest an advanced method that reflects censored information more efficiently. In many instances there will be a unique maximum likelihood estimator (MLE) of an unknown parameter, and often it may be obtained by the process of differentiation. It is well known that the three methods generally used to estimate a reliability function in a nonparametric way have maximum likelihood estimators that exist uniquely. Therefore, the MLE of the new method is derived in this study. The procedure to calculate the MLE is similar to that of the PL estimator. The difference between the two is that in the new method the mass (or weight) of each observation influences the others, whereas in the PL estimator it does not.

  17. Nonparametric identification of copula structures

    KAUST Repository

    Li, Bo

    2013-06-01

We propose a unified framework for testing a variety of assumptions commonly made about the structure of copulas, including symmetry, radial symmetry, joint symmetry, associativity and Archimedeanity, and max-stability. Our test is nonparametric and based on the asymptotic distribution of the empirical copula process. We perform simulation experiments to evaluate our test and conclude that our method is reliable and powerful for assessing common assumptions on the structure of copulas, particularly when the sample size is moderately large. We illustrate our testing approach on two datasets. © 2013 American Statistical Association.
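An empirical-copula test statistic of the kind described, here for exchange symmetry C(u, v) = C(v, u), can be sketched as below. The bootstrap of the empirical copula process needed for critical values is omitted, ties in the data are ignored, and the function name is hypothetical.

```python
import numpy as np

def empirical_copula_symmetry_stat(x, y, grid=20):
    """Sup-norm statistic for exchange symmetry of a bivariate copula:
    max over a grid of |C_n(u, v) - C_n(v, u)|, where C_n is the empirical
    copula built from the ranks of the sample."""
    n = len(x)
    rx = np.empty(n); rx[np.argsort(x)] = np.arange(1, n + 1)
    ry = np.empty(n); ry[np.argsort(y)] = np.arange(1, n + 1)
    u, v = rx / (n + 1), ry / (n + 1)        # pseudo-observations
    gs = np.linspace(0.05, 0.95, grid)
    C = lambda a, b: np.mean((u <= a) & (v <= b))   # empirical copula
    return max(abs(C(a, b) - C(b, a)) for a in gs for b in gs)
```

For perfectly comonotone data the statistic is exactly zero, since the empirical copula then depends only on min(u, v); larger values indicate departures from symmetry.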

  18. Markov switching mean-variance frontier dynamics: theory and international evidence

    OpenAIRE

    M. Guidolin; F. Ria

    2010-01-01

    It is well-known that regime switching models are able to capture the presence of rich non-linear patterns in the joint distribution of asset returns. After reviewing key concepts and technical issues related to specifying, estimating, and using multivariate Markov switching models in financial applications, in this paper we map the presence of regimes in means, variances, and covariances of asset returns into explicit dynamics of the Markowitz mean-variance frontier. In particular, we show b...

  19. About pioneer frontiers

    Directory of Open Access Journals (Sweden)

    Hervé Théry

    2014-09-01

Full Text Available The geographer Pierre Monbeig wrote texts far ahead of his time, which deserve to be read today because they are useful for understanding today's pioneering frontiers. These frontiers are nowadays much further north than in his time, in the Amazon, contested between advocates of environmental protection and of the production of meat and grains, which has appeared on the southern flank of the Brazilian Amazon, in Mato Grosso.

  20. Frontier Assignment for Sensitivity Analysis of Data Envelopment Analysis

    Science.gov (United States)

    Naito, Akio; Aoki, Shingo; Tsuji, Hiroshi

To extend the sensitivity analysis capability of DEA (Data Envelopment Analysis), this paper proposes frontier assignment based DEA (FA-DEA). The basic idea of FA-DEA is to allow a decision maker to choose the frontier intentionally, whereas traditional DEA and Super-DEA determine the frontier computationally. The features of FA-DEA are as follows: (1) it provides the chance to exclude extra-influential DMUs (Decision Making Units) and to find extra-ordinal DMUs, and (2) it includes the functionality of traditional DEA and Super-DEA, so it can handle sensitivity analysis more flexibly. A simple numerical study has shown the effectiveness of the proposed FA-DEA and its difference from traditional DEA.
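For reference, the traditional input-oriented CCR DEA score that FA-DEA generalizes solves one small linear program per DMU. A minimal sketch with scipy.optimize.linprog (FA-DEA would additionally let the analyst restrict which DMUs are allowed to form the frontier; here all DMUs enter the reference set):

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o:
    min theta  s.t.  X' lam <= theta * x_o,  Y' lam >= y_o,  lam >= 0,
    where X is (n_dmus, n_inputs) and Y is (n_dmus, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                      # variables: [theta, lam]
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])     # X'lam - theta*x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])      # -Y'lam <= -y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun                                   # efficiency score theta
```

With two DMUs producing one unit of output from 2 and 4 units of input respectively, the first is efficient (score 1) and the second scores 0.5, since it could halve its input and still match the frontier.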

  1. The technology gap and efficiency measure in WEC countries: Application of the hybrid meta frontier model

    International Nuclear Information System (INIS)

    Chiu, Yung-Ho; Lee, Jen-Hui; Lu, Ching-Cheng; Shyu, Ming-Kuang; Luo, Zhengying

    2012-01-01

    This study develops the hybrid meta frontier DEA model for which inputs are distinguished into radial inputs that change proportionally and non-radial inputs that change non-proportionally, in order to measure the technical efficiency and technology gap ratios (TGR) of four different regions: Asia, Africa, America, and Europe. This paper selects 87 countries that are members of the World Energy Council from 2005 to 2007. The input variables are industry and population, while the output variances are gross domestic product (GDP) and the amount of fossil-fuel CO 2 emissions. The result shows that countries’ efficiency ranking among their own region presents more implied volatility. In view of the Technology Gap Ratio, Europe is the most efficient of any region, but during the same period, Asia has a lower efficiency than other regions. Finally, regions with higher industry (or GDP) might not have higher efficiency from 2005 to 2007. And higher CO 2 emissions or population also might not mean lower efficiency for other regions. In addition, Brazil is not OECD member, but it is higher efficiency than other OECD members in emerging countries case. OECD countries are better efficiency than non-OECD countries and Europe is higher than Asia to control CO 2 emissions. If non-OECD countries or Asia countries could reach the best efficiency score, they should try to control CO 2 emissions. - Highlights: ► The new meta frontier Model for evaluating the efficiency and technology gap ratios. ► Higher CO 2 emissions might not lower efficiency than any other regions, like Europe. ► Asia’s output and CO 2 emissions simultaneously increased and lower of its efficiency. ► Non-OECD or Asia countries should control CO 2 emissions to reach best efficiency score.

  2. The Frontier Fields: Survey Design and Initial Results

    Energy Technology Data Exchange (ETDEWEB)

    Lotz, J. M.; Koekemoer, A.; Grogin, N.; Mack, J.; Anderson, J.; Avila, R.; Barker, E. A.; Borncamp, D.; Durbin, M.; Gunning, H.; Hilbert, B.; Jenkner, H.; Khandrika, H.; Levay, Z.; Lucas, R. A.; MacKenty, J.; Ogaz, S. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Coe, D.; Capak, P.; Brammer, G., E-mail: lotz@stsci.edu [European Space Agency/Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); and others

    2017-03-01

    What are the faintest distant galaxies we can see with the Hubble Space Telescope (HST) now, before the launch of the James Webb Space Telescope? This is the challenge taken up by the Frontier Fields, a Director's discretionary time campaign with HST and the Spitzer Space Telescope to see deeper into the universe than ever before. The Frontier Fields combines the power of HST and Spitzer with the natural gravitational telescopes of massive high-magnification clusters of galaxies to produce the deepest observations of clusters and their lensed galaxies ever obtained. Six clusters—Abell 2744, MACSJ0416.1-2403, MACSJ0717.5+3745, MACSJ1149.5+2223, Abell S1063, and Abell 370—have been targeted by the HST ACS/WFC and WFC3/IR cameras with coordinated parallel fields for over 840 HST orbits. The parallel fields are the second-deepest observations thus far by HST with 5σ point-source depths of ∼29th ABmag. Galaxies behind the clusters experience typical magnification factors of a few, with small regions magnified by factors of 10–100. Therefore, the Frontier Field cluster HST images achieve intrinsic depths of ∼30–33 mag over very small volumes. Spitzer has obtained over 1000 hr of Director's discretionary imaging of the Frontier Field clusters and parallels in the IRAC 3.6 and 4.5 μm bands to 5σ point-source depths of ∼26.5 and 26.0 ABmag. We demonstrate the exceptional sensitivity of the HST Frontier Field images to faint high-redshift galaxies, and review the initial results related to the primary science goals.

  3. Nonparametric method for failures detection and localization in the actuating subsystem of aircraft control system

    Science.gov (United States)

    Karpenko, S. S.; Zybin, E. Yu; Kosyanchuk, V. V.

    2018-02-01

    In this paper we design a nonparametric method for failure detection and localization in the aircraft control system that uses measurements of the control signals and the aircraft states only. It does not require a priori information about the aircraft model parameters, training, or statistical calculations, and is based on algebraic solvability conditions for the aircraft model identification problem. This makes it possible to significantly increase the efficiency of the detection and localization problem solution by completely eliminating errors associated with aircraft model uncertainties.

  4. Bayesian nonparametric areal wombling for small-scale maps with an application to urinary bladder cancer data from Connecticut.

    Science.gov (United States)

    Guhaniyogi, Rajarshi

    2017-11-10

    With increasingly abundant spatial data in the form of case counts or rates combined over areal regions (eg, ZIP codes, census tracts, or counties), interest turns to formal identification of difference "boundaries," or barriers on the map, in addition to the estimated statistical map itself. "Boundary" refers to a border that describes vastly disparate outcomes in the adjacent areal units, perhaps caused by latent risk factors. This article focuses on developing a model-based statistical tool, equipped to identify difference boundaries in maps with a small number of areal units, also referred to as small-scale maps. This article proposes a novel and robust nonparametric boundary detection rule based on nonparametric Dirichlet processes, later referred to as Dirichlet process wombling (DPW) rule, by employing Dirichlet process-based mixture models for small-scale maps. Unlike the recently proposed nonparametric boundary detection rules based on false discovery rates, the DPW rule is free of ad hoc parameters, computationally simple, and readily implementable in freely available software for public health practitioners such as JAGS and OpenBUGS and yet provides statistically interpretable boundary detection in small-scale wombling. We offer a detailed simulation study and an application of our proposed approach to a urinary bladder cancer incidence rates dataset between 1990 and 2012 in the 8 counties in Connecticut. Copyright © 2017 John Wiley & Sons, Ltd.
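The core of the DPW rule, fitting a Dirichlet-process mixture to areal rates and declaring a boundary where adjacent units land in different mixture components, can be sketched with scikit-learn's variational truncated-DP mixture as a stand-in for the paper's JAGS/OpenBUGS implementation. The data below are invented (40 units on a line with two latent risk regimes), and this sketch ignores the full posterior boundary probabilities the paper develops:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(5)
# toy incidence rates for 40 areal units along a line, two latent risk regimes
rates = np.r_[rng.normal(10.0, 0.5, 20), rng.normal(20.0, 0.5, 20)][:, None]

# truncated Dirichlet-process mixture; superfluous components get near-zero weight
dpmm = BayesianGaussianMixture(
    n_components=10, weight_concentration_prior_type="dirichlet_process",
    max_iter=500, random_state=0).fit(rates)
labels = dpmm.predict(rates)

# wombling-style rule: flag an edge whenever neighbours fall in different clusters
boundaries = [i for i in range(39) if labels[i] != labels[i + 1]]
```

On this toy layout the rule recovers the single change-point between the two regimes; on a real map the neighbour pairs come from the county adjacency graph rather than a line.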

  5. Monitoring coastal marshes biomass with CASI: a comparison of parametric and non-parametric models

    Science.gov (United States)

    Mo, Y.; Kearney, M.

    2017-12-01

    Coastal marshes are important carbon sinks that face multiple natural and anthropogenic stresses. Optical remote sensing is a powerful tool for closely monitoring the biomass of coastal marshes. However, application of hyperspectral sensors to assessing the biomass of diverse coastal marsh ecosystems is limited. This study samples spectral and biophysical data from coastal freshwater, intermediate, brackish, and saline marshes in Louisiana, and develops parametric and non-parametric models for using the Compact Airborne Spectrographic Imager (CASI) to retrieve the marshes' biomass. Linear models and random forest models are developed from simulated CASI data (48 bands, 380-1050 nm, bandwidth 14 nm). Linear models are also developed using narrowband vegetation indices computed from all possible band combinations from the blue, red, and near infrared wavelengths. It is found that the linear models derived from the optimal narrowband vegetation indices provide strong predictions for the marshes' Leaf Area Index (LAI; R2 > 0.74 for ARVI), but not for their Aboveground Green Biomass (AGB; R2 > 0.25). The linear models derived from the simulated CASI data strongly predict the marshes' LAI (R2 = 0.93) and AGB (R2 = 0.71) and have 27 and 30 bands/variables in the final models through stepwise regression, respectively. The random forest models derived from the simulated CASI data also strongly predict the marshes' LAI and AGB (R2 = 0.91 and 0.84, respectively), where the most important variables for predicting LAI are near infrared bands at 784 and 756 nm and for predicting AGB are red bands at 684 and 670 nm. In sum, the random forest model is preferable for assessing coastal marsh biomass using CASI data as it offers high R2 for both LAI and AGB. The superior performance of the random forest model is likely due to the fact that it fully utilizes the full-spectrum data and makes no assumption about the approximate normality of the sampling population. This study offers solutions
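The model comparison described above follows a standard pattern: fit a linear regression and a random forest to the same band matrix, score both on held-out samples, and inspect the forest's feature importances to find the most informative bands. A minimal sketch on synthetic stand-in data (the ratio-style response, band positions 30 and 12, and all numbers are invented, not the study's CASI measurements):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# synthetic stand-in for 48-band CASI reflectance spectra and an LAI response
X = rng.uniform(0.0, 1.0, size=(400, 48))
nir, red = X[:, 30], X[:, 12]                     # pretend band positions
lai = 3.0 * nir / (nir + red + 0.1) + rng.normal(0.0, 0.05, 400)

Xtr, Xte, ytr, yte = train_test_split(X, lai, random_state=0)
lin = LinearRegression().fit(Xtr, ytr)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(Xtr, ytr)
r2_lin = r2_score(yte, lin.predict(Xte))
r2_rf = r2_score(yte, rf.predict(Xte))

# the forest's importances should single out the two informative bands
top = np.argsort(rf.feature_importances_)[::-1][:2]
```

Scoring on a held-out split, rather than in-sample R2, is what makes the linear-vs-forest comparison fair when the forest can memorize training data.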

  6. Essays on parametric and nonparametric modeling and estimation with applications to energy economics

    Science.gov (United States)

    Gao, Weiyu

    My dissertation research is composed of two parts: a theoretical part on semiparametric efficient estimation and an applied part in energy economics under different dynamic settings. The essays are related in terms of their applications as well as the way in which models are constructed and estimated. In the first essay, efficient estimation of the partially linear model is studied. We work out the efficient score functions and efficiency bounds under four stochastic restrictions---independence, conditional symmetry, conditional zero mean, and partially conditional zero mean. A feasible efficient estimation method for the linear part of the model is developed based on the efficient score. A battery of specification tests that allows for choosing between the alternative assumptions is provided. A Monte Carlo simulation is also conducted. The second essay presents a dynamic optimization model for a stylized oilfield resembling the largest developed light oil field in Saudi Arabia, Ghawar. We use data from different sources to estimate the oil production cost function and the revenue function. We pay particular attention to the dynamic aspect of the oil production by employing petroleum-engineering software to simulate the interaction between control variables and reservoir state variables. Optimal solutions are studied under different scenarios to account for the possible changes in the exogenous variables and the uncertainty about the forecasts. The third essay examines the effect of oil price volatility on the level of innovation displayed by the U.S. economy. A measure of innovation is calculated by decomposing an output-based Malmquist index. We also construct a nonparametric measure for oil price volatility. Technical change and oil price volatility are then placed in a VAR system with oil price and a variable indicative of monetary policy. The system is estimated and analyzed for significant relationships. We find that oil price volatility displays a significant

  7. Discrete non-parametric kernel estimation for global sensitivity analysis

    International Nuclear Information System (INIS)

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently only the continuous kernel approach has been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. The discrete kernel estimation is now known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented with its asymptotic convergence rate. Simulations on a test function and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.
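The quantity being estimated is the first-order ANOVA (Sobol') index S_i = Var(E[Y | X_i]) / Var(Y). For a discrete input this conditional expectation can be estimated directly by grouping on the input's levels; the sketch below uses that plain frequency-weighted estimator on an invented test model, without the kernel smoothing that is the paper's actual contribution:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x1 = rng.integers(0, 5, n)                     # discrete input, 5 levels
x2 = rng.integers(0, 3, n)                     # discrete input, 3 levels
y = 2.0 * x1 + 0.5 * x2**2 + rng.normal(0.0, 0.1, n)

def first_order_index(x, y):
    # S = Var(E[Y | X]) / Var(Y), estimated by grouping on the discrete levels
    levels = np.unique(x)
    cond_means = np.array([y[x == v].mean() for v in levels])
    weights = np.array([(x == v).mean() for v in levels])
    return np.sum(weights * (cond_means - y.mean())**2) / y.var()

S1 = first_order_index(x1, y)                  # analytically ~0.916 here
S2 = first_order_index(x2, y)                  # analytically ~0.083 here
```

With few levels per input this grouping estimator is already stable; kernel smoothing across levels matters when levels are many or sparsely sampled.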

  8. Nonparametric Estimation of Cumulative Incidence Functions for Competing Risks Data with Missing Cause of Failure

    DEFF Research Database (Denmark)

    Effraimidis, Georgios; Dahl, Christian Møller

    In this paper, we develop a fully nonparametric approach for the estimation of the cumulative incidence function with Missing At Random right-censored competing risks data. We obtain results on the pointwise asymptotic normality as well as the uniform convergence rate of the proposed nonparametric...

  9. New insights into the stochastic ray production frontier

    DEFF Research Database (Denmark)

    Henningsen, Arne; Bělín, Matěj; Henningsen, Géraldine

    The stochastic ray production frontier was developed as an alternative to the traditional output distance function to model production processes with multiple inputs and multiple outputs. Its main advantage over the traditional approach is that it can be used when some output quantities of some o...... important than the existing criticisms: taking logarithms of the polar coordinate angles, non-invariance to units of measurement, and ordering of the outputs. We also give some practical advice on how to address the newly raised issues....

  10. New insights into the stochastic ray production frontier

    DEFF Research Database (Denmark)

    Henningsen, Arne; Bělín, Matěj; Henningsen, Geraldine

    2017-01-01

    The stochastic ray production frontier was developed as an alternative to the traditional output distance function to model production processes with multiple inputs and multiple outputs. Its main advantage over the traditional approach is that it can be used when some output quantities of some o...... important than the existing criticisms: taking logarithms of the polar coordinate angles, non-invariance to units of measurement, and ordering of the outputs. We also give some practical advice on how to address the newly raised issues....

  11. Predicting Lung Radiotherapy-Induced Pneumonitis Using a Model Combining Parametric Lyman Probit With Nonparametric Decision Trees

    International Nuclear Information System (INIS)

    Das, Shiva K.; Zhou Sumin; Zhang, Junan; Yin, F.-F.; Dewhirst, Mark W.; Marks, Lawrence B.

    2007-01-01

    Purpose: To develop and test a model to predict for lung radiation-induced Grade 2+ pneumonitis. Methods and Materials: The model was built from a database of 234 lung cancer patients treated with radiotherapy (RT), of whom 43 were diagnosed with pneumonitis. The model augmented the predictive capability of the parametric dose-based Lyman normal tissue complication probability (LNTCP) metric by combining it with weighted nonparametric decision trees that use dose and nondose inputs. The decision trees were sequentially added to the model using a 'boosting' process that enhances the accuracy of prediction. The model's predictive capability was estimated by 10-fold cross-validation. To facilitate dissemination, the cross-validation result was used to extract a simplified approximation to the complicated model architecture created by boosting. Application of the simplified model is demonstrated in two example cases. Results: The area under the model receiver operating characteristic curve for cross-validation was 0.72, a significant improvement over the LNTCP area of 0.63 (p = 0.005). The simplified model used the following variables to output a measure of injury: LNTCP, gender, histologic type, chemotherapy schedule, and treatment schedule. For a given patient RT plan, injury prediction was highest for the combination of pre-RT chemotherapy, once-daily treatment, and female gender, and lowest for the combination of no pre-RT chemotherapy and nonsquamous cell histologic type. Application of the simplified model to the example cases revealed that injury prediction for a given treatment plan can range from very low to very high, depending on the settings of the nondose variables. Conclusions: Radiation pneumonitis prediction was significantly enhanced by decision trees that added the influence of nondose factors to the LNTCP formulation.

  12. Modern nonparametric, robust and multivariate methods festschrift in honour of Hannu Oja

    CERN Document Server

    Taskinen, Sara

    2015-01-01

    Written by leading experts in the field, this edited volume brings together the latest findings in the area of nonparametric, robust and multivariate statistical methods. The individual contributions cover a wide variety of topics ranging from univariate nonparametric methods to robust methods for complex data structures. Some examples from statistical signal processing are also given. The volume is dedicated to Hannu Oja on the occasion of his 65th birthday and is intended for researchers as well as PhD students with a good knowledge of statistics.

  13. Bayesian nonparametric adaptive control using Gaussian processes.

    Science.gov (United States)

    Chowdhary, Girish; Kingravi, Hassan A; How, Jonathan P; Vela, Patricio A

    2015-03-01

    Most current model reference adaptive control (MRAC) methods rely on parametric adaptive elements, in which the number of parameters of the adaptive element are fixed a priori, often through expert judgment. An example of such an adaptive element is radial basis function networks (RBFNs), with RBF centers preallocated based on the expected operating domain. If the system operates outside of the expected operating domain, this adaptive element can become noneffective in capturing and canceling the uncertainty, thus rendering the adaptive controller only semiglobal in nature. This paper investigates a Gaussian process-based Bayesian MRAC architecture (GP-MRAC), which leverages the power and flexibility of GP Bayesian nonparametric models of uncertainty. The GP-MRAC does not require the centers to be preallocated, can inherently handle measurement noise, and enables MRAC to handle a broader set of uncertainties, including those that are defined as distributions over functions. We use stochastic stability arguments to show that GP-MRAC guarantees good closed-loop performance with no prior domain knowledge of the uncertainty. Online implementable GP inference methods are compared in numerical simulations against RBFN-MRAC with preallocated centers and are shown to provide better tracking and improved long-term learning.

  14. Frontier Fields: Bringing the Distant Universe into View

    Science.gov (United States)

    Eisenhamer, Bonnie; Lawton, Brandon L.; Summers, Frank; Ryer, Holly

    2014-06-01

    The Frontier Fields is a multi-cycle program of six deep-field observations of strong-lensing galaxy clusters that will be taken in parallel with six deep “blank fields.” The three-year long collaborative program centers on observations from NASA’s Great Observatories, who will team up to look deeper into the universe than ever before, and potentially uncover galaxies that are as much as 100 times fainter than what the telescopes can typically see. Because of the unprecedented views of the universe that will be achieved, the Frontier Fields science program is ideal for informing audiences about scientific advances and topics in STEM. For example, the program provides an opportunity to look back on the history of deep field observations and how they changed (and continue to change) astronomy, while exploring the ways astronomers approach big science problems. As a result, the Space Telescope Science Institute’s Office of Public Outreach has initiated an education and public outreach (E/PO) project to follow the progress of the Frontier Fields program - providing a behind-the-scenes perspective of this observing initiative. This poster will highlight the goals of the Frontier Fields E/PO project and the cost-effective approach being used to bring the program’s results to both the public and educational audiences.

  15. Oscillometric blood pressure estimation by combining nonparametric bootstrap with Gaussian mixture model.

    Science.gov (United States)

    Lee, Soojeong; Rajan, Sreeraman; Jeon, Gwanggil; Chang, Joon-Hyuk; Dajani, Hilmi R; Groza, Voicu Z

    2017-06-01

    Blood pressure (BP) is one of the most important vital indicators and plays a key role in determining the cardiovascular activity of patients. This paper proposes a hybrid approach consisting of the nonparametric bootstrap (NPB) and machine learning techniques to obtain the characteristic ratios (CR) used in the blood pressure estimation algorithm, improving the accuracy of systolic blood pressure (SBP) and diastolic blood pressure (DBP) estimates and providing confidence intervals (CI). The NPB technique circumvents the requirement for a large sample set when obtaining the CI. A mixture of Gaussian densities is assumed for the CRs, and a Gaussian mixture model (GMM) is chosen to estimate the SBP and DBP ratios. The K-means clustering technique is used to obtain the mixture order of the Gaussian densities. The proposed approach achieves grade "A" under the British Hypertension Society testing protocol and is superior to the conventional approach based on the maximum amplitude algorithm (MAA) that uses fixed CRs. The proposed approach also yields a lower mean error (ME) and standard deviation of the error (SDE) in the estimates when compared to the conventional MAA method. In addition, CIs obtained through the proposed hybrid approach are narrower, with a lower SDE. Combining the NPB technique with the GMM provides a methodology to derive individualized characteristic ratios. The results show that the proposed approach enhances the accuracy of SBP and DBP estimation and provides narrower confidence intervals for the estimates. Copyright © 2015 Elsevier Ltd. All rights reserved.
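The two building blocks, resampling with replacement to get a CI without large-sample assumptions, and fitting a Gaussian mixture to the characteristic ratios, can be sketched together. All numbers below are invented stand-ins for oscillometric ratios (the real CRs, protocols, and the K-means order selection are not reproduced):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# hypothetical systolic characteristic ratios drawn from two subpopulations
cr = np.r_[rng.normal(0.50, 0.02, 60), rng.normal(0.60, 0.02, 40)]

# nonparametric bootstrap: resample with replacement, percentile CI on the mean
boot = np.array([rng.choice(cr, cr.size, replace=True).mean()
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])

# two-component Gaussian mixture over the ratios (order fixed here, not by K-means)
gmm = GaussianMixture(n_components=2, random_state=0).fit(cr[:, None])
comp_means = np.sort(gmm.means_.ravel())
```

The bootstrap makes no distributional assumption on the mean's sampling error, while the mixture captures the multimodal shape of the ratios that a single fixed CR would smear over.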

  16. Nonparametric identification of nonlinear dynamic systems using a synchronisation-based method

    Science.gov (United States)

    Kenderi, Gábor; Fidlin, Alexander

    2014-12-01

    The present study proposes an identification method for highly nonlinear mechanical systems that does not require a priori knowledge of the underlying nonlinearities to reconstruct arbitrary restoring force surfaces between degrees of freedom. This approach is based on the master-slave synchronisation between a dynamic model of the system as the slave and the real system as the master using measurements of the latter. As the model synchronises to the measurements, it becomes an observer of the real system. The optimal observer algorithm in a least-squares sense is given by the Kalman filter. Using the well-known state augmentation technique, the Kalman filter can be turned into a dual state and parameter estimator to identify parameters of a priori characterised nonlinearities. The paper proposes an extension of this technique towards nonparametric identification. A general system model is introduced by describing the restoring forces as bilateral spring-dampers with time-variant coefficients, which are estimated as augmented states. The estimation procedure is followed by an a posteriori statistical analysis to reconstruct noise-free restoring force characteristics using the estimated states and their estimated variances. Observability is provided using only one measured mechanical quantity per degree of freedom, which makes this approach less demanding in the number of necessary measurement signals compared with truly nonparametric solutions, which typically require displacement, velocity and acceleration signals. Additionally, due to the statistical rigour of the procedure, it successfully addresses signals corrupted by significant measurement noise. In the present paper, the method is described in detail, which is followed by numerical examples of one degree of freedom (1DoF) and 2DoF mechanical systems with strong nonlinearities of vibro-impact type to demonstrate the effectiveness of the proposed technique.

  17. Bayesian Nonparametric Measurement of Factor Betas and Clustering with Application to Hedge Fund Returns

    Directory of Open Access Journals (Sweden)

    Urbi Garay

    2016-03-01

    Full Text Available We define a dynamic and self-adjusting mixture of Gaussian graphical models to cluster financial returns, and provide a new method for extracting nonparametric estimates of dynamic alphas (excess returns) and betas (relative to a chosen set of explanatory factors) in a multivariate setting. This approach, as well as its outputs, has a dynamic, nonstationary, and nonparametric form, which circumvents the problem of model risk and the parametric assumptions that the Kalman filter and other widely used approaches rely on. The by-product of clusters, used for shrinkage and information borrowing, can be used to determine relationships around specific events. This approach exhibits a smaller root mean squared error than traditionally used benchmarks in financial settings, which we illustrate through simulation. As an illustration, we use hedge fund index data, and find that our estimated alphas are, on average, 0.13% per month higher (1.6% per year) than alphas estimated through ordinary least squares. The approach exhibits fast adaptation to abrupt changes in the parameters, as seen in our estimated alphas and betas, which exhibit high volatility, especially in periods which can be identified as times of stressful market events, a reflection of the dynamic positioning of hedge fund portfolio managers.

  18. Fitting of full Cobb-Douglas and full VRTS cost frontiers by solving goal programming problem

    Science.gov (United States)

    Venkateswarlu, B.; Mahaboob, B.; Subbarami Reddy, C.; Madhusudhana Rao, B.

    2017-11-01

    The present research article first defines two popular production functions, viz. the Cobb-Douglas and VRTS production frontiers, and their dual cost functions, and then derives their cost-limited maximal outputs. The paper shows that the cost-limited maximal output is cost efficient. A one-sided goal programming problem is proposed by which the full Cobb-Douglas cost frontier and the full VRTS frontier can be fitted, including the framing of goal programming problems by which stochastic cost frontiers and stochastic VRTS frontiers are fitted. Hasan et al. [1] used a parametric approach, the Stochastic Frontier Approach (SFA), to examine the technical efficiency of the Malaysian domestic banks listed on the Kuala Lumpur Stock Exchange (KLSE) over the period 2005-2010. Ashkan Hassani [2] demonstrated applications of Cobb-Douglas production functions in construction schedule crashing and in project risk analysis related to the duration of construction projects. Nan Jiang [3] applied stochastic frontier analysis to a panel of New Zealand dairy farms in 1998/99-2006/07.
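The one-sided goal programming idea can be illustrated for a log-linear (Cobb-Douglas style) frontier: choose intercept a and elasticity b to minimize the sum of the non-negative deviations a + b·ln(x_i) − ln(y_i) subject to the frontier lying on or above every observation. This reduces to a linear program. The sketch below uses invented data and a single input, and is a simplification of the paper's full multi-input formulation:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
lx = np.log(rng.uniform(1.0, 10.0, 30))            # log input
ly = 0.5 + 0.7 * lx - rng.exponential(0.2, 30)     # log output, below frontier

n = lx.size
# minimise sum_i (a + b*lx_i - ly_i); the sum of ly_i is a constant,
# so the objective is just n*a + (sum lx_i)*b
c = np.array([float(n), lx.sum()])
A_ub = -np.c_[np.ones(n), lx]                      # a + b*lx_i >= ly_i
b_ub = -ly
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
a_hat, b_hat = res.x
```

Because all deviations are constrained to one side, the fitted line envelops the data from above, which is exactly the "frontier" property that ordinary least squares does not deliver.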

  19. A GEOMETRICALLY SUPPORTED z ∼ 10 CANDIDATE MULTIPLY IMAGED BY THE HUBBLE FRONTIER FIELDS CLUSTER A2744

    International Nuclear Information System (INIS)

    Zitrin, Adi; Zheng, Wei; Huang, Xingxing; Ford, Holland; Broadhurst, Tom; Moustakas, John; Lam, Daniel; Lim, Jeremy; Shu, Xinwen; Diego, Jose M.; Bauer, Franz E.; Infante, Leopoldo; Kelson, Daniel D.; Molino, Alberto

    2014-01-01

    The deflection angles of lensed sources increase with their distance behind a given lens. We utilize this geometric effect to corroborate the z_phot ≅ 9.8 photometric redshift estimate of a faint near-IR dropout, triply imaged by the massive galaxy cluster A2744 in deep Hubble Frontier Fields images. The multiple images of this source follow the same symmetry as other nearby sets of multiple images that bracket the critical curves and have well-defined redshifts (up to z_spec ≅ 3.6), but with larger deflection angles, indicating that this source must lie at a higher redshift. Similarly, our different parametric and non-parametric lens models all require this object be at z ≳ 4, with at least 95% confidence, thoroughly excluding the possibility of lower-redshift interlopers. To study the properties of this source, we correct the two brighter images for their magnifications, leading to a star formation rate of ∼0.3 M☉ yr⁻¹, a stellar mass of ∼4 × 10⁷ M☉, and an age of ≲220 Myr (95% confidence). The intrinsic apparent magnitude is 29.9 AB (F160W), and the rest-frame UV (∼1500 Å) absolute magnitude is M_UV,AB = –17.6. This corresponds to ∼0.1 L*_z=8 (∼0.2 L*_z=10, adopting dM*/dz ∼ 0.45), making this candidate one of the least luminous galaxies discovered at z ∼ 10.

  20. A GEOMETRICALLY SUPPORTED z ∼ 10 CANDIDATE MULTIPLY IMAGED BY THE HUBBLE FRONTIER FIELDS CLUSTER A2744

    Energy Technology Data Exchange (ETDEWEB)

    Zitrin, Adi [Cahill Center for Astronomy and Astrophysics, California Institute of Technology, MS 249-17, Pasadena, CA 91125 (United States); Zheng, Wei; Huang, Xingxing; Ford, Holland [Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218 (United States); Broadhurst, Tom [Department of Theoretical Physics, University of Basque Country UPV/EHU, Bilbao (Spain); Moustakas, John [Department of Physics and Astronomy, Siena College, Loudonville, NY 12211 (United States); Lam, Daniel; Lim, Jeremy [Department of Physics, The University of Hong Kong, Pokfulam Road (Hong Kong); Shu, Xinwen [CEA Saclay, DSM/Irfu/Service d' Astrophysique, Orme des Merisiers, F-91191 Gif-sur-Yvette Cedex (France); Diego, Jose M. [Instituto de Física de Cantabria, CSIC-Universidad de Cantabria, E-39005 Santander (Spain); Bauer, Franz E.; Infante, Leopoldo [Pontificia Universidad Católica de Chile, Instituto de Astrofísica, Santiago 22 (Chile); Kelson, Daniel D. [The Observatories of the Carnegie Institution for Science, Pasadena, CA 91101 (United States); Molino, Alberto, E-mail: adizitrin@gmail.com [Instituto de Astrofísica de Andalucía - CSIC, Glorieta de la Astronomía, s/n. E-18008, Granada (Spain)

    2014-09-20

    The deflection angles of lensed sources increase with their distance behind a given lens. We utilize this geometric effect to corroborate the z_phot ≅ 9.8 photometric redshift estimate of a faint near-IR dropout, triply imaged by the massive galaxy cluster A2744 in deep Hubble Frontier Fields images. The multiple images of this source follow the same symmetry as other nearby sets of multiple images that bracket the critical curves and have well-defined redshifts (up to z_spec ≅ 3.6), but with larger deflection angles, indicating that this source must lie at a higher redshift. Similarly, our different parametric and non-parametric lens models all require this object be at z ≳ 4, with at least 95% confidence, thoroughly excluding the possibility of lower-redshift interlopers. To study the properties of this source, we correct the two brighter images for their magnifications, leading to a star formation rate of ∼0.3 M☉ yr⁻¹, a stellar mass of ∼4 × 10⁷ M☉, and an age of ≲220 Myr (95% confidence). The intrinsic apparent magnitude is 29.9 AB (F160W), and the rest-frame UV (∼1500 Å) absolute magnitude is M_UV,AB = –17.6. This corresponds to ∼0.1 L*_z=8 (∼0.2 L*_z=10, adopting dM*/dz ∼ 0.45), making this candidate one of the least luminous galaxies discovered at z ∼ 10.

  1. THE EVOLUTION OF ROMAN FRONTIER CONCEPT AND POLICY

    Directory of Open Access Journals (Sweden)

    George Cupcea

    2015-03-01

    Full Text Available The Roman power is, ideologically, infinite in time and space. Nevertheless, the Roman state experienced a wide variety of territorial limits, evolving in time and space over roughly a millennium. While the early Roman state, limited first to the metropolitan area of Rome and later to the Italian peninsula, was easily defensible, trouble came with the period of heavy expansion. The Romans, always innovating, found solutions for fortifying the zones of contact with the Barbarians. The Roman frontier concept was fundamentally different from the modern one: although the defence of Roman possessions was obviously a priority, the border had to remain an open ensemble, allowing the free circulation of people and goods, among the fundamental Roman rights. The peak of Roman expansion in the 2nd century A.D. also brought the frontier of the Empire to its maximum development. Dacia overlaps broadly with this trend chronologically, which is one reason it hosts one of the most complex frontier systems in the Empire.

  2. Nonparametric estimation for censored mixture data with application to the Cooperative Huntington's Observational Research Trial.

    Science.gov (United States)

    Wang, Yuanjia; Garcia, Tanya P; Ma, Yanyuan

    2012-01-01

    This work presents methods for estimating genotype-specific distributions from genetic epidemiology studies where the event times are subject to right censoring, the genotypes are not directly observed, and the data arise from a mixture of scientifically meaningful subpopulations. Examples of such studies include kin-cohort studies and quantitative trait locus (QTL) studies. Current methods for analyzing censored mixture data include two types of nonparametric maximum likelihood estimators (NPMLEs) which do not make parametric assumptions on the genotype-specific density functions. Although both NPMLEs are commonly used, we show that one is inefficient and the other inconsistent. To overcome these deficiencies, we propose three classes of consistent nonparametric estimators which do not assume parametric density models and are easy to implement. They are based on the inverse probability weighting (IPW), augmented IPW (AIPW), and nonparametric imputation (IMP). The AIPW achieves the efficiency bound without additional modeling assumptions. Extensive simulation experiments demonstrate satisfactory performance of these estimators even when the data are heavily censored. We apply these estimators to the Cooperative Huntington's Observational Research Trial (COHORT), and provide age-specific estimates of the effect of mutation in the Huntington gene on mortality using a sample of family members. The close approximation of the estimated non-carrier survival rates to that of the U.S. population indicates small ascertainment bias in the COHORT family sample. Our analyses underscore an elevated risk of death in Huntington gene mutation carriers compared to non-carriers for a wide age range, and suggest that the mutation equally affects survival rates in both genders. 
The estimated survival rates are useful in genetic counseling for providing guidelines on interpreting the risk of death associated with a positive genetic testing, and in facilitating future subjects at risk
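The IPW idea described above, reweighting observed events by the inverse probability of remaining uncensored, can be sketched for the simpler problem of estimating a survival probability under right censoring. This is a minimal illustration only, not the authors' genotype-mixture estimator; the function name and setup are hypothetical, and ties between event and censoring times are ignored.

```python
import numpy as np

def ipw_survival(times, events, t0):
    """IPW estimate of P(T > t0) under right censoring.

    times  : observed times (event or censoring), assumed distinct.
    events : 1 if the event was observed, 0 if censored.
    Each observed event contributes 1{T > t0} weighted by the inverse
    of the Kaplan-Meier estimate of the censoring survival function.
    """
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    g = np.ones(n)   # censoring survival G evaluated just before each time
    surv = 1.0
    for i in range(n):
        g[i] = surv
        if events[i] == 0:               # a censoring "event"
            at_risk = n - i
            surv *= (at_risk - 1) / at_risk
    w = events / np.clip(g, 1e-12, None)  # inverse probability weights
    return float(np.mean(w * (times > t0)))
```

With no censoring the weights are all one and the estimator reduces to the empirical survival fraction; with censoring, the weights inflate the contribution of events observed late, compensating for subjects lost to follow-up.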

  3. Comparative analysis of automotive paints by laser induced breakdown spectroscopy and nonparametric permutation tests

    International Nuclear Information System (INIS)

    McIntee, Erin; Viglino, Emilie; Rinke, Caitlin; Kumor, Stephanie; Ni Liqiang; Sigman, Michael E.

    2010-01-01

    Laser-induced breakdown spectroscopy (LIBS) has been investigated for the discrimination of automobile paint samples. Paint samples from automobiles of different makes, models, and years were collected and separated into sets based on color, the presence or absence of effect pigments, and the number of paint layers. Twelve LIBS spectra were obtained for each paint sample, each an average of five single-shot 'drill down' spectra from consecutive laser ablations at the same spot on the sample. Analyses by a nonparametric permutation test and a parametric Wald test were performed to determine the extent of discrimination within each set of paint samples. The discrimination power and Type I error were assessed for each data-analysis method. Conversion of the spectral intensity to a log scale (base 10) resulted in a higher overall discrimination power at the same significance level. Working on the log scale, the nonparametric permutation tests gave an overall discrimination power of 89.83%, with a Type I error rate of 4.44% at the nominal 5% significance level. White paint samples, as a group, were the most difficult to differentiate, with a power of only 86.56%, followed by 95.83% for black paint samples. Parametric analysis of the data set produced lower discrimination power (85.17%) with 3.33% Type I errors, and is not recommended on both theoretical and practical grounds. The nonparametric testing method is applicable across many analytical comparisons, the specific application described here being the pairwise comparison of automotive paint samples.
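The permutation approach above can be sketched as follows. This is an illustrative two-sample test, not the authors' exact pairwise statistic: the choice of the maximum absolute channel-wise mean difference as the test statistic is our assumption for the sketch.

```python
import numpy as np

def permutation_test(a, b, n_perm=10000, rng=None):
    """Two-sample permutation test for spectra.

    a, b : 2-D arrays (replicate spectra x wavelength channels).
    Statistic: max absolute channel-wise difference of mean spectra.
    Returns a p-value for the null that both samples share one
    distribution, by re-randomizing sample labels.
    """
    rng = np.random.default_rng(rng)
    pooled = np.vstack([a, b])
    n_a = len(a)
    observed = np.max(np.abs(a.mean(axis=0) - b.mean(axis=0)))
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        pa, pb = pooled[perm[:n_a]], pooled[perm[n_a:]]
        count += np.max(np.abs(pa.mean(axis=0) - pb.mean(axis=0))) >= observed
    return (count + 1) / (n_perm + 1)  # add-one correction avoids p = 0
```

Because the null distribution is built from the data themselves, no parametric assumption on the spectral intensities is needed, which is the appeal of the permutation test over the Wald test in this setting.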

  4. Non-Parametric Analysis of Rating Transition and Default Data

    DEFF Research Database (Denmark)

    Fledelius, Peter; Lando, David; Perch Nielsen, Jens

    2004-01-01

    We demonstrate the use of non-parametric intensity estimation - including construction of pointwise confidence sets - for analyzing rating transition data. We find that transition intensities away from the class studied here for illustration strongly depend on the direction of the previous move b...

  5. Adaptive nonparametric Bayesian inference using location-scale mixture priors

    NARCIS (Netherlands)

    Jonge, de R.; Zanten, van J.H.

    2010-01-01

    We study location-scale mixture priors for nonparametric statistical problems, including multivariate regression, density estimation and classification. We show that a rate-adaptive procedure can be obtained if the prior is properly constructed. In particular, we show that adaptation is achieved if

  6. Non-parametric analysis of production efficiency of poultry egg ...

    African Journals Online (AJOL)

    Non-parametric analysis of production efficiency of poultry egg farmers in Delta ... analysis of factors affecting the output of poultry farmers showed that stock ... should be put in place for farmers to learn the best farm practices carried out on the ...

  7. Managerial performance and cost efficiency of Japanese local public hospitals: a latent class stochastic frontier model.

    Science.gov (United States)

    Besstremyannaya, Galina

    2011-09-01

    The paper explores the link between managerial performance and cost efficiency of 617 Japanese general local public hospitals in 1999-2007. Treating managerial performance as unobservable heterogeneity, the paper employs a panel data stochastic cost frontier model with latent classes. Financial parameters associated with better managerial performance are found to be positively significant in explaining the probability of belonging to the more efficient latent class. The analysis of latent class membership was consistent with the conjecture that unobservable technological heterogeneity reflected in the existence of the latent classes is related to managerial performance. The findings may support the cause for raising efficiency of Japanese local public hospitals by enhancing the quality of management. Copyright © 2011 John Wiley & Sons, Ltd.

  8. Power System Extreme Event Detection: The Vulnerability Frontier

    Energy Technology Data Exchange (ETDEWEB)

    Lesieutre, Bernard C.; Pinar, Ali; Roy, Sandip

    2007-10-17

    In this work we apply graph theoretic tools to provide a close bound on a frontier relating the number of line outages in a grid to the power disrupted by the outages. This frontier describes the boundary of a space relating the possible severity of a disturbance in terms of power disruption, from zero to some maximum on the boundary, to the number of line outages involved in the event. We present the usefulness of this analysis with a complete analysis of a 30 bus system, and present results for larger systems.

  9. 40 CFR 81.24 - Niagara Frontier Intrastate Air Quality Control Region.

    Science.gov (United States)

    2010-07-01

    ... Quality Control Region. 81.24 Section 81.24 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Air Quality Control Regions § 81.24 Niagara Frontier Intrastate Air Quality Control Region. The Niagara Frontier Intrastate Air Quality Control Region (New York) consists of the territorial area...

  10. Re-Emerging Frontiers: Postcolonial Theory and Historical Archaeology of the Borderlands

    DEFF Research Database (Denmark)

    Naum, Magdalena

    2010-01-01

    processes in borderlands that were created by colonial empires. They are also an apt way to conceptualize relationships in frontiers that lacked colonial stigma. To illustrate this point, two different historical examples of borderlands are scrutinized in this paper: the medieval frontier region...

  11. Frontiers of finance: evolution and efficient markets.

    Science.gov (United States)

    Farmer, J D; Lo, A W

    1999-08-31

    In this review article, we explore several recent advances in the quantitative modeling of financial markets. We begin with the Efficient Markets Hypothesis and describe how this controversial idea has stimulated a number of new directions of research, some focusing on more elaborate mathematical models that are capable of rationalizing the empirical facts, others taking a completely different tack in rejecting rationality altogether. One of the most promising directions is to view financial markets from a biological perspective and, specifically, within an evolutionary framework in which markets, instruments, institutions, and investors interact and evolve dynamically according to the "law" of economic selection. Under this view, financial agents compete and adapt, but they do not necessarily do so in an optimal fashion. Evolutionary and ecological models of financial markets are truly a new frontier whose exploration has just begun.

  12. Portfolio Evaluation Based on Efficient Frontier Superiority Using Center of Gravity

    Directory of Open Access Journals (Sweden)

    Omar Samat

    2010-01-01

    Full Text Available Investing in a portfolio of assets is the best way to reduce investment risk. The most desirable portfolios are obtained when investors choose to invest in portfolios that lie on the efficient frontier. However, the relative superiority of portfolios is difficult to judge, especially when their efficient frontier curves overlap. This paper proposes the efficient frontier's center of gravity (CoG) and the Euclidean distance as measures of superiority. A sample of 49 large-cap and small-cap stocks was used to construct two hypothetical portfolios and their efficient frontiers. The Euclidean distance showed that the large-cap portfolio is superior and has a wider feasible region than the small-cap portfolio. The results of the new tool are consistent with conventional methods. Theoretical and practical implications are provided in light of the findings.
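The abstract does not fully specify the CoG construction; under a minimal reading, the centroid of a frontier's sampled (risk, return) points, with frontiers compared by the Euclidean distance between centroids, a sketch might look like this (names and reading are our assumptions):

```python
import numpy as np

def frontier_centroid(risks, returns):
    """Center of gravity (CoG) of an efficient-frontier curve,
    taken here as the mean of its sampled (risk, return) points."""
    return np.array([np.mean(risks), np.mean(returns)])

def frontier_distance(f1, f2):
    """Euclidean distance between the CoGs of two frontiers,
    each given as a (risks, returns) pair of sequences."""
    return float(np.linalg.norm(frontier_centroid(*f1) - frontier_centroid(*f2)))
```

A single scalar per frontier pair sidesteps the ambiguity of overlapping curves, which is the problem the paper sets out to address.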

  13. Shifting frontiers of transcendence in theology, philosophy and science

    Directory of Open Access Journals (Sweden)

    Cornelius W. du Toit

    2011-04-01

    Full Text Available This article dealt cursorily with developments in theology, philosophy and the sciences that have contributed to what one might call horizontal transcendence. The premise is that humans have evolved into beings that are wired for transcendence. Transcendence is described in terms of the metaphor of frontiers and frontier posts. Although the frontiers of transcendence shift according to the insights, understanding and needs of every epoch and world view, it remains transcendent, even in its immanent mode. Diverse perceptions of that frontier normally coexist in every era and we can only discern a posteriori which was the dominant one. Frontiers are fixed with reference to the epistemologies, notions of the subject and power structures of a given era. From a theological point of view, encounter with the transcendent affords insight, not into the essence of transcendence, but into human self-understanding and understanding of our world. Transcendence enters into the picture when an ordinary human experience acquires a depth and an immediacy that are attributed to an act of God. In philosophy, transcendence evolved from a noumenal metaphysics focused on the object (Plato), via emphasis on the epistemological structure and limits of the knowing subject (Kant) and an endeavour to establish a dynamic subject-object dialectics (Hegel), to the assimilation of transcendence into human existence (Heidegger). In the sciences certain developments opened up possibilities for God to act in non-interventionist ways. The limitations of such an approach are considered, as well as promising new departures – and their limitations – in the neurosciences. From all of this I conclude that an immanent-transcendent approach is plausible for our day and age.

  14. Operational Experience with the Frontier System in CMS

    International Nuclear Information System (INIS)

    Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter; Du Ran; Wang Weizhen

    2012-01-01

    The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.

  15. Operational Experience with the Frontier System in CMS

    Science.gov (United States)

    Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter; Du, Ran; Wang, Weizhen

    2012-12-01

    The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.

  16. The Frontier Framework (and its eight Frontier Archetypes): A new conceptual approach to representing staff and patient well-being in health systems.

    Science.gov (United States)

    Baines, Darrin L

    2018-05-04

    This paper proposes a new conceptual framework for jointly analysing the production of staff and patient welfare in health systems. Research to date has identified a direct link between staff and patient well-being. However, until now, no one has produced a unified framework for analysing them concurrently. In response, this paper introduces the "Frontier Framework". The new conceptual framework is applicable to all health systems regardless of their structure or financing. To demonstrate the benefits of its use, an empirical example of the Frontier Framework is constructed using data from the UK's National Health Service. This paper also introduces eight "Frontier Archetypes", which represent common patterns of welfare generation observable in health organisations involved in programmes of change. These archetypes may be used in planning, monitoring or creating narratives about organisational journeys. Copyright © 2018 The Author. Published by Elsevier Ltd. All rights reserved.

  17. Seismic Signal Compression Using Nonparametric Bayesian Dictionary Learning via Clustering

    Directory of Open Access Journals (Sweden)

    Xin Tian

    2017-06-01

    Full Text Available We introduce a seismic signal compression method based on nonparametric Bayesian dictionary learning via clustering. The seismic data are compressed patch by patch, and the dictionary is learned online. Clustering is introduced into the dictionary learning: a set of dictionaries is generated, and each dictionary is used for sparse coding within one cluster. In this way, the signals in a cluster are well represented by their corresponding dictionary. A nonparametric Bayesian dictionary learning method is used to learn the dictionaries, which naturally infers an appropriate dictionary size for each cluster. A uniform quantizer and an adaptive arithmetic coding algorithm are adopted to code the sparse coefficients. With comparisons to other state-of-the-art approaches, the effectiveness of the proposed method is validated in the experiments.

  18. Bayesian Nonparametric Mixture Estimation for Time-Indexed Functional Data in R

    Directory of Open Access Journals (Sweden)

    Terrance D. Savitsky

    2016-08-01

    Full Text Available We present growfunctions for R that offers Bayesian nonparametric estimation models for analysis of dependent, noisy time series data indexed by a collection of domains. This data structure arises from combining periodically published government survey statistics, such as are reported in the Current Population Study (CPS. The CPS publishes monthly, by-state estimates of employment levels, where each state expresses a noisy time series. Published state-level estimates from the CPS are composed from household survey responses in a model-free manner and express high levels of volatility due to insufficient sample sizes. Existing software solutions borrow information over a modeled time-based dependence to extract a de-noised time series for each domain. These solutions, however, ignore the dependence among the domains that may be additionally leveraged to improve estimation efficiency. The growfunctions package offers two fully nonparametric mixture models that simultaneously estimate both a time and domain-indexed dependence structure for a collection of time series: (1) A Gaussian process (GP) construction, which is parameterized through the covariance matrix, estimates a latent function for each domain. The covariance parameters of the latent functions are indexed by domain under a Dirichlet process prior that permits estimation of the dependence among functions across the domains; (2) an intrinsic Gaussian Markov random field prior construction provides an alternative to the GP that expresses different computation and estimation properties. In addition to performing denoised estimation of latent functions from published domain estimates, growfunctions allows estimation of collections of functions for observation units (e.g., households), rather than aggregated domains, by accounting for an informative sampling design under which the probabilities for inclusion of observation units are related to the response variable. growfunctions includes plot

  19. A mean-variance frontier in discrete and continuous time

    OpenAIRE

    Bekker, Paul A.

    2004-01-01

    The paper presents a mean-variance frontier based on dynamic frictionless investment strategies in continuous time. The result applies to a finite number of risky assets whose price process is given by multivariate geometric Brownian motion with deterministically varying coefficients. The derivation is based on the solution for the frontier in discrete time. Using the same multiperiod framework as Li and Ng (2000), I provide an alternative derivation and an alternative formulation of the solu...
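The single-period mean-variance frontier that such continuous-time derivations build on has a standard closed form. As a hedged sketch, this is textbook Markowitz algebra (fully invested, short sales allowed), not Bekker's dynamic continuous-time result; the function name is ours:

```python
import numpy as np

def frontier_variance(mu, cov, target):
    """Minimum portfolio variance achieving a target expected return.

    Classic closed form: with a = 1'S^-1 1, b = 1'S^-1 mu,
    c = mu'S^-1 mu, the frontier variance at return level r is
    (a r^2 - 2 b r + c) / (a c - b^2).
    """
    ones = np.ones(len(mu))
    icov = np.linalg.inv(cov)
    a = ones @ icov @ ones
    b = ones @ icov @ mu
    c = mu @ icov @ mu
    return (a * target**2 - 2 * b * target + c) / (a * c - b**2)
```

The variance is quadratic in the target return, which is why the frontier traces a parabola in (variance, return) space and a hyperbola in (standard deviation, return) space.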

  20. Prior processes and their applications nonparametric Bayesian estimation

    CERN Document Server

    Phadia, Eswar G

    2016-01-01

    This book presents a systematic and comprehensive treatment of various prior processes that have been developed over the past four decades for dealing with Bayesian approach to solving selected nonparametric inference problems. This revised edition has been substantially expanded to reflect the current interest in this area. After an overview of different prior processes, it examines the now pre-eminent Dirichlet process and its variants including hierarchical processes, then addresses new processes such as dependent Dirichlet, local Dirichlet, time-varying and spatial processes, all of which exploit the countable mixture representation of the Dirichlet process. It subsequently discusses various neutral to right type processes, including gamma and extended gamma, beta and beta-Stacy processes, and then describes the Chinese Restaurant, Indian Buffet and infinite gamma-Poisson processes, which prove to be very useful in areas such as machine learning, information retrieval and featural modeling. Tailfree and P...

  1. Introducing "Frontiers in Zoology"

    Science.gov (United States)

    Heinze, Jürgen; Tautz, Diethard

    2004-09-29

    As a biological discipline, zoology has one of the longest histories. Today it occasionally appears as though, due to the rapid expansion of life sciences, zoology has been replaced by more or less independent sub-disciplines amongst which exchange is often sparse. However, the recent advance of molecular methodology into "classical" fields of biology, and the development of theories that can explain phenomena on different levels of organisation, has led to a re-integration of zoological disciplines promoting a broader than usual approach to zoological questions. Zoology has re-emerged as an integrative discipline encompassing the most diverse aspects of animal life, from the level of the gene to the level of the ecosystem.The new journal Frontiers in Zoology is the first Open Access journal focussing on zoology as a whole. It aims to represent and re-unite the various disciplines that look at animal life from different perspectives and at providing the basis for a comprehensive understanding of zoological phenomena on all levels of analysis. Frontiers in Zoology provides a unique opportunity to publish high quality research and reviews on zoological issues that will be internationally accessible to any reader at no cost.

  2. Zero- vs. one-dimensional, parametric vs. non-parametric, and confidence interval vs. hypothesis testing procedures in one-dimensional biomechanical trajectory analysis.

    Science.gov (United States)

    Pataky, Todd C; Vanrenterghem, Jos; Robinson, Mark A

    2015-05-01

    Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories. Copyright © 2015 Elsevier Ltd. All rights reserved.
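The 0D/1D distinction above can be made concrete with a sketch: a simultaneous (1D) bootstrap band built from the bootstrap distribution of the maximum absolute deviation of the resampled mean trajectory, as opposed to a pointwise (0D) interval at each time instant. This is illustrative only, not the authors' implementation:

```python
import numpy as np

def bootstrap_band_1d(trajs, n_boot=2000, alpha=0.05, rng=None):
    """Simultaneous (1D) bootstrap confidence band for a mean trajectory.

    trajs : 2-D array (subjects x time points).
    The half-width h is the (1 - alpha) quantile of the maximum
    absolute deviation of resampled means, so the band covers the
    WHOLE trajectory jointly rather than each time point separately.
    """
    rng = np.random.default_rng(rng)
    n = len(trajs)
    mean = trajs.mean(axis=0)
    max_dev = np.empty(n_boot)
    for i in range(n_boot):
        resampled = trajs[rng.integers(0, n, n)].mean(axis=0)
        max_dev[i] = np.max(np.abs(resampled - mean))
    h = np.quantile(max_dev, 1 - alpha)
    return mean - h, mean + h
```

Replacing the max-deviation quantile with a per-time-point quantile would give the biased 0D pointwise band the abstract warns against.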

  3. CATDAT - A program for parametric and nonparametric categorical data analysis user's manual, Version 1.0

    International Nuclear Information System (INIS)

    Peterson, James R.; Haas, Timothy C.; Lee, Danny C.

    2000-01-01

    Natural resource professionals are increasingly required to develop rigorous statistical models that relate environmental data to categorical responses. Recent advances in the statistical and computing sciences have led to the development of sophisticated methods for parametric and nonparametric analysis of data with categorical responses. The statistical software package CATDAT was designed to make some of these relatively new and powerful techniques available to scientists. The CATDAT statistical package includes four analytical techniques: generalized logit modeling; binary classification tree; extended K-nearest neighbor classification; and modular neural network
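Of the four techniques, K-nearest neighbor classification is the simplest to sketch. The following is only the basic majority-vote form, not CATDAT's *extended* KNN variant, whose details the abstract does not give:

```python
import numpy as np

def knn_classify(x, train_x, train_y, k=3):
    """Classify x by majority vote among its k nearest training
    points (Euclidean distance). Basic KNN, for illustration."""
    d = np.linalg.norm(train_x - x, axis=1)      # distance to each training point
    nearest = train_y[np.argsort(d)[:k]]         # labels of the k closest
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]               # most frequent label
```

Nonparametric classifiers of this kind make no distributional assumption on the environmental covariates, which is their appeal for the heterogeneous field data described above.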

  4. Regime shifts in mean-variance efficient frontiers: some international evidence

    OpenAIRE

    Massimo Guidolin; Federica Ria

    2010-01-01

    Regime switching models have been assuming a central role in financial applications because of their well-known ability to capture the presence of rich non-linear patterns in the joint distribution of asset returns. This paper examines how the presence of regimes in means, variances, and correlations of asset returns translates into explicit dynamics of the Markowitz mean-variance frontier. In particular, the paper shows both theoretically and through an application to international equity po...

  5. Thermoluminescence dosimetry: State-of-the-art and frontiers of future research

    International Nuclear Information System (INIS)

    Horowitz, Y.S.

    2014-01-01

    The state-of-the-art in the use of thermoluminescence for the measurement of energy imparted by ionizing radiation is discussed. Emphasis is on the advantages obtainable by the use of computerized glow curve analysis in (i) quality control, (ii) low dose environmental dosimetry, (iii) medical applications (especially precision) and microdosimetric applications, and (iv) mixed-field ionization-density dosimetry. Possible frontiers of future research are highlighted: (i) vector representation in glow curve analysis, (ii) combined OSL/TL measurements, (iii) detection of sub-ionization electrons, (iv) requirements for new TL materials and (v) theoretical subjects involving kinetic modeling invoking localized/delocalized recombination applied to dose response and track structure theory including creation of defects. - Highlights: • State of the art in thermoluminescence dosimetry. • Benefits of computerized glow curve deconvolution. • Frontiers of future research: new materials, mixed-field dosimetry. • Localized/delocalized kinetic theory: ionization density dependence. • Kinetic theory: creation of defects: track structure theory

  6. Efficiently approximating the Pareto frontier: Hydropower dam placement in the Amazon basin

    Science.gov (United States)

    Wu, Xiaojian; Gomes-Selman, Jonathan; Shi, Qinru; Xue, Yexiang; Garcia-Villacorta, Roosevelt; Anderson, Elizabeth; Sethi, Suresh; Steinschneider, Scott; Flecker, Alexander; Gomes, Carla P.

    2018-01-01

    Real-world problems are often not fully characterized by a single optimal solution, as they frequently involve multiple competing objectives; it is therefore important to identify the so-called Pareto frontier, which captures solution trade-offs. We propose a fully polynomial-time approximation scheme based on Dynamic Programming (DP) for computing a polynomially succinct curve that approximates the Pareto frontier to within an arbitrarily small ε > 0 on tree-structured networks. Given a set of objectives, our approximation scheme runs in time polynomial in the size of the instance and 1/ε. We also propose a Mixed Integer Programming (MIP) scheme to approximate the Pareto frontier. The DP and MIP Pareto frontier approaches have complementary strengths and are surprisingly effective. We provide empirical results showing that our methods outperform other approaches in efficiency and accuracy. Our work is motivated by a problem in computational sustainability concerning the proliferation of hydropower dams throughout the Amazon basin. Our goal is to support decision-makers in evaluating impacted ecosystem services on the full scale of the Amazon basin. Our work is general and can be applied to approximate the Pareto frontier of a variety of multiobjective problems on tree-structured networks.
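The underlying notion of a Pareto frontier can be sketched with plain non-dominated filtering of two maximization objectives. This only defines the frontier; the paper's contribution, approximating it efficiently with DP and MIP on tree-structured networks, goes well beyond this brute-force check:

```python
def pareto_frontier(points):
    """Return the non-dominated subset of (obj1, obj2) points,
    both objectives to be maximized. A point p is dominated if some
    q is >= p in both objectives and strictly > in at least one."""
    frontier = []
    for p in points:
        dominated = any(
            q[0] >= p[0] and q[1] >= p[1] and (q[0] > p[0] or q[1] > p[1])
            for q in points
        )
        if not dominated:
            frontier.append(p)
    return frontier
```

This exhaustive pairwise check is quadratic in the number of candidate solutions; the point of an approximation scheme like the paper's is to avoid enumerating the (possibly exponential) solution set at all.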

  7. Pesticide use and biodiversity conservation in the Amazonian agricultural frontier.

    Science.gov (United States)

    Schiesari, Luis; Waichman, Andrea; Brock, Theo; Adams, Cristina; Grillitsch, Britta

    2013-06-05

    Agricultural frontiers are dynamic environments characterized by the conversion of native habitats to agriculture. Because they are currently concentrated in diverse tropical habitats, agricultural frontiers are areas where the largest number of species is exposed to hazardous land management practices, including pesticide use. Focusing on the Amazonian frontier, we show that producers have varying access to resources, knowledge, control and reward mechanisms to improve land management practices. With poor education and no technical support, pesticide use by smallholders sharply deviated from agronomical recommendations, tending to overutilization of hazardous compounds. By contrast, with higher levels of technical expertise and resources, and aiming at more restrictive markets, large-scale producers adhered more closely to technical recommendations and even voluntarily replaced more hazardous compounds. However, the ecological footprint increased significantly over time because of increased dosage or because formulations that are less toxic to humans may be more toxic to other biodiversity. Frontier regions appear to be unique in terms of the conflicts between production and conservation, and the necessary pesticide risk management and risk reduction can only be achieved through responsibility-sharing by diverse stakeholders, including governmental and intergovernmental organizations, NGOs, financial institutions, pesticide and agricultural industries, producers, academia and consumers.

  8. Nonparametric estimation for censored mixture data with application to the Cooperative Huntington’s Observational Research Trial

    Science.gov (United States)

    Wang, Yuanjia; Garcia, Tanya P.; Ma, Yanyuan

    2012-01-01

    This work presents methods for estimating genotype-specific distributions from genetic epidemiology studies where the event times are subject to right censoring, the genotypes are not directly observed, and the data arise from a mixture of scientifically meaningful subpopulations. Examples of such studies include kin-cohort studies and quantitative trait locus (QTL) studies. Current methods for analyzing censored mixture data include two types of nonparametric maximum likelihood estimators (NPMLEs) which do not make parametric assumptions on the genotype-specific density functions. Although both NPMLEs are commonly used, we show that one is inefficient and the other inconsistent. To overcome these deficiencies, we propose three classes of consistent nonparametric estimators which do not assume parametric density models and are easy to implement. They are based on inverse probability weighting (IPW), augmented IPW (AIPW), and nonparametric imputation (IMP). The AIPW estimator achieves the efficiency bound without additional modeling assumptions. Extensive simulation experiments demonstrate satisfactory performance of these estimators even when the data are heavily censored. We apply these estimators to the Cooperative Huntington’s Observational Research Trial (COHORT), and provide age-specific estimates of the effect of mutation in the Huntington gene on mortality using a sample of family members. The close approximation of the estimated non-carrier survival rates to those of the U.S. population indicates small ascertainment bias in the COHORT family sample. Our analyses underscore an elevated risk of death in Huntington gene mutation carriers compared to non-carriers for a wide age range, and suggest that the mutation affects survival rates equally in both genders. The estimated survival rates are useful in genetic counseling for providing guidelines on interpreting the risk of death associated with a positive genetic test, and in facilitating future subjects at risk

  9. Non-parametric PSF estimation from celestial transit solar images using blind deconvolution

    Directory of Open Access Journals (Sweden)

    González Adriana

    2016-01-01

    Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. The measured image in a real optical instrument is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Additionally, the image acquisition process is also contaminated by other sources of noise (read-out, photon counting). The problem of estimating both the PSF and a denoised image is called blind deconvolution and is ill-posed. Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrary to most methods presented in the literature, our method does not assume a parametric model of the PSF and can thus be applied to any telescope. Methods: Our scheme uses a wavelet analysis prior model on the image and weak assumptions on the PSF. We use observations from a celestial transit, where the occulting body can be assumed to be a black disk. These constraints allow us to retain meaningful solutions for the filter and the image, eliminating trivial, translated, and interchanged solutions. Under an additive Gaussian noise assumption, they also enforce noise canceling and avoid reconstruction artifacts by promoting the whiteness of the residual between the blurred observations and the cleaned data. Results: Our method is applied to synthetic and experimental data. The PSF is estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA using the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a similar quality to parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.

  10. CMS conditions data access using FroNTier

    International Nuclear Information System (INIS)

    Blumenfeld, Barry; Johns Hopkins U.; Dykstra, David; Lueking, Lee; Wicklund, Eric; Fermilab

    2007-01-01

    The CMS experiment at the LHC has established an infrastructure using the FroNTier framework to deliver conditions (i.e. calibration, alignment, etc.) data to processing clients worldwide. FroNTier is a simple web service approach providing client HTTP access to a central database service. The system for CMS has been developed to work with POOL which provides object relational mapping between the C++ clients and various database technologies. Because of the read only nature of the data, Squid proxy caching servers are maintained near clients and these caches provide high performance data access. Several features have been developed to make the system meet the needs of CMS including careful attention to cache coherency with the central database, and low latency loading required for the operation of the online High Level Trigger. The ease of deployment, stability of operation, and high performance make the FroNTier approach well suited to the GRID environment being used for CMS offline, as well as for the online environment used by the CMS High Level Trigger (HLT). The use of standard software, such as Squid and various monitoring tools, makes the system reliable, highly configurable and easily maintained. We describe the architecture, software, deployment, performance, monitoring and overall operational experience for the system.

  11. CMS conditions data access using FroNTier

    International Nuclear Information System (INIS)

    Blumenfeld, B; Dykstra, D; Lueking, L; Wicklund, E

    2008-01-01

    The CMS experiment at the LHC has established an infrastructure using the FroNTier framework to deliver conditions (i.e. calibration, alignment, etc.) data to processing clients worldwide. FroNTier is a simple web service approach providing client HTTP access to a central database service. The system for CMS has been developed to work with POOL which provides object relational mapping between the C++ clients and various database technologies. Because of the read only nature of the data, Squid proxy caching servers are maintained near clients and these caches provide high performance data access. Several features have been developed to make the system meet the needs of CMS including careful attention to cache coherency with the central database, and low latency loading required for the operation of the online High Level Trigger. The ease of deployment, stability of operation, and high performance make the FroNTier approach well suited to the GRID environment being used for CMS offline, as well as for the online environment used by the CMS High Level Trigger. The use of standard software, such as Squid and various monitoring tools, makes the system reliable, highly configurable and easily maintained. We describe the architecture, software, deployment, performance, monitoring and overall operational experience for the system

  12. Radial measures of public services deficit for regional allocation of public funds

    OpenAIRE

    Puig, Jaume

    2005-01-01

    The goal of this paper is to present an optimal resource allocation model for the regional allocation of public service inputs. The proposed solution leads to maximise the relative public service availability in regions located below the best availability frontier, subject to exogenous budget restrictions and equality of access for equal need criteria (equity-based notion of regional needs). The construction of non-parametric deficit indicators is proposed for public service availability by a...

  13. The Kernel Mixture Network: A Nonparametric Method for Conditional Density Estimation of Continuous Random Variables

    OpenAIRE

    Ambrogioni, Luca; Güçlü, Umut; van Gerven, Marcel A. J.; Maris, Eric

    2017-01-01

    This paper introduces the kernel mixture network, a new method for nonparametric estimation of conditional probability densities using neural networks. We model arbitrarily complex conditional densities as linear combinations of a family of kernel functions centered at a subset of training points. The weights are determined by the outer layer of a deep neural network, trained by minimizing the negative log likelihood. This generalizes the popular quantized softmax approach, which can be seen ...
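    The construction this record describes — a conditional density formed as a convex combination of kernels centered at training targets, with input-dependent mixture weights — can be sketched in a few lines. This is a toy illustration, not the authors' implementation: here the weights come from a hypothetical fixed scoring function standing in for the outer layer of a trained deep network, and the kernel centers and bandwidth are made up.

```python
import numpy as np

def gaussian_kernel(y, centers, bandwidth):
    """Gaussian kernel densities centered at selected training target values."""
    return np.exp(-0.5 * ((y - centers) / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))

def kernel_mixture_density(y, x, centers, weight_fn, bandwidth=0.5):
    """Conditional density p(y | x) as a convex combination of fixed kernels.
    In the paper the weights come from a deep network's outer layer; here
    weight_fn is any callable returning weights that are >= 0 and sum to 1."""
    w = weight_fn(x)
    return float(np.sum(w * gaussian_kernel(y, centers, bandwidth)))

# Hypothetical stand-in for a trained network: a softmax over distances,
# so kernels whose centers lie near x receive most of the weight.
centers = np.array([-1.0, 0.0, 1.0])

def toy_weights(x):
    logits = -np.abs(centers - x)
    e = np.exp(logits - logits.max())
    return e / e.sum()

density = kernel_mixture_density(0.1, x=0.0, centers=centers, weight_fn=toy_weights)
```

Because the weights are a convex combination and each kernel integrates to one, the resulting conditional density also integrates to one for any x.
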

  14. Comparative Study of Parametric and Non-parametric Approaches in Fault Detection and Isolation

    DEFF Research Database (Denmark)

    Katebi, S.D.; Blanke, M.; Katebi, M.R.

    This report describes a comparative study between two approaches to fault detection and isolation in dynamic systems. The first approach uses a parametric model of the system. The main components of such techniques are residual and signature generation for processing and analyzing. The second approach is non-parametric in the sense that the signature analysis is only dependent on the frequency- or time-domain information extracted directly from the input-output signals. Based on these approaches, two different fault monitoring schemes are developed where the feature extraction and fault decision...

  15. Nonparametric Methods in Astronomy: Think, Regress, Observe—Pick Any Three

    Science.gov (United States)

    Steinhardt, Charles L.; Jermyn, Adam S.

    2018-02-01

    Telescopes are much more expensive than astronomers, so it is essential to minimize required sample sizes by using the most data-efficient statistical methods possible. However, the most commonly used model-independent techniques for finding the relationship between two variables in astronomy are flawed. In the worst case they can lead without warning to subtly yet catastrophically wrong results, and even in the best case they require more data than necessary. Unfortunately, there is no single best technique for nonparametric regression. Instead, we provide a guide for how astronomers can choose the best method for their specific problem and provide a Python library with both wrappers for the most useful existing algorithms and implementations of two new algorithms developed here.

  16. Nucleon measurements at the precision frontier

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, Carl E. [Physics Department, College of William and Mary, Williamsburg, VA 23187 (United States)

    2013-11-07

    We comment on nucleon measurements at the precision frontier. As examples of what can be learned, we concentrate on three topics, which are parity violating scattering experiments, the proton radius puzzle, and the symbiosis between nuclear and atomic physics.

  17. On the robust nonparametric regression estimation for a functional regressor

    OpenAIRE

    Azzedine , Nadjia; Laksaci , Ali; Ould-Saïd , Elias

    2009-01-01

    On the robust nonparametric regression estimation for a functional regressor. Corresponding author: Elias Ould-Saïd. Affiliation (Nadjia Azzedine): Département de Mathématiques, Univ. Djillali Liabès, BP 89, 22000 Sidi Bel Abbès, Algeria.

  18. Triangles in ROC space: History and theory of "nonparametric" measures of sensitivity and response bias.

    Science.gov (United States)

    Macmillan, N A; Creelman, C D

    1996-06-01

    Can accuracy and response bias in two-stimulus, two-response recognition or detection experiments be measured nonparametrically? Pollack and Norman (1964) answered this question affirmatively for sensitivity, Hodos (1970) for bias: Both proposed measures based on triangular areas in receiver-operating characteristic space. Their papers, and especially a paper by Grier (1971) that provided computing formulas for the measures, continue to be heavily cited in a wide range of content areas. In our sample of articles, most authors described triangle-based measures as making fewer assumptions than measures associated with detection theory. However, we show that statistics based on products or ratios of right triangle areas, including a recently proposed bias index and a not-yet-proposed but apparently plausible sensitivity index, are consistent with a decision process based on logistic distributions. Even the Pollack and Norman measure, which is based on non-right triangles, is approximately logistic for low values of sensitivity. Simple geometric models for sensitivity and bias are not nonparametric, even if their implications are not acknowledged in the defining publications.
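    The two triangle-based indices discussed in this record are short closed-form expressions of the hit rate H and false-alarm rate F. A direct transcription might look as follows (using the form of A' valid for H ≥ F, with both rates strictly between 0 and 1); as the abstract argues, calling these "nonparametric" is misleading, since they behave like statistics from a logistic decision model.

```python
def a_prime(h, f):
    """Pollack and Norman's area-based sensitivity index A'
    (this form is valid when the hit rate h >= the false-alarm rate f)."""
    return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))


def b_double_prime(h, f):
    """Grier's area-based response-bias index B''."""
    return (h * (1 - h) - f * (1 - f)) / (h * (1 - h) + f * (1 - f))


# Chance performance (h == f) gives A' = 0.5, and symmetric hit and
# false-alarm rates (h == 1 - f) give zero bias.
```
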

  19. PARTICLE BEAMS: Frontier course

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1987-01-15

    Driven by the quest for higher energies and optimal physics conditions, the behaviour of particle beams in accelerators and storage rings is the subject of increasing attention. Thus the second course organized jointly by the US and CERN Accelerator Schools looked towards the frontiers of particle beam knowledge. The programme held at South Padre Island, Texas, from 23-29 October attracted 125 participants including some 35 from Europe.

  20. PARTICLE BEAMS: Frontier course

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    Driven by the quest for higher energies and optimal physics conditions, the behaviour of particle beams in accelerators and storage rings is the subject of increasing attention. Thus the second course organized jointly by the US and CERN Accelerator Schools looked towards the frontiers of particle beam knowledge. The programme held at South Padre Island, Texas, from 23-29 October attracted 125 participants including some 35 from Europe

  1. Parametric and nonparametric Granger causality testing: Linkages between international stock markets

    Science.gov (United States)

    De Gooijer, Jan G.; Sivarajasingham, Selliah

    2008-04-01

    This study investigates long-term linear and nonlinear causal linkages among eleven stock markets, six industrialized markets and five emerging markets of South-East Asia. We cover the period 1987-2006, taking into account the onset of the Asian financial crisis of 1997. We first apply a test for the presence of general nonlinearity in vector time series. Substantial differences exist between the pre- and post-crisis period in terms of the total number of significant nonlinear relationships. We then examine both periods, using a new nonparametric test for Granger noncausality and the conventional parametric Granger noncausality test. One major finding is that the Asian stock markets have become more internationally integrated after the Asian financial crisis. An exception is the Sri Lankan market with almost no significant long-term linear and nonlinear causal linkages with other markets. To ensure that any causality is strictly nonlinear in nature, we also examine the nonlinear causal relationships of VAR filtered residuals and VAR filtered squared residuals for the post-crisis sample. We find quite a few remaining significant bi- and uni-directional causal nonlinear relationships in these series. Finally, after filtering the VAR-residuals with GARCH-BEKK models, we show that the nonparametric test statistics are substantially smaller in both magnitude and statistical significance than those before filtering. This indicates that nonlinear causality can, to a large extent, be explained by simple volatility effects.
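    The conventional parametric Granger noncausality test mentioned in this record reduces to comparing two nested lag regressions: y on its own lags versus y on its own lags plus lags of x. A minimal numpy sketch of that linear case is below (the nonparametric test used in the study is a different, rank-based construction; the simulated series here are made up for illustration).

```python
import numpy as np

def granger_f_stat(x, y, lags=2):
    """F-statistic for 'x Granger-causes y': compare the restricted OLS fit
    of y on its own lags against the unrestricted fit that also includes
    lags of x. A large value rejects noncausality."""
    n = len(y)
    rows = range(lags, n)
    Y = np.array([y[t] for t in rows])
    Zr = np.array([[1.0] + [y[t - k] for k in range(1, lags + 1)] for t in rows])
    Zu = np.array([[1.0] + [y[t - k] for k in range(1, lags + 1)]
                         + [x[t - k] for k in range(1, lags + 1)] for t in rows])
    rss = lambda Z: float(np.sum((Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]) ** 2))
    rss_r, rss_u = rss(Zr), rss(Zu)
    df2 = len(Y) - Zu.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / df2)

# Synthetic check: y is driven by lagged x, so the statistic should be
# large in the x -> y direction and small in the reverse direction.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()
f_xy = granger_f_stat(x, y)
f_yx = granger_f_stat(y, x)
```
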

  2. Estimation from PET data of transient changes in dopamine concentration induced by alcohol: support for a non-parametric signal estimation method

    Energy Technology Data Exchange (ETDEWEB)

    Constantinescu, C C; Yoder, K K; Normandin, M D; Morris, E D [Department of Radiology, Indiana University School of Medicine, Indianapolis, IN (United States); Kareken, D A [Department of Neurology, Indiana University School of Medicine, Indianapolis, IN (United States); Bouman, C A [Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN (United States); O'Connor, S J [Department of Psychiatry, Indiana University School of Medicine, Indianapolis, IN (United States)], E-mail: emorris@iupui.edu

    2008-03-07

    We previously developed a model-independent technique (non-parametric ntPET) for extracting the transient changes in neurotransmitter concentration from paired (rest and activation) PET studies with a receptor ligand. To provide support for our method, we introduced three hypotheses of validation based on work by Endres and Carson (1998 J. Cereb. Blood Flow Metab. 18 1196-210) and Yoder et al (2004 J. Nucl. Med. 45 903-11), and tested them on experimental data. All three hypotheses describe relationships between the estimated free (synaptic) dopamine curves (F^DA(t)) and the change in binding potential (ΔBP). The veracity of the F^DA(t) curves recovered by nonparametric ntPET is supported when the data adhere to the following hypothesized behaviors: (1) ΔBP should decline with increasing DA peak time, (2) ΔBP should increase as the strength of the temporal correlation between F^DA(t) and the free raclopride (F^RAC(t)) curve increases, (3) ΔBP should decline linearly with the effective weighted availability of the receptor sites. We analyzed regional brain data from 8 healthy subjects who received two [11C]raclopride scans: one at rest, and one during which unanticipated IV alcohol was administered to stimulate dopamine release. For several striatal regions, nonparametric ntPET was applied to recover F^DA(t), and binding potential values were determined. Kendall rank-correlation analysis confirmed that the F^DA(t) data followed the expected trends for all three validation hypotheses. Our findings lend credence to our model-independent estimates of F^DA(t). Application of nonparametric ntPET may yield important insights into how alterations in timing of dopaminergic neurotransmission are involved in the pathologies of addiction and other psychiatric disorders.

  3. Nonparametric estimation of benchmark doses in environmental risk assessment

    Science.gov (United States)

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    Summary: An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits’ small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
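    The isotonic-regression step at the heart of such an approach can be sketched with the classic pool-adjacent-violators algorithm, followed by linear interpolation to invert the fitted dose-response curve at the benchmark response. This is an illustrative sketch of the general idea, not the authors' estimator (which adds large-sample theory and bootstrap confidence limits); the dose grid and BMR value below are made up.

```python
import numpy as np

def pava(y, w):
    """Pool Adjacent Violators: weighted non-decreasing fit to y."""
    blocks = []  # each block holds [mean, weight, count]
    for yi, wi in zip(y, w):
        blocks.append([float(yi), float(wi), 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            y2, w2, n2 = blocks.pop()
            y1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(y1 * w1 + y2 * w2) / wt, wt, n1 + n2])
    return np.array([b[0] for b in blocks for _ in range(b[2])])

def benchmark_dose(doses, props, weights, bmr=0.1):
    """Smallest dose at which the isotonic fit exceeds the fitted control
    response by the benchmark response bmr (linear interpolation between
    dose groups)."""
    fit = pava(props, weights)
    target = fit[0] + bmr
    for i in range(1, len(doses)):
        if fit[i] >= target:
            if fit[i] == fit[i - 1]:
                return doses[i]
            frac = (target - fit[i - 1]) / (fit[i] - fit[i - 1])
            return doses[i - 1] + frac * (doses[i] - doses[i - 1])
    return None  # benchmark response not reached on the tested dose range

# Made-up quantal-response data: four dose groups of 50 animals each.
bmd = benchmark_dose([0.0, 1.0, 2.0, 4.0], [0.05, 0.10, 0.30, 0.60], [50] * 4)
```
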

  4. Power of non-parametric linkage analysis in mapping genes contributing to human longevity in long-lived sib-pairs

    DEFF Research Database (Denmark)

    Tan, Qihua; Zhao, J H; Iachine, I

    2004-01-01

    This report investigates the power issue in applying the non-parametric linkage analysis of affected sib-pairs (ASP) [Kruglyak and Lander, 1995: Am J Hum Genet 57:439-454] to localize genes that contribute to human longevity using long-lived sib-pairs. Data were simulated by introducing a recently developed statistical model for measuring marker-longevity associations [Yashin et al., 1999: Am J Hum Genet 65:1178-1193], enabling direct power comparison between linkage and association approaches. The non-parametric linkage (NPL) scores estimated in the region harboring the causal allele are evaluated ... in the case of a dominant effect. Although the power issue may depend heavily on the true genetic nature in maintaining survival, our study suggests that results from small-scale sib-pair investigations should be interpreted with caution, given the complexity of human longevity.

  5. Mental Healthcare Delivery in London-Middlesex Ontario - The Next Frontier.

    Science.gov (United States)

    Velji, Karima; Links, Paul

    2016-01-01

    The next frontier for mental healthcare delivery will be focused on three facets of innovation, namely structure, process and outcome. The structure innovation will seek to develop new models of care delivery between the two hospitals and with the community. The process innovation will focus on embedding strategies to adopt a recovery and rehabilitation approach to care delivery. Lastly, the outcome innovation will use system-wide quality improvement methods to drive breakthrough performance in mental healthcare.

  6. Statement of Problem of Pareto Frontier Management and Its Solution in the Analysis and Synthesis of Optimal Systems

    Directory of Open Access Journals (Sweden)

    I. K. Romanova

    2015-01-01

    This article concerns multi-criteria optimization (MCO), which assumes that the operational quality criteria of a system are independent, and specifies a way to improve the values of these criteria. Mutual contradiction of some criteria is a major problem in MCO, and one of the most important areas of research is obtaining so-called Pareto-optimal options. The subject of research is the Pareto front, also called the Pareto frontier. The article discusses classifications of the front by its geometric representation for the two-criterion case, and presents a mathematical description of the front's characteristics using gradients and their projections. A review of current domestic and foreign literature reveals that existing work on constructing the Pareto frontier studies conditions of uncertainty, stochastic formulations, and unconstrained settings, with topologies considered in both the two- and the three-dimensional case; the targets of modern applications are multi-agent systems and groups of players in differential games. However, none of the considered works addresses active management of the front. The objective of this article is to discuss the Pareto frontier problem in a new formulation, namely one in which the system developers and/or decision makers (DM) actively manage the Pareto frontier. Such a formulation differs from the traditionally accepted approach based on the analysis of already existing solutions. The article discusses three ways to describe the quality of a control system. The first uses direct quality criteria for a closed-loop system modeled as an oscillatory element of general form. The second studies a specific two-loop aircraft control system using the angular-velocity and normal-acceleration loops. The third uses integrated quality criteria. In all three cases, the selected criteria are

  7. Non-parametric production analysis of pesticides use in the Netherlands

    NARCIS (Netherlands)

    Oude Lansink, A.G.J.M.; Silva, E.

    2004-01-01

    Many previous empirical studies on the productivity of pesticides suggest that pesticides are under-utilized in agriculture despite the generally held belief that these inputs are substantially over-utilized. This paper uses data envelopment analysis (DEA) to calculate non-parametric measures of the

  8. The frontier beneath our feet

    Science.gov (United States)

    Grant, Gordon E.; Dietrich, William E.

    2017-04-01

    Following the simple question as to where water goes when it rains leads to one of the most exciting frontiers in earth science: the critical zone—Earth's dynamic skin. The critical zone extends from the top of the vegetation canopy through the soil and down to fresh bedrock and the bottom of the groundwater. Only recently recognized as a distinct zone, it is challenging to study because it is hard to observe directly, and varies widely across biogeoclimatic regions. Yet new ideas, instruments, and observations are revealing surprising and sometimes paradoxical insights, underscoring the value of field campaigns and long-term observatories. These insights bear directly on some of the most pressing societal problems today: maintaining healthy forests, sustaining streamflow during droughts, and restoring productive terrestrial and aquatic ecosystems. The critical zone is critical because it supports all terrestrial life; it is the nexus where water and carbon are cycled, vegetation (hence food) grows, soil develops, landscapes evolve, and we live. No other frontier is so close to home.

  9. Renormalization group critical frontier of the three-dimensional bond-dilute Ising ferromagnet

    International Nuclear Information System (INIS)

    Chao, N.-C.; Schwaccheim, G.; Tsallis, C.

    1981-01-01

    The critical frontier (as well as the thermal-type critical exponents) associated with the quenched bond-dilute spin-1/2 Ising ferromagnet on the simple cubic lattice is approximately calculated within a real-space renormalization-group framework in two different versions. Both lead to qualitatively satisfactory critical frontiers, although one of them provides an unphysical fixed point (which seems to be related to the three-dimensionality of the system) besides the expected pure ones; its effects tend to disappear for increasingly large clusters. Through an extrapolation procedure the (unknown) critical frontier is approximately located. (Author)

  10. Bayesian nonparametric inference on quantile residual life function: Application to breast cancer data.

    Science.gov (United States)

    Park, Taeyoung; Jeong, Jong-Hyeon; Lee, Jae Won

    2012-08-15

    There is often an interest in estimating a residual life function as a summary measure of survival data. For ease in presentation of the potential therapeutic effect of a new drug, investigators may summarize survival data in terms of the remaining life years of patients. Under heavy right censoring, however, some reasonably high quantiles (e.g., median) of a residual lifetime distribution cannot be always estimated via a popular nonparametric approach on the basis of the Kaplan-Meier estimator. To overcome the difficulties in dealing with heavily censored survival data, this paper develops a Bayesian nonparametric approach that takes advantage of a fully model-based but highly flexible probabilistic framework. We use a Dirichlet process mixture of Weibull distributions to avoid strong parametric assumptions on the unknown failure time distribution, making it possible to estimate any quantile residual life function under heavy censoring. Posterior computation through Markov chain Monte Carlo is straightforward and efficient because of conjugacy properties and partial collapse. We illustrate the proposed methods by using both simulated data and heavily censored survival data from a recent breast cancer clinical trial conducted by the National Surgical Adjuvant Breast and Bowel Project. Copyright © 2012 John Wiley & Sons, Ltd.
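    The Kaplan-Meier difficulty motivating this record is easy to demonstrate: the median residual life at time t0 is the smallest u with S(t0 + u) ≤ 0.5·S(t0), and under heavy censoring the Kaplan-Meier curve may never drop that far, so the plug-in estimate simply does not exist. The following is a minimal sketch of that failure mode, not the paper's Dirichlet process mixture model; the toy data are made up.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimate: returns distinct event times and S(t) there.
    events is 1 for an observed failure, 0 for right censoring."""
    order = np.argsort(times)
    t_sorted = np.asarray(times)[order]
    e_sorted = np.asarray(events)[order]
    uniq = np.unique(t_sorted[e_sorted == 1])
    s, surv = 1.0, []
    for t in uniq:
        at_risk = np.sum(t_sorted >= t)
        d = np.sum((t_sorted == t) & (e_sorted == 1))
        s *= 1 - d / at_risk
        surv.append(s)
    return uniq, np.array(surv)

def median_residual_life(times, events, t0):
    """inf{u : S(t0 + u) <= 0.5 * S(t0)}; returns None when the KM curve
    never drops that far -- the heavy-censoring failure described above."""
    ts, S = kaplan_meier(times, events)
    def S_at(x):
        idx = np.searchsorted(ts, x, side="right") - 1
        return 1.0 if idx < 0 else S[idx]
    target = 0.5 * S_at(t0)
    for t, s in zip(ts, S):
        if t > t0 and s <= target:
            return t - t0
    return None
```

With fully observed data the estimate exists; with one early failure and the rest censored, S never reaches half its starting level and the function returns None.
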

  11. Frontiers of quantum Monte Carlo workshop: preface

    International Nuclear Information System (INIS)

    Gubernatis, J.E.

    1985-01-01

    The introductory remarks, table of contents, and list of attendees are presented from the proceedings of the conference, Frontiers of Quantum Monte Carlo, which appeared in the Journal of Statistical Physics

  12. Assessing systematic risk in the S&P500 index between 2000 and 2011: A Bayesian nonparametric approach

    OpenAIRE

    Rodríguez, Abel; Wang, Ziwei; Kottas, Athanasios

    2017-01-01

    We develop a Bayesian nonparametric model to assess the effect of systematic risks on multiple financial markets, and apply it to understand the behavior of the S&P500 sector indexes between January 1, 2000 and December 31, 2011. More than prediction, our main goal is to understand the evolution of systematic and idiosyncratic risks in the U.S. economy over this particular time period, leading to novel sector-specific risk indexes. To accomplish this goal, we model the appearance of extreme l...

  13. Compendium of Instrumentation Whitepapers on Frontier Physics Needs for Snowmass 2013

    Energy Technology Data Exchange (ETDEWEB)

    Lipton, R. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)

    2013-01-01

    Contents of collection of whitepapers include: Operation of Collider Experiments at High Luminosity; Level 1 Track Triggers at HL-LHC; Tracking and Vertex Detectors for a Muon Collider; Triggers for hadron colliders at the energy frontier; ATLAS Upgrade Instrumentation; Instrumentation for the Energy Frontier; Particle Flow Calorimetry for CMS; Noble Liquid Calorimeters; Hadronic dual-readout calorimetry for high energy colliders; Another Detector for the International Linear Collider; e+e- Linear Colliders Detector Requirements and Limitations; Electromagnetic Calorimetry in Project X Experiments The Project X Physics Study; Intensity Frontier Instrumentation; Project X Physics Study Calorimetry Report; Project X Physics Study Tracking Report; The LHCb Upgrade; Neutrino Detectors Working Group Summary; Advanced Water Cherenkov R&D for WATCHMAN; Liquid Argon Time Projection Chamber (LArTPC); Liquid Scintillator Instrumentation for Physics Frontiers; A readout architecture for 100,000 pixel Microwave Kinetic Inductance Detector array; Instrumentation for New Measurements of the Cosmic Microwave Background polarization; Future Atmospheric and Water Cherenkov γ-ray Detectors; Dark Energy; Can Columnar Recombination Provide Directional Sensitivity in WIMP Search?; Instrumentation Needs for Detection of Ultra-high Energy Neutrinos; Low Background Materials for Direct Detection of Dark Matter; Physics Motivation for WIMP Dark Matter Directional Detection; Solid Xenon R&D at Fermilab; Ultra High Energy Neutrinos; Instrumentation Frontier: Direct Detection of WIMPs; nEXO detector R&D; Large Arrays of Air Cherenkov Detectors; and Applications of Laser Interferometry in Fundamental Physics Experiments.

  14. Frontiers in Gold Chemistry

    OpenAIRE

    Ahmed A. Mohamed

    2015-01-01

    Basic chemistry of gold tells us that it can bond to sulfur, phosphorous, nitrogen, and oxygen donor ligands. The Frontiers in Gold Chemistry Special Issue covers gold complexes bonded to the different donors and their fascinating applications. This issue covers both basic chemistry studies of gold complexes and their contemporary applications in medicine, materials chemistry, and optical sensors. There is a strong belief that aurophilicity plays a major role in the unending applications of g...

  15. The 2016 Frontiers in Medicinal Chemistry Conference in Bonn.

    Science.gov (United States)

    Müller, Christa E; Thimm, Dominik; Baringhaus, Karl-Heinz

    2017-01-05

    Pushing the frontiers of medicinal chemistry: Christa Müller, Dominik Thimm, and Karl-Heinz Baringhaus look back at the events of the 2016 Frontiers in Medicinal Chemistry (FiMC) Conference held in Bonn, Germany. The report highlights the themes & talks in the annual conference hosted by the Joint Division of Medicinal Chemistry of the German Pharmaceutical Society (DPhG) and German Chemical Society (GDCh). It is also an invitation to the 2017 conference in Bern, Switzerland this February 12-15. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Resources for Teaching about the Frontier.

    Science.gov (United States)

    Seiter, David

    1988-01-01

    Highlights materials relating to the U.S. frontier during the 19th century by citing journal articles and documents related to this topic. Indicates the means for obtaining these works which deal with rural schooling, historical demography, Native Americans, music, revivalism, and Black cowboys. (KO)

  17. A Nonparametric Test for Seasonal Unit Roots

    OpenAIRE

    Kunst, Robert M.

    2009-01-01

    Abstract: We consider a nonparametric test for the null of seasonal unit roots in quarterly time series that builds on the RUR (records unit root) test by Aparicio, Escribano, and Sipols. We find that the test concept is more promising than a formalization of visual aids such as plots by quarter. In order to cope with the sensitivity of the original RUR test to autocorrelation under its null of a unit root, we suggest an augmentation step by autoregression. We present some evidence on the siz...
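    The intuition behind a records-based unit root statistic is that a random walk keeps setting new records (roughly at a square-root rate in the sample size) while a stationary series sets them only logarithmically often. The sketch below illustrates only that discriminating quantity; the actual RUR test adds forward/backward record ranges, tabulated critical values, and the autoregressive augmentation discussed in this record. The simulated series are made up.

```python
import numpy as np

def record_count(x):
    """Number of upper records (new running maxima) in a sequence."""
    count, running_max = 0, -np.inf
    for v in x:
        if v > running_max:
            count, running_max = count + 1, v
    return count

# A random walk (unit root) sets far more records than white noise of the
# same length: roughly sqrt(n) versus log(n) on average.
rng = np.random.default_rng(42)
noise = rng.normal(size=2000)            # stationary series
walk = np.cumsum(rng.normal(size=2000))  # unit-root series
```
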

  18. Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data.

    Science.gov (United States)

    Tan, Qihua; Thomassen, Mads; Burton, Mark; Mose, Kristian Fredløv; Andersen, Klaus Ejner; Hjelmborg, Jacob; Kruse, Torben

    2017-06-06

    Modeling complex time-course patterns is a challenging issue in microarray studies due to complex gene expression patterns in response to the time-course experiment. We introduce the generalized correlation coefficient and propose a combinatory approach for detecting, testing and clustering the heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature of the generalized correlation analysis could be a useful and efficient tool for analyzing microarray time-course data and for exploring the complex relationships in the omics data for studying their association with disease and health.

  19. 1st Conference of the International Society for Nonparametric Statistics

    CERN Document Server

    Lahiri, S; Politis, Dimitris

    2014-01-01

    This volume is composed of peer-reviewed papers that have developed from the First Conference of the International Society for NonParametric Statistics (ISNPS). This inaugural conference took place in Chalkidiki, Greece, June 15-19, 2012. It was organized with the co-sponsorship of the IMS, the ISI, and other organizations. M.G. Akritas, S.N. Lahiri, and D.N. Politis are the first executive committee members of ISNPS, and the editors of this volume. ISNPS has a distinguished Advisory Committee that includes Professors R.Beran, P.Bickel, R. Carroll, D. Cook, P. Hall, R. Johnson, B. Lindsay, E. Parzen, P. Robinson, M. Rosenblatt, G. Roussas, T. SubbaRao, and G. Wahba. The Charting Committee of ISNPS consists of more than 50 prominent researchers from all over the world.   The chapters in this volume bring forth recent advances and trends in several areas of nonparametric statistics. In this way, the volume facilitates the exchange of research ideas, promotes collaboration among researchers from all over the wo...

  20. On Parametric (and Non-Parametric) Variation

    Directory of Open Access Journals (Sweden)

    Neil Smith

    2009-11-01

    Full Text Available This article raises the issue of the correct characterization of ‘Parametric Variation’ in syntax and phonology. After specifying their theoretical commitments, the authors outline the relevant parts of the Principles–and–Parameters framework, and draw a three-way distinction among Universal Principles, Parameters, and Accidents. The core of the contribution then consists of an attempt to provide identity criteria for parametric, as opposed to non-parametric, variation. Parametric choices must be antecedently known, and it is suggested that they must also satisfy seven individually necessary and jointly sufficient criteria. These are that they be cognitively represented, systematic, dependent on the input, deterministic, discrete, mutually exclusive, and irreversible.

  1. Risk appetite : reaching for the frontier

    NARCIS (Netherlands)

    dr. A.F. de Wild

    2015-01-01

    This poster summarizes the results of a game-based study conducted among 56 Dutch risk management professionals. It examined whether they were able to manage risks optimally. In the game, 10 optimal strategies were available that together formed an efficient frontier.

  2. An Efficient, Noniterative Method of Identifying the Cost-Effectiveness Frontier.

    Science.gov (United States)

    Suen, Sze-chuan; Goldhaber-Fiebert, Jeremy D

    2016-01-01

    Cost-effectiveness analysis aims to identify treatments and policies that maximize benefits subject to resource constraints. However, the conventional process of identifying the efficient frontier (i.e., the set of potentially cost-effective options) can be algorithmically inefficient, especially when considering a policy problem with many alternative options or when performing an extensive suite of sensitivity analyses for which the efficient frontier must be found for each. Here, we describe an alternative one-pass algorithm that is conceptually simple, easier to implement, and potentially faster for situations that challenge the conventional approach. Our algorithm accomplishes this by exploiting the relationship between the net monetary benefit and the cost-effectiveness plane. To facilitate further evaluation and use of this approach, we also provide scripts in R and Matlab that implement our method and can be used to identify efficient frontiers for any decision problem. © The Author(s) 2015.
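    The conventional process that the one-pass algorithm improves upon can be sketched directly: sort options by cost, drop dominated options, and drop extendedly dominated options until incremental cost-effectiveness ratios (ICERs) increase along the frontier. A minimal Python sketch (the paper's own scripts are in R and Matlab; the option names and values below are hypothetical):

```python
def efficient_frontier(options):
    """Return the cost-effectiveness frontier from (name, cost, effect)
    triples: undominated options whose ICERs increase with cost."""
    # Sort by cost; for equal cost, put the more effective option first.
    opts = sorted(options, key=lambda o: (o[1], -o[2]))
    frontier = []
    for name, cost, effect in opts:
        # Strict dominance: no effect gain over the last kept option.
        if frontier and effect <= frontier[-1][2]:
            continue
        frontier.append((name, cost, effect))
        # Extended dominance: ICERs along the frontier must increase;
        # otherwise the middle of the last three options is dropped.
        while len(frontier) >= 3:
            (_, c0, e0), (_, c1, e1), (_, c2, e2) = frontier[-3:]
            if (c1 - c0) / (e1 - e0) >= (c2 - c1) / (e2 - e1):
                frontier.pop(-2)
            else:
                break
    return frontier

# Hypothetical strategies: (name, cost, effect).
strategies = [("A", 0, 0), ("B", 10, 2), ("C", 25, 3), ("D", 20, 2.5), ("E", 30, 5)]
frontier = efficient_frontier(strategies)  # C and D are (extendedly) dominated
```

    Sweeping willingness-to-pay values and keeping, for each, the option with the highest net monetary benefit (the relationship the paper exploits) identifies the same set in a single pass.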

  3. The two frontiers of physics

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1986-05-15

    In March at Garching, near Munich, physicists from different walks of life together took another hard look at the two major frontiers of physics – the very large and the infinitesimally small. Organized jointly by CERN and the European Southern Observatory (ESO), the Garching 'Symposium on Cosmology, Astronomy and Fundamental Physics' was the second in a series launched at CERN in November 1983.

  4. Glaucoma Monitoring in a Clinical Setting Glaucoma Progression Analysis vs Nonparametric Progression Analysis in the Groningen Longitudinal Glaucoma Study

    NARCIS (Netherlands)

    Wesselink, Christiaan; Heeg, Govert P.; Jansonius, Nomdo M.

    Objective: To compare prospectively 2 perimetric progression detection algorithms for glaucoma, the Early Manifest Glaucoma Trial algorithm (glaucoma progression analysis [GPA]) and a nonparametric algorithm applied to the mean deviation (MD) (nonparametric progression analysis [NPA]). Methods:

  5. Kernel bandwidth estimation for non-parametric density estimation: a comparative study

    CSIR Research Space (South Africa)

    Van der Walt, CM

    2013-12-01

    Full Text Available We investigate the performance of conventional bandwidth estimators for non-parametric kernel density estimation on a number of representative pattern-recognition tasks, to gain a better understanding of the behaviour of these estimators in high...
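    A representative member of the family of conventional bandwidth estimators compared in such studies is Silverman's rule of thumb for a Gaussian kernel; a minimal sketch (the index-based quartiles are a simplification, and the function name is ours):

```python
import math

def silverman_bandwidth(sample):
    """Silverman's rule-of-thumb bandwidth for Gaussian kernel density
    estimation: h = 0.9 * min(std, IQR / 1.34) * n**(-1/5)."""
    n = len(sample)
    xs = sorted(sample)
    mean = sum(xs) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    iqr = xs[int(0.75 * (n - 1))] - xs[int(0.25 * (n - 1))]  # crude quartiles
    return 0.9 * min(std, iqr / 1.34) * n ** (-0.2)
```

    Rules of thumb like this are optimal only under roughly Gaussian data, which is exactly why such comparative studies evaluate them against cross-validation and other estimators on real pattern-recognition tasks.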

  6. A general approach to posterior contraction in nonparametric inverse problems

    NARCIS (Netherlands)

    Knapik, Bartek; Salomond, Jean Bernard

    In this paper, we propose a general method to derive an upper bound for the contraction rate of the posterior distribution for nonparametric inverse problems. We present a general theorem that allows us to derive contraction rates for the parameter of interest from contraction rates of the related

  7. The low-energy frontier of particle physics

    International Nuclear Information System (INIS)

    Jaeckel, Joerg

    2010-02-01

    Most embeddings of the Standard Model into a more unified theory, in particular the ones based on supergravity or superstrings, predict the existence of a hidden sector of particles which have only very weak interactions with the visible sector Standard Model particles. Some of these exotic particle candidates (such as e.g. "axions", "axion-like particles" and "hidden U(1) gauge bosons") may be very light, with masses in the sub-eV range, and have very weak interactions with photons. Correspondingly, these very weakly interacting sub-eV particles (WISPs) may lead to observable effects in experiments (as well as in astrophysical and cosmological observations) searching for light shining through a wall, for changes in laser polarisation, for non-linear processes in large electromagnetic fields and for deviations from Coulomb's law. We present the physics case and a status report of this emerging low-energy frontier of fundamental physics. (orig.)

  8. The "Frontier" And Frontier Guards in Banat - a Socio- Historical Approach

    Directory of Open Access Journals (Sweden)

    SEBASTIAN ŞTEFĂNUCĂ

    2010-06-01

    Full Text Available The researchers preoccupied with the regional identity potential of Ţara Almăjului and the Eastern area of the Banat mountain region, Romania, cannot avoid the particular historical evolution, in the last three centuries, of these regions. This is true precisely when the starting point is represented by the Wealth Community (Comunitatea de Avere - a form of collective ownership of a large part of the forests in the abovementioned regions and of certain buildings - a direct remnant of the Austrian frontier past, which was abolished during the Communist period. At that time (the second half of the 18th century, this form of collective ownership generated deep and irreversible social, administrative, architectural, legal and economic transformations which are visible to this day. Apart from an elite preoccupied with historical studies, in relation to which we notice the open affirmation of identity valences which we look for, and apart from another elite which is interested in reinstating and managing the Wealth Community, the locals seem detached both from the past and the frontier, as well as from the attempts to reinstate the Wealth Community. The only truly relevant form of ownership is individual ownership. We consider that this attitude is a variant of what Lucian Blaga called a "boycott of history". Therefore, the identity looked for seems to be constituted not so much by opposing, than by ignoring the past and the disinterest towards collective ownership, to which we can add the suspicion with respect to the intentions of people holding positions within the local administration and state authorities generally.

  9. Measuring energy efficiency under heterogeneous technologies using a latent class stochastic frontier approach: An application to Chinese energy economy

    International Nuclear Information System (INIS)

    Lin, Boqiang; Du, Kerui

    2014-01-01

    The importance of technology heterogeneity in estimating economy-wide energy efficiency has been emphasized by recent literature. Some studies use the metafrontier analysis approach to estimate energy efficiency. However, for such studies, some reliable priori information is needed to divide the sample observations properly, which causes a difficulty in unbiased estimation of energy efficiency. Moreover, separately estimating group-specific frontiers might lose some common information across different groups. In order to overcome these weaknesses, this paper introduces a latent class stochastic frontier approach to measure energy efficiency under heterogeneous technologies. An application of the proposed model to Chinese energy economy is presented. Results show that the overall energy efficiency of China's provinces is not high, with an average score of 0.632 during the period from 1997 to 2010. - Highlights: • We introduce a latent class stochastic frontier approach to measure energy efficiency. • Ignoring technological heterogeneity would cause biased estimates of energy efficiency. • An application of the proposed model to Chinese energy economy is presented. • There is still a long way for China to develop an energy efficient regime

  10. Mean-variance portfolio selection and efficient frontier for defined contribution pension schemes

    DEFF Research Database (Denmark)

    Højgaard, Bjarne; Vigna, Elena

    We solve a mean-variance portfolio selection problem in the accumulation phase of a defined contribution pension scheme. The efficient frontier, which is found for the 2 asset case as well as the n + 1 asset case, gives the member the possibility to decide his own risk/reward profile. The mean...... as a mean-variance optimization problem. It is shown that the corresponding mean and variance of the final fund belong to the efficient frontier and also the opposite, that each point on the efficient frontier corresponds to a target-based optimization problem. Furthermore, numerical results indicate...... that the largely adopted lifestyle strategy seems to be very far from being efficient in the mean-variance setting....
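    For the two-asset case mentioned above, the attainable mean/risk combinations can be traced numerically; a minimal sketch assuming two risky assets with given means, volatilities and correlation (all parameter values hypothetical, not taken from the paper):

```python
import numpy as np

def two_asset_frontier(mu1, mu2, s1, s2, rho, n=101):
    """Portfolio mean and standard deviation for weights w in [0, 1]
    on asset 1, tracing the two-asset mean-variance opportunity set."""
    w = np.linspace(0.0, 1.0, n)
    mean = w * mu1 + (1 - w) * mu2
    var = (w * s1) ** 2 + ((1 - w) * s2) ** 2 + 2 * w * (1 - w) * rho * s1 * s2
    return mean, np.sqrt(var)

# Hypothetical parameters: asset 1 (10% mean, 20% vol), asset 2 (5%, 10%), uncorrelated.
mean, sd = two_asset_frontier(0.10, 0.05, 0.20, 0.10, 0.0)
```

    The upper branch of the resulting (sd, mean) curve is the efficient frontier; each point on it corresponds to one risk/reward profile the member can target.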

  11. Old Borders and New Bordering Capabilities: Cities as Frontier Zones

    Directory of Open Access Journals (Sweden)

    Saskia Sassen

    2015-12-01

    Full Text Available The global city is a new frontier zone. Deregulation, privatization, and new fiscal and monetary policies create the formal instruments to construct their equivalent of the old military “fort”. The city is also a strategic frontier zone for those who lack power, and allows the making of informal politics. At the same time the border is a mix of regimes, marked by protections and opportunities for corporations and high-level professionals, and implies confinement, capture and detention for migrants. The essay discusses the transformation of the city in a frontier zone and analyses the separation between the capabilities entailed by territoriality and the geographic territory tout court. The analysis focuses on the effects of neoliberal policies that, far from making this a borderless world, have actually multiplied the bordered spaces that allow firms and markets to move across conventional borders. Cities are therefore one of the key sites where new neoliberal norms are made and where new identities emerge.

  12. Frontiers in Superconducting Materials

    CERN Document Server

    Narlikar, Anant V

    2005-01-01

    Frontiers in Superconducting Materials gives a state-of-the-art report of the most important topics of the current research in superconductive materials and related phenomena. It comprises 30 chapters written by renowned international experts in the field. It is of central interest to researchers and specialists in Physics and Materials Science, both in academic and industrial research, as well as advanced students. It also addresses electronic and electrical engineers. Even non-specialists interested in superconductivity might find some useful answers.

  13. Prairie, gold and frontier

    International Nuclear Information System (INIS)

    Chirico, S.

    2005-01-01

    This work deals with the mining history of the region of Cunapiru, Uruguay. The process developed within a rural world, and in this respect it is not very different from other prairies of similar geographic zones. Nevertheless, the fact of being a frontier territory makes it singular, different, and peculiar enough to transform this prairie deeply. Memories of times of prosperity nurture a centenarian illusion of magnified facts of inexact dating or significance. However, all those memories were essential to collective identity.

  14. Application of nonparametric statistics to material strength/reliability assessment

    International Nuclear Information System (INIS)

    Arai, Taketoshi

    1992-01-01

    An advanced material technology requires data base on a wide variety of material behavior which need to be established experimentally. It may often happen that experiments are practically limited in terms of reproducibility or a range of test parameters. Statistical methods can be applied to understanding uncertainties in such a quantitative manner as required from the reliability point of view. Statistical assessment involves determinations of a most probable value and the maximum and/or minimum value as one-sided or two-sided confidence limit. A scatter of test data can be approximated by a theoretical distribution only if the goodness of fit satisfies a test criterion. Alternatively, nonparametric statistics (NPS) or distribution-free statistics can be applied. Mathematical procedures by NPS are well established for dealing with most reliability problems. They handle only order statistics of a sample. Mathematical formulas and some applications to engineering assessments are described. They include confidence limits of median, population coverage of sample, required minimum number of a sample, and confidence limits of fracture probability. These applications demonstrate that a nonparametric statistical estimation is useful in logical decision making in the case a large uncertainty exists. (author)
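    One of the listed applications, confidence limits of the median, has a compact distribution-free form: the interval between matching order statistics covers the median with a probability fixed by the Binomial(n, 1/2) distribution, regardless of the underlying distribution. A minimal sketch (function name ours):

```python
import math

def median_ci_order_stats(sample, conf=0.95):
    """Narrowest order-statistic interval (x_(k), x_(n-k+1)) whose
    distribution-free coverage of the median is at least `conf`.
    Coverage = 1 - 2 * P(B <= k - 1) with B ~ Binomial(n, 1/2)."""
    xs = sorted(sample)
    n = len(xs)
    best = (xs[0], xs[-1])
    for k in range(1, n // 2 + 1):
        tail = sum(math.comb(n, i) for i in range(k)) / 2 ** n
        if 1 - 2 * tail >= conf:
            best = (xs[k - 1], xs[n - k])  # 1-based (k, n-k+1)
        else:
            break
    return best
```

    For n = 20 this reproduces the standard tabulated result that the 6th and 15th order statistics bound the median with at least 95% confidence.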

  15. CADDIS Volume 4. Data Analysis: PECBO Appendix - R Scripts for Non-Parametric Regressions

    Science.gov (United States)

    Script for computing nonparametric regression analysis. Overview of using scripts to infer environmental conditions from biological observations, statistically estimating species-environment relationships, statistical scripts.
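    The CADDIS scripts themselves are written in R; the core of a nonparametric regression of the kind they compute can be sketched in Python as a Nadaraya-Watson kernel smoother (names and bandwidth below are illustrative, not from the scripts):

```python
import math

def nadaraya_watson(x_train, y_train, x0, h):
    """Nadaraya-Watson kernel regression estimate at x0: a locally
    weighted average with Gaussian weights and bandwidth h."""
    weights = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in x_train]
    return sum(w * y for w, y in zip(weights, y_train)) / sum(weights)
```

    Evaluating this at a grid of environmental values gives a smooth species-environment response curve without assuming a parametric functional form.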

  16. Exact nonparametric inference for detection of nonlinear determinism

    OpenAIRE

    Luo, Xiaodong; Zhang, Jie; Small, Michael; Moroz, Irene

    2005-01-01

    We propose an exact nonparametric inference scheme for the detection of nonlinear determinism. The essential fact utilized in our scheme is that, for a linear stochastic process with jointly symmetric innovations, its ordinary least square (OLS) linear prediction error is symmetric about zero. Based on this viewpoint, a class of linear signed rank statistics, e.g. the Wilcoxon signed rank statistic, can be derived with the known null distributions from the prediction error. Thus one of the ad...
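    The essential idea can be sketched as follows, using a Wilcoxon signed-rank statistic with a normal approximation rather than the exact null distributions of the paper (the simulated process and its parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a linear AR(1) process with symmetric (Gaussian) innovations;
# under the scheme's null, its OLS one-step prediction errors are
# symmetric about zero.
n = 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal()

# OLS linear prediction: regress x_t on (1, x_{t-1}).
X = np.column_stack([np.ones(n - 1), x[:-1]])
coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
e = x[1:] - X @ coef

# Wilcoxon signed-rank statistic: sum of ranks of |e| over positive errors
# (ties are negligible for continuous data).
m = len(e)
ranks = np.abs(e).argsort().argsort() + 1
w = ranks[e > 0].sum()

# Normal approximation to the null distribution of W under symmetry;
# a large |z| would indicate asymmetric errors, hence nonlinearity.
mu = m * (m + 1) / 4
sigma = np.sqrt(m * (m + 1) * (2 * m + 1) / 24)
z = (w - mu) / sigma
```

    For this linear series the statistic should stay within ordinary sampling fluctuation; a deterministic nonlinear series would tend to push it far from zero.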

  17. The two frontiers of physics

    International Nuclear Information System (INIS)

    Anon.

    1986-01-01

    In March at Garching, near Munich, physicists from different walks of life together took another hard look at the two major frontiers of physics – the very large and the infinitesimally small. Organized jointly by CERN and the European Southern Observatory (ESO), the Garching 'Symposium on Cosmology, Astronomy and Fundamental Physics' was the second in a series launched at CERN in November 1983

  18. Measuring Efficiency of Health Systems of the Middle East and North Africa (MENA) Region Using Stochastic Frontier Analysis.

    Science.gov (United States)

    Hamidi, Samer; Akinci, Fevzi

    2016-06-01

    The main purpose of this study is to measure the technical efficiency of twenty health systems in the Middle East and North Africa (MENA) region to inform evidence-based health policy decisions. In addition, the effects of alternative stochastic frontier model specification on the empirical results are examined. We conducted a stochastic frontier analysis to estimate the country-level technical efficiencies using secondary panel data for 20 MENA countries for the period of 1995-2012 from the World Bank database. We also tested the effect of alternative frontier model specification using three random-effects approaches: a time-invariant model where efficiency effects are assumed to be static with regard to time, and a time-varying efficiency model where efficiency effects have temporal variation, and one model to account for heterogeneity. The average estimated technical inefficiency of health systems in the MENA region was 6.9 % with a range of 5.7-7.9 % across the three models. Among the top performers, Lebanon, Qatar, and Morocco are ranked consistently high according to the three different inefficiency model specifications. On the opposite side, Sudan, Yemen and Djibouti ranked among the worst performers. On average, the two most technically efficient countries were Qatar and Lebanon. We found that the estimated technical efficiency scores vary substantially across alternative parametric models. Based on the findings reported in this study, most MENA countries appear to be operating, on average, with a reasonably high degree of technical efficiency compared with other countries in the region. However, there is evidence to suggest that there are considerable efficiency gains yet to be made by some MENA countries. Additional empirical research is needed to inform future health policies aimed at improving both the efficiency and sustainability of the health systems in the MENA region.

  19. Fifth German-American Frontiers of Engineering Symposium

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2002-05-01

    The agenda book for the Fifth German-American Frontiers of Engineering Symposium contains abstracts of the 16 presentations as well as information on the program, bios of the speakers, contact information for all attendees, and background on the activity.

  20. Expanding the Frontiers of Population Nutrition Research: New Questions, New Methods, and New Approaches12

    Science.gov (United States)

    Pelletier, David L.; Porter, Christine M.; Aarons, Gregory A.; Wuehler, Sara E.; Neufeld, Lynnette M.

    2013-01-01

    Nutrition research, ranging from molecular to population levels and all points along this spectrum, is exploring new frontiers as new technologies and societal changes create new possibilities and demands. This paper defines a set of frontiers at the population level that are being created by the increased societal recognition of the importance of nutrition; its connection to urgent health, social, and environmental problems; and the need for effective and sustainable solutions at the population level. The frontiers are defined in terms of why, what, who, and how we study at the population level and the disciplinary foundations for that research. The paper provides illustrations of research along some of these frontiers, an overarching framework for population nutrition research, and access to some of the literature from outside of nutrition that can enhance the intellectual coherence, practical utility, and societal benefit of population nutrition research. The frontiers defined in this paper build on earlier forward-looking efforts by the American Society for Nutrition and extend these efforts in significant ways. The American Society for Nutrition and its members can play pivotal roles in advancing these frontiers by addressing a number of well-recognized challenges associated with transdisciplinary and engaged research. PMID:23319128

  1. Frontiers International Conference on Wastewater Treatment

    CERN Document Server

    2017-01-01

    This book describes the latest research advances, innovations, and applications in the field of water management and environmental engineering as presented by leading researchers, engineers, life scientists and practitioners from around the world at the Frontiers International Conference on Wastewater Treatment (FICWTM), held in Palermo, Italy in May 2017. The topics covered are highly diverse and include the physical processes of mixing and dispersion, biological developments and mathematical modeling, such as computational fluid dynamics in wastewater, MBBR and hybrid systems, membrane bioreactors, anaerobic digestion, reduction of greenhouse gases from wastewater treatment plants, and energy optimization. The contributions amply demonstrate that the application of cost-effective technologies for waste treatment and control is urgently needed so as to implement appropriate regulatory measures that ensure pollution prevention and remediation, safeguard public health, and preserve the environment. The contrib...

  2. Efficient nonparametric n -body force fields from machine learning

    Science.gov (United States)

    Glielmo, Aldo; Zeni, Claudio; De Vita, Alessandro

    2018-05-01

    We provide a definition and explicit expressions for n -body Gaussian process (GP) kernels, which can learn any interatomic interaction occurring in a physical system, up to n -body contributions, for any value of n . The series is complete, as it can be shown that the "universal approximator" squared exponential kernel can be written as a sum of n -body kernels. These recipes enable the choice of optimally efficient force models for each target system, as confirmed by extensive testing on various materials. We furthermore describe how the n -body kernels can be "mapped" on equivalent representations that provide database-size-independent predictions and are thus crucially more efficient. We explicitly carry out this mapping procedure for the first nontrivial (three-body) kernel of the series, and we show that this reproduces the GP-predicted forces with meV /Å accuracy while being orders of magnitude faster. These results pave the way to using novel force models (here named "M-FFs") that are computationally as fast as their corresponding standard parametrized n -body force fields, while retaining the nonparametric character, the ease of training and validation, and the accuracy of the best recently proposed machine-learning potentials.

  3. Semi-nonparametric estimates of interfuel substitution in US energy demand

    Energy Technology Data Exchange (ETDEWEB)

    Serletis, A.; Shahmoradi, A. [University of Calgary, Calgary, AB (Canada). Dept. of Economics

    2008-09-15

    This paper focuses on the demand for crude oil, natural gas, and coal in the United States in the context of two globally flexible functional forms - the Fourier and the Asymptotically Ideal Model (AIM) - estimated subject to full regularity, using methods suggested over 20 years ago by Gallant and Golub (Gallant, A. Ronald and Golub, Gene H. Imposing Curvature Restrictions on Flexible Functional Forms. Journal of Econometrics 26 (1984), 295-321) and recently used by Serletis and Shahmoradi (Serletis, A., Shahmoradi, A., 2005. Semi-nonparametric estimates of the demand for money in the United States. Macroeconomic Dynamics 9, 542-559) in the monetary demand systems literature. We provide a comparison in terms of a full set of elasticities and also a policy perspective, using (for the first time) parameter estimates that are consistent with global regularity.

  4. Applied thermodynamics: A new frontier for biotechnology

    DEFF Research Database (Denmark)

    Mollerup, Jørgen

    2006-01-01

    The scientific career of one of the most outstanding scientists in molecular thermodynamics, Professor John M. Prausnitz at Berkeley, reflects the change in the agenda of molecular thermodynamics, from hydrocarbon chemistry to biotechnology. To make thermodynamics a frontier for biotechnology...

  5. THE INVISIBLE FRONTIER: THE CURRENT LIMITS OF ...

    African Journals Online (AJOL)

    tions, area-wide or regional development organizations, specialized functional authorities or … regional or functional development authorities, parastatal organizations, or special project … frontiers between urban, peri-urban and rural activity are blurring and merging. … KOPP, A., 1998. Networking and rural development.

  6. A non-parametric meta-analysis approach for combining independent microarray datasets: application using two microarray datasets pertaining to chronic allograft nephropathy

    Directory of Open Access Journals (Sweden)

    Archer Kellie J

    2008-02-01

    Full Text Available Abstract Background With the popularity of DNA microarray technology, multiple groups of researchers have studied the gene expression of similar biological conditions. Different methods have been developed to integrate the results from various microarray studies, though most of them rely on distributional assumptions, such as the t-statistic based, mixed-effects model, or Bayesian model methods. However, often the sample size for each individual microarray experiment is small. Therefore, in this paper we present a non-parametric meta-analysis approach for combining data from independent microarray studies, and illustrate its application on two independent Affymetrix GeneChip studies that compared the gene expression of biopsies from kidney transplant recipients with chronic allograft nephropathy (CAN to those with normal functioning allograft. Results The simulation study comparing the non-parametric meta-analysis approach to a commonly used t-statistic based approach shows that the non-parametric approach has better sensitivity and specificity. For the application on the two CAN studies, we identified 309 distinct genes that expressed differently in CAN. By applying Fisher's exact test to identify enriched KEGG pathways among those genes called differentially expressed, we found 6 KEGG pathways to be over-represented among the identified genes. We used the expression measurements of the identified genes as predictors to predict the class labels for 6 additional biopsy samples, and the predicted results all conformed to their pathologist diagnosed class labels. Conclusion We present a new approach for combining data from multiple independent microarray studies. This approach is non-parametric and does not rely on any distributional assumptions. The rationale behind the approach is logically intuitive and can be easily understood by researchers not having advanced training in statistics. Some of the identified genes and pathways have been

  7. Probit vs. semi-nonparametric estimation: examining the role of disability on institutional entry for older adults.

    Science.gov (United States)

    Sharma, Andy

    2017-06-01

    The purpose of this study was to showcase an advanced methodological approach to modeling disability and institutional entry. Both are important areas to investigate given the on-going aging of the United States population. By 2020, approximately 15% of the population will be 65 years and older. Many of these older adults will experience disability and require formal care. A probit analysis was employed to determine which disabilities were associated with admission into an institution (i.e. long-term care). Since this framework imposes strong distributional assumptions, misspecification leads to inconsistent estimators. To overcome this shortcoming, the analysis extended the probit framework by employing an advanced semi-nonparametric maximum likelihood estimator utilizing Hermite polynomial expansions. Specification tests show that semi-nonparametric estimation is preferred over probit. In terms of the estimates, semi-nonparametric ratios equal 42 for cognitive difficulty, 64 for independent living, and 111 for self-care disability, while probit yields much smaller estimates of 19, 30, and 44, respectively. Public health professionals can use these results to better understand why certain interventions have not shown promise. Equally important, healthcare workers can use this research to evaluate which types of treatment plans may delay institutionalization and improve the quality of life for older adults. Implications for rehabilitation: With on-going global aging, understanding the association between disability and institutional entry is important in devising successful rehabilitation interventions. Semi-nonparametric estimation is preferred to probit and shows that ambulatory and cognitive impairments present a high risk of institutional entry (long-term care). Informal caregiving and home-based care require further examination as forms of rehabilitation/therapy for certain types of disabilities.
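    The probit baseline that the semi-nonparametric estimator relaxes can be written down directly; a minimal sketch of its log-likelihood for a single covariate, with the standard normal CDF computed via the error function (function and variable names are ours, not the study's):

```python
import math

def probit_loglik(beta0, beta1, xs, ys):
    """Log-likelihood of a simple probit model P(y = 1 | x) = Phi(b0 + b1*x),
    the parametric specification whose distributional assumption the
    semi-nonparametric (Hermite-expansion) approach loosens."""
    ll = 0.0
    for x, y in zip(xs, ys):
        p = 0.5 * (1 + math.erf((beta0 + beta1 * x) / math.sqrt(2)))
        p = min(max(p, 1e-12), 1 - 1e-12)  # guard against log(0)
        ll += math.log(p) if y else math.log(1 - p)
    return ll
```

    The semi-nonparametric estimator in the Gallant tradition replaces the normal density inside this likelihood with a normal density multiplied by a squared Hermite polynomial expansion, which is what makes specification tests against the plain probit possible.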

  8. Nonparametric evaluation of quantitative traits in population-based association studies when the genetic model is unknown.

    Science.gov (United States)

    Konietschke, Frank; Libiger, Ondrej; Hothorn, Ludwig A

    2012-01-01

    Statistical association between a single nucleotide polymorphism (SNP) genotype and a quantitative trait in genome-wide association studies is usually assessed using a linear regression model, or, in the case of non-normally distributed trait values, using the Kruskal-Wallis test. While linear regression models assume an additive mode of inheritance via equi-distant genotype scores, Kruskal-Wallis test merely tests global differences in trait values associated with the three genotype groups. Both approaches thus exhibit suboptimal power when the underlying inheritance mode is dominant or recessive. Furthermore, these tests do not perform well in the common situations when only a few trait values are available in a rare genotype category (disbalance), or when the values associated with the three genotype categories exhibit unequal variance (variance heterogeneity). We propose a maximum test based on Marcus-type multiple contrast test for relative effect sizes. This test allows model-specific testing of either dominant, additive or recessive mode of inheritance, and it is robust against variance heterogeneity. We show how to obtain mode-specific simultaneous confidence intervals for the relative effect sizes to aid in interpreting the biological relevance of the results. Further, we discuss the use of a related all-pairwise comparisons contrast test with range preserving confidence intervals as an alternative to Kruskal-Wallis heterogeneity test. We applied the proposed maximum test to the Bogalusa Heart Study dataset, and gained a remarkable increase in the power to detect association, particularly for rare genotypes. Our simulation study also demonstrated that the proposed non-parametric tests control family-wise error rate in the presence of non-normality and variance heterogeneity contrary to the standard parametric approaches. We provide a publicly available R library nparcomp that can be used to estimate simultaneous confidence intervals or compatible
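    The Kruskal-Wallis baseline against which the maximum test is compared reduces to a short rank computation; a minimal, tie-free sketch (the full test, and the proposed maximum test with simultaneous confidence intervals, are available in the authors' nparcomp R package):

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic (no tie correction): a global rank-based
    test of whether the genotype groups share a common distribution."""
    # Pool all observations, remembering each one's group index.
    data = sorted((x, g) for g, grp in enumerate(groups) for x in grp)
    n = len(data)
    rank_sums = [0.0] * len(groups)
    for rank, (_, g) in enumerate(data, start=1):  # ranks 1..n, no ties assumed
        rank_sums[g] += rank
    return 12.0 / (n * (n + 1)) * sum(
        rs ** 2 / len(grp) for rs, grp in zip(rank_sums, groups)
    ) - 3 * (n + 1)
```

    As the abstract notes, this statistic only detects global differences across the three genotype groups; the proposed maximum contrast test instead targets dominant, additive, or recessive patterns specifically.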

  9. Frontiers in Magnetic Materials

    CERN Document Server

    Narlikar, Anant V

    2005-01-01

    Frontiers in Magnetic Materials focuses on current achievements and state-of-the-art advancements in magnetic materials. Several lines of development, among them high-Tc superconductivity, nanotechnology, and refined experimental techniques, have remarkably raised knowledge of and interest in magnetic materials. The book comprises 24 chapters on the most relevant topics written by renowned international experts in the field. It is of central interest to researchers and specialists in physics and materials science, in both academic and industrial research, as well as to advanced students.

  10. The low-energy frontier of particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Jaeckel, Joerg [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology; Ringwald, Andreas [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2010-02-15

    Most embeddings of the Standard Model into a more unified theory, in particular the ones based on supergravity or superstrings, predict the existence of a hidden sector of particles which have only very weak interactions with the visible-sector Standard Model particles. Some of these exotic particle candidates (such as "axions", "axion-like particles" and "hidden U(1) gauge bosons") may be very light, with masses in the sub-eV range, and have very weak interactions with photons. Correspondingly, these very weakly interacting sub-eV particles (WISPs) may lead to observable effects in experiments (as well as in astrophysical and cosmological observations) searching for light shining through a wall, for changes in laser polarisation, for non-linear processes in large electromagnetic fields and for deviations from Coulomb's law. We present the physics case and a status report of this emerging low-energy frontier of fundamental physics. (orig.)

  11. Nonparametric statistics a step-by-step approach

    CERN Document Server

    Corder, Gregory W

    2014-01-01

    "…a very useful resource for courses in nonparametric statistics in which the emphasis is on applications rather than on theory. It also deserves a place in libraries of all institutions where introductory statistics courses are taught." -CHOICE. This Second Edition presents a practical and understandable approach that enhances and expands the statistical toolset for readers. This book includes: new coverage of the sign test and the Kolmogorov-Smirnov two-sample test in an effort to offer a logical and natural progression to statistical power; SPSS® (Version 21) software and updated screen ca

  12. A structural nonparametric reappraisal of the CO2 emissions-income relationship

    NARCIS (Netherlands)

    Azomahou, T.T.; Goedhuys - Degelin, Micheline; Nguyen-Van, P.

    Relying on a structural nonparametric estimation, we show that CO2 emissions clearly increase with income at low income levels. For higher income levels, we observe a decreasing relationship, though not a significant one. We also find that CO2 emissions increase monotonically with energy use at a

  13. Nonparametric estimation of the stationary M/G/1 workload distribution function

    DEFF Research Database (Denmark)

    Hansen, Martin Bøgsted

    2005-01-01

    In this paper it is demonstrated how a nonparametric estimator of the stationary workload distribution function of the M/G/1-queue can be obtained by systematic sampling the workload process. Weak convergence results and bootstrap methods for empirical distribution functions for stationary associ...

  14. A nonparametric approach to calculate critical micelle concentrations: the local polynomial regression method

    Energy Technology Data Exchange (ETDEWEB)

    Lopez Fontan, J.L.; Costa, J.; Ruso, J.M.; Prieto, G. [Dept. of Applied Physics, Univ. of Santiago de Compostela, Santiago de Compostela (Spain); Sarmiento, F. [Dept. of Mathematics, Faculty of Informatics, Univ. of A Coruna, A Coruna (Spain)

    2004-02-01

    The application of a statistical method, the local polynomial regression method (LPRM), based on a nonparametric estimation of the regression function to determine the critical micelle concentration (cmc) is presented. The method is extremely flexible because it does not impose any parametric model on the underlying structure of the data but rather allows the data to speak for themselves. Good concordance of cmc values with those obtained by other methods was found for systems in which the variation of a measured physical property with concentration showed an abrupt change. When this variation was slow, discrepancies between the values obtained by LPRM and other methods were found. (orig.)
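The core computation of local polynomial regression is small: the degree-1 estimate at a point x0 is a weighted least-squares line fit, with weights that decay with distance from x0. The sketch below is a minimal Python illustration; the Gaussian weight and the bandwidth h are assumptions for the example, not the settings used in the paper.

```python
import math

def local_linear(xs, ys, x0, h):
    """Local degree-1 polynomial regression estimate at x0: fit a weighted
    least-squares line with Gaussian weights of bandwidth h, then evaluate
    the fitted line at x0."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    sw = sum(w)
    swx = sum(wi * x for wi, x in zip(w, xs))
    swy = sum(wi * y for wi, y in zip(w, ys))
    swxx = sum(wi * x * x for wi, x in zip(w, xs))
    swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    slope = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)
    intercept = (swy - slope * swx) / sw
    return intercept + slope * x0

xs = [float(i) for i in range(11)]
ys = [2.0 * x + 1.0 for x in xs]           # exactly linear toy data
print(local_linear(xs, ys, 3.0, 2.0))      # ≈ 7.0: recovers the line
```

Evaluating such fits on a grid of concentrations gives a smooth curve whose change in slope can locate the cmc without assuming a functional form.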

  15. Energy not the only frontier

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    In the major world areas active in high energy physics, proposals have been prepared for new machines to manufacture intense beams of strongly interacting particles (hadrons) to complement the physics coming in from the high energy frontier. An information session on these plans for intense hadron facilities was included in the Third International Conference on the Intersections between Particle and Nuclear Physics, held in Rockport, Maine, in May

  16. Stratigraphy and structural setting of Upper Cretaceous Frontier Formation, western Centennial Mountains, southwestern Montana and southeastern Idaho

    Science.gov (United States)

    Dyman, T.S.; Tysdal, R.G.; Perry, W.J.; Nichols, D.J.; Obradovich, J.D.

    2008-01-01

    Stratigraphic, sedimentologic, and palynologic data were used to correlate the Frontier Formation of the western Centennial Mountains with time-equivalent rocks in the Lima Peaks area and other nearby areas in southwestern Montana. The stratigraphic interval studied is in the middle and upper parts (but not the uppermost part) of the formation, based on a comparison of sandstone petrography, palynologic age data, and our interpretation of the structure using a seismic line along the frontal zone of the Centennial Mountains and the adjacent Centennial Valley. The Frontier Formation comprises sandstone, siltstone, mudstone, limestone, and silty shale deposited in fluvial and coastal settings. A distinctive characteristic of these strata in the western Centennial Mountains is the absence of conglomerate and conglomeratic sandstone beds. Absence of conglomerate beds may be due to lateral facies changes associated with fluvial systems, a distal fining of grain size, and the absence of both uppermost and lower Frontier rocks in the study area. Palynostratigraphic data indicate a Coniacian age for the Frontier Formation in the western Centennial Mountains. These data are supported by a geochronologic age from the middle part of the Frontier at Lima Peaks indicating a possible late Coniacian-early Santonian age (86.25 ± 0.38 Ma) for the middle Frontier there. The Frontier Formation in the western Centennial Mountains is comparable in age and thickness to part of the Frontier at Lima Peaks. These rocks represent one of the thickest known sequences of Frontier strata in the Rocky Mountain region. Deposition was from about 95 to 86 Ma (middle Cenomanian to at least early Santonian), during which time shoreface sandstone of the Telegraph Creek Formation and marine shale of the Cody Shale were deposited to the east in the area now occupied by the Madison Range in southwestern Montana. Frontier strata in the western Centennial Mountains are structurally isolated from other

  17. The Stellar Initial Mass Function in Early-type Galaxies from Absorption Line Spectroscopy. IV. A Super-Salpeter IMF in the Center of NGC 1407 from Non-parametric Models

    Energy Technology Data Exchange (ETDEWEB)

    Conroy, Charlie [Department of Astronomy, Harvard University, Cambridge, MA, 02138 (United States); Van Dokkum, Pieter G. [Department of Astronomy, Yale University, New Haven, CT, 06511 (United States); Villaume, Alexa [Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States)

    2017-03-10

    It is now well established that the stellar initial mass function (IMF) can be determined from the absorption line spectra of old stellar systems, and this has been used to measure the IMF and its variation across the early-type galaxy population. Previous work focused on measuring the slope of the IMF over one or more stellar mass intervals, implicitly assuming that this is a good description of the IMF and that the IMF has a universal low-mass cutoff. In this work we consider more flexible IMFs, including two-component power laws with a variable low-mass cutoff and a general non-parametric model. We demonstrate with mock spectra that the detailed shape of the IMF can be accurately recovered as long as the data are of high quality (S/N ≳ 300 Å⁻¹) and cover a wide wavelength range (0.4–1.0 μm). We apply these flexible IMF models to a high-S/N spectrum of the center of the massive elliptical galaxy NGC 1407. Fitting the spectrum with non-parametric IMFs, we find that the IMF in the center shows a continuous rise extending toward the hydrogen-burning limit, with a behavior that is well approximated by a power law with an index of −2.7. These results provide strong evidence for the existence of extreme (super-Salpeter) IMFs in the cores of massive galaxies.

  18. An Efficient, Non-iterative Method of Identifying the Cost-Effectiveness Frontier

    Science.gov (United States)

    Suen, Sze-chuan; Goldhaber-Fiebert, Jeremy D.

    2015-01-01

    Cost-effectiveness analysis aims to identify treatments and policies that maximize benefits subject to resource constraints. However, the conventional process of identifying the efficient frontier (i.e., the set of potentially cost-effective options) can be algorithmically inefficient, especially when considering a policy problem with many alternative options or when performing an extensive suite of sensitivity analyses for which the efficient frontier must be found for each. Here, we describe an alternative one-pass algorithm that is conceptually simple, easier to implement, and potentially faster for situations that challenge the conventional approach. Our algorithm accomplishes this by exploiting the relationship between the net monetary benefit and the cost-effectiveness plane. To facilitate further evaluation and use of this approach, we additionally provide scripts in R and Matlab that implement our method and can be used to identify efficient frontiers for any decision problem. PMID:25926282
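The net-monetary-benefit characterization the authors exploit can be expressed as a single sorted scan, in the style of a convex-hull algorithm: a strategy is on the frontier exactly when it maximizes NMB = wtp·effect − cost for some willingness-to-pay wtp ≥ 0. The Python below is an illustrative sketch under that characterization, not the authors' published R/Matlab scripts, and the `(name, cost, effect)` tuple layout is an assumption.

```python
def efficient_frontier(strategies):
    """One-pass identification of the cost-effectiveness frontier.

    strategies: iterable of (name, cost, effect) tuples.  A strategy is on
    the frontier iff it maximizes net monetary benefit wtp*effect - cost for
    some willingness-to-pay wtp >= 0, i.e. it lies on the lower convex hull
    of the (effect, cost) cloud."""
    hull = []
    for s in sorted(strategies, key=lambda s: (s[2], s[1])):  # effect asc
        # Drop kept strategies that s strongly dominates (cost >=, effect <=).
        while hull and hull[-1][1] >= s[1]:
            hull.pop()
        # Skip s if an equally effective, cheaper strategy is already kept.
        if hull and hull[-1][2] == s[2]:
            continue
        # Drop strategies that become extendedly dominated (ICERs must rise).
        while len(hull) >= 2:
            (_, c1, e1), (_, c2, e2) = hull[-2], hull[-1]
            if (s[1] - c2) / (s[2] - e2) <= (c2 - c1) / (e2 - e1):
                hull.pop()
            else:
                break
        hull.append(s)
    return hull

strategies = [("A", 0.0, 0.0), ("B", 100.0, 10.0), ("C", 50.0, 2.0),
              ("D", 200.0, 11.0), ("E", 300.0, 5.0)]
print([s[0] for s in efficient_frontier(strategies)])  # ['A', 'B', 'D']
```

Here "C" is extendedly dominated (its incremental cost-effectiveness ratio is undercut by moving straight from A to B) and "E" is strongly dominated, so both fall off the frontier in the same pass.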

  19. A comparison of parametric and nonparametric methods for normalising cDNA microarray data.

    Science.gov (United States)

    Khondoker, Mizanur R; Glasbey, Chris A; Worton, Bruce J

    2007-12-01

    Normalisation is an essential first step in the analysis of most cDNA microarray data, to correct for effects arising from imperfections in the technology. Loess smoothing is commonly used to correct for trends in log-ratio data. However, parametric models, such as the additive plus multiplicative variance model, have been preferred for scale normalisation, though the variance structure of microarray data may be of a more complex nature than can be accommodated by a parametric model. We propose a new nonparametric approach that incorporates location and scale normalisation simultaneously using a Generalised Additive Model for Location, Scale and Shape (GAMLSS, Rigby and Stasinopoulos, 2005, Applied Statistics, 54, 507-554). We compare its performance in inferring differential expression with Huber et al.'s (2002, Bioinformatics, 18, 96-104) arsinh variance stabilising transformation (AVST) using real and simulated data. We show GAMLSS to be as powerful as AVST when the parametric model is correct, and more powerful when the model is wrong.

  20. The Behavior of Hydrogen Under Extreme Conditions on Ultrafast Timescales (A 'Life at the Frontiers of Energy Research' contest entry from the 2011 Energy Frontier Research Centers (EFRCs) Summit and Forum)

    International Nuclear Information System (INIS)

    Mao, Ho-kwang

    2011-01-01

    'The Behavior of Hydrogen Under Extreme Conditions on Ultrafast Timescales' was submitted by the Center for Energy Frontier Research in Extreme Environments (EFree) to the 'Life at the Frontiers of Energy Research' video contest at the 2011 Science for Our Nation's Energy Future: Energy Frontier Research Centers (EFRCs) Summit and Forum. Twenty-six EFRCs created short videos to highlight their mission and their work. EFree is directed by Ho-kwang Mao at the Carnegie Institution of Washington and is a partnership of scientists from thirteen institutions. The Office of Basic Energy Sciences in the U.S. Department of Energy's Office of Science established the 46 Energy Frontier Research Centers (EFRCs) in 2009. These collaboratively organized centers conduct fundamental research focused on 'grand challenges' and use-inspired 'basic research needs' recently identified in major strategic planning efforts by the scientific community. The overall purpose is to accelerate scientific progress toward meeting the nation's critical energy challenges. The mission of Energy Frontier Research in Extreme Environments is 'to accelerate the discovery and creation of energy-relevant materials using extreme pressures and temperatures.' Research topics are: catalysis (CO2, water), photocatalysis, solid state lighting, optics, thermoelectrics, phonons, thermal conductivity, solar electrodes, fuel cells, superconductivity, extreme environments, radiation effects, defects, spin dynamics, CO2 (capture, conversion, storage), greenhouse gases, hydrogen (fuel, storage), ultrafast physics, novel materials synthesis, and defect-tolerant materials.

  1. Mapping Process to Pattern in the Landscape Change of the Amazonian Frontier

    Science.gov (United States)

    Walker, Robert

    2003-01-01

    Changes in land use and land cover are dynamic processes reflecting a sequence of decisions made by individual land managers. In developing economies, these decisions may be embedded in the evolution of individual households, as is often the case in indigenous areas and agricultural frontiers. One goal of the present article is to address the land use and land-cover decisions of colonist farmers in the Amazon Basin as a function, in part, of household characteristics. Another goal is to generalize the issue of tropical deforestation into a broader discussion on forest dynamics. The extent of secondary forest in tropical areas has been well documented in South America and Africa. Agricultural-plot abandonment often occurs in tandem with primary forest clearance and as part of the same decision-making calculus. Consequently, tropical deforestation and forest succession are not independent processes in the landscape. This article presents a framework that integrates them into a model of forest dynamics at household level, and in so doing provides an account of the spatial pattern of deforestation that has been observed in the Amazon's colonization frontiers.

  2. Financial development and energy consumption in Central and Eastern European frontier economies

    International Nuclear Information System (INIS)

    Sadorsky, Perry

    2011-01-01

    This study examines the impact of financial development on energy consumption in a sample of 9 Central and Eastern European frontier economies. Several different measures of financial development are examined including bank related variables and stock market variables. The empirical results, obtained from dynamic panel demand models, show a positive and statistically significant relationship between financial development and energy consumption when financial development is measured using banking variables like deposit money bank assets to GDP, financial system deposits to GDP, or liquid liabilities to GDP. Of the three stock market variables investigated, only one, stock market turnover, has a positive and statistically significant impact on energy consumption. Both short-run and long-run elasticities are presented. The implications of these results for energy policy are discussed. - Research Highlights: → Financial development affects energy consumption in 9 Central and Eastern European frontier economies. → Bank variables have a larger impact on energy consumption than do stock market variables. → Long run bank elasticities range from 0.117 to 0.276. → These results have implications for energy demand forecasts and greenhouse gas emissions.

  3. Nonparametric Estimation of Interval Reliability for Discrete-Time Semi-Markov Systems

    DEFF Research Database (Denmark)

    Georgiadis, Stylianos; Limnios, Nikolaos

    2016-01-01

    In this article, we consider a repairable discrete-time semi-Markov system with finite state space. The measure of the interval reliability is given as the probability of the system being operational over a given finite-length time interval. A nonparametric estimator is proposed for the interval...

  4. Supremum Norm Posterior Contraction and Credible Sets for Nonparametric Multivariate Regression

    NARCIS (Netherlands)

    Yoo, W.W.; Ghosal, S

    2016-01-01

    In the setting of nonparametric multivariate regression with unknown error variance, we study asymptotic properties of a Bayesian method for estimating a regression function f and its mixed partial derivatives. We use a random series of tensor product of B-splines with normal basis coefficients as a

  5. Non-parametric early seizure detection in an animal model of temporal lobe epilepsy

    Science.gov (United States)

    Talathi, Sachin S.; Hwang, Dong-Uk; Spano, Mark L.; Simonotto, Jennifer; Furman, Michael D.; Myers, Stephen M.; Winters, Jason T.; Ditto, William L.; Carney, Paul R.

    2008-03-01

    The performance of five non-parametric, univariate seizure detection schemes (embedding delay, Hurst scale, wavelet scale, nonlinear autocorrelation and variance energy) was evaluated as a function of the sampling rate of EEG recordings, the electrode types used for EEG acquisition, and the spatial location of the EEG electrodes, in order to determine the applicability of the measures in real-time closed-loop seizure intervention. The criteria chosen for evaluating the performance were high statistical robustness (as determined through the sensitivity and the specificity of a given measure in detecting a seizure) and the lag in seizure detection with respect to the seizure onset time (as determined by visual inspection of the EEG signal by a trained epileptologist). An optimality index was designed to evaluate the overall performance of each measure. For the EEG data recorded with a microwire electrode array at a sampling rate of 12 kHz, the wavelet scale measure exhibited the best overall performance, detecting seizures with a high optimality index value and high sensitivity and specificity.
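As a flavor of how such univariate measures operate, the sketch below computes a sliding-window variance that jumps when high-amplitude activity begins; crossing a threshold on such a statistic would trigger detection. The exact 'variance energy' definition used in the study is not reproduced here, so plain windowed variance stands in as an assumed illustration.

```python
def sliding_variance(signal, w):
    """Variance of the signal in each length-w sliding window: a simple
    stand-in for a variance-based seizure detection statistic."""
    out = []
    for i in range(len(signal) - w + 1):
        seg = signal[i:i + w]
        m = sum(seg) / w
        out.append(sum((v - m) ** 2 for v in seg) / w)
    return out

baseline = [0.0, 0.1, -0.1, 0.0, 0.1, -0.1]   # quiet EEG segment (toy data)
seizure = baseline + [2.0, -2.0, 2.0, -2.0]   # sudden high-amplitude activity
print(max(sliding_variance(seizure, 4)) > 10 * max(sliding_variance(baseline, 4)))  # True
```

The detection lag discussed in the abstract corresponds to how many samples elapse before the windowed statistic crosses its threshold after onset.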

  6. On Wasserstein Two-Sample Testing and Related Families of Nonparametric Tests

    Directory of Open Access Journals (Sweden)

    Aaditya Ramdas

    2017-01-01

    Nonparametric two-sample or homogeneity testing is a decision-theoretic problem that involves identifying differences between two random variables without making parametric assumptions about their underlying distributions. The literature is old and rich, with a wide variety of statistics having been designed and analyzed, both for the unidimensional and the multivariate setting. In this short survey, we focus on test statistics that involve the Wasserstein distance. Using an entropic smoothing of the Wasserstein distance, we connect these to very different tests including multivariate methods involving energy statistics and kernel-based maximum mean discrepancy, and univariate methods like the Kolmogorov–Smirnov test, probability or quantile (PP/QQ) plots, and receiver operating characteristic or ordinal dominance (ROC/ODC) curves. Some observations are implicit in the literature, while others seem not to have been noticed thus far. Given nonparametric two-sample testing's classical and continued importance, we aim to provide useful connections for theorists and practitioners familiar with one subset of methods but not others.
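In one dimension, the empirical Wasserstein-1 distance between two equal-size samples reduces to the average gap between matched order statistics, which makes the connection to quantile (QQ) plots concrete. A minimal sketch (equal sample sizes are an assumption of this shortcut):

```python
def wasserstein_1d(xs, ys):
    """Empirical 1-D Wasserstein-1 distance for two equal-size samples:
    the average absolute gap between matched order statistics, i.e. the
    area between the two empirical quantile (QQ) functions."""
    assert len(xs) == len(ys), "this sketch assumes equal sample sizes"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

print(wasserstein_1d([0, 1, 2], [1, 2, 3]))   # 1.0 (a unit shift)
print(wasserstein_1d([3, 1, 2], [1, 2, 3]))   # 0.0 (same sample, reordered)
```

A two-sample test can then be built by comparing this statistic against its permutation distribution; the entropic smoothing discussed in the survey modifies the underlying transport problem, not this basic empirical formula.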

  7. Use of wikis as a collaborative ICT tool for extending the frontiers of ...

    African Journals Online (AJOL)

    When Web 2.0 technologies are used in the classroom, learners and teachers are given the opportunity to extend the frontiers of knowledge by collaborating and contributing to knowledge. This paper explores the possibility of using Wikis – a Web 2.0 technology – to extend the frontiers of knowledge. It also discusses how ...

  8. Interpreting New Data from the High Energy Frontier

    Energy Technology Data Exchange (ETDEWEB)

    Thaler, Jesse [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2016-09-26

    This is the final technical report for DOE grant DE-SC0006389, "Interpreting New Data from the High Energy Frontier", describing research accomplishments by the PI in the field of theoretical high energy physics.

  9. FIFE-Jobsub: a grid submission system for intensity frontier experiments at Fermilab

    International Nuclear Information System (INIS)

    Box, Dennis

    2014-01-01

    The Fermilab Intensity Frontier Experiments use an integrated submission system known as FIFE-jobsub, part of the FIFE (Fabric for Frontier Experiments) initiative, to submit batch jobs to the Open Science Grid. FIFE-jobsub eases the burden on experimenters by integrating data transfer and site selection details in an easy to use and well-documented format. FIFE-jobsub automates tedious details of maintaining grid proxies for the lifetime of the grid job. Data transfer is handled using the Intensity Frontier Data Handling Client (IFDHC) [1] tool suite, which facilitates selecting the appropriate data transfer method from many possibilities while protecting shared resources from overload. Chaining of job dependencies into Directed Acyclic Graphs (Condor DAGS) is well supported and made easier through the use of input flags and parameters.

  10. FIFE-Jobsub: a grid submission system for intensity frontier experiments at Fermilab

    Science.gov (United States)

    Box, Dennis

    2014-06-01

    The Fermilab Intensity Frontier Experiments use an integrated submission system known as FIFE-jobsub, part of the FIFE (Fabric for Frontier Experiments) initiative, to submit batch jobs to the Open Science Grid. FIFE-jobsub eases the burden on experimenters by integrating data transfer and site selection details in an easy to use and well-documented format. FIFE-jobsub automates tedious details of maintaining grid proxies for the lifetime of the grid job. Data transfer is handled using the Intensity Frontier Data Handling Client (IFDHC) [1] tool suite, which facilitates selecting the appropriate data transfer method from many possibilities while protecting shared resources from overload. Chaining of job dependencies into Directed Acyclic Graphs (Condor DAGS) is well supported and made easier through the use of input flags and parameters.

  11. Using non-parametric methods in econometric production analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    Econometric estimation of production functions is one of the most common methods in applied economic production analysis. These studies usually apply parametric estimation techniques, which obligate the researcher to specify the functional form of the production function. Most often, the Cobb...... results—including measures that are of interest of applied economists, such as elasticities. Therefore, we propose to use nonparametric econometric methods. First, they can be applied to verify the functional form used in parametric estimations of production functions. Second, they can be directly used...

  12. Hadron Energy Reconstruction for ATLAS Barrel Combined Calorimeter Using Non-Parametrical Method

    CERN Document Server

    Kulchitskii, Yu A

    2000-01-01

    Hadron energy reconstruction for the ATLAS barrel prototype combined calorimeter in the framework of the non-parametrical method is discussed. The non-parametrical method utilizes only the known e/h ratios and the electron calibration constants and does not require the determination of any parameters by a minimization technique. Thus, this technique lends itself to fast energy reconstruction in a first-level trigger. The reconstructed mean values of the hadron energies are within ±1% of the true values and the fractional energy resolution is [(58±3)%·√GeV/√E + (2.5±0.3)%] ⊕ (1.7±0.2) GeV/E. The value of the e/h ratio obtained for the electromagnetic compartment of the combined calorimeter is 1.74±0.04. Results of a study of the longitudinal hadronic shower development are also presented.

  13. Assessing T cell clonal size distribution: a non-parametric approach.

    Science.gov (United States)

    Bolkhovskaya, Olesya V; Zorin, Daniil Yu; Ivanchenko, Mikhail V

    2014-01-01

    Clonal structure of the human peripheral T-cell repertoire is shaped by a number of homeostatic mechanisms, including antigen presentation and cytokine and cell regulation. Its accurate tuning leads to a remarkable ability to combat pathogens in all their variety, while systemic failures may lead to severe consequences like autoimmune diseases. Here we develop and make use of a non-parametric statistical approach to assess T cell clonal size distributions from recent next generation sequencing data. For 41 healthy individuals and a patient with ankylosing spondylitis who underwent treatment, we invariably find power law scaling over several decades and, for the first time, calculate quantitatively meaningful values of the decay exponent. It has proved to be much the same among healthy donors, significantly different for the autoimmune patient before therapy, and converging towards a typical value afterwards. We discuss implications of the findings for theoretical understanding and mathematical modeling of adaptive immunity.
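A quantitatively meaningful decay exponent can be obtained by maximum likelihood rather than by fitting a line to a log-log histogram. The sketch below is the standard Hill-type estimator under a continuous power-law tail, checked on synthetic Pareto data; it is an illustration of the idea, not the authors' exact estimation procedure.

```python
import math
import random

def powerlaw_alpha_mle(xs, xmin=1.0):
    """Maximum-likelihood (Hill-type) estimate of alpha for a power-law
    density p(x) ~ x**(-alpha) on x >= xmin (continuous approximation):
    alpha = 1 + n / sum(log(x / xmin))."""
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic Pareto sample with known exponent alpha = 2.5 (inverse-CDF draw):
rng = random.Random(1)
alpha_true = 2.5
sample = [(1.0 - rng.random()) ** (-1.0 / (alpha_true - 1.0)) for _ in range(5000)]
print(abs(powerlaw_alpha_mle(sample) - alpha_true) < 0.15)  # True
```

For clone-size data, which are discrete counts, a discrete power-law likelihood and a data-driven choice of xmin are usually preferred; the continuous form above keeps the example short.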

  14. Performance of non-parametric algorithms for spatial mapping of tropical forest structure

    Directory of Open Access Journals (Sweden)

    Liang Xu

    2016-08-01

    Background: Mapping tropical forest structure is a critical requirement for accurate estimation of emissions and removals from land use activities. With the availability of a wide range of remote sensing imagery of vegetation characteristics from space, development of finer-resolution and more accurate maps has advanced in recent years. However, the mapping accuracy relies heavily on the quality of input layers, the algorithm chosen, and the size and quality of inventory samples for calibration and validation. Results: By using airborne lidar data as the “truth” and focusing on the mean canopy height (MCH) as a key structural parameter, we test two commonly used non-parametric techniques, maximum entropy (ME) and random forest (RF), for developing maps over a study site in Central Gabon. Results of mapping show that both approaches have improved accuracy with more input layers in mapping canopy height at 100 m (1-ha) pixels. The bias-corrected spatial models further improve estimates for small and large trees across the tails of height distributions, with a trade-off in increasing overall mean squared error that can be readily compensated by increasing the sample size. Conclusions: A significant improvement in tropical forest mapping can be achieved by weighting the number of inventory samples against the choice of image layers and the non-parametric algorithms. Without future satellite observations with better sensitivity to forest biomass, maps based on existing data will remain slightly biased towards the mean of the distribution, under-estimating the upper tail and over-estimating the lower tail.

  15. Doubly Nonparametric Sparse Nonnegative Matrix Factorization Based on Dependent Indian Buffet Processes.

    Science.gov (United States)

    Xuan, Junyu; Lu, Jie; Zhang, Guangquan; Xu, Richard Yi Da; Luo, Xiangfeng

    2018-05-01

    Sparse nonnegative matrix factorization (SNMF) aims to factorize a data matrix into two optimized nonnegative sparse factor matrices, which could benefit many tasks, such as document-word co-clustering. However, traditional SNMF typically assumes the number of latent factors (i.e., the dimensionality of the factor matrices) to be fixed. This assumption makes it inflexible in practice. In this paper, we propose a doubly sparse nonparametric NMF framework to mitigate this issue by using dependent Indian buffet processes (dIBP). We apply a correlation function for the generation of two stick weights associated with each column pair of factor matrices while still maintaining their respective marginal distributions specified by IBP. As a consequence, the generation of the two factor matrices is columnwise correlated. Under this framework, two classes of correlation function are proposed: 1) using the bivariate Beta distribution and 2) using a Copula function. Compared with single IBP-based NMF, the proposed framework jointly makes the two factor matrices nonparametric and sparse, so it can be applied to broader scenarios, such as co-clustering. The framework is also much more flexible than Gaussian process-based and hierarchical Beta process-based dIBPs in terms of allowing the two corresponding binary matrix columns to have greater variations in their nonzero entries. Our experiments on synthetic data show the merits of the proposed framework compared with state-of-the-art models in terms of factorization efficiency, sparsity, and flexibility. Experiments on real-world data sets demonstrate its efficiency in document-word co-clustering tasks.

  16. On the use of permutation in and the performance of a class of nonparametric methods to detect differential gene expression.

    Science.gov (United States)

    Pan, Wei

    2003-07-22

    Recently a class of nonparametric statistical methods, including the empirical Bayes (EB) method, the significance analysis of microarrays (SAM) method and the mixture model method (MMM), have been proposed to detect differential gene expression for replicated microarray experiments conducted under two conditions. All the methods depend on constructing a test statistic Z and a so-called null statistic z. The null statistic z is used to provide a reference distribution for Z such that statistical inference can be accomplished. A common way of constructing z is to apply Z to randomly permuted data. Here we point out that the distribution of z may not approximate the null distribution of Z well, leading to possibly too conservative inference. This observation may apply to other permutation-based nonparametric methods. We propose a new method of constructing a null statistic that aims to estimate the null distribution of a test statistic directly. Using simulated data and real data, we assess and compare the performance of the existing method and our new method when applied in EB, SAM and MMM. Some interesting findings on the operating characteristics of EB, SAM and MMM are also reported. Finally, by combining the ideas of SAM and MMM, we outline a simple nonparametric method based on the direct use of a test statistic and a null statistic.
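The permutation construction under discussion is compact in code: the null statistic z is simply the test statistic Z recomputed on data whose group labels have been randomly permuted. The Python below is a generic sketch of that construction; the plain absolute mean difference stands in for the moderated statistics used by EB, SAM, and MMM.

```python
import random

def permutation_pvalue(x, y, stat, n_perm=1000, seed=0):
    """Approximate the null distribution of stat(x, y) by recomputing it on
    randomly relabeled data, and return the permutation p-value of the
    observed value (with the usual +1 correction)."""
    rng = random.Random(seed)
    observed = stat(x, y)
    pooled = list(x) + list(y)
    n = len(x)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if stat(pooled[:n], pooled[n:]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

def abs_mean_diff(a, b):
    return abs(sum(a) / len(a) - sum(b) / len(b))

x = [float(i) for i in range(10)]
y = [float(i) + 5.0 for i in range(10)]   # shifted distribution
print(permutation_pvalue(x, y, abs_mean_diff) < 0.05)  # True: shift detected
```

The paper's caution applies here too: when genes are differentially expressed, permuting labels mixes the two conditions, so the permutation distribution of z need not match the true null distribution of Z.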

  17. Semiparametric Mixtures of Regressions with Single-index for Model Based Clustering

    OpenAIRE

    Xiang, Sijia; Yao, Weixin

    2017-01-01

    In this article, we propose two classes of semiparametric mixture regression models with single-index for model based clustering. Unlike many semiparametric/nonparametric mixture regression models that can only be applied to low dimensional predictors, the new semiparametric models can easily incorporate high dimensional predictors into the nonparametric components. The proposed models are very general, and many of the recently proposed semiparametric/nonparametric mixture regression models a...

  18. Low default credit scoring using two-class non-parametric kernel density estimation

    CSIR Research Space (South Africa)

    Rademeyer, E

    2016-12-01

    Full Text Available This paper investigates the performance of two-class classification credit scoring data sets with low default ratios. The standard two-class parametric Gaussian and non-parametric Parzen classifiers are extended, using Bayes’ rule, to include either...
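
The basic device described above, class-conditional Parzen densities combined through Bayes' rule with a class prior, can be sketched as follows (a toy one-dimensional version with made-up scores and a hypothetical bandwidth, not the paper's extended classifier):

```python
import math

def gaussian_kernel(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def parzen_density(x, sample, h):
    """Parzen-window (kernel) density estimate at x with bandwidth h."""
    return sum(gaussian_kernel((x - xi) / h) for xi in sample) / (len(sample) * h)

def classify(x, good, bad, prior_bad, h=0.5):
    """Bayes' rule with Parzen class-conditional densities;
    prior_bad encodes the (low) default ratio."""
    p_bad = parzen_density(x, bad, h) * prior_bad
    p_good = parzen_density(x, good, h) * (1 - prior_bad)
    return "default" if p_bad > p_good else "non-default"

good = [0.1, 0.2, 0.15, 0.3, 0.25]   # toy scores of non-defaulters
bad = [0.8, 0.9, 0.85]               # toy scores of defaulters (rare class)
```

With a default ratio of, say, 5%, even a point sitting squarely among the defaulter scores can be classified as non-default because the prior swamps the density evidence, which is exactly what makes low-default scoring hard.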

  19. Investigating the complex relationship between in situ Southern Ocean pCO2 and its ocean physics and biogeochemical drivers using a nonparametric regression approach

    CSIR Research Space (South Africa)

    Pretorius, W

    2014-01-01

    Full Text Available the relationship more accurately in terms of MSE, RMSE and MAE, than a standard parametric approach (multiple linear regression). These results provide a platform for using the developed nonparametric regression model based on in situ measurements to predict p...
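
The comparison criteria named above (MSE, RMSE and MAE) are standard and easy to compute; a generic sketch with illustrative numbers:

```python
def regression_errors(y_true, y_pred):
    """MSE, RMSE and MAE between observed and predicted values."""
    n = len(y_true)
    residuals = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(r * r for r in residuals) / n
    mae = sum(abs(r) for r in residuals) / n
    return {"MSE": mse, "RMSE": mse ** 0.5, "MAE": mae}

metrics = regression_errors([3.0, 5.0, 2.5], [2.5, 5.0, 3.5])
```

A lower value on all three criteria is what the abstract reports for the nonparametric fit relative to multiple linear regression.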

  20. A generalized additive regression model for survival times

    DEFF Research Database (Denmark)

    Scheike, Thomas H.

    2001-01-01

    Additive Aalen model; counting process; disability model; illness-death model; generalized additive models; multiple time-scales; non-parametric estimation; survival data; varying-coefficient models

  1. Evaluation of Nonparametric Probabilistic Forecasts of Wind Power

    DEFF Research Database (Denmark)

    Pinson, Pierre; Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg

    Predictions of wind power production for horizons up to 48-72 hours ahead comprise a highly valuable input to the methods for the daily management or trading of wind generation. Today, users of wind power predictions are not only provided with point predictions, which are estimates of the most likely outcome for each look-ahead time, but also with uncertainty estimates given by probabilistic forecasts. In order to avoid assumptions on the shape of predictive distributions, these probabilistic predictions are produced from nonparametric methods, and then take the form of a single or a set...
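
A standard score for evaluating such nonparametric quantile forecasts is the pinball (quantile) loss; a minimal sketch with illustrative numbers (this is a generic scoring rule, not necessarily the paper's evaluation framework):

```python
def pinball_loss(q_forecast, tau, observed):
    """Pinball loss for a tau-quantile forecast: under-forecasting is
    penalized with weight tau, over-forecasting with weight 1 - tau."""
    diff = observed - q_forecast
    return tau * diff if diff >= 0 else (tau - 1) * diff

# Hypothetical 90%-quantile forecast of wind power (MW) vs. two outcomes.
loss_below = pinball_loss(80.0, 0.9, 70.0)   # observation under the quantile
loss_above = pinball_loss(80.0, 0.9, 95.0)   # observation over the quantile
```

Averaged over many forecast-observation pairs, the loss is minimized by the true tau-quantile, which is what makes it suitable for checking probabilistic forecasts.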

  2. Performances of non-parametric statistics in sensitivity analysis and parameter ranking

    International Nuclear Information System (INIS)

    Saltelli, A.

    1987-01-01

    Twelve parametric and non-parametric sensitivity analysis techniques are compared in the case of non-linear model responses. The test models used are taken from the long-term risk analysis for the disposal of high level radioactive waste in a geological formation. They describe the transport of radionuclides through a set of engineered and natural barriers from the repository to the biosphere and to man. The output data from these models are the dose rates affecting the maximum exposed individual of a critical group at a given point in time. All the techniques are applied to the output from the same Monte Carlo simulations, where a modified version of the Latin Hypercube method is used for the sample selection. Hypothesis testing is systematically applied to quantify the degree of confidence in the results given by the various sensitivity estimators. The estimators are ranked according to their robustness and stability, on the basis of two test cases. The conclusion is that no estimator can be considered the best from all points of view, and the use of more than one estimator in sensitivity analysis is recommended.
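
One of the simplest non-parametric estimators of the kind compared in such studies is the Spearman rank correlation between each sampled input and the model output; a self-contained sketch (illustrative data, ties not handled):

```python
def ranks(xs):
    """Ranks starting at 1 (assumes no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank + 1
    return r

def spearman(x, y):
    """Spearman rank correlation via the classical d^2 formula."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Toy Monte Carlo sample: the output y depends (nonlinearly) on x1 only.
x1 = [0.1, 0.4, 0.2, 0.9, 0.6]
x2 = [5.0, 1.0, 3.0, 2.0, 4.0]
y = [v ** 3 for v in x1]
inputs = {"x1": x1, "x2": x2}
ranking = sorted(inputs, key=lambda name: -abs(spearman(inputs[name], y)))
```

Because it works on ranks, the estimator picks up the monotone but non-linear dependence on x1 that a plain Pearson correlation would understate.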

  3. Nonparametric Bayes Classification and Hypothesis Testing on Manifolds

    Science.gov (United States)

    Bhattacharya, Abhishek; Dunson, David

    2012-01-01

    Our first focus is prediction of a categorical response variable using features that lie on a general manifold. For example, the manifold may correspond to the surface of a hypersphere. We propose a general kernel mixture model for the joint distribution of the response and predictors, with the kernel expressed in product form and dependence induced through the unknown mixing measure. We provide simple sufficient conditions for large support and weak and strong posterior consistency in estimating both the joint distribution of the response and predictors and the conditional distribution of the response. Focusing on a Dirichlet process prior for the mixing measure, these conditions hold using von Mises-Fisher kernels when the manifold is the unit hypersphere. In this case, Bayesian methods are developed for efficient posterior computation using slice sampling. Next we develop Bayesian nonparametric methods for testing whether there is a difference in distributions between groups of observations on the manifold having unknown densities. We prove consistency of the Bayes factor and develop efficient computational methods for its calculation. The proposed classification and testing methods are evaluated using simulation examples and applied to spherical data applications. PMID:22754028

  4. Energy-saving and emission-abatement potential of Chinese coal-fired power enterprise: A non-parametric analysis

    International Nuclear Information System (INIS)

    Wei, Chu; Löschel, Andreas; Liu, Bing

    2015-01-01

    In the context of soaring demand for electricity, mitigating and controlling greenhouse gas emissions is a great challenge for China's power sector. Increasing attention has been placed on the evaluation of energy efficiency and CO2 abatement potential in the power sector. However, studies at the micro-level are relatively rare due to serious data limitations. This study uses the 2004 and 2008 Census data of Zhejiang province to construct a non-parametric frontier in order to assess the abatement space of energy and associated CO2 emission from China's coal-fired power enterprises. A Weighted Russell Directional Distance Function (WRDDF) is applied to construct an energy-saving potential index and a CO2 emission-abatement potential index. Both indicators depict the inefficiency level in terms of energy utilization and CO2 emissions of electric power plants. Our results show a substantial variation of energy-saving potential and CO2 abatement potential among enterprises. We find that large power enterprises are less efficient in 2004, but become more efficient than smaller enterprises in 2008. State-owned enterprises (SOE) are not significantly different in 2008 from 2004, but perform better than their non-SOE counterparts in 2008. This change in performance for large enterprises and SOE might be driven by the “top-1000 Enterprise Energy Conservation Action” that was implemented in 2006. - Highlights: • Energy-saving potential and CO2 abatement-potential for Chinese power enterprise are evaluated. • The potential to curb energy and emission shows great variation and dynamic changes. • Large enterprise is less efficient than small enterprise in 2004, but more efficient in 2008. • The state-owned enterprise performs better than non-state-owned enterprise in 2008

  5. Nonparametric estimation of age-specific reference percentile curves with radial smoothing.

    Science.gov (United States)

    Wan, Xiaohai; Qu, Yongming; Huang, Yao; Zhang, Xiao; Song, Hanping; Jiang, Honghua

    2012-01-01

    Reference percentile curves represent the covariate-dependent distribution of a quantitative measurement and are often used to summarize and monitor dynamic processes such as human growth. We propose a new nonparametric method based on a radial smoothing (RS) technique to estimate age-specific reference percentile curves assuming the underlying distribution is relatively close to normal. We compared the RS method with both the LMS and the generalized additive models for location, scale and shape (GAMLSS) methods using simulated data and found that our method has smaller estimation error than the two existing methods. We also applied the new method to analyze height growth data from children being followed in a clinical observational study of growth hormone treatment, and compared the growth curves between those with growth disorders and the general population. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. Semi-nonparametric estimates of interfuel substitution in U.S. energy demand

    Energy Technology Data Exchange (ETDEWEB)

    Serletis, Apostolos [Department of Economics, University of Calgary, Calgary, Alberta (Canada); Shahmoradi, Asghar [Faculty of Economics, University of Tehran, Tehran (Iran)

    2008-09-15

    This paper focuses on the demand for crude oil, natural gas, and coal in the United States in the context of two globally flexible functional forms - the Fourier and the Asymptotically Ideal Model (AIM) - estimated subject to full regularity, using methods suggested over 20 years ago by Gallant and Golub [Gallant, A. Ronald and Golub, Gene H. Imposing Curvature Restrictions on Flexible Functional Forms. Journal of Econometrics 26 (1984), 295-321] and recently used by Serletis and Shahmoradi [Serletis, A., Shahmoradi, A., 2005. Semi-nonparametric estimates of the demand for money in the United States. Macroeconomic Dynamics 9, 542-559] in the monetary demand systems literature. We provide a comparison in terms of a full set of elasticities and also a policy perspective, using (for the first time) parameter estimates that are consistent with global regularity. (author)

  7. Nonparametric Bayesian inference for mean residual life functions in survival analysis.

    Science.gov (United States)

    Poynor, Valerie; Kottas, Athanasios

    2018-01-19

    Modeling and inference for survival analysis problems typically revolves around different functions related to the survival distribution. Here, we focus on the mean residual life (MRL) function, which provides the expected remaining lifetime given that a subject has survived (i.e. is event-free) up to a particular time. This function is of direct interest in reliability, medical, and actuarial fields. In addition to its practical interpretation, the MRL function characterizes the survival distribution. We develop general Bayesian nonparametric inference for MRL functions built from a Dirichlet process mixture model for the associated survival distribution. The resulting model for the MRL function admits a representation as a mixture of the kernel MRL functions with time-dependent mixture weights. This model structure allows for a wide range of shapes for the MRL function. Particular emphasis is placed on the selection of the mixture kernel, taken to be a gamma distribution, to obtain desirable properties for the MRL function arising from the mixture model. The inference method is illustrated with a data set of two experimental groups and a data set involving right censoring. The supplementary material available at Biostatistics online provides further results on empirical performance of the model, using simulated data examples. © The Author 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
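
The MRL function itself has a simple empirical (plug-in) analogue that may help fix ideas; this is not the paper's Dirichlet process mixture model, just the raw empirical version for uncensored data:

```python
def mean_residual_life(sample, t):
    """Empirical MRL: average remaining lifetime among sample
    points that survive past t (uncensored data only)."""
    survivors = [x for x in sample if x > t]
    if not survivors:
        return 0.0
    return sum(x - t for x in survivors) / len(survivors)

lifetimes = [2.0, 3.0, 5.0, 7.0, 13.0]   # toy observed lifetimes
m0 = mean_residual_life(lifetimes, 0.0)  # the ordinary mean lifetime
m4 = mean_residual_life(lifetimes, 4.0)  # mean of {5-4, 7-4, 13-4}
```

The Bayesian nonparametric model in the paper smooths and regularizes exactly this quantity while also accommodating censoring.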

  8. Frontiers in Computer Education

    CERN Document Server

    Zhu, Egui; 2011 International Conference on Frontiers in Computer Education (ICFCE 2011)

    2012-01-01

    This book is the proceedings of the 2011 International Conference on Frontiers in Computer Education (ICFCE 2011) in Sanya, China, December 1-2, 2011. The contributions can be useful for researchers, software engineers, and programmers, all interested in promoting the computer and education development. Topics covered are computing and communication technology, network management, wireless networks, telecommunication, Signal and Image Processing, Machine Learning, educational management, educational psychology, educational system, education engineering, education technology and training.  The emphasis is on methods and calculi for computer science and education technology development, verification and verification tools support, experiences from doing developments, and the associated theoretical problems.

  9. Frontiers in nuclear chemistry

    International Nuclear Information System (INIS)

    Sood, D.D.; Reddy, A.V.R.; Pujari, P.K.

    1996-01-01

    This book contains articles on landmarks in nuclear and radiochemistry that take the reader through scientific history spanning five decades, from the time of Roentgen to the middle of this century. Articles on nuclear fission and the back end of the nuclear fuel cycle give an insight into the current status of the subject. Reviews of frontier areas such as lanthanides, actinides, muonium chemistry, accelerator-based nuclear chemistry, fast radiochemical separations and nuclear medicine bring out the multidisciplinary nature of the nuclear sciences. The book also includes an article on environmental radiochemistry and safety. Chapters relevant to INIS are indexed separately

  10. Productive efficiency of tea industry: A stochastic frontier approach

    African Journals Online (AJOL)

    USER

    2010-06-21

    Jun 21, 2010 ... Key words: Technical efficiency, stochastic frontier, translog ... present low performance of the tea industry in Bangladesh. ... The Technical inefficiency effect .... administrative, technical, clerical, sales and purchase staff.

  11. Case studies, cross-site comparisons, and the challenge of generalization: comparing agent-based models of land-use change in frontier regions.

    Science.gov (United States)

    Parker, Dawn C; Entwisle, Barbara; Rindfuss, Ronald R; Vanwey, Leah K; Manson, Steven M; Moran, Emilio; An, Li; Deadman, Peter; Evans, Tom P; Linderman, Marc; Rizi, S Mohammad Mussavi; Malanson, George

    2008-01-01

    Cross-site comparisons of case studies have been identified as an important priority by the land-use science community. From an empirical perspective, such comparisons potentially allow generalizations that may contribute to production of global-scale land-use and land-cover change projections. From a theoretical perspective, such comparisons can inform development of a theory of land-use science by identifying potential hypotheses and supporting or refuting evidence. This paper undertakes a structured comparison of four case studies of land-use change in frontier regions that follow an agent-based modeling approach. Our hypothesis is that each case study represents a particular manifestation of a common process. Given differences in initial conditions among sites and the time at which the process is observed, actual mechanisms and outcomes are anticipated to differ substantially between sites. Our goal is to reveal both commonalities and differences among research sites, model implementations, and ultimately, conclusions derived from the modeling process.

  12. A multi-instrument non-parametric reconstruction of the electron pressure profile in the galaxy cluster CLJ1226.9+3332

    Science.gov (United States)

    Romero, C.; McWilliam, M.; Macías-Pérez, J.-F.; Adam, R.; Ade, P.; André, P.; Aussel, H.; Beelen, A.; Benoît, A.; Bideaud, A.; Billot, N.; Bourrion, O.; Calvo, M.; Catalano, A.; Coiffard, G.; Comis, B.; de Petris, M.; Désert, F.-X.; Doyle, S.; Goupy, J.; Kramer, C.; Lagache, G.; Leclercq, S.; Lestrade, J.-F.; Mauskopf, P.; Mayet, F.; Monfardini, A.; Pascale, E.; Perotto, L.; Pisano, G.; Ponthieu, N.; Revéret, V.; Ritacco, A.; Roussel, H.; Ruppin, F.; Schuster, K.; Sievers, A.; Triqueneaux, S.; Tucker, C.; Zylka, R.

    2018-04-01

    Context. In the past decade, sensitive, resolved Sunyaev-Zel'dovich (SZ) studies of galaxy clusters have become common. Whereas many previous SZ studies have parameterized the pressure profiles of galaxy clusters, non-parametric reconstructions will provide insights into the thermodynamic state of the intracluster medium. Aims: We seek to recover the non-parametric pressure profiles of the high redshift (z = 0.89) galaxy cluster CLJ 1226.9+3332 as inferred from SZ data from the MUSTANG, NIKA, Bolocam, and Planck instruments, which all probe different angular scales. Methods: Our non-parametric algorithm makes use of logarithmic interpolation, which under the assumption of ellipsoidal symmetry is analytically integrable. For MUSTANG, NIKA, and Bolocam we derive a non-parametric pressure profile independently and find good agreement among the instruments. In particular, we find that the non-parametric profiles are consistent with a fitted generalized Navarro-Frenk-White (gNFW) profile. Given the ability of Planck to constrain the total signal, we include a prior on the integrated Compton Y parameter as determined by Planck. Results: For a given instrument, constraints on the pressure profile diminish rapidly beyond the field of view. The overlap in spatial scales probed by these four datasets is therefore critical in checking for consistency between instruments. By using multiple instruments, our analysis of CLJ 1226.9+3332 covers a large radial range, from the central regions (0.05 R500) to the cluster outskirts, and will be of interest for the next generation of SZ instruments such as NIKA2 and MUSTANG2.

  13. The geometry of distributional preferences and a non-parametric identification approach: The Equality Equivalence Test.

    Science.gov (United States)

    Kerschbamer, Rudolf

    2015-05-01

    This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure - the Equality Equivalence Test - that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity.

  14. Nutrition Frontiers E-Newsletter | Division of Cancer Prevention

    Science.gov (United States)

    The Nutritional Science Research Group, Division of Cancer Prevention at NCI issues a quarterly electronic newsletter, Nutrition Frontiers, that highlights emerging evidence linking diet to cancer prevention and showcases recent findings about who will likely benefit most from dietary change.

  15. Assessing T cell clonal size distribution: a non-parametric approach.

    Directory of Open Access Journals (Sweden)

    Olesya V Bolkhovskaya

    Full Text Available Clonal structure of the human peripheral T-cell repertoire is shaped by a number of homeostatic mechanisms, including antigen presentation, cytokine and cell regulation. Its accurate tuning leads to a remarkable ability to combat pathogens in all their variety, while systemic failures may lead to severe consequences like autoimmune diseases. Here we develop and make use of a non-parametric statistical approach to assess T cell clonal size distributions from recent next generation sequencing data. For 41 healthy individuals and a patient with ankylosing spondylitis who underwent treatment, we invariably find power law scaling over several decades and for the first time calculate quantitatively meaningful values of the decay exponent. It has proved to be much the same among healthy donors, significantly different for an autoimmune patient before therapy, and converging towards a typical value afterwards. We discuss implications of the findings for theoretical understanding and mathematical modeling of adaptive immunity.
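
The decay exponent of a power-law clone-size distribution can be estimated by maximum likelihood; the continuous (Hill-type) estimator below is a generic sketch on synthetic data, not the authors' estimator:

```python
import math

def powerlaw_exponent(sizes, xmin=1.0):
    """Continuous maximum-likelihood estimate of the decay exponent:
    alpha = 1 + n / sum(ln(x / xmin)), over the tail x >= xmin."""
    tail = [x for x in sizes if x >= xmin]
    return 1 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic clone sizes generated deterministically from a power law
# with true exponent 2.5, via the inverse CDF at grid points.
alpha_true = 2.5
u = [(i + 0.5) / 10 for i in range(10)]
sample = [ui ** (-1 / (alpha_true - 1)) for ui in u]
alpha_hat = powerlaw_exponent(sample)
```

With only ten grid points the estimate already lands near the true exponent; on real repertoire data the choice of xmin is the delicate part.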

  16. Indoor Positioning Using Nonparametric Belief Propagation Based on Spanning Trees

    Directory of Open Access Journals (Sweden)

    Savic Vladimir

    2010-01-01

    Full Text Available Nonparametric belief propagation (NBP) is one of the best-known methods for cooperative localization in sensor networks. It is capable of providing information about location estimation with appropriate uncertainty and of accommodating non-Gaussian distance measurement errors. However, the accuracy of NBP is questionable in loopy networks. Therefore, in this paper, we propose a novel approach, NBP based on spanning trees (NBP-ST), created by the breadth first search (BFS) method. In addition, we propose a reliable indoor model based on measurements obtained in our lab. According to our simulation results, NBP-ST performs better than NBP in terms of accuracy and communication cost in networks with high connectivity (i.e., highly loopy networks). Furthermore, the computational and communication costs are nearly constant with respect to the transmission radius. However, the drawbacks of the proposed method are a somewhat higher computational cost and poor performance in low-connectivity networks.
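
The BFS spanning-tree construction at the heart of NBP-ST can be sketched as follows (the graph and root below are illustrative; message passing itself is omitted):

```python
from collections import deque

def bfs_spanning_tree(adjacency, root):
    """Breadth-first-search spanning tree: returns the tree edges on
    which a loop-free belief propagation pass could then be run."""
    visited = {root}
    edges = []
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in visited:
                visited.add(v)
                edges.append((u, v))
                queue.append(v)
    return edges

# A loopy 4-node network: a square with one diagonal.
graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
tree = bfs_spanning_tree(graph, 0)
```

The tree keeps n - 1 of the edges and discards the rest, which is how the loops that trouble plain NBP are removed.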

  17. A non-parametric Bayesian approach to decompounding from high frequency data

    NARCIS (Netherlands)

    Gugushvili, Shota; van der Meulen, F.H.; Spreij, Peter

    2016-01-01

    Given a sample from a discretely observed compound Poisson process, we consider non-parametric estimation of the density f0 of its jump sizes, as well as of its intensity λ0. We take a Bayesian approach to the problem and specify the prior on f0 as the Dirichlet location mixture of normal densities.

  18. Fermilab Accelerator R&D Program Towards Intensity Frontier Accelerators: Status and Progress

    Energy Technology Data Exchange (ETDEWEB)

    Shiltsev, Vladimir [Fermilab

    2016-11-15

    The 2014 P5 report indicated the accelerator-based neutrino and rare decay physics research as a centrepiece of the US domestic HEP program at Fermilab. Operation, upgrade and development of the accelerators for the near-term and longer-term particle physics program at the Intensity Frontier face formidable challenges. Here we discuss key elements of the accelerator physics and technology R&D program toward future multi-MW proton accelerators and present its status and progress.

  19. Annual symposium on Frontiers in Science

    Energy Technology Data Exchange (ETDEWEB)

    Metzger, N.; Fulton, K.R.

    1998-12-31

    This final report summarizes activities conducted for the National Academy of Sciences' Annual Symposium on Frontiers of Science with support from the US Department of Energy for the period July 1, 1993 through May 31, 1998. During the report period, five Frontiers of Science symposia were held at the Arnold and Mabel Beckman Center of the National Academies of Sciences and Engineering. For each Symposium, an organizing committee appointed by the NAS President selected and planned the eight sessions for the Symposium and identified general participants for invitation by the NAS President. These Symposia accomplished their goal of bringing together outstanding younger (age 45 or less) scientists to hear presentations in disciplines outside their own and to discuss exciting advances and opportunities in their fields in a format that encourages, and allows adequate time for, informal one-on-one discussions among participants. Of the 458 younger scientists who participated, over a quarter (124) were women. Participant lists for all symposia (1993-1997) are attached. The scientific participants were leaders in basic research from academic, industrial, and federal laboratories in such disciplines as astronomy, astrophysics, atmospheric science, biochemistry, cell biology, chemistry, computer science, earth sciences, engineering, genetics, material sciences, mathematics, microbiology, neuroscience, physics, and physiology. For each symposium, the 24 speakers and discussants on the program were urged to focus their presentations on current cutting-edge research in their field for a scientifically sophisticated but non-specialist audience, and to provide a sense of the experimental data - what is actually measured and seen in the various fields. They were also asked to address questions such as: What are the major research problems and unique tools in their field? What are the current limitations on advances as well as the frontiers? Speakers were asked to provide a

  20. Technology Transfer Strategies for Creating Growth Opportunities in Frontier Markets of Sub-Saharan Africa

    DEFF Research Database (Denmark)

    Nielsen, Ulrik B.

    In the past decade, Africa has developed from being an extremely impoverished continent with discouraging prospects to a more promising destination and home to some of the fastest growing Frontier Market economies. Approximately 75% of Africans rely on agriculture for their livelihoods, making...... to create growth opportunities in Frontier Markets of Sub-Saharan Africa....

  1. Exact nonparametric confidence bands for the survivor function.

    Science.gov (United States)

    Matthews, David

    2013-10-12

    A method to produce exact simultaneous confidence bands for the empirical cumulative distribution function that was first described by Owen, and subsequently corrected by Jager and Wellner, is the starting point for deriving exact nonparametric confidence bands for the survivor function of any positive random variable. We invert a nonparametric likelihood test of uniformity, constructed from the Kaplan-Meier estimator of the survivor function, to obtain simultaneous lower and upper bands for the function of interest with specified global confidence level. The method involves calculating a null distribution and associated critical value for each observed sample configuration. However, Noe recursions and the Van Wijngaarden-Decker-Brent root-finding algorithm provide the necessary tools for efficient computation of these exact bounds. Various aspects of the effect of right censoring on these exact bands are investigated, using as illustrations two observational studies of survival experience among non-Hodgkin's lymphoma patients and a much larger group of subjects with advanced lung cancer enrolled in trials within the North Central Cancer Treatment Group. Monte Carlo simulations confirm the merits of the proposed method of deriving simultaneous interval estimates of the survivor function across the entire range of the observed sample. This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada. It was begun while the author was visiting the Department of Statistics, University of Auckland, and completed during a subsequent sojourn at the Medical Research Council Biostatistics Unit in Cambridge. The support of both institutions, in addition to that of NSERC and the University of Waterloo, is greatly appreciated.
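
The Kaplan-Meier estimator on which the inverted likelihood test is built can be sketched in a few lines; this is a generic implementation with toy data, not the authors' band construction (censored observations tied with events are, as usual, treated as still at risk at that time):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survivor function.
    times: observed times; events: 1 = event, 0 = right-censored.
    Returns (time, S(time)) at each distinct event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    s = 1.0
    curve = []
    at_risk = len(times)
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = removed = 0
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:
            s *= (at_risk - deaths) / at_risk
            curve.append((t, s))
        at_risk -= removed
    return curve

times = [1, 2, 2, 3, 4]       # toy follow-up times
events = [1, 1, 0, 1, 1]      # one censored observation at t = 2
curve = kaplan_meier(times, events)
```

Each step multiplies the running survival probability by the fraction of at-risk subjects surviving that event time, which is the quantity the exact bands then bracket.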

  2. On the Choice of Difference Sequence in a Unified Framework for Variance Estimation in Nonparametric Regression

    KAUST Repository

    Dai, Wenlin; Tong, Tiejun; Zhu, Lixing

    2017-01-01

    Difference-based methods do not require estimating the mean function in nonparametric regression and are therefore popular in practice. In this paper, we propose a unified framework for variance estimation that combines the linear regression method with the higher-order difference estimators systematically. The unified framework has greatly enriched the existing literature on variance estimation that includes most existing estimators as special cases. More importantly, the unified framework has also provided a smart way to solve the challenging difference sequence selection problem that remains a long-standing controversial issue in nonparametric regression for several decades. Using both theory and simulations, we recommend to use the ordinary difference sequence in the unified framework, no matter if the sample size is small or if the signal-to-noise ratio is large. Finally, to cater for the demands of the application, we have developed a unified R package, named VarED, that integrates the existing difference-based estimators and the unified estimators in nonparametric regression and have made it freely available in the R statistical program http://cran.r-project.org/web/packages/.
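
The ordinary (first-order) difference sequence recommended above yields an almost one-line variance estimator; a sketch on synthetic data (the R package VarED mentioned in the abstract is the reference implementation, this is only an illustration):

```python
import math
import random

def diff_variance(y):
    """First-order difference-based variance estimator:
    sum of squared successive differences over 2(n - 1).
    No estimate of the mean function is required."""
    n = len(y)
    return sum((y[i + 1] - y[i]) ** 2 for i in range(n - 1)) / (2 * (n - 1))

# Smooth trend plus i.i.d. Gaussian noise with sigma = 0.3.
rng = random.Random(42)
y = [0.5 * math.sin(i / 20) + rng.gauss(0, 0.3) for i in range(200)]
sigma2_hat = diff_variance(y)   # should be close to 0.3 ** 2 = 0.09
```

Differencing cancels the slowly varying mean function while the i.i.d. noise contributes 2 sigma^2 per squared difference, hence the divisor.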

  4. Scaling HEP to Web size with RESTful protocols: The frontier example

    International Nuclear Information System (INIS)

    Dykstra, Dave

    2011-01-01

    The World-Wide-Web has scaled to an enormous size. The largest single contributor to its scalability is the HTTP protocol, particularly when used in conformity to REST (REpresentational State Transfer) principles. High Energy Physics (HEP) computing also has to scale to an enormous size, so it makes sense to base much of it on RESTful protocols. Frontier, which reads databases with an HTTP-based RESTful protocol, has successfully scaled to deliver production detector conditions data from both the CMS and ATLAS LHC detectors to hundreds of thousands of computer cores worldwide. Frontier is also able to re-use a large amount of standard software that runs the Web: on the clients, caches, and servers. I discuss the specific ways in which HTTP and REST enable high scalability for Frontier. I also briefly discuss another protocol used in HEP computing that is HTTP-based and RESTful, and another protocol that could benefit from it. My goal is to encourage HEP protocol designers to consider HTTP and REST whenever the same information is needed in many places.

  5. Stochastic Frontier Production Analysis of Tobacco Growers in District Mardan, Pakistan

    International Nuclear Information System (INIS)

    Saddozai, K. N; Nasrullah, M.; Khan, N. P.

    2015-01-01

    The aim of this research was to analyze the stochastic production frontier of tobacco growers. This parametric approach was used to investigate the technical efficiency of growers. The primary data were gleaned during 2014-15 from a sampled population of three villages, namely Takkar Kali, Garo Shah and Passand Kali of Takhtbhai Tehsil, Mardan district of Khyber Pakhtunkhwa province. A multi-stage sampling technique was utilized to obtain the desired sample size of 120 tobacco growers. The major findings of the stochastic production frontier analysis indicate that all variables were statistically significant and contributed positively to tobacco production, except fertilizer, which was significant but inversely related to tobacco production. The mean technical efficiency was estimated at 0.85, indicating that tobacco growers could increase efficiency by a further 15% with the given level of inputs. The inefficiency model estimates demonstrate that only the experience of tobacco growers in the study area significantly reduced grower inefficiency. The study concluded that tobacco growers are operating in the second stage of production; therefore, tobacco production can still be enhanced. It is recommended that season-long trainings for tobacco growers be undertaken by the concerned authorities to enhance crop management skills for rational use of inputs. (author)
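
While the paper fits a stochastic frontier by maximum likelihood, the flavour of a frontier-based efficiency score can be conveyed with the simpler corrected-OLS (COLS) device: fit log output on log input, shift the fitted line up to the best performer, and read off efficiencies. All numbers below are illustrative, and this is a deliberate simplification of the paper's method:

```python
import math

def cols_efficiency(log_x, log_y):
    """Corrected OLS: fit log y = a + b log x by least squares, shift
    the line to the best observation, and score each unit by
    exp(residual - max residual), so the best unit scores 1.0."""
    n = len(log_x)
    mx, my = sum(log_x) / n, sum(log_y) / n
    b = sum((x - mx) * (y - my) for x, y in zip(log_x, log_y)) / \
        sum((x - mx) ** 2 for x in log_x)
    a = my - b * mx
    resid = [y - (a + b * x) for x, y in zip(log_x, log_y)]
    top = max(resid)
    return [math.exp(r - top) for r in resid]

# Toy log input / log output data for four growers.
eff = cols_efficiency([0.0, 1.0, 2.0, 3.0], [0.0, 0.9, 2.1, 2.8])
```

A stochastic frontier replaces the deterministic shift with a composed error (noise plus one-sided inefficiency), which is what yields the mean efficiency of 0.85 reported above.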

  6. Some effects on the efficient frontier of the investment strategy: a preliminary approach

    Directory of Open Access Journals (Sweden)

    Méndez-Rodríguez, Paz

    2013-12-01

    Full Text Available In this work an indicator of the degree of social responsibility of mutual funds is proposed, based on the mutual fund's screening policy and on the quality of the information provided by the fund manager. Once obtained, this indicator is included as a constraint in the classical mean-variance optimization model. An exploratory numerical experiment is presented in order to check the possible effect of different SRI strategies on the efficient frontier.
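The abstract's idea, a classical mean-variance optimization with the SRI indicator added as a constraint, can be sketched as follows; the four assets, their moments, the SRI scores and the floor `s_min` are all invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical asset data: expected returns, volatilities, correlations, SRI scores
mu = np.array([0.08, 0.12, 0.10, 0.07])
vol = np.array([0.12, 0.25, 0.18, 0.10])
C = np.array([[1.0, 0.3, 0.2, 0.1],
              [0.3, 1.0, 0.4, 0.2],
              [0.2, 0.4, 1.0, 0.3],
              [0.1, 0.2, 0.3, 1.0]])
Sigma = np.outer(vol, vol) * C          # covariance matrix
sri = np.array([0.9, 0.3, 0.6, 0.8])    # social-responsibility scores in [0, 1]
s_min = 0.7                             # minimum acceptable portfolio SRI score

def min_variance(target_return):
    """Minimum-variance weights for a target return, subject to the SRI floor."""
    cons = [
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},
        {"type": "ineq", "fun": lambda w: w @ mu - target_return},
        {"type": "ineq", "fun": lambda w: w @ sri - s_min},
    ]
    res = minimize(lambda w: w @ Sigma @ w, np.full(4, 0.25), method="SLSQP",
                   bounds=[(0.0, 1.0)] * 4, constraints=cons)
    return res.x

# Trace the SRI-constrained efficient frontier over a grid of target returns
frontier = []
for r in np.linspace(0.075, 0.092, 5):
    w = min_variance(r)
    frontier.append((r, float(np.sqrt(w @ Sigma @ w))))
```

Comparing `frontier` with and without the SRI constraint (drop the third constraint) reproduces the kind of effect the numerical experiment explores.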

  7. Frontiers in particle science and technology

    International Nuclear Information System (INIS)

    Goddard, D.T.; Lawson, S.; Williams, R.A.

    2002-07-01

    The study of particulate materials and interfaces is a dominant discipline within the chemical, pharmaceutical, biological, mineral, energy, consumer and healthcare products sectors. Its role is set to expand with advances in engineered particulates, nanoscience and innovations in materials science and processing. This book addresses some key issues in these new frontiers for the research and industrial community. Such issues will continue to affect the quality of our everyday lives.

  8. Estimation of the limit of detection with a bootstrap-derived standard error by a partly non-parametric approach. Application to HPLC drug assays

    DEFF Research Database (Denmark)

    Linnet, Kristian

    2005-01-01

    Bootstrap, HPLC, limit of blank, limit of detection, non-parametric statistics, type I and II errors
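Since this record lists only keywords, the following is a rough sketch of the general idea they suggest: a nonparametric (percentile-based) limit of blank combined with a bootstrap-derived standard error. The data and constants are invented, and the published procedure may differ in detail.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
# Hypothetical HPLC peak responses: blank samples and a low-concentration sample
blanks = rng.normal(0.05, 0.02, 60)
low = rng.normal(0.20, 0.03, 60)

def lod(blank, low_sample, alpha=0.05, beta=0.05):
    """LoD = nonparametric limit of blank (type I error alpha)
    plus a parametric spread term controlling the type II error beta."""
    lob = np.quantile(blank, 1.0 - alpha)
    return lob + norm.ppf(1.0 - beta) * low_sample.std(ddof=1)

point = lod(blanks, low)

# Bootstrap standard error of the LoD estimate
boot = np.array([
    lod(rng.choice(blanks, blanks.size, replace=True),
        rng.choice(low, low.size, replace=True))
    for _ in range(1000)
])
se = boot.std(ddof=1)
```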

  9. Frontier and Border Regions in Early Modern Europe

    NARCIS (Netherlands)

    Esser, R.M.; Ellis, Steven G.

    2013-01-01

    That regional identities are constructed is now something of a truism in academic research. More recently regions have been conceptualized in the framework of Frontier and Border Studies, thus emphasizing their relationship to their neighbours in another state across a boundary line. In early modern

  10. Vikings and the Western Frontier

    OpenAIRE

    Wienberg, Jes

    2015-01-01

    The article investigates how and why the Vikings became world-famous. The point of departure is the World Exposition in Chicago in 1893, where an icon for the Viking, a replica of the Gokstad ship, arrived the very same day as Frederick Jackson Turner presented his frontier thesis. The origin of the word Viking, the romantic revival of the Viking, the creation of the Viking Age and the criticism of the Viking and the Viking Age is discussed. Finally the article argues that the Viking and the ...

  11. New Frontiers of Land Control

    DEFF Research Database (Denmark)

    Lee Peluso, Nancy; Lund, Christian

    2011-01-01

    rights, and territories created, extracted, produced, or protected on land. Primitive and on-going forms of accumulation, frontiers, enclosures, territories, grabs, and racializations have all been associated with mechanisms for land control. Agrarian environments have been transformed by processes of de...... analytic tools that had seemed to have timeless applicability with new frameworks, concepts, and theoretical tools. What difference does land control make? These contributions to the debates demonstrate that the answers have been shaped by conflicts, contexts, histories, and agency, as land has been...

  12. Evaluating energy efficiency for airlines: An application of Virtual Frontier Dynamic Slacks Based Measure

    International Nuclear Information System (INIS)

    Cui, Qiang; Li, Ye; Yu, Chen-lu; Wei, Yi-Ming

    2016-01-01

    The fast-growing Revenue Passenger Kilometers and the relatively lagging energy supply of the aviation industry impel airlines to improve energy efficiency. In this paper, we focus on evaluating airline energy efficiency and analyzing its influencing factors. Number of employees and aviation kerosene are chosen as the inputs; Revenue Ton Kilometers, Revenue Passenger Kilometers and total business income are the outputs; capital stock is selected as the dynamic factor. A new model, the Virtual Frontier Dynamic Slacks Based Measure, is proposed to calculate the energy efficiencies of 21 airlines from 2008 to 2012. We verify two important properties to demonstrate the advantages of the new model. A regression is then run to analyze the influencing factors of airline energy efficiency. The main findings are: 1. The overall energy efficiency of Malaysia Airlines is the highest during 2008–2012. 2. Per capita Gross Domestic Product, the average service age of the fleet and average haul distance have significant impacts on the efficiency score. 3. The difference between full-service carriers and low-cost carriers has no significant effect on airline energy efficiency. - Highlights: • A Virtual Frontier Dynamic Slacks Based Measure is developed. • 21 airlines' energy efficiencies are evaluated. • Malaysia Airlines has the highest overall energy efficiency. • Three explanatory variables have significant impacts.
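The Virtual Frontier Dynamic Slacks Based Measure itself is too involved for a short sketch, but the basic DEA building block behind such models, an input-oriented efficiency score obtained from a linear program, can be illustrated. The airline inputs (employees, kerosene) and output (revenue ton kilometres) below are invented numbers.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: rows = airlines, columns = inputs / outputs
X = np.array([[120.0, 300.0],   # employees (hundreds), kerosene (kt)
              [150.0, 260.0],
              [ 90.0, 340.0],
              [200.0, 400.0]])
Y = np.array([[500.0], [480.0], [450.0], [560.0]])  # revenue ton km (millions)
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

def ccr_efficiency(o):
    """Input-oriented CCR score of unit o: min theta s.t. a frontier
    combination of units uses at most theta * inputs of o and at least its outputs."""
    c = np.r_[1.0, np.zeros(n)]            # decision variables: theta, lambda_1..n
    A_ub = np.vstack([
        np.c_[-X[o], X.T],                 # sum_j lam_j * x_ij <= theta * x_io
        np.c_[np.zeros(s), -Y.T],          # sum_j lam_j * y_rj >= y_ro
    ])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

scores = [ccr_efficiency(o) for o in range(n)]
```

Efficient airlines receive a score of 1; inefficient ones receive a score below 1 measuring the feasible proportional input contraction.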

  13. Nonlinear science as a fluctuating research frontier

    International Nuclear Information System (INIS)

    He Jihuan

    2009-01-01

    Nonlinear science has had quite a triumph in all conceivable applications in science and technology, especially in high energy physics and nanotechnology. COBE, which was awarded the physics Nobel Prize in 2006, might well be more closely related to nonlinear science than to the Big Bang theory. Five categories of nonlinear subjects at the research frontier are pointed out.

  14. Stochastic Frontier Models with Dependent Errors based on Normal and Exponential Margins

    Directory of Open Access Journals (Sweden)

    Gómez-Déniz, Emilio

    2017-06-01

    Full Text Available Following the recent work of Gómez-Déniz and Pérez-Rodríguez (2014), this paper extends the results obtained there to the normal-exponential distribution with dependence. Accordingly, the main aim of the present paper is to enhance stochastic production frontier and stochastic cost frontier modelling by proposing a bivariate distribution for dependent errors which allows us to nest the classical models. Closed-form expressions for the error term and technical efficiency are provided. An illustration using real data from the econometric literature shows the applicability of the proposed model.

  15. A Bayesian Nonparametric Meta-Analysis Model

    Science.gov (United States)

    Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G.

    2015-01-01

    In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall…

  16. Nonparametric predictive inference for combining diagnostic tests with parametric copula

    Science.gov (United States)

    Muhammad, Noryanti; Coolen, F. P. A.; Coolen-Maturi, T.

    2017-09-01

    Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine and health care. The Receiver Operating Characteristic (ROC) curve is a popular statistical tool for describing the performance of diagnostic tests, and the area under the ROC curve (AUC) is often used as a measure of a test's overall performance. In this paper, we are interested in developing strategies for combining test results in order to increase diagnostic accuracy. We introduce nonparametric predictive inference (NPI) for combining two diagnostic test results while accounting for their dependence structure using a parametric copula. NPI is a frequentist statistical framework for inference on a future observation based on past data observations; it uses lower and upper probabilities to quantify uncertainty and is based on only a few modelling assumptions. A copula is a joint distribution function whose marginals are all uniformly distributed; it can be used to model the dependence separately from the marginal distributions. In this research, we estimate the copula density using a parametric method, namely maximum likelihood estimation (MLE). We investigate the performance of the proposed method on data sets from the literature and discuss results showing how the method performs for different families of copulas. Finally, we briefly outline related challenges and opportunities for future research.
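The ROC/AUC machinery the abstract builds on can be sketched with the Mann-Whitney estimator; the simulated correlated test scores and the equal-weight score combination below are illustrative assumptions, not the NPI-with-copula procedure itself.

```python
import numpy as np

rng = np.random.default_rng(2)

def auc(healthy, diseased):
    """Mann-Whitney estimate of the area under the ROC curve:
    the probability that a diseased score exceeds a healthy one."""
    h = np.asarray(healthy)[:, None]
    d = np.asarray(diseased)[None, :]
    return float((d > h).mean() + 0.5 * (d == h).mean())

# Two correlated diagnostic tests (Gaussian-style dependence, rho = 0.5)
cov = [[1.0, 0.5], [0.5, 1.0]]
healthy = rng.multivariate_normal([0.0, 0.0], cov, 300)
diseased = rng.multivariate_normal([1.0, 0.8], cov, 300)

auc1 = auc(healthy[:, 0], diseased[:, 0])        # test 1 alone
auc2 = auc(healthy[:, 1], diseased[:, 1])        # test 2 alone
combined = auc(healthy.sum(axis=1), diseased.sum(axis=1))  # naive combination
```

How much a combination helps depends on the dependence between the two tests, which is exactly what the copula in the paper is meant to capture.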

  17. Frontier between medium and large break loss of coolant accidents of pressurized water reactor

    Science.gov (United States)

    Kim, Taewan

    2017-10-01

    In order to provide the probabilistic safety assessment with a more realistic condition for calculating the frequency of the initiating event, a study on the frontier between medium-break and large-break loss-of-coolant accidents has been performed using the best-estimate thermal-hydraulic code TRACE. A methodology based on the combination of the essential safety features and a system parameter has been applied to the Zion nuclear power plant to evaluate the validity of the frontier utilized in the probabilistic safety assessment. The peak cladding temperature was chosen as the system parameter representing system behavior during the transient. The results showed that the frontier should be extended from 6 in. to 10 in. based on the required safety functions and the system response.

  18. Global CO2 efficiency: Country-wise estimates using a stochastic cost frontier

    International Nuclear Information System (INIS)

    Herrala, Risto; Goel, Rajeev K.

    2012-01-01

    This paper examines global carbon dioxide (CO2) efficiency by employing a stochastic cost frontier analysis of about 170 countries in 1997 and 2007. The main contribution lies in providing a new approach to environmental efficiency estimation, in which the efficiency estimates quantify the distance from the policy objective of minimum emissions. We are able to examine a very large pool of nations and provide country-wise efficiency estimates. We estimate three econometric models, corresponding with alternative interpretations of the Cancun vision (Conference of the Parties 2011). The models reveal progress in global environmental efficiency during the preceding decade. The estimates indicate vast differences in efficiency levels and efficiency changes across countries. The highest efficiency levels are observed in Africa and Europe, while the lowest are clustered around China. The largest efficiency gains were observed in central and eastern Europe. CO2 efficiency also improved in the US and China, the two largest emitters, but their ranking in terms of CO2 efficiency deteriorated. Policy implications are discussed. - Highlights: ► We estimate global environmental efficiency in line with the Cancun vision, using a stochastic cost frontier. ► The study covers 170 countries during a 10 year period, ending in 2007. ► The biggest improvements occurred in Europe, and efficiency fell in South America. ► The efficiency ranking of the US and China, the largest emitters, deteriorated. ► In 2007, the highest efficiency was observed in Africa and Europe, and the lowest around China.

  19. Frontiere, confini, limiti: e la geografia?

    Directory of Open Access Journals (Sweden)

    Marcello Tanca

    2011-11-01

    Full Text Available The adjective "geographic" does not denote something conceived in opposition to "cultural" elements, but rather the outcome of the encounter and exploration of cultural trajectories - the boundaries that people establish and realize in their contact with the earth's reality and with the societies upon it. The historian Lucien Febvre, student and friend of Paul Vidal de La Blache, wrote that "in geography no issue is more important than that of subdivisions". Alongside the concept of natural frontiers (which until the 1700s indicated a physical element of greater visibility and stability than any human work), during the eighteenth century the problem of subdivision brought complications with it - namely, the identification of a criterion for dividing the earth's surface into parts. Since the evidence of each geographical representation is actually the product of a self-performative mechanism (Dematteis), this means that we all contribute, through our collective practices, to giving a meaning and a symbolic function to physical objects, frontiers, boundaries and limits. In this respect, individual views lose the often fixed aspect of personal opinion, and the connotation of what is "natural" assumes a different cognitive meaning, along with the political and symbolic points of view that mankind often embraces.

  20. Discovery of a Supernova in HST imaging of the MACSJ0717 Frontier Field

    Science.gov (United States)

    Rodney, Steven A.; Lotz, Jennifer; Strolger, Louis-Gregory

    2013-10-01

    We report the discovery of a supernova (SN) in Hubble Space Telescope (HST) observations centered on the galaxy cluster MACSJ0717. It was discovered in the F814W (i) band of the Advanced Camera for Surveys (ACS), in observations collected as part of the ongoing HST Frontier Fields (HFF) program (PI: J. Lotz, HST PID 13498). The FrontierSN ID for this object is SN HFF13Zar (nicknamed "SN Zara").