WorldWideScience

Sample records for nonparametric frontier model

  1. Stochastic semi-nonparametric frontier estimation of electricity distribution networks: Application of the StoNED method in the Finnish regulatory model

    International Nuclear Information System (INIS)

    Kuosmanen, Timo

    2012-01-01

    Electricity distribution networks are a prime example of natural local monopolies. In many countries, electricity distribution is regulated by the government. Many regulators apply frontier estimation techniques such as data envelopment analysis (DEA) or stochastic frontier analysis (SFA) as an integral part of their regulatory framework. While more advanced methods that combine a nonparametric frontier with a stochastic error term are known in the literature, in practice regulators continue to apply simplistic methods. This paper reports the main results of a project commissioned by the Finnish regulator to further develop the cost frontier estimation in its regulatory framework. The key objectives of the project were to integrate a stochastic SFA-style noise term into the nonparametric, axiomatic DEA-style cost frontier, and to take the heterogeneity of firms and their operating environments better into account. To achieve these objectives, a new method called stochastic nonparametric envelopment of data (StoNED) was examined. Based on the insights and experience gained in the empirical analysis using real data on the regulated networks, the Finnish regulator put the StoNED method into use from 2012 onwards.
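
    StoNED itself rests on convex nonparametric least squares plus a noise decomposition; as a loose, self-contained illustration of the nonparametric-frontier ingredient only, the sketch below computes free-disposal-hull (FDH) style input-efficiency scores for hypothetical single-input, single-output firms. All data and the `fdh_input_efficiency` helper are invented for illustration and are not part of the Finnish regulatory model.

```python
import numpy as np

# Hypothetical single-input, single-output firms (invented data).
x = np.array([2.0, 3.0, 4.0, 6.0, 6.0])   # inputs
y = np.array([1.0, 3.0, 3.5, 3.0, 6.0])   # outputs

def fdh_input_efficiency(x, y):
    """Input-oriented free-disposal-hull (FDH) efficiency: for firm i,
    the smallest input scaling theta such that some observed peer
    produces at least y[i] using no more than theta * x[i]."""
    scores = np.empty(len(x))
    for i in range(len(x)):
        peers = y >= y[i]                  # peers matching firm i's output
        scores[i] = np.min(x[peers] / x[i])
    return scores

eff = fdh_input_efficiency(x, y)           # firm 3 is dominated by firm 1
```

    A score of 1 marks a firm on the observed frontier; firm 3 here gets 0.5 because a peer produces the same output with half the input.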

  2. Nonparametric Transfer Function Models

    Science.gov (United States)

    Liu, Jun M.; Chen, Rong; Yao, Qiwei

    2009-01-01

    In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between ‘input’ and ‘output’ time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modeling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example. PMID:20628584
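
    As a rough sketch of the smoothing step only (the paper estimates the transfer function jointly with the ARMA noise parameters, which is not reproduced here), a Nadaraya-Watson kernel estimate of an unknown input-output function might look as follows, with simulated data and an arbitrary bandwidth:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 300)                        # 'input' observations
y = np.sin(x) + 0.1 * rng.standard_normal(300)     # 'output' with noise

def nw_estimate(x0, x, y, h=0.3):
    """Nadaraya-Watson estimate of f(x0) with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

fhat = np.array([nw_estimate(x0, x, y) for x0 in (-1.0, 0.0, 1.0)])
```

    Accounting for correlated (ARMA) noise, as the paper does, tightens these pointwise estimates further.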

  3. Bayesian nonparametric hierarchical modeling.

    Science.gov (United States)

    Dunson, David B

    2009-04-01

    In biomedical research, hierarchical models are very widely used to accommodate dependence in multivariate and longitudinal data and for borrowing of information across data from different sources. A primary concern in hierarchical modeling is sensitivity to parametric assumptions, such as linearity and normality of the random effects. Parametric assumptions on latent variable distributions can be challenging to check and are typically unwarranted, given available prior knowledge. This article reviews some recent developments in Bayesian nonparametric methods motivated by complex, multivariate and functional data collected in biomedical studies. The author provides a brief review of flexible parametric approaches relying on finite mixtures and latent class modeling. Dirichlet process mixture models are motivated by the need to generalize these approaches to avoid assuming a fixed finite number of classes. Focusing on an epidemiology application, the author illustrates the practical utility and potential of nonparametric Bayes methods.

  4. Quantal Response: Nonparametric Modeling

    Science.gov (United States)

    2017-01-01

    …capture the behavior of observed phenomena. Higher-order polynomial and finite-dimensional spline basis models allow for more complicated responses as the… flexibility as these are nonparametric (not constrained to any particular functional form). These should be useful in identifying nonstandard behavior via… deviance Δ = −2 log(L_reduced / L_full) is defined in terms of the likelihood function L. For normal error, L_full = 1, and based on Eq. A-2, we have log…
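
    The deviance statistic quoted in the snippet is simple to compute from the two fitted log-likelihoods; a minimal example with made-up values:

```python
# Hypothetical maximized log-likelihoods of a reduced (nested) model
# and the full model; the numbers are invented for illustration.
loglik_reduced = -130.2
loglik_full = -127.9

# Deviance: Delta = -2 * log(L_reduced / L_full)
#                 = -2 * (loglik_reduced - loglik_full)
deviance = -2.0 * (loglik_reduced - loglik_full)

# With one extra parameter in the full model, compare against the
# chi-squared 5% critical value (3.84): here the reduced model loses.
reject_reduced = deviance > 3.84
```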

  5. Nonparametric combinatorial sequence models.

    Science.gov (United States)

    Wauthier, Fabian L; Jordan, Michael I; Jojic, Nebojsa

    2011-11-01

    This work considers biological sequences that exhibit combinatorial structures in their composition: groups of positions of the aligned sequences are "linked" and covary as one unit across sequences. If multiple such groups exist, complex interactions can emerge between them. Sequences of this kind arise frequently in biology but methodologies for analyzing them are still being developed. This article presents a nonparametric prior on sequences which allows combinatorial structures to emerge and which induces a posterior distribution over factorized sequence representations. We carry out experiments on three biological sequence families which indicate that combinatorial structures are indeed present and that combinatorial sequence models can more succinctly describe them than simpler mixture models. We conclude with an application to MHC binding prediction which highlights the utility of the posterior distribution over sequence representations induced by the prior. By integrating out the posterior, our method compares favorably to leading binding predictors.

  6. Measuring energy performance with sectoral heterogeneity: A non-parametric frontier approach

    International Nuclear Information System (INIS)

    Wang, H.; Ang, B.W.; Wang, Q.W.; Zhou, P.

    2017-01-01

    Evaluating economy-wide energy performance is an integral part of assessing the effectiveness of a country's energy efficiency policy. The non-parametric frontier approach has been widely used by researchers for this purpose. This paper proposes an extended non-parametric frontier approach to studying economy-wide energy efficiency and productivity performances by accounting for sectoral heterogeneity. Relevant techniques in index number theory are incorporated to quantify the driving forces behind changes in the economy-wide energy productivity index. The proposed approach facilitates flexible modelling of different sectors' production processes, and helps to examine sectors' impact on the aggregate energy performance. A case study of China's economy-wide energy efficiency and productivity performances in its 11th five-year plan period (2006–2010) is presented. It is found that sectoral heterogeneities in terms of energy performance are significant in China. Meanwhile, China's economy-wide energy productivity increased slightly during the study period, mainly driven by the technical efficiency improvement. A number of other findings have also been reported. - Highlights: • We model economy-wide energy performance by considering sectoral heterogeneity. • The proposed approach can identify sectors' impact on the aggregate energy performance. • Obvious sectoral heterogeneities are identified in evaluating China's energy performance.

  7. European regional efficiency and geographical externalities: a spatial nonparametric frontier analysis

    Science.gov (United States)

    Ramajo, Julián; Cordero, José Manuel; Márquez, Miguel Ángel

    2017-10-01

    This paper analyses region-level technical efficiency in nine European countries over the 1995-2007 period. We propose the application of a nonparametric conditional frontier approach to account for the presence of heterogeneous conditions in the form of geographical externalities. Such environmental factors are beyond the control of regional authorities, but may affect the production function. Therefore, they need to be considered in the frontier estimation. Specifically, a spatial autoregressive term is included as an external conditioning factor in a robust order-m model. Thus we can test the hypothesis of non-separability (the external factor impacts both the input-output space and the distribution of efficiencies), demonstrating the existence of significant global interregional spillovers into the production process. Our findings show that geographical externalities affect both the frontier level and the probability of being more or less efficient. Specifically, the results support the fact that the spatial lag variable has an inverted U-shaped non-linear impact on the performance of regions. This finding can be interpreted as a differential effect of interregional spillovers depending on the size of the neighboring economies: positive externalities for small values, possibly related to agglomeration economies, and negative externalities for high values, indicating the possibility of production congestion. Additionally, evidence of the existence of a strong geographic pattern of European regional efficiency is reported and the levels of technical efficiency are acknowledged to have converged during the period under analysis.
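
    The robust order-m frontier has a simple Monte Carlo form: repeatedly draw m peers using no more input than the evaluated unit and average the best output. A hedged single-input, single-output sketch on simulated firms (the `order_m_output_score` helper is illustrative, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, 500)                          # inputs
y = np.sqrt(x) * rng.uniform(0.5, 1.0, 500)          # outputs below a sqrt frontier

def order_m_output_score(x0, y0, x, y, m=25, draws=2000):
    """Monte Carlo order-m output efficiency: expected best output among
    m randomly drawn peers using no more input than x0, relative to y0.
    A score above 1 means the unit lies below the order-m frontier."""
    pool = y[x <= x0]
    best = np.empty(draws)
    for b in range(draws):
        best[b] = rng.choice(pool, size=m, replace=True).max()
    return best.mean() / y0

score = order_m_output_score(x0=5.0, y0=1.5, x=x, y=y)
```

    Because only m peers are drawn at a time, the resulting partial frontier is far less sensitive to outliers than the full-sample envelope.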

  8. Nonparametric Mixture of Regression Models.

    Science.gov (United States)

    Huang, Mian; Li, Runze; Wang, Shaoli

    2013-07-01

    Motivated by an analysis of US house price index data, we propose nonparametric finite mixture of regression models. We study the identifiability issue of the proposed models, and develop an estimation procedure by employing kernel regression. We further systematically study the sampling properties of the proposed estimators, and establish their asymptotic normality. A modified EM algorithm is proposed to carry out the estimation procedure. We show that our algorithm preserves the ascent property of the EM algorithm in an asymptotic sense. Monte Carlo simulations are conducted to examine the finite sample performance of the proposed estimation procedure. The proposed methodology is illustrated with an empirical analysis of the US house price index data.
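
    Without the nonparametric (kernel-weighted) part, the paper's modified EM reduces to the classical EM for a finite mixture of linear regressions; a compact sketch on simulated two-line data follows. This is a parametric simplification for illustration, not the authors' algorithm; the data and starting values are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
x = rng.uniform(0, 1, n)
comp = rng.random(n) < 0.5                       # latent component labels
y = np.where(comp, 1.0 + 2.0 * x, 4.0 - 2.0 * x) + 0.2 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), x])
beta = np.array([[1.0, 1.0], [4.0, -1.0]])       # starting values near each branch
sigma, pi = 1.0, 0.5
for _ in range(100):
    # E-step: responsibility of component 0 for each observation.
    d0 = pi * np.exp(-0.5 * ((y - X @ beta[0]) / sigma) ** 2)
    d1 = (1 - pi) * np.exp(-0.5 * ((y - X @ beta[1]) / sigma) ** 2)
    w = d0 / (d0 + d1)
    # M-step: weighted least squares per component, then sigma and pi.
    for k, wk in enumerate([w, 1 - w]):
        Xw = X * wk[:, None]
        beta[k] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
    resid0, resid1 = y - X @ beta[0], y - X @ beta[1]
    sigma = np.sqrt((np.sum(w * resid0**2) + np.sum((1 - w) * resid1**2)) / n)
    pi = w.mean()
```

    The nonparametric version in the paper replaces the global weighted least squares step with kernel-local fits, letting each component's regression function vary smoothly in x.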

  9. Nonparametric correlation models for portfolio allocation

    DEFF Research Database (Denmark)

    Aslanidis, Nektarios; Casas, Isabel

    2013-01-01

    This article proposes time-varying nonparametric and semiparametric estimators of the conditional cross-correlation matrix in the context of portfolio allocation. Simulation results show that the nonparametric and semiparametric models are best in DGPs with substantial variability or structural… currencies. Results show that the nonparametric model generally dominates the others when evaluated in-sample. However, the semiparametric model is best for out-of-sample analysis.

  10. Nonparametric Bayesian Modeling of Complex Networks

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Mørup, Morten

    2013-01-01

    Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks: using an infinite mixture model as a running example, we go through the steps of deriving the model as an infinite limit of a finite parametric model, inferring the model parameters by Markov chain Monte Carlo, and checking the model's fit and predictive performance. We explain how advanced nonparametric models…
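
    Taking the infinite limit of a finite mixture induces, for the cluster assignments, the Chinese restaurant process; a minimal stdlib sketch of drawing one partition from it (the concentration parameter and item count are arbitrary):

```python
import random

random.seed(0)

def crp_partition(n, alpha):
    """Sample cluster labels for n items from the Chinese restaurant
    process: item i starts a new cluster with probability
    alpha / (i + alpha), else joins an existing cluster with
    probability proportional to its current size."""
    counts, labels = [], []
    for i in range(n):
        r = random.uniform(0, i + alpha)
        if r < alpha:
            counts.append(1)               # open a new cluster
            labels.append(len(counts) - 1)
        else:
            r -= alpha
            k = 0
            while k < len(counts) - 1 and r >= counts[k]:
                r -= counts[k]             # walk to the chosen cluster
                k += 1
            counts[k] += 1
            labels.append(k)
    return labels, counts

labels, counts = crp_partition(50, alpha=1.0)
```

    The number of clusters is not fixed in advance; it grows roughly logarithmically with n, which is exactly the "infer the adequate model complexity" behavior described above.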

  11. Nonparametric estimation in models for unobservable heterogeneity

    OpenAIRE

    Hohmann, Daniel

    2014-01-01

    Nonparametric models which allow for data with unobservable heterogeneity are studied. The first publication introduces new estimators and their asymptotic properties for conditional mixture models. The second publication considers estimation of a function from noisy observations of its Radon transform in a Gaussian white noise model.

  12. Parametric and Non-Parametric System Modelling

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg

    1999-01-01

    …the focus is on combinations of parametric and non-parametric methods of regression. This combination can be in terms of additive models where, e.g., one or more non-parametric terms are added to a linear regression model. It can also be in terms of conditional parametric models where the coefficients… considered. It is shown that adaptive estimation in conditional parametric models can be performed by combining the well-known methods of local polynomial regression and recursive least squares with exponential forgetting. The approach used for estimation in conditional parametric models also highlights how… networks is included. In this paper, neural networks are used for predicting the electricity production of a wind farm. The results are compared with results obtained using an adaptively estimated ARX-model. Finally, two papers on stochastic differential equations are included. In the first paper, among…

  13. Nonparametric Bayes Modeling of Multivariate Categorical Data.

    Science.gov (United States)

    Dunson, David B; Xing, Chuanhua

    2012-01-01

    Modeling of multivariate unordered categorical (nominal) data is a challenging problem, particularly in high dimensions and cases in which one wishes to avoid strong assumptions about the dependence structure. Commonly used approaches rely on the incorporation of latent Gaussian random variables or parametric latent class models. The goal of this article is to develop a nonparametric Bayes approach, which defines a prior with full support on the space of distributions for multiple unordered categorical variables. This support condition ensures that we are not restricting the dependence structure a priori. We show this can be accomplished through a Dirichlet process mixture of product multinomial distributions, which is also a convenient form for posterior computation. Methods for nonparametric testing of violations of independence are proposed, and the methods are applied to model positional dependence within transcription factor binding motifs.

  14. Network structure exploration via Bayesian nonparametric models

    International Nuclear Information System (INIS)

    Chen, Y; Wang, X L; Xiang, X; Tang, B Z; Bu, J Z

    2015-01-01

    Complex networks provide a powerful mathematical representation of complex systems in nature and society. To understand complex networks, it is crucial to explore their internal structures, also called structural regularities. The task of network structure exploration is to determine how many groups there are in a complex network and how to group the nodes of the network. Most existing structure exploration methods need to specify either a group number or a certain type of structure when they are applied to a network. In the real world, however, the group number and the type of structure that a network has are usually unknown in advance. To explore structural regularities in complex networks automatically, without any prior knowledge of the group number or the type of structure, we use Bayesian nonparametric theory to extend a probabilistic mixture model that can handle networks with any type of structure but otherwise requires a pre-specified group number. We also propose a novel Bayesian nonparametric model, called the Bayesian nonparametric mixture (BNPM) model. Experiments conducted on a large number of networks with different structures show that the BNPM model is able to explore structural regularities in networks automatically with a stable, state-of-the-art performance. (paper)

  15. Nonparametric Mixture Models for Supervised Image Parcellation.

    Science.gov (United States)

    Sabuncu, Mert R; Yeo, B T Thomas; Van Leemput, Koen; Fischl, Bruce; Golland, Polina

    2009-09-01

    We present a nonparametric, probabilistic mixture model for the supervised parcellation of images. The proposed model yields segmentation algorithms conceptually similar to the recently developed label fusion methods, which register a new image with each training image separately. Segmentation is achieved via the fusion of transferred manual labels. We show that in our framework various settings of a model parameter yield algorithms that use image intensity information differently in determining the weight of a training subject during fusion. One particular setting computes a single, global weight per training subject, whereas another setting uses locally varying weights when fusing the training data. The proposed nonparametric parcellation approach capitalizes on recently developed fast and robust pairwise image alignment tools. The use of multiple registrations allows the algorithm to be robust to occasional registration failures. We report experiments on 39 volumetric brain MRI scans with expert manual labels for the white matter, cerebral cortex, ventricles and subcortical structures. The results demonstrate that the proposed nonparametric segmentation framework yields significantly better segmentation than state-of-the-art algorithms.

  16. Frontier Analysis

    DEFF Research Database (Denmark)

    Assaf, A. George; Josiassen, Alexander

    2016-01-01

    This article presents a comprehensive review of frontier studies in the tourism literature. We discuss the main advantages and disadvantages of the various frontier approaches, in particular, the nonparametric and parametric frontier approaches. The study further differentiates between micro…

  17. A Structural Labor Supply Model with Nonparametric Preferences

    NARCIS (Netherlands)

    van Soest, A.H.O.; Das, J.W.M.; Gong, X.

    2000-01-01

    Nonparametric techniques are usually seen as a statistical device for data description and exploration, and not as a tool for estimating models with a richer economic structure, which are often required for policy analysis. This paper presents an example where nonparametric flexibility can be attained…

  18. A nonparametric mixture model for cure rate estimation.

    Science.gov (United States)

    Peng, Y; Dear, K B

    2000-03-01

    Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.
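
    The mixture structure at the heart of such cure models is just S(t) = π + (1 − π)·S_u(t), where π is the cure fraction and S_u the survival of the uncured; a toy sketch with an exponential S_u standing in for the flexibly estimated component (the functional form and parameter values here are illustrative only):

```python
import math

def population_survival(t, cure_prob, scale=2.0, hazard_ratio=1.0):
    """Mixture cure model: S(t) = pi + (1 - pi) * S_u(t).  Here the
    uncured survival S_u is an illustrative exponential raised to a
    proportional-hazards factor; the paper estimates S_u flexibly."""
    s_uncured = math.exp(-t / scale) ** hazard_ratio
    return cure_prob + (1.0 - cure_prob) * s_uncured

s_start = population_survival(0.0, cure_prob=0.3)   # everyone alive at t=0
s_late = population_survival(1e6, cure_prob=0.3)    # plateaus at the cure fraction
```

    The plateau at π as t grows is what distinguishes cure models from standard survival models, whose survival curves decay to zero.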

  19. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  20. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Directory of Open Access Journals (Sweden)

    Saerom Park

    Market impact cost is the most significant portion of implicit transaction costs, and reducing it can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.

  2. Nonparametric modeling of dynamic functional connectivity in fmri data

    DEFF Research Database (Denmark)

    Nielsen, Søren Føns Vind; Madsen, Kristoffer H.; Røge, Rasmus

    2015-01-01

    …dynamic changes. The existing approaches to modeling dynamic connectivity have primarily been based on time-windowing the data and k-means clustering. We propose a nonparametric generative model for dynamic FC in fMRI that does not rely on specifying window lengths and the number of dynamic states. Rooted…

  3. The nonparametric bootstrap for the current status model

    NARCIS (Netherlands)

    Groeneboom, P.; Hendrickx, K.

    2017-01-01

    It has been proved that direct bootstrapping of the nonparametric maximum likelihood estimator (MLE) of the distribution function in the current status model leads to inconsistent confidence intervals. We show that bootstrapping of functionals of the MLE can, however, be used to produce valid confidence intervals.

  4. A Bayesian Nonparametric Meta-Analysis Model

    Science.gov (United States)

    Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G.

    2015-01-01

    In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall…

  5. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    Science.gov (United States)

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
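
    The cdf-inverse-cdf device can be sketched in a few lines: push the series through its (empirical) cdf, map to normal scores, and fit ordinary AR dynamics there. Below is a hedged toy version with an AR(1) on the Gaussian scale and exponential marginals; it uses the empirical cdf where the paper uses a nonparametric Bayesian prior, and it is not the authors' full model.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)
nd = NormalDist()

# Simulate a series with AR(1) dynamics on the Gaussian scale but
# exponential marginals (a toy stand-in for a non-Gaussian series).
z = np.empty(500)
z[0] = rng.standard_normal()
for t in range(1, 500):
    z[t] = 0.7 * z[t - 1] + np.sqrt(1 - 0.7 ** 2) * rng.standard_normal()
y = -np.log(1.0 - np.array([nd.cdf(v) for v in z]))

# Copula-AR sketch: empirical cdf -> normal scores -> ordinary AR(1) fit.
u = (y.argsort().argsort() + 1) / (len(y) + 1)       # empirical cdf in (0, 1)
scores = np.array([nd.inv_cdf(p) for p in u])
phi_hat = np.sum(scores[1:] * scores[:-1]) / np.sum(scores[:-1] ** 2)
```

    Because the marginal transform is monotone, the dependence structure survives the round trip and the AR coefficient is recovered on the Gaussian scale.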

  6. Non-Parametric Model Drift Detection

    Science.gov (United States)

    2016-07-01

    framework on two tasks in NLP domain, topic modeling, and machine translation. Our main findings are summarized as follows: • We can measure important...thank,us,me,hope,today Group num: 4, TC(X;Y_j): 0.407 4:republic,palestinian,israel, arab ,israeli,democratic,congo,mr,president,occupied Group num: 5...support,change,lessons,partnerships,l earned Group num: 35, TC(X;Y_j): 0.094 35:russian,federation,spoke,you,french,spanish, arabic ,your,chinese,sir

  7. Nonparametric Bayesian models for a spatial covariance.

    Science.gov (United States)

    Reich, Brian J; Fuentes, Montserrat

    2012-01-01

    A crucial step in the analysis of spatial data is to estimate the spatial correlation function that determines the relationship between a spatial process at two locations. The standard approach to selecting the appropriate correlation function is to use prior knowledge or exploratory analysis, such as a variogram analysis, to select the correct parametric correlation function. Rather than selecting a particular parametric correlation function, we treat the covariance function as an unknown function to be estimated from the data. We propose a flexible prior for the correlation function to provide robustness to the choice of correlation function. We specify the prior for the correlation function using spectral methods and the Dirichlet process prior, which is a common prior for an unknown distribution function. Our model does not require Gaussian data or spatial locations on a regular grid. The approach is demonstrated using a simulation study as well as an analysis of California air pollution data.
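
    The exploratory variogram analysis mentioned above is quick to sketch; here is the classical (Matheron) empirical semivariogram on simulated site data, with sites, field, and bin edges all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
sites = rng.uniform(0, 10, size=(150, 2))            # invented site coordinates
vals = np.sin(sites[:, 0]) + np.sin(sites[:, 1]) + 0.3 * rng.standard_normal(150)

def empirical_variogram(sites, vals, bins):
    """Classical (Matheron) empirical semivariogram: average of
    0.5 * (z_i - z_j)^2 over site pairs falling in each distance bin."""
    d = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
    i, j = np.triu_indices(len(vals), k=1)
    dist = d[i, j]
    sq = 0.5 * (vals[i] - vals[j]) ** 2
    return np.array([sq[(dist >= lo) & (dist < hi)].mean()
                     for lo, hi in zip(bins[:-1], bins[1:])])

gamma = empirical_variogram(sites, vals, bins=np.array([0.0, 1.0, 2.0, 4.0]))
```

    A rising gamma with distance signals spatial correlation; the paper's contribution is to replace the parametric curve usually fitted to such points with a nonparametric prior.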

  8. Testing for constant nonparametric effects in general semiparametric regression models with interactions

    KAUST Repository

    Wei, Jiawei; Carroll, Raymond J.; Maity, Arnab

    2011-01-01

    We consider the problem of testing for a constant nonparametric effect in a general semi-parametric regression model when there is the potential for interaction between the parametrically and nonparametrically modeled variables. The work…

  9. Nonparametric Bayesian models through probit stick-breaking processes.

    Science.gov (United States)

    Rodríguez, Abel; Dunson, David B

    2011-03-01

    We describe a novel class of Bayesian nonparametric priors based on stick-breaking constructions where the weights of the process are constructed as probit transformations of normal random variables. We show that these priors are extremely flexible, allowing us to generate a great variety of models while preserving computational simplicity. Particular emphasis is placed on the construction of rich temporal and spatial processes, which are applied to two problems in finance and ecology.
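
    The construction itself is compact: draw normals z_k, probit-transform them to (0, 1), and break a unit stick. A minimal stdlib sketch (the number of sticks and the normal hyperparameters are arbitrary):

```python
import math
import random

random.seed(4)

def probit_stick_breaking(n_sticks, mu=0.0, sigma=1.0):
    """Weights w_k = Phi(z_k) * prod_{j<k} (1 - Phi(z_j)), with
    z_k ~ Normal(mu, sigma) and Phi the standard normal cdf."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    weights, remaining = [], 1.0
    for _ in range(n_sticks):
        v = phi(random.gauss(mu, sigma))
        weights.append(remaining * v)
        remaining *= 1.0 - v
    return weights, remaining

w, leftover = probit_stick_breaking(20)
```

    The flexibility noted in the abstract comes from letting mu and sigma (and hence the z_k) depend on time, space, or covariates while the weights still sum to one.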

  10. Modeling stochastic frontier based on vine copulas

    Science.gov (United States)

    Constantino, Michel; Candido, Osvaldo; Tabak, Benjamin M.; da Costa, Reginaldo Brito

    2017-11-01

    This article models a production function and analyzes the technical efficiency of listed companies in the United States, Germany and England between 2005 and 2012 based on the vine copula approach. Traditional estimates of the stochastic frontier assume that data is multivariate normally distributed and there is no source of asymmetry. The proposed method based on vine copulas allows us to explore different types of asymmetry and multivariate distribution. Using data on product, capital and labor, we measure the relative efficiency of the vine production function and estimate the coefficient used in the stochastic frontier literature for comparison purposes. This production vine copula predicts the value added by firms with given capital and labor in a probabilistic way. It thereby stands in sharp contrast to the production function, where the output of firms is completely deterministic. The results show that, on average, S&P500 companies are more efficient than companies listed in England and Germany, which presented similar average efficiency coefficients. For comparative purposes, the traditional stochastic frontier was estimated and the results showed discrepancies between the coefficients obtained by the application of the two methods, traditional and frontier-vine, opening new paths of non-linear research.

  11. Estimation of Stochastic Volatility Models by Nonparametric Filtering

    DEFF Research Database (Denmark)

    Kanaya, Shin; Kristensen, Dennis

    2016-01-01

    …estimated volatility process replacing the latent process. Our estimation strategy is applicable to both parametric and nonparametric stochastic volatility models, and can handle both jumps and market microstructure noise. The resulting estimators of the stochastic volatility model will carry additional biases and variances due to the first-step estimation, but under regularity conditions we show that these vanish asymptotically and our estimators inherit the asymptotic properties of the infeasible estimators based on observations of the volatility process. A simulation study examines the finite-sample properties…

  12. Genomic breeding value estimation using nonparametric additive regression models

    Directory of Open Access Journals (Sweden)

    Solberg Trygve

    2009-01-01

    Genomic selection refers to the use of genomewide dense markers for breeding value estimation and subsequently for selection. The main challenge of genomic breeding value estimation is the estimation of many effects from a limited number of observations. Bayesian methods have been proposed to successfully cope with these challenges. As an alternative class of models, non- and semiparametric models were recently introduced. The present study investigated the ability of nonparametric additive regression models to predict genomic breeding values. The genotypes were modelled for each marker or pair of flanking markers (i.e. the predictors) separately. The nonparametric functions for the predictors were estimated simultaneously using additive model theory, applying a binomial kernel. The optimal degree of smoothing was determined by bootstrapping. A mutation-drift-balance simulation was carried out. The breeding values of the last generation (genotyped) were predicted using data from the next-to-last generation (genotyped and phenotyped). The results show moderate to high accuracies of the predicted breeding values. A determination of predictor-specific degrees of smoothing increased the accuracy.
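
    A rough sketch of the kernel-regression idea on genotype data follows. It uses a geometric discrete kernel, lam raised to the marker-wise distance, as an illustrative stand-in for the binomial kernel named in the abstract; genotypes, phenotypes, and the bandwidth are all simulated or arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 200, 30
G = rng.integers(0, 3, size=(n, p))          # genotypes coded 0/1/2
beta = 0.3 * rng.standard_normal(p)
y = G @ beta + rng.standard_normal(n)        # simulated phenotypes

def kernel_prediction(g0, G, y, lam=0.5):
    """Kernel-regression prediction for genotype vector g0, weighting
    individuals by lam ** (marker-wise distance to g0).  This geometric
    kernel is an illustrative stand-in for the paper's binomial kernel."""
    logw = np.log(lam) * np.abs(G - g0).sum(axis=1)
    w = np.exp(logw - logw.max())            # stabilise before normalising
    return float(np.sum(w * y) / np.sum(w))

preds = np.array([kernel_prediction(G[i], G, y) for i in range(30)])
```

    In practice the smoothing parameter lam would be tuned, e.g. by the bootstrapping the study uses, rather than fixed.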

  13. Frontiers in Atmospheric Chemistry Modelling

    Science.gov (United States)

    Colette, Augustin; Bessagnet, Bertrand; Meleux, Frederik; Rouïl, Laurence

    2013-04-01

    The first pan-European kilometre-scale atmospheric chemistry simulation is introduced. The continental-scale air pollution episode of January 2009 is modelled with the CHIMERE offline chemistry-transport model on a massive grid of 2 million horizontal points, performed on 2,000 CPUs of a high-performance computing system hosted by the Research and Technology Computing Center at the French Alternative Energies and Atomic Energy Commission (CCRT/CEA). Besides the technical challenge, which demonstrated the robustness of the selected air quality model, we discuss the added value in terms of air pollution modelling and decision support. The comparison with in-situ observations shows that model biases are significantly improved, despite some spurious added spatial variability attributed to shortcomings in the emission downscaling process and the coarse resolution of the meteorological fields. The increased spatial resolution is clearly beneficial for the detection of exceedances and for exposure modelling. We reveal small-scale air pollution patterns that highlight the contribution of city plumes to background air pollution levels. The fraction of the population exposed to detrimental levels of pollution can be underestimated by up to a factor of 5 in a coarse simulation if subgrid-scale corrections such as urban increments are ignored. This experiment opens new perspectives for environmental decision making. After two decades of efforts to reduce air pollutant emissions across Europe, the challenge is now to find the optimal trade-off between national and local air quality management strategies. While the former approach is based on sectoral strategies and energy policies, the latter builds upon new alternatives such as urban development. The strategies, the decision pathways and the involvement of individual citizens differ, and a compromise based on cost and efficiency must be found. We illustrate how high-performance computing in atmospheric science can contribute to this

  14. A nonparametric dynamic additive regression model for longitudinal data

    DEFF Research Database (Denmark)

    Martinussen, T.; Scheike, T. H.

    2000-01-01

    dynamic linear models, estimating equations, least squares, longitudinal data, nonparametric methods, partly conditional mean models, time-varying-coefficient models

  15. Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution

    Directory of Open Access Journals (Sweden)

    Emmanuel Kidando

    2017-01-01

    Full Text Available Multistate models, that is, models with more than two distributions, are preferred over single-state probability models in modeling the distribution of travel time. Literature review indicated that the finite multistate modeling of travel time using lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of estimating the travel time distribution to unbounded lognormal distribution. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM) with stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. Then, the Markov Chain Monte Carlo (MCMC) sampling technique was employed to estimate the parameters' posterior distribution. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrated that this model offers significant flexibility in modeling to account for complex mixture distributions of the travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of onset and offset of congestion periods.
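    A minimal sketch of a truncated Dirichlet-process mixture fit to synthetic travel-time data. It uses scikit-learn's variational `BayesianGaussianMixture` rather than the MCMC sampler of the study, and the two-regime data are invented for illustration; the cap of six components mirrors the limit mentioned above.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic "travel times" (minutes): free-flow vs congested regimes.
tt = np.concatenate([rng.normal(5.0, 0.5, 300), rng.normal(9.0, 1.0, 200)])

# Truncated Dirichlet-process mixture, capped at six components.
dpmm = BayesianGaussianMixture(
    n_components=6,
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
).fit(tt.reshape(-1, 1))

# The prior prunes unneeded components: count those with non-negligible mass.
active = int(np.sum(dpmm.weights_ > 0.05))
```

    The point of the sketch is the last line: the number of effective components is read off the fitted weights rather than fixed in advance.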

  16. Generative Temporal Modelling of Neuroimaging - Decomposition and Nonparametric Testing

    DEFF Research Database (Denmark)

    Hald, Ditte Høvenhoff

    The goal of this thesis is to explore two improvements for functional magnetic resonance imaging (fMRI) analysis: a proposed decomposition method and an extension to the non-parametric testing framework. Analysis of fMRI allows researchers to investigate the functional processes of the brain, and provides insight into neuronal coupling during mental processes or tasks. The decomposition method is a Gaussian process-based independent components analysis (GPICA), which incorporates a temporal dependency in the sources. A hierarchical model specification is used, featuring both instantaneous and convolutive mixing, and the inferred temporal patterns and spatial maps are seen to capture smooth and localized stimuli-related components, and often identifiable noise components. The implementation is freely available as a GUI/SPM plugin, and we recommend using GPICA as an additional tool when...

  17. Nonparametric Estimation of Distributions in Random Effects Models

    KAUST Repository

    Hart, Jeffrey D.

    2011-01-01

    We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.

  18. Simple nonparametric checks for model data fit in CAT

    NARCIS (Netherlands)

    Meijer, R.R.

    2005-01-01

    In this paper, the usefulness of several nonparametric checks is discussed in a computerized adaptive testing (CAT) context. Although there is no tradition of nonparametric scalability in CAT, it can be argued that scalability checks can be useful to investigate, for example, the quality of item

  19. DPpackage: Bayesian Semi- and Nonparametric Modeling in R

    Directory of Open Access Journals (Sweden)

    Alejandro Jara

    2011-04-01

    Full Text Available Data analysis sometimes requires the relaxation of parametric assumptions in order to gain modeling flexibility and robustness against mis-specification of the probability model. In the Bayesian context, this is accomplished by placing a prior distribution on a function space, such as the space of all probability distributions or the space of all regression functions. Unfortunately, posterior distributions ranging over function spaces are highly complex and hence sampling methods play a key role. This paper provides an introduction to a simple, yet comprehensive, set of programs for the implementation of some Bayesian nonparametric and semiparametric models in R, DPpackage. Currently, DPpackage includes models for marginal and conditional density estimation, receiver operating characteristic curve analysis, interval-censored data, binary regression data, item response data, longitudinal and clustered data using generalized linear mixed models, and regression data using generalized additive models. The package also contains functions to compute pseudo-Bayes factors for model comparison and for eliciting the precision parameter of the Dirichlet process prior, and a general purpose Metropolis sampling algorithm. To maximize computational efficiency, the actual sampling for each model is carried out using compiled C, C++ or Fortran code.

  20. Bayesian nonparametric meta-analysis using Polya tree mixture models.

    Science.gov (United States)

    Branscum, Adam J; Hanson, Timothy E

    2008-09-01

    Summary. A common goal in meta-analysis is estimation of a single effect measure using data from several studies that are each designed to address the same scientific inquiry. Because studies are typically conducted in geographically dispersed locations, recent developments in the statistical analysis of meta-analytic data involve the use of random effects models that account for study-to-study variability attributable to differences in environments, demographics, genetics, and other sources that lead to heterogeneity in populations. Stemming from asymptotic theory, study-specific summary statistics are modeled according to normal distributions with means representing latent true effect measures. A parametric approach subsequently models these latent measures using a normal distribution, which is strictly a convenient modeling assumption absent of theoretical justification. To eliminate the influence of overly restrictive parametric models on inferences, we consider a broader class of random effects distributions. We develop a novel hierarchical Bayesian nonparametric Polya tree mixture (PTM) model. We present methodology for testing the PTM versus a normal random effects model. These methods provide researchers a straightforward approach for conducting a sensitivity analysis of the normality assumption for random effects. An application involving meta-analysis of epidemiologic studies designed to characterize the association between alcohol consumption and breast cancer is presented, which together with results from simulated data highlight the performance of PTMs in the presence of nonnormality of effect measures in the source population.

  1. A local non-parametric model for trade sign inference

    Science.gov (United States)

    Blazejewski, Adam; Coggins, Richard

    2005-03-01

    We investigate a regularity in market order submission strategies for 12 stocks with large market capitalization on the Australian Stock Exchange. The regularity is evidenced by a predictable relationship between the trade sign (trade initiator), size of the trade, and the contents of the limit order book before the trade. We demonstrate this predictability by developing an empirical inference model to classify trades into buyer-initiated and seller-initiated. The model employs a local non-parametric method, k-nearest neighbor, which in the past was used successfully for chaotic time series prediction. The k-nearest neighbor with three predictor variables achieves an average out-of-sample classification accuracy of 71.40%, compared to 63.32% for the linear logistic regression with seven predictor variables. The result suggests that a non-linear approach may produce a more parsimonious trade sign inference model with a higher out-of-sample classification accuracy. Furthermore, for most of our stocks the observed regularity in market order submissions seems to have a memory of at least 30 trading days.
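    A toy version of the k-nearest-neighbour trade-sign classifier described above. The three predictor variables and the rule generating the buy/sell labels are synthetic stand-ins for the trade-size and limit-order-book features used in the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
# Hypothetical predictors: e.g. trade size, book imbalance, mid-quote move.
X = rng.normal(size=(n, 3))
# Synthetic label rule: buys tend to follow positive imbalance (illustrative).
y = (X[:, 1] + 0.5 * X[:, 2] + 0.3 * rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=15).fit(X_tr, y_tr)
acc = knn.score(X_te, y_te)  # out-of-sample classification accuracy
```

    The local, non-parametric nature of k-NN is what lets it pick up non-linear relationships between trade sign and book state without specifying a functional form.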

  2. Bayesian nonparametric clustering in phylogenetics: modeling antigenic evolution in influenza.

    Science.gov (United States)

    Cybis, Gabriela B; Sinsheimer, Janet S; Bedford, Trevor; Rambaut, Andrew; Lemey, Philippe; Suchard, Marc A

    2018-01-30

    Influenza is responsible for up to 500,000 deaths every year, and antigenic variability represents much of its epidemiological burden. To visualize antigenic differences across many viral strains, antigenic cartography methods use multidimensional scaling on binding assay data to map influenza antigenicity onto a low-dimensional space. Analysis of such assay data ideally leads to natural clustering of influenza strains of similar antigenicity that correlate with sequence evolution. To understand the dynamics of these antigenic groups, we present a framework that jointly models genetic and antigenic evolution by combining multidimensional scaling of binding assay data, Bayesian phylogenetic machinery and nonparametric clustering methods. We propose a phylogenetic Chinese restaurant process that extends the current process to incorporate the phylogenetic dependency structure between strains in the modeling of antigenic clusters. With this method, we are able to use the genetic information to better understand the evolution of antigenicity throughout epidemics, as shown in applications of this model to H1N1 influenza. Copyright © 2017 John Wiley & Sons, Ltd.

  3. Economic decision making and the application of nonparametric prediction models

    Science.gov (United States)

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2008-01-01

    Sustained increases in energy prices have focused attention on gas resources in low-permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are often large. Planning and development decisions for extraction of such resources must be areawide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm, the decision to enter such plays depends on reconnaissance-level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional-scale cost functions. The context of the worked example is the Devonian Antrim-shale gas play in the Michigan basin. One finding relates to selection of the resource prediction model to be used with economic models. Models chosen because they can best predict aggregate volume over larger areas (many hundreds of sites) smooth out granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined arbitrarily by extraneous factors. The analysis shows a 15-20% gain in gas volume when these simple models are applied to order drilling prospects strategically rather than to choose drilling locations randomly. Copyright © 2008 Society of Petroleum Engineers.

  4. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    Science.gov (United States)

    Zhao, Zhibiao

    2011-06-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.

  5. Parametric vs. Nonparametric Regression Modelling within Clinical Decision Support

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan; Zvárová, Jana

    2017-01-01

    Roč. 5, č. 1 (2017), s. 21-27 ISSN 1805-8698 R&D Projects: GA ČR GA17-01251S Institutional support: RVO:67985807 Keywords : decision support systems * decision rules * statistical analysis * nonparametric regression Subject RIV: IN - Informatics, Computer Science OBOR OECD: Statistics and probability

  6. An adaptive distance measure for use with nonparametric models

    International Nuclear Information System (INIS)

    Garvey, D. R.; Hines, J. W.

    2006-01-01

    Distance measures perform a critical task in nonparametric, locally weighted regression. Locally weighted regression (LWR) models are a form of 'lazy learning' which construct a local model 'on the fly' by comparing a query vector to historical, exemplar vectors according to a three-step process. First, the distance of the query vector to each of the exemplar vectors is calculated. Next, these distances are passed to a kernel function, which converts the distances to similarities or weights. Finally, the model output or response is calculated by performing locally weighted polynomial regression. To date, traditional distance measures, such as the Euclidean, weighted Euclidean, and L1-norm have been used as the first step in the prediction process. Since these measures do not take into consideration sensor failures and drift, they are inherently ill-suited for application to 'real world' systems. This paper describes one such LWR model, namely auto-associative kernel regression (AAKR), and describes a new, Adaptive Euclidean distance measure that can be used to dynamically compensate for faulty sensor inputs. In this new distance measure, the query observations that lie outside of the training range (i.e. outside the minimum and maximum input exemplars) are dropped from the distance calculation. This allows the distance calculation to be robust to sensor drifts and failures, in addition to providing a method for managing inputs that exceed the training range. In this paper, AAKR models using the standard and Adaptive Euclidean distance are developed and compared for the pressure system of an operating nuclear power plant. It is shown that when the standard Euclidean distance is used for data with failed inputs, significant errors in the AAKR predictions can result. By using the Adaptive Euclidean distance it is shown that high fidelity predictions are possible, in spite of the input failure. In fact, it is shown that with the Adaptive Euclidean distance prediction
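    A minimal sketch of the Adaptive Euclidean distance idea described above: query features that fall outside the exemplars' training range are dropped from the distance calculation. Function and variable names are ours, not from the paper.

```python
import numpy as np

def adaptive_euclidean(query, exemplars):
    """Euclidean distance that drops out-of-range query features.

    exemplars: (n, p) matrix of training vectors; query: (p,) observation,
    possibly containing drifted or failed sensor readings. Features where
    the query lies outside [min, max] of the exemplars are excluded.
    """
    lo, hi = exemplars.min(axis=0), exemplars.max(axis=0)
    ok = (query >= lo) & (query <= hi)  # in-range features only
    if not ok.any():
        raise ValueError("all query features out of training range")
    return np.sqrt(((exemplars[:, ok] - query[ok]) ** 2).sum(axis=1))
```

    With a failed sensor reporting an extreme value, the distance (and hence the AAKR weights) is computed from the remaining healthy channels only.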

  7. Nonparametric Identification and Estimation of Finite Mixture Models of Dynamic Discrete Choices

    OpenAIRE

    Hiroyuki Kasahara; Katsumi Shimotsu

    2006-01-01

    In dynamic discrete choice analysis, controlling for unobserved heterogeneity is an important issue, and finite mixture models provide flexible ways to account for unobserved heterogeneity. This paper studies nonparametric identifiability of type probabilities and type-specific component distributions in finite mixture models of dynamic discrete choices. We derive sufficient conditions for nonparametric identification for various finite mixture models of dynamic discrete choices used in appli...

  8. NONPARAMETRIC FIXED EFFECT PANEL DATA MODELS: RELATIONSHIP BETWEEN AIR POLLUTION AND INCOME FOR TURKEY

    Directory of Open Access Journals (Sweden)

    Rabia Ece OMAY

    2013-06-01

    Full Text Available In this study, the relationship between gross domestic product (GDP) per capita and sulfur dioxide (SO2) and particulate matter (PM10) per capita is modeled for Turkey. Nonparametric fixed effect panel data analysis is used for the modeling. The panel data cover 12 territories, at the first level of the Nomenclature of Territorial Units for Statistics (NUTS), for the period 1990-2001. In modeling the relationship between GDP and SO2 and PM10 for Turkey, the non-parametric models gave good results.

  9. Bioprocess iterative batch-to-batch optimization based on hybrid parametric/nonparametric models.

    Science.gov (United States)

    Teixeira, Ana P; Clemente, João J; Cunha, António E; Carrondo, Manuel J T; Oliveira, Rui

    2006-01-01

    This paper presents a novel method for iterative batch-to-batch dynamic optimization of bioprocesses. The relationship between process performance and control inputs is established by means of hybrid grey-box models combining parametric and nonparametric structures. The bioreactor dynamics are defined by material balance equations, whereas the cell population subsystem is represented by an adjustable mixture of nonparametric and parametric models. Thus optimizations are possible without detailed mechanistic knowledge concerning the biological system. A clustering technique is used to supervise the reliability of the nonparametric subsystem during the optimization. Whenever the nonparametric outputs are unreliable, the objective function is penalized. The technique was evaluated with three simulation case studies. The overall results suggest that the convergence to the optimal process performance may be achieved after a small number of batches. The model unreliability risk constraint along with sampling scheduling are crucial to minimize the experimental effort required to attain a given process performance. In general terms, it may be concluded that the proposed method broadens the application of the hybrid parametric/nonparametric modeling technique to "newer" processes with higher potential for optimization.

  10. A Bayesian approach to the analysis of quantal bioassay studies using nonparametric mixture models.

    Science.gov (United States)

    Fronczyk, Kassandra; Kottas, Athanasios

    2014-03-01

    We develop a Bayesian nonparametric mixture modeling framework for quantal bioassay settings. The approach is built upon modeling dose-dependent response distributions. We adopt a structured nonparametric prior mixture model, which induces a monotonicity restriction for the dose-response curve. Particular emphasis is placed on the key risk assessment goal of calibration for the dose level that corresponds to a specified response. The proposed methodology yields flexible inference for the dose-response relationship as well as for other inferential objectives, as illustrated with two data sets from the literature. © 2013, The International Biometric Society.

  11. Nonparametric Integrated Agrometeorological Drought Monitoring: Model Development and Application

    Science.gov (United States)

    Zhang, Qiang; Li, Qin; Singh, Vijay P.; Shi, Peijun; Huang, Qingzhong; Sun, Peng

    2018-01-01

    Drought is a major natural hazard with massive impacts on society, and monitoring it is critical for mitigation and early warning. This study proposed a modified version of the multivariate standardized drought index (MSDI) based on precipitation, evapotranspiration, and soil moisture, i.e., the modified multivariate standardized drought index (MMSDI), using nonparametric joint probability distribution analysis. Comparisons were made between the standardized precipitation evapotranspiration index (SPEI), the standardized soil moisture index (SSMI), MSDI, MMSDI, and real-world observed drought regimes. Results indicated that MMSDI detected droughts that SPEI and/or SSMI failed to detect, and also detected almost all droughts that were identified by SPEI and SSMI. Further, droughts detected by MMSDI were similar to real-world observed droughts in terms of drought intensity and drought-affected area. Compared to MMSDI, MSDI has the potential to overestimate drought intensity and drought-affected area across China, which should be attributed to the exclusion of the evapotranspiration component from the estimation of drought intensity. Therefore, MMSDI is proposed for drought monitoring that can detect agrometeorological droughts. The results of this study provide a framework for integrated drought monitoring in other regions of the world and can help to develop drought mitigation.
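    A univariate sketch of the nonparametric (empirical-probability) standardization underlying indices of this kind, using the Gringorten plotting position mapped through the inverse standard normal. MMSDI itself standardizes joint probabilities of several variables; this sketch shows only the single-variable building block.

```python
import numpy as np
from scipy.stats import norm

def nonparametric_standardized_index(x):
    """Standardized index from empirical (Gringorten) probabilities.

    Ranks the sample, converts ranks to plotting-position probabilities,
    then maps through the inverse standard normal, as in nonparametric
    SSI/MSDI-style indices. Assumes no ties (continuous data).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1   # ascending ranks 1..n
    p = (ranks - 0.44) / (n + 0.12)         # Gringorten plotting position
    return norm.ppf(p)
```

    Large negative index values flag unusually dry observations, without assuming any parametric distribution for the underlying variable.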

  12. Frontier models for evaluating environmental efficiency: an overview

    NARCIS (Netherlands)

    Oude Lansink, A.G.J.M.; Wall, A.

    2014-01-01

    Our aim in this paper is to provide a succinct overview of frontier-based models used to evaluate environmental efficiency, with a special emphasis on agricultural activity. We begin by providing a brief, up-to-date review of the main approaches used to measure environmental efficiency, with

  13. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    Science.gov (United States)

    Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.

    2018-03-01

    We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.
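    A simple illustration of gradient-based local sensitivity under parameter correlation: a finite-difference gradient of the model output, scaled by the parameter covariance so that correlated parameters pick up each other's influence. The scaling convention here is one plausible choice for illustration, not the paper's exact indices.

```python
import numpy as np

def local_sensitivity(f, theta, cov, h=1e-6):
    """Gradient-based local sensitivity indices with correlated parameters.

    Central finite differences approximate the gradient of f at theta;
    s = |cov @ grad| then attributes correlation-induced contributions
    to each parameter (illustrative convention).
    """
    theta = np.asarray(theta, dtype=float)
    grad = np.empty_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = h
        grad[i] = (f(theta + e) - f(theta - e)) / (2 * h)
    return np.abs(cov @ grad)
```

    With independent parameters (diagonal covariance) this reduces to the usual scaled-gradient ranking; with correlation, a parameter that does not enter f directly can still receive a nonzero index, which is exactly the effect the abstract highlights.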

  14. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    Directory of Open Access Journals (Sweden)

    Jinchao Feng

    2018-03-01

    Full Text Available We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  15. A non-parametric hierarchical model to discover behavior dynamics from tracks

    NARCIS (Netherlands)

    Kooij, J.F.P.; Englebienne, G.; Gavrila, D.M.

    2012-01-01

    We present a novel non-parametric Bayesian model to jointly discover the dynamics of low-level actions and high-level behaviors of tracked people in open environments. Our model represents behaviors as Markov chains of actions which capture high-level temporal dynamics. Actions may be shared by

  16. Testing for constant nonparametric effects in general semiparametric regression models with interactions

    KAUST Repository

    Wei, Jiawei

    2011-07-01

    We consider the problem of testing for a constant nonparametric effect in a general semi-parametric regression model when there is the potential for interaction between the parametrically and nonparametrically modeled variables. The work was originally motivated by a unique testing problem in genetic epidemiology (Chatterjee, et al., 2006) that involved a typical generalized linear model but with an additional term reminiscent of the Tukey one-degree-of-freedom formulation, and their interest was in testing for main effects of the genetic variables, while gaining statistical power by allowing for a possible interaction between genes and the environment. Later work (Maity, et al., 2009) involved the possibility of modeling the environmental variable nonparametrically, but they focused on whether there was a parametric main effect for the genetic variables. In this paper, we consider the complementary problem, where the interest is in testing for the main effect of the nonparametrically modeled environmental variable. We derive a generalized likelihood ratio test for this hypothesis, show how to implement it, and provide evidence that our method can improve statistical power when compared to standard partially linear models with main effects only. We use the method for the primary purpose of analyzing data from a case-control study of colorectal adenoma.

  17. The Support Reduction Algorithm for Computing Non-Parametric Function Estimates in Mixture Models

    OpenAIRE

    GROENEBOOM, PIET; JONGBLOED, GEURT; WELLNER, JON A.

    2008-01-01

    In this paper, we study an algorithm (which we call the support reduction algorithm) that can be used to compute non-parametric M-estimators in mixture models. The algorithm is compared with natural competitors in the context of convex regression and the ‘Aspect problem’ in quantum physics.

  18. A Bayesian Beta-Mixture Model for Nonparametric IRT (BBM-IRT)

    Science.gov (United States)

    Arenson, Ethan A.; Karabatsos, George

    2017-01-01

    Item response models typically assume that the item characteristic (step) curves follow a logistic or normal cumulative distribution function, which are strictly monotone functions of person test ability. Such assumptions can be overly-restrictive for real item response data. We propose a simple and more flexible Bayesian nonparametric IRT model…

  19. A comparative study of non-parametric models for identification of ...

    African Journals Online (AJOL)

    However, the frequency response method using random binary signals was good for unpredicted white noise characteristics and considered the best method for non-parametric system identification. The autoregressive external input (ARX) model was very useful for system identification, but on application, few input ...

  20. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    CERN Document Server

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  1. Biomembrane Frontiers Nanostructures, Models, and the Design of Life

    CERN Document Server

    Faller, Roland; Risbud, Subhash H; Jue, Thomas

    2009-01-01

    HANDBOOK OF MODERN BIOPHYSICS Series Editor Thomas Jue, PhD Handbook of Modern Biophysics brings current biophysics topics into focus, so that biology, medical, engineering, mathematics, and physical-science students or researchers can learn fundamental concepts and the application of new techniques in addressing biomedical challenges. Chapters explicate the conceptual framework of the physics formalism and illustrate the biomedical applications. With the addition of problem sets, guides to further study, and references, the interested reader can continue to explore independently the ideas presented. Volume II: Biomembrane Frontiers: Nanostructures, Models, and the Design of Life Editors: Roland Faller, PhD, Thomas Jue, PhD, Marjorie L. Longo, PhD, and Subhash H. Risbud, PhD In Biomembrane Frontiers: Nanostructures, Models, and the Design of Life, prominent researchers have established a foundation for the study of biophysics related to the following topics: Perspectives: Complexes in Liquids, 1900–2008 Mol...

  2. Using multinomial and imprecise probability for non-parametric modelling of rainfall in Manizales (Colombia

    Directory of Open Access Journals (Sweden)

    Ibsen Chivatá Cárdenas

    2008-05-01

    Full Text Available This article presents a rainfall model constructed by applying non-parametric modelling and imprecise probabilities; these tools were used because there was not enough homogeneous information in the study area. The area's hydrological information regarding rainfall was scarce and existing hydrological time series were not uniform. A distributed extended rainfall model was constructed from so-called probability boxes (p-boxes), multinomial probability distribution and confidence intervals (a friendly algorithm was constructed for non-parametric modelling by combining the last two tools). This model confirmed the high level of uncertainty involved in local rainfall modelling. Uncertainty encompassed the whole range (domain) of probability values, thereby showing the severe limitations on information and leading to the conclusion that a detailed estimation of probability would lead to significant error. Nevertheless, relevant information was extracted; it was estimated that the maximum daily rainfall threshold (70 mm) would be surpassed at least once every three years, as was the magnitude of uncertainty affecting hydrological parameter estimation. This paper's conclusions may be of interest to non-parametric modellers and decision-makers, as such modelling and imprecise probability represent an alternative for hydrological variable assessment and perhaps an obligatory procedure in the future. Their potential lies in treating scarce information, and they represent a robust modelling strategy for non-seasonal stochastic modelling conditions.
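
The record's combination of a multinomial model with confidence intervals to form probability bounds can be sketched numerically. This is an illustrative reconstruction under invented rainfall bins and counts, not the authors' algorithm:

```python
# Illustrative sketch (not the paper's algorithm): bound the CDF of a
# rainfall category variable by attaching confidence intervals to the
# multinomial cell probabilities, in the spirit of a probability box.
import math

def multinomial_ci(counts, z=1.96):
    """Normal-approximation 95% CI for each category probability."""
    n = sum(counts)
    bounds = []
    for c in counts:
        p = c / n
        half = z * math.sqrt(p * (1 - p) / n)
        bounds.append((max(0.0, p - half), min(1.0, p + half)))
    return bounds

def cdf_pbox(counts, z=1.96):
    """Accumulate the interval bounds into lower/upper CDF envelopes."""
    lower, upper = [], []
    lo_acc = hi_acc = 0.0
    for lo, hi in multinomial_ci(counts, z):
        lo_acc += lo
        hi_acc += hi
        lower.append(min(1.0, lo_acc))
        upper.append(min(1.0, hi_acc))
    return lower, upper

# Hypothetical daily-rainfall counts per bin: [0-10), [10-40), [40-70), >=70 mm
counts = [220, 90, 35, 5]
lower, upper = cdf_pbox(counts)
```

The gap between the two envelopes is a direct picture of the uncertainty the abstract describes; with scarce data the band can span much of [0, 1].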

  3. Nonparametric NAR-ARCH Modelling of Stock Prices by the Kernel Methodology

    Directory of Open Access Journals (Sweden)

    Mohamed Chikhi

    2018-02-01

    Full Text Available This paper analyses the cyclical behaviour of the Orange stock price listed on the French stock exchange from 01/03/2000 to 02/02/2017 by testing for nonlinearities through a class of conditional heteroscedastic nonparametric models. The linearity and Gaussianity assumptions are rejected for Orange stock returns, and informational shocks have transitory effects on returns and volatility. The forecasting results show that Orange stock prices are short-term predictable and that the nonparametric NAR-ARCH model outperforms the parametric MA-APARCH model for short horizons. Moreover, the estimates of this model are also better than the predictions of the random walk model. This finding provides evidence for a weak form of inefficiency in the Paris stock market with limited rationality, and thus arbitrage opportunities emerge.

  4. Promotion time cure rate model with nonparametric form of covariate effects.

    Science.gov (United States)

    Chen, Tianlei; Du, Pang

    2018-05-10

    Survival data with a cured portion are commonly seen in clinical trials. Motivated by a biological interpretation of cancer metastasis, the promotion time cure model is a popular alternative to the mixture cure rate model for analyzing such data. Existing promotion time cure models all assume a restrictive parametric form of covariate effects, which can be incorrectly specified, especially at the exploratory stage. In this paper, we propose a nonparametric approach to modeling the covariate effects under the framework of the promotion time cure model. The covariate effect function is estimated by smoothing splines via the optimization of a penalized profile likelihood. Point-wise interval estimates are also derived from the Bayesian interpretation of the penalized profile likelihood. Asymptotic convergence rates are established for the proposed estimates. Simulations show excellent performance of the proposed nonparametric method, which is then applied to a melanoma study. Copyright © 2018 John Wiley & Sons, Ltd.

  5. Bayesian Nonparametric Statistical Inference for Shock Models and Wear Processes.

    Science.gov (United States)

    1979-12-01

    ...also note that the results in Section 2 do not depend on the support of F.) This shock model has been studied by Esary, Marshall and Proschan (1973)...Barlow and Proschan (1975), among others. The analogy of the shock model in risk and actuarial analysis has been given by Bühlmann (1970, Chapter 2)... Mathematical Statistics, Vol. 4, pp. 894-906. Billingsley, P. (1968), CONVERGENCE OF PROBABILITY MEASURES, John Wiley, New York. Bühlmann, H. (1970

  6. Bayesian nonparametric estimation of hazard rate in monotone Aalen model

    Czech Academy of Sciences Publication Activity Database

    Timková, Jana

    2014-01-01

    Roč. 50, č. 6 (2014), s. 849-868 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Aalen model * Bayesian estimation * MCMC Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/SI/timkova-0438210.pdf

  7. Nonparametric volatility density estimation for discrete time models

    NARCIS (Netherlands)

    Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.

    2005-01-01

    We consider discrete time models for asset prices with a stationary volatility process. We aim at estimating the multivariate density of this process at a set of consecutive time instants. A Fourier-type deconvolution kernel density estimator based on the logarithm of the squared process is proposed.

  8. Nonparametric Estimation of Regression Parameters in Measurement Error Models

    Czech Academy of Sciences Publication Activity Database

    Ehsanes Saleh, A.K.M.D.; Picek, J.; Kalina, Jan

    2009-01-01

    Roč. 67, č. 2 (2009), s. 177-200 ISSN 0026-1424 Grant - others:GA AV ČR(CZ) IAA101120801; GA MŠk(CZ) LC06024 Institutional research plan: CEZ:AV0Z10300504 Keywords : asymptotic relative efficiency (ARE) * asymptotic theory * emaculate mode * ME model * R-estimation * Reliability ratio (RR) Subject RIV: BB - Applied Statistics, Operational Research

  9. On concurvity in nonlinear and nonparametric regression models

    Directory of Open Access Journals (Sweden)

    Sonia Amodio

    2014-12-01

    Full Text Available When data are affected by multicollinearity in the linear regression framework, concurvity will be present when fitting a generalized additive model (GAM). The term concurvity describes nonlinear dependencies among the predictor variables. Just as collinearity results in inflated variance of the estimated regression coefficients in the linear regression model, the presence of concurvity leads to instability of the estimated coefficients in GAMs. Even though the backfitting algorithm will always converge to a solution, in the case of concurvity the final solution of the backfitting procedure in fitting a GAM is influenced by the starting functions. While exact concurvity is highly unlikely, approximate concurvity, the analogue of multicollinearity, is of practical concern, as it can lead to upwardly biased estimates of the parameters and to underestimation of their standard errors, increasing the risk of committing a type I error. We compare the existing approaches to detect concurvity, pointing out their advantages and drawbacks, using simulated and real data sets. As a result, this paper provides a general criterion to detect concurvity in nonlinear and nonparametric regression models.
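
A toy simulation (mine, not from the paper) makes the distinction concrete: a purely quadratic dependence between two predictors is invisible to the linear correlation used in collinearity diagnostics but shows up once a flexible fit is allowed. The polynomial stand-in for a smooth term is an assumption made for brevity:

```python
# Hypothetical sketch: approximate concurvity as the nonlinear analogue of
# multicollinearity. x2 depends on x1 only through x1**2, so the linear
# correlation is near zero while a flexible regression explains almost all
# of the variance of x2.
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 500)
x2 = x1**2 + 0.05 * rng.normal(size=500)   # nonlinearly dependent on x1

# Linear correlation misses the dependence ...
linear_r2 = np.corrcoef(x1, x2)[0, 1] ** 2

# ... but a flexible (here cubic polynomial) regression of x2 on x1 reveals it.
coeffs = np.polyfit(x1, x2, deg=3)
resid = x2 - np.polyval(coeffs, x1)
concurvity_r2 = 1 - resid.var() / x2.var()
```

In a GAM one would regress the smooth term of one predictor on the smooths of the others; the polynomial R² plays that role here.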

  10. Bayesian Non-Parametric Mixtures of GARCH(1,1 Models

    Directory of Open Access Journals (Sweden)

    John W. Lau

    2012-01-01

    Full Text Available Traditional GARCH models describe volatility levels that evolve smoothly over time, generated by a single GARCH regime. However, nonstationary time series data may exhibit abrupt changes in volatility, suggesting changes in the underlying GARCH regimes. Further, the number and times of regime changes are not always obvious. This article outlines a nonparametric mixture of GARCH models that is able to estimate the number and time of volatility regime changes by mixing over the Poisson-Kingman process. This process, a generalisation of the Dirichlet process typically used in nonparametric models, provides a richer clustering structure for time-dependent data, and its application to time series data is novel. Inference is Bayesian, and a Markov chain Monte Carlo algorithm to explore the posterior distribution is described. The methodology is illustrated on the Standard and Poor's 500 financial index.

  11. Scalable Bayesian nonparametric regression via a Plackett-Luce model for conditional ranks

    Science.gov (United States)

    Gray-Davies, Tristan; Holmes, Chris C.; Caron, François

    2018-01-01

    We present a novel Bayesian nonparametric regression model for covariates X and continuous response variable Y ∈ ℝ. The model is parametrized in terms of marginal distributions for Y and X and a regression function which tunes the stochastic ordering of the conditional distributions F (y|x). By adopting an approximate composite likelihood approach, we show that the resulting posterior inference can be decoupled for the separate components of the model. This procedure can scale to very large datasets and allows for the use of standard, existing, software from Bayesian nonparametric density estimation and Plackett-Luce ranking estimation to be applied. As an illustration, we show an application of our approach to a US Census dataset, with over 1,300,000 data points and more than 100 covariates. PMID:29623150

  12. Frontiers in Anisotropic Shock-Wave Modeling

    Science.gov (United States)

    2012-02-01

    Epoxy IFPT simulated and experimental back surface velocities for 572, 788, and 1015 m/s. The experimental data Kevlar/Epoxy materials recovered after...model development for the Nextel and Kevlar/Epoxy materials subject to hypervelocity impact. They also performed the experimental inverse flyer test...(IFPT) for Nextel and Kevlar/Epoxy. Their models were to be macro-mechanically based and suitable for implementation into a hydrocode coupled with EOS

  13. Bayesian Bandwidth Selection for a Nonparametric Regression Model with Mixed Types of Regressors

    Directory of Open Access Journals (Sweden)

    Xibin Zhang

    2016-04-01

    Full Text Available This paper develops a sampling algorithm for bandwidth estimation in a nonparametric regression model with continuous and discrete regressors under an unknown error density. The error density is approximated by the kernel density estimator of the unobserved errors, while the regression function is estimated using the Nadaraya-Watson estimator admitting continuous and discrete regressors. We derive an approximate likelihood and posterior for the bandwidth parameters, followed by a sampling algorithm. Simulation results show that the proposed approach typically leads to better accuracy of the resulting estimates than cross-validation, particularly for smaller sample sizes. This bandwidth estimation approach is applied to a nonparametric regression model of the Australian All Ordinaries returns and to kernel density estimation of gross domestic product (GDP) growth rates among Organisation for Economic Co-operation and Development (OECD) and non-OECD countries.
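
A minimal sketch of the Nadaraya-Watson estimator with mixed regressor types, the building block over which the paper's sampler places bandwidth posteriors. The Gaussian and Aitchison-Aitken kernels are the standard choices for continuous and discrete regressors; the hand-picked bandwidths and simulated data are assumptions of this illustration, not the paper's estimated values:

```python
# Sketch of the Nadaraya-Watson estimator admitting one continuous and one
# discrete regressor. Bandwidths h and lam are fixed by hand here, not
# sampled as in the proposed algorithm.
import numpy as np

def nw_mixed(x_c, x_d, y, x0_c, x0_d, h, lam, n_levels):
    """NW estimate of E[y | x0_c, x0_d].

    Continuous part: Gaussian kernel with bandwidth h.
    Discrete part: Aitchison-Aitken kernel with smoothing lam.
    """
    k_c = np.exp(-0.5 * ((x_c - x0_c) / h) ** 2)
    k_d = np.where(x_d == x0_d, 1.0 - lam, lam / (n_levels - 1))
    w = k_c * k_d
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(1)
n = 400
xc = rng.uniform(0, 1, n)
xd = rng.integers(0, 2, n)          # binary regressor
y = np.sin(2 * np.pi * xc) + 0.5 * xd + 0.1 * rng.normal(size=n)

# True regression value at (0.25, 1) is sin(pi/2) + 0.5 = 1.5
est = nw_mixed(xc, xd, y, 0.25, 1, h=0.05, lam=0.1, n_levels=2)
```

With lam = 0 the estimator splits the sample exactly by the discrete level; lam > 0 borrows strength across levels at the cost of some bias.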

  14. At the biological modeling and simulation frontier.

    Science.gov (United States)

    Hunt, C Anthony; Ropella, Glen E P; Lam, Tai Ning; Tang, Jonathan; Kim, Sean H J; Engelberg, Jesse A; Sheikh-Bahaei, Shahab

    2009-11-01

    We provide a rationale for and describe examples of synthetic modeling and simulation (M&S) of biological systems. We explain how synthetic methods are distinct from familiar inductive methods. Synthetic M&S is a means to better understand the mechanisms that generate normal and disease-related phenomena observed in research, and how compounds of interest interact with them to alter phenomena. An objective is to build better, working hypotheses of plausible mechanisms. A synthetic model is an extant hypothesis: execution produces an observable mechanism and phenomena. Mobile objects representing compounds carry information enabling components to distinguish between them and react accordingly when different compounds are studied simultaneously. We argue that the familiar inductive approaches contribute to the general inefficiencies being experienced by pharmaceutical R&D, and that use of synthetic approaches accelerates and improves R&D decision-making and thus the drug development process. A reason is that synthetic models encourage and facilitate abductive scientific reasoning, a primary means of knowledge creation and creative cognition. When synthetic models are executed, we observe different aspects of knowledge in action from different perspectives. These models can be tuned to reflect differences in experimental conditions and individuals, making translational research more concrete while moving us closer to personalized medicine.

  15. Alternative Approaches to Technical Efficiency Estimation in the Stochastic Frontier Model

    OpenAIRE

    Acquah, H. de-Graft; Onumah, E. E.

    2014-01-01

    Estimating the stochastic frontier model and calculating the technical efficiency of decision making units are of great importance in applied production economics. This paper estimates technical efficiency from the stochastic frontier model using the Jondrow et al. and the Battese and Coelli approaches. In order to compare the alternative methods, simulated data with sample sizes of 60 and 200 are generated from a stochastic frontier model commonly applied to agricultural firms. Simulated data is employed to co...
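
For the common normal/half-normal specification, the two efficiency estimators compared in the record have closed forms. The sketch below uses assumed variance parameters and a single composed residual, and is not the authors' code:

```python
# Hedged sketch: technical efficiency in a normal/half-normal stochastic
# frontier y = x'b + v - u, via the Jondrow et al. (JLMS) and
# Battese-Coelli (BC) point estimators. eps, sigma_u, sigma_v are assumed.
from math import exp, sqrt
from statistics import NormalDist

N = NormalDist()

def efficiency(eps, sigma_u, sigma_v):
    s2 = sigma_u**2 + sigma_v**2
    mu_star = -eps * sigma_u**2 / s2
    sig_star = sigma_u * sigma_v / sqrt(s2)
    z = mu_star / sig_star
    # JLMS: TE = exp(-E[u | eps])
    e_u = mu_star + sig_star * N.pdf(z) / N.cdf(z)
    te_jlms = exp(-e_u)
    # Battese-Coelli: TE = E[exp(-u) | eps]
    te_bc = exp(-mu_star + 0.5 * sig_star**2) * N.cdf(z - sig_star) / N.cdf(z)
    return te_jlms, te_bc

te1, te2 = efficiency(eps=-0.3, sigma_u=0.4, sigma_v=0.2)
```

By Jensen's inequality the Battese-Coelli estimate E[exp(-u)|ε] is never below the JLMS-based exp(-E[u|ε]), one source of the numerical differences such comparisons report.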

  16. Improving salt marsh digital elevation model accuracy with full-waveform lidar and nonparametric predictive modeling

    Science.gov (United States)

    Rogers, Jeffrey N.; Parrish, Christopher E.; Ward, Larry G.; Burdick, David M.

    2018-03-01

    Salt marsh vegetation tends to increase vertical uncertainty in light detection and ranging (lidar) derived elevation data, often causing the data to become ineffective for analysis of topographic features governing tidal inundation or vegetation zonation. Previous attempts at improving lidar data collected in salt marsh environments range from simply computing and subtracting the global elevation bias to more complex methods such as computing vegetation-specific, constant correction factors. The vegetation specific corrections can be used along with an existing habitat map to apply separate corrections to different areas within a study site. It is hypothesized here that correcting salt marsh lidar data by applying location-specific, point-by-point corrections, which are computed from lidar waveform-derived features, tidal-datum based elevation, distance from shoreline and other lidar digital elevation model based variables, using nonparametric regression will produce better results. The methods were developed and tested using full-waveform lidar and ground truth for three marshes in Cape Cod, Massachusetts, U.S.A. Five different model algorithms for nonparametric regression were evaluated, with TreeNet's stochastic gradient boosting algorithm consistently producing better regression and classification results. Additionally, models were constructed to predict the vegetative zone (high marsh and low marsh). The predictive modeling methods used in this study estimated ground elevation with a mean bias of 0.00 m and a standard deviation of 0.07 m (0.07 m root mean square error). These methods appear very promising for correction of salt marsh lidar data and, importantly, do not require an existing habitat map, biomass measurements, or image based remote sensing data such as multi/hyperspectral imagery.
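
The point-by-point correction idea can be sketched on synthetic data. TreeNet is proprietary, so a simple k-nearest-neighbour regressor stands in for the stochastic gradient boosting used in the paper; the features and the vegetation bias model below are invented for illustration only:

```python
# Simulated sketch (not the authors' pipeline): correct vegetation-biased
# lidar elevations point by point with a nonparametric regressor trained
# on ground-truth plots. k-NN stands in for TreeNet gradient boosting.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
waveform_width = rng.uniform(0.5, 3.0, n)      # mock lidar waveform feature
dist_shore = rng.uniform(0, 200, n)            # distance from shoreline (m)
true_ground = 0.002 * dist_shore               # gentle marsh slope
bias = 0.05 * waveform_width**2                # vegetation raises returns
lidar_z = true_ground + bias + 0.02 * rng.normal(size=n)

X = np.column_stack([waveform_width, dist_shore, lidar_z])

def knn_predict(X_train, y_train, X_query, k=15):
    """Predict ground elevation as the mean of the k nearest training points."""
    scale = X_train.std(axis=0)
    out = np.empty(len(X_query))
    for i, q in enumerate(X_query):
        d = np.sum(((X_train - q) / scale) ** 2, axis=1)
        out[i] = y_train[np.argpartition(d, k)[:k]].mean()
    return out

# Train on the first 800 points ("ground truth" plots), correct the rest.
pred = knn_predict(X[:800], true_ground[:800], X[800:])
raw_rmse = np.sqrt(np.mean((lidar_z[800:] - true_ground[800:]) ** 2))
corr_rmse = np.sqrt(np.mean((pred - true_ground[800:]) ** 2))
```

The point of the exercise is the same as in the paper: location-specific corrections driven by waveform and positional features beat a single global bias subtraction.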

  17. Hierarchical Bayesian nonparametric mixture models for clustering with variable relevance determination.

    Science.gov (United States)

    Yau, Christopher; Holmes, Chris

    2011-07-01

    We propose a hierarchical Bayesian nonparametric mixture model for clustering when some of the covariates are assumed to be of varying relevance to the clustering problem. This can be thought of as an issue in variable selection for unsupervised learning. We demonstrate that by defining a hierarchical population based nonparametric prior on the cluster locations scaled by the inverse covariance matrices of the likelihood we arrive at a 'sparsity prior' representation which admits a conditionally conjugate prior. This allows us to perform full Gibbs sampling to obtain posterior distributions over parameters of interest including an explicit measure of each covariate's relevance and a distribution over the number of potential clusters present in the data. This also allows for individual cluster specific variable selection. We demonstrate improved inference on a number of canonical problems.

  18. Impossible Frontiers

    OpenAIRE

    Brennan, Thomas J.; Lo, Andrew W.

    2009-01-01

    A key result of the Capital Asset Pricing Model (CAPM) is that the market portfolio---the portfolio of all assets in which each asset's weight is proportional to its total market capitalization---lies on the mean-variance efficient frontier, the set of portfolios having mean-variance characteristics that cannot be improved upon. Therefore, the CAPM cannot be consistent with efficient frontiers for which every frontier portfolio has at least one negative weight or short position. We call such ...

  19. The Use of Nonparametric Kernel Regression Methods in Econometric Production Analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard

    This PhD thesis addresses one of the fundamental problems in applied econometric analysis, namely the econometric estimation of regression functions. The conventional approach to regression analysis is the parametric approach, which requires the researcher to specify the form of the regression... and nonparametric estimations of production functions in order to evaluate the optimal firm size. The second paper discusses the use of parametric and nonparametric regression methods to estimate panel data regression models. The third paper analyses production risk, price uncertainty, and farmers' risk preferences within a nonparametric panel data regression framework. The fourth paper analyses the technical efficiency of dairy farms with environmental output using nonparametric kernel regression in a semiparametric stochastic frontier analysis. The results provided in this PhD thesis show that nonparametric...

  20. Comparison of Parametric and Nonparametric Methods for Analyzing the Bias of a Numerical Model

    Directory of Open Access Journals (Sweden)

    Isaac Mugume

    2016-01-01

    Full Text Available Numerical models are presently applied in many fields for simulation and prediction, operation, or research. The output from these models normally has both systematic and random errors. The study compared January 2015 temperature data for Uganda as simulated using the Weather Research and Forecast model with actual observed station temperature data to analyze the bias using parametric (the root mean square error (RMSE), the mean absolute error (MAE), mean error (ME), skewness, and the bias easy estimate (BES)) and nonparametric (the sign test, STM) methods. The RMSE normally overestimates the error compared to MAE. The RMSE and MAE are not sensitive to the direction of bias. The ME gives both the direction and magnitude of bias but can be distorted by extreme values, while the BES is insensitive to extreme values. The STM is robust for giving the direction of bias; it is not sensitive to extreme values but it does not give the magnitude of bias. The graphical tools (such as time series and cumulative curves) show the performance of the model with time. It is recommended to integrate parametric and nonparametric methods along with graphical methods for a comprehensive analysis of the bias of a numerical model.
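
The measures compared in the abstract are easy to state concretely; here they are computed on made-up model-versus-station temperature pairs (°C), with an exact binomial sign test playing the role of the nonparametric STM:

```python
# Worked sketch of the bias measures compared in the paper, on invented
# model-vs-observation temperature pairs.
import math

model = [24.1, 23.8, 25.0, 22.9, 24.6, 23.5, 24.9, 23.2]
obs   = [23.5, 23.9, 24.2, 22.5, 24.0, 23.0, 24.1, 23.4]
err = [m - o for m, o in zip(model, obs)]
n = len(err)

me   = sum(err) / n                              # mean error (signed)
mae  = sum(abs(e) for e in err) / n              # mean absolute error
rmse = math.sqrt(sum(e * e for e in err) / n)    # root mean square error

# Sign test: under "no bias" the count of positive errors is Binomial(n, 0.5)
pos = sum(e > 0 for e in err)
# Exact two-sided binomial p-value
p = sum(math.comb(n, k) for k in range(n + 1)
        if abs(k - n / 2) >= abs(pos - n / 2)) / 2 ** n
```

As the abstract notes, RMSE ≥ MAE always, the ME keeps the sign of the bias, and the sign test reports only its direction (here 6 of 8 errors are positive).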

  1. Parametric, nonparametric and parametric modelling of a chaotic circuit time series

    Science.gov (United States)

    Timmer, J.; Rust, H.; Horbelt, W.; Voss, H. U.

    2000-09-01

    The determination of a differential equation underlying a measured time series is a frequently arising task in nonlinear time series analysis. In the validation of a proposed model one often faces the dilemma that it is hard to decide whether possible discrepancies between the time series and model output are caused by an inappropriate model or by bad estimates of parameters in a correct type of model, or both. We propose a combination of parametric modelling based on Bock's multiple shooting algorithm and nonparametric modelling based on optimal transformations as a strategy to test proposed models and if rejected suggest and test new ones. We exemplify this strategy on an experimental time series from a chaotic circuit where we obtain an extremely accurate reconstruction of the observed attractor.

  2. Modeling the World Health Organization Disability Assessment Schedule II using non-parametric item response models.

    Science.gov (United States)

    Galindo-Garre, Francisca; Hidalgo, María Dolores; Guilera, Georgina; Pino, Oscar; Rojo, J Emilio; Gómez-Benito, Juana

    2015-03-01

    The World Health Organization Disability Assessment Schedule II (WHO-DAS II) is a multidimensional instrument developed for measuring disability. It comprises six domains (understanding and communicating, getting around, self-care, getting along with others, life activities and participation in society). The main purpose of this paper is the evaluation of the psychometric properties of each domain of the WHO-DAS II with parametric and non-parametric Item Response Theory (IRT) models. A secondary objective is to assess whether the WHO-DAS II items within each domain form a hierarchy of invariantly ordered severity indicators of disability. A sample of 352 patients with a schizophrenia spectrum disorder is used in this study. The 36-item WHO-DAS II was administered during the consultation. Partial Credit and Mokken scale models are used to study the psychometric properties of the questionnaire. The psychometric properties of the WHO-DAS II scale are satisfactory for all the domains. However, we identify a few items that do not discriminate satisfactorily between different levels of disability and cannot be invariantly ordered in the scale. In conclusion, the WHO-DAS II can be used to assess overall disability in patients with schizophrenia, but some domains are too general to assess functionality in these patients because they contain items that are not applicable to this pathology. Copyright © 2014 John Wiley & Sons, Ltd.

  3. Non-parametric Bayesian graph models reveal community structure in resting state fMRI

    DEFF Research Database (Denmark)

    Andersen, Kasper Winther; Madsen, Kristoffer H.; Siebner, Hartwig Roman

    2014-01-01

    Modeling of resting state functional magnetic resonance imaging (rs-fMRI) data using network models is of increasing interest. It is often desirable to group nodes into clusters to interpret the communication patterns between nodes. In this study we consider three different nonparametric Bayesian models for node clustering in complex networks. In particular, we test their ability to predict unseen data and their ability to reproduce clustering across datasets. The three generative models considered are the Infinite Relational Model (IRM), Bayesian Community Detection (BCD), and the Infinite Diagonal Model (IDM)... between clusters. BCD restricts the between-cluster link probabilities to be strictly lower than within-cluster link probabilities to conform to the community structure typically seen in social networks. IDM only models a single between-cluster link probability, which can be interpreted as a background...

  4. A BAYESIAN NONPARAMETRIC MIXTURE MODEL FOR SELECTING GENES AND GENE SUBNETWORKS.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Yu, Tianwei

    2014-06-01

    It is very challenging to select informative features from tens of thousands of measured features in high-throughput data analysis. Recently, several parametric/regression models have been developed utilizing the gene network information to select genes or pathways strongly associated with a clinical/biological outcome. Alternatively, in this paper, we propose a nonparametric Bayesian model for gene selection incorporating network information. In addition to identifying genes that have a strong association with a clinical outcome, our model can select genes with particular expressional behavior, in which case the regression models are not directly applicable. We show that our proposed model is equivalent to an infinite mixture model, for which we develop a posterior computation algorithm based on Markov chain Monte Carlo (MCMC) methods. We also propose two fast computing algorithms that approximate the posterior simulation with good accuracy but relatively low computational cost. We illustrate our methods on simulation studies and the analysis of Spellman yeast cell cycle microarray data.

  5. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang; Cheng, James; Xiao, Xiaokui; Fujimaki, Ryohei; Muraoka, Yusuke

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, attracts much interests recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed, that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  7. A non-parametric consistency test of the ΛCDM model with Planck CMB data

    Energy Technology Data Exchange (ETDEWEB)

    Aghamousa, Amir; Shafieloo, Arman [Korea Astronomy and Space Science Institute, Daejeon 305-348 (Korea, Republic of); Hamann, Jan, E-mail: amir@aghamousa.com, E-mail: jan.hamann@unsw.edu.au, E-mail: shafieloo@kasi.re.kr [School of Physics, The University of New South Wales, Sydney NSW 2052 (Australia)

    2017-09-01

    Non-parametric reconstruction methods, such as Gaussian process (GP) regression, provide a model-independent way of estimating an underlying function and its uncertainty from noisy data. We demonstrate how GP-reconstruction can be used as a consistency test between a given data set and a specific model by looking for structures in the residuals of the data with respect to the model's best-fit. Applying this formalism to the Planck temperature and polarisation power spectrum measurements, we test their global consistency with the predictions of the base ΛCDM model. Our results do not show any serious inconsistencies, lending further support to the interpretation of the base ΛCDM model as cosmology's gold standard.
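
A schematic version of the residual-based test, with a hand-rolled GP regression: fit the GP to the residuals of the data about the best-fit model and check that the reconstruction is consistent with zero. The squared-exponential kernel, its hyperparameters, and the mock residuals are illustrative assumptions, not the Planck analysis:

```python
# Sketch of a GP consistency test on mock residuals (data minus best-fit).
# Kernel, hyperparameters and data are invented; only the logic follows
# the reconstruction idea described in the abstract.
import numpy as np

def gp_posterior(x_tr, y_tr, x_te, amp=1.0, ell=0.5, noise=0.1):
    """Posterior mean/variance of a GP with a squared-exponential kernel."""
    def k(a, b):
        return amp**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(x_tr, x_tr) + noise**2 * np.eye(len(x_tr))
    Ks = k(x_te, x_tr)
    mean = Ks @ np.linalg.solve(K, y_tr)
    var = amp**2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 60)
residuals = 0.1 * rng.normal(size=60)   # pure noise: the model is adequate
mean, var = gp_posterior(x, residuals, x)

# With no structure in the residuals, the GP reconstruction should sit
# within a few posterior standard deviations of zero everywhere.
n_sigma = np.abs(mean) / np.sqrt(var)
```

If the residuals contained a coherent feature, n_sigma would spike there, flagging an inconsistency between the data and the assumed model.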

  8. A Frontier Model for Landscape Ecology: The Tapir in Honduras

    OpenAIRE

    Kevin Flesher; Eduardo Ley

    1995-01-01

    We borrow a frontier specification from the econometrics literature to make inferences about the tolerance of the tapir to human settlements. We estimate the width of an invisible band surrounding human settlements which would act as a frontier or exclusion zone to the tapir to be around 290 meters.

  9. A semi-nonparametric mixture model for selecting functionally consistent proteins.

    Science.gov (United States)

    Yu, Lianbo; Doerge, Rw

    2010-09-28

    High-throughput technologies have led to a new era of proteomics. Although protein microarray experiments are becoming more commonplace, there are a variety of experimental and statistical issues that have yet to be addressed, and that will carry over to new high-throughput technologies unless they are investigated. One of the largest of these challenges is the selection of functionally consistent proteins. We present a novel semi-nonparametric mixture model for classifying proteins as consistent or inconsistent while controlling the false discovery rate and the false non-discovery rate. The performance of the proposed approach is compared to current methods via simulation under a variety of experimental conditions. We provide a statistical method for selecting functionally consistent proteins in the context of protein microarray experiments, but the proposed semi-nonparametric mixture model method can certainly be generalized to solve other mixture data problems. The main advantage of this approach is that it provides the posterior probability of consistency for each protein.

  10. Using nonparametrics to specify a model to measure the value of travel time

    DEFF Research Database (Denmark)

    Fosgerau, Mogens

    2007-01-01

    Using a range of nonparametric methods, the paper examines the specification of a model to evaluate the willingness-to-pay (WTP) for travel time changes from binomial choice data from a simple time-cost trading experiment. The analysis favours a model with random WTP as the only source of randomness over a model with fixed WTP which is linear in time and cost and has an additive random error term. Results further indicate that the distribution of log WTP can be described as the sum of a linear index fixing the location of the log WTP distribution and an independent random variable representing unobserved heterogeneity. This formulation is useful for parametric modelling. The index indicates that the WTP varies systematically with income and other individual characteristics. The WTP also varies with the time difference presented in the experiment, which contradicts standard utility theory.

  11. Further Empirical Results on Parametric Versus Non-Parametric IRT Modeling of Likert-Type Personality Data

    Science.gov (United States)

    Maydeu-Olivares, Albert

    2005-01-01

    Chernyshenko, Stark, Chan, Drasgow, and Williams (2001) investigated the fit of Samejima's logistic graded model and Levine's non-parametric MFS model to the scales of two personality questionnaires and found that the graded model did not fit well. We attribute the poor fit of the graded model to small amounts of multidimensionality present in…

  12. Bayesian nonparametric modeling for comparison of single-neuron firing intensities.

    Science.gov (United States)

    Kottas, Athanasios; Behseta, Sam

    2010-03-01

    We propose a fully inferential model-based approach to the problem of comparing the firing patterns of a neuron recorded under two distinct experimental conditions. The methodology is based on nonhomogeneous Poisson process models for the firing times of each condition with flexible nonparametric mixture prior models for the corresponding intensity functions. We demonstrate posterior inferences from a global analysis, which may be used to compare the two conditions over the entire experimental time window, as well as from a pointwise analysis at selected time points to detect local deviations of firing patterns from one condition to another. We apply our method to two neurons recorded from the primary motor cortex of a monkey performing a sequence of reaching tasks.
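    The nonhomogeneous Poisson process at the core of this firing-time model can be simulated with the classic thinning algorithm. A minimal sketch follows; the ramp intensity and rate bound are hypothetical stand-ins, and the paper's nonparametric mixture prior on the intensity is not shown:

```python
import random

def sample_nhpp(intensity, t_max, lam_max, seed=3):
    """Sample event times from a nonhomogeneous Poisson process by thinning
    (Lewis-Shedler): propose events from a homogeneous process with rate
    lam_max, and keep each with probability intensity(t) / lam_max."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(lam_max)  # next candidate event
        if t > t_max:
            return times
        if rng.random() < intensity(t) / lam_max:
            times.append(t)

# Hypothetical intensity ramping up over a 2-second trial (spikes/s);
# lam_max must bound the intensity on [0, t_max].
spikes = sample_nhpp(lambda t: 5.0 + 20.0 * t, t_max=2.0, lam_max=45.0)
```

    Comparing two conditions, as in the paper, would amount to fitting separate intensity functions to each condition's spike trains and contrasting them globally or pointwise.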

  13. Quality frontier of electricity distribution: Supply security, best practices, and underground cabling in Finland

    International Nuclear Information System (INIS)

    Saastamoinen, Antti; Kuosmanen, Timo

    2016-01-01

    Electricity distribution is a prime example of a local monopoly. In most countries, the costs of electricity distribution operators are regulated by the government. However, cost regulation may create adverse incentives to compromise the quality of service. To avoid this, cost regulation is often amended with quality incentives. This study applies the theory and methods of productivity analysis to model the frontier of service quality. A semi-nonparametric estimation method is developed, which does not assume any particular functional form for the quality frontier, but can accommodate stochastic noise and heteroscedasticity. The empirical part of our paper examines how underground cabling and location affect interruption costs. As expected, a higher proportion of underground cabling decreases the level of interruption costs. The effects of cabling and location on the variance of performance are also considered. Location, in particular, is found to be a significant source of heteroscedasticity in the interruption costs. Finally, the proposed quality frontier benchmark is compared to the current practice of the Finnish regulation system. The proposed quality frontier is found to provide a more meaningful and stable basis for setting quality targets than the average-practice benchmarks currently in use. - Highlights: • Cost regulation may create adverse incentives to lower the service quality. • We model the frontier of service quality using a semi-nonparametric approach. • Our nonparametric frontier accounts for stochastic noise and heteroscedasticity. • We estimate how underground cabling and location affect interruption costs. • We compare our quality frontier with the current quality regulation in Finland.

  14. Nonparametric test of consistency between cosmological models and multiband CMB measurements

    Energy Technology Data Exchange (ETDEWEB)

    Aghamousa, Amir [Asia Pacific Center for Theoretical Physics, Pohang, Gyeongbuk 790-784 (Korea, Republic of); Shafieloo, Arman, E-mail: amir@apctp.org, E-mail: shafieloo@kasi.re.kr [Korea Astronomy and Space Science Institute, Daejeon 305-348 (Korea, Republic of)

    2015-06-01

    We present a novel approach to test the consistency of cosmological models with multiband CMB data using a nonparametric approach. In our analysis we calibrate the REACT (Risk Estimation and Adaptation after Coordinate Transformation) confidence levels associated with distances in function space (confidence distances) based on Monte Carlo simulations in order to test the consistency of an assumed cosmological model with observation. To show the applicability of our algorithm, we confront Planck 2013 temperature data with the concordance model of cosmology considering two different Planck spectra combinations. In order to have an accurate quantitative statistical measure to compare between the data and the theoretical expectations, we calibrate REACT confidence distances and perform a bias control using many realizations of the data. Our results in this work using Planck 2013 temperature data put the best fit ΛCDM model at 95% (∼ 2σ) confidence distance from the center of the nonparametric confidence set, while repeating the analysis excluding the Planck 217 × 217 GHz spectrum data, the best fit ΛCDM model shifts to 70% (∼ 1σ) confidence distance. The most prominent features in the data deviating from the best fit ΛCDM model seem to be at low multipoles 18 < ℓ < 26 at greater than 2σ, ℓ ∼ 750 at ∼1 to 2σ, and ℓ ∼ 1800 at greater than 2σ level. Excluding the 217 × 217 GHz spectrum, the feature at ℓ ∼ 1800 becomes substantially less significant, at ∼1 to 2σ confidence level. Results of our analysis based on the new approach we propose in this work are in agreement with analyses done using alternative methods.

  15. A Nonparametric Shape Prior Constrained Active Contour Model for Segmentation of Coronaries in CTA Images

    Directory of Open Access Journals (Sweden)

    Yin Wang

    2014-01-01

    We present a nonparametric shape constrained algorithm for segmentation of coronary arteries in computed tomography images within the framework of active contours. An adaptive scale selection scheme, based on the global histogram information of the image data, is employed to determine the appropriate window size for each point on the active contour, which improves the performance of the active contour model in low contrast local image regions. The possible leakage, which cannot be identified by using intensity features alone, is reduced through the application of the proposed shape constraint, where the shape of a circularly sampled intensity profile is used to evaluate the likelihood that the current segmentation corresponds to vascular structures. Experiments on both synthetic and clinical datasets have demonstrated the efficiency and robustness of the proposed method. The results on clinical datasets have shown that the proposed approach is capable of extracting more detailed coronary vessels with subvoxel accuracy.

  16. A Nonparametric Operational Risk Modeling Approach Based on Cornish-Fisher Expansion

    Directory of Open Access Journals (Sweden)

    Xiaoqian Zhu

    2014-01-01

    It is generally accepted that the choice of severity distribution in the loss distribution approach has a significant effect on operational risk capital estimation. However, the commonly used parametric approaches with predefined distribution assumptions might not be able to fit the severity distribution accurately. The objective of this paper is to propose a nonparametric operational risk modeling approach based on Cornish-Fisher expansion. In this approach, the samples of severity are generated by Cornish-Fisher expansion and then used in a Monte Carlo simulation to sketch the annual operational loss distribution. In the experiment, the proposed approach is employed to calculate the operational risk capital charge for the Chinese banking sector as a whole. The experiment dataset is the most comprehensive operational risk dataset in China as far as we know. The results show that the proposed approach is able to use the information of higher order moments and might be more effective and stable than the commonly used parametric approaches.
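    As a rough illustration of the severity-sampling step, the fourth-order Cornish-Fisher expansion maps standard-normal draws into draws with prescribed skewness and excess kurtosis. The sketch below is not the paper's calibrated model: the severity parameters are hypothetical, and a fixed annual loss frequency stands in for a fitted frequency distribution:

```python
import random

def cornish_fisher_z(z, skew, ex_kurt):
    """Adjust a standard-normal quantile z for skewness and excess kurtosis
    via the Cornish-Fisher expansion (third/fourth-order terms)."""
    return (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3*z) * ex_kurt / 24
            - (2*z**3 - 5*z) * skew**2 / 36)

def simulate_annual_loss(mean, std, skew, ex_kurt, freq, n_years=10_000, seed=1):
    """Monte Carlo sketch of the loss distribution approach: each simulated
    year sums `freq` severities drawn from the Cornish-Fisher-adjusted
    distribution (truncated at zero, since losses are nonnegative)."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_years):
        total = 0.0
        for _ in range(freq):
            z = rng.gauss(0.0, 1.0)
            total += max(0.0, mean + std * cornish_fisher_z(z, skew, ex_kurt))
        losses.append(total)
    return losses

losses = sorted(simulate_annual_loss(mean=100.0, std=30.0, skew=0.8,
                                     ex_kurt=1.5, freq=12))
var_999 = losses[int(0.999 * len(losses))]  # 99.9% annual loss quantile
```

    A capital charge in this framework is read off as a high quantile (here 99.9%) of the simulated annual loss distribution.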

  17. Genomic outlier profile analysis: mixture models, null hypotheses, and nonparametric estimation.

    Science.gov (United States)

    Ghosh, Debashis; Chinnaiyan, Arul M

    2009-01-01

    In most analyses of large-scale genomic data sets, differential expression analysis is typically assessed by testing for differences in the mean of the distributions between 2 groups. A recent finding by Tomlins and others (2005) is of a different type of pattern of differential expression in which a fraction of samples in one group have overexpression relative to samples in the other group. In this work, we describe a general mixture model framework for the assessment of this type of expression, called outlier profile analysis. We start by considering the single-gene situation and establishing results on identifiability. We propose 2 nonparametric estimation procedures that have natural links to familiar multiple testing procedures. We then develop multivariate extensions of this methodology to handle genome-wide measurements. The proposed methodologies are compared using simulation studies as well as data from a prostate cancer gene expression study.

  18. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    Science.gov (United States)

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.

  19. Bayesian nonparametric generative models for causal inference with missing at random covariates.

    Science.gov (United States)

    Roy, Jason; Lum, Kirsten J; Zeldow, Bret; Dworkin, Jordan D; Re, Vincent Lo; Daniels, Michael J

    2018-03-26

    We propose a general Bayesian nonparametric (BNP) approach to causal inference in the point treatment setting. The joint distribution of the observed data (outcome, treatment, and confounders) is modeled using an enriched Dirichlet process. The combination of the observed data model and causal assumptions allows us to identify any type of causal effect (differences, ratios, or quantile effects), either marginally or for subpopulations of interest. The proposed BNP model is well-suited for causal inference problems, as it does not require parametric assumptions about the distribution of confounders and naturally leads to a computationally efficient Gibbs sampling algorithm. By flexibly modeling the joint distribution, we are also able to impute (via data augmentation) values for missing covariates within the algorithm under an assumption of ignorable missingness, obviating the need to create separate imputed data sets. This approach for imputing the missing covariates has the additional advantage of guaranteeing congeniality between the imputation model and the analysis model, and because we use a BNP approach, parametric models are avoided for imputation. The performance of the method is assessed using simulation studies. The method is applied to data from a cohort study of human immunodeficiency virus/hepatitis C virus co-infected patients. © 2018, The International Biometric Society.

  20. Bayesian nonparametric data analysis

    CERN Document Server

    Müller, Peter; Jara, Alejandro; Hanson, Tim

    2015-01-01

    This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in on-line software pages.

  1. Using Spline Regression in Semi-Parametric Stochastic Frontier Analysis: An Application to Polish Dairy Farms

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    of specifying an unsuitable functional form and thus model misspecification and biased parameter estimates. Given these problems of the DEA and the SFA, Fan, Li and Weersink (1996) proposed a semi-parametric stochastic frontier model that estimates the production function (frontier) by non…, Kumbhakar et al. (2007), and Henningsen and Kumbhakar (2009). The aim of this paper and its main contribution to the existing literature is the estimation of semi-parametric stochastic frontier models using a different non-parametric estimation technique: spline regression (Ma et al. 2011). We apply… efficiency of Polish dairy farms contributes to the insight into this dynamic process. Furthermore, we compare and evaluate the results of this spline-based semi-parametric stochastic frontier model with the results of other semi-parametric stochastic frontier models and of traditional parametric stochastic…

  2. Variable Selection for Nonparametric Gaussian Process Priors: Models and Computational Strategies.

    Science.gov (United States)

    Savitsky, Terrance; Vannucci, Marina; Sha, Naijun

    2011-02-01

    This paper presents a unified treatment of Gaussian process models that extends to data from the exponential dispersion family and to survival data. Our specific interest is in the analysis of data sets with predictors that have an a priori unknown form of possibly nonlinear associations to the response. The modeling approach we describe incorporates Gaussian processes in a generalized linear model framework to obtain a class of nonparametric regression models where the covariance matrix depends on the predictors. We consider, in particular, continuous, categorical and count responses. We also look into models that account for survival outcomes. We explore alternative covariance formulations for the Gaussian process prior and demonstrate the flexibility of the construction. Next, we focus on the important problem of selecting variables from the set of possible predictors and describe a general framework that employs mixture priors. We compare alternative MCMC strategies for posterior inference and achieve a computationally efficient and practical approach. We demonstrate performances on simulated and benchmark data sets.

  3. Marginal Bayesian nonparametric model for time to disease arrival of threatened amphibian populations.

    Science.gov (United States)

    Zhou, Haiming; Hanson, Timothy; Knapp, Roland

    2015-12-01

    The global emergence of Batrachochytrium dendrobatidis (Bd) has caused the extinction of hundreds of amphibian species worldwide. It has become increasingly important to be able to precisely predict time to Bd arrival in a population. The data analyzed herein present a unique challenge in terms of modeling because there is a strong spatial component to Bd arrival time and the traditional proportional hazards assumption is grossly violated. To address these concerns, we develop a novel marginal Bayesian nonparametric survival model for spatially correlated right-censored data. This class of models assumes that the logarithm of survival times marginally follow a mixture of normal densities with a linear-dependent Dirichlet process prior as the random mixing measure, and their joint distribution is induced by a Gaussian copula model with a spatial correlation structure. To invert high-dimensional spatial correlation matrices, we adopt a full-scale approximation that can capture both large- and small-scale spatial dependence. An efficient Markov chain Monte Carlo algorithm with delayed rejection is proposed for posterior computation, and an R package spBayesSurv is provided to fit the model. This approach is first evaluated through simulations, then applied to threatened frog populations in Sequoia-Kings Canyon National Park. © 2015, The International Biometric Society.

  4. Nonparametric estimation of the heterogeneity of a random medium using compound Poisson process modeling of wave multiple scattering.

    Science.gov (United States)

    Le Bihan, Nicolas; Margerin, Ludovic

    2009-07-01

    In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.

  5. Nonparametric modeling of US interest rate term structure dynamics and implications on the prices of derivative securities

    NARCIS (Netherlands)

    Jiang, GJ

    1998-01-01

    This paper develops a nonparametric model of interest rate term structure dynamics based on a spot rate process that permits only positive interest rates and a market price of interest rate risk that precludes arbitrage opportunities. Both the spot rate process and the market price of interest rate risk …

  6. Nonparametric Tree-Based Predictive Modeling of Storm Outages on an Electric Distribution Network.

    Science.gov (United States)

    He, Jichao; Wanik, David W; Hartman, Brian M; Anagnostou, Emmanouil N; Astitha, Marina; Frediani, Maria E B

    2017-03-01

    This article compares two nonparametric tree-based models, quantile regression forests (QRF) and Bayesian additive regression trees (BART), for predicting storm outages on an electric distribution network in Connecticut, USA. We evaluated point estimates and prediction intervals of outage predictions for both models using high-resolution weather, infrastructure, and land use data for 89 storm events (including hurricanes, blizzards, and thunderstorms). We found that spatially BART predicted more accurate point estimates than QRF. However, QRF produced better prediction intervals for high spatial resolutions (2-km grid cells and towns), while BART predictions aggregated to coarser resolutions (divisions and service territory) more effectively. We also found that the predictive accuracy was dependent on the season (e.g., tree-leaf condition, storm characteristics), and that the predictions were most accurate for winter storms. Given the merits of each individual model, we suggest that BART and QRF be implemented together to show the complete picture of a storm's potential impact on the electric distribution network, which would allow for a utility to make better decisions about allocating prestorm resources. © 2016 Society for Risk Analysis.

  7. Two-component mixture cure rate model with spline estimated nonparametric components.

    Science.gov (United States)

    Wang, Lu; Du, Pang; Liang, Hua

    2012-09-01

    In some survival analysis of medical studies, there are often long-term survivors who can be considered as permanently cured. The goals in these studies are to estimate the noncured probability of the whole population and the hazard rate of the susceptible subpopulation. When covariates are present as often happens in practice, to understand covariate effects on the noncured probability and hazard rate is of equal importance. The existing methods are limited to parametric and semiparametric models. We propose a two-component mixture cure rate model with nonparametric forms for both the cure probability and the hazard rate function. Identifiability of the model is guaranteed by an additive assumption that allows no time-covariate interactions in the logarithm of hazard rate. Estimation is carried out by an expectation-maximization algorithm on maximizing a penalized likelihood. For inferential purpose, we apply the Louis formula to obtain point-wise confidence intervals for noncured probability and hazard rate. Asymptotic convergence rates of our function estimates are established. We then evaluate the proposed method by extensive simulations. We analyze the survival data from a melanoma study and find interesting patterns for this study. © 2011, The International Biometric Society.

  8. Robust variable selection method for nonparametric differential equation models with application to nonlinear dynamic gene regulatory network analysis.

    Science.gov (United States)

    Lu, Tao

    2016-01-01

    Gene regulatory network (GRN) analysis evaluates the interactions between genes and looks for models to describe gene expression behavior. These models have many applications; for instance, by characterizing the gene expression mechanisms that cause certain disorders, it would be possible to target those genes to block the progress of the disease. Many biological processes are driven by nonlinear dynamic GRNs. In this article, we propose a nonparametric ordinary differential equation (ODE) model for nonlinear dynamic GRNs. Specifically, we address the following questions simultaneously: (i) extract information from noisy time course gene expression data; (ii) model the nonlinear ODE through a nonparametric smoothing function; (iii) identify the important regulatory gene(s) through a group smoothly clipped absolute deviation (SCAD) approach; (iv) test the robustness of the model against possible shortening of experimental duration. We illustrate the usefulness of the model and associated statistical methods through a simulation and a real application example.

  9. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions.

    Science.gov (United States)

    Tokuda, Tomoki; Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data.

  12. Nonparametric modeling of longitudinal covariance structure in functional mapping of quantitative trait loci.

    Science.gov (United States)

    Yap, John Stephen; Fan, Jianqing; Wu, Rongling

    2009-12-01

    Estimation of the covariance structure of longitudinal processes is a fundamental prerequisite for the practical deployment of functional mapping designed to study the genetic regulation and network of quantitative variation in dynamic complex traits. We present a nonparametric approach for estimating the covariance structure of a quantitative trait measured repeatedly at a series of time points. Specifically, we adopt Huang et al.'s (2006, Biometrika 93, 85-98) approach of invoking the modified Cholesky decomposition and converting the problem into modeling a sequence of regressions of responses. A regularized covariance estimator is obtained using a normal penalized likelihood with an L2 penalty. This approach, embedded within a mixture likelihood framework, leads to enhanced accuracy, precision, and flexibility of functional mapping while preserving its biological relevance. Simulation studies are performed to reveal the statistical properties and advantages of the proposed method. A real example from a mouse genome project is analyzed to illustrate the utilization of the methodology. The new method will provide a useful tool for genome-wide scanning for the existence and distribution of quantitative trait loci underlying a dynamic trait important to agriculture, biology, and health sciences.
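    The modified Cholesky idea can be made concrete with a small LDL^T decomposition: for an ordered longitudinal vector with covariance Σ = L D Lᵀ (L unit lower triangular), the inverse of L holds the negated coefficients of regressing each measurement on its predecessors, and D holds the innovation variances. A minimal sketch on a toy AR(1) covariance; the paper's penalized-likelihood regularization is not shown:

```python
def ldl(sigma):
    """Modified Cholesky (LDL^T) decomposition of a covariance matrix:
    sigma = L D L^T with L unit lower triangular and D the vector of
    innovation (sequential prediction-error) variances."""
    n = len(sigma)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    D = [0.0] * n
    for j in range(n):
        D[j] = sigma[j][j] - sum(L[j][k] ** 2 * D[k] for k in range(j))
        for i in range(j + 1, n):
            L[i][j] = (sigma[i][j]
                       - sum(L[i][k] * L[j][k] * D[k] for k in range(j))) / D[j]
    return L, D

# Toy AR(1)-style covariance: unit variances, lag-h correlation rho**h.
rho = 0.5
sigma = [[rho ** abs(i - j) for j in range(3)] for i in range(3)]
L, D = ldl(sigma)
# Innovation variances after the first time point equal 1 - rho**2 = 0.75.
```

    In the nonparametric approach described above, the entries playing the role of these regression coefficients are smoothed rather than estimated freely.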

  13. Oscillometric blood pressure estimation by combining nonparametric bootstrap with Gaussian mixture model.

    Science.gov (United States)

    Lee, Soojeong; Rajan, Sreeraman; Jeon, Gwanggil; Chang, Joon-Hyuk; Dajani, Hilmi R; Groza, Voicu Z

    2017-06-01

    Blood pressure (BP) is one of the most important vital indicators and plays a key role in determining the cardiovascular activity of patients. This paper proposes a hybrid approach consisting of nonparametric bootstrap (NPB) and machine learning techniques to obtain the characteristic ratios (CR) used in the blood pressure estimation algorithm to improve the accuracy of systolic blood pressure (SBP) and diastolic blood pressure (DBP) estimates and obtain confidence intervals (CI). The NPB technique is used to circumvent the requirement for a large sample set for obtaining the CI. A mixture of Gaussian densities is assumed for the CRs and a Gaussian mixture model (GMM) is chosen to estimate the SBP and DBP ratios. The K-means clustering technique is used to obtain the mixture order of the Gaussian densities. The proposed approach achieves grade "A" under the British Society of Hypertension testing protocol and is superior to the conventional approach based on the maximum amplitude algorithm (MAA) that uses fixed CR ratios. The proposed approach also yields a lower mean error (ME) and standard deviation of the error (SDE) in the estimates when compared to the conventional MAA method. In addition, CIs obtained through the proposed hybrid approach are also narrower with a lower SDE. The proposed approach combining the NPB technique with the GMM provides a methodology to derive individualized characteristic ratios. The results show that the proposed approach enhances the accuracy of SBP and DBP estimation and provides narrower confidence intervals for the estimates. Copyright © 2015 Elsevier Ltd. All rights reserved.
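    The nonparametric bootstrap step of such a hybrid scheme is simple to sketch: resample the observed ratios with replacement and take empirical percentiles of the statistic. The GMM and K-means components are omitted here, and the characteristic-ratio values are hypothetical:

```python
import random
import statistics

def bootstrap_ci(sample, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=7):
    """Nonparametric bootstrap percentile confidence interval: resample with
    replacement, recompute the statistic, and return the empirical
    alpha/2 and 1-alpha/2 percentiles."""
    rng = random.Random(seed)
    n = len(sample)
    boot_stats = sorted(
        stat([sample[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = boot_stats[int((alpha / 2) * n_boot)]
    hi = boot_stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical systolic characteristic ratios from a small oscillometric study.
ratios = [0.52, 0.49, 0.55, 0.47, 0.51, 0.53, 0.50, 0.48, 0.54, 0.50]
lo, hi = bootstrap_ci(ratios)
```

    This is what lets a small sample yield a usable confidence interval without assuming a parametric sampling distribution for the ratio.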

  14. Assessing Goodness of Fit in Item Response Theory with Nonparametric Models: A Comparison of Posterior Probabilities and Kernel-Smoothing Approaches

    Science.gov (United States)

    Sueiro, Manuel J.; Abad, Francisco J.

    2011-01-01

    The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…

  15. An Evaluation of Parametric and Nonparametric Models of Fish Population Response.

    Energy Technology Data Exchange (ETDEWEB)

    Haas, Timothy C.; Peterson, James T.; Lee, Danny C.

    1999-11-01

    Predicting the distribution or status of animal populations at large scales often requires the use of broad-scale information describing landforms, climate, vegetation, etc. These data, however, often consist of mixtures of continuous and categorical covariates and nonmultiplicative interactions among covariates, complicating statistical analyses. Using data from the interior Columbia River Basin, USA, we compared four methods for predicting the distribution of seven salmonid taxa using landscape information. Subwatersheds (mean size, 7800 ha) were characterized using a set of 12 covariates describing physiography, vegetation, and current land use. The techniques included generalized logit modeling, classification trees, a nearest neighbor technique, and a modular neural network. We evaluated model performance using out-of-sample prediction accuracy via leave-one-out cross-validation and introduced a computer-intensive Monte Carlo hypothesis testing approach for examining the statistical significance of landscape covariates with the non-parametric methods. We found the modular neural network and the nearest-neighbor techniques to be the most accurate, but they were difficult to summarize in ways that provided ecological insight. The modular neural network also required the most extensive computer resources for model fitting and hypothesis testing. The generalized logit models were readily interpretable, but were the least accurate, possibly due to nonlinear relationships and nonmultiplicative interactions among covariates. Substantial overlap among the statistically significant (P<0.05) covariates for each method suggested that each is capable of detecting similar relationships between responses and covariates. Consequently, we believe that employing one or more methods may provide greater biological insight without sacrificing prediction accuracy.
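    Leave-one-out cross-validation, used above to score the four methods, is easy to sketch for the nearest-neighbour technique: each observation is predicted from its closest other observation, and accuracy is the fraction predicted correctly. The toy 2-D covariates and presence/absence labels below are hypothetical stand-ins for the 12 landscape covariates:

```python
import math

def loocv_accuracy_1nn(X, y):
    """Leave-one-out cross-validation for a 1-nearest-neighbour classifier:
    predict each point from its nearest other point and return the
    out-of-sample classification accuracy."""
    correct = 0
    for i in range(len(X)):
        nearest = min(
            (j for j in range(len(X)) if j != i),
            key=lambda j: math.dist(X[i], X[j]),
        )
        correct += (y[nearest] == y[i])
    return correct / len(X)

# Hypothetical subwatershed covariates (2-D for brevity) and taxon labels.
X = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15), (0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]
y = [0, 0, 0, 1, 1, 1]
acc = loocv_accuracy_1nn(X, y)  # → 1.0 on this cleanly separable toy data
```

    The same loop structure applies to any of the four methods: refit on all but one subwatershed, predict the held-out one, and tally.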

  16. Monitoring coastal marshes biomass with CASI: a comparison of parametric and non-parametric models

    Science.gov (United States)

    Mo, Y.; Kearney, M.

    2017-12-01

    Coastal marshes are important carbon sinks that face multiple natural and anthropogenic stresses. Optical remote sensing is a powerful tool for closely monitoring the biomass of coastal marshes. However, application of hyperspectral sensors to assessing the biomass of diverse coastal marsh ecosystems is limited. This study samples spectral and biophysical data from coastal freshwater, intermediate, brackish, and saline marshes in Louisiana, and develops parametric and non-parametric models for using the Compact Airborne Spectrographic Imager (CASI) to retrieve the marshes' biomass. Linear models and random forest models are developed from simulated CASI data (48 bands, 380-1050 nm, bandwidth 14 nm). Linear models are also developed using narrowband vegetation indices computed from all possible band combinations from the blue, red, and near infrared wavelengths. It is found that the linear models derived from the optimal narrowband vegetation indices provide strong predictions for the marshes' Leaf Area Index (LAI; R2 > 0.74 for ARVI), but not for their Aboveground Green Biomass (AGB; R2 > 0.25). The linear models derived from the simulated CASI data strongly predict the marshes' LAI (R2 = 0.93) and AGB (R2 = 0.71) and have 27 and 30 bands/variables in the final models through stepwise regression, respectively. The random forest models derived from the simulated CASI data also strongly predict the marshes' LAI and AGB (R2 = 0.91 and 0.84, respectively), where the most important variables for predicting LAI are near infrared bands at 784 and 756 nm and for predicting AGB are red bands at 684 and 670 nm. In sum, the random forest model is preferable for assessing coastal marsh biomass using CASI data as it offers high R2 for both LAI and AGB. The superior performance of the random forest model is likely due to the fact that it fully utilizes the full-spectrum data and makes no assumption about the approximate normality of the sampling population. This study offers solutions
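
    The narrowband vegetation-index regressions can be illustrated with the NDVI, a band-ratio index formed from red and near-infrared reflectance (a sketch with synthetic reflectances; the canopy coefficients are made up for illustration and are not fitted to the CASI data):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from band reflectances."""
    return (nir - red) / (nir + red)

# Synthetic canopy: NIR reflectance rises and red falls with leaf area.
rng = np.random.default_rng(1)
lai = rng.uniform(0.5, 6.0, 40)                    # leaf area index
nir = 0.10 + 0.08 * lai + rng.normal(0, 0.01, 40)
red = 0.25 - 0.03 * lai + rng.normal(0, 0.01, 40)
vi = ndvi(nir, red)

slope, intercept = np.polyfit(vi, lai, 1)          # linear VI -> LAI model
pred = slope * vi + intercept
r2 = 1 - np.sum((lai - pred) ** 2) / np.sum((lai - lai.mean()) ** 2)
print(round(r2, 2))
```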

  17. Nonparametric tests for censored data

    CERN Document Server

    Bagdonavicus, Vilijandas; Nikulin, Mikhail

    2013-01-01

    This book concerns testing hypotheses in non-parametric models. Generalizations of many non-parametric tests to the case of censored and truncated data are considered. Most of the test results are proved, and real applications are illustrated using examples. Theory and exercises are provided. The incorrect use of many tests in most statistical software packages is highlighted and discussed.

  18. Nonparametric estimation in an "illness-death" model when all transition times are interval censored

    DEFF Research Database (Denmark)

    Frydman, Halina; Gerds, Thomas; Grøn, Randi

    2013-01-01

    We develop nonparametric maximum likelihood estimation for the parameters of an irreversible Markov chain on states {0,1,2} from the observations with interval censored times of 0 → 1, 0 → 2 and 1 → 2 transitions. The distinguishing aspect of the data is that, in addition to all transition times ...

  19. Neoclassical versus Frontier Production Models? Testing for the Skewness of Regression Residuals

    DEFF Research Database (Denmark)

    Kuosmanen, T; Fosgerau, Mogens

    2009-01-01

    The empirical literature on production and cost functions is divided into two strands. The neoclassical approach concentrates on model parameters, while the frontier approach decomposes the disturbance term into a symmetric noise term and a positively skewed inefficiency term. We propose a theoretical justification for the skewness of the inefficiency term, arguing that this skewness is the key testable hypothesis of the frontier approach. We propose to test the regression residuals for skewness in order to distinguish the two competing approaches. Our test builds directly upon the asymmetry...

  20. Essays on parametric and nonparametric modeling and estimation with applications to energy economics

    Science.gov (United States)

    Gao, Weiyu

    My dissertation research is composed of two parts: a theoretical part on semiparametric efficient estimation and an applied part in energy economics under different dynamic settings. The essays are related in terms of their applications as well as the way in which models are constructed and estimated. In the first essay, efficient estimation of the partially linear model is studied. We work out the efficient score functions and efficiency bounds under four stochastic restrictions---independence, conditional symmetry, conditional zero mean, and partially conditional zero mean. A feasible efficient estimation method for the linear part of the model is developed based on the efficient score. A battery of specification tests that allows for choosing between the alternative assumptions is provided. A Monte Carlo simulation is also conducted. The second essay presents a dynamic optimization model for a stylized oilfield resembling the largest developed light oil field in Saudi Arabia, Ghawar. We use data from different sources to estimate the oil production cost function and the revenue function. We pay particular attention to the dynamic aspect of oil production by employing petroleum-engineering software to simulate the interaction between control variables and reservoir state variables. Optimal solutions are studied under different scenarios to account for possible changes in the exogenous variables and the uncertainty about the forecasts. The third essay examines the effect of oil price volatility on the level of innovation displayed by the U.S. economy. A measure of innovation is calculated by decomposing an output-based Malmquist index. We also construct a nonparametric measure for oil price volatility. Technical change and oil price volatility are then placed in a VAR system with oil price and a variable indicative of monetary policy. The system is estimated and analyzed for significant relationships. We find that oil price volatility displays a significant

  1. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
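
    The notion of Pareto-optimality used here is easy to state in code: an input set is on the frontier if no other set fits every calibration target at least as well and at least one target strictly better. A minimal sketch (the fit values below are illustrative, lower meaning better):

```python
import numpy as np

def pareto_frontier(fits):
    """Indices of input sets not dominated on any calibration target.

    fits: (n_input_sets, n_targets) array of fit errors (lower is better).
    A set is dominated if another set is at least as good on every target
    and strictly better on at least one.
    """
    n = fits.shape[0]
    keep = []
    for i in range(n):
        dominated = any(
            np.all(fits[j] <= fits[i]) and np.any(fits[j] < fits[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

fits = np.array([[1.0, 4.0],   # best on target 1
                 [4.0, 1.0],   # best on target 2
                 [2.0, 2.0],   # balanced, non-dominated
                 [3.0, 3.0]])  # dominated by [2.0, 2.0]
print(pareto_frontier(fits))   # -> [0, 1, 2]
```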

  2. Evaluation of model-based versus non-parametric monaural noise-reduction approaches for hearing aids.

    Science.gov (United States)

    Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker

    2012-08-01

    Single channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular for non-stationary noise conditions. Two prototype model-based algorithms, a speech-model-based and an auditory-model-based algorithm, were compared to a state-of-the-art non-parametric minimum statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of all algorithms, particular attention was given to the use of a similar Wiener-based gain rule. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms performed better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. Data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.

  3. Separating environmental efficiency into production and abatement efficiency. A nonparametric model with application to U.S. power plants

    Energy Technology Data Exchange (ETDEWEB)

    Hampf, Benjamin

    2011-08-15

    In this paper we present a new approach to evaluate the environmental efficiency of decision making units. We propose a model that describes a two-stage process consisting of a production and an end-of-pipe abatement stage with the environmental efficiency being determined by the efficiency of both stages. Taking the dependencies between the two stages into account, we show how nonparametric methods can be used to measure environmental efficiency and to decompose it into production and abatement efficiency. For an empirical illustration we apply our model to an analysis of U.S. power plants.

  4. Using continuous time stochastic modelling and nonparametric statistics to improve the quality of first principles models

    DEFF Research Database (Denmark)

    A methodology is presented that combines modelling based on first principles and data-based modelling into a modelling cycle that facilitates fast decision-making based on statistical methods. A strong feature of this methodology is that, given a first principles model along with process data, the corresponding modelling cycle produces a model of the given system for a given purpose. A computer-aided tool, which integrates the elements of the modelling cycle, is also presented, and an example is given of modelling a fed-batch bioreactor.

  5. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    Science.gov (United States)

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected

  6. Evaluation of parametric and nonparametric models to predict water flow; Avaliacao entre modelos parametricos e nao parametricos para previsao de vazoes afluentes

    Energy Technology Data Exchange (ETDEWEB)

    Marques, T.C.; Cruz Junior, G.; Vinhal, C. [Universidade Federal de Goias (UFG), Goiania, GO (Brazil). Escola de Engenharia Eletrica e de Computacao], Emails: thyago@eeec.ufg.br, gcruz@eeec.ufg.br, vinhal@eeec.ufg.br

    2009-07-01

    The goal of this paper is to present a methodology to carry out seasonal stream flow forecasting using databases of average monthly inflows of Brazilian hydroelectric plants located on the Grande, Tocantins, Paranaiba, Sao Francisco and Iguacu rivers. The model is based on the Adaptive Network Based Fuzzy Inference System (ANFIS), a non-parametric model. The performance of this model was compared with that of a periodic autoregressive model, a parametric model. The results show that the forecasting errors of the non-parametric model are significantly lower than those of the parametric model. (author)

  7. Parametric and non-parametric models for lifespan modeling of insulation systems in electrical machines

    OpenAIRE

    Salameh , Farah; Picot , Antoine; Chabert , Marie; Maussion , Pascal

    2017-01-01

    This paper describes an original statistical approach for the lifespan modeling of electric machine insulation materials. The presented models aim to study the effect of three main stress factors (voltage, frequency and temperature) and their interactions on the insulation lifespan. The proposed methodology is applied to two different insulation materials tested in partial discharge regime. Accelerated ageing tests are organized according to experimental optimization m...

  8. Analysis about the development of mobile electronic commerce: An application of production possibility frontier model

    OpenAIRE

    Uesugi, Shiro; Okada, Hitoshi

    2012-01-01

    This study aims to further develop our previous research on the production possibility frontier model (PPFM). An application of the model to the analysis of mobile commerce surveys collected in Japan and Thailand is presented. PPFM examines consumer behaviors as results formed from perceptions of the relationship between Convenience and Privacy Concerns of certain electronic commerce services. From the data of consumer surveys, PPFM is expected to provide practical sol...

  9. Trade Performance and Potential of the Philippines: An Application of Stochastic Frontier Gravity Model

    OpenAIRE

    Deluna, Roperto Jr

    2013-01-01

    This study was conducted to investigate the issue of what Philippine merchandise trade flows would be if countries operated at the frontier of the gravity model. The study sought to estimate the coefficients of the gravity model. The estimated coefficients were used to estimate merchandise export potentials and technical efficiency of each country in the sample and these were also aggregated to measure impact of country groups, RTAs and inter-regional trading agreements. Result of the ...

  10. Notes on the Implementation of Non-Parametric Statistics within the Westinghouse Realistic Large Break LOCA Evaluation Model (ASTRUM)

    International Nuclear Information System (INIS)

    Frepoli, Cesare; Oriani, Luca

    2006-01-01

    In recent years, non-parametric or order statistics methods have been widely used to assess the impact of the uncertainties within Best-Estimate LOCA evaluation models. The bounding of the uncertainties is achieved with a direct Monte Carlo sampling of the uncertainty attributes, with the minimum trial number selected to 'stabilize' the estimation of the critical output values (peak cladding temperature (PCT), local maximum oxidation (LMO), and core-wide oxidation (CWO)). A non-parametric order statistics uncertainty analysis was recently implemented within the Westinghouse Realistic Large Break LOCA evaluation model, also referred to as 'Automated Statistical Treatment of Uncertainty Method' (ASTRUM). The implementation or interpretation of order statistics in safety analysis is not fully consistent within the industry. This has led to an extensive public debate among regulators and researchers which can be found in the open literature. The USNRC-approved Westinghouse method follows a rigorous implementation of order statistics theory, which leads to the execution of 124 simulations within a Large Break LOCA analysis. This is a solid approach which guarantees that a bounding value (at 95% probability) of the 95th percentile for each of the three 10 CFR 50.46 ECCS design acceptance criteria (PCT, LMO and CWO) is obtained. The objective of this paper is to provide additional insights on the ASTRUM statistical approach, with a more in-depth analysis of the pros and cons of order statistics and of the Westinghouse approach in the implementation of this statistical methodology. (authors)
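
    The 124-run figure follows from Wilks-type order statistics: with N Monte Carlo runs and p output criteria, N is chosen so that, with 95% confidence, the reported extreme values bound the 95th percentiles of all p outputs. A sketch of the generic binomial-tail calculation (my illustration of the textbook formula, not Westinghouse's implementation):

```python
from math import comb

def min_runs(gamma=0.95, beta=0.95, p=1):
    """Smallest N such that at least p of N i.i.d. runs exceed the
    gamma-quantile of the output with confidence beta (Wilks' formula;
    the multi-output extension follows Guba and Makai)."""
    n = p
    while True:
        # P(at least p of n runs exceed the gamma-quantile of the output)
        conf = 1.0 - sum(comb(n, k) * (1 - gamma) ** k * gamma ** (n - k)
                         for k in range(p))
        if conf >= beta:
            return n
        n += 1

print(min_runs(p=1))   # 59: classic single-output 95/95 result
print(min_runs(p=3))   # 124: three acceptance criteria (PCT, LMO, CWO)
```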

  11. Bayesian Nonparametric Hidden Markov Models with application to the analysis of copy-number-variation in mammalian genomes.

    Science.gov (United States)

    Yau, C; Papaspiliopoulos, O; Roberts, G O; Holmes, C

    2011-01-01

    We consider the development of Bayesian Nonparametric methods for product partition models such as Hidden Markov Models and change point models. Our approach uses a Mixture of Dirichlet Process (MDP) model for the unknown sampling distribution (likelihood) for the observations arising in each state and a computationally efficient data augmentation scheme to aid inference. The method uses novel MCMC methodology which combines recent retrospective sampling methods with the use of slice sampler variables. The methodology is computationally efficient, both in terms of MCMC mixing properties, and robustness to the length of the time series being investigated. Moreover, the method is easy to implement requiring little or no user-interaction. We apply our methodology to the analysis of genomic copy number variation.

  12. Does the high–tech industry consistently reduce CO{sub 2} emissions? Results from nonparametric additive regression model

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Bin [School of Statistics, Jiangxi University of Finance and Economics, Nanchang, Jiangxi 330013 (China); Research Center of Applied Statistics, Jiangxi University of Finance and Economics, Nanchang, Jiangxi 330013 (China); Lin, Boqiang, E-mail: bqlin@xmu.edu.cn [Collaborative Innovation Center for Energy Economics and Energy Policy, China Institute for Studies in Energy Policy, Xiamen University, Xiamen, Fujian 361005 (China)

    2017-03-15

    China is currently the world's largest carbon dioxide (CO{sub 2}) emitter. Moreover, total energy consumption and CO{sub 2} emissions in China will continue to increase due to the rapid growth of industrialization and urbanization. Therefore, vigorously developing the high–tech industry becomes an inevitable choice to reduce CO{sub 2} emissions at the moment or in the future. However, ignoring the existing nonlinear links between economic variables, most scholars use traditional linear models to explore the impact of the high–tech industry on CO{sub 2} emissions from an aggregate perspective. Few studies have focused on nonlinear relationships and regional differences in China. Based on panel data of 1998–2014, this study uses the nonparametric additive regression model to explore the nonlinear effect of the high–tech industry from a regional perspective. The estimated results show that the residual sums of squares (SSR) of the nonparametric additive regression model in the eastern, central and western regions are 0.693, 0.054 and 0.085 respectively, which are much smaller than those of the traditional linear regression model (3.158, 4.227 and 7.196). This verifies that the nonparametric additive regression model has a better fitting effect. Specifically, the high–tech industry produces an inverted “U–shaped” nonlinear impact on CO{sub 2} emissions in the eastern region, but a positive “U–shaped” nonlinear effect in the central and western regions. Therefore, the nonlinear impact of the high–tech industry on CO{sub 2} emissions in the three regions should be given adequate attention in developing effective abatement policies. - Highlights: • The nonlinear effect of the high–tech industry on CO{sub 2} emissions was investigated. • The high–tech industry yields an inverted “U–shaped” effect in the eastern region. • The high–tech industry has a positive “U–shaped” nonlinear effect in other regions. • The linear impact

  13. A spatio-temporal nonparametric Bayesian variable selection model of fMRI data for clustering correlated time courses.

    Science.gov (United States)

    Zhang, Linlin; Guindani, Michele; Versace, Francesco; Vannucci, Marina

    2014-07-15

    In this paper we present a novel wavelet-based Bayesian nonparametric regression model for the analysis of functional magnetic resonance imaging (fMRI) data. Our goal is to provide a joint analytical framework that allows us to detect regions of the brain which exhibit neuronal activity in response to a stimulus and, simultaneously, infer the association, or clustering, of spatially remote voxels that exhibit fMRI time series with similar characteristics. We start by modeling the data with a hemodynamic response function (HRF) with a voxel-dependent shape parameter. We detect regions of the brain activated in response to a given stimulus by using mixture priors with a spike at zero on the coefficients of the regression model. We account for the complex spatial correlation structure of the brain by using a Markov random field (MRF) prior on the parameters guiding the selection of the activated voxels, therefore capturing correlation among nearby voxels. In order to infer association of the voxel time courses, we assume correlated errors, in particular long memory, and exploit the whitening properties of discrete wavelet transforms. Furthermore, we achieve clustering of the voxels by imposing a Dirichlet process (DP) prior on the parameters of the long memory process. For inference, we use Markov Chain Monte Carlo (MCMC) sampling techniques that combine Metropolis-Hastings schemes employed in Bayesian variable selection with sampling algorithms for nonparametric DP models. We explore the performance of the proposed model on simulated data, with both block- and event-related design, and on real fMRI data. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Non-parametric early seizure detection in an animal model of temporal lobe epilepsy

    Science.gov (United States)

    Talathi, Sachin S.; Hwang, Dong-Uk; Spano, Mark L.; Simonotto, Jennifer; Furman, Michael D.; Myers, Stephen M.; Winters, Jason T.; Ditto, William L.; Carney, Paul R.

    2008-03-01

    The performance of five non-parametric, univariate seizure detection schemes (embedding delay, Hurst scale, wavelet scale, nonlinear autocorrelation and variance energy) was evaluated as a function of the sampling rate of EEG recordings, the electrode types used for EEG acquisition, and the spatial location of the EEG electrodes in order to determine the applicability of the measures in real-time closed-loop seizure intervention. The criteria chosen for evaluating the performance were high statistical robustness (as determined through the sensitivity and the specificity of a given measure in detecting a seizure) and the lag in seizure detection with respect to the seizure onset time (as determined by visual inspection of the EEG signal by a trained epileptologist). An optimality index was designed to evaluate the overall performance of each measure. For the EEG data recorded with a microwire electrode array at a sampling rate of 12 kHz, the wavelet scale measure exhibited better overall performance in terms of its ability to detect a seizure with a high optimality index value and high statistics in terms of sensitivity and specificity.

  15. Frontier constellations

    DEFF Research Database (Denmark)

    Eilenberg, Michael

    2014-01-01

    expansion, population resettlement and securitization, and the confluence of these dynamic processes creates special frontier constellations. Through the case of the Indonesian-Malaysian borderlands, I explore how processes of frontier colonization through agricultural expansion have been a recurrent...

  16. Frontier spaces

    DEFF Research Database (Denmark)

    Rasmussen, Mattias Borg; Lund, Christian

    2018-01-01

    The global expansion of markets produces frontiers of contestation over the definition and control of resources. In a frontier context, new patterns of resource exploration, extraction, and commodification create new territories. A recently published collection (Rasmussen and Lund 2018) explores...

  17. Uncertainty in decision models analyzing cost-effectiveness : The joint distribution of incremental costs and effectiveness evaluated with a nonparametric bootstrap method

    NARCIS (Netherlands)

    Hunink, Maria; Bult, J.R.; De Vries, J; Weinstein, MC

    1998-01-01

    Purpose. To illustrate the use of a nonparametric bootstrap method in the evaluation of uncertainty in decision models analyzing cost-effectiveness. Methods. The authors reevaluated a previously published cost-effectiveness analysis that used a Markov model comparing initial percutaneous
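
    The bootstrap idea for the joint distribution of incremental costs and effects can be sketched as follows (a minimal numpy illustration with synthetic patient-level differences; the names and numbers are made up, and the original paper resampled within a Markov model framework rather than raw patient pairs):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical paired differences between two strategies (synthetic):
d_cost = rng.normal(5000, 2000, 200)   # incremental cost per patient
d_eff = rng.normal(0.05, 0.04, 200)    # incremental QALYs per patient

boot = []
for _ in range(2000):
    idx = rng.integers(0, 200, 200)    # resample patients with replacement
    boot.append((d_cost[idx].mean(), d_eff[idx].mean()))
boot = np.array(boot)                  # joint distribution on the CE plane

icer = boot[:, 0].mean() / boot[:, 1].mean()
cost_lo, cost_hi = np.percentile(boot[:, 0], [2.5, 97.5])
print(round(icer), round(cost_lo), round(cost_hi))
```

Plotting the `boot` pairs directly on the cost-effectiveness plane shows the joint uncertainty without any distributional assumption.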

  18. The role of analytical models: Issues and frontiers

    Energy Technology Data Exchange (ETDEWEB)

    Blair, P.

    1991-03-01

    A number of modeling attempts to analyze the implications of increasing competition in the electric power industry appeared in the early 1970s and occasionally throughout the early 1980s. Most of these analyses, however, considered only modest mechanisms to facilitate increased bulk power transactions between utility systems. More fundamental changes in market structure, such as the existence of independent power producers or wheeling transactions between customers and utility producers, were not considered. More recently in the course of the policy debate over increasing competition, a number of models have been used to analyze alternative scenarios of industry structure and regulation. In this Energy Modeling Forum (EMF) exercise, we attempted to challenge existing modeling frameworks beyond their original design capabilities. We tried to interpret alternative scenarios or other means of increasing competition in the electric power industry in the terms of existing modeling frameworks, to gain perspective using such models on how the different market players would interact, and to predict how electricity prices and other indicators of industry behavior might evolve under the alternative scenarios.

  19. The role of analytical models: Issues and frontiers

    International Nuclear Information System (INIS)

    Blair, P.

    1991-03-01

    A number of modeling attempts to analyze the implications of increasing competition in the electric power industry appeared in the early 1970s and occasionally throughout the early 1980s. Most of these analyses, however, considered only modest mechanisms to facilitate increased bulk power transactions between utility systems. More fundamental changes in market structure, such as the existence of independent power producers or wheeling transactions between customers and utility producers, were not considered. More recently in the course of the policy debate over increasing competition, a number of models have been used to analyze alternative scenarios of industry structure and regulation. In this Energy Modeling Forum (EMF) exercise, we attempted to challenge existing modeling frameworks beyond their original design capabilities. We tried to interpret alternative scenarios or other means of increasing competition in the electric power industry in the terms of existing modeling frameworks, to gain perspective using such models on how the different market players would interact, and to predict how electricity prices and other indicators of industry behavior might evolve under the alternative scenarios.

  20. Bayesian Sensitivity Analysis of a Nonlinear Dynamic Factor Analysis Model with Nonparametric Prior and Possible Nonignorable Missingness.

    Science.gov (United States)

    Tang, Niansheng; Chow, Sy-Miin; Ibrahim, Joseph G; Zhu, Hongtu

    2017-12-01

    Many psychological concepts are unobserved and usually represented as latent factors apprehended through multiple observed indicators. When multiple-subject multivariate time series data are available, dynamic factor analysis models with random effects offer one way of modeling patterns of within- and between-person variations by combining factor analysis and time series analysis at the factor level. Using the Dirichlet process (DP) as a nonparametric prior for individual-specific time series parameters further allows the distributional forms of these parameters to deviate from commonly imposed (e.g., normal or other symmetric) functional forms, arising as a result of these parameters' restricted ranges. Given the complexity of such models, a thorough sensitivity analysis is critical but computationally prohibitive. We propose a Bayesian local influence method that allows for simultaneous sensitivity analysis of multiple modeling components within a single fitting of the model of choice. Five illustrations and an empirical example are provided to demonstrate the utility of the proposed approach in facilitating the detection of outlying cases and common sources of misspecification in dynamic factor analysis models, as well as identification of modeling components that are sensitive to changes in the DP prior specification.

  1. A contingency table approach to nonparametric testing

    CERN Document Server

    Rayner, JCW

    2000-01-01

    Most texts on nonparametric techniques concentrate on location and linear-linear (correlation) tests, with less emphasis on dispersion effects and linear-quadratic tests. Tests for higher moment effects are virtually ignored. Using a fresh approach, A Contingency Table Approach to Nonparametric Testing unifies and extends the popular, standard tests by linking them to tests based on models for data that can be presented in contingency tables. This approach unifies popular nonparametric statistical inference and makes the traditional, most commonly performed nonparametric analyses much more comp

  2. Exploring gravitational lensing model variations in the Frontier Fields galaxy clusters

    Science.gov (United States)

    Harris James, Nicholas John; Raney, Catie; Brennan, Sean; Keeton, Charles

    2018-01-01

    Multiple groups have been working on modeling the mass distributions of the six lensing galaxy clusters in the Hubble Space Telescope Frontier Fields data set. The magnification maps produced from these mass models will be important for the future study of the lensed background galaxies, but there exists significant variation in the different groups’ models and magnification maps. We explore the use of two-dimensional histograms as a tool for visualizing these magnification map variations. Using a number of simple, one- or two-halo singular isothermal sphere models, we explore the features that are produced in 2D histogram model comparisons when parameters such as halo mass, ellipticity, and location are allowed to vary. Our analysis demonstrates the potential of 2D histograms as a means of observing the full range of differences between the Frontier Fields groups’ models. This work has been supported by funding from National Science Foundation grants PHY-1560077 and AST-1211385, and from the Space Telescope Science Institute.
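
    The 2D-histogram comparison described above can be sketched with toy singular isothermal sphere (SIS) maps. The lens model, grid size, and halo offsets below are illustrative assumptions for the sketch, not the actual Frontier Fields models:

```python
import numpy as np

def sis_magnification(x, y, x0=0.0, y0=0.0, theta_e=1.0):
    """Magnification map of a singular isothermal sphere lens:
    mu = |theta / (theta - theta_E)| at angular radius theta."""
    r = np.hypot(x - x0, y - y0)
    r = np.maximum(r, 1e-6)              # avoid the central singularity
    return np.abs(r / (r - theta_e + 1e-6))

# Two hypothetical "group" models of the same cluster: same mass scale,
# slightly offset halo centroids (toy setup).
grid = np.linspace(-3, 3, 200)
X, Y = np.meshgrid(grid, grid)
map_a = sis_magnification(X, Y, x0=0.0, theta_e=1.0)
map_b = sis_magnification(X, Y, x0=0.2, theta_e=1.0)

# 2D histogram of per-pixel (model A, model B) log-magnifications:
# pixels on the diagonal agree; off-diagonal spread visualizes the
# model-to-model variation.
log_a, log_b = np.log10(map_a).ravel(), np.log10(map_b).ravel()
hist, xedges, yedges = np.histogram2d(log_a, log_b, bins=50)
agreement = np.corrcoef(log_a, log_b)[0, 1]
```

    Plotting `hist` (e.g. with a log color scale) gives the kind of diagnostic panel the abstract describes; a perfect match would collapse onto the diagonal.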

  3. Nonparametric statistical inference

    CERN Document Server

    Gibbons, Jean Dickinson

    2010-01-01

    Overall, this remains a very fine book suitable for a graduate-level course in nonparametric statistics. I recommend it for all people interested in learning the basic ideas of nonparametric statistical inference. -Eugenia Stoimenova, Journal of Applied Statistics, June 2012 … one of the best books available for a graduate (or advanced undergraduate) text for a theory course on nonparametric statistics. … a very well-written and organized book on nonparametric statistics, especially useful and recommended for teachers and graduate students. -Biometrics, 67, September 2011 … This excellently presente

  4. Observed and unobserved heterogeneity in stochastic frontier models: An application to the electricity distribution industry

    International Nuclear Information System (INIS)

    Kopsakangas-Savolainen, Maria; Svento, Rauli

    2011-01-01

    In this study we combine different possibilities to model firm-level heterogeneity in stochastic frontier analysis. We show that both observed and unobserved heterogeneity cause serious biases in inefficiency results. Modelling observed and unobserved heterogeneity treats individual firms in different ways, and even though the expected mean inefficiency scores diminish in both cases, the firm-level efficiency rank orders turn out to be very different. The best fit with the data is obtained by modelling unobserved heterogeneity through randomizing frontier parameters and at the same time explicitly modelling the observed heterogeneity into the inefficiency distribution. These results are obtained using data from Finnish electricity distribution utilities and are relevant to electricity distribution pricing and regulation. -- Research Highlights: → We show that both observed and unobserved heterogeneity of firms cause biases in inefficiency results. → Different ways of accounting for firm-level heterogeneity end up with very different rank orders of firms. → The model which combines the characteristics of unobserved and observed heterogeneity fits the data best.

  5. New Frontiers on Seismic Modeling of Masonry Structures

    Directory of Open Access Journals (Sweden)

    Salvatore Caddemi

    2017-07-01

    Full Text Available An accurate evaluation of the non-linear behavior of masonry structural elements in existing buildings still represents a complex issue that rigorously requires non-linear finite element strategies difficult to apply to real large structures. Nevertheless, for the static and seismic assessment of existing structures involving the contribution of masonry materials, engineers need reliable and efficient numerical tools whose complexity and computational demand are suitable for practical purposes. For these reasons, the formulation and validation of simplified numerical strategies represent a very important issue in masonry computational research. In this paper, an innovative macroelement approach, developed by the authors in the last decade, is presented. The proposed macroelement formulation is based on different, plane and spatial, macroelements for the simulation of both the in-plane and out-of-plane behavior of masonry structures, also in the presence of masonry elements with curved geometry. The mechanical response of the adopted macroelement is governed by non-linear zero-thickness interfaces, whose calibration follows a straightforward fiber discretization, and the non-linear internal shear deformability is ruled by equivalence with a corresponding geometrically consistent homogenized medium. The approach can be considered “parsimonious”, since the kinematics of the adopted elements is controlled by very few degrees of freedom compared to a corresponding discretization performed using non-linear finite element method strategies. This innovative discrete element strategy has been implemented in two user-oriented software codes, 3DMacro (Caliò et al., 2012b) and HiStrA (Historical Structures Analysis) (Caliò et al., 2015), which simplify the modeling of buildings and historical structures by means of several wizard generation tools and input/output facilities. The proposed approach, that represents a powerful tool for the

  6. Stochastic Production Frontier Models to Explore Constraints on Household Travel Expenditures Considering Household Income Classes

    Directory of Open Access Journals (Sweden)

    Sofyan M. Saleh

    2016-04-01

    Full Text Available This paper explores the variation of household travel expenditure frontiers (HTEFs) prior to CC reform in Jakarta. This study incorporates the variation of household income classes into the modeling of HTEFs and investigates the degree to which various determinants influence levels of HTEF. The HTEF is defined as an unseen maximum (capacity) amount of money that a certain income class is willing to dedicate to its travel. A stochastic production frontier is applied to model and explore upper-bound household travel expenditure (HTE). Using a comprehensive household travel survey (HTS) conducted in Jakarta in 2004, the observed HTE spending in a month is treated as an exogenous variable. The estimation results obtained using three proposed models, for low, medium and high income classes, show that HTEFs are significantly associated with life stage structure attributes, socio-demographics and life environment factors such as professional activity engagements, which are found to vary across income classes. Findings further reveal considerable differences in the average HTEF across models. This calls for the formulation of policies that address the needs of low and medium income groups in order to promote a more equitable policy, thereby leading to a more acceptable CC reform.

  7. Predicting Lung Radiotherapy-Induced Pneumonitis Using a Model Combining Parametric Lyman Probit With Nonparametric Decision Trees

    International Nuclear Information System (INIS)

    Das, Shiva K.; Zhou Sumin; Zhang, Junan; Yin, F.-F.; Dewhirst, Mark W.; Marks, Lawrence B.

    2007-01-01

    Purpose: To develop and test a model to predict for lung radiation-induced Grade 2+ pneumonitis. Methods and Materials: The model was built from a database of 234 lung cancer patients treated with radiotherapy (RT), of whom 43 were diagnosed with pneumonitis. The model augmented the predictive capability of the parametric dose-based Lyman normal tissue complication probability (LNTCP) metric by combining it with weighted nonparametric decision trees that use dose and nondose inputs. The decision trees were sequentially added to the model using a 'boosting' process that enhances the accuracy of prediction. The model's predictive capability was estimated by 10-fold cross-validation. To facilitate dissemination, the cross-validation result was used to extract a simplified approximation to the complicated model architecture created by boosting. Application of the simplified model is demonstrated in two example cases. Results: The area under the model receiver operating characteristic curve for cross-validation was 0.72, a significant improvement over the LNTCP area of 0.63 (p = 0.005). The simplified model used the following variables to output a measure of injury: LNTCP, gender, histologic type, chemotherapy schedule, and treatment schedule. For a given patient RT plan, injury prediction was highest for the combination of pre-RT chemotherapy, once-daily treatment, female gender and lowest for the combination of no pre-RT chemotherapy and nonsquamous cell histologic type. Application of the simplified model to the example cases revealed that injury prediction for a given treatment plan can range from very low to very high, depending on the settings of the nondose variables. Conclusions: Radiation pneumonitis prediction was significantly enhanced by decision trees that added the influence of nondose factors to the LNTCP formulation.
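
    The parametric backbone of the model above, the Lyman normal tissue complication probability, is a probit in dose. A minimal sketch (with illustrative TD50 and m values, and without the paper's boosted decision trees) might look like:

```python
from scipy.stats import norm

def lyman_ntcp(eud, td50=30.8, m=0.37):
    """Lyman probit: complication probability Phi((EUD - TD50)/(m*TD50)).
    TD50 (dose for 50% risk) and m (slope) values here are illustrative
    placeholders, not the paper's fitted parameters."""
    return norm.cdf((eud - td50) / (m * td50))

# Predicted risk rises steeply with equivalent uniform dose (Gy).
low, mid, high = lyman_ntcp(20.0), lyman_ntcp(30.8), lyman_ntcp(40.0)
```

    In the paper this probit score is one input among several (gender, histology, chemotherapy and treatment schedule) to the boosted-tree ensemble.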

  8. THE IMPACT OF COMPETITIVENESS ON TRADE EFFICIENCY: THE ASIAN EXPERIENCE BY USING THE STOCHASTIC FRONTIER GRAVITY MODEL

    Directory of Open Access Journals (Sweden)

    Memduh Alper Demir

    2017-12-01

    Full Text Available The purpose of this study is to examine the bilateral machinery and transport equipment trade efficiency of fourteen selected Asian countries by applying a stochastic frontier gravity model. These selected countries have the top machinery and transport equipment trade (both export and import) volumes in Asia. The model we use includes variables such as income, market size of trading partners, distance, common culture, common border, common language and global economic crisis, similar to earlier studies using stochastic frontier gravity models. Our work, however, additionally includes a variable called the normalized revealed comparative advantage (NRCA) index. The NRCA index is comparable across commodity, country and time. Thus, the NRCA index is calculated and then included in our stochastic frontier gravity model to see the impact of competitiveness (here measured by the NRCA index) on the efficiency of trade.
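
    The NRCA index is commonly defined (following Yu, Cai and Leung's normalization) as NRCA_ij = E_ij/E − E_i·E_j/E², where E_ij is country i's export of commodity j and E_i, E_j, E are the corresponding totals. A small sketch with made-up export figures:

```python
import numpy as np

def nrca(exports):
    """Normalized revealed comparative advantage.
    exports: (countries x commodities) export-value matrix.
    NRCA_ij = E_ij/E - (E_i * E_j) / E^2; positive values indicate
    comparative advantage, and the index is comparable across
    commodity, country and time."""
    exports = np.asarray(exports, dtype=float)
    E = exports.sum()
    E_i = exports.sum(axis=1, keepdims=True)   # country totals
    E_j = exports.sum(axis=0, keepdims=True)   # commodity totals
    return exports / E - (E_i * E_j) / E**2

# Toy two-country, two-commodity example (illustrative numbers only).
X = np.array([[80.0, 20.0],
              [30.0, 70.0]])
scores = nrca(X)
```

    A useful property of this normalization is that the scores sum to zero over countries and over commodities, so advantages and disadvantages balance out by construction.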

  9. Nonparametric methods for volatility density estimation

    NARCIS (Netherlands)

    Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.

    2009-01-01

    Stochastic volatility modelling of financial processes has become increasingly popular. The proposed models usually contain a stationary volatility process. We will motivate and review several nonparametric methods for estimation of the density of the volatility process. Both models based on

  10. Implementation and evaluation of nonparametric regression procedures for sensitivity analysis of computationally demanding models

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Swiler, Laura P.; Helton, Jon C.; Sallaberry, Cedric J.

    2009-01-01

    The analysis of many physical and engineering problems involves running complex computational models (simulation models, computer codes). With problems of this type, it is important to understand the relationships between the input variables (whose values are often imprecisely known) and the output. The goal of sensitivity analysis (SA) is to study this relationship and identify the most significant factors or variables affecting the results of the model. In this presentation, an improvement on existing methods for SA of complex computer models is described for use when the model is too computationally expensive for a standard Monte-Carlo analysis. In these situations, a meta-model or surrogate model can be used to estimate the necessary sensitivity index for each input. A sensitivity index is a measure of the variance in the response that is due to the uncertainty in an input. Most existing approaches to this problem either do not work well with a large number of input variables and/or they ignore the error involved in estimating a sensitivity index. Here, a new approach to sensitivity index estimation using meta-models and bootstrap confidence intervals is described that provides solutions to these drawbacks. Further, an efficient yet effective approach to incorporate this methodology into an actual SA is presented. Several simulated and real examples illustrate the utility of this approach. This framework can be extended to uncertainty analysis as well.
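
    The meta-model-plus-bootstrap idea can be illustrated on a deliberately cheap stand-in for an expensive simulator; the linear surrogate and the test function below are assumptions for the sketch, not the methods of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a computationally expensive model (assumed form for
# illustration): y = 2*x1 + 0.5*x2 + small noise.
def expensive_model(x):
    return 2.0 * x[:, 0] + 0.5 * x[:, 1] + rng.normal(0, 0.05, len(x))

# Only a small number of runs is affordable.
X = rng.uniform(-1, 1, size=(80, 2))
y = expensive_model(X)

def sensitivity_indices(X, y):
    """Fit a linear meta-model and return first-order sensitivity indices
    S_i = b_i^2 Var(x_i) / sum_j b_j^2 Var(x_j) -- exact for a linear
    surrogate with independent inputs."""
    A = np.column_stack([np.ones(len(X)), X])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    contrib = b[1:] ** 2 * X.var(axis=0)
    return contrib / contrib.sum()

S = sensitivity_indices(X, y)

# Bootstrap confidence intervals: resample the model runs, refit the
# surrogate, recompute the indices -- quantifying the estimation error
# that point estimates alone would ignore.
boot = np.array([sensitivity_indices(X[idx], y[idx])
                 for idx in (rng.integers(0, len(X), len(X))
                             for _ in range(500))])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
```

    The interval width is the point the abstract stresses: a sensitivity index computed from a surrogate is itself an estimate with sampling error.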

  11. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    Science.gov (United States)

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies in the Bangladesh stock market, namely the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated normal distribution is preferable to the half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market, for both distributions in the time-varying environment, whereas it was high for the investment group but low for the ceramic group, as compared with other groups in the DSE market, for both distributions in the time-invariant situation.

  13. Pharmacokinetic modelling of intravenous tobramycin in adolescent and adult patients with cystic fibrosis using the nonparametric expectation maximization (NPEM) algorithm.

    Science.gov (United States)

    Touw, D J; Vinks, A A; Neef, C

    1997-06-01

    The availability of personal computer programs for individualizing drug dosage regimens has stimulated interest in modelling population pharmacokinetics. Data from 82 adolescent and adult patients with cystic fibrosis (CF) who were treated with intravenous tobramycin because of an exacerbation of their pulmonary infection were analysed with a non-parametric expectation maximization (NPEM) algorithm. This algorithm estimates the entire discrete joint probability density of the pharmacokinetic parameters. It also provides traditional parametric statistics such as the means, standard deviations, medians, covariances and correlations among the various parameters, as well as graphical 2- and 3-dimensional representations of the marginal densities of the parameters investigated. Several models for intravenous tobramycin in adolescent and adult patients with CF were compared. Covariates were total body weight (for the volume of distribution) and creatinine clearance (for the total body clearance and elimination rate). Because of a lack of data on patients with poor renal function, restricted models with the non-renal clearance and the non-renal elimination rate constant fixed at literature values of 0.15 L/h and 0.01 h-1 were also included. In this population, intravenous tobramycin could be best described by a median (+/-dispersion factor) volume of distribution per unit of total body weight of 0.28 +/- 0.05 L/kg, elimination rate constant of 0.25 +/- 0.10 h-1 and elimination rate constant per unit of creatinine clearance of 0.0008 +/- 0.0009 h-1/(ml/min/1.73 m2). Analysis of populations of increasing size showed that, using a restricted model with the non-renal elimination rate constant fixed at 0.01 h-1, a model based on a population of only 10 to 20 patients contained parameter values similar to those of the entire population, whereas with the full model a larger population (at least 40 patients) was needed.
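
    The population medians reported above can drive a simple individual dosing prediction. The sketch below is a one-compartment IV bolus model using those medians; it illustrates how such population estimates are applied, not the NPEM algorithm itself:

```python
import numpy as np

def tobramycin_conc(t_h, dose_mg, weight_kg, crcl):
    """One-compartment IV bolus prediction from the population medians
    quoted in the abstract: V = 0.28 L/kg of body weight, and
    k_el = 0.01 h^-1 (non-renal) + 0.0008 h^-1 per ml/min/1.73 m^2
    of creatinine clearance. C(t) = (dose / V) * exp(-k_el * t)."""
    V = 0.28 * weight_kg                 # volume of distribution, L
    k = 0.01 + 0.0008 * crcl             # elimination rate, 1/h
    return dose_mg / V * np.exp(-k * np.asarray(t_h, dtype=float))

# Hypothetical case: 120 mg bolus, 60 kg patient, CrCl 100 ml/min/1.73 m^2.
t = np.array([0.0, 1.0, 8.0])
c = tobramycin_conc(t, dose_mg=120, weight_kg=60, crcl=100)
```

    In practice the NPEM output is a discrete joint density over (V, k), so a Bayesian dosing program would propagate the whole density rather than just these medians.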

  14. Non-parametric model selection for subject-specific topological organization of resting-state functional connectivity.

    Science.gov (United States)

    Ferrarini, Luca; Veer, Ilya M; van Lew, Baldur; Oei, Nicole Y L; van Buchem, Mark A; Reiber, Johan H C; Rombouts, Serge A R B; Milles, J

    2011-06-01

    In recent years, graph theory has been successfully applied to study functional and anatomical connectivity networks in the human brain. Most of these networks have shown small-world topological characteristics: high efficiency in long distance communication between nodes, combined with highly interconnected local clusters of nodes. Moreover, functional studies performed at high resolutions have presented convincing evidence that resting-state functional connectivity networks exhibit (exponentially truncated) scale-free behavior. Such evidence, however, was mostly presented qualitatively, in terms of linear regressions of the degree distributions on log-log plots. Even when quantitative measures were given, these were usually limited to the r² correlation coefficient. However, the r² statistic is not an optimal estimator of explained variance when dealing with (truncated) power-law models. Recent developments in statistics have introduced new non-parametric approaches, based on the Kolmogorov-Smirnov test, for the problem of model selection. In this work, we have built on this idea to statistically tackle the issue of model selection for the degree distribution of functional connectivity at rest. The analysis, performed at voxel level and in a subject-specific fashion, confirmed the superiority of a truncated power-law model, showing high consistency across subjects. Moreover, the most highly connected voxels were found to be consistently part of the default mode network. Our results provide statistically sound support to the evidence previously presented in literature for a truncated power-law model of resting-state functional connectivity. Copyright © 2010 Elsevier Inc. All rights reserved.
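
    The KS-based model-selection idea can be sketched by fitting two candidate degree distributions by maximum likelihood and comparing their KS distances to the data, a simplified Clauset-Shalizi-Newman-style check on synthetic data (continuous distributions and a fixed xmin are simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "degree" sample from a pure power law (continuous, xmin = 1),
# via inverse-transform sampling: x = xmin * (1-U)^(-1/(alpha-1)).
alpha_true, xmin, n = 2.5, 1.0, 2000
x = xmin * (1 - rng.uniform(size=n)) ** (-1.0 / (alpha_true - 1))

def ks_power_law(x, xmin=1.0):
    """MLE fit of a continuous power law, alpha = 1 + n / sum(ln(x/xmin)),
    and the KS distance between empirical and fitted CDFs."""
    alpha = 1 + len(x) / np.sum(np.log(x / xmin))
    xs = np.sort(x)
    emp = np.arange(1, len(xs) + 1) / len(xs)
    model = 1 - (xs / xmin) ** (1 - alpha)
    return alpha, np.max(np.abs(emp - model))

def ks_exponential(x, xmin=1.0):
    """Competing model: shifted exponential with MLE rate 1/mean(x-xmin)."""
    lam = 1.0 / np.mean(x - xmin)
    xs = np.sort(x)
    emp = np.arange(1, len(xs) + 1) / len(xs)
    model = 1 - np.exp(-lam * (xs - xmin))
    return lam, np.max(np.abs(emp - model))

alpha_hat, d_pl = ks_power_law(x)
lam_hat, d_exp = ks_exponential(x)
best = "power law" if d_pl < d_exp else "exponential"
```

    The same comparison extended with a truncated power-law candidate, and applied per subject, is the spirit of the analysis in the abstract; an r² on a log-log regression would not provide such a like-for-like goodness-of-fit contrast.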

  15. On the redistribution of existing inputs using the spherical frontier dea model

    Directory of Open Access Journals (Sweden)

    José Virgilio Guedes de Avellar

    2010-04-01

    Full Text Available The Spherical Frontier DEA Model (SFM) (Avellar et al., 2007) was developed to be used when one wants to fairly distribute a new and fixed input to a group of Decision Making Units (DMUs). SFM's basic idea is to distribute this new and fixed input in such a way that every DMU will be placed on an efficiency frontier with a spherical shape. We use SFM to analyze the problems that appear when one wants to redistribute an already existing input to a group of DMUs such that the total sum of this input remains constant. We also analyze the case in which this total sum may vary.

  16. Non-parametric Bayesian models of response function in dynamic image sequences

    Czech Academy of Sciences Publication Activity Database

    Tichý, Ondřej; Šmídl, Václav

    2016-01-01

    Vol. 151, No. 1 (2016), pp. 90-100 ISSN 1077-3142 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : Response function * Blind source separation * Dynamic medical imaging * Probabilistic models * Bayesian methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.498, year: 2016 http://library.utia.cas.cz/separaty/2016/AS/tichy-0456983.pdf

  17. Modeling intensive longitudinal data with mixtures of nonparametric trajectories and time-varying effects.

    Science.gov (United States)

    Dziak, John J; Li, Runze; Tan, Xianming; Shiffman, Saul; Shiyko, Mariya P

    2015-12-01

    Behavioral scientists increasingly collect intensive longitudinal data (ILD), in which phenomena are measured at high frequency and in real time. In many such studies, it is of interest to describe the pattern of change over time in important variables as well as the changing nature of the relationship between variables. Individuals' trajectories on variables of interest may be far from linear, and the predictive relationship between variables of interest and related covariates may also change over time in a nonlinear way. Time-varying effect models (TVEMs; see Tan, Shiyko, Li, Li, & Dierker, 2012) address these needs by allowing regression coefficients to be smooth, nonlinear functions of time rather than constants. However, it is possible that not only observed covariates but also unknown, latent variables may be related to the outcome. That is, regression coefficients may change over time and also vary for different kinds of individuals. Therefore, we describe a finite mixture version of TVEM for situations in which the population is heterogeneous and in which a single trajectory would conceal important, interindividual differences. This extended approach, MixTVEM, combines finite mixture modeling with non- or semiparametric regression modeling, to describe a complex pattern of change over time for distinct latent classes of individuals. The usefulness of the method is demonstrated in an empirical example from a smoking cessation study. We provide a versatile SAS macro and R function for fitting MixTVEMs. (c) 2015 APA, all rights reserved.
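
    The core TVEM idea, regression coefficients as smooth functions of time, can be sketched with a plain basis expansion (a polynomial basis here instead of the penalized splines typically used, on simulated data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated ILD-style data: the effect of covariate x on y varies
# smoothly over time, beta1(t) = sin(pi * t) on t in [0, 1].
n = 1500
t = rng.uniform(0, 1, n)
x = rng.normal(size=n)
y = 0.5 + np.sin(np.pi * t) * x + rng.normal(0, 0.2, n)

# Represent beta0(t) and beta1(t) with a small polynomial basis in t and
# fit everything jointly by least squares.
def basis(t, deg=4):
    return np.vander(t, deg + 1, increasing=True)

B = basis(t)
design = np.hstack([B, B * x[:, None]])   # [beta0 basis | beta1 basis * x]
coef, *_ = np.linalg.lstsq(design, y, rcond=None)

def beta1(t_new):
    """Estimated time-varying coefficient of x at times t_new."""
    return basis(np.atleast_1d(t_new)) @ coef[5:]

vals = beta1(np.array([0.0, 0.5, 1.0]))   # should track sin(pi*t): ~0, ~1, ~0
```

    MixTVEM additionally assigns each individual a latent class, with a separate set of such coefficient functions per class, estimated via an EM-type algorithm.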

  18. Essays on nonparametric econometrics of stochastic volatility

    NARCIS (Netherlands)

    Zu, Y.

    2012-01-01

    Volatility is a concept that describes the variation of financial returns. Measuring and modelling volatility dynamics is an important aspect of financial econometrics. This thesis is concerned with nonparametric approaches to volatility measurement and volatility model validation.

  19. A framework with nonlinear system model and nonparametric noise for gas turbine degradation state estimation

    International Nuclear Information System (INIS)

    Hanachi, Houman; Liu, Jie; Banerjee, Avisekh; Chen, Ying

    2015-01-01

    Modern health management approaches for gas turbine engines (GTEs) aim to precisely estimate the health state of the GTE components to optimize maintenance decisions with respect to both economy and safety. In this research, we propose an advanced framework to identify the most likely degradation state of the turbine section in a GTE for prognostics and health management (PHM) applications. A novel nonlinear thermodynamic model is used to predict the performance parameters of the GTE given the measurements. The ratio between real efficiency of the GTE and simulated efficiency in the newly installed condition is defined as the health indicator and provided at each sequence. The symptom of nonrecoverable degradations in the turbine section, i.e. loss of turbine efficiency, is assumed to be the internal degradation state. A regularized auxiliary particle filter (RAPF) is developed to sequentially estimate the internal degradation state in nonuniform time sequences upon receiving sets of new measurements. The effectiveness of the technique is examined using the operating data over an entire time-between-overhaul cycle of a simple-cycle industrial GTE. The results clearly show the trend of degradation in the turbine section and the occasional fluctuations, which are well supported by the service history of the GTE. The research also suggests the efficacy of the proposed technique to monitor the health state of the turbine section of a GTE by implementing model-based PHM without the need for additional instrumentation. (paper)
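
    A bootstrap (SIR) particle filter, a simplified stand-in for the paper's regularized auxiliary variant, can illustrate sequential estimation of a degradation state from a noisy health indicator; all numbers below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hidden degradation state: turbine efficiency ratio decays slowly; the
# health indicator observes it with measurement noise.
T, n_particles = 60, 2000
true_d = 1.0 - 0.005 * np.arange(T)
obs = true_d + rng.normal(0, 0.02, T)

particles = np.full(n_particles, 1.0)
estimates = []
for z in obs:
    # Propagate: random-walk degradation with a small downward drift.
    particles = particles - 0.005 + rng.normal(0, 0.005, n_particles)
    # Weight by the Gaussian observation likelihood, then resample (SIR).
    w = np.exp(-0.5 * ((z - particles) / 0.02) ** 2)
    w /= w.sum()
    particles = particles[rng.choice(n_particles, n_particles, p=w)]
    estimates.append(particles.mean())

estimates = np.array(estimates)
```

    The auxiliary and regularization steps of the paper's RAPF refine exactly this loop: the auxiliary step pre-selects promising particles before propagation, and the regularization step smooths the resampled particle cloud to avoid degeneracy.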

  20. Nonparametric modeling and analysis of association between Huntington's disease onset and CAG repeats.

    Science.gov (United States)

    Ma, Yanyuan; Wang, Yuanjia

    2014-04-15

    Huntington's disease (HD) is a neurodegenerative disorder with a dominant genetic mode of inheritance caused by an expansion of CAG repeats on chromosome 4. Typically, a longer sequence of CAG repeat length is associated with increased risk of experiencing earlier onset of HD. Previous studies of the association between HD onset age and CAG length have favored a logistic model, where the CAG repeat length enters the mean and variance components of the logistic model in a complex exponential-linear form. To relax the parametric assumption of the exponential-linear association to the true HD onset distribution, we propose to leave both mean and variance functions of the CAG repeat length unspecified and perform semiparametric estimation in this context through a local kernel and backfitting procedure. Motivated by including family history of HD information available in the family members of participants in the Cooperative Huntington's Observational Research Trial (COHORT), we develop the methodology in the context of mixture data, where some subjects have a positive probability of being risk free. We also allow censoring on the age at onset of disease and accommodate covariates other than the CAG length. We study the theoretical properties of the proposed estimator and derive its asymptotic distribution. Finally, we apply the proposed methods to the COHORT data to estimate the HD onset distribution using a group of study participants and the disease family history information available on their family members. Copyright © 2013 John Wiley & Sons, Ltd.
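
    The local kernel step can be illustrated with a Nadaraya-Watson estimate of mean onset age as a function of CAG length; the data below are simulated under an assumed linear trend, not COHORT data, and censoring and the risk-free mixture component are omitted:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy data mimicking the CAG/onset relationship: longer repeats are
# associated with earlier mean onset age (illustrative coefficients).
n = 800
cag = rng.integers(36, 56, n).astype(float)
onset = 120 - 1.8 * cag + rng.normal(0, 4, n)

def nadaraya_watson(x0, x, y, h=1.5):
    """Nonparametric mean-function estimate via a Gaussian local kernel:
    a weighted average of y with weights K((x - x0) / h)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

grid = np.arange(38, 54)
mean_onset = np.array([nadaraya_watson(g, cag, onset) for g in grid])
slope = np.polyfit(grid.astype(float), mean_onset, 1)[0]
```

    The estimated curve leaves the mean function unspecified, which is the point of the semiparametric approach: no exponential-linear form is imposed on the CAG/onset relationship.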

  1. Nonparametric evaluation of quantitative traits in population-based association studies when the genetic model is unknown.

    Science.gov (United States)

    Konietschke, Frank; Libiger, Ondrej; Hothorn, Ludwig A

    2012-01-01

    Statistical association between a single nucleotide polymorphism (SNP) genotype and a quantitative trait in genome-wide association studies is usually assessed using a linear regression model or, in the case of non-normally distributed trait values, the Kruskal-Wallis test. While linear regression models assume an additive mode of inheritance via equidistant genotype scores, the Kruskal-Wallis test merely tests global differences in trait values associated with the three genotype groups. Both approaches thus exhibit suboptimal power when the underlying inheritance mode is dominant or recessive. Furthermore, these tests do not perform well in the common situations when only a few trait values are available in a rare genotype category (disbalance), or when the values associated with the three genotype categories exhibit unequal variance (variance heterogeneity). We propose a maximum test based on a Marcus-type multiple contrast test for relative effect sizes. This test allows model-specific testing of either dominant, additive or recessive mode of inheritance, and it is robust against variance heterogeneity. We show how to obtain mode-specific simultaneous confidence intervals for the relative effect sizes to aid in interpreting the biological relevance of the results. Further, we discuss the use of a related all-pairwise comparisons contrast test with range-preserving confidence intervals as an alternative to the Kruskal-Wallis heterogeneity test. We applied the proposed maximum test to the Bogalusa Heart Study dataset, and gained a remarkable increase in the power to detect association, particularly for rare genotypes. Our simulation study also demonstrated that the proposed non-parametric tests control the family-wise error rate in the presence of non-normality and variance heterogeneity, contrary to the standard parametric approaches. We provide a publicly available R library nparcomp that can be used to estimate simultaneous confidence intervals or compatible

  2. The relationship between multilevel models and non-parametric multilevel mixture models: Discrete approximation of intraclass correlation, random coefficient distributions, and residual heteroscedasticity.

    Science.gov (United States)

    Rights, Jason D; Sterba, Sonya K

    2016-11-01

    Multilevel data structures are common in the social sciences. Often, such nested data are analysed with multilevel models (MLMs) in which heterogeneity between clusters is modelled by continuously distributed random intercepts and/or slopes. Alternatively, the non-parametric multilevel regression mixture model (NPMM) can accommodate the same nested data structures through discrete latent class variation. The purpose of this article is to delineate analytic relationships between NPMM and MLM parameters that are useful for understanding the indirect interpretation of the NPMM as a non-parametric approximation of the MLM, with relaxed distributional assumptions. We define how seven standard and non-standard MLM specifications can be indirectly approximated by particular NPMM specifications. We provide formulas showing how the NPMM can serve as an approximation of the MLM in terms of intraclass correlation, random coefficient means and (co)variances, heteroscedasticity of residuals at level 1, and heteroscedasticity of residuals at level 2. Further, we discuss how these relationships can be useful in practice. The specific relationships are illustrated with simulated graphical demonstrations, and direct and indirect interpretations of NPMM classes are contrasted. We provide an R function to aid in implementing and visualizing an indirect interpretation of NPMM classes. An empirical example is presented and future directions are discussed. © 2016 The British Psychological Society.
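
    The discrete approximation of the intraclass correlation can be computed directly from NPMM quantities: replace the continuous random-intercept distribution with K latent classes, so that ICC = Var(class intercepts) / (Var(class intercepts) + level-1 residual variance). The class probabilities and intercepts below are hypothetical values for illustration:

```python
import numpy as np

# Hypothetical NPMM solution: three latent classes with probabilities
# pi_k and class-specific intercepts mu_k, plus level-1 residual variance.
pi = np.array([0.3, 0.5, 0.2])      # class probabilities (sum to 1)
mu = np.array([-1.0, 0.0, 1.5])     # class-specific intercepts
sigma2_within = 1.0                 # level-1 residual variance

# Between-cluster variance of the discrete intercept distribution:
# sum_k pi_k * (mu_k - grand_mean)^2, the NPMM analogue of the MLM
# random-intercept variance.
grand_mean = np.sum(pi * mu)
between = np.sum(pi * (mu - grand_mean) ** 2)
icc = between / (between + sigma2_within)
```

    The same moment-matching logic extends to random slopes (class-specific coefficients) and to residual heteroscedasticity, per the formulas the article derives.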

  3. Hybrid elementary flux analysis/nonparametric modeling: application for bioprocess control

    Directory of Open Access Journals (Sweden)

    Alves Paula M

    2007-01-01

    Full Text Available Abstract Background The progress in the "-omic" sciences has allowed deeper knowledge of many biological systems with industrial interest. This knowledge is still rarely used for advanced bioprocess monitoring and control at the bioreactor level. In this work, a bioprocess control method is presented, which is designed on the basis of the metabolic network of the organism under consideration. The bioprocess dynamics are formulated using hybrid rigorous/data-driven systems, and their inherent structure is defined by the metabolism elementary modes. Results The metabolic network of the system under study is decomposed into elementary modes (EMs), which are the simplest paths able to operate coherently in steady-state. A reduced reaction mechanism in the form of simplified reactions connecting substrates with end-products is obtained. A dynamical hybrid system integrating material balance equations, EMs reactions stoichiometry and kinetics was formulated. EMs kinetics were defined as the product of two terms: a mechanistic/empirical known term and an unknown term that must be identified from data, from a process optimisation perspective. This approach allows the quantification of fluxes carried by individual elementary modes, which is of great help to identify dominant pathways as a function of environmental conditions. The methodology was employed to analyse experimental data of recombinant Baby Hamster Kidney (BHK-21A) cultures producing a recombinant fusion glycoprotein. The identified EMs kinetics demonstrated typical glucose and glutamine metabolic responses during cell growth and IgG1-IL2 synthesis. Finally, an online optimisation study was conducted in which the optimal feeding strategies of glucose and glutamine were calculated after re-estimation of model parameters at each sampling time. An improvement in the final product concentration was obtained as a result of this online optimisation. Conclusion The main contribution of this work is a

  4. On Cooper's Nonparametric Test.

    Science.gov (United States)

    Schmeidler, James

    1978-01-01

    The basic assumption of Cooper's nonparametric test for trend (EJ 125 069) is questioned. It is contended that the proper assumption alters the distribution of the statistic and reduces its usefulness. (JKS)

  5. Examining the nonparametric effect of drivers' age in rear-end accidents through an additive logistic regression model.

    Science.gov (United States)

    Ma, Lu; Yan, Xuedong

    2014-06-01

    This study seeks to inspect the nonparametric characteristics connecting the age of the driver to the relative risk of being an at-fault vehicle, in order to discover a more precise and smooth pattern of age impact, which has commonly been neglected in past studies. Records of drivers in two-vehicle rear-end collisions are selected from the general estimates system (GES) 2011 dataset. These extracted observations in fact constitute inherently matched driver pairs under certain matching variables including weather conditions, pavement conditions and road geometry design characteristics that are shared by pairs of drivers in rear-end accidents. The introduced data structure is able to guarantee that the variance of the response variable will not depend on the matching variables and hence provides a high power of statistical modeling. The estimation results exhibit a smooth cubic spline function for examining the nonlinear relationship between the age of the driver and the log odds of being at fault in a rear-end accident. The results are presented with respect to the main effect of age, the interaction effect between age and sex, and the effects of age under different scenarios of pre-crash actions by the leading vehicle. Compared to the conventional specification in which age is categorized into several predefined groups, the proposed method is more flexible and able to produce quantitatively explicit results. First, it confirms the U-shaped pattern of the age effect, and further shows that the risks of young and old drivers change rapidly with age. Second, the interaction effects between age and sex show that female and male drivers behave differently in rear-end accidents. Third, it is found that the pattern of age impact varies according to the type of pre-crash actions exhibited by the leading vehicle. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Managerial performance and cost efficiency of Japanese local public hospitals: a latent class stochastic frontier model.

    Science.gov (United States)

    Besstremyannaya, Galina

    2011-09-01

    The paper explores the link between managerial performance and cost efficiency of 617 Japanese general local public hospitals in 1999-2007. Treating managerial performance as unobservable heterogeneity, the paper employs a panel data stochastic cost frontier model with latent classes. Financial parameters associated with better managerial performance are found to be positively significant in explaining the probability of belonging to the more efficient latent class. The analysis of latent class membership was consistent with the conjecture that unobservable technological heterogeneity reflected in the existence of the latent classes is related to managerial performance. The findings may support the cause for raising efficiency of Japanese local public hospitals by enhancing the quality of management. Copyright © 2011 John Wiley & Sons, Ltd.

  7. The technology gap and efficiency measure in WEC countries: Application of the hybrid meta frontier model

    International Nuclear Information System (INIS)

    Chiu, Yung-Ho; Lee, Jen-Hui; Lu, Ching-Cheng; Shyu, Ming-Kuang; Luo, Zhengying

    2012-01-01

    This study develops a hybrid meta frontier DEA model, in which inputs are distinguished into radial inputs that change proportionally and non-radial inputs that change non-proportionally, in order to measure the technical efficiency and technology gap ratios (TGR) of four different regions: Asia, Africa, America, and Europe. The paper selects 87 countries that were members of the World Energy Council from 2005 to 2007. The input variables are industry and population, while the output variables are gross domestic product (GDP) and the amount of fossil-fuel CO2 emissions. The results show that countries' efficiency rankings within their own regions exhibit considerable volatility. In terms of the technology gap ratio, Europe is the most efficient region, while Asia has lower efficiency than the other regions over the same period. Finally, regions with higher industry (or GDP) did not necessarily have higher efficiency from 2005 to 2007, and higher CO2 emissions or population did not necessarily imply lower efficiency. In addition, Brazil is not an OECD member, yet it is more efficient than other OECD members among the emerging countries. OECD countries are more efficient than non-OECD countries, and Europe outperforms Asia in controlling CO2 emissions. If non-OECD or Asian countries are to reach the best efficiency score, they should try to control CO2 emissions. - Highlights: ► A new meta frontier model for evaluating efficiency and technology gap ratios. ► Higher CO2 emissions do not necessarily imply lower efficiency, as Europe shows. ► Asia's output and CO2 emissions increased simultaneously while its efficiency declined. ► Non-OECD and Asian countries should control CO2 emissions to reach the best efficiency score.
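
    As a generic illustration of the envelopment side of such frontier models (a plain input-oriented CCR DEA score, not the authors' hybrid radial/non-radial formulation; the two-unit data set is hypothetical), the efficiency score is the optimum of a small linear program:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(x, y, o):
    """Input-oriented CCR efficiency of decision-making unit o.

    x: (n, m) array of inputs, y: (n, s) array of outputs.
    Solves: min theta  s.t.  X'lam <= theta * x_o,  Y'lam >= y_o,  lam >= 0.
    """
    n, m = x.shape
    s = y.shape[1]
    c = np.r_[1.0, np.zeros(n)]          # variables: [theta, lam_1..lam_n]
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -x[o]                  # sum_j lam_j x_ji - theta x_oi <= 0
    A_ub[:m, 1:] = x.T
    A_ub[m:, 1:] = -y.T                  # -sum_j lam_j y_jr <= -y_or
    b_ub[m:] = -y[o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Toy data: unit 0 produces the same output as unit 1 with half the input.
x = np.array([[2.0], [4.0]])
y = np.array([[2.0], [2.0]])
eff = [dea_ccr_input(x, y, o) for o in range(2)]
```

    Here unit 0 lies on the frontier (score 1) and unit 1 could proportionally shrink its input to 50% of its observed level, hence a score of 0.5.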

  8. A non-parametric mixture model for genome-enabled prediction of genetic value for a quantitative trait.

    Science.gov (United States)

    Gianola, Daniel; Wu, Xiao-Lin; Manfredi, Eduardo; Simianer, Henner

    2010-10-01

    A Bayesian nonparametric form of regression based on Dirichlet process priors is adapted to the analysis of quantitative traits possibly affected by cryptic forms of gene action, and to the context of SNP-assisted genomic selection, where the main objective is to predict a genomic signal on phenotype. The procedure clusters unknown genotypes into groups with distinct genetic values, but in a setting in which the number of clusters is unknown a priori, so that standard methods for finite mixture analysis do not work. The central assumption is that genetic effects follow an unknown distribution with some "baseline" family, which is a normal process in the cases considered here. A Bayesian analysis based on the Gibbs sampler produces estimates of the number of clusters, posterior means of genetic effects, a measure of credibility in the baseline distribution, as well as estimates of parameters of the latter. The procedure is illustrated with a simulation representing two populations. In the first one, there are 3 unknown QTL, with additive, dominance and epistatic effects; in the second, there are 10 QTL with additive, dominance and additive × additive epistatic effects. In the two populations, baseline parameters are inferred correctly. The Dirichlet process model infers the number of unique genetic values correctly in the first population, but it produces an understatement in the second one; here, the true number of clusters is over 900, and the model gives a posterior mean estimate of about 140, probably because more replication of genotypes is needed for correct inference. The impact on inferences of the prior distribution of a key parameter (M), and of the extent of replication, was examined via an analysis of mean body weight in 192 paternal half-sib families of broiler chickens, where each sire was genotyped for nearly 7,000 SNPs. In this small sample, it was found that inference about the number of clusters was affected by the prior distribution of M. For a
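
    The "unknown number of clusters" behaviour of a Dirichlet process prior can be illustrated through its induced partition distribution, the Chinese restaurant process. This is a generic sketch of the prior, not the Gibbs sampler used in the paper; the concentration parameter plays the role of the key parameter M discussed above:

```python
import random

def crp_partition(n, alpha):
    """Sample cluster sizes for n items from the Chinese restaurant process.

    Item i joins an existing cluster k with probability size_k / (i + alpha)
    or starts a new cluster with probability alpha / (i + alpha).
    """
    tables = []                       # tables[k] = number of items in cluster k
    for i in range(n):
        weights = tables + [alpha]    # last weight corresponds to a new cluster
        r = random.uniform(0, i + alpha)
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(tables):
            tables.append(1)          # open a new cluster
        else:
            tables[k] += 1
    return tables

random.seed(1)
tables = crp_partition(100, 1.0)
print(len(tables), "clusters with sizes", tables)
```

    Larger alpha yields more, smaller clusters a priori, which is why inference about the number of distinct genetic values is sensitive to the prior on this parameter, as the record reports.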

  9. Nonparametric functional mapping of quantitative trait loci.

    Science.gov (United States)

    Yang, Jie; Wu, Rongling; Casella, George

    2009-03-01

    Functional mapping is a useful tool for mapping quantitative trait loci (QTL) that control dynamic traits. It incorporates mathematical aspects of biological processes into the mixture model-based likelihood setting for QTL mapping, thus increasing the power of QTL detection and the precision of parameter estimation. However, in many situations there is no obvious functional form and, in such cases, this strategy will not be optimal. Here we propose to use nonparametric function estimation, typically implemented with B-splines, to estimate the underlying functional form of phenotypic trajectories, and then construct a nonparametric test to find evidence of existing QTL. Using the representation of a nonparametric regression as a mixed model, the final test statistic is a likelihood ratio test. We consider two types of genetic maps: dense maps and general maps, and the power of nonparametric functional mapping is investigated through simulation studies and demonstrated by examples.
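
    The B-spline machinery referred to above can be sketched by building a design matrix of basis functions with SciPy; the degree and knot layout below are arbitrary illustrative choices, not those of the paper:

```python
import numpy as np
from scipy.interpolate import BSpline

degree = 3
# Clamped (repeated-endpoint) knot vector on [0, 1], a common default.
interior = np.linspace(0.0, 1.0, 8)
knots = np.concatenate(([0.0] * degree, interior, [1.0] * degree))
n_basis = len(knots) - degree - 1

# Evaluation points for a phenotypic trajectory (rescaled time).
x = np.linspace(0.0, 0.99, 50)

# Design matrix: column j is the j-th B-spline basis function at x.
identity = np.eye(n_basis)
B = np.column_stack([BSpline(knots, identity[j], degree)(x)
                     for j in range(n_basis)])
```

    A nonparametric trajectory estimate is then a linear combination B @ coef, and representing the smoother as a mixed model, as the record describes, amounts to placing a random-effect structure on those coefficients.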

  10. Theory of nonparametric tests

    CERN Document Server

    Dickhaus, Thorsten

    2018-01-01

    This textbook provides a self-contained presentation of the main concepts and methods of nonparametric statistical testing, with a particular focus on the theoretical foundations of goodness-of-fit tests, rank tests, resampling tests, and projection tests. The substitution principle is employed as a unified approach to the nonparametric test problems discussed. In addition to mathematical theory, it also includes numerous examples and computer implementations. The book is intended for advanced undergraduate, graduate, and postdoc students as well as young researchers. Readers should be familiar with the basic concepts of mathematical statistics typically covered in introductory statistics courses.

  11. Estimating cost efficiency of Turkish commercial banks under unobserved heterogeneity with stochastic frontier models

    Directory of Open Access Journals (Sweden)

    Hakan Gunes

    2016-12-01

    Full Text Available This study aims to investigate the cost efficiency of Turkish commercial banks over the restructuring period of the Turkish banking system, which coincides with the 2008 global financial crisis and the 2010 European sovereign debt crisis. To this end, within the stochastic frontier framework, we employ the true fixed effects model, where unobserved bank heterogeneity is integrated into the inefficiency distribution at the mean level. To select the cost function with the most appropriate inefficiency correlates, we first adopt a search algorithm and then utilize the model averaging approach to verify that our results are not exposed to model selection bias. Overall, our empirical results reveal that cost efficiencies of Turkish banks have improved over time, with the effects of the 2008 and 2010 crises remaining rather limited. Furthermore, not only the cost efficiency scores but also the impacts of the crises on those scores appear to vary with bank size and ownership structure, in accordance with much of the existing literature.

  12. Statistical analysis of water-quality data containing multiple detection limits II: S-language software for nonparametric distribution modeling and hypothesis testing

    Science.gov (United States)

    Lee, L.; Helsel, D.

    2007-01-01

    Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data-perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply-censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis" where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and related confidence limits computation. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation and interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
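
    The Kaplan-Meier estimator at the core of the described software can be sketched for the standard right-censored case; for the left-censored concentrations discussed here, the usual trick is to flip the data about a constant first. The function below is a minimal generic illustration, not the authors' S-language routines:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve for right-censored data.

    times:  observed times.
    events: 1 if the event was observed, 0 if censored.
    Returns a list of (time, survival probability) at event times.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = 0
        total = 0
        while i < len(data) and data[i][0] == t:   # ties at time t
            deaths += data[i][1]
            total += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / n_at_risk          # product-limit step
            curve.append((t, s))
        n_at_risk -= total
    return curve

curve = kaplan_meier([1, 2, 3], [1, 0, 1])
```

    Note that, as the record stresses, the estimate is defined only over the observed data range: there is no extrapolation step anywhere in the product-limit construction.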

  13. Data analysis and approximate models model choice, location-scale, analysis of variance, nonparametric regression and image analysis

    CERN Document Server

    Davies, Patrick Laurie

    2014-01-01

    Contents: Introduction; Approximate Models; Notation; Two Modes of Statistical Analysis; Towards One Mode of Analysis; Approximation, Randomness, Chaos, Determinism; A Concept of Approximation; Approximation; Approximating a Data Set by a Model; Approximation Regions; Functionals and Equivariance; Regularization and Optimality; Metrics and Discrepancies; Strong and Weak Topologies; On Being (almost) Honest; Simulations and Tables; Degree of Approximation and p-values; Scales; Stability of Analysis; The Choice of En(α, P); Independence; Procedures, Approximation and Vagueness; Discrete Models; The Empirical Density; Metrics and Discrepancies; The Total Variation Metric; The Kullback-Leibler and Chi-Squared Discrepancies; The Po(λ) Model; The b(k, p) and nb(k, p) Models; The Flying Bomb Data; The Student Study Times Data; Outliers; Outliers, Data Analysis and Models; Breakdown Points and Equivariance; Identifying Outliers and Breakdown; Outliers in Multivariate Data; Outliers in Linear Regression; Outliers in Structured Data; The Location...

  14. Estimation of the lifetime distribution of mechatronic systems in the presence of a covariate: A comparison among parametric, semiparametric and nonparametric models

    International Nuclear Information System (INIS)

    Bobrowski, Sebastian; Chen, Hong; Döring, Maik; Jensen, Uwe; Schinköthe, Wolfgang

    2015-01-01

    In practice manufacturers may have lots of failure data of similar products using the same technology basis under different operating conditions. Thus, one can try to derive predictions for the distribution of the lifetime of newly developed components or new application environments through the existing data using regression models based on covariates. Three categories of such regression models are considered: a parametric, a semiparametric and a nonparametric approach. First, we assume that the lifetime is Weibull distributed, where its parameters are modelled as linear functions of the covariate. Second, the Cox proportional hazards model, well-known in Survival Analysis, is applied. Finally, a kernel estimator is used to interpolate between empirical distribution functions. In particular the last case is new in the context of reliability analysis. We propose a goodness of fit measure (GoF), which can be applied to all three types of regression models. Using this GoF measure we discuss a new model selection procedure. To illustrate this method of reliability prediction, the three classes of regression models are applied to real test data of motor experiments. Further the performance of the approaches is investigated by Monte Carlo simulations. - Highlights: • We estimate the lifetime distribution in the presence of a covariate. • Three types of regression models are considered and compared. • A new nonparametric estimator based on our particular data structure is introduced. • We propose a goodness of fit measure and show a new model selection procedure. • A case study with real data and Monte Carlo simulations are performed
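
    The parametric end of the comparison, a Weibull lifetime model, can be sketched with a maximum-likelihood fit to simulated failure times; the parameter values are hypothetical and the covariate-dependent parameterization of the paper is omitted:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(42)

# Simulate lifetimes from a Weibull with known shape and scale
# (shape > 1 corresponds to an increasing hazard, i.e. wear-out).
true_shape, true_scale = 2.0, 100.0
lifetimes = weibull_min.rvs(true_shape, scale=true_scale,
                            size=500, random_state=rng)

# Maximum-likelihood fit with the location fixed at zero.
shape_hat, loc_hat, scale_hat = weibull_min.fit(lifetimes, floc=0)
print(f"shape ~ {shape_hat:.2f}, scale ~ {scale_hat:.1f}")
```

    In the paper's setting the shape and scale would themselves be modelled as linear functions of the covariate, so that lifetimes under new operating conditions can be predicted.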

  15. Cost efficiency of Japanese steam power generation companies: A Bayesian comparison of random and fixed frontier models

    Energy Technology Data Exchange (ETDEWEB)

    Assaf, A. George [Isenberg School of Management, University of Massachusetts-Amherst, 90 Campus Center Way, Amherst 01002 (United States); Barros, Carlos Pestana [Instituto Superior de Economia e Gestao, Technical University of Lisbon, Rua Miguel Lupi, 20, 1249-078 Lisbon (Portugal); Managi, Shunsuke [Graduate School of Environmental Studies, Tohoku University, 6-6-20 Aramaki-Aza Aoba, Aoba-Ku, Sendai 980-8579 (Japan)

    2011-04-15

    This study analyses and compares the cost efficiency of Japanese steam power generation companies using the fixed and random Bayesian frontier models. We show that it is essential to account for heterogeneity in modelling the performance of energy companies. Results from the model estimation also indicate that restricting CO2 emissions can lead to a decrease in total cost. The study finally discusses the efficiency variations between the energy companies under analysis, and elaborates on the managerial and policy implications of the results.

  16. Frontier commodification

    DEFF Research Database (Denmark)

    Bennike, Rune Bolding

    2017-01-01

    In the contemporary global imagination, Darjeeling typically figures on two accounts: as a unique tourism site replete with colonial heritage and picturesque nature, and as the productive origin for some of the world's most exclusive teas. In this commodified and consumable form, Darjeeling is part...... of material and representational interventions, I uncover the particular assemblage of government and capital that enabled this transformation and highlight its potential resonances with contemporary cases of frontier commodification in South Asia and beyond....

  17. Nonparametric statistical inference

    CERN Document Server

    Gibbons, Jean Dickinson

    2014-01-01

    Thoroughly revised and reorganized, the fourth edition presents in-depth coverage of the theory and methods of the most widely used nonparametric procedures in statistical analysis and offers example applications appropriate for all areas of the social, behavioral, and life sciences. The book presents new material on the quantiles, the calculation of exact and simulated power, multiple comparisons, additional goodness-of-fit tests, methods of analysis of count data, and modern computer applications using MINITAB, SAS, and STATXACT. It includes tabular guides for simplified applications of tests and finding P values and confidence interval estimates.

  18. Nonparametric e-Mixture Estimation.

    Science.gov (United States)

    Takano, Ken; Hino, Hideitsu; Akaho, Shotaro; Murata, Noboru

    2016-12-01

    This study considers the common situation in data analysis when there are few observations of the distribution of interest, the target distribution, while abundant observations are available from auxiliary distributions. In this situation, it is natural to compensate for the lack of data from the target distribution by using data sets from these auxiliary distributions; in other words, approximating the target distribution in a subspace spanned by a set of auxiliary distributions. Mixture modeling is one of the simplest ways to integrate information from the target and auxiliary distributions in order to express the target distribution as accurately as possible. There are two typical mixtures in the context of information geometry: the m- and e-mixtures. The m-mixture is applied in a variety of research fields because of the presence of the well-known expectation-maximization algorithm for parameter estimation, whereas the e-mixture is rarely used because of its difficulty of estimation, particularly for nonparametric models. The e-mixture, however, is a well-tempered distribution that satisfies the principle of maximum entropy. To model a target distribution with scarce observations accurately, this letter proposes a novel framework for nonparametric modeling of the e-mixture and a geometrically inspired estimation algorithm. As numerical examples of the proposed framework, a transfer learning setup is considered. The experimental results show that this framework works well for three types of synthetic data sets, as well as an EEG real-world data set.
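
    On a discretized support, the contrast between the two kinds of mixture can be sketched directly: the ordinary (m-) mixture averages densities arithmetically, while the e-mixture averages log-densities, i.e. takes a geometric average and renormalizes. The two Gaussians below are illustrative choices, not the paper's experimental setup:

```python
import numpy as np

# Discretized support and two component densities (standard normals at +-2).
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
gauss = lambda m: np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2.0 * np.pi)
p, q = gauss(-2.0), gauss(2.0)

lam = 0.5
m_mix = (1 - lam) * p + lam * q        # m-mixture: arithmetic average (bimodal)
e_mix = p ** (1 - lam) * q ** lam      # e-mixture: geometric average ...
e_mix /= e_mix.sum() * dx              # ... renormalized (unimodal here)

mean_e = (x * e_mix).sum() * dx
```

    For two unit-variance Gaussians and lam = 0.5, the renormalized geometric average is again a unit-variance Gaussian centred between the components, which illustrates the "well-tempered" character the record mentions.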

  19. Nonparametric predictive inference in statistical process control

    NARCIS (Netherlands)

    Arts, G.R.J.; Coolen, F.P.A.; Laan, van der P.

    2004-01-01

    Statistical process control (SPC) is used to decide when to stop a process as confidence in the quality of the next item(s) is low. Information to specify a parametric model is not always available, and as SPC is of a predictive nature, we present a control chart developed using nonparametric

  20. A Bayesian Nonparametric Approach to Factor Analysis

    DEFF Research Database (Denmark)

    Piatek, Rémi; Papaspiliopoulos, Omiros

    2018-01-01

    This paper introduces a new approach for the inference of non-Gaussian factor models based on Bayesian nonparametric methods. It relaxes the usual normality assumption on the latent factors, widely used in practice, which is too restrictive in many settings. Our approach, on the contrary, does no...

  1. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...
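
    A generic kernel regression of the kind referred to here is the Nadaraya-Watson estimator; the sketch below uses a Gaussian kernel on noiseless toy data (bandwidth and data are illustrative, and this is a cross-sectional sketch, not the authors' panel specification):

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Nadaraya-Watson kernel regression estimate with a Gaussian kernel."""
    d = (x_eval[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * d ** 2)                      # kernel weights
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

# Noiseless toy curve, so the only error is smoothing bias.
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x)
xe = np.linspace(1.0, 5.0, 9)                      # interior points only
fit = nadaraya_watson(x, y, xe, bandwidth=0.3)
```

    The bandwidth plays the role the kernel choice plays in the record's comparison: results typically depend far more on the bandwidth than on the kernel function itself.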

  2. A NONPARAMETRIC HYPOTHESIS TEST VIA THE BOOTSTRAP RESAMPLING

    OpenAIRE

    Temel, Tugrul T.

    2001-01-01

    This paper adapts an existing nonparametric hypothesis test to the bootstrap framework. The test utilizes the nonparametric kernel regression method to estimate a measure of distance between the models stated under the null hypothesis. The bootstrapped version of the test makes it possible to approximate the errors involved in the asymptotic hypothesis test. The paper also develops a Mathematica code for the test algorithm.

  3. Nonparametric Bayesian inference in biostatistics

    CERN Document Server

    Müller, Peter

    2015-01-01

    As chapters in this book demonstrate, BNP has important uses in clinical sciences and inference for issues like unknown partitions in genomics. Nonparametric Bayesian approaches (BNP) play an ever expanding role in biostatistical inference from use in proteomics to clinical trials. Many research problems involve an abundance of data and require flexible and complex probability models beyond the traditional parametric approaches. As this book's expert contributors show, BNP approaches can be the answer. Survival Analysis, in particular survival regression, has traditionally used BNP, but BNP's potential is now very broad. This applies to important tasks like arrangement of patients into clinically meaningful subpopulations and segmenting the genome into functionally distinct regions. This book is designed to both review and introduce application areas for BNP. While existing books provide theoretical foundations, this book connects theory to practice through engaging examples and research questions. Chapters c...

  4. Mass Modeling of Frontier Fields Cluster MACS J1149.5+2223 Using Strong and Weak Lensing

    Science.gov (United States)

    Finney, Emily Quinn; Bradač, Maruša; Huang, Kuang-Han; Hoag, Austin; Morishita, Takahiro; Schrabback, Tim; Treu, Tommaso; Borello Schmidt, Kasper; Lemaux, Brian C.; Wang, Xin; Mason, Charlotte

    2018-05-01

    We present a gravitational-lensing model of MACS J1149.5+2223 using ultra-deep Hubble Frontier Fields imaging data and spectroscopic redshifts from HST grism and Very Large Telescope (VLT)/MUSE spectroscopic data. We create total mass maps using 38 multiple images (13 sources) and 608 weak-lensing galaxies, as well as 100 multiple images of 31 star-forming regions in the galaxy that hosts supernova Refsdal. We find good agreement with a range of recent models within the HST field of view. We present a map of the ratio of projected stellar mass to total mass (f⋆) and find that the stellar mass fraction for this cluster peaks on the primary BCG. Averaging within a radius of 0.3 Mpc, we obtain a value of f⋆ = 0.012 (+0.004, −0.003), consistent with other recent results for this ratio in cluster environments, though with a large global error (up to δf⋆ = 0.005) primarily due to the choice of IMF. We compare values of f⋆ and measures of star formation efficiency for this cluster to other Hubble Frontier Fields clusters studied in the literature, finding that MACS1149 has a higher stellar mass fraction than these other clusters but a star formation efficiency typical of massive clusters.

  5. Bayesian Nonparametric Longitudinal Data Analysis.

    Science.gov (United States)

    Quintana, Fernando A; Johnson, Wesley O; Waetjen, Elaine; Gold, Ellen

    2016-01-01

    Practical Bayesian nonparametric methods have been developed across a wide variety of contexts. Here, we develop a novel statistical model that generalizes standard mixed models for longitudinal data to include flexible mean functions as well as combined compound symmetry (CS) and autoregressive (AR) covariance structures. AR structure is often specified through the use of a Gaussian process (GP) with covariance functions that allow longitudinal data to be more correlated if they are observed closer in time than if they are observed farther apart. We allow for AR structure by considering a broader class of models that incorporates a Dirichlet Process Mixture (DPM) over the covariance parameters of the GP. We are able to take advantage of modern Bayesian statistical methods in making full predictive inferences about characteristics of longitudinal profiles and their differences across covariate combinations. We also take advantage of the generality of our model, which provides for estimation of a variety of covariance structures. We observe that models that fail to incorporate CS or AR structure can result in very poor estimation of a covariance or correlation matrix. In our illustration using hormone data observed on women through the menopausal transition, biology dictates the use of a generalized family of sigmoid functions as a model for time trends across subpopulation categories.
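
    The combined compound-symmetry and autoregressive covariance structure mentioned above can be sketched as a kernel: a shared between-visit level plus an exponentially decaying Gaussian-process term. The parameter values and visit times are hypothetical:

```python
import numpy as np

def cs_ar_cov(times, tau2, sigma2, rho):
    """Compound-symmetry plus AR (exponential-decay) covariance for one subject.

    tau2   : shared level common to all visits (CS part)
    sigma2 : variance of the Gaussian-process / AR part
    rho    : correlation between observations one time unit apart, 0 < rho < 1
    """
    t = np.asarray(times, dtype=float)
    lag = np.abs(t[:, None] - t[None, :])      # pairwise time lags
    return tau2 + sigma2 * rho ** lag          # CS term + decaying AR term

# Irregularly spaced visit times for a single subject (hypothetical).
K = cs_ar_cov([0.0, 0.5, 1.5, 3.0], tau2=0.3, sigma2=1.0, rho=0.6)
```

    Both summands are valid covariance kernels, so their sum is positive semi-definite; the DPM layer in the paper then places a flexible prior over (tau2, sigma2, rho) rather than fixing them.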

  6. 2nd Conference of the International Society for Nonparametric Statistics

    CERN Document Server

    Manteiga, Wenceslao; Romo, Juan

    2016-01-01

    This volume collects selected, peer-reviewed contributions from the 2nd Conference of the International Society for Nonparametric Statistics (ISNPS), held in Cádiz, Spain, on June 11–16, 2014, and sponsored by the American Statistical Association, the Institute of Mathematical Statistics, the Bernoulli Society for Mathematical Statistics and Probability, the Journal of Nonparametric Statistics and Universidad Carlos III de Madrid. The 15 articles are a representative sample of the 336 contributed papers presented at the conference. They cover topics such as high-dimensional data modelling, inference for stochastic processes and for dependent data, nonparametric and goodness-of-fit testing, nonparametric curve estimation, object-oriented data analysis, and semiparametric inference. The aim of the ISNPS 2014 conference was to bring together recent advances and trends in several areas of nonparametric statistics in order to facilitate the exchange of research ideas, promote collaboration among researchers...

  7. A nonparametric random coefficient approach for life expectancy growth using a hierarchical mixture likelihood model with application to regional data from North Rhine-Westphalia (Germany).

    Science.gov (United States)

    Böhning, Dankmar; Karasek, Sarah; Terschüren, Claudia; Annuß, Rolf; Fehr, Rainer

    2013-03-09

    Life expectancy is of prime and increasing interest for a variety of reasons. In many countries, life expectancy is growing linearly, without any indication of reaching a limit. The state of North Rhine-Westphalia (NRW) in Germany, with its 54 districts, is considered here, where the above-mentioned growth in life expectancy is occurring as well. However, there is also empirical evidence that life expectancy is not growing linearly at the same level for different regions. To explore this situation further, a likelihood-based cluster analysis is suggested and performed. The modelling uses a nonparametric mixture approach for the latent random effect. Maximum likelihood estimates are determined by means of the EM algorithm, and the number of components in the mixture model is found on the basis of the Bayesian Information Criterion. Regions are classified into the mixture components (clusters) using the maximum posterior allocation rule. For the data analysed here, 7 components are found, with a spatial concentration of lower life expectancy levels in the centre of NRW, formerly an enormous conglomerate of heavy industry and still the most densely populated area, with Gelsenkirchen having the lowest level of life expectancy growth for both genders. The paper offers some explanations for this fact, including demographic and socio-economic sources. This case study shows that life expectancy growth is largely linear, but it may occur on different levels.
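
    The nonparametric-mixture machinery described here can be illustrated, in a much reduced form, by a plain two-component normal mixture fitted with EM on simulated data. This is a generic sketch, not the hierarchical random-coefficient model of the paper:

```python
import numpy as np

def em_two_normals(x, n_iter=100):
    """EM for a two-component Gaussian mixture (free means, variances, weight)."""
    w, mu1, mu2 = 0.5, x.min(), x.max()        # crude but effective starts
    s1 = s2 = x.std()
    for _ in range(n_iter):
        # E-step: responsibilities of component 1 for each observation.
        d1 = w * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
        d2 = (1 - w) * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
        r = d1 / (d1 + d2)
        # M-step: weighted updates of weight, means and standard deviations.
        w = r.mean()
        mu1 = (r * x).sum() / r.sum()
        mu2 = ((1 - r) * x).sum() / (1 - r).sum()
        s1 = np.sqrt((r * (x - mu1) ** 2).sum() / r.sum())
        s2 = np.sqrt(((1 - r) * (x - mu2) ** 2).sum() / (1 - r).sum())
    return w, mu1, mu2, s1, s2

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 1.0, 400), rng.normal(5.0, 1.0, 400)])
w, mu1, mu2, s1, s2 = em_two_normals(x)
```

    In the paper the same EM idea runs over latent district clusters of life-expectancy growth levels, and the number of components (7 for the NRW data) is chosen by refitting with different component counts and comparing BIC values.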

  8. Nonparametric identification of copula structures

    KAUST Repository

    Li, Bo; Genton, Marc G.

    2013-01-01

    We propose a unified framework for testing a variety of assumptions commonly made about the structure of copulas, including symmetry, radial symmetry, joint symmetry, associativity and Archimedeanity, and max-stability. Our test is nonparametric and based on the asymptotic distribution of the empirical copula process.

  9. Measuring economy-wide energy efficiency performance: A parametric frontier approach

    International Nuclear Information System (INIS)

    Zhou, P.; Ang, B.W.; Zhou, D.Q.

    2012-01-01

    This paper proposes a parametric frontier approach to estimating economy-wide energy efficiency performance from a production efficiency point of view. It uses the Shephard energy distance function to define an energy efficiency index and adopts the stochastic frontier analysis technique to estimate the index. A case study of measuring the economy-wide energy efficiency performance of a sample of OECD countries using the proposed approach is presented. It is found that the proposed parametric frontier approach has higher discriminating power in energy efficiency performance measurement compared to its nonparametric frontier counterparts.

  10. Nonparametric estimation of transition probabilities in the non-Markov illness-death model: A comparative study.

    Science.gov (United States)

    de Uña-Álvarez, Jacobo; Meira-Machado, Luís

    2015-06-01

    Multi-state models are often used for modeling complex event history data. In these models the estimation of the transition probabilities is of particular interest, since they allow for long-term predictions of the process. These quantities have been traditionally estimated by the Aalen-Johansen estimator, which is consistent if the process is Markov. Several non-Markov estimators have been proposed in the recent literature, and their superiority with respect to the Aalen-Johansen estimator has been proved in situations in which the Markov condition is strongly violated. However, the existing estimators have the drawback of requiring that the support of the censoring distribution contains the support of the lifetime distribution, which is not often the case. In this article, we propose two new methods for estimating the transition probabilities in the progressive illness-death model. Some asymptotic results are derived. The proposed estimators are consistent regardless of the Markov condition and the aforementioned assumption about the censoring support. We explore the finite sample behavior of the estimators through simulations. The main conclusion of this piece of research is that the proposed estimators are much more efficient than the existing non-Markov estimators in most cases. An application to a clinical trial on colon cancer is included. Extensions to progressive processes beyond the three-state illness-death model are discussed. © 2015, The International Biometric Society.
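For orientation, the classical Aalen-Johansen estimator that these new methods are compared against can be sketched, for uncensored illness-death data, as a product of discrete transition matrices over the observed event times. This sketch omits the censoring refinements the article is actually about, and the data layout is a made-up convention:

```python
import math
import numpy as np

# States: 0 = healthy, 1 = ill, 2 = dead.  Each subject is (t_ill, t_death),
# with t_ill = inf for a direct 0 -> 2 transition and t_death = inf if alive.
def aalen_johansen(subjects, s, t):
    """Aalen-Johansen estimate of the 3x3 transition matrix P(s, t)."""
    events = sorted({u for ti, td in subjects for u in (ti, td) if s < u <= t})
    P = np.eye(3)
    for u in events:
        y0 = sum(min(ti, td) >= u for ti, td in subjects)       # at risk in state 0
        y1 = sum(ti < u <= td for ti, td in subjects)           # at risk in state 1
        M = np.eye(3)
        if y0 > 0:
            n01 = sum(ti == u for ti, td in subjects)           # 0 -> 1 events at u
            n02 = sum(td == u and ti > td for ti, td in subjects)  # direct deaths at u
            M[0] = [1 - (n01 + n02) / y0, n01 / y0, n02 / y0]
        if y1 > 0:
            n12 = sum(td == u and ti < td for ti, td in subjects)  # 1 -> 2 events at u
            M[1] = [0.0, 1 - n12 / y1, n12 / y1]
        P = P @ M                                               # product-integral step
    return P
```

Without censoring, the estimated state-occupation probabilities `P[0]` simply reproduce the empirical proportions in each state, which is why the estimator's behavior only becomes delicate under censoring and non-Markov dynamics.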

  11. Non-parametric identification of multivariable systems : a local rational modeling approach with application to a vibration isolation benchmark

    NARCIS (Netherlands)

    Voorhoeve, R.J.; van der Maas, A.; Oomen, T.A.J.

    2018-01-01

    Frequency response function (FRF) identification is often used as a basis for control systems design and as a starting point for subsequent parametric system identification. The aim of this paper is to develop a multiple-input multiple-output (MIMO) local parametric modeling approach for FRF identification.

  12. Reconfiguring frontier spaces

    DEFF Research Database (Denmark)

    Rasmussen, Mattias Borg; Lund, Christian

    2018-01-01

    The expansion of capitalism produces contests over the definition and control of resources. On a global scale, new patterns of resource exploration, extraction, and commodification create new territories. This takes place within a dynamic of frontiers and territorialization. Frontier dynamics...

  13. Weak Disposability in Nonparametric Production Analysis with Undesirable Outputs

    NARCIS (Netherlands)

    Kuosmanen, T.K.

    2005-01-01

    Weak disposability of outputs means that firms can abate harmful emissions by decreasing the activity level. Modeling weak disposability in nonparametric production analysis has caused some confusion.

  14. Robustifying Bayesian nonparametric mixtures for count data.

    Science.gov (United States)

    Canale, Antonio; Prünster, Igor

    2017-03-01

    Our motivating application stems from surveys of natural populations and is characterized by large spatial heterogeneity in the counts, which makes parametric approaches to modeling local animal abundance too restrictive. We adopt a Bayesian nonparametric approach based on mixture models and innovate with respect to the popular Dirichlet process mixture of Poisson kernels by increasing the model flexibility at the level of both the kernel and the nonparametric mixing measure. This allows us to derive accurate and robust estimates of the distribution of local animal abundance and of the corresponding clusters. The application and a simulation study for different scenarios also yield some general methodological implications. Adding flexibility solely at the level of the mixing measure does not improve inferences, since its impact is severely limited by the rigidity of the Poisson kernel, with considerable consequences in terms of bias. However, once a kernel more flexible than the Poisson is chosen, inferences can be robustified by choosing a prior more general than the Dirichlet process. Therefore, to improve the performance of Bayesian nonparametric mixtures for count data one has to enrich the model simultaneously at both levels, the kernel and the mixing measure. © 2016, The International Biometric Society.

  15. A Cobb Douglas stochastic frontier model on measuring domestic bank efficiency in Malaysia.

    Science.gov (United States)

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The banking system plays an important role in the economic development of any country. Domestic banks, which are the main components of the banking system, have to be efficient; otherwise, they may create obstacles to the process of development in any economy. This study examines the technical efficiency of the Malaysian domestic banks listed on the Kuala Lumpur Stock Exchange (KLSE) market over the period 2005-2010. A parametric approach, the Stochastic Frontier Approach (SFA), is used in this analysis. The findings show that Malaysian domestic banks have exhibited an average overall efficiency of 94 percent, implying that the sample banks have wasted an average of 6 percent of their inputs. Among the banks, RHBCAP is found to be highly efficient with a score of 0.986 and PBBANK is noted to have the lowest efficiency with a score of 0.918. The results also show that the level of efficiency has increased during the period of study, and that the technical efficiency effect has fluctuated considerably over time.
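The SFA machinery behind such efficiency scores can be sketched as maximum likelihood for a log-linear (Cobb-Douglas) frontier with a normal/half-normal composed error. This cross-sectional sketch is a simplification of the panel specification typically used in bank studies, and the simulated data in the usage below are made up:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(theta, X, y):
    """Normal/half-normal frontier: y = X @ b + v - u,
    v ~ N(0, s_v^2), u ~ |N(0, s_u^2)| (Aigner-Lovell-Schmidt likelihood)."""
    b, s_v, s_u = theta[:-2], np.exp(theta[-2]), np.exp(theta[-1])
    sigma, lam = np.hypot(s_v, s_u), s_u / s_v
    eps = y - X @ b
    ll = (np.log(2.0) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

def fit_frontier(X, y):
    """MLE starting from OLS; standard deviations are log-parametrized."""
    b0 = np.linalg.lstsq(X, y, rcond=None)[0]
    s0 = np.log((y - X @ b0).std())
    theta0 = np.r_[b0, s0, s0]
    return minimize(neg_loglik, theta0, args=(X, y), method="BFGS").x
```

Firm-level technical efficiency then follows from the composed residuals via the Jondrow et al. conditional mean, TE_i = exp(-E[u_i | eps_i]), which is how scores such as the 0.918-0.986 range above are produced.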

  16. Fundamental Symmetries of the Early Universe and the Precision Frontier

    International Nuclear Information System (INIS)

    Ramsey-Musolf, Michael J.

    2009-01-01

    The search for the next Standard Model of fundamental interactions is being carried out at two frontiers: the high energy frontier involving the Tevatron and Large Hadron Collider, and the high precision frontier where the focus is largely on low energy experiments. I discuss the unique and powerful window on new physics provided by the precision frontier and its complementarity to the information we hope to gain from present and future colliders.

  17. SPECIES-SPECIFIC FOREST VARIABLE ESTIMATION USING NON-PARAMETRIC MODELING OF MULTI-SPECTRAL PHOTOGRAMMETRIC POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    J. Bohlin

    2012-07-01

    The recent development in software for automatic photogrammetric processing of multispectral aerial imagery, and the growing nation-wide availability of Digital Elevation Model (DEM) data, are about to revolutionize data capture for forest management planning in Scandinavia. Using only already available aerial imagery and ALS-assessed DEM data, raster estimates of the forest variables mean tree height, basal area, total stem volume, and species-specific stem volumes were produced and evaluated. The study was conducted at a coniferous hemi-boreal test site in southern Sweden (lat. 58° N, long. 13° E). Digital aerial images from the Zeiss/Intergraph Digital Mapping Camera system were used to produce 3D point-cloud data with spectral information. Metrics were calculated for 696 field plots (10 m radius) from point-cloud data and used in k-MSN to estimate forest variables. For these stands, the tree height ranged from 1.4 to 33.0 m (18.1 m mean), stem volume from 0 to 829 m³ ha⁻¹ (249 m³ ha⁻¹ mean) and basal area from 0 to 62.2 m² ha⁻¹ (26.1 m² ha⁻¹ mean), with a mean stand size of 2.8 ha. Estimates made using digital aerial images corresponding to the standard acquisition of the Swedish National Land Survey (Lantmäteriet) showed RMSEs (in percent of the surveyed stand mean) of 7.5% for tree height, 11.4% for basal area, 13.2% for total stem volume, 90.6% for pine stem volume, 26.4% for spruce stem volume, and 72.6% for deciduous stem volume. The results imply that photogrammetric matching of digital aerial images has significant potential for operational use in forestry.

  18. Decision support using nonparametric statistics

    CERN Document Server

    Beatty, Warren

    2018-01-01

    This concise volume covers the nonparametric statistics topics that are most likely to be seen and used from a practical decision support perspective. While many degree programs require a course in parametric statistics, these methods are often inadequate for real-world decision making in business environments. Much of the data collected today by business executives (for example, customer satisfaction opinions) requires nonparametric statistics for valid analysis, and this book provides the reader with a set of tools that can be used to validly analyze all data, regardless of type. Through numerous examples and exercises, this book explains why nonparametric statistics will lead to better decisions and how they are used to reach a decision, with a wide array of business applications. Online resources include exercise data, spreadsheets, and solutions.

  19. Testing discontinuities in nonparametric regression

    KAUST Repository

    Dai, Wenlin

    2017-01-19

    In nonparametric regression, one often needs to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method of H.-G. Müller and U. Stadtmüller, Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337 (doi: 10.1214/aos/1018031100)...

  20. Testing discontinuities in nonparametric regression

    KAUST Repository

    Dai, Wenlin; Zhou, Yuejin; Tong, Tiejun

    2017-01-01

    In nonparametric regression, one often needs to detect whether there are jump discontinuities in the mean function. In this paper, we revisit the difference-based method of H.-G. Müller and U. Stadtmüller, Discontinuous versus smooth regression, Ann. Stat. 27 (1999), pp. 299–337 (doi: 10.1214/aos/1018031100)...

  1. Nonparametric factor analysis of time series

    OpenAIRE

    Rodríguez-Poo, Juan M.; Linton, Oliver Bruce

    1998-01-01

    We introduce a nonparametric smoothing procedure for nonparametric factor analysis of multivariate time series. The asymptotic properties of the proposed procedures are derived. We present an application based on the residuals from the Fair macromodel.

  2. Bayesian Nonparametric Clustering for Positive Definite Matrices.

    Science.gov (United States)

    Cherian, Anoop; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2016-05-01

    Symmetric Positive Definite (SPD) matrices emerge as data descriptors in several applications of computer vision such as object tracking, texture recognition, and diffusion tensor imaging. Clustering these data matrices forms an integral part of these applications, for which soft-clustering algorithms (K-Means, expectation maximization, etc.) are generally used. As is well-known, these algorithms need the number of clusters to be specified, which is difficult when the dataset scales. To address this issue, we resort to the classical nonparametric Bayesian framework by modeling the data as a mixture model using the Dirichlet process (DP) prior. Since these matrices do not conform to Euclidean geometry, but rather belong to a curved Riemannian manifold, existing DP models cannot be directly applied. Thus, in this paper, we propose a novel DP mixture model framework for SPD matrices. Using the log-determinant divergence as the underlying dissimilarity measure to compare these matrices, and further using the connection between this measure and the Wishart distribution, we derive a novel DPM model based on the Wishart-Inverse-Wishart conjugate pair. We apply this model to several applications in computer vision. Our experiments demonstrate that our model is scalable to the dataset size and at the same time achieves superior accuracy compared to several state-of-the-art parametric and nonparametric clustering algorithms.
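The log-determinant divergence used above as the dissimilarity measure is straightforward to compute; a minimal sketch for SPD matrices:

```python
import numpy as np

def logdet_divergence(X, Y):
    """Burg / log-determinant divergence between SPD matrices:
    D(X, Y) = tr(X Y^{-1}) - log det(X Y^{-1}) - n."""
    XYinv = X @ np.linalg.inv(Y)
    _, logdet = np.linalg.slogdet(XYinv)   # stable log-determinant
    return np.trace(XYinv) - logdet - X.shape[0]
```

It is asymmetric in its arguments and vanishes exactly when X = Y (each eigenvalue term λ − ln λ − 1 is nonnegative), which is what makes it usable as a dissimilarity on the SPD manifold and ties it to the Wishart likelihood mentioned above.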

  3. Tectonic heat flow modelling for basin maturation - Implications for frontier areas in the mediterranean

    NARCIS (Netherlands)

    Wees, J.D. van; Bonte, D.; Nelskamp, S.

    2009-01-01

    Basement heat flow is one of the most influential parameters on basin maturity. Although rapid progress has been made in the development of tectonic models capable of modelling the thermal consequences of basin formation, these models are hardly used in basin modelling. To better predict heat flows

  4. Nonparametric Inference for Periodic Sequences

    KAUST Repository

    Sun, Ying

    2012-02-01

    This article proposes a nonparametric method for estimating the period and values of a periodic sequence when the data are evenly spaced in time. The period is estimated by a "leave-out-one-cycle" version of cross-validation (CV) and complements the periodogram, a widely used tool for period estimation. The CV method is computationally simple and implicitly penalizes multiples of the smallest period, leading to a "virtually" consistent estimator of integer periods. This estimator is investigated both theoretically and by simulation. We also propose a nonparametric test of the null hypothesis that the data have constant mean against the alternative that the sequence of means is periodic. Finally, our methodology is demonstrated on three well-known time series: the sunspots and lynx trapping data, and the El Niño series of sea surface temperatures. © 2012 American Statistical Association and the American Society for Quality.
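The leave-out-one-cycle idea can be sketched directly: for a candidate integer period p, predict each observation by the average of the observations in other cycles sharing its phase, and score candidates by the resulting squared error. This is a minimal sketch of the idea, not the article's full procedure:

```python
import numpy as np

def cv_score(x, p):
    """Leave-out-one-cycle CV score for candidate integer period p."""
    n, err = len(x), 0.0
    for i in range(n):
        idx = np.arange(i % p, n, p)   # observations sharing i's phase
        others = idx[idx != i]         # each cycle holds one observation per phase,
        if len(others) == 0:           # so dropping i drops i's whole cycle here
            return np.inf
        err += (x[i] - x[others].mean()) ** 2
    return err / n

def estimate_period(x, max_p):
    """Return the candidate period with the smallest CV score."""
    return min(range(1, max_p + 1), key=lambda p: cv_score(x, p))
```

Multiples of the true period also fit the cycle means, but average over fewer cycles per phase and so incur a larger leave-out error, which is the implicit penalty the abstract refers to.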

  5. Nonparametric predictive inference in reliability

    International Nuclear Information System (INIS)

    Coolen, F.P.A.; Coolen-Schrijner, P.; Yan, K.J.

    2002-01-01

    We introduce a recently developed statistical approach, called nonparametric predictive inference (NPI), to reliability. Bounds for the survival function for a future observation are presented. We illustrate how NPI can deal with right-censored data, and discuss aspects of competing risks. We present possible applications of NPI for Bernoulli data, and we briefly outline applications of NPI for replacement decisions. The emphasis is on introduction and illustration of NPI in reliability contexts; detailed mathematical justifications are presented elsewhere.

  6. Decompounding random sums: A nonparametric approach

    DEFF Research Database (Denmark)

    Hansen, Martin Bøgsted; Pitts, Susan M.

    Observations from sums of random variables with a random number of summands, known as random, compound or stopped sums, arise within many areas of engineering and science. Quite often it is desirable to infer properties of the distribution of the terms in the random sum. In the present paper we review a number of applications and consider the nonlinear inverse problem of inferring the cumulative distribution function of the components in the random sum. We review the existing literature on non-parametric approaches to the problem. The models amenable to the analysis are generalized considerably...

  7. A nonparametric spatial scan statistic for continuous data.

    Science.gov (United States)

    Jung, Inkyung; Cho, Ho Jin

    2015-10-20

    Spatial scan statistics are widely used for spatial cluster detection, and several parametric models exist. For continuous data, a normal-based scan statistic can be used. However, the performance of the model has not been fully evaluated for non-normal data. We propose a nonparametric spatial scan statistic based on the Wilcoxon rank-sum test statistic and compare the performance of the method with parametric models via a simulation study under various scenarios. The nonparametric method outperforms the normal-based scan statistic in terms of power and accuracy in almost all cases under consideration in the simulation study. The proposed nonparametric spatial scan statistic is therefore an excellent alternative to the normal model for continuous data and is especially useful for data following skewed or heavy-tailed distributions.
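A sketch of the core computation: scan circular windows and keep the one whose standardized Wilcoxon rank-sum statistic (inside vs. outside the window) is most extreme. This is a simplified stand-in for the proposed statistic, it assumes no ties in the values, and the grid data in the test are made up:

```python
import numpy as np
from scipy.stats import rankdata

def wilcoxon_scan(coords, values, radii):
    """Scan circular windows; return the (center_index, radius) whose
    inside-vs-outside Wilcoxon rank-sum statistic is most extreme."""
    n = len(values)
    ranks = rankdata(values)                            # overall ranks (no ties assumed)
    best, best_z = None, -np.inf
    for c in range(n):
        dist = np.linalg.norm(coords - coords[c], axis=1)
        for r in radii:
            inside = dist <= r
            m = int(inside.sum())
            if m == 0 or m == n:
                continue
            w = ranks[inside].sum()                     # rank sum inside the window
            mu = m * (n + 1) / 2.0                      # null mean of the rank sum
            sd = np.sqrt(m * (n - m) * (n + 1) / 12.0)  # null s.d. (tie-free formula)
            z = abs(w - mu) / sd
            if z > best_z:
                best_z, best = z, (c, r)
    return best, best_z
```

Significance would then be assessed by re-running the scan on random permutations of the values over locations and comparing the observed maximum to that permutation distribution, as is standard for scan statistics.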

  8. Risk Prediction Models in Psychiatry: Toward a New Frontier for the Prevention of Mental Illnesses.

    Science.gov (United States)

    Bernardini, Francesco; Attademo, Luigi; Cleary, Sean D; Luther, Charles; Shim, Ruth S; Quartesan, Roberto; Compton, Michael T

    2017-05-01

    We conducted a systematic, qualitative review of risk prediction models designed and tested for depression, bipolar disorder, generalized anxiety disorder, posttraumatic stress disorder, and psychotic disorders. Our aim was to understand the current state of research on risk prediction models for these 5 disorders and thus future directions as our field moves toward embracing prediction and prevention. Systematic searches of the entire MEDLINE electronic database were conducted independently by 2 of the authors (from 1960 through 2013) in July 2014 using defined search criteria. Search terms included risk prediction, predictive model, or prediction model combined with depression, bipolar, manic depressive, generalized anxiety, posttraumatic, PTSD, schizophrenia, or psychosis. We identified 268 articles based on the search terms and 3 criteria: published in English, provided empirical data (as opposed to review articles), and presented results pertaining to developing or validating a risk prediction model in which the outcome was the diagnosis of 1 of the 5 aforementioned mental illnesses. We selected 43 original research reports as a final set of articles to be qualitatively reviewed. The 2 independent reviewers abstracted 3 types of data (sample characteristics, variables included in the model, and reported model statistics) and reached consensus regarding any discrepant abstracted information. Twelve reports described models developed for prediction of major depressive disorder, 1 for bipolar disorder, 2 for generalized anxiety disorder, 4 for posttraumatic stress disorder, and 24 for psychotic disorders. Most studies reported on sensitivity, specificity, positive predictive value, negative predictive value, and area under the (receiver operating characteristic) curve. Recent studies demonstrate the feasibility of developing risk prediction models for psychiatric disorders (especially psychotic disorders). The field must now advance by (1) conducting more large...

  9. GalMod: the last frontier of Galaxy population synthesis models

    Science.gov (United States)

    Pasetto, Stefano; Kollmeier, Juna; Grebel, Eva K.; Chiosi, Cesare

    2018-01-01

    We present a novel Galaxy population synthesis model: GalMod (Pasetto et al. 2016, 2017a,b) is the only star-count model featuring an asymmetric bar/bulge as well as spiral arms, obtained directly by applying linear perturbation theory to self-consistent distribution functions of the Galaxy's stellar populations. Compared to previous models in the literature (e.g., Besançon, Trilegal), GalMod can generate full-sky mock catalogs and M31 surveys, and provides a better match to observed Milky Way (MW) stellar fields. The model can generate synthetic mock catalogs of visible portions of the MW, of external galaxies like M31, or N-body simulation initial conditions. At any given time, e.g., a chosen age of the Galaxy, the model contains a sum of discrete stellar populations, namely bulge/bar, disk, and halo. The disk population is itself the sum of subpopulations: spiral arms, thin disk, thick disk, and a gas component, while the halo is modeled as the sum of a stellar component, a hot coronal gas, and a dark matter component. The Galactic potential is computed from these subpopulations' density profiles and used to generate detailed kinematics by considering the first few moments of the collisionless Boltzmann equation for all the stellar subpopulations. The same density profiles are then used to define the observed color-magnitude diagrams within an input field of view from an arbitrary solar location. Several photometric systems have been included and made available on-line, e.g., SDSS, Gaia, 2MASS, HST WFC3, and others. Finally, we model the extinction with advanced ray-tracing solutions. The model's web page (and tutorial) can be accessed at www.GalMod.org.

  10. Estimation of stochastic frontier models with fixed-effects through Monte Carlo Maximum Likelihood

    NARCIS (Netherlands)

    Emvalomatis, G.; Stefanou, S.E.; Oude Lansink, A.G.J.M.

    2011-01-01

    Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are

  11. A literature review on business process modelling: new frontiers of reusability

    Science.gov (United States)

    Aldin, Laden; de Cesare, Sergio

    2011-08-01

    Business process modelling (BPM) has become fundamental for modern enterprises due to the increasing rate of organisational change. As a consequence, business processes need to be continuously (re-)designed as well as subsequently aligned with the corresponding enterprise information systems. One major problem associated with the design of business processes is reusability. Reuse of business process models has the potential of increasing the efficiency and effectiveness of BPM. This article critically surveys the existing literature on the problem of BPM reusability and more specifically on the state-of-the-art research that can provide or suggest the 'elements' required for the development of a methodology aimed at discovering reusable conceptual artefacts in the form of patterns. The article initially clarifies the definitions of business process and business process model; then, it sets out to explore the previous research conducted in areas that have an impact on reusability in BPM. The article concludes by distilling directions for future research towards the development of a patterns-based approach to BPM; an approach that brings together the contributions made by the research community in the areas of process mining and discovery, declarative approaches and ontologies.

  12. Modeling the tumor extracellular matrix: Tissue engineering tools repurposed towards new frontiers in cancer biology.

    Science.gov (United States)

    Gill, Bartley J; West, Jennifer L

    2014-06-27

    Cancer progression is mediated by complex epigenetic, protein and structural influences. Critical among them are the biochemical, mechanical and architectural properties of the extracellular matrix (ECM). In recognition of the ECM's important role, cancer biologists have repurposed matrix mimetic culture systems first widely used by tissue engineers as new tools for in vitro study of tumor models. In this review we discuss the pathological changes in tumor ECM, the limitations of 2D culture on both traditional and polyacrylamide hydrogel surfaces in modeling these characteristics and advances in both naturally derived and synthetic scaffolds to facilitate more complex and controllable 3D cancer cell culture. Studies using naturally derived matrix materials like Matrigel and collagen have produced significant findings related to tumor morphogenesis and matrix invasion in a 3D environment and the mechanotransductive signaling that mediates key tumor-matrix interaction. However, the lack of precise experimental control over important matrix factors in these materials has increasingly led investigators to synthetic and semi-synthetic scaffolds that offer the engineering of specific ECM cues and the potential for more advanced experimental manipulations. Synthetic scaffolds composed of poly(ethylene glycol) (PEG), for example, facilitate highly biocompatible 3D culture, modular bioactive features like cell-mediated matrix degradation and complete independent control over matrix bioactivity and mechanics. Future work in PEG or similar reductionist synthetic matrix systems should enable the study of increasingly complex and dynamic tumor-ECM relationships in the hopes that accurate modeling of these relationships may reveal new cancer therapeutics targeting tumor progression and metastasis. © 2013 Published by Elsevier Ltd.

  13. Nonparametric identification of copula structures

    KAUST Repository

    Li, Bo

    2013-06-01

    We propose a unified framework for testing a variety of assumptions commonly made about the structure of copulas, including symmetry, radial symmetry, joint symmetry, associativity and Archimedeanity, and max-stability. Our test is nonparametric and based on the asymptotic distribution of the empirical copula process. We perform simulation experiments to evaluate our test and conclude that our method is reliable and powerful for assessing common assumptions on the structure of copulas, particularly when the sample size is moderately large. We illustrate our testing approach on two datasets. © 2013 American Statistical Association.
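As an illustration of the kind of hypothesis involved, exchangeability (symmetry) can be checked by comparing the empirical copula with its transpose. The sup-distance sketch below is a simplified stand-in for the actual test, whose critical values come from the asymptotic distribution of the empirical copula process rather than from a fixed threshold:

```python
import numpy as np

def empirical_copula(x, y, u, v):
    """C_n(u, v) evaluated from the pseudo-observations (ranks / (n + 1))."""
    n = len(x)
    pu = (np.argsort(np.argsort(x)) + 1.0) / (n + 1)   # pseudo-observations of x
    pv = (np.argsort(np.argsort(y)) + 1.0) / (n + 1)   # pseudo-observations of y
    return np.mean((pu <= u) & (pv <= v))

def symmetry_discrepancy(x, y, grid=20):
    """max |C_n(u, v) - C_n(v, u)| over a grid: 0 under exchangeability."""
    us = np.linspace(0.05, 0.95, grid)
    return max(abs(empirical_copula(x, y, u, v) - empirical_copula(x, y, v, u))
               for u in us for v in us)
```

A comonotone sample (y a monotone function of x) gives a discrepancy of exactly zero, while a sample whose ranks are circularly shifted has a visibly non-exchangeable copula and gives a large one. (The sketch recomputes ranks at every grid point for clarity; a serious implementation would cache them.)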

  14. On the critical frontiers of Potts ferromagnets

    International Nuclear Information System (INIS)

    Magalhaes, A.C.N. de; Tsallis, C.

    1981-01-01

    A conjecture concerning the critical frontiers of q-state Potts ferromagnets on d-dimensional lattices (d > 1), which generalizes a recent one stated for planar lattices, is formulated. The present conjecture is verified within satisfactory accuracy (exactly in some cases) for all the lattices or arrays whose critical points are known. Its use leads to the prediction of: a) a considerable amount of new approximate critical points (26 on non-planar regular lattices, some others on Husimi trees and cacti); b) approximate critical frontiers for some 3-dimensional lattices; c) the possibly asymptotically exact critical point on regular lattices in the limit d → ∞ for all q ≥ 1; d) the possibly exact critical frontier for the pure Potts model on fully anisotropic Bethe lattices; e) the possibly exact critical frontier for the general quenched random-bond Potts ferromagnet (any P(J)) on isotropic Bethe lattices.

  15. The Stellar Initial Mass Function in Early-type Galaxies from Absorption Line Spectroscopy. IV. A Super-Salpeter IMF in the Center of NGC 1407 from Non-parametric Models

    Energy Technology Data Exchange (ETDEWEB)

    Conroy, Charlie [Department of Astronomy, Harvard University, Cambridge, MA, 02138 (United States); Van Dokkum, Pieter G. [Department of Astronomy, Yale University, New Haven, CT, 06511 (United States); Villaume, Alexa [Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States)

    2017-03-10

    It is now well-established that the stellar initial mass function (IMF) can be determined from the absorption line spectra of old stellar systems, and this has been used to measure the IMF and its variation across the early-type galaxy population. Previous work focused on measuring the slope of the IMF over one or more stellar mass intervals, implicitly assuming that this is a good description of the IMF and that the IMF has a universal low-mass cutoff. In this work we consider more flexible IMFs, including two-component power laws with a variable low-mass cutoff and a general non-parametric model. We demonstrate with mock spectra that the detailed shape of the IMF can be accurately recovered as long as the data quality is high (S/N ≳ 300 Å⁻¹) and covers a wide wavelength range (0.4–1.0 μm). We apply these flexible IMF models to a high-S/N spectrum of the center of the massive elliptical galaxy NGC 1407. Fitting the spectrum with non-parametric IMFs, we find that the IMF in the center shows a continuous rise extending toward the hydrogen-burning limit, with a behavior that is well-approximated by a power law with an index of −2.7. These results provide strong evidence for the existence of extreme (super-Salpeter) IMFs in the cores of massive galaxies.

  16. Nonparametric inference of network structure and dynamics

    Science.gov (United States)

    Peixoto, Tiago P.

    The network structure of complex systems determines their function and serves as evidence for the evolutionary mechanisms that lie behind them. Despite considerable effort in recent years, it remains an open challenge to formulate general descriptions of the large-scale structure of network systems, and how to reliably extract such information from data. Although many approaches have been proposed, few methods attempt to gauge the statistical significance of the uncovered structures, and hence the majority cannot reliably separate actual structure from stochastic fluctuations. Due to the sheer size and high-dimensionality of many networks, this represents a major limitation that prevents meaningful interpretations of the results obtained with such nonstatistical methods. In this talk, I will show how these issues can be tackled in a principled and efficient fashion by formulating appropriate generative models of network structure that can have their parameters inferred from data. By employing a Bayesian description of such models, the inference can be performed in a nonparametric fashion, which does not require any a priori knowledge or ad hoc assumptions about the data. I will show how this approach can be used to perform model comparison, and how hierarchical models yield the most appropriate trade-off between model complexity and quality of fit based on the statistical evidence present in the data. I will also show how this general approach can be elegantly extended to networks with edge attributes, that are embedded in latent spaces, and that change in time. The latter is obtained via a fully dynamic generative network model, based on arbitrary-order Markov chains, that can also be inferred in a nonparametric fashion. Throughout the talk I will illustrate the application of the methods with many empirical networks such as the internet at the autonomous systems level, the global airport network, the network of actors and films, social networks, citations among...

  17. Frontier Scientists use Modern Media

    Science.gov (United States)

    O'connell, E. A.

    2013-12-01

    Engaging Americans and the international community in the excitement and value of Alaskan Arctic discovery is the goal of Frontier Scientists. With a changing climate, the resources of polar regions are being eyed by many nations. Frontier Scientists brings the stories of field scientists in the Far North to the public. With a website, an app, short videos, and social media channels, FS is a model for making connections between the public and field scientists. FS will demonstrate how academia, web content, online communities, evaluation and marketing are brought together in a 21st-century multi-media platform, how scientists can maintain their integrity while engaging in outreach, and how new forms of media such as short videos can entertain as well as inspire.

  18. Nonparametric statistics for social and behavioral sciences

    CERN Document Server

    Kraska-Miller, M

    2013-01-01

    Introduction to Research in Social and Behavioral Sciences; Basic Principles of Research; Planning for Research; Types of Research Designs; Sampling Procedures; Validity and Reliability of Measurement Instruments; Steps of the Research Process; Introduction to Nonparametric Statistics; Data Analysis; Overview of Nonparametric Statistics and Parametric Statistics; Overview of Parametric Statistics; Overview of Nonparametric Statistics; Importance of Nonparametric Methods; Measurement Instruments; Analysis of Data to Determine Association and Agreement; Pearson Chi-Square Test of Association and Independence; Contingency

  19. Bayesian nonparametric system reliability using sets of priors

    NARCIS (Netherlands)

    Walter, G.M.; Aslett, L.J.M.; Coolen, F.P.A.

    2016-01-01

    An imprecise Bayesian nonparametric approach to system reliability with multiple types of components is developed. This allows modelling partial or imperfect prior knowledge on component failure distributions in a flexible way through bounds on the functioning probability. Given component level test

  20. Construction of a smoothed DEA frontier

    Directory of Open Access Journals (Sweden)

    Mello João Carlos Correia Baptista Soares de

    2002-01-01

    Full Text Available It is known that the DEA multipliers model does not allow a unique solution for the weights. This is due to the absence of unique derivatives at the extreme-efficient points, which is a consequence of the piecewise linear nature of the frontier. In this paper we propose a method to solve this problem, consisting of replacing the original DEA frontier with a new one that is smooth (with continuous derivatives at every point) and as close as possible to the original frontier. We present the theoretical development for the general case, exemplified with the particular case of the BCC model with one input and one output. The 3-dimensional problem is briefly discussed. Some uses of the model are summarised, and one of them, a new Cross-Evaluation model, is presented.
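
    As a toy illustration of the ratio logic behind DEA efficiency scores, the following sketch uses a constant-returns, single-input/single-output simplification (not the BCC model treated in the paper); all DMU numbers are invented:

```python
import numpy as np

# Four hypothetical decision-making units (DMUs), one input and one output.
# Under constant returns to scale, each DMU's efficiency is its output/input
# ratio relative to the best observed ratio; the efficient unit(s) define the
# piecewise linear frontier whose kinks motivate the smoothing above.
inputs = np.array([2.0, 4.0, 6.0, 8.0])
outputs = np.array([1.0, 3.0, 4.0, 4.5])

ratios = outputs / inputs
efficiency = ratios / ratios.max()   # DMU with the best ratio scores 1.0
```

    At the efficient unit the frontier has a kink, and the supporting weights (derivatives) are not unique, which is exactly the non-uniqueness the smoothing procedure addresses.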

  1. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.

    Science.gov (United States)

    Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen

    2011-04-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.

  2. Efficient Provision of Employment Service Outputs: A Production Frontier Analysis.

    Science.gov (United States)

    Cavin, Edward S.; Stafford, Frank P.

    1985-01-01

    This article develops a production frontier model for the Employment Service and assesses the relative efficiency of the 51 State Employment Security Agencies in attaining program outcomes close to that frontier. This approach stands in contrast to such established practices as comparing programs to their own previous performance. (Author/CT)

  3. On the Endogeneity of the Mean-Variance Efficient Frontier.

    Science.gov (United States)

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  4. Non-parametric estimation of the individual's utility map

    OpenAIRE

    Noguchi, Takao; Sanborn, Adam N.; Stewart, Neil

    2013-01-01

    Models of risky choice have attracted much attention in behavioural economics. Previous research has repeatedly demonstrated that individuals' choices are not well explained by expected utility theory, and a number of alternative models have been examined using carefully selected sets of choice alternatives. The model performance, however, can depend on which choice alternatives are being tested. Here we develop a non-parametric method for estimating the utility map over the wide range of choi...

  5. Frontiers in fusion research

    CERN Document Server

    Kikuchi, Mitsuru

    2011-01-01

    Frontiers in Fusion Research provides a systematic overview of the latest physical principles of fusion and plasma confinement. It is primarily devoted to the principle of magnetic plasma confinement, which has been systematized through 50 years of fusion research. Frontiers in Fusion Research begins with an introduction to the study of plasma, discussing the astronomical birth of hydrogen energy and the beginnings of human attempts to harness the Sun's energy for use on Earth. It moves on to chapters that cover a variety of topics such as charged particle motion, plasma kinetic theory, …

  6. At the Frontiers of Modeling Intensive Longitudinal Data: Dynamic Structural Equation Models for the Affective Measurements from the COGITO Study.

    Science.gov (United States)

    Hamaker, E L; Asparouhov, T; Brose, A; Schmiedek, F; Muthén, B

    2018-04-06

    With the growing popularity of intensive longitudinal research, the modeling techniques and software options for such data are also expanding rapidly. Here we use dynamic multilevel modeling, as it is incorporated in the new dynamic structural equation modeling (DSEM) toolbox in Mplus, to analyze the affective data from the COGITO study. These data consist of two samples of over 100 individuals each who were measured for about 100 days. We use composite scores of positive and negative affect and apply a multilevel vector autoregressive model to allow for individual differences in means, autoregressions, and cross-lagged effects. Then we extend the model to include random residual variances and covariance, and finally we investigate whether prior depression affects later depression scores through the random effects of the daily diary measures. We end with discussing several urgent-but mostly unresolved-issues in the area of dynamic multilevel modeling.
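
    The core within-person building block of such models is a vector autoregression. The sketch below is a plain single-level VAR(1) with least-squares recovery of the autoregressive and cross-lagged coefficients, not the multilevel Bayesian DSEM of the paper; the coefficient matrix is invented:

```python
import numpy as np

# Simulate a bivariate VAR(1) for, say, daily positive and negative affect,
# then recover the autoregressive (diagonal) and cross-lagged (off-diagonal)
# coefficients by ordinary least squares. Phi is a made-up, stable matrix.
rng = np.random.default_rng(3)
Phi = np.array([[0.5, -0.2],
                [-0.1, 0.4]])
T = 5000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = Phi @ y[t - 1] + rng.normal(0.0, 1.0, 2)

X, Y = y[:-1], y[1:]                              # regress y_t on y_{t-1}
Phi_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T  # rows: equations for y_t
```

    DSEM additionally lets the entries of Phi (and the residual variances) vary randomly across individuals, which is what makes the multilevel extension nontrivial.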

  7. The Climate Adaptation Frontier

    Energy Technology Data Exchange (ETDEWEB)

    Preston, Benjamin L [ORNL

    2013-01-01

    Climate adaptation has emerged as a mainstream risk management strategy for assisting in maintaining socio-ecological systems within the boundaries of a safe operating space. Yet, there are limits to the ability of systems to adapt. Here, we introduce the concept of an 'adaptation frontier', which is defined as a socio-ecological system's transitional adaptive operating space between safe and unsafe domains. A number of driving forces are responsible for determining the sustainability of systems on the frontier. These include path dependence, adaptation/development deficits, values conflicts and discounting of future loss and damage. The cumulative implications of these driving forces are highly uncertain. Nevertheless, the fact that a broad range of systems already persist at the edge of their frontiers suggests a high likelihood that some limits will eventually be exceeded. The resulting system transformation is likely to manifest as anticipatory modification of management objectives or loss and damage. These outcomes vary significantly with respect to their ethical implications. Successful navigation of the adaptation frontier will necessitate new paradigms of risk governance to elicit knowledge that encourages reflexive reevaluation of societal values that enable or constrain sustainability.

  8. The Final Frontier

    DEFF Research Database (Denmark)

    Baron, Christian

    2017-01-01

    in living conditions, where neglect or reckless behavior may have fatal consequences. Exploring the consequences of such behavior in Tom Godwin’s short story ‘The Cold Equations’ (1954) as well as Ridley Scott’s film, Alien (1979), it argues that such ‘frontier situations’ warrant a change in the general...

  9. Bayesian nonparametric adaptive control using Gaussian processes.

    Science.gov (United States)

    Chowdhary, Girish; Kingravi, Hassan A; How, Jonathan P; Vela, Patricio A

    2015-03-01

    Most current model reference adaptive control (MRAC) methods rely on parametric adaptive elements, in which the number of parameters of the adaptive element are fixed a priori, often through expert judgment. An example of such an adaptive element is radial basis function networks (RBFNs), with RBF centers preallocated based on the expected operating domain. If the system operates outside of the expected operating domain, this adaptive element can become noneffective in capturing and canceling the uncertainty, thus rendering the adaptive controller only semiglobal in nature. This paper investigates a Gaussian process-based Bayesian MRAC architecture (GP-MRAC), which leverages the power and flexibility of GP Bayesian nonparametric models of uncertainty. The GP-MRAC does not require the centers to be preallocated, can inherently handle measurement noise, and enables MRAC to handle a broader set of uncertainties, including those that are defined as distributions over functions. We use stochastic stability arguments to show that GP-MRAC guarantees good closed-loop performance with no prior domain knowledge of the uncertainty. Online implementable GP inference methods are compared in numerical simulations against RBFN-MRAC with preallocated centers and are shown to provide better tracking and improved long-term learning.
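
    The nonparametric building block here is Gaussian process regression. A minimal numpy sketch of the GP posterior mean (the quantity a GP-MRAC-style controller would query online) follows; the RBF kernel, lengthscale, noise level, and the tanh "uncertainty" function are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def rbf_kernel(a, b, lengthscale):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior_mean(x_train, y_train, x_query, noise, lengthscale):
    """Posterior mean of a zero-mean GP with RBF kernel and Gaussian noise."""
    K = rbf_kernel(x_train, x_train, lengthscale)
    K += noise * np.eye(len(x_train))        # observation noise / jitter
    K_star = rbf_kernel(x_query, x_train, lengthscale)
    return K_star @ np.linalg.solve(K, y_train)

# Toy unknown function observed on a grid; the GP interpolates it without
# any preallocated centers, unlike a fixed-center RBF network.
x = np.linspace(-3.0, 3.0, 50)
y = np.tanh(x)
mu = gp_posterior_mean(x, y, np.array([0.0, 2.0]), noise=1e-4, lengthscale=0.5)
```

    Unlike an RBFN with preallocated centers, the GP's effective basis grows with the training data, which is why it remains accurate outside a prespecified operating domain.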

  10. Nonparametric Regression Estimation for Multivariate Null Recurrent Processes

    Directory of Open Access Journals (Sweden)

    Biqing Cai

    2015-04-01

    Full Text Available This paper discusses nonparametric kernel regression with the regressor being a \(d\)-dimensional \(\beta\)-null recurrent process in the presence of conditional heteroscedasticity. We show that the mean function estimator is consistent with convergence rate \(\sqrt{n(T)h^{d}}\), where \(n(T)\) is the number of regenerations for a \(\beta\)-null recurrent process, and that the limiting distribution (with proper normalization) is normal. Furthermore, we show that the two-step estimator for the volatility function is consistent. The finite sample performance of the estimate is quite reasonable when the leave-one-out cross validation method is used for bandwidth selection. We apply the proposed method to study the relationship of the Federal funds rate with 3-month and 5-year T-bill rates and discover the existence of nonlinearity in the relationship. Furthermore, the in-sample and out-of-sample performance of the nonparametric model is far better than that of the linear model.
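
    A minimal sketch of the kernel mean-function estimator underlying this kind of analysis (a standard Nadaraya-Watson form with a Gaussian kernel, shown on an ordinary i.i.d. toy sample rather than a null recurrent process; bandwidth and data are illustrative):

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, h):
    """Nadaraya-Watson estimator of the mean function m(x) = E[Y | X = x]
    with a Gaussian kernel and bandwidth h."""
    diffs = (x_query[:, None] - x_train[None, :]) / h
    weights = np.exp(-0.5 * diffs ** 2)          # kernel weights per query
    return (weights @ y_train) / weights.sum(axis=1)

# Toy data: noisy observations of a sine mean function.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0 * np.pi, 500)
y = np.sin(x) + rng.normal(0.0, 0.2, 500)
x_grid = np.array([np.pi / 2, np.pi, 3 * np.pi / 2])
y_hat = nadaraya_watson(x, y, x_grid, h=0.3)     # approx. [1, 0, -1]
```

    The bandwidth h plays the role taken by leave-one-out cross validation in the paper; too small a bandwidth overfits the noise, too large a bandwidth smooths away the nonlinearity.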

  11. Nonparametric instrumental regression with non-convex constraints

    International Nuclear Information System (INIS)

    Grasmair, M; Scherzer, O; Vanhems, A

    2013-01-01

    This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition. (paper)

  12. Nonparametric instrumental regression with non-convex constraints

    Science.gov (United States)

    Grasmair, M.; Scherzer, O.; Vanhems, A.

    2013-03-01

    This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition.

  13. International Conference on Robust Rank-Based and Nonparametric Methods

    CERN Document Server

    McKean, Joseph

    2016-01-01

    The contributors to this volume include many of the distinguished researchers in this area. Many of these scholars have collaborated with Joseph McKean to develop underlying theory for these methods, obtain small sample corrections, and develop efficient algorithms for their computation. The papers cover the scope of the area, including robust nonparametric rank-based procedures through Bayesian and big data rank-based analyses. Areas of application include biostatistics and spatial areas. Over the last 30 years, robust rank-based and nonparametric methods have developed considerably. These procedures generalize traditional Wilcoxon-type methods for one- and two-sample location problems. Research into these procedures has culminated in complete analyses for many of the models used in practice including linear, generalized linear, mixed, and nonlinear models. Settings are both multivariate and univariate. With the development of R packages in these areas, computation of these procedures is easily shared with r...

  14. Nonparametric Efficiency Testing of Asian Stock Markets Using Weekly Data

    OpenAIRE

    CORNELIS A. LOS

    2004-01-01

    The efficiency of speculative markets, as represented by Fama's 1970 fair game model, is tested on weekly price index data of six Asian stock markets - Hong Kong, Indonesia, Malaysia, Singapore, Taiwan and Thailand - using Sherry's (1992) non-parametric methods. These scientific testing methods were originally developed to analyze the information processing efficiency of nervous systems. In particular, the stationarity and independence of the price innovations are tested over ten years, from ...

  15. Screen Wars, Star Wars, and Sequels: Nonparametric Reanalysis of Movie Profitability

    OpenAIRE

    W. D. Walls

    2012-01-01

    In this paper we use nonparametric statistical tools to quantify motion-picture profit. We quantify the unconditional distribution of profit, the distribution of profit conditional on stars and sequels, and we also model the conditional expectation of movie profits using a non-parametric data-driven regression model. The flexibility of the non-parametric approach accommodates the full range of possible relationships among the variables without prior specification of a functional form, thereb...

  16. Frontiers in Superconducting Materials

    CERN Document Server

    Narlikar, Anant V

    2005-01-01

    Frontiers in Superconducting Materials gives a state-of-the-art report of the most important topics of the current research in superconductive materials and related phenomena. It comprises 30 chapters written by renowned international experts in the field. It is of central interest to researchers and specialists in Physics and Materials Science, both in academic and industrial research, as well as advanced students. It also addresses electronic and electrical engineers. Even non-specialists interested in superconductivity might find some useful answers.

  17. PARTICLE BEAMS: Frontier course

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    Driven by the quest for higher energies and optimal physics conditions, the behaviour of particle beams in accelerators and storage rings is the subject of increasing attention. Thus the second course organized jointly by the US and CERN Accelerator Schools looked towards the frontiers of particle beam knowledge. The programme held at South Padre Island, Texas, from 23-29 October attracted 125 participants including some 35 from Europe

  18. PARTICLE BEAMS: Frontier course

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1987-01-15

    Driven by the quest for higher energies and optimal physics conditions, the behaviour of particle beams in accelerators and storage rings is the subject of increasing attention. Thus the second course organized jointly by the US and CERN Accelerator Schools looked towards the frontiers of particle beam knowledge. The programme held at South Padre Island, Texas, from 23-29 October attracted 125 participants including some 35 from Europe.

  19. About pioneer frontiers

    Directory of Open Access Journals (Sweden)

    Hervé Théry

    2014-09-01

    Full Text Available The geographer Pierre Monbeig wrote texts far ahead of his time, which deserve to be read today because they remain useful for understanding today's pioneer frontiers. These are nowadays much further north than in his time, in the Amazon, contested between advocates of environmental protection and the production of meat and grains, which has appeared on the southern flank of the Brazilian Amazon, in Mato Grosso.

  20. Frontiers in Gold Chemistry

    OpenAIRE

    Ahmed A. Mohamed

    2015-01-01

    Basic chemistry of gold tells us that it can bond to sulfur, phosphorous, nitrogen, and oxygen donor ligands. The Frontiers in Gold Chemistry Special Issue covers gold complexes bonded to the different donors and their fascinating applications. This issue covers both basic chemistry studies of gold complexes and their contemporary applications in medicine, materials chemistry, and optical sensors. There is a strong belief that aurophilicity plays a major role in the unending applications of g...

  1. Prairie, gold and frontier

    International Nuclear Information System (INIS)

    Chirico, S.

    2005-01-01

    This work deals with the mining history of the region of Cunapiru, Uruguay. Its process developed within a rural world, and in this respect it is not very different from other prairies in similar geographic zones. Nevertheless, the fact of being a frontier territory makes it singular, different, and peculiar enough to transform this prairie deeply. Memories of times of prosperity nurture a centenarian illusion of magnified facts of inexact dating or significance. However, all those memories were essential to collective identity.

  2. Nonparametric Change Point Diagnosis Method of Concrete Dam Crack Behavior Abnormality

    Directory of Open Access Journals (Sweden)

    Zhanchao Li

    2013-01-01

    Full Text Available The study of diagnosis methods for concrete dam crack behavior abnormality has always been a hot spot, and a difficulty, in the safety monitoring of hydraulic structures. Based on how concrete dam crack behavior abnormality manifests in parametric and nonparametric statistical models, the internal relation between concrete dam crack behavior abnormality and statistical change point theory is analyzed in depth, from the model structure instability of the parametric statistical model and the change of the sequence distribution law of the nonparametric statistical model. On this basis, through the reduction of the change point problem, the establishment of a basic nonparametric change point model, and asymptotic analysis of the test method for the basic change point problem, a nonparametric change point diagnosis method for concrete dam crack behavior abnormality is created, taking into account that in practice concrete dam crack behavior may have multiple abnormality points. The method is applied to an actual project, demonstrating its effectiveness and scientific reasonableness. It has a complete theoretical basis and strong practicality, with broad application prospects in actual projects.
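
    To illustrate the nonparametric change point idea in its simplest form, here is a generic rank-based (Mann-Whitney-type) scan for a single change point; it is not the specific diagnosis method of the paper, and the simulated monitoring series is invented:

```python
import numpy as np

def rank_scan_changepoint(x, margin=5):
    """Scan for a single change point: at each candidate split, compare the
    rank sum of the left segment with its null mean/variance (a Mann-Whitney
    style statistic) and return the split with the largest |z|-score."""
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1        # ranks 1..n of the series
    best_t, best_z = margin, 0.0
    for t in range(margin, n - margin):
        s = ranks[:t].sum()
        mean = t * (n + 1) / 2.0
        var = t * (n - t) * (n + 1) / 12.0
        z = abs(s - mean) / np.sqrt(var)
        if z > best_z:
            best_t, best_z = t, z
    return best_t, best_z

# A mean shift of two standard deviations after observation 120.
rng = np.random.default_rng(7)
series = np.concatenate([rng.normal(0, 1, 120), rng.normal(2, 1, 80)])
t_hat, z = rank_scan_changepoint(series)
```

    Because only ranks are used, the scan needs no assumption about the distribution of the monitoring sequence, which is the advantage the nonparametric formulation is after; handling multiple change points requires recursing on the segments or a dedicated multi-change-point test.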

  3. Comparing nonparametric Bayesian tree priors for clonal reconstruction of tumors.

    Science.gov (United States)

    Deshwar, Amit G; Vembu, Shankar; Morris, Quaid

    2015-01-01

    Statistical machine learning methods, especially nonparametric Bayesian methods, have become increasingly popular to infer clonal population structure of tumors. Here we describe the treeCRP, an extension of the Chinese restaurant process (CRP), a popular construction used in nonparametric mixture models, to infer the phylogeny and genotype of major subclonal lineages represented in the population of cancer cells. We also propose new split-merge updates tailored to the subclonal reconstruction problem that improve the mixing time of Markov chains. In comparisons with the tree-structured stick breaking prior used in PhyloSub, we demonstrate superior mixing and running time using the treeCRP with our new split-merge procedures. We also show that given the same number of samples, TSSB and treeCRP have similar ability to recover the subclonal structure of a tumor…
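
    The Chinese restaurant process prior mentioned above can be sampled in a few lines. This is a generic CRP sketch (not the treeCRP extension); n and alpha are arbitrary:

```python
import random

def sample_crp(n, alpha, seed=0):
    """Sample a partition of n items from a Chinese restaurant process.
    Customer i joins an occupied table with probability proportional to its
    size, or opens a new table with probability proportional to alpha."""
    rng = random.Random(seed)
    tables = []                      # tables[k] = customers seated at table k
    assignments = []
    for i in range(n):
        r = rng.uniform(0.0, i + alpha)
        acc = 0.0
        k = len(tables)              # default: open a new table
        for j, size in enumerate(tables):
            acc += size
            if r <= acc:
                k = j
                break
        if k == len(tables):
            tables.append(1)
        else:
            tables[k] += 1
        assignments.append(k)
    return assignments, tables

assignments, tables = sample_crp(1000, alpha=2.0)
```

    The rich-get-richer seating rule is what lets the number of clusters (here, subclonal lineages) grow with the data instead of being fixed in advance.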

  4. Single versus mixture Weibull distributions for nonparametric satellite reliability

    International Nuclear Information System (INIS)

    Castet, Jean-Francois; Saleh, Joseph H.

    2010-01-01

    Long recognized as a critical design attribute for space systems, satellite reliability has not yet received the proper attention as limited on-orbit failure data and statistical analyses can be found in the technical literature. To fill this gap, we recently conducted a nonparametric analysis of satellite reliability for 1584 Earth-orbiting satellites launched between January 1990 and October 2008. In this paper, we provide an advanced parametric fit, based on mixture of Weibull distributions, and compare it with the single Weibull distribution model obtained with the Maximum Likelihood Estimation (MLE) method. We demonstrate that both parametric fits are good approximations of the nonparametric satellite reliability, but that the mixture Weibull distribution provides significant accuracy in capturing all the failure trends in the failure data, as evidenced by the analysis of the residuals and their quasi-normal dispersion.
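
    A sketch of the single-Weibull MLE baseline on synthetic failure times: the damped fixed-point iteration below is a standard way to solve the Weibull likelihood equations, but the data and parameter values are invented, not the satellite dataset:

```python
import numpy as np

def weibull_mle(x, tol=1e-10, max_iter=500):
    """Fit a two-parameter Weibull by maximum likelihood using a damped
    fixed-point iteration on the shape parameter k."""
    logx = np.log(x)
    mean_logx = logx.mean()
    k = 1.0
    for _ in range(max_iter):
        xk = x ** k
        g = (xk * logx).sum() / xk.sum() - mean_logx   # > 0 for real data
        k_new = 0.5 * (k + 1.0 / g)                    # damped update
        if abs(k_new - k) < tol:
            k = k_new
            break
        k = k_new
    scale = np.mean(x ** k) ** (1.0 / k)               # MLE of scale given k
    return k, scale

# Synthetic failure times from a known Weibull (shape 1.5, scale 10).
rng = np.random.default_rng(1)
data = 10.0 * rng.weibull(1.5, size=5000)
k_hat, scale_hat = weibull_mle(data)
```

    A mixture of Weibulls, as advocated in the paper, would replace this single fit with an EM-style weighted version of the same likelihood equations, one per component.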

  5. Where is the Efficient Frontier

    OpenAIRE

    Jing Chen

    2010-01-01

    Tremendous effort has been spent on the construction of reliable efficient frontiers. However, mean-variance efficient portfolios constructed using sample means and covariances often perform poorly out of sample. We prove that the capital market line is the efficient frontier for the risky assets in a financial market with liquid fixed income trading. This unified understanding of the riskless asset as the boundary of risky assets relieves the burden of constructing efficient frontiers in asset a...

  6. Case studies, cross-site comparisons, and the challenge of generalization: comparing agent-based models of land-use change in frontier regions.

    Science.gov (United States)

    Parker, Dawn C; Entwisle, Barbara; Rindfuss, Ronald R; Vanwey, Leah K; Manson, Steven M; Moran, Emilio; An, Li; Deadman, Peter; Evans, Tom P; Linderman, Marc; Rizi, S Mohammad Mussavi; Malanson, George

    2008-01-01

    Cross-site comparisons of case studies have been identified as an important priority by the land-use science community. From an empirical perspective, such comparisons potentially allow generalizations that may contribute to production of global-scale land-use and land-cover change projections. From a theoretical perspective, such comparisons can inform development of a theory of land-use science by identifying potential hypotheses and supporting or refuting evidence. This paper undertakes a structured comparison of four case studies of land-use change in frontier regions that follow an agent-based modeling approach. Our hypothesis is that each case study represents a particular manifestation of a common process. Given differences in initial conditions among sites and the time at which the process is observed, actual mechanisms and outcomes are anticipated to differ substantially between sites. Our goal is to reveal both commonalities and differences among research sites, model implementations, and ultimately, conclusions derived from the modeling process.

  7. Recent Advances and Trends in Nonparametric Statistics

    CERN Document Server

    Akritas, MG

    2003-01-01

    The advent of high-speed, affordable computers in the last two decades has given a new boost to the nonparametric way of thinking. Classical nonparametric procedures, such as function smoothing, suddenly lost their abstract flavour as they became practically implementable. In addition, many previously unthinkable possibilities became mainstream; prime examples include the bootstrap and resampling methods, wavelets and nonlinear smoothers, graphical methods, data mining, bioinformatics, as well as the more recent algorithmic approaches such as bagging and boosting. This volume is a collection o

  8. Examples of the Application of Nonparametric Information Geometry to Statistical Physics

    Directory of Open Access Journals (Sweden)

    Giovanni Pistone

    2013-09-01

    Full Text Available We review a nonparametric version of Amari’s information geometry in which the set of positive probability densities on a given sample space is endowed with an atlas of charts to form a differentiable manifold modeled on Orlicz Banach spaces. This nonparametric setting is used to discuss the setting of typical problems in machine learning and statistical physics, such as black-box optimization, Kullback-Leibler divergence, Boltzmann-Gibbs entropy and the Boltzmann equation.
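
    For reference, two of the central quantities named above are, for densities \(p\) and \(q\) on the sample space:

```latex
D_{\mathrm{KL}}(p \,\|\, q) = \int p(x)\,\log\frac{p(x)}{q(x)}\,\mathrm{d}x,
\qquad
H(p) = -\int p(x)\,\log p(x)\,\mathrm{d}x .
```

    In the nonparametric setting reviewed here, such functionals are studied on the manifold of positive densities charted by Orlicz spaces, rather than on a finite-dimensional parametric family.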

  9. Developing an immigration policy for Germany on the basis of a nonparametric labor market classification

    OpenAIRE

    Froelich, Markus; Puhani, Patrick

    2004-01-01

    Based on a nonparametrically estimated model of labor market classifications, this paper makes suggestions for immigration policy using data from western Germany in the 1990s. It is demonstrated that nonparametric regression is feasible in higher dimensions with only a few thousand observations. In sum, labor markets able to absorb immigrants are characterized by above-average age and by professional occupations. On the other hand, labor markets for young workers in service occupations are id...

  10. Bayesian inference in an extended SEIR model with nonparametric disease transmission rate: an application to the Ebola epidemic in Sierra Leone.

    Science.gov (United States)

    Frasso, Gianluca; Lambert, Philippe

    2016-10-01

    The 2014 Ebola outbreak in Sierra Leone is analyzed using a susceptible-exposed-infectious-removed (SEIR) epidemic compartmental model. The discrete time-stochastic model for the epidemic evolution is coupled to a set of ordinary differential equations describing the dynamics of the expected proportions of subjects in each epidemic state. The unknown parameters are estimated in a Bayesian framework by combining data on the number of new (laboratory confirmed) Ebola cases reported by the Ministry of Health and prior distributions for the transition rates elicited using information collected by the WHO during the follow-up of specific Ebola cases. The time-varying disease transmission rate is modeled in a flexible way using penalized B-splines. Our framework represents a valuable stochastic tool for the study of an epidemic dynamic even when only irregularly observed and possibly aggregated data are available. Simulations and the analysis of the 2014 Sierra Leone Ebola data highlight the merits of the proposed methodology. In particular, the flexible modeling of the disease transmission rate makes the estimation of the effective reproduction number robust to the misspecification of the initial epidemic states and to underreporting of the infectious cases. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
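
    A deterministic skeleton of the SEIR dynamics described above, integrated with a simple Euler scheme and a constant transmission rate; the paper instead models a time-varying beta(t) with penalized B-splines and couples the ODEs to a stochastic observation model, and all rates below are illustrative:

```python
def seir_step(state, beta, sigma, gamma, dt):
    """One Euler step of the deterministic SEIR equations (proportions).
    beta: transmission rate, sigma: 1/latent period, gamma: 1/infectious period."""
    s, e, i, r = state
    new_inf = beta * s * i
    return (s - dt * new_inf,
            e + dt * (new_inf - sigma * e),
            i + dt * (sigma * e - gamma * i),
            r + dt * gamma * i)

# Constant beta here; an epidemic with R0 = beta/gamma = 2.8 burns through
# most of a closed population within the 300-day horizon.
state = (0.999, 0.0, 0.001, 0.0)   # (S, E, I, R) as population proportions
dt, horizon = 0.1, 300.0           # days
for _ in range(int(horizon / dt)):
    state = seir_step(state, beta=0.4, sigma=1 / 9.0, gamma=1 / 7.0, dt=dt)
s, e, i, r = state
```

    Replacing the constant beta with a spline-valued beta(t) is what lets the effective reproduction number be tracked over the course of the outbreak.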

  11. A Bayesian nonparametric approach to causal inference on quantiles.

    Science.gov (United States)

    Xu, Dandan; Daniels, Michael J; Winterstein, Almut G

    2018-02-25

    We propose a Bayesian nonparametric approach (BNP) for causal inference on quantiles in the presence of many confounders. In particular, we define relevant causal quantities and specify BNP models to avoid bias from restrictive parametric assumptions. We first use Bayesian additive regression trees (BART) to model the propensity score and then construct the distribution of potential outcomes given the propensity score using a Dirichlet process mixture (DPM) of normals model. We thoroughly evaluate the operating characteristics of our approach and compare it to Bayesian and frequentist competitors. We use our approach to answer an important clinical question involving acute kidney injury using electronic health records. © 2018, The International Biometric Society.
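
    The Dirichlet process mixture component of such a model can be sketched via its truncated stick-breaking construction. This is a generic DP sketch, not the authors' full BART-plus-DPM model; alpha and the truncation level are arbitrary choices:

```python
import numpy as np

def stick_breaking_weights(alpha, n_atoms, rng):
    """Truncated stick-breaking construction of Dirichlet process weights:
    repeatedly break off Beta(1, alpha) fractions of the remaining stick."""
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining

rng = np.random.default_rng(11)
w = stick_breaking_weights(alpha=1.0, n_atoms=200, rng=rng)
# With alpha = 1 and 200 atoms, the truncated weights sum to ~1; pairing each
# weight with a normal component gives a DPM-of-normals outcome distribution.
```

    Because the mixture has (effectively) infinitely many components, quantiles of the potential-outcome distributions can be estimated without committing to a parametric outcome model.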

  12. Frontiers in Computer Education

    CERN Document Server

    Zhu, Egui; 2011 International Conference on Frontiers in Computer Education (ICFCE 2011)

    2012-01-01

    This book is the proceedings of the 2011 International Conference on Frontiers in Computer Education (ICFCE 2011) in Sanya, China, December 1-2, 2011. The contributions can be useful for researchers, software engineers, and programmers, all interested in promoting the computer and education development. Topics covered are computing and communication technology, network management, wireless networks, telecommunication, Signal and Image Processing, Machine Learning, educational management, educational psychology, educational system, education engineering, education technology and training.  The emphasis is on methods and calculi for computer science and education technology development, verification and verification tools support, experiences from doing developments, and the associated theoretical problems.

  13. Frontiers in nuclear chemistry

    International Nuclear Information System (INIS)

    Sood, D.D.; Reddy, A.V.R.; Pujari, P.K.

    1996-01-01

    This book contains articles on the landmarks in nuclear and radiochemistry, taking the reader through scientific history spanning over five decades from the times of Roentgen to the middle of this century. Articles on nuclear fission and the back end of the nuclear fuel cycle give an insight into the current status of this subject. Reviews on frontier areas like lanthanides, actinides, muonium chemistry, accelerator based nuclear chemistry, fast radiochemical separations and nuclear medicine bring out the multidisciplinary nature of nuclear sciences. This book also includes an article on environmental radiochemistry and safety. Chapters relevant to INIS are indexed separately

  14. Frontiers in Magnetic Materials

    CERN Document Server

    Narlikar, Anant V

    2005-01-01

    Frontiers in Magnetic Materials focuses on current achievements and state-of-the-art advances in magnetic materials. Several lines of development, among them high-Tc superconductivity, nanotechnology, and refined experimental techniques, have remarkably raised knowledge of and interest in magnetic materials. The book comprises 24 chapters on the most relevant topics, written by renowned international experts in the field. It is of central interest to researchers and specialists in physics and materials science, in both academic and industrial research, as well as to advanced students.

  15. Teaching Nonparametric Statistics Using Student Instrumental Values.

    Science.gov (United States)

    Anderson, Jonathan W.; Diddams, Margaret

    Nonparametric statistics are often difficult to teach in introductory statistics courses because of the lack of real-world examples. This study demonstrated how teachers can use differences in the rankings and ratings of undergraduate and graduate values to discuss: (1) ipsative and normative scaling; (2) uses of the Mann-Whitney U-test; and…
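
A minimal classroom-style sketch of the Mann-Whitney U-test mentioned above, using `scipy.stats.mannwhitneyu`. The two groups of ratings are purely illustrative, not the study's data.

```python
# Hypothetical example: comparing value ratings from two student groups
# with the Mann-Whitney U-test (a rank-based, nonparametric alternative
# to the two-sample t-test).
from scipy.stats import mannwhitneyu

undergrad = [7, 5, 6, 8, 4, 6, 7]   # illustrative ratings only
graduate = [8, 9, 7, 9, 8, 6, 9]

stat, p = mannwhitneyu(undergrad, graduate, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```

Because the test uses only ranks, it makes no normality assumption about the underlying rating scales.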

  16. Nonparametric conditional predictive regions for time series

    NARCIS (Netherlands)

    de Gooijer, J.G.; Zerom Godefay, D.

    2000-01-01

    Several nonparametric predictors based on the Nadaraya-Watson kernel regression estimator have been proposed in the literature. They include the conditional mean, the conditional median, and the conditional mode. In this paper, we consider three types of predictive regions for these predictors — the
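
As a concrete sketch of the first predictor listed above, a Nadaraya-Watson kernel estimate of the conditional mean, with a Gaussian kernel on simulated data; the bandwidth `h` is an illustrative choice, not a value from the paper.

```python
import numpy as np

def nw_conditional_mean(x0, x, y, h=0.3):
    """Nadaraya-Watson kernel regression estimate of E[Y | X = x0]."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)   # Gaussian kernel weights
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0 * np.pi, 500)
y = np.sin(x) + rng.normal(0.0, 0.1, 500)

# away from the boundaries the estimate should track sin(x) closely
print(nw_conditional_mean(np.pi / 2, x, y))
```

The conditional median and mode predictors replace the weighted mean with the median or mode of the locally weighted sample.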

  17. Nonparametric predictive inference in statistical process control

    NARCIS (Netherlands)

    Arts, G.R.J.; Coolen, F.P.A.; Laan, van der P.

    2000-01-01

    New methods for statistical process control are presented, where the inferences have a nonparametric predictive nature. We consider several problems in process control in terms of uncertainties about future observable random quantities, and we develop inferences for these random quantities based on

  18. Non-Parametric Estimation of Correlation Functions

    DEFF Research Database (Denmark)

    Brincker, Rune; Rytter, Anders; Krenk, Steen

    In this paper three methods of non-parametric correlation function estimation are reviewed and evaluated: the direct method, estimation by the Fast Fourier Transform, and estimation by the Random Decrement technique. The basic ideas of the techniques are reviewed and sources of bias are pointed...
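
A minimal sketch of the FFT-based estimate named above: zero-pad the signal so the circular correlation computed by the FFT equals the linear sum-of-products, then apply the unbiased normalization. The signal here is simulated white noise, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
x = x - x.mean()
n = len(x)

# zero-pad to length 2n so the FFT's circular correlation has no wrap-around
spec = np.fft.rfft(x, 2 * n)
raw = np.fft.irfft(spec * np.conj(spec))[:n]
acov = raw / (n - np.arange(n))   # unbiased normalization at each lag

# cross-check one lag against the direct (sum-of-products) estimate
direct5 = np.dot(x[:-5], x[5:]) / (n - 5)
```

The direct method computes each lag with an O(n) sum; the FFT route produces all lags in O(n log n).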

  19. Nonparametric estimation of location and scale parameters

    KAUST Repository

    Potgieter, C.J.; Lombard, F.

    2012-01-01

    Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal
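
The abstract is truncated, so the paper's own estimators are not reproduced here; as a generic illustration of nonparametric location-scale estimation, a common choice is the sample median for μ and the normalized interquartile range for σ.

```python
import numpy as np

def location_scale(y):
    """Median and IQR-based estimates of (mu, sigma); illustrative only."""
    mu = np.median(y)
    iqr = np.subtract(*np.percentile(y, [75, 25]))   # q75 - q25
    sigma = iqr / 1.349   # the IQR of a standard normal is about 1.349
    return mu, sigma

rng = np.random.default_rng(3)
y = 2.0 + 0.5 * rng.standard_normal(10_000)
mu, sigma = location_scale(y)
```

Both estimators are distribution-free up to the normalizing constant, which calibrates σ to the normal family.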

  20. Multi-scale constraints of sediment source to sink systems in frontier basins: a forward stratigraphic modeling case study of the Levant region

    Science.gov (United States)

    Hawie, Nicolas; Deschamps, Remy; Granjeon, Didier; Nader, Fadi-Henri; Gorini, Christian; Müller, Carla; Montadert, Lucien; Baudin, François

    2015-04-01

    Recent scientific work has underlined the presence of a thick Cenozoic infill in the Levant Basin, reaching up to 12 km. Interestingly, restricted sedimentation was observed along the Levant margin in the Cenozoic. Since the Late Eocene, successive regional geodynamic events affecting Afro-Arabia and Eurasia (collision and strike-slip deformation) induced fast marginal uplifts. The initiation of local and long-lived regional drainage systems in the Oligo-Miocene period (e.g., Lebanon versus Nile) provoked a change in the depositional pattern along the Levant margin and basin: a shift from carbonate-dominated environments into clastic-rich systems has been observed. Through this communication we explore the importance of multi-scale constraints (i.e., seismic, well, and field data) in the quantification of the subsidence history, sediment transport, and deposition of a Middle-Upper Miocene "multi-source" to sink system along the northern Levant frontier region. We prove, through a comprehensive forward stratigraphic modeling workflow, that the contribution to the infill of the northern Levant Basin (offshore Lebanon) is split between proximal and more distal clastic sources as well as in situ carbonate/hemipelagic deposition. In a wider perspective, this work falls under the umbrella of multi-disciplinary source-to-sink studies that investigate the impact of geodynamic events on basin/margin architectural evolution, the consequent sedimentary infill, and thus the assessment of petroleum systems.

  1. Introducing "Frontiers in Zoology"

    Science.gov (United States)

    Heinze, Jürgen; Tautz, Diethard

    2004-09-29

    As a biological discipline, zoology has one of the longest histories. Today it occasionally appears as though, due to the rapid expansion of the life sciences, zoology has been replaced by more or less independent sub-disciplines amongst which exchange is often sparse. However, the recent advance of molecular methodology into "classical" fields of biology, and the development of theories that can explain phenomena on different levels of organisation, have led to a re-integration of zoological disciplines, promoting a broader than usual approach to zoological questions. Zoology has re-emerged as an integrative discipline encompassing the most diverse aspects of animal life, from the level of the gene to the level of the ecosystem. The new journal Frontiers in Zoology is the first Open Access journal focussing on zoology as a whole. It aims to represent and re-unite the various disciplines that look at animal life from different perspectives, and to provide the basis for a comprehensive understanding of zoological phenomena on all levels of analysis. Frontiers in Zoology provides a unique opportunity to publish high quality research and reviews on zoological issues that will be internationally accessible to any reader at no cost.

  2. Karakteristik Kurva Efisien Frontier dalam Menentukan Portofolio Optimal

    Directory of Open Access Journals (Sweden)

    epha diana supandi

    2016-06-01

    In this paper, the characteristics of the efficient frontier curve in the Markowitz portfolio model are studied mathematically. The optimal portfolio is obtained using the Lagrange method. The study also examines the characteristics of the optimal portfolio for the minimum-variance, tangency, and mean-variance portfolios, as well as their positions on the efficient frontier curve. Furthermore, to give a more concrete picture, a numerical example is presented for several stocks traded on the Indonesian capital market.
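
A minimal sketch of the Lagrange-method solution for the global minimum-variance portfolio, w = S⁻¹1 / (1ᵀS⁻¹1), using an assumed toy covariance matrix (not data from the paper).

```python
import numpy as np

# illustrative covariance matrix for 3 assets (assumed values)
S = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])

ones = np.ones(3)
w = np.linalg.solve(S, ones)   # Lagrange first-order condition: S w ∝ 1
w = w / w.sum()                # rescale so the weights sum to one
```

At this solution every asset has the same marginal contribution to portfolio variance, which is exactly the Lagrange stationarity condition for the minimum-variance point on the efficient frontier.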

  3. The complexity of earth observation valuation: Modeling the patterns and processes of agricultural production and groundwater quality to construct a production possibilities frontier

    Science.gov (United States)

    Forney, W.; Raunikar, R. P.; Bernknopf, R.; Mishra, S.

    2012-12-01

    A production possibilities frontier (PPF) is a graph comparing the production interdependencies of two commodities. In this case, the commodities are defined as the ecosystem services of agricultural production and groundwater quality. This presentation focuses on the refinement of techniques used in an application to estimate the value of remote sensing information. Value of information focuses on the use of uncertain and varying qualities of information within a specific decision-making context for a certain application, which in this case included land use, biogeochemical, hydrogeologic, economic, and geospatial data and models. The refined techniques include deriving alternate patterns and processes of ecosystem functions, new estimates of ecosystem service values to construct a PPF, and the extension of this work into decision support systems. We have coupled earth observations of agricultural production with groundwater quality measurements to estimate the value of remote sensing information in northeastern Iowa to be $857M ± $198M (at the 2010 price level) per year. We will present an improved method for modeling crop rotation patterns that includes multiple years of rotation, a reduction in the assumptions associated with optimal land use allocations, and prioritized improvement of the resolution of input data (for example, soil resources and topography). The prioritization focuses on watersheds that were identified, at a coarse scale of analysis, to have higher intensities of agricultural production and lower probabilities of groundwater survivability (in other words, remaining below a regulatory threshold for nitrate pollution) over time, and thus require finer-scaled modeling and analysis. These improved techniques, together with the simulation of certain scale-dependent policy and management actions that trade off the objectives of optimizing crop value versus maintaining potable groundwater, provide new estimates for the empirical values of the PPF.

  4. New frontiers for tomorrow's world

    International Nuclear Information System (INIS)

    Kassler, P.

    1994-01-01

    The conference paper deals with new frontiers and barricades in the global economic development and their influence on fuel consumption and energy source development. Topics discussed are incremental energy supply - new frontiers, world car population - new frontiers, OPEC crude production capacity vs call on OPEC, incremental world oil demand by region 1992-2000, oil resource cost curve, progress in seismic 1983-1991, Troll picture, cost reduction in renewables, sustained growth scenario, nuclear electricity capacity - France, OECD road transport fuels - barricades, and energy taxation. 18 figs

  5. Targeting of Gold Deposits in Amazonian Exploration Frontiers using Knowledge- and Data-Driven Spatial Modeling of Geophysical, Geochemical, and Geological Data

    Science.gov (United States)

    Magalhães, Lucíola Alves; Souza Filho, Carlos Roberto

    2012-03-01

    This paper reports the application of weights-of-evidence, artificial neural network, and fuzzy logic spatial modeling techniques to generate prospectivity maps for gold mineralization in the neighborhood of the Amapari Au mine, Brazil. The study area comprises one of the last Brazilian mineral exploration frontiers. The Amapari mine is located in the Maroni-Itaicaiúnas Province, which regionally hosts important gold, iron, manganese, chromite, diamond, bauxite, kaolinite, and cassiterite deposits. The Amapari Au mine is characterized as an orogenic gold deposit. The highest gold grades are associated with highly deformed rocks and are concentrated in sulfide-rich veins mainly composed of pyrrhotite. The data used for the generation of the gold prospectivity models include aerogeophysical and geological maps as well as the gold content of stream sediment samples. The prospectivity maps provided by these three methods showed that the Amapari mine stands out as an area of high potential for gold mineralization. The prospectivity maps also highlight new targets for gold exploration. These new targets were validated by means of detailed maps of gold geochemical anomalies in soil and by fieldwork. The identified target areas exhibit good spatial coincidence with the main soil geochemical anomalies and prospects, thus demonstrating that the delineation of exploration targets by the analysis and integration of indirect datasets in a geographic information system (GIS) is consistent with direct prospecting. Considering that work of this nature had never been carried out in the Amazonian region, this is an important example of the applicability and functionality of geophysical data and prospectivity analysis in regions where geological and metallogenetic information is scarce.

  6. Panel data nonparametric estimation of production risk and risk preferences

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    We apply nonparametric panel data kernel regression to investigate production risk, output price uncertainty, and risk attitudes of Polish dairy farms, based on a firm-level unbalanced panel data set that covers the period 2004–2010. We compare different model specifications and different approaches for obtaining firm-specific measures of risk attitudes. We found that Polish dairy farmers are risk averse regarding production risk and price uncertainty. According to our results, Polish dairy farmers perceive production risk as being more significant than the risk related to output price...

  7. Digital spectral analysis parametric, non-parametric and advanced methods

    CERN Document Server

    Castanié, Francis

    2013-01-01

    Digital Spectral Analysis provides a single source that offers complete coverage of the spectral analysis domain. This self-contained work includes details on advanced topics that are usually presented in scattered sources throughout the literature. The theoretical principles necessary for the understanding of spectral analysis are discussed in the first four chapters: fundamentals, digital signal processing, estimation in spectral analysis, and time-series models. An entire chapter is devoted to the non-parametric methods most widely used in industry. High resolution methods a

  8. portfolio optimization based on nonparametric estimation methods

    Directory of Open Access Journals (Sweden)

    mahsa ghandehari

    2017-03-01

    One of the major issues investors face in capital markets is deciding on an appropriate stock exchange for investing and selecting an optimal portfolio. This process is carried out through the assessment of risk and expected return. In the portfolio selection problem, if asset returns are normally distributed, variance and standard deviation are used as the risk measure; however, expected returns on assets are not necessarily normal and sometimes differ dramatically from the normal distribution. This paper introduces conditional value at risk (CVaR) as a measure of risk in a nonparametric framework and, for a given expected return, derives the optimal portfolio; the method is compared with the linear programming method. The data used in this study consist of monthly returns of 15 companies selected from the top 50 companies on the Tehran Stock Exchange during the winter of 1392, with returns covering April 1388 to June 1393 (Iranian calendar years). The results of this study show the superiority of the nonparametric method over the linear programming method, and the nonparametric method is much faster.
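
A minimal sketch of the nonparametric (historical) CVaR at level α: the mean loss in the worst α-fraction of scenarios. The returns below are simulated; the paper uses monthly Tehran Stock Exchange returns.

```python
import numpy as np

def historical_cvar(returns, alpha=0.05):
    """Historical CVaR (expected shortfall) at level alpha, no distributional assumption."""
    losses = -np.asarray(returns)
    var = np.quantile(losses, 1 - alpha)   # historical value at risk
    tail = losses[losses >= var]           # worst alpha-fraction of scenarios
    return tail.mean()

rng = np.random.default_rng(1)
rets = rng.normal(0.01, 0.05, size=1000)
print(round(historical_cvar(rets), 4))
```

Because the estimate is built from the empirical loss distribution, it remains valid when returns depart from normality, which is the motivation given in the abstract.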

  9. Nonparametric autocovariance estimation from censored time series by Gaussian imputation.

    Science.gov (United States)

    Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K

    2009-02-01

    One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.

  10. A Nonparametric Bayesian Approach For Emission Tomography Reconstruction

    International Nuclear Information System (INIS)

    Barat, Eric; Dautremer, Thomas

    2007-01-01

    We introduce a PET reconstruction algorithm following a nonparametric Bayesian (NPB) approach. In contrast with expectation maximization (EM), the proposed technique does not rely on any space discretization. Namely, the activity distribution (the normalized emission intensity of the spatial Poisson process) is considered as a spatial probability density, and the observations are the projections of random emissions whose distribution has to be estimated. This approach is nonparametric in the sense that the quantity of interest belongs to the set of probability measures on R^k (for reconstruction in k dimensions), and it is Bayesian in the sense that we define a prior directly on this spatial measure. In this context, we propose to model the nonparametric probability density as an infinite mixture of multivariate normal distributions. As a prior for this mixture we consider a Dirichlet Process Mixture (DPM) with a Normal-Inverse Wishart (NIW) model as the base distribution of the Dirichlet Process. As in EM-family reconstruction, we use a data augmentation scheme where the hidden variables are the emission locations for each observed line of response in the continuous object space. Thanks to the data augmentation, we propose a Markov Chain Monte Carlo (MCMC) algorithm (a Gibbs sampler) that is able to generate draws from the posterior distribution of the spatial intensity. A difference from EM is that one step of the Gibbs sampler corresponds to the generation of emission locations, while only the expected number of emissions per pixel/voxel is used in EM. Another key difference is that the estimated spatial intensity is a continuous function, so there is no need to compute a projection matrix. Finally, draws from the intensity posterior distribution allow the estimation of posterior functionals such as the variance or confidence intervals. Results are presented for simulated data based on a 2D brain phantom and compared to Bayesian MAP-EM

  11. Ghana's cocoa frontier in transition

    DEFF Research Database (Denmark)

    Knudsen, Michael Helt; Agergaard, Jytte

    2015-01-01

    Since the first commercial planting of cocoa in Ghana more than a century ago, the production of cocoa has been a key factor in the redistribution of migrants and has played a pivotal role in the development of both sending and receiving communities. This process has been acknowledged in the literature for decades. However, how migration flows have changed in response to the changing livelihood dynamics of the frontier, and how this has impacted the development of the frontier, has attracted only limited attention. Based on a study of immigration to Ghana's current cocoa frontier in the Western Region, this article aims to examine how immigration and frontier dynamics in the Western region are contributing to livelihood transitions and small town development, and how this process is gradually becoming delinked from the production of cocoa. The article focuses on how migration dynamics interlink...

  12. Particle Physics at the Cosmic, Intensity, and Energy Frontiers

    Energy Technology Data Exchange (ETDEWEB)

    Essig, Rouven

    2018-04-06

    Major efforts at the Intensity, Cosmic, and Energy frontiers of particle physics are rapidly furthering our understanding of the fundamental constituents of Nature and their interactions. The overall objectives of this research project are (1) to interpret and develop the theoretical implications of the data collected at these frontiers and (2) to provide the theoretical motivation, basis, and ideas for new experiments and for new analyses of experimental data. Within the Intensity Frontier, an experimental search for a new force mediated by a GeV-scale gauge boson will be carried out with the $A'$ Experiment (APEX) and the Heavy Photon Search (HPS), both at Jefferson Laboratory. Within the Cosmic Frontier, contributions are planned to the search for dark matter particles with the Fermi Gamma-ray Space Telescope and other instruments. A detailed exploration will also be performed of new direct detection strategies for dark matter particles with sub-GeV masses to facilitate the development of new experiments. In addition, the theoretical implications of existing and future dark matter-related anomalies will be examined. Within the Energy Frontier, the implications of the data from the Large Hadron Collider will be investigated. Novel search strategies will be developed to aid the search for new phenomena not described by the Standard Model of particle physics. By combining insights from all three particle physics frontiers, this research aims to increase our understanding of fundamental particle physics.

  13. Introduction to nonparametric statistics for the biological sciences using R

    CERN Document Server

    MacFarland, Thomas W

    2016-01-01

    This book contains a rich set of tools for nonparametric analyses, and the purpose of this supplemental text is to provide guidance to students and professional researchers on how R is used for nonparametric data analysis in the biological sciences: To introduce when nonparametric approaches to data analysis are appropriate To introduce the leading nonparametric tests commonly used in biostatistics and how R is used to generate appropriate statistics for each test To introduce common figures typically associated with nonparametric data analysis and how R is used to generate appropriate figures in support of each data set The book focuses on how R is used to distinguish between data that could be classified as nonparametric as opposed to data that could be classified as parametric, with both approaches to data classification covered extensively. Following an introductory lesson on nonparametric statistics for the biological sciences, the book is organized into eight self-contained lessons on various analyses a...

  14. Bayesian nonparametric dictionary learning for compressed sensing MRI.

    Science.gov (United States)

    Huang, Yue; Paisley, John; Lin, Qin; Ding, Xinghao; Fu, Xueyang; Zhang, Xiao-Ping

    2014-12-01

    We develop a Bayesian nonparametric model for reconstructing magnetic resonance images (MRIs) from highly undersampled k-space data. We perform dictionary learning as part of the image reconstruction process. To this end, we use the beta process as a nonparametric dictionary learning prior for representing an image patch as a sparse combination of dictionary elements. The size of the dictionary and the patch-specific sparsity pattern are inferred from the data, in addition to other dictionary learning variables. Dictionary learning is performed directly on the compressed image, and so is tailored to the MRI being considered. In addition, we investigate a total variation penalty term in combination with the dictionary learning model, and show how the denoising property of dictionary learning removes dependence on regularization parameters in the noisy setting. We derive a stochastic optimization algorithm based on Markov chain Monte Carlo for the Bayesian model, and use the alternating direction method of multipliers for efficiently performing total variation minimization. We present empirical results on several MRIs, which show that the proposed regularization framework can improve reconstruction accuracy over other methods.

  15. A non-parametric framework for estimating threshold limit values

    Directory of Open Access Journals (Sweden)

    Ulm Kurt

    2005-11-01

    Background: To estimate a threshold limit value for a compound known to have harmful health effects, an 'elbow' threshold model is usually applied. We are interested in non-parametric, flexible alternatives. Methods: We describe how a step-function model fitted by isotonic regression can be used to estimate threshold limit values. This method returns a set of candidate locations, and we discuss two algorithms to select the threshold among them: reduced isotonic regression and an algorithm considering the closed family of hypotheses. We assess the performance of these two alternative approaches under different scenarios in a simulation study. We illustrate the framework by analysing data from a study conducted by the German Research Foundation aiming to set a threshold limit value for exposure to total dust at the workplace, as a causal agent for developing chronic bronchitis. Results: We demonstrate the use and the properties of the proposed methodology, along with the results from an application. The method detects the threshold with satisfactory success; however, its performance can be compromised by low power to reject the constant-risk assumption when the true dose-response relationship is weak. Conclusion: Threshold estimation based on the isotonic framework is conceptually simple and sufficiently powerful. Given that there is no gold-standard method in the threshold value estimation context, the proposed model provides a useful non-parametric alternative to the standard approaches and can corroborate or challenge their findings.
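
An illustrative sketch of the building block of this approach: a monotone step function fitted by isotonic regression (here via `sklearn.isotonic.IsotonicRegression`) to a simulated dose-response curve. The data and the true threshold at dose 4 are assumptions for the demo; the paper's candidate-selection algorithms are not reproduced.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
dose = np.sort(rng.uniform(0.0, 10.0, 200))
# simulated risk: flat below dose 4, elevated above it, plus noise
risk = 0.3 * (dose > 4.0) + rng.normal(0.0, 0.1, 200)

iso = IsotonicRegression(increasing=True)
step = iso.fit(dose, risk).predict(dose)
# `step` is a monotone step function; its jump locations are the
# candidate thresholds from which the final estimate is selected
```

Isotonic regression imposes only monotonicity, so the fit is free of the parametric 'elbow' shape assumption criticized in the abstract.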

  16. A Study on Neutrosophic Frontier and Neutrosophic Semi-frontier in Neutrosophic Topological Spaces

    Directory of Open Access Journals (Sweden)

    P. Iswarya

    2017-02-01

    In this paper, neutrosophic frontier and neutrosophic semi-frontier in neutrosophic topology are introduced, and several of their properties, characterizations, and examples are established.

  17. The Canadian experience in frontier environmental protection

    International Nuclear Information System (INIS)

    Jones, G.H.

    1991-01-01

    Early Canadian frontier exploration (from 1955 onshore and from 1966 for offshore drilling) caused insignificant public concern. The 1967-1968 Torrey Canyon Tanker and Santa Barbara disasters roused public opinion and governments. In Canada, 1969-1970 Arctic gas blowouts, a tanker disaster, and damage to the 'Manhattan' exacerbated concerns and resulted in new environmental regulatory constraints. From 1970, the Arctic Petroleum Operations Association learned to operate safely with environmental responsibility. It studied physical environment for design criteria, and the biological and human environment to ameliorate impact. APOA's research projects covered sea-ice, permafrost, sea-bottom, oil-spills, bird and mammal migration, fish habitat, food chains, oceanography, meteorology, hunters'/trappers' harvests, etc. In 1971 Eastcoast Petroleum Operators' Association and Alaska Oil and Gas Association followed APOA's cooperative research model. EPOA stressed icebergs and fisheries. Certain research was handled by the Canadian Offshore Oil Spill Research Association. By the mid-1980s these associations had undertaken $70,000,000 of environmental oriented research, with equivalent additional work by member companies on specific needs and similar sums by Federal agencies often working with industry on complementary research. The frontier associations then merged with the Canadian Petroleum Association, already active environmentally in western Canada. Working with government and informing environmental interest groups, the public, natives, and local groups, most Canadian frontier petroleum operations proceeded with minimal delay and environmental disturbance

  18. Security in the CernVM File System and the Frontier Distributed Database Caching System

    International Nuclear Information System (INIS)

    Dykstra, D; Blomer, J

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  19. Security in the CernVM File System and the Frontier Distributed Database Caching System

    Science.gov (United States)

    Dykstra, D.; Blomer, J.

    2014-06-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  20. Frontiers in Chemical Physics

    Energy Technology Data Exchange (ETDEWEB)

    Bowlan, Pamela Renee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-02

    These are slides dealing with frontiers in chemical physics. The following topics are covered: Time resolving chemistry with ultrashort pulses in the 0.1-40 THz spectral range; Example: Mid-infrared absorption spectrum of the intermediate state CH2OO; Tracking reaction dynamics through changes in the spectra; Single-shot measurement of the mid-IR absorption dynamics; Applying 2D coherent mid-IR spectroscopy to learn more about transition states; Time resolving chemical reactions at a catalyst using mid-IR and THz pulses; Studying topological insulators requires a surface sensitive probe; Nonlinear phonon dynamics in Bi2Se3; THz-pump, SHG-probe as a surface sensitive coherent 2D spectroscopy; Nanometer and femtosecond spatiotemporal resolution mid-IR spectroscopy; Coherent two-dimensional THz/mid-IR spectroscopy with 10 nm spatial resolution; Perovskite oxides as catalysts; Functionalized graphene for catalysis; Single-shot spatiotemporal measurements; Spatiotemporal pulse measurement; Intense, broad-band THz/mid-IR generation with organic crystals.

  1. On unlimited frontiers

    International Nuclear Information System (INIS)

    Graham, J.

    1985-01-01

    The political system in the United States is unique because our forefathers planned it that way. Unfortunately, this uniqueness does not always serve the interest of those who advocate large nuclear-power facilities and fuel-cycle activities involving reprocessing and breeder reactors. The influence of various political systems on the viability of the nuclear-power option is discussed. As it faces the future, the U.S. nuclear community is divided over its most appropriate courses of action. The current administration is supportive in words if not in deeds, but it is ideologically opposed to advocating the conditions needed for a thriving nuclear industry based on large light water reactors. Will the U.S. enter the new century with ''unlimited frontiers'' in many new nuclear plant designs, or will some major shift in public opinion bring back political conditions that are more compatible with large LWR facilities? This is the $64,000 question confronting the nuclear industry, which must be prepared for any eventuality. The best chance for the U.S. to regain worldwide superiority in nuclear power technology may lie in our ability to make rapid adjustments and to offer new and advanced machines that best fit utility needs and the political conditions of their own time

  2. Non-parametric smoothing of experimental data

    International Nuclear Information System (INIS)

    Kuketayev, A.T.; Pen'kov, F.M.

    2007-01-01

    Full text: Rapid processing of experimental data samples in nuclear physics often requires differentiation in order to find extrema. Therefore, even at the preliminary stage of data analysis, a range of noise reduction methods is used to smooth experimental data. There are many non-parametric smoothing techniques: interval averages, moving averages, exponential smoothing, etc. Nevertheless, it is more common to use a priori information about the behavior of the experimental curve to construct smoothing schemes based on least squares techniques. The advantage of the latter methodology is that the area under the curve can be preserved, which is equivalent to conservation of the total counting rate. Its disadvantage is that the required a priori information may be lacking. For example, during data processing the sum of peaks unresolved by the detector is very often replaced with a single peak, introducing uncontrolled errors into the determination of the physical quantities. In practice, the problem can be handled only by experienced personnel whose skills far exceed the difficulty of the task. We propose a set of non-parametric techniques that allows the use of any additional information on the nature of the experimental dependence. The method is based on the construction of a functional that includes both the experimental data and the a priori information. The minimum of this functional is attained on a non-parametrically smoothed curve. Euler (Lagrange) differential equations are constructed for these curves, and their solutions are then obtained analytically or numerically. The proposed approach allows for automated processing of nuclear physics data, eliminating the need for highly skilled laboratory personnel. It also makes it possible to obtain smoothing curves within a given confidence interval, e.g. according to the χ² distribution. This approach is applicable when constructing smooth solutions of ill-posed problems, in particular when solving
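The functional-minimization idea in the record above can be illustrated with a simple discrete analogue (not the authors' method; the peak shape, noise level, and penalty weight below are invented): minimizing ||f − y||² + λ||D²f||² over the smoothed curve f, where D² is the second-difference operator. Because the constant vector lies in the null space of D², the minimizer exactly preserves the sum of the data, mirroring the abstract's conservation of the total counting rate.

```python
import numpy as np

def smooth(y, lam=50.0):
    """Penalized least squares: minimise ||f - y||^2 + lam * ||D2 f||^2."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)      # second-difference operator
    A = np.eye(n) + lam * (D.T @ D)          # normal equations matrix
    return np.linalg.solve(A, y)             # A @ 1 = 1, so sum(f) = sum(y)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
truth = np.exp(-((x - 0.5) / 0.1) ** 2)      # a single synthetic "peak"
y = truth + 0.1 * rng.standard_normal(x.size)
f = smooth(y)
print(abs(f.sum() - y.sum()))                # area (counting rate) preserved
```

Adding further a priori terms to the functional (e.g. penalties tying f to a known peak model) changes only the matrix A in the same linear system.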

  3. New frontiers in information and production systems modelling and analysis incentive mechanisms, competence management, knowledge-based production

    CERN Document Server

    Novikov, Dmitry; Bakhtadze, Natalia; Zaikin, Oleg

    2016-01-01

    This book demonstrates how to apply modern approaches to complex system control in practical applications involving knowledge-based systems. The dimensions of knowledge-based systems are extended by incorporating new perspectives from control theory, multimodal systems and simulation methods.  The book is divided into three parts: theory, production system and information system applications. One of its main focuses is on an agent-based approach to complex system analysis. Moreover, specialised forms of knowledge-based systems (like e-learning, social network, and production systems) are introduced with a new formal approach to knowledge system modelling.   The book, which offers a valuable resource for researchers engaged in complex system analysis, is the result of a unique cooperation between scientists from applied computer science (mainly from Poland) and leading system control theory researchers from the Russian Academy of Sciences’ Trapeznikov Institute of Control Sciences.

  4. Searching for Physics Beyond the Standard Model: Strongly-Coupled Field Theories at the Intensity and Energy Frontiers

    International Nuclear Information System (INIS)

    Brower, Richard C.

    2016-01-01

    This proposal is to develop the software and algorithmic infrastructure needed for the numerical study of quantum chromodynamics (QCD), and of theories that have been proposed to describe physics beyond the Standard Model (BSM) of high energy physics, on current and future computers. This infrastructure will enable users (1) to improve the accuracy of QCD calculations to the point where they no longer limit what can be learned from high-precision experiments that seek to test the Standard Model, and (2) to determine the predictions of BSM theories in order to understand which of them are consistent with the data that will soon be available from the LHC. Work will include the extension and optimizations of community codes for the next generation of leadership class computers, the IBM Blue Gene/Q and the Cray XE/XK, and for the dedicated hardware funded for our field by the Department of Energy. Members of our collaboration at Brookhaven National Laboratory and Columbia University worked on the design of the Blue Gene/Q, and have begun to develop software for it. Under this grant we will build upon their experience to produce high-efficiency production codes for this machine. Cray XE/XK computers with many thousands of GPU accelerators will soon be available, and the dedicated commodity clusters we obtain with DOE funding include growing numbers of GPUs. We will work with our partners in NVIDIA's Emerging Technology group to scale our existing software to thousands of GPUs, and to produce highly efficient production codes for these machines. Work under this grant will also include the development of new algorithms for the effective use of heterogeneous computers, and their integration into our codes. It will include improvements of Krylov solvers and the development of new multigrid methods in collaboration with members of the FASTMath SciDAC Institute, using their HYPRE framework, as well as work on improved symplectic integrators.

  5. Searching for Physics Beyond the Standard Model: Strongly-Coupled Field Theories at the Intensity and Energy Frontiers

    Energy Technology Data Exchange (ETDEWEB)

    Brower, Richard C. [Boston Univ., MA (United States). Physics and ECE Depts.

    2016-11-08

    This proposal is to develop the software and algorithmic infrastructure needed for the numerical study of quantum chromodynamics (QCD), and of theories that have been proposed to describe physics beyond the Standard Model (BSM) of high energy physics, on current and future computers. This infrastructure will enable users (1) to improve the accuracy of QCD calculations to the point where they no longer limit what can be learned from high-precision experiments that seek to test the Standard Model, and (2) to determine the predictions of BSM theories in order to understand which of them are consistent with the data that will soon be available from the LHC. Work will include the extension and optimizations of community codes for the next generation of leadership class computers, the IBM Blue Gene/Q and the Cray XE/XK, and for the dedicated hardware funded for our field by the Department of Energy. Members of our collaboration at Brookhaven National Laboratory and Columbia University worked on the design of the Blue Gene/Q, and have begun to develop software for it. Under this grant we will build upon their experience to produce high-efficiency production codes for this machine. Cray XE/XK computers with many thousands of GPU accelerators will soon be available, and the dedicated commodity clusters we obtain with DOE funding include growing numbers of GPUs. We will work with our partners in NVIDIA's Emerging Technology group to scale our existing software to thousands of GPUs, and to produce highly efficient production codes for these machines. Work under this grant will also include the development of new algorithms for the effective use of heterogeneous computers, and their integration into our codes. It will include improvements of Krylov solvers and the development of new multigrid methods in collaboration with members of the FASTMath SciDAC Institute, using their HYPRE framework, as well as work on improved symplectic integrators.

  6. A Nonparametric Test for Seasonal Unit Roots

    OpenAIRE

    Kunst, Robert M.

    2009-01-01

    Abstract: We consider a nonparametric test for the null of seasonal unit roots in quarterly time series that builds on the RUR (records unit root) test by Aparicio, Escribano, and Sipols. We find that the test concept is more promising than a formalization of visual aids such as plots by quarter. In order to cope with the sensitivity of the original RUR test to autocorrelation under its null of a unit root, we suggest an augmentation step by autoregression. We present some evidence on the siz...
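The RUR statistic underlying the test above is built from record counts; a minimal sketch of the counting step (the seasonal augmentation and critical values from the paper are not reproduced, and the simulated series below are invented):

```python
import numpy as np

def count_records(x):
    """Number of upper records (new running maxima) in a sequence."""
    best, k = -np.inf, 0
    for v in x:
        if v > best:
            best, k = v, k + 1
    return k

rng = np.random.default_rng(1)
eps = rng.standard_normal(400)
stationary = eps                    # white noise: records grow like log(n)
random_walk = np.cumsum(eps)        # unit root: records grow like sqrt(n)
print(count_records(stationary), count_records(random_walk))
```

The test exploits exactly this divergence in the growth rate of record counts between stationary and unit-root series.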

  7. CLASH-VLT: INSIGHTS ON THE MASS SUBSTRUCTURES IN THE FRONTIER FIELDS CLUSTER MACS J0416.1–2403 THROUGH ACCURATE STRONG LENS MODELING

    International Nuclear Information System (INIS)

    Grillo, C.; Suyu, S. H.; Umetsu, K.; Rosati, P.; Caminha, G. B.; Mercurio, A.; Balestra, I.; Munari, E.; Nonino, M.; De Lucia, G.; Borgani, S.; Biviano, A.; Girardi, M.; Lombardi, M.; Gobat, R.; Coe, D.; Koekemoer, A. M.; Postman, M.; Zitrin, A.; Halkola, A.

    2015-01-01

    We present a detailed mass reconstruction and a novel study on the substructure properties in the core of the Cluster Lensing And Supernova survey with Hubble (CLASH) and Frontier Fields galaxy cluster MACS J0416.1–2403. We show and employ our extensive spectroscopic data set taken with the VIsible Multi-Object Spectrograph instrument as part of our CLASH-VLT program, to confirm spectroscopically 10 strong lensing systems and to select a sample of 175 plausible cluster members to a limiting stellar mass of log (M * /M ☉ ) ≅ 8.6. We reproduce the measured positions of a set of 30 multiple images with a remarkable median offset of only 0.″3 by means of a comprehensive strong lensing model composed of two cluster dark-matter halos, represented by cored elliptical pseudo-isothermal mass distributions, and the cluster member components, parameterized with dual pseudo-isothermal total mass profiles. The latter have total mass-to-light ratios increasing with the galaxy HST/WFC3 near-IR (F160W) luminosities. The measurement of the total enclosed mass within the Einstein radius is accurate to ∼5%, including the systematic uncertainties estimated from six distinct mass models. We emphasize that the use of multiple-image systems with spectroscopic redshifts and knowledge of cluster membership based on extensive spectroscopic information is key to constructing robust high-resolution mass maps. We also produce magnification maps over the central area that is covered with HST observations. We investigate the galaxy contribution, both in terms of total and stellar mass, to the total mass budget of the cluster. When compared with the outcomes of cosmological N-body simulations, our results point to a lack of massive subhalos in the inner regions of simulated clusters with total masses similar to that of MACS J0416.1–2403. Our findings on the location and shape of the cluster dark-matter halo density profiles and on the cluster substructures provide intriguing

  8. CLASH-VLT: INSIGHTS ON THE MASS SUBSTRUCTURES IN THE FRONTIER FIELDS CLUSTER MACS J0416.1–2403 THROUGH ACCURATE STRONG LENS MODELING

    Energy Technology Data Exchange (ETDEWEB)

    Grillo, C. [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 Copenhagen (Denmark); Suyu, S. H.; Umetsu, K. [Institute of Astronomy and Astrophysics, Academia Sinica, P.O. Box 23-141, Taipei 10617, Taiwan (China); Rosati, P.; Caminha, G. B. [Dipartimento di Fisica e Scienze della Terra, Università degli Studi di Ferrara, Via Saragat 1, I-44122 Ferrara (Italy); Mercurio, A. [INAF - Osservatorio Astronomico di Capodimonte, Via Moiariello 16, I-80131 Napoli (Italy); Balestra, I.; Munari, E.; Nonino, M.; De Lucia, G.; Borgani, S.; Biviano, A.; Girardi, M. [INAF - Osservatorio Astronomico di Trieste, via G. B. Tiepolo 11, I-34143, Trieste (Italy); Lombardi, M. [Dipartimento di Fisica, Università degli Studi di Milano, via Celoria 16, I-20133 Milano (Italy); Gobat, R. [Laboratoire AIM-Paris-Saclay, CEA/DSM-CNRS-Université Paris Diderot, Irfu/Service d'Astrophysique, CEA Saclay, Orme des Merisiers, F-91191 Gif sur Yvette (France); Coe, D.; Koekemoer, A. M.; Postman, M. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21208 (United States); Zitrin, A. [Cahill Center for Astronomy and Astrophysics, California Institute of Technology, MS 249-17, Pasadena, CA 91125 (United States); Halkola, A., E-mail: grillo@dark-cosmology.dk; and others

    2015-02-10

    We present a detailed mass reconstruction and a novel study on the substructure properties in the core of the Cluster Lensing And Supernova survey with Hubble (CLASH) and Frontier Fields galaxy cluster MACS J0416.1–2403. We show and employ our extensive spectroscopic data set taken with the VIsible Multi-Object Spectrograph instrument as part of our CLASH-VLT program, to confirm spectroscopically 10 strong lensing systems and to select a sample of 175 plausible cluster members to a limiting stellar mass of log (M*/M☉) ≅ 8.6. We reproduce the measured positions of a set of 30 multiple images with a remarkable median offset of only 0.″3 by means of a comprehensive strong lensing model composed of two cluster dark-matter halos, represented by cored elliptical pseudo-isothermal mass distributions, and the cluster member components, parameterized with dual pseudo-isothermal total mass profiles. The latter have total mass-to-light ratios increasing with the galaxy HST/WFC3 near-IR (F160W) luminosities. The measurement of the total enclosed mass within the Einstein radius is accurate to ∼5%, including the systematic uncertainties estimated from six distinct mass models. We emphasize that the use of multiple-image systems with spectroscopic redshifts and knowledge of cluster membership based on extensive spectroscopic information is key to constructing robust high-resolution mass maps. We also produce magnification maps over the central area that is covered with HST observations. We investigate the galaxy contribution, both in terms of total and stellar mass, to the total mass budget of the cluster. When compared with the outcomes of cosmological N-body simulations, our results point to a lack of massive subhalos in the inner regions of simulated clusters with total masses similar to that of MACS J0416.1–2403. Our findings on the location and shape of the cluster dark-matter halo density profiles and on the cluster substructures provide

  9. A Bayesian nonparametric estimation of distributions and quantiles

    International Nuclear Information System (INIS)

    Poern, K.

    1988-11-01

    The report describes a Bayesian, nonparametric method for the estimation of a distribution function and its quantiles. The method, presupposing random sampling, is nonparametric, so the user has to specify a prior distribution on a space of distributions (and not on a parameter space). In the current application, where the method is used to estimate the uncertainty of a parametric calculational model, the Dirichlet prior distribution is to a large extent determined by the first batch of Monte Carlo realizations. In this case the results of the estimation technique are very similar to the conventional empirical distribution function. The resulting posterior distribution is also Dirichlet, and thus facilitates the determination of probability (confidence) intervals at any given point in the space of interest. Another advantage is that the posterior distribution of a specified quantile can also be derived and utilized to determine a probability interval for that quantile. The method was devised for use in the PROPER code package for uncertainty and sensitivity analysis. (orig.)
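The conjugacy described above has a simple pointwise consequence: under a Dirichlet(-process) prior, the posterior of F(x) at any fixed x is a Beta distribution, which directly yields the probability intervals mentioned in the record. A hypothetical sketch (the prior mass, uniform base measure, and data are invented, not taken from the PROPER application):

```python
from scipy.stats import beta

def posterior_cdf_interval(data, x, alpha_total=1.0, level=0.95):
    """Dirichlet-process posterior for F(x) at a fixed point x.

    Assumes a DP prior with total mass alpha_total and, for illustration,
    a uniform base measure on [0, 1].  Then F(x) | data ~ Beta(a, b) with
    a = alpha_total * G0(x) + #{X_i <= x} and b the complement.
    """
    g0 = min(max(x, 0.0), 1.0)              # uniform base CDF on [0, 1]
    k = sum(1 for v in data if v <= x)
    n = len(data)
    a = alpha_total * g0 + k
    b = alpha_total * (1.0 - g0) + (n - k)
    lo, hi = beta.ppf([(1 - level) / 2, (1 + level) / 2], a, b)
    return a / (a + b), (lo, hi)            # posterior mean and interval

mean, (lo, hi) = posterior_cdf_interval([0.1, 0.2, 0.7, 0.9], 0.5)
print(mean, lo, hi)                          # mean is 0.5 for this data
```

With a small prior mass the posterior mean stays close to the empirical distribution function, as the report notes.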

  10. Non-parametric Estimation of Diffusion-Paths Using Wavelet Scaling Methods

    DEFF Research Database (Denmark)

    Høg, Esben

    In continuous time, diffusion processes have been used for modelling financial dynamics for a long time. For example the Ornstein-Uhlenbeck process (the simplest mean-reverting process) has been used to model non-speculative price processes. We discuss non-parametric estimation of these processes...

  11. Non-Parametric Estimation of Diffusion-Paths Using Wavelet Scaling Methods

    DEFF Research Database (Denmark)

    Høg, Esben

    2003-01-01

    In continuous time, diffusion processes have been used for modelling financial dynamics for a long time. For example the Ornstein-Uhlenbeck process (the simplest mean-reverting process) has been used to model non-speculative price processes. We discuss non-parametric estimation of these processes...

  12. International comparisons of the technical efficiency of the hospital sector: panel data analysis of OECD countries using parametric and non-parametric approaches.

    Science.gov (United States)

    Varabyova, Yauheniya; Schreyögg, Jonas

    2013-09-01

    There is a growing interest in the cross-country comparisons of the performance of national health care systems. The present work provides a comparison of the technical efficiency of the hospital sector using unbalanced panel data from OECD countries over the period 2000-2009. The estimation of the technical efficiency of the hospital sector is performed using nonparametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA). Internal and external validity of findings is assessed by estimating the Spearman rank correlations between the results obtained in different model specifications. The panel-data analyses using two-step DEA and one-stage SFA show that countries, which have higher health care expenditure per capita, tend to have a more technically efficient hospital sector. Whether the expenditure is financed through private or public sources is not related to the technical efficiency of the hospital sector. On the other hand, the hospital sector in countries with higher income inequality and longer average hospital length of stay is less technically efficient. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
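The DEA side of such comparisons can be sketched with the envelopment form of the input-oriented CCR model, one common DEA variant (this is a generic illustration with invented toy data, not the paper's two-step specification): for each decision-making unit, minimize the radial input contraction θ subject to the unit remaining dominated by a convex cone of observed units.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency score for each DMU.

    X: (n, m) input matrix, Y: (n, s) output matrix.  For unit o, solve
    min theta  s.t.  sum_j lam_j x_j <= theta * x_o,  sum_j lam_j y_j >= y_o.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]             # minimise theta
        A_in = np.c_[-X[o], X.T]                # X^T lam - theta * x_o <= 0
        A_out = np.c_[np.zeros((s, 1)), -Y.T]   # -Y^T lam <= -y_o
        A = np.vstack([A_in, A_out])
        b = np.r_[np.zeros(m), -Y[o]]
        res = linprog(c, A_ub=A, b_ub=b,
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.x[0])
    return np.array(scores)

# three toy "hospitals": one input (beds), one output (cases treated)
X = np.array([[2.0], [4.0], [4.0]])
Y = np.array([[2.0], [4.0], [2.0]])
print(dea_ccr_input(X, Y))   # the third unit uses twice the input per case
```

The paper's second step (regressing these scores on country characteristics) and the SFA comparison are not shown.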

  13. Nonparametric Bayesian density estimation on manifolds with applications to planar shapes.

    Science.gov (United States)

    Bhattacharya, Abhishek; Dunson, David B

    2010-12-01

    Statistical analysis on landmark-based shape spaces has diverse applications in morphometrics, medical diagnostics, machine vision and other areas. These shape spaces are non-Euclidean quotient manifolds. To conduct nonparametric inferences, one may define notions of centre and spread on this manifold and work with their estimates. However, it is useful to consider full likelihood-based methods, which allow nonparametric estimation of the probability density. This article proposes a broad class of mixture models constructed using suitable kernels on a general compact metric space and then on the planar shape space in particular. Following a Bayesian approach with a nonparametric prior on the mixing distribution, conditions are obtained under which the Kullback-Leibler property holds, implying large support and weak posterior consistency. Gibbs sampling methods are developed for posterior computation, and the methods are applied to problems in density estimation and classification with shape-based predictors. Simulation studies show improved estimation performance relative to existing approaches.

  14. Stochastic Frontier Models with Dependent Errors based on Normal and Exponential Margins || Modelos de frontera estocástica con errores dependientes basados en márgenes normal y exponencial

    Directory of Open Access Journals (Sweden)

    Gómez-Déniz, Emilio

    2017-06-01

    Full Text Available Following the recent work of Gómez-Déniz and Pérez-Rodríguez (2014, this paper extends the results obtained there to the normal-exponential distribution with dependence. Accordingly, the main aim of the present paper is to enhance stochastic production frontier and stochastic cost frontier modelling by proposing a bivariate distribution for dependent errors which allows us to nest the classical models. Closed-form expressions for the error term and technical efficiency are provided. An illustration using real data from the econometric literature is provided to show the applicability of the model proposed. || Continuando el reciente trabajo de Gómez-Déniz y Pérez-Rodríguez (2014, el presente artículo extiende los resultados obtenidos a la distribución normal-exponencial con dependencia. En consecuencia, el principal propósito de este artículo es mejorar el modelado de la frontera estocástica tanto de producción como de coste proponiendo para ello una distribución bivariante para errores dependientes que nos permitan encajar los modelos clásicos. Se obtienen las expresiones en forma cerrada para el término de error y la eficiencia técnica. Se ilustra la aplicabilidad del modelo propouesto usando datos reales existentes en la literatura econométrica.
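The classical independent-errors model that the proposed bivariate distribution nests can be sketched as follows (this shows only the standard normal-exponential composed error and the Jondrow et al. (1982) inefficiency estimator, not the dependence extension of the paper; all parameter values are invented):

```python
import numpy as np
from scipy.stats import norm

def jondrow_u_hat(eps, sigma_v, sigma_u):
    """E[u | eps] for the normal-exponential composed error eps = v - u.

    Jondrow et al. (1982): u | eps is N(mu, sigma_v^2) truncated at zero,
    with mu = -eps - sigma_v**2 / sigma_u.
    """
    mu = -eps - sigma_v ** 2 / sigma_u
    z = mu / sigma_v
    return mu + sigma_v * norm.pdf(z) / norm.cdf(z)

rng = np.random.default_rng(2)
sigma_v, sigma_u = 0.2, 0.5
v = sigma_v * rng.standard_normal(20000)     # symmetric noise
u = rng.exponential(sigma_u, 20000)          # one-sided inefficiency
eps = v - u
u_hat = jondrow_u_hat(eps, sigma_v, sigma_u)
print(u_hat.mean())                          # close to E[u] = sigma_u
```

Technical efficiency per observation is then exp(-u_hat); the paper replaces the independence assumption between v and u with a bivariate dependent specification.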

  15. Energy not the only frontier

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1987-11-15

    While the push for big new machines to explore high energy frontiers makes the headlines, other avenues for physics progress are still being actively explored. To reflect these efforts, theorists and experimenters from the experiments committees for CERN's two major existing machines - the PS Proton Synchrotron and the SPS Super Proton Synchrotron – joined forces in study groups to look at long term physics perspectives. As one experimenter put it, 'there are frontiers of high complexity and high precision as well as high energy'. The groups' findings were aired at a special joint open meeting of the two committees at CERN on 31 August and 1 September.

  16. Nonparametric method for failures diagnosis in the actuating subsystem of aircraft control system

    Science.gov (United States)

    Terentev, M. N.; Karpenko, S. S.; Zybin, E. Yu; Kosyanchuk, V. V.

    2018-02-01

    In this paper we design a nonparametric method for failures diagnosis in the aircraft control system that uses the measurements of the control signals and the aircraft states only. It doesn’t require a priori information of the aircraft model parameters, training or statistical calculations, and is based on analytical nonparametric one-step-ahead state prediction approach. This makes it possible to predict the behavior of unidentified and failure dynamic systems, to weaken the requirements to control signals, and to reduce the diagnostic time and problem complexity.
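The idea of model-free one-step-ahead prediction from measured controls and states can be illustrated with a nearest-neighbour predictor and a residual monitor (a hypothetical toy sketch, not the authors' analytical method; the plant, the failure, and the thresholding are all invented):

```python
import numpy as np

def nn_predict(history, pair):
    """Predict the next state from the most similar past (state, input) pair."""
    pairs = np.array([p for p, _ in history])
    nxt = np.array([n for _, n in history])
    j = np.argmin(np.sum((pairs - pair) ** 2, axis=1))
    return nxt[j]

# first-order toy plant; an abrupt "failure" changes its dynamics at k = 150
K, fail_at = 200, 150
x, history, residuals = 0.0, [], []
for k in range(K):
    u = np.sin(0.1 * k)
    a = 0.9 if k < fail_at else 0.3          # failure: gain drops
    x_next = a * x + u
    if len(history) > 5:
        residuals.append(abs(nn_predict(history, np.array([x, u])) - x_next))
    else:
        residuals.append(0.0)
    history.append((np.array([x, u]), x_next))
    x = x_next

# prediction residuals jump when the measured behaviour leaves the learned one
print(np.mean(residuals[140:150]), np.mean(residuals[150:160]))
```

No plant parameters are identified at any point: the diagnosis uses only the recorded control signals and states, which is the essence of the approach described above.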

  17. Pushing Human Frontiers

    Science.gov (United States)

    Zubrin, Robert

    2005-01-01

    With human colonization of Mars, I think you will see a higher standard of civilization, just as America set a higher standard of civilization which then promulgated back into Europe. I think that if you want to maximize human potential, you need a higher standard of civilization, and that becomes an example that benefits everyone. Without an open frontier, closed world ideologies, such as the Malthus Theory, tend to come to the forefront. The premise is that there are limited resources; therefore, we are all in deadly competition with each other for the limited pot. The result is tyrannical and potentially genocidal regimes, and we've already seen this in the twentieth century. There's no truth in the Malthus Theory, because human beings are the creators of their resources. With every mouth comes a pair of hands and a brain. But if it seems to be true, you have a vector in this direction, and it is extremely unfortunate. It is only in a universe of infinite resources that all humans can be brothers and sisters. The fundamental question which affects humanity's sense of itself is whether the world is changeable or fixed. Are we the makers of our world or just its inhabitants? Some people have a view that they're living at the end of history within a world that's already defined, and there is no fundamental purpose to human life because there is nothing humans can do that matters. On the other hand, if humans understand their own role as the creators of their world, that's a much healthier point of view. It raises the dignity of humans. Indeed, if we do establish a new branch of human civilization on Mars that grows in time and potency to the point where it does not merely settle Mars, but transforms Mars, and brings life to Mars, we will prove to everyone and for all time the precious and positive nature of the human species and every member of it.

  18. On Parametric (and Non-Parametric Variation

    Directory of Open Access Journals (Sweden)

    Neil Smith

    2009-11-01

    Full Text Available This article raises the issue of the correct characterization of ‘Parametric Variation’ in syntax and phonology. After specifying their theoretical commitments, the authors outline the relevant parts of the Principles-and-Parameters framework, and draw a three-way distinction among Universal Principles, Parameters, and Accidents. The core of the contribution then consists of an attempt to provide identity criteria for parametric, as opposed to non-parametric, variation. Parametric choices must be antecedently known, and it is suggested that they must also satisfy seven individually necessary and jointly sufficient criteria. These are that they be cognitively represented, systematic, dependent on the input, deterministic, discrete, mutually exclusive, and irreversible.

  19. Nonparametric predictive pairwise comparison with competing risks

    International Nuclear Information System (INIS)

    Coolen-Maturi, Tahani

    2014-01-01

    In reliability, failure data often correspond to competing risks, where several failure modes can cause a unit to fail. This paper presents nonparametric predictive inference (NPI) for pairwise comparison with competing risks data, assuming that the failure modes are independent. These failure modes could be the same or different among the two groups, and these can be both observed and unobserved failure modes. NPI is a statistical approach based on few assumptions, with inferences strongly based on data and with uncertainty quantified via lower and upper probabilities. The focus is on the lower and upper probabilities for the event that the lifetime of a future unit from one group, say Y, is greater than the lifetime of a future unit from the second group, say X. The paper also shows how the two groups can be compared based on particular failure mode(s), and the comparison of the two groups when some of the competing risks are combined is discussed

  20. Nonparametric estimation of location and scale parameters

    KAUST Repository

    Potgieter, C.J.

    2012-12-01

    Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal assumptions regarding the form of the distribution functions of X and Y. We discuss an approach to the estimation problem that is based on asymptotic likelihood considerations. Our results enable us to provide a methodology that can be implemented easily and which yields estimators that are often near optimal when compared to fully parametric methods. We evaluate the performance of the estimators in a series of Monte Carlo simulations. © 2012 Elsevier B.V. All rights reserved.
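A distribution-free way to estimate the location-scale link described above is to match robust sample summaries across the two samples (this is a simple illustrative device, not the asymptotic-likelihood estimator of the paper; the simulated samples are invented):

```python
import numpy as np

def location_scale_fit(x, y):
    """Estimate (mu, sigma) in Y =d mu + sigma * X without a parametric form,
    by matching sample medians and median absolute deviations (MAD)."""
    mad = lambda z: np.median(np.abs(z - np.median(z)))
    sigma = mad(y) / mad(x)
    mu = np.median(y) - sigma * np.median(x)
    return mu, sigma

rng = np.random.default_rng(3)
x = rng.standard_normal(5000)
y = 2.0 + 3.0 * rng.standard_normal(5000)   # same family, shifted and scaled
mu, sigma = location_scale_fit(x, y)
print(round(mu, 2), round(sigma, 2))        # near the true (2, 3)
```

Median/MAD matching is consistent under the stated minimal assumptions but generally less efficient than the likelihood-based construction the paper develops.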

  1. Testing a parametric function against a nonparametric alternative in IV and GMM settings

    DEFF Research Database (Denmark)

    Gørgens, Tue; Wurtz, Allan

    This paper develops a specification test for functional form for models identified by moment restrictions, including IV and GMM settings. The general framework is one where the moment restrictions are specified as functions of data, a finite-dimensional parameter vector, and a nonparametric real ...

  2. Assessing pupil and school performance by non-parametric and parametric techniques

    NARCIS (Netherlands)

    de Witte, K.; Thanassoulis, E.; Simpson, G.; Battisti, G.; Charlesworth-May, A.

    2010-01-01

    This paper discusses the use of the non-parametric free disposal hull (FDH) and the parametric multi-level model (MLM) as alternative methods for measuring pupil and school attainment where hierarchical structured data are available. Using robust FDH estimates, we show how to decompose the overall

  3. Analyzing cost efficient production behavior under economies of scope : A nonparametric methodology

    NARCIS (Netherlands)

    Cherchye, L.J.H.; de Rock, B.; Vermeulen, F.M.P.

    2008-01-01

    In designing a production model for firms that generate multiple outputs, we take as a starting point that such multioutput production refers to economies of scope, which in turn originate from joint input use and input externalities. We provide a nonparametric characterization of cost-efficient

  4. Analyzing Cost Efficient Production Behavior Under Economies of Scope : A Nonparametric Methodology

    NARCIS (Netherlands)

    Cherchye, L.J.H.; de Rock, B.; Vermeulen, F.M.P.

    2006-01-01

    In designing a production model for firms that generate multiple outputs, we take as a starting point that such multi-output production refers to economies of scope, which in turn originate from joint input use and input externalities. We provide a nonparametric characterization of cost efficient

  5. A non-parametric method for correction of global radiation observations

    DEFF Research Database (Denmark)

    Bacher, Peder; Madsen, Henrik; Perers, Bengt

    2013-01-01

    in the observations are corrected. These are errors such as: tilt in the leveling of the sensor, shadowing from surrounding objects, clipping and saturation in the signal processing, and errors from dirt and wear. The method is based on a statistical non-parametric clear-sky model which is applied to both...

  6. The two frontiers of physics

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1986-05-15

    In March at Garching, near Munich, physicists from different walks of life together took another hard look at the two major frontiers of physics – the very large and the infinitesimally small. Organized jointly by CERN and the European Southern Observatory (ESO), the Garching 'Symposium on Cosmology, Astronomy and Fundamental Physics' was the second in a series launched at CERN in November 1983.

  7. Prior processes and their applications nonparametric Bayesian estimation

    CERN Document Server

    Phadia, Eswar G

    2016-01-01

    This book presents a systematic and comprehensive treatment of various prior processes that have been developed over the past four decades for dealing with Bayesian approach to solving selected nonparametric inference problems. This revised edition has been substantially expanded to reflect the current interest in this area. After an overview of different prior processes, it examines the now pre-eminent Dirichlet process and its variants including hierarchical processes, then addresses new processes such as dependent Dirichlet, local Dirichlet, time-varying and spatial processes, all of which exploit the countable mixture representation of the Dirichlet process. It subsequently discusses various neutral to right type processes, including gamma and extended gamma, beta and beta-Stacy processes, and then describes the Chinese Restaurant, Indian Buffet and infinite gamma-Poisson processes, which prove to be very useful in areas such as machine learning, information retrieval and featural modeling. Tailfree and P...
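One of the discrete constructions mentioned above, the Chinese Restaurant process, is easy to simulate directly (a generic textbook sketch, not code from the book; the concentration value is arbitrary): customer i joins an occupied table with probability proportional to its size, or opens a new table with probability proportional to alpha.

```python
import random

def chinese_restaurant(n, alpha, seed=0):
    """Sample table sizes for n customers from CRP(alpha)."""
    rng = random.Random(seed)
    tables = []                             # current table sizes
    for i in range(n):
        r = rng.random() * (i + alpha)      # total mass: i customers + alpha
        acc = 0.0
        for t, size in enumerate(tables):
            acc += size
            if r < acc:
                tables[t] += 1              # join an existing table
                break
        else:
            tables.append(1)                # open a new table
    return tables

print(chinese_restaurant(100, 2.0))
```

The induced random partition is exchangeable, which is what ties the CRP to the countable mixture representation of the Dirichlet process discussed in the book.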

  8. Nonparametric estimation of benchmark doses in environmental risk assessment

    Science.gov (United States)

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    Summary An important statistical objective in environmental risk analysis is estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a pre-specified benchmark response in a dose-response experiment. In such settings, representations of the risk are traditionally based on a parametric dose-response model. It is a well-known concern, however, that if the chosen parametric form is misspecified, inaccurate and possibly unsafe low-dose inferences can result. We apply a nonparametric approach for calculating benchmark doses, based on an isotonic regression method for dose-response estimation with quantal-response data (Bhattacharya and Kong, 2007). We determine the large-sample properties of the estimator, develop bootstrap-based confidence limits on the BMDs, and explore the confidence limits’ small-sample properties via a short simulation study. An example from cancer risk assessment illustrates the calculations. PMID:23914133
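The monotone fit at the heart of the approach above is the pool-adjacent-violators algorithm (PAVA); a minimal sketch with invented quantal-response data (the bootstrap confidence limits of the paper are not reproduced), reading the BMD off where the fitted extra risk crosses a benchmark response (BMR):

```python
def pava(y, w=None):
    """Pool-adjacent-violators: weighted least-squares nondecreasing fit."""
    y = [float(v) for v in y]
    w = [1.0] * len(y) if w is None else [float(v) for v in w]
    vals, wts, cnt = [], [], []
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); cnt.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:   # pool violating blocks
            tot = wts[-2] + wts[-1]
            vals[-2] = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / tot
            wts[-2] = tot
            cnt[-2] += cnt[-1]
            vals.pop(); wts.pop(); cnt.pop()
    fit = []
    for v, k in zip(vals, cnt):
        fit += [v] * k
    return fit

# toy quantal data: dose, number tested, number responding
doses = [0.0, 1.0, 2.0, 4.0, 8.0]
n = [50, 50, 50, 50, 50]
r = [1, 3, 2, 10, 25]
p = pava([ri / ni for ri, ni in zip(r, n)], w=n)

# BMD: smallest dose where extra risk over background reaches the BMR
BMR, p0 = 0.10, p[0]
extra = [(pi - p0) / (1.0 - p0) for pi in p]
i = next(k for k, e in enumerate(extra) if e >= BMR)
bmd = doses[i - 1] + (doses[i] - doses[i - 1]) * \
      (BMR - extra[i - 1]) / (extra[i] - extra[i - 1])
print(p, round(bmd, 2))
```

Bootstrapping this whole pipeline over resampled dose groups would give the lower confidence limit (BMDL) that risk assessors actually report.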

  9. Indoor Positioning Using Nonparametric Belief Propagation Based on Spanning Trees

    Directory of Open Access Journals (Sweden)

    Savic Vladimir

    2010-01-01

    Nonparametric belief propagation (NBP) is one of the best-known methods for cooperative localization in sensor networks. It is capable of providing information about location estimation with appropriate uncertainty and of accommodating non-Gaussian distance measurement errors. However, the accuracy of NBP is questionable in loopy networks. Therefore, in this paper, we propose a novel approach, NBP based on spanning trees (NBP-ST), created by the breadth-first search (BFS) method. In addition, we propose a reliable indoor model based on measurements obtained in our lab. According to our simulation results, NBP-ST performs better than NBP in terms of accuracy and communication cost in networks with high connectivity (i.e., highly loopy networks). Furthermore, the computational and communication costs are nearly constant with respect to the transmission radius. However, the drawbacks of the proposed method are a slightly higher computational cost and poor performance in low-connectivity networks.
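    The BFS spanning-tree construction that NBP-ST relies on is elementary. A minimal sketch on a toy loopy graph (the adjacency list is invented for illustration):

```python
from collections import deque

def bfs_spanning_tree(adj, root):
    """Return the edge list of a BFS spanning tree of a connected graph.
    adj: dict mapping node -> iterable of neighbours."""
    visited = {root}
    tree = []
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                tree.append((u, v))   # tree edge discovered by BFS
                q.append(v)
    return tree

# toy loopy network: a square with one diagonal (contains cycles)
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
edges = bfs_spanning_tree(adj, 0)
```

    Running belief propagation on the returned cycle-free edge set avoids the convergence issues that loops cause.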

  10. Multi-Directional Non-Parametric Analysis of Agricultural Efficiency

    DEFF Research Database (Denmark)

    Balezentis, Tomas

    This thesis seeks to develop methodologies for assessment of agricultural efficiency and employ them to Lithuanian family farms. In particular, we focus on three particular objectives throughout the research: (i) to perform a fully non-parametric analysis of efficiency effects, (ii) to extend...... to the Multi-Directional Efficiency Analysis approach when the proposed models were employed to analyse empirical data of Lithuanian family farm performance, we saw substantial differences in efficiencies associated with different inputs. In particular, assets appeared to be the least efficiently used input...... relative to labour, intermediate consumption and land (in some cases land was not treated as a discretionary input). These findings call for further research on relationships among financial structure, investment decisions, and efficiency in Lithuanian family farms. Application of different techniques...

  11. Landscape of Future Accelerators at the Energy and Intensity Frontier

    Energy Technology Data Exchange (ETDEWEB)

    Syphers, M. J. [Northern Illinois U.; Chattopadhyay, S. [Northern Illinois U.

    2016-11-21

    An overview is provided of the currently envisaged landscape of charged particle accelerators at the energy and intensity frontiers to explore particle physics beyond the standard model via 1-100 TeV-scale lepton and hadron colliders and multi-megawatt proton accelerators for short- and long-baseline neutrino experiments. The particle beam physics, associated technological challenges and progress to date for these accelerator facilities (LHC, HL-LHC, future 100 TeV p-p colliders, TeV-scale linear and circular electron-positron colliders, the high-intensity proton accelerator complex PIP-II for DUNE and a future upgrade to PIP-III) are outlined. Potential and prospects for advanced “nonlinear dynamic techniques” at the multi-MW intensity frontier and advanced “plasma-wakefield-based techniques” at the TeV-scale energy frontier are also described.

  12. Nonparametric Bayesian inference for multidimensional compound Poisson processes

    NARCIS (Netherlands)

    Gugushvili, S.; van der Meulen, F.; Spreij, P.

    2015-01-01

    Given a sample from a discretely observed multidimensional compound Poisson process, we study the problem of nonparametric estimation of its jump size density r0 and intensity λ0. We take a nonparametric Bayesian approach to the problem and determine posterior contraction rates in this context,

  13. Nonparametric analysis of blocked ordered categories data: some examples revisited

    Directory of Open Access Journals (Sweden)

    O. Thas

    2006-08-01

    Nonparametric analysis for general block designs can be given by using the Cochran-Mantel-Haenszel (CMH) statistics. We demonstrate this with four examples and note that several well-known nonparametric statistics are special cases of CMH statistics.
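    A minimal illustration of the CMH idea for K stratified 2×2 tables; the formula is the standard continuity-corrected CMH chi-square, and the table counts below are invented:

```python
import numpy as np
from math import erfc, sqrt

def cmh_statistic(tables):
    """Continuity-corrected Cochran-Mantel-Haenszel chi-square for K
    stratified 2x2 tables, given as an array of shape (K, 2, 2)."""
    t  = np.asarray(tables, dtype=float)
    a  = t[:, 0, 0]                     # top-left cell per stratum
    r1 = t[:, 0, :].sum(axis=1)         # row-1 totals per stratum
    c1 = t[:, :, 0].sum(axis=1)         # column-1 totals per stratum
    n  = t.sum(axis=(1, 2))
    r2, c2 = n - r1, n - c1
    expect = r1*c1/n                    # E[a] under the null, per stratum
    var = r1*r2*c1*c2/(n**2*(n - 1))    # hypergeometric variance
    return (abs(a.sum() - expect.sum()) - 0.5)**2/var.sum()

# two invented strata (blocks), each a 2x2 treatment-by-outcome table
tables = [[[10, 5], [3, 12]],
          [[8, 7], [4, 11]]]
chi2 = cmh_statistic(tables)
p = erfc(sqrt(chi2/2))                  # upper tail of chi-square (1 df)
```
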

  14. Nonparametric statistics with applications to science and engineering

    CERN Document Server

    Kvam, Paul H

    2007-01-01

    A thorough and definitive book that fully addresses traditional and modern-day topics of nonparametric statistics This book presents a practical approach to nonparametric statistical analysis and provides comprehensive coverage of both established and newly developed methods. With the use of MATLAB, the authors present information on theorems and rank tests in an applied fashion, with an emphasis on modern methods in regression and curve fitting, bootstrap confidence intervals, splines, wavelets, empirical likelihood, and goodness-of-fit testing. Nonparametric Statistics with Applications to Science and Engineering begins with succinct coverage of basic results for order statistics, methods of categorical data analysis, nonparametric regression, and curve fitting methods. The authors then focus on nonparametric procedures that are becoming more relevant to engineering researchers and practitioners. The important fundamental materials needed to effectively learn and apply the discussed methods are also provide...

  15. Nonparametric Change Point Diagnosis Method of Concrete Dam Crack Behavior Abnormality

    OpenAIRE

    Li, Zhanchao; Gu, Chongshi; Wu, Zhongru

    2013-01-01

    The study on diagnosis method of concrete crack behavior abnormality has always been a hot spot and difficulty in the safety monitoring field of hydraulic structure. Based on the performance of concrete dam crack behavior abnormality in parametric statistical model and nonparametric statistical model, the internal relation between concrete dam crack behavior abnormality and statistical change point theory is deeply analyzed from the model structure instability of parametric statistical model ...

  16. Nonparametric methods in actigraphy: An update

    Directory of Open Access Journals (Sweden)

    Bruno S.B. Gonçalves

    2014-09-01

    Circadian rhythmicity in humans has been well studied using actigraphy, a method of measuring gross motor movement. As actigraphic technology continues to evolve, it is important for data analysis to keep pace with new variables and features. Our objective is to study the behavior of two variables, interdaily stability (IS) and intradaily variability (IV), used to describe the rest-activity rhythm. Simulated data and actigraphy data of humans, rats, and marmosets were used in this study. We modified the method of calculation for IV and IS by varying the time intervals of analysis. For each variable, we calculated the average value (IVm and ISm) across the time intervals. Simulated data showed that (1) synchronization analysis depends on sample size, and (2) fragmentation is independent of the amplitude of the generated noise. We were able to obtain a significant difference in the fragmentation patterns of stroke patients using the IVm variable, while the variable IV60 did not identify it. Rhythmic synchronization of activity and rest was significantly higher in young subjects than in adults with Parkinson's disease when using the ISm variable; however, this difference was not seen using IS60. We propose an updated format to calculate rhythmic fragmentation, including two additional optional variables. These alternative methods of nonparametric analysis aim to more precisely detect sleep-wake cycle fragmentation and synchronization.
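    The IS and IV variables have standard definitions: IS is the ratio of the variance of the average 24 h profile to the total variance, and IV is the normalized mean squared successive difference. A sketch under those standard hourly-binned formulas (not the authors' modified-interval code), using simulated series:

```python
import numpy as np

def interdaily_stability(x, period=24):
    """IS: variance of the average 24 h profile over total variance (0..1)."""
    x = np.asarray(x, float)
    n = len(x)
    profile = x.reshape(-1, period).mean(axis=0)   # hourly means across days
    return n*((profile - x.mean())**2).sum() / (period*((x - x.mean())**2).sum())

def intradaily_variability(x):
    """IV: mean squared successive difference over total variance (~0..2)."""
    x = np.asarray(x, float)
    n = len(x)
    return n*(np.diff(x)**2).sum() / ((n - 1)*((x - x.mean())**2).sum())

# 7 days of hourly data: a clean sinusoid vs. pure white noise
t = np.arange(7*24)
clean = 1 + np.sin(2*np.pi*t/24)                   # perfectly periodic rhythm
noisy = np.random.default_rng(0).standard_normal(7*24)

is_clean, iv_clean = interdaily_stability(clean), intradaily_variability(clean)
is_noise, iv_noise = interdaily_stability(noisy), intradaily_variability(noisy)
```

    A perfectly periodic signal gives IS near 1 and low IV; white noise gives low IS and IV near 2.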

  17. Nonparametric tests for equality of psychometric functions.

    Science.gov (United States)

    García-Pérez, Miguel A; Núñez-Antón, Vicente

    2017-12-07

    Many empirical studies measure psychometric functions (curves describing how observers' performance varies with stimulus magnitude) because these functions capture the effects of experimental conditions. To assess these effects, parametric curves are often fitted to the data and comparisons are carried out by testing for equality of mean parameter estimates across conditions. This approach is parametric and, thus, vulnerable to violations of the implied assumptions. Furthermore, testing for equality of means of parameters may be misleading: Psychometric functions may vary meaningfully across conditions on an observer-by-observer basis with no effect on the mean values of the estimated parameters. Alternative approaches to assess equality of psychometric functions per se are thus needed. This paper compares three nonparametric tests that are applicable in all situations of interest: The existing generalized Mantel-Haenszel test, a generalization of the Berry-Mielke test that was developed here, and a split variant of the generalized Mantel-Haenszel test also developed here. Their statistical properties (accuracy and power) are studied via simulation and the results show that all tests are indistinguishable as to accuracy but they differ non-uniformly as to power. Empirical use of the tests is illustrated via analyses of published data sets and practical recommendations are given. The computer code in MATLAB and R to conduct these tests is available as Electronic Supplemental Material.

  18. Discrete non-parametric kernel estimation for global sensitivity analysis

    International Nuclear Information System (INIS)

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently only the continuous kernel approach has been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. Now the discrete kernel estimation is known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented with its asymptotic convergence rate. Some simulations on a test function analysis and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.
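    The first-order ANOVA sensitivity index that such estimators target, S_i = Var(E[Y|X_i])/Var(Y), can be approximated for a discrete input by plain level-wise averaging when samples are plentiful. A brute-force sketch on an invented model (this is not the discrete-kernel estimator itself, just the quantity it estimates):

```python
import numpy as np

rng = np.random.default_rng(1)

# invented model with one discrete and one continuous input
x1 = rng.integers(0, 4, size=20000)        # discrete input, levels 0..3
x2 = rng.uniform(-1, 1, size=20000)        # continuous input
y  = 2.0*x1 + x2 + 0.1*rng.standard_normal(20000)

# first-order ANOVA index of the discrete input:
# S1 = Var(E[Y | X1]) / Var(Y), estimated by level-wise averaging
levels    = np.unique(x1)
cond_mean = np.array([y[x1 == lev].mean() for lev in levels])
weights   = np.array([(x1 == lev).mean() for lev in levels])
grand     = (weights*cond_mean).sum()
s1 = (weights*(cond_mean - grand)**2).sum()/y.var()
# analytically, S1 = 4*Var(X1)/Var(Y) = 5/(5 + 1/3 + 0.01), about 0.94
```

    The kernel estimator of the paper smooths the conditional means instead of averaging within exact levels, which matters when data are scarce.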

  19. Bootstrapping the economy -- a non-parametric method of generating consistent future scenarios

    OpenAIRE

    Müller, Ulrich A; Bürgi, Roland; Dacorogna, Michel M

    2004-01-01

    The fortune and the risk of a business venture depends on the future course of the economy. There is a strong demand for economic forecasts and scenarios that can be applied to planning and modeling. While there is an ongoing debate on modeling economic scenarios, the bootstrapping (or resampling) approach presented here has several advantages. As a non-parametric method, it directly relies on past market behaviors rather than debatable assumptions on models and parameters. Simultaneous dep...
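    The core resampling step can be sketched in a few lines: drawing whole historical periods (rows) preserves the cross-series dependence within each period. The returns below are invented, and the authors' method involves further refinements beyond this plain bootstrap:

```python
import numpy as np

rng = np.random.default_rng(42)

# historical quarterly returns of two markets (hypothetical numbers)
hist = np.array([[ 0.021,  0.013], [-0.034, -0.008], [ 0.015,  0.004],
                 [ 0.042,  0.019], [-0.011,  0.002], [ 0.008, -0.003]])

def bootstrap_scenarios(hist, horizon, n_scenarios, rng):
    """Resample whole historical periods (rows) with replacement, so the
    cross-market dependence within each period is preserved."""
    idx = rng.integers(0, len(hist), size=(n_scenarios, horizon))
    return hist[idx]                 # shape (n_scenarios, horizon, n_series)

scen = bootstrap_scenarios(hist, horizon=8, n_scenarios=1000, rng=rng)
cum  = (1 + scen).prod(axis=1) - 1   # cumulative return per scenario
```

    Each simulated path is built only from behaviors actually observed in the past, which is the non-parametric selling point of the approach.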

  20. Energy not the only frontier

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    While the push for big new machines to explore high energy frontiers makes the headlines, other avenues for physics progress are still being actively explored. To reflect these efforts, theorists and experimenters from the experiments committees for CERN's two major existing machines – the PS Proton Synchrotron and the SPS Super Proton Synchrotron – joined forces in study groups to look at long term physics perspectives. As one experimenter put it, 'there are frontiers of high complexity and high precision as well as high energy'. The groups' findings were aired at a special joint open meeting of the two committees at CERN on 31 August and 1 September

  1. Analysing the length of care episode after hip fracture: a nonparametric and a parametric Bayesian approach.

    Science.gov (United States)

    Riihimäki, Jaakko; Sund, Reijo; Vehtari, Aki

    2010-06-01

    Effective utilisation of limited resources is a challenge for health care providers. Accurate and relevant information extracted from the length of stay distributions is useful for management purposes. Patient care episodes can be reconstructed from the comprehensive health registers, and in this paper we develop a Bayesian approach to analyse the length of care episode after a fractured hip. We model the large scale data with a flexible nonparametric multilayer perceptron network and with a parametric Weibull mixture model. To assess the performances of the models, we estimate expected utilities using predictive density as a utility measure. Since the model parameters cannot be directly compared, we focus on observables, and estimate the relevances of patient explanatory variables in predicting the length of stay. To demonstrate how the use of the nonparametric flexible model is advantageous for this complex health care data, we also study joint effects of variables in predictions, and visualise nonlinearities and interactions found in the data.

  2. Energy not the only frontier

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    In the major world areas active in high energy physics, proposals have been prepared for new machines to manufacture intense beams of strongly interacting particles (hadrons) to complement the physics coming in from the high energy frontier. An information session on these plans for intense hadron facilities was included in the Third International Conference on the Intersections between Particle and Nuclear Physics, held in Rockport, Maine, in May

  3. The two frontiers of physics

    International Nuclear Information System (INIS)

    Anon.

    1986-01-01

    In March at Garching, near Munich, physicists from different walks of life together took another hard look at the two major frontiers of physics – the very large and the infinitesimally small. Organized jointly by CERN and the European Southern Observatory (ESO), the Garching 'Symposium on Cosmology, Astronomy and Fundamental Physics' was the second in a series launched at CERN in November 1983

  4. Frontiers of finance: evolution and efficient markets.

    Science.gov (United States)

    Farmer, J D; Lo, A W

    1999-08-31

    In this review article, we explore several recent advances in the quantitative modeling of financial markets. We begin with the Efficient Markets Hypothesis and describe how this controversial idea has stimulated a number of new directions of research, some focusing on more elaborate mathematical models that are capable of rationalizing the empirical facts, others taking a completely different tack in rejecting rationality altogether. One of the most promising directions is to view financial markets from a biological perspective and, specifically, within an evolutionary framework in which markets, instruments, institutions, and investors interact and evolve dynamically according to the "law" of economic selection. Under this view, financial agents compete and adapt, but they do not necessarily do so in an optimal fashion. Evolutionary and ecological modeling of financial markets is truly a new frontier whose exploration has just begun.

  5. Non-parametric Tuning of PID Controllers A Modified Relay-Feedback-Test Approach

    CERN Document Server

    Boiko, Igor

    2013-01-01

    The relay feedback test (RFT) has become a popular and efficient tool used in process identification and automatic controller tuning. Non-parametric Tuning of PID Controllers couples new modifications of the classical RFT with application-specific optimal tuning rules to form a non-parametric method of test-and-tuning. Test and tuning are coordinated through a set of common parameters so that a PID controller can obtain the desired gain or phase margins in a system exactly, even with unknown process dynamics. The concept of process-specific optimal tuning rules in the nonparametric setup, with corresponding tuning rules for flow, level, pressure, and temperature control loops, is presented in the text. Common problems of tuning accuracy based on parametric and non-parametric approaches are addressed. In addition, the text treats the parametric approach to tuning based on the modified RFT approach and the exact model of oscillations in the system under test using the locus of a perturbed relay system (LPRS) meth...
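    For orientation, the classical (unmodified) relay feedback test estimates the ultimate gain from the describing function, Ku = 4h/(πa), and textbook Ziegler-Nichols rules then give PID settings. This sketch shows the classical recipe with invented readings, not the book's modified RFT:

```python
from math import pi

def relay_tuning(h, a, Pu):
    """Estimate PID settings from a classical relay feedback test.
    h: relay amplitude, a: measured oscillation amplitude,
    Pu: measured oscillation period (the ultimate period)."""
    Ku = 4.0*h/(pi*a)        # describing-function estimate of ultimate gain
    Kp = 0.6*Ku              # Ziegler-Nichols PID rules from (Ku, Pu)
    Ti = Pu/2.0
    Td = Pu/8.0
    return Kp, Ti, Td

# hypothetical readings: relay height 1.0, oscillation amplitude 0.5,
# oscillation period 12 s
Kp, Ti, Td = relay_tuning(h=1.0, a=0.5, Pu=12.0)
```

    The book's contribution is precisely to replace these generic rules with modified tests and application-specific optimal ones.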

  6. Bioactive glasses: Frontiers and challenges

    Directory of Open Access Journals (Sweden)

    Larry L. Hench

    2015-11-01

    Bioactive glasses were discovered in 1969 and provided for the first time an alternative to nearly inert implant materials. Bioglass formed a rapid, strong and stable bond with host tissues. This article examines the frontiers of research crossed to achieve clinical use of bioactive glasses and glass-ceramics. In the 1980s it was discovered that bioactive glasses could be used in particulate form to stimulate osteogenesis, which thereby led to the concept of regeneration of tissues. Later, it was discovered that the dissolution ions from the glasses behaved like growth factors, providing signals to the cells. This article summarizes the frontiers of knowledge crossed during four eras of development of bioactive glasses that have led from the concept of bioactivity to widespread clinical and commercial use, with emphasis on the first composition, 45S5 Bioglass®. The four eras are: (a) discovery; (b) clinical application; (c) tissue regeneration; and (d) innovation. Questions still to be answered for the fourth era are included to stimulate innovation in the field and exploration of new frontiers that can be the basis for a general theory of bioactive stimulation of regeneration of tissues and application to numerous clinical needs.

  7. Inference from concave stochastic frontiers and the covariance of firm efficiency measures across firms

    International Nuclear Information System (INIS)

    Dashti, Imad

    2003-01-01

    This paper uses a Bayesian stochastic frontier model to obtain confidence intervals on firm efficiency measures of electric utilities rather than the point estimates reported in most previous studies. Results reveal that the stochastic frontier model yields imprecise measures of firm efficiency. However, the application produces much more precise inference on pairwise efficiency comparisons of firms due to a sometimes strong positive covariance of efficiency measures across firms. In addition, we examine the sensitivity to functional form by repeating the analysis for Cobb-Douglas, translog and Fourier frontiers, with and without imposing monotonicity and concavity

  8. A tradeoff frontier for global nitrogen use and cereal production

    International Nuclear Information System (INIS)

    Mueller, Nathaniel D; West, Paul C; Gerber, James S; MacDonald, Graham K; Foley, Jonathan A; Polasky, Stephen

    2014-01-01

    Nitrogen fertilizer use across the world’s croplands enables high-yielding agricultural production, but does so at considerable environmental cost. Imbalances between nitrogen applied and nitrogen used by crops contribute to excess nitrogen in the environment, with negative consequences for water quality, air quality, and climate change. Here we utilize crop input-yield models to investigate how to minimize nitrogen application while achieving crop production targets. We construct a tradeoff frontier that estimates the minimum nitrogen fertilizer needed to produce a range of maize, wheat, and rice production levels. Additionally, we explore potential environmental consequences by calculating excess nitrogen along the frontier using a soil surface nitrogen balance model. We find considerable opportunity to achieve greater production and decrease both nitrogen application and post-harvest excess nitrogen. Our results suggest that current (circa 2000) levels of cereal production could be achieved with ∼50% less nitrogen application and ∼60% less excess nitrogen. If current global nitrogen application were held constant but spatially redistributed, production could increase ∼30%. If current excess nitrogen were held constant, production could increase ∼40%. Efficient spatial patterns of nitrogen use on the frontier involve substantial reductions in many high-use areas and moderate increases in many low-use areas. Such changes may be difficult to achieve in practice due to infrastructure, economic, or political constraints. Increases in agronomic efficiency would expand the frontier to allow greater production and environmental gains
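    With concave (diminishing-returns) yield response curves, a point on such a minimum-nitrogen frontier can be found greedily: always give the next unit of fertilizer to the region with the highest marginal yield, which equalizes marginal products along the way. A toy sketch with invented Mitscherlich-type response parameters, not the paper's crop models:

```python
import numpy as np

# Mitscherlich-type yield response per region (hypothetical parameters):
# yield(N) = ymax * (1 - exp(-c * (N + b)))   [tonnes]
ymax = np.array([9.0, 7.0, 5.0])
c    = np.array([0.015, 0.020, 0.010])
b    = np.array([20.0, 10.0, 30.0])      # "free" nitrogen already in the soil

def yields(N):
    return ymax*(1 - np.exp(-c*(N + b)))

def min_nitrogen_for(target, step=1.0):
    """Greedy allocation: each next unit of N goes to the region with the
    highest marginal yield; stop once total production reaches the target."""
    N = np.zeros(3)
    while yields(N).sum() < target:
        marginal = yields(N + step) - yields(N)
        N[np.argmax(marginal)] += step
        if N.sum() > 1e6:                # guard: target not attainable
            raise ValueError("target exceeds attainable production")
    return N

N_opt = min_nitrogen_for(target=15.0)    # one point on the frontier
```

    Sweeping the target over a range of production levels traces out the full frontier.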

  9. Multi-sample nonparametric treatments comparison in medical ...

    African Journals Online (AJOL)

    Multi-sample nonparametric treatments comparison in medical follow-up study with unequal observation processes through simulation and bladder tumour case study. P. L. Tan, N.A. Ibrahim, M.B. Adam, J. Arasan ...

  10. Adaptive nonparametric estimation for Lévy processes observed at low frequency

    OpenAIRE

    Kappus, Johanna

    2013-01-01

    This article deals with adaptive nonparametric estimation for Lévy processes observed at low frequency. For general linear functionals of the Lévy measure, we construct kernel estimators, provide upper risk bounds and derive rates of convergence under regularity assumptions. Our focus lies on the adaptive choice of the bandwidth, using model selection techniques. We face here a non-standard problem of model selection with unknown variance. A new approach towards this problem is proposed, ...

  11. A nonparametric approach to medical survival data: Uncertainty in the context of risk in mortality analysis

    International Nuclear Information System (INIS)

    Janurová, Kateřina; Briš, Radim

    2014-01-01

    Medical survival right-censored data of about 850 patients are evaluated to analyze the uncertainty related to the risk of mortality on one hand and to compare two basic surgery techniques in the context of risk of mortality on the other hand. Colorectal data come from patients who underwent colectomy in the University Hospital of Ostrava. Two basic surgery operating techniques are used for the colectomy: either traditional (open) or minimally invasive (laparoscopic). The basic question arising at the colectomy operation is which type of operation to choose to guarantee a longer overall survival time. Two non-parametric approaches have been used to quantify the probability of mortality with uncertainties. In fact, the complement of this probability, i.e. the survival function with corresponding confidence levels, is calculated and evaluated. The first approach considers standard nonparametric estimators resulting from both the Kaplan–Meier estimator of the survival function in connection with Greenwood's formula and the Nelson–Aalen estimator of the cumulative hazard function, including a confidence interval for the survival function as well. The second, innovative approach, represented by Nonparametric Predictive Inference (NPI), uses lower and upper probabilities for quantifying uncertainty and provides a model of the predictive survival function instead of the population survival function. The traditional log-rank test on one hand and the nonparametric predictive comparison of two groups of lifetime data on the other hand have been compared to evaluate the risk of mortality in the context of the mentioned surgery techniques. The size of the difference between the two groups of lifetime data has been considered and analyzed as well. Both nonparametric approaches led to the same conclusion: the minimally invasive operating technique guarantees the patient a significantly longer survival time in comparison with the traditional operating technique
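    The building blocks of the first approach are standard; a from-scratch Kaplan-Meier estimator with Greenwood's variance formula, on toy data rather than the colectomy registry:

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival estimate with Greenwood standard errors.
    time: observed times; event: 1 = death observed, 0 = right-censored.
    Returns a list of (event time, S(t), SE of S(t))."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    order = np.argsort(time)
    time, event = time[order], event[order]
    surv, var_sum, out = 1.0, 0.0, []
    for t in np.unique(time[event == 1]):
        d = ((time == t) & (event == 1)).sum()   # deaths at t
        n = (time >= t).sum()                    # number at risk just before t
        surv *= 1 - d/n
        var_sum += d/(n*(n - d))                 # Greenwood accumulator
        out.append((t, surv, surv*np.sqrt(var_sum)))
    return out

# toy data: times with a censored observation at 5 and at 10
km = kaplan_meier([3, 5, 5, 8, 10, 12], [1, 1, 0, 1, 0, 1])
```

    The NPI approach of the paper replaces this single curve with lower and upper survival functions.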

  12. Speaker Linking and Applications using Non-Parametric Hashing Methods

    Science.gov (United States)

    2016-09-08

    Douglas Sturim and William M. Campbell, MIT Lincoln Laboratory, Lexington, MA. … with many approaches [1, 2]. For this paper, we focus on using i-vectors [2], but the methods apply to any embedding. For the task of speaker QBE and …

  13. Non-parametric Bayesian networks: Improving theory and reviewing applications

    International Nuclear Information System (INIS)

    Hanea, Anca; Morales Napoles, Oswaldo; Ababei, Dan

    2015-01-01

    Applications in various domains often lead to high dimensional dependence modelling. A Bayesian network (BN) is a probabilistic graphical model that provides an elegant way of expressing the joint distribution of a large number of interrelated variables. BNs have been successfully used to represent uncertain knowledge in a variety of fields. The majority of applications use discrete BNs, i.e. BNs whose nodes represent discrete variables. Integrating continuous variables in BNs is an area fraught with difficulty. Several methods that handle discrete-continuous BNs have been proposed in the literature. This paper concentrates only on one method called non-parametric BNs (NPBNs). NPBNs were introduced in 2004 and they have been or are currently being used in at least twelve professional applications. This paper provides a short introduction to NPBNs, a couple of theoretical advances, and an overview of applications. The aim of the paper is twofold: one is to present the latest improvements of the theory underlying NPBNs, and the other is to complement the existing overviews of BN applications with the NPBN applications. The latter opens the opportunity to discuss some difficulties that applications pose to the theoretical framework and in this way offers some NPBN modelling guidance to practitioners. - Highlights: • The paper gives an overview of the current NPBNs methodology. • We extend the NPBN methodology by relaxing the conditions of one of its fundamental theorems. • We propose improvements of the data mining algorithm for the NPBNs. • We review the professional applications of the NPBNs.

  14. Nonparametric predictive inference for combining diagnostic tests with parametric copula

    Science.gov (United States)

    Muhammad, Noryanti; Coolen, F. P. A.; Coolen-Maturi, T.

    2017-09-01

    Measuring the accuracy of diagnostic tests is crucial in many application areas including medicine and health care. The Receiver Operating Characteristic (ROC) curve is a popular statistical tool for describing the performance of diagnostic tests. The area under the ROC curve (AUC) is often used as a measure of the overall performance of the diagnostic test. In this paper, we are interested in developing strategies for combining test results in order to increase the diagnostic accuracy. We introduce nonparametric predictive inference (NPI) for combining two diagnostic test results, taking the dependence structure into account using a parametric copula. NPI is a frequentist statistical framework for inference on a future observation based on past data observations. NPI uses lower and upper probabilities to quantify uncertainty and is based on only a few modelling assumptions. A copula is a well-known statistical concept for modelling dependence of random variables: a joint distribution function whose marginals are all uniformly distributed, which can be used to model the dependence separately from the marginal distributions. In this research, we estimate the copula density using a parametric method, namely the maximum likelihood estimator (MLE). We investigate the performance of the proposed method via data sets from the literature and discuss the results to show how our method performs for different families of copulas. Finally, we briefly outline related challenges and opportunities for future research.
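    The AUC of each test, and of a combined score, can be estimated nonparametrically via the Mann-Whitney statistic. A sketch with invented scores and a simple sum-rule combination (the paper's NPI-with-copula machinery is considerably more involved):

```python
import numpy as np

def empirical_auc(cases, controls):
    """Mann-Whitney estimate of AUC = P(case score > control score),
    counting ties as 1/2."""
    cases, controls = np.asarray(cases, float), np.asarray(controls, float)
    diff = cases[:, None] - controls[None, :]     # all case-control pairs
    return (diff > 0).mean() + 0.5*(diff == 0).mean()

# two test results per subject (hypothetical scores; columns = tests)
cases    = np.array([[2.1, 1.8], [1.4, 2.3], [2.8, 2.0], [1.1, 1.5]])
controls = np.array([[0.9, 1.2], [1.6, 0.7], [0.5, 1.1], [1.3, 0.8]])

auc1 = empirical_auc(cases[:, 0], controls[:, 0])
auc2 = empirical_auc(cases[:, 1], controls[:, 1])
combined = empirical_auc(cases.sum(axis=1), controls.sum(axis=1))  # sum rule
```

    A good combination rule should push the combined AUC above that of either single test; modeling the dependence between the two tests, as the paper does with a copula, is what makes the combination principled.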

  15. Frontiers of reliability

    CERN Document Server

    Basu, Asit P; Basu, Sujit K

    1998-01-01

    This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers, and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibul

  16. A Bayesian nonparametric approach to reconstruction and prediction of random dynamical systems

    Science.gov (United States)

    Merkatas, Christos; Kaloudis, Konstantinos; Hatjispyros, Spyridon J.

    2017-06-01

    We propose a Bayesian nonparametric mixture model for the reconstruction and prediction of discretized stochastic dynamical systems from observed time series data, based on Markov Chain Monte Carlo methods. Our results can be used by researchers in physical modeling interested in a fast and accurate estimation of low dimensional stochastic models when the size of the observed time series is small and the noise process (perhaps) is non-Gaussian. The inference procedure is demonstrated specifically in the case of polynomial maps of an arbitrary degree and when a Geometric Stick Breaking mixture process prior over the space of densities is applied to the additive errors. Our method is parsimonious compared to Bayesian nonparametric techniques based on Dirichlet process mixtures, flexible and general. Simulations based on synthetic time series are presented.

  17. Bayesian nonparametric estimation of continuous monotone functions with applications to dose-response analysis.

    Science.gov (United States)

    Bornkamp, Björn; Ickstadt, Katja

    2009-03-01

    In this article, we consider monotone nonparametric regression in a Bayesian framework. The monotone function is modeled as a mixture of shifted and scaled parametric probability distribution functions, and a general random probability measure is assumed as the prior for the mixing distribution. We investigate the choice of the underlying parametric distribution function and find that the two-sided power distribution function is well suited both from a computational and mathematical point of view. The model is motivated by traditional nonlinear models for dose-response analysis, and provides possibilities to elicitate informative prior distributions on different aspects of the curve. The method is compared with other recent approaches to monotone nonparametric regression in a simulation study and is illustrated on a data set from dose-response analysis.
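    The idea of building a monotone curve as a nonnegative mixture of shifted and scaled CDFs can be illustrated with a logistic-CDF basis and a nonnegative least-squares fit via multiplicative updates. This is a frequentist toy version of the representation, not the paper's Bayesian model with two-sided power distributions:

```python
import numpy as np

def logistic_cdf(z):
    return 1.0/(1.0 + np.exp(-z))

# basis of shifted/scaled CDFs; any nonnegative combination is monotone
x = np.linspace(0, 1, 50)
centers = np.linspace(0.05, 0.95, 10)
A = logistic_cdf((x[:, None] - centers[None, :])/0.1)   # (50, 10) design

# target curve: itself a two-component mixture (noise-free, for illustration)
y = 0.7*A[:, 4] + 0.3*A[:, 7]

# nonnegative least squares via multiplicative updates (weights stay >= 0)
w = np.ones(centers.size)
Aty = A.T @ y
for _ in range(20000):
    w *= Aty/(A.T @ (A @ w) + 1e-12)
fit = A @ w        # nondecreasing by construction, since w >= 0
```

    The Bayesian model replaces the point-estimated weights with a random probability measure over the mixing distribution.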

  19. A nonparametric empirical Bayes framework for large-scale multiple testing.

    Science.gov (United States)

    Martin, Ryan; Tokdar, Surya T

    2012-07-01

    We propose a flexible and identifiable version of the 2-groups model, motivated by hierarchical Bayes considerations, that features an empirical null and a semiparametric mixture model for the nonnull cases. We use a computationally efficient predictive recursion (PR) marginal likelihood procedure to estimate the model parameters, even the nonparametric mixing distribution. This leads to a nonparametric empirical Bayes testing procedure, which we call PRtest, based on thresholding the estimated local false discovery rates. Simulations and real data examples demonstrate that, compared to existing approaches, PRtest's careful handling of the nonnull density can give a much better fit in the tails of the mixture distribution which, in turn, can lead to more realistic conclusions.

  20. Network frontier as a metaphor and myth

    Directory of Open Access Journals (Sweden)

    N V Plotichkina

    2017-12-01

    This article considers spatial metaphors of the Internet and the possibility of extrapolating F. Turner's frontier thesis to electronic space. The authors believe that information and communication technologies and the digital world have become new spaces for the expansion of states and individuals, which is why there are ongoing scientific debates on the limits and potential of the western and electronic frontier metaphors for the analytical description of digital space. The metaphor of the Internet as a western frontier is quite controversial; many authors prefer the electronic frontier analogy as more heuristic and valid for constructing metaphors of digital reality. The network frontier is defined as a dynamic, elastic and permeable border of the social and cultural practices of the network society. The authors assess the heuristic potential of the concept of the 'network frontier', developed by integrating frontier theory with the concept of the 'network society' and taking into account the effects of globalization, for the study of the elastic, permeable and movable border of the network landscape. In the digital world, spatiality is transformed: the geography of the Internet determines the metamorphosis of the frontier as a contact zone between online and offline spaces, one that is dynamic and innovative, encourages mobility, and whose permeability depends on the digital competence of citizens. The authors explain the mythology of the western and electronic frontiers; name the main network frontier myths related to the rhetoric of the western frontier myth; describe the main components of the western frontier myth associated with the idea of American exceptionalism; and conclude by identifying present-day myths about frontiersmen and the online space they master.

  1. HERA the new frontier

    International Nuclear Information System (INIS)

    Feltesse, J.

    1992-01-01

    The large storage ring Hadron-Electron Ring Accelerator (HERA) has been completed at DESY. The first collisions for physics studies are scheduled for spring 1992. HERA is the first electron-proton storage ring ever built. Electrons and protons of nominal energies E_e = 30 GeV and E_p = 820 GeV will collide at a center-of-mass energy of 314 GeV. The beam energies can be varied, while maintaining luminosity, over the ranges E_e = 10-35 GeV and E_p = 300-1000 GeV. The theoretical motivations and questions raised by quantum chromodynamics in this new domain are approached at a phenomenological level. The measurement of x, the momentum fraction carried by the struck quark inside the proton, and of Q^2 is reviewed over the domain accessible at HERA energies, using the scattered-electron and hadron-flow laboratory variables. The various experimental methods to extract the gluon distribution are described. Other physics opportunities to test the standard model are briefly given. Finally, a few examples of processes not expected by the standard model, but within the reach of HERA energies, are outlined

  2. Efficient nonparametric n-body force fields from machine learning

    Science.gov (United States)

    Glielmo, Aldo; Zeni, Claudio; De Vita, Alessandro

    2018-05-01

    We provide a definition and explicit expressions for n-body Gaussian process (GP) kernels, which can learn any interatomic interaction occurring in a physical system, up to n-body contributions, for any value of n. The series is complete, as it can be shown that the "universal approximator" squared exponential kernel can be written as a sum of n-body kernels. These recipes enable the choice of optimally efficient force models for each target system, as confirmed by extensive testing on various materials. We furthermore describe how the n-body kernels can be "mapped" on equivalent representations that provide database-size-independent predictions and are thus crucially more efficient. We explicitly carry out this mapping procedure for the first nontrivial (three-body) kernel of the series, and we show that this reproduces the GP-predicted forces with meV/Å accuracy while being orders of magnitude faster. These results pave the way to using novel force models (here named "M-FFs") that are computationally as fast as their corresponding standard parametrized n-body force fields, while retaining the nonparametric character, the ease of training and validation, and the accuracy of the best recently proposed machine-learning potentials.

  3. Vikings and the Western Frontier

    OpenAIRE

    Wienberg, Jes

    2015-01-01

    The article investigates how and why the Vikings became world-famous. The point of departure is the World Exposition in Chicago in 1893, where an icon for the Viking, a replica of the Gokstad ship, arrived the very same day as Frederick Jackson Turner presented his frontier thesis. The origin of the word Viking, the romantic revival of the Viking, the creation of the Viking Age and the criticism of the Viking and the Viking Age is discussed. Finally the article argues that the Viking and the ...

  4. South African Homelands as Frontiers

    DEFF Research Database (Denmark)

    of frontier zones, the homelands emerge as areas in which the future of the South African postcolony is being renegotiated, contested and remade with hyper-real intensity. This is so because the many fault lines left over from apartheid (its loose ends, so to speak) – between white and black; between...... in these settings that the postcolonial promise of liberation and freedom must face its test. As such, the book offers highly nuanced and richly detailed analyses that go to the heart of the diverse dilemmas of post-apartheid South Africa as a whole, but simultaneously also provides in condensed form an extended...

  5. New Frontiers of Land Control

    DEFF Research Database (Denmark)

    Lee Peluso, Nancy; Lund, Christian

    2011-01-01

    rights, and territories created, extracted, produced, or protected on land. Primitive and on-going forms of accumulation, frontiers, enclosures, territories, grabs, and racializations have all been associated with mechanisms for land control. Agrarian environments have been transformed by processes of de...... analytic tools that had seemed to have timeless applicability with new frameworks, concepts, and theoretical tools. What difference does land control make? These contributions to the debates demonstrate that the answers have been shaped by conflicts, contexts, histories, and agency, as land has been...

  6. New frontiers in PDF determination

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Parton Distribution Functions (PDFs) are a crucial input at the LHC, their uncertainty often being the limiting factor in the accuracy of theoretical predictions. At the same time the LHC is delivering a number of precise measurements that have the potential to greatly constrain these functions. I will give an overview on the theory behind and on the state of the art of PDF determination. I will then mention the new theoretical and methodological challenges in modern PDF fits and explore the precision frontiers opened by the accuracy of the LHC data.

  7. Frontiers International Conference on Wastewater Treatment

    CERN Document Server

    2017-01-01

    This book describes the latest research advances, innovations, and applications in the field of water management and environmental engineering as presented by leading researchers, engineers, life scientists and practitioners from around the world at the Frontiers International Conference on Wastewater Treatment (FICWTM), held in Palermo, Italy in May 2017. The topics covered are highly diverse and include the physical processes of mixing and dispersion, biological developments and mathematical modeling, such as computational fluid dynamics in wastewater, MBBR and hybrid systems, membrane bioreactors, anaerobic digestion, reduction of greenhouse gases from wastewater treatment plants, and energy optimization. The contributions amply demonstrate that the application of cost-effective technologies for waste treatment and control is urgently needed so as to implement appropriate regulatory measures that ensure pollution prevention and remediation, safeguard public health, and preserve the environment. The contrib...

  8. Fundamental Physics at the Intensity Frontier

    CERN Document Server

    Hewett, J.L.; Brock, R.; Butler, J.N.; Casey, B.C.K.; Collar, J.; de Gouvea, A.; Essig, R.; Grossman, Y.; Haxton, W.; Jaros, J.A.; Jung, C.K.; Lu, Z.T.; Pitts, K.; Ligeti, Z.; Patterson, J.R.; Ramsey-Musolf, M.; Ritchie, J.L.; Roodman, A.; Scholberg, K.; Wagner, C.E.M.; Zeller, G.P.; Aefsky, S.; Afanasev, A.; Agashe, K.; Albright, C.; Alonso, J.; Ankenbrandt, C.; Aoki, M.; Arguelles, C.A.; Arkani-Hamed, N.; Armendariz, J.R.; Armendariz-Picon, C.; Arrieta Diaz, E.; Asaadi, J.; Asner, D.M.; Babu, K.S.; Bailey, K.; Baker, O.; Balantekin, B.; Baller, B.; Bass, M.; Batell, B.; Beacham, J.; Behr, J.; Berger, N.; Bergevin, M.; Berman, E.; Bernstein, R.; Bevan, A.J.; Bishai, M.; Blanke, M.; Blessing, S.; Blondel, A.; Blum, T.; Bock, G.; Bodek, A.; Bonvicini, G.; Bossi, F.; Boyce, J.; Breedon, R.; Breidenbach, M.; Brice, S.J.; Briere, R.A.; Brodsky, S.; Bromberg, C.; Bross, A.; Browder, T.E.; Bryman, D.A.; Buckley, M.; Burnstein, R.; Caden, E.; Campana, P.; Carlini, R.; Carosi, G.; Castromonte, C.; Cenci, R.; Chakaberia, I.; Chen, Mu-Chun; Cheng, C.H.; Choudhary, B.; Christ, N.H.; Christensen, E.; Christy, M.E.; Chupp, T.E.; Church, E.; Cline, D.B.; Coan, T.E.; Coloma, P.; Comfort, J.; Coney, L.; Cooper, J.; Cooper, R.J.; Cowan, R.; Cowen, D.F.; Cronin-Hennessy, D.; Datta, A.; Davies, G.S.; Demarteau, M.; DeMille, D.P.; Denig, A.; Dermisek, R.; Deshpande, A.; Dewey, M.S.; Dharmapalan, R.; Dhooghe, J.; Dietrich, M.R.; Diwan, M.; Djurcic, Z.; Dobbs, S.; Duraisamy, M.; Dutta, Bhaskar; Duyang, H.; Dwyer, D.A.; Eads, M.; Echenard, B.; Elliott, S.R.; Escobar, C.; Fajans, J.; Farooq, S.; Faroughy, C.; Fast, J.E.; Feinberg, B.; Felde, J.; Feldman, G.; Fierlinger, P.; Fileviez Perez, P.; Filippone, B.; Fisher, P.; Flemming, B.T.; Flood, K.T.; Forty, R.; Frank, M.J.; Freyberger, A.; Friedland, A.; Gandhi, R.; Ganezer, K.S.; Garcia, A.; Garcia, F.G.; Gardner, S.; Garrison, L.; Gasparian, A.; Geer, S.; Gehman, V.M.; Gershon, T.; Gilchriese, M.; Ginsberg, C.; Gogoladze, I.; 
Gonderinger, M.; Goodman, M.; Gould, H.; Graham, M.; Graham, P.W.; Gran, R.; Grange, J.; Gratta, G.; Green, J.P.; Greenlee, H.; Group, R.C.; Guardincerri, E.; Gudkov, V.; Guenette, R.; Haas, A.; Hahn, A.; Han, T.; Handler, T.; Hardy, J.C.; Harnik, R.; Harris, D.A.; Harris, F.A.; Harris, P.G.; Hartnett, J.; He, B.; Heckel, B.R.; Heeger, K.M.; Henderson, S.; Hertzog, D.; Hill, R.; Hinds, E.A.; Hitlin, D.G.; Holt, R.J.; Holtkamp, N.; Horton-Smith, G.; Huber, P.; Huelsnitz, W.; Imber, J.; Irastorza, I.; Jaeckel, J.; Jaegle, I.; James, C.; Jawahery, A.; Jensen, D.; Jessop, C.P.; Jones, B.; Jostlein, H.; Junk, T.; Kagan, A.L.; Kalita, M.; Kamyshkov, Y.; Kaplan, D.M.; Karagiorgi, G.; Karle, A.; Katori, T.; Kayser, B.; Kephart, R.; Kettell, S.; Kim, Y.K.; Kirby, M.; Kirch, K.; Klein, J.; Kneller, J.; Kobach, A.; Kohl, M.; Kopp, J.; Kordosky, M.; Korsch, W.; Kourbanis, I.; Krisch, A.D.; Krizan, P.; Kronfeld, A.S.; Kulkarni, S.; Kumar, K.S.; Kuno, Y.; Kutter, T.; Lachenmaier, T.; Lamm, M.; Lancaster, J.; Lancaster, M.; Lane, C.; Lang, K.; Langacker, P.; Lazarevic, S.; Le, T.; Lee, K.; Lesko, K.T.; Li, Y.; Lindgren, M.; Lindner, A.; Link, J.; Lissauer, D.; Littenberg, L.S.; Littlejohn, B.; Liu, C.Y.; Loinaz, W.; Lorenzon, W.; Louis, W.C.; Lozier, J.; Ludovici, L.; Lueking, L.; Lunardini, C.; MacFarlane, D.B.; Machado, P.A.N.; Mackenzie, P.B.; Maloney, J.; Marciano, W.J.; Marsh, W.; Marshak, M.; Martin, J.W.; Mauger, C.; McFarland, K.S.; McGrew, C.; McLaughlin, G.; McKeen, D.; McKeown, R.; Meadows, B.T.; Mehdiyev, R.; Melconian, D.; Merkel, H.; Messier, M.; Miller, J.P.; Mills, G.; Minamisono, U.K.; Mishra, S.R.; Mocioiu, I.; Sher, S.Moed; Mohapatra, R.N.; Monreal, B.; Moore, C.D.; Morfin, J.G.; Mousseau, J.; Moustakas, L.A.; Mueller, G.; Mueller, P.; Muether, M.; Mumm, H.P.; Munger, C.; Murayama, H.; Nath, P.; Naviliat-Cuncin, O.; Nelson, J.K.; Neuffer, D.; Nico, J.S.; Norman, A.; Nygren, D.; Obayashi, Y.; O'Connor, T.P.; Okada, Y.; Olsen, J.; Orozco, L.; Orrell, J.L.; Osta, 
J.; Pahlka, B.; Paley, J.; Papadimitriou, V.; Papucci, M.; Parke, S.; Parker, R.H.; Parsa, Z.; Partyka, K.; Patch, A.; Pati, J.C.; Patterson, R.B.; Pavlovic, Z.; Paz, Gil; Perdue, G.N.; Perevalov, D.; Perez, G.; Petti, R.; Pettus, W.; Piepke, A.; Pivovaroff, M.; Plunkett, R.; Polly, C.C.; Pospelov, M.; Povey, R.; Prakesh, A.; Purohit, M.V.; Raby, S.; Raaf, J.L.; Rajendran, R.; Rajendran, S.; Rameika, G.; Ramsey, R.; Rashed, A.; Ratcliff, B.N.; Rebel, B.; Redondo, J.; Reimer, P.; Reitzner, D.; Ringer, F.; Ringwald, A.; Riordan, S.; Roberts, B.L.; Roberts, D.A.; Robertson, R.; Robicheaux, F.; Rominsky, M.; Roser, R.; Rosner, J.L.; Rott, C.; Rubin, P.; Saito, N.; Sanchez, M.; Sarkar, S.; Schellman, H.; Schmidt, B.; Schmitt, M.; Schmitz, D.W.; Schneps, J.; Schopper, A.; Schuster, P.; Schwartz, A.J.; Schwarz, M.; Seeman, J.; Semertzidis, Y.K.; Seth, K.K.; Shafi, Q.; Shanahan, P.; Sharma, R.; Sharpe, S.R.; Shiozawa, M.; Shiltsev, V.; Sigurdson, K.; Sikivie, P.; Singh, J.; Sivers, D.; Skwarnicki, T.; Smith, N.; Sobczyk, J.; Sobel, H.; Soderberg, M.; Song, Y.H.; Soni, A.; Souder, P.; Sousa, A.; Spitz, J.; Stancari, M.; Stavenga, G.C.; Steffen, J.H.; Stepanyan, S.; Stoeckinger, D.; Stone, S.; Strait, J.; Strassler, M.; Sulai, I.A.; Sundrum, R.; Svoboda, R.; Szczerbinska, B.; Szelc, A.; Takeuchi, T.; Tanedo, P.; Taneja, S.; Tang, J.; Tanner, D.B.; Tayloe, R.; Taylor, I.; Thomas, J.; Thorn, C.; Tian, X.; Tice, B.G.; Tobar, M.; Tolich, N.; Toro, N.; Towner, I.S.; Tsai, Y.; Tschirhart, R.; Tunnell, C.D.; Tzanov, M.; Upadhye, A.; Urheim, J.; Vahsen, S.; Vainshtein, A.; Valencia, E.; Van de Water, R.G.; Van de Water, R.S.; Velasco, M.; Vogel, J.; Vogel, P.; Vogelsang, W.; Wah, Y.W.; Walker, D.; Weiner, N.; Weltman, A.; Wendell, R.; Wester, W.; Wetstein, M.; White, C.; Whitehead, L.; Whitmore, J.; Widmann, E.; Wiedemann, G.; Wilkerson, J.; Wilkinson, G.; Wilson, P.; Wilson, R.J.; Winter, W.; Wise, M.B.; Wodin, J.; Wojcicki, S.; Wojtsekhowski, B.; Wongjirad, T.; Worcester, E.; 
Wurtele, J.; Xin, T.; Xu, J.; Yamanaka, T.; Yamazaki, Y.; Yavin, I.; Yeck, J.; Yeh, M.; Yokoyama, M.; Yoo, J.; Young, A.; Zimmerman, E.; Zioutas, K.; Zisman, M.; Zupan, J.; Zwaska, R.; Intensity Frontier Workshop

    2012-01-01

    The Proceedings of the 2011 workshop on Fundamental Physics at the Intensity Frontier. Science opportunities at the intensity frontier are identified and described in the areas of heavy quarks, charged leptons, neutrinos, proton decay, new light weakly-coupled particles, and nucleons, nuclei, and atoms.

  9. Frontier differences and the global malmquist index

    DEFF Research Database (Denmark)

    Asmild, Mette

    2015-01-01

    This chapter reviews different ways of comparing the efficiency frontiers for subgroups within a data set, specifically program efficiency, the metatechnology (or technology gap) ratio and the global frontier difference index. The latter is subsequently used to define a global Malmquist index...

  10. JILA Science | Exploring the frontiers of physics

    Science.gov (United States)

    JILA, a Physics Frontier Center, pursues research in molecular physics, biophysics, chemical physics, laser physics, nanoscience, precision measurement, and quantum physics.

  11. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    Science.gov (United States)

    Kumar, Sricharan; Srivistava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. These prediction intervals can then be used to determine whether an observed output is anomalous, conditioned on the input. In this paper, a procedure for determining prediction intervals for the outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow a nonparametric approach to computing prediction intervals, with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is proved theoretically, and their validity is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
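    As an illustration of the idea, a residual-bootstrap prediction interval for a simple Nadaraya-Watson smoother can be sketched as follows. This is a minimal sketch under assumed choices (Gaussian kernel, synthetic data, percentile intervals); the paper's exact resampling scheme and smoother may differ.

```python
import numpy as np

def nw_smooth(x_train, y_train, x_query, h=0.05):
    """Nadaraya-Watson kernel regression estimate at x_query (Gaussian kernel)."""
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / h) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

def bootstrap_pi(x_train, y_train, x_query, n_boot=500, alpha=0.05, seed=0):
    """Residual-bootstrap (1 - alpha) prediction interval at x_query."""
    rng = np.random.default_rng(seed)
    fit = nw_smooth(x_train, y_train, x_train)
    resid = y_train - fit
    preds = np.empty((n_boot, len(x_query)))
    for b in range(n_boot):
        # refit on a resampled-residual data set, then add fresh noise draws
        y_b = fit + rng.choice(resid, size=len(y_train), replace=True)
        preds[b] = (nw_smooth(x_train, y_b, x_query)
                    + rng.choice(resid, size=len(x_query), replace=True))
    return (np.quantile(preds, alpha / 2, axis=0),
            np.quantile(preds, 1 - alpha / 2, axis=0))

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 200)
lo, hi = bootstrap_pi(x, y, np.array([0.25, 0.5, 0.75]))
# a new observation (x0, y0) would be flagged anomalous if y0 falls outside [lo, hi]
```

    The percentile interval here captures both smoother variability and observation noise, which is what distinguishes a prediction interval from a confidence interval for the regression curve.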

  12. Scalable Bayesian nonparametric measures for exploring pairwise dependence via Dirichlet Process Mixtures.

    Science.gov (United States)

    Filippi, Sarah; Holmes, Chris C; Nieto-Barajas, Luis E

    2016-11-16

    In this article we propose novel Bayesian nonparametric methods using Dirichlet Process Mixture (DPM) models for detecting pairwise dependence between random variables while accounting for uncertainty in the form of the underlying distributions. A key criterion is that the procedures should scale to large data sets. In this regard we find that the formal calculation of the Bayes factor for a dependent-vs.-independent DPM joint probability measure is not feasible computationally. To address this we present Bayesian diagnostic measures for characterising evidence against a "null model" of pairwise independence. In simulation studies, as well as for a real data analysis, we show that our approach provides a useful tool for the exploratory nonparametric Bayesian analysis of large multivariate data sets.

  13. New frontiers in mucositis.

    Science.gov (United States)

    Peterson, Douglas E; Keefe, Dorothy M; Sonis, Stephen T

    2012-01-01

    Mucositis is among the most debilitating side effects of radiotherapy, chemotherapy, and targeted anticancer therapy. Research continues to escalate regarding key issues such as etiopathology, incidence and severity across different mucosae, relationships between mucosal and nonmucosal toxicities, and risk factors. This approach is being translated into enhanced management strategies. Recent technology advances provide an important foundation for this continuum. For example, evolution of applied genomics is fostering development of new algorithms to rapidly screen genomewide single-nucleotide polymorphisms (SNPs) for patient-associated risk prediction. This modeling will permit individual tailoring of the most effective, least toxic treatment in the future. The evolution of novel cancer therapeutics is changing the mucositis toxicity profile. These agents can be associated with unique mechanisms of mucosal damage. Additional research is needed to optimally manage toxicity caused by agents such as mammalian target of rapamycin (mTOR) inhibitors and tyrosine kinase inhibitors, without reducing antitumor effect. There has similarly been heightened attention across the health professions regarding clinical practice guidelines for mucositis management in the years following the first published guidelines in 2004. New opportunities exist to more effectively interface this collective guideline portfolio by capitalizing upon novel technologies such as an Internet-based Wiki platform. Substantive progress thus continues across many domains associated with mucosal injury in oncology patients. In addition to enhancing oncology patient care, these advances are being integrated into high-impact educational and scientific venues including the National Cancer Institute Physician Data Query (PDQ) portfolio as well as a new Gordon Research Conference on mucosal health and disease scheduled for June 2013.

  14. Smooth semi-nonparametric (SNP) estimation of the cumulative incidence function.

    Science.gov (United States)

    Duc, Anh Nguyen; Wolbers, Marcel

    2017-08-15

    This paper presents a novel approach to estimation of the cumulative incidence function in the presence of competing risks. The underlying statistical model is specified via a mixture factorization of the joint distribution of the event type and the time to the event. The time to event distributions conditional on the event type are modeled using smooth semi-nonparametric densities. One strength of this approach is that it can handle arbitrary censoring and truncation while relying on mild parametric assumptions. A stepwise forward algorithm for model estimation and adaptive selection of smooth semi-nonparametric polynomial degrees is presented, implemented in the statistical software R, evaluated in a sequence of simulation studies, and applied to data from a clinical trial in cryptococcal meningitis. The simulations demonstrate that the proposed method frequently outperforms both parametric and nonparametric alternatives. They also support the use of 'ad hoc' asymptotic inference to derive confidence intervals. An extension to regression modeling is also presented, and its potential and challenges are discussed. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  15. The Kernel Mixture Network: A Nonparametric Method for Conditional Density Estimation of Continuous Random Variables

    OpenAIRE

    Ambrogioni, Luca; Güçlü, Umut; van Gerven, Marcel A. J.; Maris, Eric

    2017-01-01

    This paper introduces the kernel mixture network, a new method for nonparametric estimation of conditional probability densities using neural networks. We model arbitrarily complex conditional densities as linear combinations of a family of kernel functions centered at a subset of training points. The weights are determined by the outer layer of a deep neural network, trained by minimizing the negative log likelihood. This generalizes the popular quantized softmax approach, which can be seen ...
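    A minimal numerical sketch of this construction follows. The numbers are hypothetical throughout: the logits stand in for the output of the deep network evaluated at one conditioning input x, and a real kernel mixture network would train them by minimizing the negative log likelihood.

```python
import numpy as np

def kmn_density(y_grid, centers, logits, sigma=0.5):
    """Conditional density p(y|x) as a softmax-weighted mixture of Gaussian
    kernels centered at (a subset of) the training targets."""
    w = np.exp(logits - logits.max())
    w /= w.sum()                                   # softmax weights sum to 1
    k = np.exp(-0.5 * ((y_grid[:, None] - centers[None, :]) / sigma) ** 2)
    k /= sigma * np.sqrt(2.0 * np.pi)              # each kernel integrates to 1
    return k @ w                                   # so the mixture does too

rng = np.random.default_rng(0)
centers = rng.normal(0.0, 1.0, 20)   # kernel centers taken from training targets
logits = rng.normal(0.0, 1.0, 20)    # stand-in for the network's outer layer at x
y = np.linspace(-8.0, 8.0, 2001)
p = kmn_density(y, centers, logits)
mass = p.sum() * (y[1] - y[0])       # numerical integral over the grid, ~1
```

    Because the weights come from a softmax and each kernel is a normalized density, the output is a proper conditional density for every input, without quantizing the target variable.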

  16. Driving Style Analysis Using Primitive Driving Patterns With Bayesian Nonparametric Approaches

    OpenAIRE

    Wang, Wenshuo; Xi, Junqiang; Zhao, Ding

    2017-01-01

    Analysis and recognition of driving styles are profoundly important to intelligent transportation and vehicle calibration. This paper presents a novel driving style analysis framework using the primitive driving patterns learned from naturalistic driving data. In order to achieve this, first, a Bayesian nonparametric learning method based on a hidden semi-Markov model (HSMM) is introduced to extract primitive driving patterns from time series driving data without prior knowledge of the number...

  17. The frontier beneath our feet

    Science.gov (United States)

    Grant, Gordon E.; Dietrich, William E.

    2017-04-01

    Following the simple question as to where water goes when it rains leads to one of the most exciting frontiers in earth science: the critical zone—Earth's dynamic skin. The critical zone extends from the top of the vegetation canopy through the soil and down to fresh bedrock and the bottom of the groundwater. Only recently recognized as a distinct zone, it is challenging to study because it is hard to observe directly, and varies widely across biogeoclimatic regions. Yet new ideas, instruments, and observations are revealing surprising and sometimes paradoxical insights, underscoring the value of field campaigns and long-term observatories. These insights bear directly on some of the most pressing societal problems today: maintaining healthy forests, sustaining streamflow during droughts, and restoring productive terrestrial and aquatic ecosystems. The critical zone is critical because it supports all terrestrial life; it is the nexus where water and carbon are cycled, vegetation (hence food) grows, soil develops, landscapes evolve, and we live. No other frontier is so close to home.

  18. Transition redshift: new constraints from parametric and nonparametric methods

    Energy Technology Data Exchange (ETDEWEB)

    Rani, Nisha; Mahajan, Shobhit; Mukherjee, Amitabha [Department of Physics and Astrophysics, University of Delhi, New Delhi 110007 (India); Jain, Deepak [Deen Dayal Upadhyaya College, University of Delhi, New Delhi 110015 (India); Pires, Nilza, E-mail: nrani@physics.du.ac.in, E-mail: djain@ddu.du.ac.in, E-mail: shobhit.mahajan@gmail.com, E-mail: amimukh@gmail.com, E-mail: npires@dfte.ufrn.br [Departamento de Física Teórica e Experimental, UFRN, Campus Universitário, Natal, RN 59072-970 (Brazil)

    2015-12-01

    In this paper, we use the cosmokinematics approach to study the accelerated expansion of the Universe. This is a model-independent approach and depends only on the assumption that the Universe is homogeneous and isotropic and is described by the FRW metric. We parametrize the deceleration parameter, q(z), to constrain the transition redshift (z_t) at which the expansion of the Universe goes from a decelerating to an accelerating phase. We use three different parametrizations of q(z), namely q_I(z) = q_1 + q_2 z, q_II(z) = q_3 + q_4 ln(1 + z) and q_III(z) = 1/2 + q_5/(1 + z)^2. A joint analysis of the age of galaxies, strong lensing and supernovae Ia data indicates that the transition redshift is less than unity, i.e. z_t < 1. We also use a nonparametric approach (LOESS+SIMEX) to constrain z_t. This too gives z_t < 1, which is consistent with the value obtained by the parametric approach.
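    Setting q(z_t) = 0 gives the transition redshift in closed form for each of the three parametrizations. The coefficient values below are illustrative assumptions for the sketch, not the fitted values from the paper:

```python
import math

# q_I(z) = q1 + q2*z            =>  z_t = -q1/q2
def zt_linear(q1, q2):
    return -q1 / q2

# q_II(z) = q3 + q4*ln(1 + z)   =>  z_t = exp(-q3/q4) - 1
def zt_log(q3, q4):
    return math.exp(-q3 / q4) - 1.0

# q_III(z) = 1/2 + q5/(1+z)^2   =>  z_t = sqrt(-2*q5) - 1   (needs q5 < -1/2)
def zt_inv_sq(q5):
    return math.sqrt(-2.0 * q5) - 1.0

# illustrative coefficients, all giving z_t < 1 as the joint analysis indicates
z1 = zt_linear(-0.6, 1.0)     # 0.6
z2 = zt_log(-0.5, 1.2)
z3 = zt_inv_sq(-1.5)
```

    In each case the sign structure matters: q must be positive at high z (deceleration) and negative today, which restricts the admissible coefficient ranges.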

  19. Nonparametric Bayes Classification and Hypothesis Testing on Manifolds

    Science.gov (United States)

    Bhattacharya, Abhishek; Dunson, David

    2012-01-01

    Our first focus is prediction of a categorical response variable using features that lie on a general manifold. For example, the manifold may correspond to the surface of a hypersphere. We propose a general kernel mixture model for the joint distribution of the response and predictors, with the kernel expressed in product form and dependence induced through the unknown mixing measure. We provide simple sufficient conditions for large support and weak and strong posterior consistency in estimating both the joint distribution of the response and predictors and the conditional distribution of the response. Focusing on a Dirichlet process prior for the mixing measure, these conditions hold using von Mises-Fisher kernels when the manifold is the unit hypersphere. In this case, Bayesian methods are developed for efficient posterior computation using slice sampling. Next we develop Bayesian nonparametric methods for testing whether there is a difference in distributions between groups of observations on the manifold having unknown densities. We prove consistency of the Bayes factor and develop efficient computational methods for its calculation. The proposed classification and testing methods are evaluated using simulation examples and applied to spherical data applications. PMID:22754028
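    For concreteness, the von Mises-Fisher kernel used on the unit hypersphere has a simple closed form when p = 3, which a quick Monte Carlo check confirms integrates to one over the sphere. This is a sketch with arbitrary parameters; the paper's mixture model combines many such kernels through an unknown Dirichlet-process mixing measure.

```python
import numpy as np

def vmf_density(x, mu, kappa):
    """von Mises-Fisher density on the unit sphere S^2 (p = 3); for p = 3 the
    normalizing constant has the closed form kappa / (4*pi*sinh(kappa))."""
    c = kappa / (4.0 * np.pi * np.sinh(kappa))
    return c * np.exp(kappa * (x @ mu))

# Monte Carlo check that the density integrates to 1 over the sphere
rng = np.random.default_rng(0)
z = rng.normal(size=(200_000, 3))
z /= np.linalg.norm(z, axis=1, keepdims=True)        # uniform points on S^2
mu = np.array([0.0, 0.0, 1.0])                       # mean direction
mass = 4.0 * np.pi * vmf_density(z, mu, 2.0).mean()  # surface area x mean, ~1
```
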

  20. Estimating the shadow prices of SO2 and NOx for U.S. coal power plants: A convex nonparametric least squares approach

    International Nuclear Information System (INIS)

    Mekaroonreung, Maethee; Johnson, Andrew L.

    2012-01-01

    Weak disposability between outputs and pollutants, defined as a simultaneous proportional reduction of both outputs and pollutants, assumes that pollutants are byproducts of the output generation process and that a firm can “freely dispose” of both by scaling down production levels, leaving some inputs idle. Based on the production axioms of monotonicity, convexity and weak disposability, we formulate a convex nonparametric least squares (CNLS) quadratic optimization problem to estimate a frontier production function assuming either a deterministic disturbance term consisting only of inefficiency, or a composite disturbance term composed of both inefficiency and noise. The suggested methodology extends the stochastic semi-nonparametric envelopment of data (StoNED) described in Kuosmanen and Kortelainen (2011). Applying the method to estimate the shadow prices of SO2 and NOx generated by U.S. coal power plants, we conclude that the weak disposability StoNED method provides more consistent estimates of market prices. - Highlights: ► Develops methodology to estimate shadow prices for SO2 and NOx in U.S. coal power plants. ► Extends CNLS and StoNED methods to include the weak disposability assumption. ► Estimates the range of SO2 and NOx shadow prices as 201–343 $/ton and 409–1352 $/ton. ► StoNED method provides more accurate estimates of shadow prices than deterministic frontier.

  1. Prediction of A CRS Frontier Function and A Transformation Function for A CCR DEA Using EMBEDED PCA

    Directory of Open Access Journals (Sweden)

    Subhadip Sarkar

    2013-08-01

    Data Envelopment Analysis is a nonparametric tool for measuring the performance of a number of homogeneous Decision Making Units. In this paper, Principal Component Analysis is used as an alternative tool to estimate the frontier in a Data Envelopment Analysis under the assumption of Constant Return to Scale. Apart from this, in the context of multiple inputs and a single output, a transformation function is developed here using the Most Productive Scale Size condition stated by Starrett. This function complies with all postulates of a frontier function and is very similar to the formula given by Aigner and Chu. Moreover, it is capable of defining the threshold value for any resource.
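    For readers unfamiliar with the CCR model being approximated, the standard input-oriented envelopment linear program (the textbook formulation, not the paper's PCA-based estimator) can be solved directly; the three-DMU data set below is hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (envelopment form).
    X: (m inputs, n DMUs), Y: (s outputs, n DMUs).
    min theta  s.t.  X @ lam <= theta * x_o,  Y @ lam >= y_o,  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                                   # decision vars: [theta, lam]
    A_in = np.hstack([-X[:, [o]], X])            # X @ lam - theta * x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # -Y @ lam <= -y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, o]]),
                  bounds=[(None, None)] + [(0.0, None)] * n)
    return res.x[0]

# hypothetical single-input, single-output example (DMUs A, B, C)
X = np.array([[2.0, 4.0, 3.0]])                  # inputs
Y = np.array([[2.0, 2.0, 3.0]])                  # outputs
effs = [ccr_efficiency(X, Y, o) for o in range(3)]   # A and C efficient, B at 0.5
```

    With one input and one output, the CCR score reduces to each unit's output/input ratio divided by the best ratio, which makes the small example easy to verify by hand.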

  2. Intracluster light at the Frontier - II. The Frontier Fields Clusters

    Science.gov (United States)

    Montes, Mireia; Trujillo, Ignacio

    2018-02-01

    Multiwavelength deep observations are a key tool to understand the origin of the diffuse light in clusters of galaxies: the intracluster light (ICL). For this reason, we take advantage of the Hubble Frontier Fields (HFF) survey to investigate the properties of the stellar populations of the ICL of its six massive intermediate-redshift (z ≳ 0.3) clusters. We find that the ICL of these massive (>10^15 M⊙) clusters is formed by the stripping of MW-like objects that have been accreted at z < 1, in agreement with current simulations. We do not find any significant increase in the fraction of light in the ICL with cosmic time, although the redshift range explored is too narrow to derive any strong conclusion. Exploring the slope of the stellar mass density profile, we found that the ICL of the HFF clusters follows the shape of their underlying dark matter haloes, in agreement with the idea that the ICL is the result of the stripping of galaxies at recent times.

  3. Evolution of grid-wide access to database resident information in ATLAS using Frontier

    CERN Document Server

    Barberis, D; The ATLAS collaboration; de Stefano, J; Dewhurst, A L; Dykstra, D; Front, D

    2012-01-01

    The ATLAS experiment deployed Frontier technology world-wide during the initial year of LHC collision data taking to enable user analysis jobs running on the World-wide LHC Computing Grid to access database resident data. Since that time, the deployment model has evolved to optimize resources, improve performance, and streamline maintenance of Frontier and related infrastructure. In this presentation we focus on the specific changes in the deployment and improvements undertaken such as the optimization of cache and launchpad location, the use of RPMs for more uniform deployment of underlying Frontier related components, improvements in monitoring, optimization of fail-over, and an increasing use of a centrally managed database containing site specific information (for configuration of services and monitoring). In addition, analysis of Frontier logs has allowed us a deeper understanding of problematic queries and of use cases. Use of the system has grown beyond just user analysis and subsyste...

  4. Application of nonparametric statistic method for DNBR limit calculation

    International Nuclear Information System (INIS)

    Dong Bo; Kuang Bo; Zhu Xuenong

    2013-01-01

    Background: The nonparametric statistical method is a kind of statistical inference method not depending on a certain distribution; it calculates the tolerance limits under a certain probability level and confidence through sampling methods. The DNBR margin is one important parameter of NPP design, which represents the safety level of the NPP. Purpose and Methods: This paper uses the nonparametric statistical method based on the Wilks formula and the VIPER-01 subchannel analysis code to calculate the DNBR design limits (DL) of a 300 MW NPP (Nuclear Power Plant) during the complete loss of flow accident, and compares them with the DL of DNBR obtained by means of ITDP to determine the DNBR margin. Results: The results indicate that this method can gain 2.96% more DNBR margin than that obtained by the ITDP methodology. Conclusions: Because of the reduced conservatism during the analysis process, the nonparametric statistical method can provide a greater DNBR margin, and the increased DNBR margin is beneficial for upgrading the core refuelling scheme. (authors)
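
    The Wilks-formula sampling scheme referred to above can be sketched as follows; the familiar one-sided, first-order case requires 59 code runs for a 95% probability / 95% confidence tolerance limit (a generic illustration of the formula, not the VIPER-01 workflow):

```python
def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest N such that the extreme of N i.i.d. code runs is a
    one-sided tolerance limit with the given coverage and confidence
    (first-order Wilks formula: 1 - coverage**N >= confidence)."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

print(wilks_sample_size())  # 59: the classic 95%/95% sample size
```

Running 59 sampled cases and taking the limiting (here, minimum) DNBR then gives the 95/95 tolerance limit.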

  5. Comparing parametric and nonparametric regression methods for panel data

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    We investigate and compare the suitability of parametric and non-parametric stochastic regression methods for analysing production technologies and the optimal firm size. Our theoretical analysis shows that the most commonly used functional forms in empirical production analysis, Cobb-Douglas and Translog, are unsuitable for analysing the optimal firm size. We show that the Translog functional form implies an implausible linear relationship between the (logarithmic) firm size and the elasticity of scale, where the slope is artificially related to the substitutability between the inputs. The practical applicability of the parametric and non-parametric regression methods is scrutinised and compared by an empirical example: we analyse the production technology and investigate the optimal size of Polish crop farms based on a firm-level balanced panel data set. A nonparametric specification test...

  6. Optimization and industry new frontiers

    CERN Document Server

    Korotkikh, Victor

    2003-01-01

    Optimization from Human Genes to Cutting Edge Technologies The challenges faced by industry today are so complex that they can only be solved through the help and participation of optimization experts. For example, many industries in e-commerce, finance, medicine, and engineering, face several computational challenges due to the massive data sets that arise in their applications. Some of the challenges include, extended memory algorithms and data structures, new programming environments, software systems, cryptographic protocols, storage devices, data compression, mathematical and statistical methods for knowledge mining, and information visualization. With advances in computer and information systems technologies, and many interdisciplinary efforts, many of the "data avalanche challenges" are beginning to be addressed. Optimization is the most crucial component in these efforts. Nowadays, the main task of optimization is to investigate the cutting edge frontiers of these technologies and systems ...

  7. New frontiers for tomorrow's world

    Energy Technology Data Exchange (ETDEWEB)

    Kassler, P [Shell International Petroleum Co. Ltd., London (United Kingdom)]

    1994-12-31

    The conference paper deals with new frontiers and barricades in the global economic development and their influence on fuel consumption and energy source development. Topics discussed are incremental energy supply - new frontiers, world car population - new frontiers, OPEC crude production capacity vs call on OPEC, incremental world oil demand by region 1992-2000, oil resource cost curve, progress in seismic 1983-1991, Troll picture, cost reduction in renewables, sustained growth scenario, nuclear electricity capacity - France, OECD road transport fuels - barricades, and energy taxation. 18 figs.

  8. Nonparametric regression using the concept of minimum energy

    International Nuclear Information System (INIS)

    Williams, Mike

    2011-01-01

    It has recently been shown that an unbinned distance-based statistic, the energy, can be used to construct an extremely powerful nonparametric multivariate two sample goodness-of-fit test. An extension to this method that makes it possible to perform nonparametric regression using multiple multivariate data sets is presented in this paper. The technique, which is based on the concept of minimizing the energy of the system, permits determination of parameters of interest without the need for parametric expressions of the parent distributions of the data sets. The application and performance of this new method is discussed in the context of some simple example analyses.
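
    The energy statistic underlying this method can be sketched for one-dimensional samples as follows; this illustrates only the two-sample energy distance (Székely-Rizzo form), not the paper's full regression procedure:

```python
def energy_distance(x, y):
    """Two-sample energy statistic for 1-D samples:
    E = 2*E|X-Y| - E|X-X'| - E|Y-Y'|; nonnegative, and zero when
    the two samples have identical empirical distributions."""
    def mean_abs_diff(a, b):
        return sum(abs(u - v) for u in a for v in b) / (len(a) * len(b))
    return 2.0 * mean_abs_diff(x, y) - mean_abs_diff(x, x) - mean_abs_diff(y, y)

print(energy_distance([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
print(energy_distance([0.0, 0.0], [1.0, 1.0]))            # 2.0
```

Minimizing this quantity over parameters of interest is the basis of the regression technique described in the abstract.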

  9. An Arbitrary Benchmark CAPM: One Additional Frontier Portfolio is Sufficient

    OpenAIRE

    Ekern, Steinar

    2008-01-01

    First draft: July 16, 2008. This version: October 7, 2008. The benchmark CAPM linearly relates the expected returns on an arbitrary asset, an arbitrary benchmark portfolio, and an arbitrary MV frontier portfolio. The benchmark is not required to be on the frontier and may be non-perfectly correlated with the frontier portfolio. The benchmark CAPM extends and generalizes previous CAPM formulations, including the zero beta, two correlated frontier portfolios, riskless augmented frontier, an...

  10. Geostatistical radar-raingauge combination with nonparametric correlograms: methodological considerations and application in Switzerland

    Science.gov (United States)

    Schiemann, R.; Erdin, R.; Willi, M.; Frei, C.; Berenguer, M.; Sempere-Torres, D.

    2011-05-01

    Modelling spatial covariance is an essential part of all geostatistical methods. Traditionally, parametric semivariogram models are fit from available data. More recently, it has been suggested to use nonparametric correlograms obtained from spatially complete data fields. Here, both estimation techniques are compared. Nonparametric correlograms are shown to have a substantial negative bias. Nonetheless, when combined with the sample variance of the spatial field under consideration, they yield an estimate of the semivariogram that is unbiased for small lag distances. This justifies the use of this estimation technique in geostatistical applications. Various formulations of geostatistical combination (Kriging) methods are used here for the construction of hourly precipitation grids for Switzerland based on data from a sparse realtime network of raingauges and from a spatially complete radar composite. Two variants of Ordinary Kriging (OK) are used to interpolate the sparse gauge observations. In both OK variants, the radar data are only used to determine the semivariogram model. One variant relies on a traditional parametric semivariogram estimate, whereas the other variant uses the nonparametric correlogram. The variants are tested for three cases and the impact of the semivariogram model on the Kriging prediction is illustrated. For the three test cases, the method using nonparametric correlograms performs equally well or better than the traditional method, and at the same time offers great practical advantages. Furthermore, two variants of Kriging with external drift (KED) are tested, both of which use the radar data to estimate nonparametric correlograms, and as the external drift variable. The first KED variant has been used previously for geostatistical radar-raingauge merging in Catalonia (Spain). The second variant is newly proposed here and is an extension of the first. Both variants are evaluated for the three test cases as well as an extended evaluation
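
    As a building block for the semivariogram modelling discussed above, the classical (Matheron) empirical semivariogram can be estimated from scattered observations. A minimal sketch (the binning scheme and function name are illustrative, not taken from the study):

```python
import math
from collections import defaultdict

def empirical_semivariogram(coords, values, lag_width):
    """Classical (Matheron) semivariogram estimator:
    gamma(h) = sum over pairs at lag ~ h of (z_i - z_j)**2 / (2 N(h)),
    with pair distances grouped into bins of width lag_width."""
    sums, counts = defaultdict(float), defaultdict(int)
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            b = int(math.dist(coords[i], coords[j]) // lag_width)
            sums[b] += (values[i] - values[j]) ** 2
            counts[b] += 1
    # map each bin centre to its semivariance estimate
    return {(b + 0.5) * lag_width: sums[b] / (2.0 * counts[b]) for b in sums}
```

A parametric model is then fitted to these binned estimates, whereas the nonparametric-correlogram approach of the paper derives the covariance directly from a spatially complete field.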

  11. Nonhomogeneous Poisson process with nonparametric frailty

    International Nuclear Information System (INIS)

    Slimacek, Vaclav; Lindqvist, Bo Henry

    2016-01-01

    The failure processes of heterogeneous repairable systems are often modeled by non-homogeneous Poisson processes. The common way to describe an unobserved heterogeneity between systems is to multiply the basic rate of occurrence of failures by a random variable (a so-called frailty) having a specified parametric distribution. Since the frailty is unobservable, the choice of its distribution is a problematic part of using these models, as are often the numerical computations needed in the estimation of these models. The main purpose of this paper is to develop a method for estimation of the parameters of a nonhomogeneous Poisson process with unobserved heterogeneity which does not require parametric assumptions about the heterogeneity and which avoids the frequently encountered numerical problems associated with the standard models for unobserved heterogeneity. The introduced method is illustrated on an example involving the power law process, and is compared to the standard gamma frailty model and to the classical model without unobserved heterogeneity. The derived results are confirmed in a simulation study which also reveals several not commonly known properties of the gamma frailty model and the classical model, and on a real life example. - Highlights: • A new method for estimation of a NHPP with frailty is introduced. • Introduced method does not require parametric assumptions about frailty. • The approach is illustrated on an example with the power law process. • The method is compared to the gamma frailty model and to the model without frailty.
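
    For the power law process used as the illustration above, the time-truncated maximum likelihood estimates have a simple closed form. A sketch for the basic model without frailty, assuming the parametrisation lambda(t) = (beta/theta)(t/theta)^(beta-1) on an observation window (0, T]:

```python
import math

def power_law_mle(times, T):
    """Time-truncated MLEs for the power law NHPP with intensity
    lambda(t) = (beta/theta) * (t/theta)**(beta - 1), given failure
    times observed on (0, T]."""
    n = len(times)
    beta = n / sum(math.log(T / t) for t in times)
    theta = T / n ** (1.0 / beta)  # so the expected count up to T equals n
    return beta, theta
```

beta < 1 indicates reliability growth and beta > 1 deterioration; the paper's contribution is layering a nonparametrically estimated frailty on top of such a baseline intensity.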

  12. A nonparametric approach to forecasting realized volatility

    OpenAIRE

    Adam Clements; Ralf Becker

    2009-01-01

    A well-developed literature exists in relation to modeling and forecasting asset return volatility. Much of this relates to the development of time series models of volatility. This paper proposes an alternative method for forecasting volatility that does not involve such a model. Under this approach a forecast is a weighted average of historical volatility. The greatest weight is given to periods that exhibit the most similar market conditions to the time at which the forecast is being formed...

  13. Frontiers of quantum Monte Carlo workshop: preface

    International Nuclear Information System (INIS)

    Gubernatis, J.E.

    1985-01-01

    The introductory remarks, table of contents, and list of attendees are presented from the proceedings of the conference, Frontiers of Quantum Monte Carlo, which appeared in the Journal of Statistical Physics

  14. Applied thermodynamics: A new frontier for biotechnology

    DEFF Research Database (Denmark)

    Mollerup, Jørgen

    2006-01-01

    The scientific career of one of the most outstanding scientists in molecular thermodynamics, Professor John M. Prausnitz at Berkeley, reflects the change in the agenda of molecular thermodynamics, from hydrocarbon chemistry to biotechnology. To make thermodynamics a frontier for biotechnology...

  15. Frontier search to slow in Indonesia

    International Nuclear Information System (INIS)

    Land, R.

    1992-01-01

    This paper reports that oil and gas exploration in Indonesia likely will begin refocusing on proven areas after 1993, Arthur Andersen and Co.'s Far East oil analyst predicts. Arthur Andersen's James Sales says that disappointing exploration results, outdated production sharing contract (PSC) terms, and low oil prices are discouraging companies from exploring frontier acreage. But Indonesian frontier activity during 1992-93 is likely to remain high because a large number of PSCs awarded in the past 2 years by state owned Pertamina cover high risk frontier areas. PSC contractors have disclosed several discoveries on recently awarded frontier tracts. However, the discoveries have been relatively small and far from pipeline infrastructure. With prevailing low oil prices, Pertamina likely will find it difficult to entice companies to extend PSCs or joint operating agreements beyond minimum exploration commitments

  16. Nucleon measurements at the precision frontier

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, Carl E. [Physics Department, College of William and Mary, Williamsburg, VA 23187 (United States)

    2013-11-07

    We comment on nucleon measurements at the precision frontier. As examples of what can be learned, we concentrate on three topics, which are parity violating scattering experiments, the proton radius puzzle, and the symbiosis between nuclear and atomic physics.

  17. Nonparametric method for failures detection and localization in the actuating subsystem of aircraft control system

    Science.gov (United States)

    Karpenko, S. S.; Zybin, E. Yu; Kosyanchuk, V. V.

    2018-02-01

    In this paper we design a nonparametric method for failures detection and localization in the aircraft control system that uses the measurements of the control signals and the aircraft states only. It doesn’t require a priori information of the aircraft model parameters, training or statistical calculations, and is based on algebraic solvability conditions for the aircraft model identification problem. This makes it possible to significantly increase the efficiency of detection and localization problem solution by completely eliminating errors, associated with aircraft model uncertainties.

  18. New Frontiers in Passive and Active Nanoantennas

    DEFF Research Database (Denmark)

    Arslanagic, Samel; Ziolkowski, Richard W.

    2017-01-01

    The articles included in this special section focus on several recent advances in the field of passive and active nanoantennas that employ not only traditional based realizations but also their new frontiers.

  19. Frontiers in Time Series and Financial Econometrics

    OpenAIRE

    Ling, S.; McAleer, M.J.; Tong, H.

    2015-01-01

    Two of the fastest growing frontiers in econometrics and quantitative finance are time series and financial econometrics. Significant theoretical contributions to financial econometrics have been made by experts in statistics, econometrics, mathematics, and time series analysis. The purpose of this special issue of the journal on “Frontiers in Time Series and Financial Econometrics” is to highlight several areas of research by leading academics in which novel methods have contrib...

  20. Adaptive nonparametric Bayesian inference using location-scale mixture priors

    NARCIS (Netherlands)

    Jonge, de R.; Zanten, van J.H.

    2010-01-01

    We study location-scale mixture priors for nonparametric statistical problems, including multivariate regression, density estimation and classification. We show that a rate-adaptive procedure can be obtained if the prior is properly constructed. In particular, we show that adaptation is achieved if

  1. Non-Parametric Analysis of Rating Transition and Default Data

    DEFF Research Database (Denmark)

    Fledelius, Peter; Lando, David; Perch Nielsen, Jens

    2004-01-01

    We demonstrate the use of non-parametric intensity estimation - including construction of pointwise confidence sets - for analyzing rating transition data. We find that transition intensities away from the class studied here for illustration strongly depend on the direction of the previous move b...
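
    A standard ingredient of non-parametric intensity estimation is the Nelson-Aalen estimator of the cumulative transition intensity. A minimal sketch for right-censored durations (this illustrates only the basic estimator, not the paper's kernel smoothing or pointwise confidence sets):

```python
def nelson_aalen(durations, observed):
    """Nelson-Aalen estimator of the cumulative intensity (hazard)
    from right-censored data: observed[i] is 1 for an event, 0 for
    censoring. Ties are handled one at a time. Returns a list of
    (event time, cumulative hazard) pairs."""
    pairs = sorted(zip(durations, observed))
    at_risk = len(pairs)
    H, path = 0.0, []
    for t, event in pairs:
        if event:
            H += 1.0 / at_risk  # increment by d_i / Y(t_i)
            path.append((t, H))
        at_risk -= 1
    return path
```

For rating data, each rating class spawns one such estimator per destination class, with other transitions treated as censoring.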

  2. On the robust nonparametric regression estimation for a functional regressor

    OpenAIRE

    Azzedine , Nadjia; Laksaci , Ali; Ould-Saïd , Elias

    2009-01-01


  3. A general approach to posterior contraction in nonparametric inverse problems

    NARCIS (Netherlands)

    Knapik, Bartek; Salomond, Jean Bernard

    In this paper, we propose a general method to derive an upper bound for the contraction rate of the posterior distribution for nonparametric inverse problems. We present a general theorem that allows us to derive contraction rates for the parameter of interest from contraction rates of the related

  4. Non-parametric analysis of production efficiency of poultry egg ...

    African Journals Online (AJOL)

    Non-parametric analysis of production efficiency of poultry egg farmers in Delta ... analysis of factors affecting the output of poultry farmers showed that stock ... should be put in place for farmers to learn the best farm practices carried out on the ...

  5. Nonparametric additive regression for repeatedly measured data

    KAUST Repository

    Carroll, R. J.; Maity, A.; Mammen, E.; Yu, K.

    2009-01-01

    We develop an easily computed smooth backfitting algorithm for additive model fitting in repeated measures problems. Our methodology easily copes with various settings, such as when some covariates are the same over repeated response measurements

  6. Efficient nonparametric estimation of causal mediation effects

    OpenAIRE

    Chan, K. C. G.; Imai, K.; Yam, S. C. P.; Zhang, Z.

    2016-01-01

    An essential goal of program evaluation and scientific research is the investigation of causal mechanisms. Over the past several decades, causal mediation analysis has been used in medical and social sciences to decompose the treatment effect into the natural direct and indirect effects. However, all of the existing mediation analysis methods rely on parametric modeling assumptions in one way or another, typically requiring researchers to specify multiple regression models involving the treat...

  7. Nonparametric Coupled Bayesian Dictionary and Classifier Learning for Hyperspectral Classification.

    Science.gov (United States)

    Akhtar, Naveed; Mian, Ajmal

    2017-10-03

    We present a principled approach to learn a discriminative dictionary along with a linear classifier for hyperspectral classification. Our approach places Gaussian Process priors over the dictionary to account for the relative smoothness of the natural spectra, whereas the classifier parameters are sampled from multivariate Gaussians. We employ two Beta-Bernoulli processes to jointly infer the dictionary and the classifier. These processes are coupled under the same sets of Bernoulli distributions. In our approach, these distributions signify the frequency of the dictionary atom usage in representing class-specific training spectra, which also makes the dictionary discriminative. Due to the coupling between the dictionary and the classifier, the popularity of the atoms for representing different classes gets encoded into the classifier. This helps in predicting the class labels of test spectra that are first represented over the dictionary by solving a simultaneous sparse optimization problem. The labels of the spectra are predicted by feeding the resulting representations to the classifier. Our approach exploits the nonparametric Bayesian framework to automatically infer the dictionary size--the key parameter in discriminative dictionary learning. Moreover, it also has the desirable property of adaptively learning the association between the dictionary atoms and the class labels by itself. We use Gibbs sampling to infer the posterior probability distributions over the dictionary and the classifier under the proposed model, for which, we derive analytical expressions. To establish the effectiveness of our approach, we test it on benchmark hyperspectral images. The classification performance is compared with the state-of-the-art dictionary learning-based classification methods.

  8. Short-term forecasting of meteorological time series using Nonparametric Functional Data Analysis (NPFDA)

    Science.gov (United States)

    Curceac, S.; Ternynck, C.; Ouarda, T.

    2015-12-01

    Over the past decades, a substantial amount of research has been conducted to model and forecast climatic variables. In this study, Nonparametric Functional Data Analysis (NPFDA) methods are applied to forecast air temperature and wind speed time series in Abu Dhabi, UAE. The dataset consists of hourly measurements recorded for a period of 29 years, 1982-2010. The novelty of the Functional Data Analysis approach is in expressing the data as curves. In the present work, the focus is on daily forecasting and the functional observations (curves) express the daily measurements of the above mentioned variables. We apply a non-linear regression model with a functional non-parametric kernel estimator. The computation of the estimator is performed using an asymmetrical quadratic kernel function for local weighting based on the bandwidth obtained by a cross validation procedure. The proximities between functional objects are calculated by families of semi-metrics based on derivatives and Functional Principal Component Analysis (FPCA). Additionally, functional conditional mode and functional conditional median estimators are applied and the advantages of combining their results are analysed. A different approach employs a SARIMA model selected according to the minimum Akaike (AIC) and Bayesian (BIC) Information Criteria and based on the residuals of the model. The performance of the models is assessed by calculating error indices such as the root mean square error (RMSE), relative RMSE, BIAS and relative BIAS. The results indicate that the NPFDA models provide more accurate forecasts than the SARIMA models. Key words: Nonparametric functional data analysis, SARIMA, time series forecast, air temperature, wind speed
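
    The functional kernel estimator described above can be sketched for discretised curves with an L2 semi-metric and a quadratic kernel; bandwidth selection and the derivative/FPCA-based semi-metrics of the study are omitted (function names here are illustrative):

```python
import math

def functional_kernel_forecast(curves, responses, new_curve, h):
    """Nadaraya-Watson-type functional kernel estimator: each sample
    curve is weighted by K(d/h), with the asymmetrical quadratic
    kernel K(u) = 1 - u**2 on [0, 1) and d the L2 distance between
    discretised curves."""
    def l2(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    weights = []
    for c in curves:
        u = l2(c, new_curve) / h
        weights.append(1.0 - u * u if u < 1.0 else 0.0)
    total = sum(weights)
    if total == 0.0:
        raise ValueError("no curve within bandwidth h of new_curve")
    return sum(w * y for w, y in zip(weights, responses)) / total
```

In the daily-forecasting setting of the abstract, each "curve" would be one day's hourly measurements and the response the quantity to be forecast for the next day.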

  9. Estimating the NIH efficient frontier.

    Directory of Open Access Journals (Sweden)

    Dimitrios Bisias

    BACKGROUND: The National Institutes of Health (NIH) is among the world's largest investors in biomedical research, with a mandate to: "…lengthen life, and reduce the burdens of illness and disability." Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions-one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. METHODS AND FINDINGS: Using data from 1965 to 2007, we provide estimates of the NIH "efficient frontier", the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. CONCLUSIONS: Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent

  10. Estimating the NIH efficient frontier.

    Science.gov (United States)

    Bisias, Dimitrios; Lo, Andrew W; Watkins, James F

    2012-01-01

    The National Institutes of Health (NIH) is among the world's largest investors in biomedical research, with a mandate to: "…lengthen life, and reduce the burdens of illness and disability." Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions-one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. Using data from 1965 to 2007, we provide estimates of the NIH "efficient frontier", the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent, repeatable, and expressly designed to reduce the burden of
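
    The efficient-frontier idea can be illustrated in its simplest closed-form setting, the two-asset global minimum-variance portfolio (a textbook sketch of mean-variance analysis, not the paper's seven-institute estimation):

```python
def min_variance_weights(var1, var2, cov):
    """Global minimum-variance portfolio of two assets (fully
    invested; shorting allowed): returns the weights on each asset."""
    w1 = (var2 - cov) / (var1 + var2 - 2.0 * cov)
    return w1, 1.0 - w1

def portfolio_stats(w1, mu1, mu2, var1, var2, cov):
    """Expected return and variance of the (w1, 1 - w1) portfolio."""
    w2 = 1.0 - w1
    mean = w1 * mu1 + w2 * mu2
    var = w1 ** 2 * var1 + w2 ** 2 * var2 + 2.0 * w1 * w2 * cov
    return mean, var
```

Tracing `portfolio_stats` over a range of target returns, each at its variance-minimizing weights, sweeps out the efficient frontier; the paper does the analogous computation with YLL-based "returns" across institute groups.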

  11. Methodology in robust and nonparametric statistics

    CERN Document Server

    Jurecková, Jana; Picek, Jan

    2012-01-01

    Contents: Introduction and Synopsis (Introduction; Synopsis); Preliminaries (Introduction; Inference in Linear Models; Robustness Concepts; Robust and Minimax Estimation of Location; Clippings from Probability and Asymptotic Theory; Problems); Robust Estimation of Location and Regression (Introduction; M-Estimators; L-Estimators; R-Estimators; Minimum Distance and Pitman Estimators; Differentiable Statistical Functions; Problems); Asymptotic Representations for L-Estimators

  12. Migrant decision-making in a frontier landscape

    Science.gov (United States)

    Salerno, Jonathan

    2016-04-01

    Across the tropics, rural farmers and livestock keepers use mobility as an adaptive livelihood strategy. Continued migration to and within frontier areas is widely viewed as a driver of environmental decline and biodiversity loss. Recent scholarship advances our understanding of migration decision-making in the context of changing climate and environments, and in doing so it highlights the variation in migration responses to primarily economic and environmental factors. Building on these insights, this letter investigates past and future migration decisions in a frontier landscape of Tanzania, East Africa. Combining field observations and household data within a multilevel modeling framework, the letter analyzes the explicit importance of social factors relative to economic and environmental factors in driving decisions to migrate or remain. Results indeed suggest that local community ties and non-local social networks drive both immobility and anticipated migration, respectively. In addition, positive interactions with local protected natural resource areas promote longer-term residence. Findings shed new light on how frontier areas transition to human dominated landscapes. This highlights critical links between migration behavior and the conservation of biodiversity and management of natural resources, as well as how migrants evolve to become integrated into communities.

  13. CATDAT - A program for parametric and nonparametric categorical data analysis user's manual, Version 1.0

    International Nuclear Information System (INIS)

    Peterson, James R.; Haas, Timothy C.; Lee, Danny C.

    2000-01-01

    Natural resource professionals are increasingly required to develop rigorous statistical models that relate environmental data to categorical response data. Recent advances in the statistical and computing sciences have led to the development of sophisticated methods for parametric and nonparametric analysis of data with categorical responses. The statistical software package CATDAT was designed to make some of these relatively new and powerful techniques available to scientists. The CATDAT statistical package includes 4 analytical techniques: generalized logit modeling; binary classification tree; extended K-nearest neighbor classification; and modular neural network

  14. Statistical decisions under nonparametric a priori information

    International Nuclear Information System (INIS)

    Chilingaryan, A.A.

    1985-01-01

    The basic module of the applied program package for statistical analysis of the ANI experiment data is described. By means of this module, the tasks of choosing the theoretical model that most adequately fits the experimental data, selecting events of a definite type, and identifying elementary particles are carried out. For solving the mentioned problems, Bayesian rules, the leave-one-out test and KNN (K Nearest Neighbour) adaptive density estimation are utilized
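
    The KNN technique mentioned above can be sketched, in its plain classification form, as follows (a generic illustration, not the ANI package's adaptive density estimation):

```python
import math
from collections import Counter

def knn_classify(train_x, train_y, query, k=3):
    """Plain K-nearest-neighbour classifier: Euclidean distance,
    majority vote among the k closest training points."""
    nearest = sorted(range(len(train_x)),
                     key=lambda i: math.dist(train_x[i], query))[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]
```

The adaptive density-estimation variant instead converts the distance to the k-th neighbour into a local density estimate, which then feeds the Bayesian decision rule.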

  15. Metaorganisms as the new frontier.

    Science.gov (United States)

    Bosch, Thomas C G; McFall-Ngai, Margaret J

    2011-09-01

    Because it appears that almost all organisms are part of an interdependent metaorganism, an understanding of the underlying host-microbe species associations, and of evolution and molecular underpinnings, has become the new frontier in zoology. The availability of novel high-throughput sequencing methods, together with the conceptual understanding that advances mostly originate at the intersection of traditional disciplinary boundaries, enable biologists to dissect the mechanisms that control the interdependent associations of species. In this review article, we outline some of the issues in inter-species interactions, present two case studies illuminating the necessity of interfacial research when addressing complex and fundamental zoological problems, and show that an interdisciplinary approach that seeks to understand co-evolved multi-species relationships will connect genomes, phenotypes, ecosystems and the evolutionary forces that have shaped them. We hope that this article inspires other collaborations of a similar nature on the diverse landscape commonly referred to as "zoology". Copyright © 2011 Elsevier GmbH. All rights reserved.

  16. Nonparametric Collective Spectral Density Estimation and Clustering

    KAUST Repository

    Maadooliat, Mehdi

    2017-04-12

    In this paper, we develop a method for the simultaneous estimation of spectral density functions (SDFs) for a collection of stationary time series that share some common features. Due to the similarities among the SDFs, the log-SDF can be represented using a common set of basis functions. The basis shared by the collection of the log-SDFs is estimated as a low-dimensional manifold of a large space spanned by a pre-specified rich basis. A collective estimation approach pools information and borrows strength across the SDFs to achieve better estimation efficiency. Also, each estimated spectral density has a concise representation using the coefficients of the basis expansion, and these coefficients can be used for visualization, clustering, and classification purposes. The Whittle pseudo-maximum likelihood approach is used to fit the model and an alternating blockwise Newton-type algorithm is developed for the computation. A web-based shiny App found at

  17. Nonparametric Collective Spectral Density Estimation and Clustering

    KAUST Repository

    Maadooliat, Mehdi; Sun, Ying; Chen, Tianbo

    2017-01-01

    In this paper, we develop a method for the simultaneous estimation of spectral density functions (SDFs) for a collection of stationary time series that share some common features. Due to the similarities among the SDFs, the log-SDF can be represented using a common set of basis functions. The basis shared by the collection of the log-SDFs is estimated as a low-dimensional manifold of a large space spanned by a pre-specified rich basis. A collective estimation approach pools information and borrows strength across the SDFs to achieve better estimation efficiency. Also, each estimated spectral density has a concise representation using the coefficients of the basis expansion, and these coefficients can be used for visualization, clustering, and classification purposes. The Whittle pseudo-maximum likelihood approach is used to fit the model and an alternating blockwise Newton-type algorithm is developed for the computation. A web-based shiny App found at

  18. Nonparametric additive regression for repeatedly measured data

    KAUST Repository

    Carroll, R. J.

    2009-05-20

    We develop an easily computed smooth backfitting algorithm for additive model fitting in repeated measures problems. Our methodology easily copes with various settings, such as when some covariates are the same over repeated response measurements. We allow for a working covariance matrix for the regression errors, showing that our method is most efficient when the correct covariance matrix is used. The component functions achieve the known asymptotic variance lower bound for the scalar argument case. Smooth backfitting also leads directly to design-independent biases in the local linear case. Simulations show our estimator has smaller variance than the usual kernel estimator. This is also illustrated by an example from nutritional epidemiology. © 2009 Biometrika Trust.
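
    The backfitting idea behind such an algorithm can be illustrated with a plain sketch: cycle over components, smoothing partial residuals until the fits stabilize. This uses an ordinary Nadaraya-Watson smoother and assumes independent errors, so it is a simplified stand-in for the paper's smooth backfitting with working covariances; the bandwidth `h` is an illustrative assumption:

```python
import numpy as np

def nw_smooth(x, y, grid_x, h):
    # Nadaraya-Watson kernel smoother with a Gaussian kernel.
    w = np.exp(-0.5 * ((grid_x[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def backfit(X, y, h=0.15, iters=20):
    """Plain backfitting for an additive model y = a + f1(x1) + ... + fd(xd)."""
    n, d = X.shape
    a = y.mean()
    f = np.zeros((d, n))
    for _ in range(iters):
        for j in range(d):
            r = y - a - f.sum(axis=0) + f[j]   # partial residual for component j
            f[j] = nw_smooth(X[:, j], r, X[:, j], h)
            f[j] -= f[j].mean()                # center for identifiability
    return a, f
```

    On noise-free additive data such as y = sin(3*x1) + x2^2, the fitted components recover the signal up to smoothing bias.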

  19. A GEOMETRICALLY SUPPORTED z ∼ 10 CANDIDATE MULTIPLY IMAGED BY THE HUBBLE FRONTIER FIELDS CLUSTER A2744

    International Nuclear Information System (INIS)

    Zitrin, Adi; Zheng, Wei; Huang, Xingxing; Ford, Holland; Broadhurst, Tom; Moustakas, John; Lam, Daniel; Lim, Jeremy; Shu, Xinwen; Diego, Jose M.; Bauer, Franz E.; Infante, Leopoldo; Kelson, Daniel D.; Molino, Alberto

    2014-01-01

    The deflection angles of lensed sources increase with their distance behind a given lens. We utilize this geometric effect to corroborate the z_phot ≅ 9.8 photometric redshift estimate of a faint near-IR dropout, triply imaged by the massive galaxy cluster A2744 in deep Hubble Frontier Fields images. The multiple images of this source follow the same symmetry as other nearby sets of multiple images that bracket the critical curves and have well-defined redshifts (up to z_spec ≅ 3.6), but with larger deflection angles, indicating that this source must lie at a higher redshift. Similarly, our different parametric and non-parametric lens models all require this object be at z ≳ 4, with at least 95% confidence, thoroughly excluding the possibility of lower-redshift interlopers. To study the properties of this source, we correct the two brighter images for their magnifications, leading to a star formation rate of ∼0.3 M_☉ yr^(−1), a stellar mass of ∼4 × 10^7 M_☉, and an age of ≲220 Myr (95% confidence). The intrinsic apparent magnitude is 29.9 AB (F160W), and the rest-frame UV (∼1500 Å) absolute magnitude is M_(UV,AB) = −17.6. This corresponds to ∼0.1 L*_(z=8) (∼0.2 L*_(z=10), adopting dM*/dz ∼ 0.45), making this candidate one of the least luminous galaxies discovered at z ∼ 10.

  20. A GEOMETRICALLY SUPPORTED z ∼ 10 CANDIDATE MULTIPLY IMAGED BY THE HUBBLE FRONTIER FIELDS CLUSTER A2744

    Energy Technology Data Exchange (ETDEWEB)

    Zitrin, Adi [Cahill Center for Astronomy and Astrophysics, California Institute of Technology, MS 249-17, Pasadena, CA 91125 (United States); Zheng, Wei; Huang, Xingxing; Ford, Holland [Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218 (United States); Broadhurst, Tom [Department of Theoretical Physics, University of Basque Country UPV/EHU, Bilbao (Spain); Moustakas, John [Department of Physics and Astronomy, Siena College, Loudonville, NY 12211 (United States); Lam, Daniel; Lim, Jeremy [Department of Physics, The University of Hong Kong, Pokfulam Road (Hong Kong); Shu, Xinwen [CEA Saclay, DSM/Irfu/Service d' Astrophysique, Orme des Merisiers, F-91191 Gif-sur-Yvette Cedex (France); Diego, Jose M. [Instituto de Física de Cantabria, CSIC-Universidad de Cantabria, E-39005 Santander (Spain); Bauer, Franz E.; Infante, Leopoldo [Pontificia Universidad Católica de Chile, Instituto de Astrofísica, Santiago 22 (Chile); Kelson, Daniel D. [The Observatories of the Carnegie Institution for Science, Pasadena, CA 91101 (United States); Molino, Alberto, E-mail: adizitrin@gmail.com [Instituto de Astrofísica de Andalucía - CSIC, Glorieta de la Astronomía, s/n. E-18008, Granada (Spain)

    2014-09-20

    The deflection angles of lensed sources increase with their distance behind a given lens. We utilize this geometric effect to corroborate the z_phot ≅ 9.8 photometric redshift estimate of a faint near-IR dropout, triply imaged by the massive galaxy cluster A2744 in deep Hubble Frontier Fields images. The multiple images of this source follow the same symmetry as other nearby sets of multiple images that bracket the critical curves and have well-defined redshifts (up to z_spec ≅ 3.6), but with larger deflection angles, indicating that this source must lie at a higher redshift. Similarly, our different parametric and non-parametric lens models all require this object be at z ≳ 4, with at least 95% confidence, thoroughly excluding the possibility of lower-redshift interlopers. To study the properties of this source, we correct the two brighter images for their magnifications, leading to a star formation rate of ∼0.3 M_☉ yr^(−1), a stellar mass of ∼4 × 10^7 M_☉, and an age of ≲220 Myr (95% confidence). The intrinsic apparent magnitude is 29.9 AB (F160W), and the rest-frame UV (∼1500 Å) absolute magnitude is M_(UV,AB) = −17.6. This corresponds to ∼0.1 L*_(z=8) (∼0.2 L*_(z=10), adopting dM*/dz ∼ 0.45), making this candidate one of the least luminous galaxies discovered at z ∼ 10.

  1. Nonparametric Identification of Glucose-Insulin Process in IDDM Patient with Multi-meal Disturbance

    Science.gov (United States)

    Bhattacharjee, A.; Sutradhar, A.

    2012-12-01

    Modern closed-loop control of the blood glucose level in a diabetic patient necessarily uses an explicit model of the process. A fixed-parameter full-order or reduced-order model does not characterize the inter-patient and intra-patient parameter variability. This paper deals with a frequency domain nonparametric identification of the nonlinear glucose-insulin process in an insulin dependent diabetes mellitus patient that captures the process dynamics in the presence of uncertainties and parameter variations. An online frequency domain kernel estimation method has been proposed that uses the input-output data from the 19th order first principle model of the patient in the intravenous route. Volterra equations up to second order kernels with an extended input vector for a Hammerstein model are solved online by an adaptive recursive least squares (ARLS) algorithm. The frequency domain kernels are estimated using the harmonic excitation input data sequence from the virtual patient model. A short filter memory length of M = 2 was found sufficient to yield acceptable accuracy with less computation time. The nonparametric models are useful for closed-loop control, where the frequency domain kernels can be directly used as the transfer function. The validation results show good fit both in frequency and time domain responses with the nominal patient as well as with parameter variations.
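
    The recursive least squares core of such an adaptive scheme can be sketched generically. This is the textbook RLS update, assuming the extended regressor vectors (e.g. input samples and their squares for a second-order Hammerstein model) have already been built; it is not the paper's implementation, and the forgetting factor `lam` and initialization `delta` are illustrative choices:

```python
import numpy as np

def rls(phi_seq, y_seq, lam=0.99, delta=100.0):
    """Recursive least squares with exponential forgetting.

    phi_seq : (T, n) array of regressor vectors
    y_seq   : (T,) array of outputs
    Returns the final parameter estimate theta.
    """
    n = phi_seq.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)              # large initial covariance
    for phi, y in zip(phi_seq, y_seq):
        k = P @ phi / (lam + phi @ P @ phi)   # gain vector
        theta = theta + k * (y - phi @ theta)  # correct by prediction error
        P = (P - np.outer(k, phi @ P)) / lam   # covariance update
    return theta
```

    On noise-free data generated from a fixed parameter vector, the estimate converges to that vector.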

  2. Conventional - Frontier and east coast supply

    International Nuclear Information System (INIS)

    Morrell, G.R.

    1998-01-01

    An assessment of frontier basins in Canada with proven potential for petroleum resources was provided. A prediction of which frontier basin will become a major supplier of conventional light oil was made by examining where companies are investing in frontier exploration today. Frontier land values for five active frontier areas were discussed. These included the Grand Banks of Newfoundland, Nova Scotia Offshore, Western Newfoundland, the southern Northwest Territories and the Central Mackenzie Valley. The focus of this presentation was on three of these regions which are actually producing: Newfoundland's Grand Banks, offshore Nova Scotia and the Mackenzie Valley. Activities in each of these areas were reviewed. The Canada-Newfoundland Offshore Petroleum Board has listed Hibernia's reserves at 666 million barrels. The Sable Offshore Energy Project on the continental shelf offshore Nova Scotia proposes to develop 5.4 tcf of gas plus 75 million barrels of NGLs over a project life of 14 years. In the Mackenzie Valley there are at least three petroleum systems, including the 235 million barrel pool at Norman Wells. 8 refs., 1 tab., 3 figs

  3. Seismic Signal Compression Using Nonparametric Bayesian Dictionary Learning via Clustering

    Directory of Open Access Journals (Sweden)

    Xin Tian

    2017-06-01

    We introduce a seismic signal compression method based on nonparametric Bayesian dictionary learning via clustering. The seismic data is compressed patch by patch, and the dictionary is learned online. Clustering is introduced for dictionary learning: a set of dictionaries is generated, and each dictionary is used for one cluster's sparse coding. In this way, the signals in one cluster can be well represented by their corresponding dictionary. A nonparametric Bayesian dictionary learning method is used to learn the dictionaries, which naturally infers an appropriate dictionary size for each cluster. A uniform quantizer and an adaptive arithmetic coding algorithm are adopted to code the sparse coefficients. Comparisons with other state-of-the-art approaches validate the effectiveness of the proposed method in the experiments.
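
    The sparse-coding step that such a scheme relies on can be illustrated with orthogonal matching pursuit over a fixed dictionary. This is a generic sketch of sparse coding, not the paper's method; the Bayesian dictionary learning and arithmetic coding stages are not reproduced:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: represent x with at most k atoms of D.

    D : (m, p) dictionary with columns as atoms
    x : (m,) signal
    Returns a length-p sparse coefficient vector z with D @ z ~ x.
    """
    resid, support = x.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ resid)))  # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        resid = x - D[:, support] @ coef         # re-fit on the support
    z = np.zeros(D.shape[1])
    z[support] = coef
    return z
```

    With an orthonormal dictionary and a k-sparse signal, OMP recovers the signal exactly.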

  4. Using non-parametric methods in econometric production analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    2012-01-01

    Econometric estimation of production functions is one of the most common methods in applied economic production analysis. These studies usually apply parametric estimation techniques, which obligate the researcher to specify a functional form of the production function, of which the Cobb-Douglas and the Translog are common choices. A misspecified functional form results not only in biased parameter estimates, but also in biased measures which are derived from the parameters, such as elasticities. Therefore, we propose to use non-parametric econometric methods. First, these can be applied to verify the functional form used in parametric production analysis. Second, they can be directly used, e.g. by investigating the relationship between the elasticity of scale and the farm size. We use a balanced panel data set of 371 specialised crop farms for the years 2004-2007. A non-parametric specification test shows that neither the Cobb-Douglas function nor the Translog function are consistent with the "true" ...

  5. Exact nonparametric inference for detection of nonlinear determinism

    OpenAIRE

    Luo, Xiaodong; Zhang, Jie; Small, Michael; Moroz, Irene

    2005-01-01

    We propose an exact nonparametric inference scheme for the detection of nonlinear determinism. The essential fact utilized in our scheme is that, for a linear stochastic process with jointly symmetric innovations, its ordinary least square (OLS) linear prediction error is symmetric about zero. Based on this viewpoint, a class of linear signed rank statistics, e.g. the Wilcoxon signed rank statistic, can be derived with the known null distributions from the prediction error. Thus one of the ad...

  6. Comparative Study of Parametric and Non-parametric Approaches in Fault Detection and Isolation

    DEFF Research Database (Denmark)

    Katebi, S.D.; Blanke, M.; Katebi, M.R.

    This report describes a comparative study between two approaches to fault detection and isolation in dynamic systems. The first approach uses a parametric model of the system. The main components of such techniques are residual and signature generation for processing and analyzing. The second approach is non-parametric in the sense that the signature analysis is only dependent on the frequency or time domain information extracted directly from the input-output signals. Based on these approaches, two different fault monitoring schemes are developed where the feature extraction and fault decision

  7. A nonparametric approach to calculate critical micelle concentrations: the local polynomial regression method

    Energy Technology Data Exchange (ETDEWEB)

    Lopez Fontan, J.L.; Costa, J.; Ruso, J.M.; Prieto, G. [Dept. of Applied Physics, Univ. of Santiago de Compostela, Santiago de Compostela (Spain); Sarmiento, F. [Dept. of Mathematics, Faculty of Informatics, Univ. of A Coruna, A Coruna (Spain)

    2004-02-01

    The application of a statistical method, the local polynomial regression method (LPRM), based on a nonparametric estimation of the regression function, to determine the critical micelle concentration (cmc) is presented. The method is extremely flexible because it does not impose any parametric model on the underlying structure of the data but rather allows the data to speak for themselves. Good concordance of cmc values with those obtained by other methods was found for systems in which the variation of a measured physical property with concentration showed an abrupt change. When this variation was slow, discrepancies between the values obtained by LPRM and other methods were found. (orig.)
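
    As an illustration of the LPRM idea, a local linear (degree-1 local polynomial) estimate with a Gaussian kernel can be written in a few lines. The bandwidth `h` is a hypothetical tuning choice, not a value from the paper; the cmc would then be read off where the fitted property-vs-concentration curve changes abruptly:

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Local linear regression estimate of E[y|x] at x0 (Gaussian kernel).

    Fits a weighted straight line around x0; the intercept of that
    line is the fitted value at x0.
    """
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)      # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    beta, *_ = np.linalg.lstsq(X * np.sqrt(w)[:, None],
                               y * np.sqrt(w), rcond=None)
    return beta[0]                              # intercept = value at x0
```

    A useful sanity check: local linear regression reproduces an exactly linear relationship without bias.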

  8. Strong consistency of nonparametric Bayes density estimation on compact metric spaces with applications to specific manifolds.

    Science.gov (United States)

    Bhattacharya, Abhishek; Dunson, David B

    2012-08-01

    This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample size n and sufficient conditions are obtained for weak and strong consistency. These conditions are verified on compact Euclidean spaces using multivariate Gaussian kernels, on the hypersphere using a von Mises-Fisher kernel and on the planar shape space using complex Watson kernels.

  9. The geometry of distributional preferences and a non-parametric identification approach: The Equality Equivalence Test.

    Science.gov (United States)

    Kerschbamer, Rudolf

    2015-05-01

    This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure - the Equality Equivalence Test - that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity.

  10. Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data.

    Science.gov (United States)

    Tan, Qihua; Thomassen, Mads; Burton, Mark; Mose, Kristian Fredløv; Andersen, Klaus Ejner; Hjelmborg, Jacob; Kruse, Torben

    2017-06-06

    Modeling complex time-course patterns is a challenging issue in microarray studies due to the complex gene expression patterns that arise in response to the time-course experiment. We introduce the generalized correlation coefficient and propose a combinatory approach for detecting, testing and clustering heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature of the generalized correlation analysis makes it a useful and efficient tool for analyzing microarray time-course data and for exploring complex relationships in omics data in order to study their association with disease and health.

  11. Investigation of MLE in nonparametric estimation methods of reliability function

    International Nuclear Information System (INIS)

    Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo

    2001-01-01

    There have been many attempts to estimate a reliability function. In the ESReDA 20th seminar, a new nonparametric method was proposed. The major point of that paper is how to use censored data efficiently. Generally, there are three kinds of approaches to estimating a reliability function in a nonparametric way, i.e., the Reduced Sample Method, the Actuarial Method and the Product-Limit (PL) Method. These three methods have some limits, so we suggest an advanced method that reflects censoring information more efficiently. In many instances there will be a unique maximum likelihood estimator (MLE) of an unknown parameter, and often it may be obtained by the process of differentiation. It is well known that the three methods generally used to estimate a reliability function in a nonparametric way have maximum likelihood estimators that exist uniquely. So, the MLE of the new method is derived in this study. The procedure to calculate the MLE is similar to that of the PL estimator. The difference between the two is that in the new method the mass (or weight) of each observation influences the others, whereas in the PL estimator it does not.
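
    For reference, the classical Product-Limit (PL, or Kaplan-Meier) estimator that the new method modifies can be sketched as follows. The censoring convention here (1 = observed failure, 0 = right-censored) is an assumption of this illustration, and the paper's modified weighting is not reproduced:

```python
def product_limit(times, events):
    """Product-limit (Kaplan-Meier) estimate of the reliability function.

    times  : observed times (failure or censoring)
    events : 1 for an observed failure, 0 for right censoring
    Returns (time, R(t)) pairs at the distinct failure times.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    R, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # failures at t
        m = sum(1 for tt, _ in data if tt == t)   # all leaving risk set at t
        if d > 0:
            R *= 1 - d / n_at_risk                # survival drops at failures only
            curve.append((t, R))
        n_at_risk -= m
        i += m
    return curve
```

    For times (1, 2, 3, 4) with the unit at time 3 censored, the estimate steps down at times 1, 2 and 4 only.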

  12. Frontiers, territoriality and tensions in bordering spaces

    Directory of Open Access Journals (Sweden)

    María Eugenia Comerci

    2012-01-01

    The expansion of the agricultural frontier in the Argentine pampas implied a re-valuation of "bordering" spaces, which were considered "marginal" by capital. This paper aims at interpreting the socio-territorial impact, at both a material and a symbolic level, of the expansion of the productive, business-profile (agricultural and oil) frontier in the center-west of the province of La Pampa. With the interpretative approach provided by qualitative methodologies, we intend to analyze, in a case study, how these frontier expansion processes altered and re-defined the social arena, the social construction of space and the power relations in Chos Malal between the years 2000 and 2010.

  13. MEASURING DARK MATTER PROFILES NON-PARAMETRICALLY IN DWARF SPHEROIDALS: AN APPLICATION TO DRACO

    International Nuclear Information System (INIS)

    Jardel, John R.; Gebhardt, Karl; Fabricius, Maximilian H.; Williams, Michael J.; Drory, Niv

    2013-01-01

    We introduce a novel implementation of orbit-based (or Schwarzschild) modeling that allows dark matter density profiles to be calculated non-parametrically in nearby galaxies. Our models require no assumptions to be made about velocity anisotropy or the dark matter profile. The technique can be applied to any dispersion-supported stellar system, and we demonstrate its use by studying the Local Group dwarf spheroidal galaxy (dSph) Draco. We use existing kinematic data at larger radii and also present 12 new radial velocities within the central 13 pc obtained with the VIRUS-W integral field spectrograph on the 2.7 m telescope at McDonald Observatory. Our non-parametric Schwarzschild models find strong evidence that the dark matter profile in Draco is cuspy for 20 ≤ r ≤ 700 pc. The profile for r ≥ 20 pc is well fit by a power law with slope α = –1.0 ± 0.2, consistent with predictions from cold dark matter simulations. Our models confirm that, despite its low baryon content relative to other dSphs, Draco lives in a massive halo.

  14. Nonparametric estimation for censored mixture data with application to the Cooperative Huntington's Observational Research Trial.

    Science.gov (United States)

    Wang, Yuanjia; Garcia, Tanya P; Ma, Yanyuan

    2012-01-01

    This work presents methods for estimating genotype-specific distributions from genetic epidemiology studies where the event times are subject to right censoring, the genotypes are not directly observed, and the data arise from a mixture of scientifically meaningful subpopulations. Examples of such studies include kin-cohort studies and quantitative trait locus (QTL) studies. Current methods for analyzing censored mixture data include two types of nonparametric maximum likelihood estimators (NPMLEs) which do not make parametric assumptions on the genotype-specific density functions. Although both NPMLEs are commonly used, we show that one is inefficient and the other inconsistent. To overcome these deficiencies, we propose three classes of consistent nonparametric estimators which do not assume parametric density models and are easy to implement. They are based on the inverse probability weighting (IPW), augmented IPW (AIPW), and nonparametric imputation (IMP). The AIPW achieves the efficiency bound without additional modeling assumptions. Extensive simulation experiments demonstrate satisfactory performance of these estimators even when the data are heavily censored. We apply these estimators to the Cooperative Huntington's Observational Research Trial (COHORT), and provide age-specific estimates of the effect of mutation in the Huntington gene on mortality using a sample of family members. The close approximation of the estimated non-carrier survival rates to that of the U.S. population indicates small ascertainment bias in the COHORT family sample. Our analyses underscore an elevated risk of death in Huntington gene mutation carriers compared to non-carriers for a wide age range, and suggest that the mutation equally affects survival rates in both genders. 
The estimated survival rates are useful in genetic counseling for providing guidelines on interpreting the risk of death associated with a positive genetic testing, and in facilitating future subjects at risk
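
    The inverse probability weighting (IPW) idea underlying the first estimator class can be sketched in a simplified, non-mixture setting: uncensored failures are reweighted by the inverse of the estimated censoring survival function. This is a generic illustration of IPW under right censoring, not the paper's estimator; the censoring convention (1 = failure, 0 = censored) is assumed:

```python
import numpy as np

def km_survival(times, events):
    """Kaplan-Meier survival curve, returned as a left-continuous evaluator."""
    order = np.argsort(times)
    t, e = np.asarray(times)[order], np.asarray(events)[order]
    surv_times, surv_vals = [], []
    n_risk, s = len(t), 1.0
    i = 0
    while i < len(t):
        j = i
        while j + 1 < len(t) and t[j + 1] == t[i]:
            j += 1
        d = e[i:j + 1].sum()                     # events at this time
        if d > 0:
            s *= 1 - d / n_risk
            surv_times.append(t[i])
            surv_vals.append(s)
        n_risk -= j - i + 1
        i = j + 1

    def S(x):                                    # evaluates S(x-)
        v = 1.0
        for tt, ss in zip(surv_times, surv_vals):
            if tt < x:
                v = ss
        return v
    return S

def ipw_cdf(times, events, t0):
    """IPW estimate of P(T <= t0) under right censoring.

    Each observed failure is weighted by 1/K(T-), where K is the
    Kaplan-Meier estimate of the censoring survival function.
    """
    K = km_survival(times, [1 - e for e in events])  # censoring distribution
    n = len(times)
    return sum(1.0 / K(t) for t, e in zip(times, events)
               if e == 1 and t <= t0) / n
```

    With no censoring the weights are all 1 and the estimate reduces to the empirical CDF, which is a convenient sanity check.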

  15. The Distribution of the Sample Minimum-Variance Frontier

    OpenAIRE

    Raymond Kan; Daniel R. Smith

    2008-01-01

    In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...

  16. AHP 21: Review: China's Last Imperial Frontier and The Sichuan Frontier and Tibet

    Directory of Open Access Journals (Sweden)

    Robert Entenmann

    2012-12-01

    Until recently, historians have not paid much attention to Qing China's Tibetan frontier, but two excellent new studies address this neglect. One is Yingcong Dai's The Sichuan Frontier and Tibet: Imperial Strategy in the Early Qing, which examines the Qing conquest of the Khams region up to the end of the eighteenth century and its effect on Sichuan. The other is China's Last Imperial Frontier: Late Qing Expansion in Sichuan's Tibetan Borderlands by Xiuyu Wang, in which he describes Qing efforts to impose direct administration on the Khams region in the last years of the dynasty. ...

  17. Strategic Military Colonisation: The Cape Eastern Frontier 1806–1872

    African Journals Online (AJOL)

    The Cape Eastern Frontier of South Africa offers a fascinating insight into British military strategy as well as colonial development. The Eastern Frontier was for over 100 years a very turbulent frontier. It was the area where the four main population groups (the Dutch, the British, the Xhosa and the Khoikhoi) met, and in many ...

  18. Estimating the NIH Efficient Frontier

    Science.gov (United States)

    2012-01-01

    Background The National Institutes of Health (NIH) is among the world’s largest investors in biomedical research, with a mandate to: “…lengthen life, and reduce the burdens of illness and disability.” Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions–one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. Methods and Findings Using data from 1965 to 2007, we provide estimates of the NIH “efficient frontier”, the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. Conclusions Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent
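
    The mean-variance machinery underlying such an analysis can be illustrated with the standard closed form for the minimum-variance frontier. The inputs here are generic expected returns and covariances, not the YLL-based estimates of the paper:

```python
import numpy as np

def frontier_variance(mu, Sigma, target):
    """Variance of the minimum-variance portfolio with a target expected return.

    Uses the textbook two-constraint closed form: with a = 1'S^-1 1,
    b = 1'S^-1 mu, c = mu'S^-1 mu, the frontier is
    sigma^2(m) = (a m^2 - 2 b m + c) / (a c - b^2).
    """
    ones = np.ones(len(mu))
    Si = np.linalg.inv(Sigma)
    a = ones @ Si @ ones
    b = ones @ Si @ mu
    c = mu @ Si @ mu
    return (a * target**2 - 2 * b * target + c) / (a * c - b**2)
```

    A standard identity provides a check: at the global minimum-variance return m = b/a, the frontier variance equals 1/a.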

  19. Bayesian Nonparametric Regression Analysis of Data with Random Effects Covariates from Longitudinal Measurements

    KAUST Repository

    Ryu, Duchwan

    2010-09-28

    We consider nonparametric regression analysis in a generalized linear model (GLM) framework for data with covariates that are the subject-specific random effects of longitudinal measurements. The usual assumption that the effects of the longitudinal covariate processes are linear in the GLM may be unrealistic and if this happens it can cast doubt on the inference of observed covariate effects. Allowing the regression functions to be unknown, we propose to apply Bayesian nonparametric methods including cubic smoothing splines or P-splines for the possible nonlinearity and use an additive model in this complex setting. To improve computational efficiency, we propose the use of data-augmentation schemes. The approach allows flexible covariance structures for the random effects and within-subject measurement errors of the longitudinal processes. The posterior model space is explored through a Markov chain Monte Carlo (MCMC) sampler. The proposed methods are illustrated and compared to other approaches, the "naive" approach and the regression calibration, via simulations and by an application that investigates the relationship between obesity in adulthood and childhood growth curves. © 2010, The International Biometric Society.

  20. Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification.

    Science.gov (United States)

    Fan, Jianqing; Feng, Yang; Jiang, Jiancheng; Tong, Xin

    We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing.

  1. Comparative analysis of automotive paints by laser induced breakdown spectroscopy and nonparametric permutation tests

    International Nuclear Information System (INIS)

    McIntee, Erin; Viglino, Emilie; Rinke, Caitlin; Kumor, Stephanie; Ni, Liqiang; Sigman, Michael E.

    2010-01-01

    Laser-induced breakdown spectroscopy (LIBS) has been investigated for the discrimination of automobile paint samples. Paint samples from automobiles of different makes, models, and years were collected and separated into sets based on color, the presence or absence of effect pigments, and the number of paint layers. Twelve LIBS spectra were obtained for each paint sample, each an average of five single-shot 'drill down' spectra from consecutive laser ablations in the same spot on the sample. Analyses by a nonparametric permutation test and a parametric Wald test were performed to determine the extent of discrimination within each set of paint samples. The discrimination power and Type I error were assessed for each data analysis method. Conversion of the spectral intensity to a log-scale (base 10) resulted in a higher overall discrimination power while observing the same significance level. Working on the log-scale, the nonparametric permutation tests gave an overall 89.83% discrimination power with a Type I error of 4.44% at the nominal significance level of 5%. White paint samples, as a group, were the most difficult to differentiate, with a power of only 86.56%, followed by 95.83% for black paint samples. Parametric analysis of the data set produced lower discrimination power (85.17%) with 3.33% Type I errors, and is not recommended on both theoretical and practical grounds. The nonparametric testing method is applicable across many analytical comparisons, with the specific application described here being the pairwise comparison of automotive paint samples.
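
    A generic two-sample permutation test of the kind used for such pairwise comparisons can be sketched as follows. The statistic here is a simple difference of means, whereas the paper's statistic is computed on LIBS spectra; the number of permutations and seed are illustrative:

```python
import random

def permutation_test(a, b, n_perm=2000, seed=0):
    """Two-sample permutation test on |mean(a) - mean(b)|.

    Repeatedly shuffles the pooled sample, recomputes the statistic,
    and returns the proportion of shuffles at least as extreme as
    observed (with the usual add-one correction).
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)
```

    Well-separated samples give a small p-value; heavily overlapping samples give a large one.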

  2. New insights into the stochastic ray production frontier

    DEFF Research Database (Denmark)

    Henningsen, Arne; Bělín, Matěj; Henningsen, Géraldine

    2017-01-01

    The stochastic ray production frontier was developed as an alternative to the traditional output distance function to model production processes with multiple inputs and multiple outputs. Its main advantage over the traditional approach is that it can be used when some output quantities of some o...... important than the existing criticisms: taking logarithms of the polar coordinate angles, non-invariance to units of measurement, and ordering of the outputs. We also give some practical advice on how to address the newly raised issues....

  4. Snowmass Computing Frontier: Computing for the Cosmic Frontier, Astrophysics, and Cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Connolly, A. [Univ. of Washington, Seattle, WA (United States); Habib, S. [Argonne National Lab. (ANL), Lemont, IL (United States); Szalay, A. [Johns Hopkins Univ., Baltimore, MD (United States); Borrill, J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fuller, G. [Univ. of California, San Diego, CA (United States); Gnedin, N. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Heitmann, K. [Argonne National Lab. (ANL), Lemont, IL (United States); Jacobs, D. [Arizona State Univ., Tempe, AZ (United States); Lamb, D. [Univ. of Chicago, IL (United States); Mezzacappa, T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Messer, B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Myers, S. [National Radio Astronomy Observatory, Socorro, NM (United States); Nord, B. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Nugent, P. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); O'Shea, B. [Michigan State Univ., East Lansing, MI (United States); Ricker, P. [Univ. of Illinois, Urbana-Champaign, IL (United States); Schneider, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-11-12

    This document presents (off-line) computing requirements and challenges for Cosmic Frontier science, covering the areas of data management, analysis, and simulations. We invite contributions to extend the range of covered topics and to enhance the current descriptions.

  5. Resources for Teaching about the Frontier.

    Science.gov (United States)

    Seiter, David

    1988-01-01

    Highlights materials relating to the U.S. frontier during the 19th century by citing journal articles and documents related to this topic. Indicates the means for obtaining these works which deal with rural schooling, historical demography, Native Americans, music, revivalism, and Black cowboys. (KO)

  6. THE INVISIBLE FRONTIER: THE CURRENT LIMITS OF ...

    African Journals Online (AJOL)

    tions, area-wide or regional development organizations, specialized functional authorities or ... regional or functional development authorities, parastatal organizations, or special project ..... frontiers between urban, peri-urban and rural activity are blurring and merging. .... KOPP, A., 1998. Networking and rural development.

  7. Risk appetite : reaching for the frontier

    NARCIS (Netherlands)

    dr. A.F. de Wild

    2015-01-01

    This poster summarizes the results of a game-based study conducted among 56 Dutch risk management professionals. It investigated whether they were able to manage risks optimally. In the game, 10 optimal playing strategies could be played that together formed an efficient frontier.

  8. Nonlinear science as a fluctuating research frontier

    International Nuclear Information System (INIS)

    He Jihuan

    2009-01-01

    Nonlinear science has had quite a triumph in all conceivable applications in science and technology, especially in high energy physics and nanotechnology. COBE, whose measurements were recognized with the 2006 Nobel Prize in Physics, is probably more closely related to nonlinear science than to the Big Bang theory. Five categories of nonlinear subjects at the research frontier are pointed out.

  9. kruX: matrix-based non-parametric eQTL discovery.

    Science.gov (United States)

    Qi, Jianlong; Asl, Hassan Foroughi; Björkegren, Johan; Michoel, Tom

    2014-01-14

    The Kruskal-Wallis test is a popular non-parametric statistical test for identifying expression quantitative trait loci (eQTLs) from genome-wide data due to its robustness against variations in the underlying genetic model and expression trait distribution, but testing billions of marker-trait combinations one-by-one can become computationally prohibitive. We developed kruX, an algorithm implemented in Matlab, Python and R that uses matrix multiplications to simultaneously calculate the Kruskal-Wallis test statistic for several millions of marker-trait combinations at once. kruX is more than ten thousand times faster than computing associations one-by-one on a typical human dataset. We used kruX and a dataset of more than 500k SNPs and 20k expression traits measured in 102 human blood samples to compare eQTLs detected by the Kruskal-Wallis test to eQTLs detected by the parametric ANOVA and linear model methods. We found that the Kruskal-Wallis test is more robust against data outliers and heterogeneous genotype group sizes and detects a higher proportion of non-linear associations, but is more conservative for calling additive linear associations. kruX enables the use of robust non-parametric methods for massive eQTL mapping without the need for a high-performance computing infrastructure and is freely available from http://krux.googlecode.com.
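The scalar statistic that kruX vectorises can be sketched as follows. This is the textbook Kruskal-Wallis H computed from ranks (no tie correction), not kruX's matrix formulation; `kruskal_wallis_h` is an assumed name.

```python
def kruskal_wallis_h(values, groups):
    """Kruskal-Wallis H statistic from ranks, assuming no ties.
    kruX evaluates this quantity for millions of marker-trait pairs at
    once via matrix products; here it is computed for a single pair."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    ranks = [0.0] * n
    for r, i in enumerate(order):
        ranks[i] = r + 1.0  # 1-based ranks
    h = 0.0
    for g in set(groups):
        rs = [ranks[i] for i in range(n) if groups[i] == g]
        mean_rank = sum(rs) / len(rs)
        h += len(rs) * (mean_rank - (n + 1) / 2.0) ** 2
    return 12.0 / (n * (n + 1)) * h
```

The matrix trick amounts to expressing the per-group rank sums as a product of a rank matrix and a group-indicator matrix, so the same arithmetic runs for all marker-trait pairs in one pass.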

  10. Bayesian Nonparametric Measurement of Factor Betas and Clustering with Application to Hedge Fund Returns

    Directory of Open Access Journals (Sweden)

    Urbi Garay

    2016-03-01

    We define a dynamic and self-adjusting mixture of Gaussian Graphical Models to cluster financial returns, and provide a new method for extracting nonparametric estimates of dynamic alphas (excess returns) and betas (relative to a chosen set of explanatory factors) in a multivariate setting. This approach, as well as its outputs, has a dynamic, nonstationary and nonparametric form, which circumvents the problem of model risk and the parametric assumptions that the Kalman filter and other widely used approaches rely on. The by-product of clusters, used for shrinkage and information borrowing, can be of use in determining relationships around specific events. This approach exhibits a smaller root mean squared error than traditionally used benchmarks in financial settings, which we illustrate through simulation. As an illustration, we use hedge fund index data, and find that our estimated alphas are, on average, 0.13% per month (1.6% per year) higher than alphas estimated through ordinary least squares. The approach adapts quickly to abrupt changes in the parameters, as seen in our estimated alphas and betas, which exhibit high volatility, especially in periods that can be identified as times of stressful market events, a reflection of the dynamic positioning of hedge fund portfolio managers.

  11. Impulse response identification with deterministic inputs using non-parametric methods

    International Nuclear Information System (INIS)

    Bhargava, U.K.; Kashyap, R.L.; Goodman, D.M.

    1985-01-01

    This paper addresses the problem of impulse response identification using non-parametric methods. Although the techniques developed herein apply to the truncated, untruncated, and circulant models, we focus on the truncated model, which is useful in certain applications. Two methods of impulse response identification will be presented. The first is based on the minimization of the C_L statistic, which is an estimate of the mean-square prediction error; the second is a Bayesian approach. For both of these methods, we consider the effects of using both the identity matrix and the Laplacian matrix as weights on the energy in the impulse response. In addition, we present a method for estimating the effective length of the impulse response. Estimating the length is particularly important in the truncated case. Finally, we develop a method for estimating the noise variance at the output. Often, prior information on the noise variance is not available, and a good estimate is crucial to the success of estimating the impulse response with a nonparametric technique.

  12. Spin-dependent potentials, axion-like particles and Lorentz-symmetry violation. Beyond the Standard Model phenomenology at the low-energy frontier of physics

    Energy Technology Data Exchange (ETDEWEB)

    Cavalcanti Malta, Pedro

    2017-06-27

    It is well known that the Standard Model is not complete and many of the theories that seek to extend it predict new phenomena that may be accessible in low-energy settings. This thesis deals with some of these, namely, novel spin-dependent interparticle potentials, axion-like particles and Lorentz-symmetry violation. In Part I we discuss the spin-dependent potentials that arise due to the exchange of a topologically massive mediator, and also pursue a comparative study between spin-1/2 and spin-1 sources. In Part II we treat massive axion-like particles that may be copiously produced in core-collapse supernovae, thus leading to a non-standard flux of gamma rays. Using SN 1987A and the fact that after its observation no extra gamma-ray signal was detected, we are able to set robust limits on the parameter space of axion-like particles with masses in the 10 keV - 100 MeV range. Finally, in Part III we investigate the effects of Lorentz-breaking backgrounds in QED. We discuss two scenarios: a modification in the Maxwell sector via the Carroll-Field-Jackiw term and a new non-minimal coupling between electrons and photons. We are able to set upper limits on the coefficients of the backgrounds by using laboratory-based measurements.

  13. A Bayesian Nonparametric Causal Model for Regression Discontinuity Designs

    Science.gov (United States)

    Karabatsos, George; Walker, Stephen G.

    2013-01-01

    The regression discontinuity (RD) design (Thistlewaite & Campbell, 1960; Cook, 2008) provides a framework to identify and estimate causal effects from a non-randomized design. Each subject of a RD design is assigned to the treatment (versus assignment to a non-treatment) whenever her/his observed value of the assignment variable equals or…

  14. Semi-Nonparametric Estimation and Misspecification Testing of Diffusion Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis

    of the estimators and tests under the null are derived, and the power properties are analyzed by considering contiguous alternatives. Test directly comparing the drift and diffusion estimators under the relevant null and alternative are also analyzed. Markov Bootstrap versions of the test statistics are proposed...... to improve on the finite-sample approximations. The finite sample properties of the estimators are examined in a simulation study....

  15. Multilevel Latent Class Analysis: Parametric and Nonparametric Models

    Science.gov (United States)

    Finch, W. Holmes; French, Brian F.

    2014-01-01

    Latent class analysis is an analytic technique often used in educational and psychological research to identify meaningful groups of individuals within a larger heterogeneous population based on a set of variables. This technique is flexible, encompassing not only a static set of variables but also longitudinal data in the form of growth mixture…

  16. Nonparametric Estimation of Distributions in Random Effects Models

    KAUST Repository

    Hart, Jeffrey D.; Cañ ette, Isabel

    2011-01-01

    to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article

  17. Using non-parametric methods in econometric production analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    Econometric estimation of production functions is one of the most common methods in applied economic production analysis. These studies usually apply parametric estimation techniques, which obligate the researcher to specify the functional form of the production function. Most often, the Cobb...... results—including measures that are of interest of applied economists, such as elasticities. Therefore, we propose to use nonparametric econometric methods. First, they can be applied to verify the functional form used in parametric estimations of production functions. Second, they can be directly used...

  18. STATCAT, Statistical Analysis of Parametric and Non-Parametric Data

    International Nuclear Information System (INIS)

    David, Hugh

    1990-01-01

    1 - Description of program or function: A suite of 26 programs designed to facilitate the appropriate statistical analysis and data handling of parametric and non-parametric data, using classical and modern univariate and multivariate methods. 2 - Method of solution: Data is read entry by entry, using a choice of input formats, and the resultant data bank is checked for out-of-range, rare, extreme or missing data. The completed STATCAT data bank can be treated by a variety of descriptive and inferential statistical methods, and modified, using other standard programs as required

  19. Categorical and nonparametric data analysis choosing the best statistical technique

    CERN Document Server

    Nussbaum, E Michael

    2014-01-01

    Featuring in-depth coverage of categorical and nonparametric statistics, this book provides a conceptual framework for choosing the most appropriate type of test in various research scenarios. Class tested at the University of Nevada, the book's clear explanations of the underlying assumptions, computer simulations, and Exploring the Concept boxes help reduce reader anxiety. Problems inspired by actual studies provide meaningful illustrations of the techniques. The underlying assumptions of each test and the factors that impact validity and statistical power are reviewed so readers can explain

  20. Nonparametric statistics a step-by-step approach

    CERN Document Server

    Corder, Gregory W

    2014-01-01

    "…a very useful resource for courses in nonparametric statistics in which the emphasis is on applications rather than on theory. It also deserves a place in libraries of all institutions where introductory statistics courses are taught." -CHOICE. This Second Edition presents a practical and understandable approach that enhances and expands the statistical toolset for readers. This book includes: new coverage of the sign test and the Kolmogorov-Smirnov two-sample test in an effort to offer a logical and natural progression to statistical power; SPSS® (Version 21) software and updated screen ca

  1. Evaluation of Nonparametric Probabilistic Forecasts of Wind Power

    DEFF Research Database (Denmark)

    Pinson, Pierre; Møller, Jan Kloppenborg; Nielsen, Henrik Aalborg

    Predictions of wind power production for horizons up to 48-72 hour ahead comprise a highly valuable input to the methods for the daily management or trading of wind generation. Today, users of wind power predictions are not only provided with point predictions, which are estimates of the most...... likely outcome for each look-ahead time, but also with uncertainty estimates given by probabilistic forecasts. In order to avoid assumptions on the shape of predictive distributions, these probabilistic predictions are produced from nonparametric methods, and then take the form of a single or a set...

  2. Frontiere pericolose: l’adattamento Dangerous frontiers: adaptation

    Directory of Open Access Journals (Sweden)

    Sandro Volpe

    2011-04-01

    good adaptation – must push further, dare in diversity. Departing from a text means setting off, travelling, crossing a frontier to land elsewhere. And sometimes, as Truffaut wished, doing «something different, and better».

  3. Facile Preparation of g-C3N4 Modified BiOCl Hybrid Photocatalyst and Vital Role of Frontier Orbital Energy Levels of Model Compounds in Photoactivity Enhancement

    Czech Academy of Sciences Publication Activity Database

    Shi, S.; Gondal, M.A.; Al-Saadi, A.A.; Fajgar, Radek; Kupčík, Jaroslav; Chang, X.; Shen, K.; Xu, Q.; Seddigi, Z.S.

    2014-01-01

    Roč. 416, FEB 15 (2014), s. 212-219 ISSN 0021-9797 Grant - others:KFUPM(CN) MIT11109; KFUPM(CN) MIT11110; NNSFCH(CN) 51172044; NSFJP(CN) BK2011617 Institutional support: RVO:67985858 Keywords : g-C3N4 * BiOCl * hybrid photocatalyst * gaussian 03 program * frontier orbital energy Subject RIV: CH - Nuclear ; Quantum Chemistry Impact factor: 3.368, year: 2014

  4. Progress & Frontiers in PV Performance

    Energy Technology Data Exchange (ETDEWEB)

    Deline, Chris; DiOrio, Nick; Jordan, Dirk; Toor, Fatima

    2016-09-12

    PowerPoint slides for a presentation given at Solar Power International 2016. Presentation includes System Advisor Model (SAM) introduction and battery modeling, bifacial PV modules and modeling, shade modeling and module level power electronics (MLPE), degradation rates, and PVWatts updates and validation.

  5. An improved nonparametric lower bound of species richness via a modified Good-Turing frequency formula.

    Science.gov (United States)

    Chiu, Chun-Huo; Wang, Yi-Ting; Walther, Bruno A; Chao, Anne

    2014-09-01

    It is difficult to accurately estimate species richness if there are many almost undetectable species in a hyper-diverse community. Practically, an accurate lower bound for species richness is preferable to an inaccurate point estimator. The traditional nonparametric lower bound developed by Chao (1984, Scandinavian Journal of Statistics 11, 265-270) for individual-based abundance data uses only the information on the rarest species (the numbers of singletons and doubletons) to estimate the number of undetected species in samples. Applying a modified Good-Turing frequency formula, we derive an approximate formula for the first-order bias of this traditional lower bound. The approximate bias is estimated by using additional information (namely, the numbers of tripletons and quadrupletons). This approximate bias can be corrected, and an improved lower bound is thus obtained. The proposed lower bound is nonparametric in the sense that it is universally valid for any species abundance distribution. A similar type of improved lower bound can be derived for incidence data. We test our proposed lower bounds on simulated data sets generated from various species abundance models. Simulation results show that the proposed lower bounds always reduce bias over the traditional lower bounds and improve accuracy (as measured by mean squared error) when the heterogeneity of species abundances is relatively high. We also apply the proposed new lower bounds to real data for illustration and for comparisons with previously developed estimators. © 2014, The International Biometric Society.
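The traditional Chao (1984) lower bound that the record above improves on can be sketched from its well-known formula. `chao1` is an assumed name, and the bias-corrected branch used when no doubletons are observed is one common variant; the improved bound's correction from tripleton and quadrupleton counts is not reproduced here.

```python
def chao1(abundances):
    """Traditional Chao (1984) nonparametric lower bound for species
    richness.  `abundances` holds the observed count of each detected
    species; only the singleton and doubleton frequencies are used to
    estimate the number of undetected species."""
    s_obs = sum(1 for a in abundances if a > 0)
    f1 = sum(1 for a in abundances if a == 1)  # singletons
    f2 = sum(1 for a in abundances if a == 2)  # doubletons
    if f2 > 0:
        return s_obs + f1 * f1 / (2.0 * f2)
    # bias-corrected variant commonly used when no doubletons occur
    return s_obs + f1 * (f1 - 1) / 2.0
```

The improved lower bound adds a first-order bias correction estimated from the tripleton and quadrupleton counts on top of this quantity.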

  6. Parametric and nonparametric Granger causality testing: Linkages between international stock markets

    Science.gov (United States)

    De Gooijer, Jan G.; Sivarajasingham, Selliah

    2008-04-01

    This study investigates long-term linear and nonlinear causal linkages among eleven stock markets, six industrialized markets and five emerging markets of South-East Asia. We cover the period 1987-2006, taking into account the onset of the Asian financial crisis of 1997. We first apply a test for the presence of general nonlinearity in vector time series. Substantial differences exist between the pre- and post-crisis period in terms of the total number of significant nonlinear relationships. We then examine both periods, using a new nonparametric test for Granger noncausality and the conventional parametric Granger noncausality test. One major finding is that the Asian stock markets have become more internationally integrated after the Asian financial crisis. An exception is the Sri Lankan market with almost no significant long-term linear and nonlinear causal linkages with other markets. To ensure that any causality is strictly nonlinear in nature, we also examine the nonlinear causal relationships of VAR filtered residuals and VAR filtered squared residuals for the post-crisis sample. We find quite a few remaining significant bi- and uni-directional causal nonlinear relationships in these series. Finally, after filtering the VAR-residuals with GARCH-BEKK models, we show that the nonparametric test statistics are substantially smaller in both magnitude and statistical significance than those before filtering. This indicates that nonlinear causality can, to a large extent, be explained by simple volatility effects.

  7. Bayesian nonparametric inference on quantile residual life function: Application to breast cancer data.

    Science.gov (United States)

    Park, Taeyoung; Jeong, Jong-Hyeon; Lee, Jae Won

    2012-08-15

    There is often an interest in estimating a residual life function as a summary measure of survival data. For ease in presentation of the potential therapeutic effect of a new drug, investigators may summarize survival data in terms of the remaining life years of patients. Under heavy right censoring, however, some reasonably high quantiles (e.g., median) of a residual lifetime distribution cannot be always estimated via a popular nonparametric approach on the basis of the Kaplan-Meier estimator. To overcome the difficulties in dealing with heavily censored survival data, this paper develops a Bayesian nonparametric approach that takes advantage of a fully model-based but highly flexible probabilistic framework. We use a Dirichlet process mixture of Weibull distributions to avoid strong parametric assumptions on the unknown failure time distribution, making it possible to estimate any quantile residual life function under heavy censoring. Posterior computation through Markov chain Monte Carlo is straightforward and efficient because of conjugacy properties and partial collapse. We illustrate the proposed methods by using both simulated data and heavily censored survival data from a recent breast cancer clinical trial conducted by the National Surgical Adjuvant Breast and Bowel Project. Copyright © 2012 John Wiley & Sons, Ltd.

  8. Transformation-invariant and nonparametric monotone smooth estimation of ROC curves.

    Science.gov (United States)

    Du, Pang; Tang, Liansheng

    2009-01-30

    When a new diagnostic test is developed, it is of interest to evaluate its accuracy in distinguishing diseased subjects from non-diseased subjects. The accuracy of the test is often evaluated by receiver operating characteristic (ROC) curves. Smooth ROC estimates are often preferable for continuous test results when the underlying ROC curves are in fact continuous. Nonparametric and parametric methods have been proposed by various authors to obtain smooth ROC curve estimates. However, there are certain drawbacks with the existing methods. Parametric methods need specific model assumptions. Nonparametric methods do not always satisfy the inherent properties of the ROC curves, such as monotonicity and transformation invariance. In this paper we propose a monotone spline approach to obtain smooth monotone ROC curves. Our method ensures important inherent properties of the underlying ROC curves, which include monotonicity, transformation invariance, and boundary constraints. We compare the finite sample performance of the newly proposed ROC method with other ROC smoothing methods in large-scale simulation studies. We illustrate our method through a real life example. Copyright (c) 2008 John Wiley & Sons, Ltd.
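The monotonicity constraint discussed above can be illustrated with the simplest monotone fitting procedure, the pool-adjacent-violators algorithm (PAVA). Note that the paper itself uses monotone splines; this block is only a sketch of how a monotone estimate is enforced, and `pava` is an assumed name.

```python
def pava(y):
    """Pool-Adjacent-Violators: the least-squares nondecreasing fit to
    the sequence y.  Adjacent blocks are merged (and replaced by their
    weighted mean) whenever they violate the ordering, which is the
    basic device behind monotone ROC smoothing."""
    blocks = []  # each block holds [sum_of_values, count]
    for v in y:
        blocks.append([float(v), 1])
        # merge while a block mean exceeds the mean of its successor
        while len(blocks) > 1 and (blocks[-2][0] * blocks[-1][1]
                                   > blocks[-1][0] * blocks[-2][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit
```

Applied to empirical true-positive rates ordered by false-positive rate, the fitted sequence is a monotone ROC estimate; a spline-based approach additionally yields smoothness and transformation invariance.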

  9. Triangles in ROC space: History and theory of "nonparametric" measures of sensitivity and response bias.

    Science.gov (United States)

    Macmillan, N A; Creelman, C D

    1996-06-01

    Can accuracy and response bias in two-stimulus, two-response recognition or detection experiments be measured nonparametrically? Pollack and Norman (1964) answered this question affirmatively for sensitivity, Hodos (1970) for bias: Both proposed measures based on triangular areas in receiver-operating characteristic space. Their papers, and especially a paper by Grier (1971) that provided computing formulas for the measures, continue to be heavily cited in a wide range of content areas. In our sample of articles, most authors described triangle-based measures as making fewer assumptions than measures associated with detection theory. However, we show that statistics based on products or ratios of right triangle areas, including a recently proposed bias index and a not-yet-proposed but apparently plausible sensitivity index, are consistent with a decision process based on logistic distributions. Even the Pollack and Norman measure, which is based on non-right triangles, is approximately logistic for low values of sensitivity. Simple geometric models for sensitivity and bias are not nonparametric, even if their implications are not acknowledged in the defining publications.
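The triangle-based indices discussed above are commonly quoted in the following computing forms. This is a sketch under standard textbook conventions (hit and false-alarm rates as arguments, with the symmetric branch for hit < fa); the function names are assumptions.

```python
def a_prime(hit, fa):
    """Pollack and Norman's area-based sensitivity index A'.
    The hit < fa case uses the usual symmetric convention."""
    if hit == fa:
        return 0.5
    if hit < fa:
        return 1.0 - a_prime(fa, hit)
    return 0.5 + (hit - fa) * (1.0 + hit - fa) / (4.0 * hit * (1.0 - fa))

def b_double_prime(hit, fa):
    """Grier's bias index B'' in its commonly quoted form (hit >= fa)."""
    num = hit * (1.0 - hit) - fa * (1.0 - fa)
    den = hit * (1.0 - hit) + fa * (1.0 - fa)
    return 0.0 if den == 0.0 else num / den
```

As the abstract argues, these formulas are not assumption-free: they behave like indices derived from a logistic decision model rather than truly nonparametric measures.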

  10. 1st Conference of the International Society for Nonparametric Statistics

    CERN Document Server

    Lahiri, S; Politis, Dimitris

    2014-01-01

    This volume is composed of peer-reviewed papers that have developed from the First Conference of the International Society for NonParametric Statistics (ISNPS). This inaugural conference took place in Chalkidiki, Greece, June 15-19, 2012. It was organized with the co-sponsorship of the IMS, the ISI, and other organizations. M.G. Akritas, S.N. Lahiri, and D.N. Politis are the first executive committee members of ISNPS, and the editors of this volume. ISNPS has a distinguished Advisory Committee that includes Professors R.Beran, P.Bickel, R. Carroll, D. Cook, P. Hall, R. Johnson, B. Lindsay, E. Parzen, P. Robinson, M. Rosenblatt, G. Roussas, T. SubbaRao, and G. Wahba. The Charting Committee of ISNPS consists of more than 50 prominent researchers from all over the world.   The chapters in this volume bring forth recent advances and trends in several areas of nonparametric statistics. In this way, the volume facilitates the exchange of research ideas, promotes collaboration among researchers from all over the wo...

  11. Nonparametric Analyses of Log-Periodic Precursors to Financial Crashes

    Science.gov (United States)

    Zhou, Wei-Xing; Sornette, Didier

    We apply two nonparametric methods to further test the hypothesis that log-periodicity characterizes the detrended price trajectory of large financial indices prior to financial crashes or strong corrections. The term "parametric" refers here to the use of the log-periodic power law formula to fit the data; in contrast, "nonparametric" refers to the use of general tools such as Fourier transform, and in the present case the Hilbert transform and the so-called (H, q)-analysis. The analysis using the (H, q)-derivative is applied to seven time series ending with the October 1987 crash, the October 1997 correction and the April 2000 crash of the Dow Jones Industrial Average (DJIA), the Standard & Poor 500 and Nasdaq indices. The Hilbert transform is applied to two detrended price time series in terms of the ln(tc-t) variable, where tc is the time of the crash. Taking all results together, we find strong evidence for a universal fundamental log-frequency f=1.02±0.05 corresponding to the scaling ratio λ=2.67±0.12. These values are in very good agreement with those obtained in earlier works with different parametric techniques. This note is extracted from a long unpublished report with 58 figures available at , which extensively describes the evidence we have accumulated on these seven time series, in particular by presenting all relevant details so that the reader can judge for himself or herself the validity and robustness of the results.

  12. Application of nonparametric statistics to material strength/reliability assessment

    International Nuclear Information System (INIS)

    Arai, Taketoshi

    1992-01-01

    Advanced material technology requires a database on a wide variety of material behaviors, which needs to be established experimentally. It may often happen that experiments are practically limited in terms of reproducibility or the range of test parameters. Statistical methods can be applied to quantify such uncertainties to the extent required from the reliability point of view. Statistical assessment involves determination of a most probable value and of the maximum and/or minimum value as a one-sided or two-sided confidence limit. A scatter of test data can be approximated by a theoretical distribution only if the goodness of fit satisfies a test criterion. Alternatively, nonparametric statistics (NPS), or distribution-free statistics, can be applied. Mathematical procedures in NPS, which handle only the order statistics of a sample, are well established for dealing with most reliability problems. Mathematical formulas and some applications to engineering assessments are described. They include confidence limits of the median, the population coverage of a sample, the required minimum sample size, and confidence limits of fracture probability. These applications demonstrate that nonparametric statistical estimation is useful for logical decision making in cases where large uncertainty exists. (author)
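The order-statistics confidence limits for the median mentioned above can be sketched as follows, using the Binomial(n, 1/2) coverage argument. `median_ci` is an assumed name, and endpoint (open vs. closed interval) conventions vary between texts.

```python
import math

def median_ci(sample, conf=0.95):
    """Distribution-free confidence interval for the population median
    from order statistics.  The interval [x_(j+1), x_(n-j)] covers the
    median with probability 1 - 2*P(Binomial(n, 1/2) <= j), regardless
    of the underlying distribution."""
    x = sorted(sample)
    n = len(x)
    p = [math.comb(n, i) / 2.0 ** n for i in range(n + 1)]
    # widen the symmetric interval until it reaches the target coverage
    for j in range((n - 1) // 2, -1, -1):
        coverage = 1.0 - 2.0 * sum(p[: j + 1])
        if coverage >= conf:
            return x[j], x[n - 1 - j], coverage
    raise ValueError("sample too small for the requested confidence level")
```

For small samples the achievable coverage is coarse (only a few order-statistic pairs exist), which is why such assessments also report the minimum sample size required for a given confidence level.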

  13. Urban frontiers in the global struggle for capital gains

    Directory of Open Access Journals (Sweden)

    Peter Mörtenböck

    2018-05-01

    This article examines different ways in which finance models have become the ruling mode of spatializing relationships, arguing that the ongoing convergence of economic and spatial investment has transformed our environments into heavily contested ‘financescapes’. First, it reflects upon architecture’s capacity to give both material and symbolic form to these processes and considers the impacts this has on the emergence of novel kinds of urban investment frontiers, including luxury brand real estate, free zones, private cities, and urban innovation hubs. Focusing on speculative urban developments in Morocco and the United Arab Emirates, the article then highlights the performative dimension of such building programs: how architectural capital is put to work by actively performing the frontiers of future development. Physically staking out future financial gains, this mode of operation is today becoming increasingly manifested in urban crowdfunding schemes. We argue that, far from promoting new models of civic participation, such schemes are functioning as a testbed for speculation around new patterns of spatial production in which architecture acts less as the flagstaff of capital than as a capital system in itself.

  14. Frontier Fields: Engaging Educators, the Youth, and the Public in Exploring the Cosmic Frontier

    Science.gov (United States)

    Lawton, Brandon L.; Eisenhamer, Bonnie; Smith, Denise A.; Summers, Frank; Darnell, John A.; Ryer, Holly

    2015-01-01

    The Frontier Fields is a multi-cycle program of six deep-field observations of strong-lensing galaxy clusters that will be taken in parallel with six deep 'blank fields.' The three-year long collaborative program is led by observations from NASA's Great Observatories. The observations allow astronomers to look deeper into the universe than ever before, and potentially uncover galaxies that are as much as 100 times fainter than what the telescopes can typically observe. The Frontier Fields science program is ideal for informing audiences about scientific advances and topics in STEM. The study of galaxy properties, statistics, optics, and Einstein's theory of general relativity naturally leverages off of the science returns of the Frontier Fields program. As a result, the Space Telescope Science Institute's Office of Public Outreach (OPO) has initiated an education and public outreach (EPO) project to follow the progress of the Frontier Fields. For over two decades, the Hubble EPO program has sought to bring the wonders of the universe to the education community, the youth, and the public, and engage audiences in the adventure of scientific discovery. Program components include standards-based curriculum-support materials, exhibits and exhibit components, professional development workshops, and direct interactions with scientists. We are also leveraging our new social media strategy to bring the science program to the public in the form of an ongoing blog. The main underpinnings of the program's infrastructure are scientist-educator development teams, partnerships, and an embedded program evaluation component. OPO is leveraging this existing infrastructure to bring the Frontier Fields science program to the education community and the public in a cost-effective way. The Frontier Fields program has just completed its first year. This talk will feature the goals and current status of the Frontier Fields EPO program. We will highlight OPO's strategies and infrastructure.

  15. R-U policy frontiers for health data de-identification

    Science.gov (United States)

    Heatherly, Raymond; Ding, Xiaofeng; Li, Jiuyong; Malin, Bradley A

    2015-01-01

    Objective The Health Insurance Portability and Accountability Act Privacy Rule enables healthcare organizations to share de-identified data via two routes. They can either 1) show re-identification risk is small (e.g., via a formal model, such as k-anonymity) with respect to an anticipated recipient or 2) apply a rule-based policy (i.e., Safe Harbor) that enumerates attributes to be altered (e.g., dates to years). The latter is often invoked because it is interpretable, but it fails to tailor protections to the capabilities of the recipient. The paper shows rule-based policies can be mapped to a utility (U) and re-identification risk (R) space, which can be searched for a collection, or frontier, of policies that systematically trade off between these goals. Methods We extend an algorithm to efficiently compose an R-U frontier using a lattice of policy options. Risk is proportional to the number of patients to which a record corresponds, while utility is proportional to similarity of the original and de-identified distribution. We allow our method to search 20 000 rule-based policies (out of 2^700) and compare the resulting frontier with k-anonymous solutions and Safe Harbor using the demographics of 10 U.S. states. Results The results demonstrate the rule-based frontier 1) consists, on average, of 5000 policies, 2% of which enable better utility with less risk than Safe Harbor and 2) the policies cover a broader spectrum of utility and risk than k-anonymity frontiers. Conclusions R-U frontiers of de-identification policies can be discovered efficiently, allowing healthcare organizations to tailor protections to anticipated needs and trustworthiness of recipients. PMID:25911674
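The record above frames de-identification policy selection as a risk-utility trade-off. A generic Pareto-frontier extraction over (risk, utility) pairs can be sketched as follows; this illustrates only the frontier idea, not the paper's lattice-search algorithm, and `ru_frontier` is a hypothetical name:

```python
def ru_frontier(policies):
    """Return the (risk, utility) points not dominated by any other point.

    A point dominates another if it has lower-or-equal risk AND
    higher-or-equal utility, with at least one strict inequality.
    """
    frontier = []
    for i, (r, u) in enumerate(policies):
        dominated = any(
            (r2 <= r and u2 >= u) and (r2 < r or u2 > u)
            for j, (r2, u2) in enumerate(policies) if j != i
        )
        if not dominated:
            frontier.append((r, u))
    return sorted(frontier)  # sorted by increasing risk
```

On the frontier, accepting more re-identification risk always buys more utility, which is exactly the trade-off the paper asks regulators to make explicit.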

  16. Bayesian nonparametric areal wombling for small-scale maps with an application to urinary bladder cancer data from Connecticut.

    Science.gov (United States)

    Guhaniyogi, Rajarshi

    2017-11-10

    With increasingly abundant spatial data in the form of case counts or rates combined over areal regions (eg, ZIP codes, census tracts, or counties), interest turns to formal identification of difference "boundaries," or barriers on the map, in addition to the estimated statistical map itself. "Boundary" refers to a border that describes vastly disparate outcomes in the adjacent areal units, perhaps caused by latent risk factors. This article focuses on developing a model-based statistical tool, equipped to identify difference boundaries in maps with a small number of areal units, also referred to as small-scale maps. This article proposes a novel and robust nonparametric boundary detection rule based on nonparametric Dirichlet processes, later referred to as Dirichlet process wombling (DPW) rule, by employing Dirichlet process-based mixture models for small-scale maps. Unlike the recently proposed nonparametric boundary detection rules based on false discovery rates, the DPW rule is free of ad hoc parameters, computationally simple, and readily implementable in freely available software for public health practitioners such as JAGS and OpenBUGS and yet provides statistically interpretable boundary detection in small-scale wombling. We offer a detailed simulation study and an application of our proposed approach to a urinary bladder cancer incidence rates dataset between 1990 and 2012 in the 8 counties in Connecticut. Copyright © 2017 John Wiley & Sons, Ltd.

  17. Estimating technical efficiency in the hospital sector with panel data: a comparison of parametric and non-parametric techniques.

    Science.gov (United States)

    Siciliani, Luigi

    2006-01-01

    Policy makers are increasingly interested in developing performance indicators that measure hospital efficiency. These indicators may give the purchasers of health services an additional regulatory tool to contain health expenditure. Using panel data, this study compares different parametric (econometric) and non-parametric (linear programming) techniques for the measurement of a hospital's technical efficiency. This comparison was made using a sample of 17 Italian hospitals in the years 1996-9. Highest correlations are found in the efficiency scores between the non-parametric data envelopment analysis under the constant returns to scale assumption (DEA-CRS) and several parametric models. Correlation reduces markedly when using more flexible non-parametric specifications such as data envelopment analysis under the variable returns to scale assumption (DEA-VRS) and the free disposal hull (FDH) model. Correlation also generally reduces when moving from one output to two-output specifications. This analysis suggests that there is scope for developing performance indicators at hospital level using panel data, but it is important that extensive sensitivity analysis is carried out if purchasers wish to make use of these indicators in practice.
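For intuition about the DEA-CRS scores compared above: in the special case of one input and one output, the input-oriented CRS efficiency reduces to each unit's output/input ratio relative to the best observed ratio. This sketch covers only that special case; the study's multi-input DEA models solve a linear program per hospital, and `dea_crs_efficiency` is a hypothetical name:

```python
def dea_crs_efficiency(inputs, outputs):
    """Input-oriented DEA efficiency under constant returns to scale,
    single-input single-output case: each unit's output/input ratio
    divided by the best observed ratio (1.0 = on the frontier)."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```

A unit scoring 0.5 could, on the frontier, produce its output with half its input, which is the sense in which DEA scores are "radial" input contractions.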

  18. CADDIS Volume 4. Data Analysis: PECBO Appendix - R Scripts for Non-Parametric Regressions

    Science.gov (United States)

    Script for computing nonparametric regression analysis. Overview of using scripts to infer environmental conditions from biological observations, statistically estimating species-environment relationships, statistical scripts.
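The CADDIS scripts themselves are in R; as a language-neutral illustration of the kind of nonparametric regression they perform, a Nadaraya-Watson kernel smoother can be sketched in a few lines (this is a generic smoother, not the CADDIS code; `nw_regress` is a hypothetical name):

```python
from math import exp

def nw_regress(x_train, y_train, x0, bandwidth):
    """Nadaraya-Watson kernel regression: a locally weighted average of
    the responses, with Gaussian weights centred at the query point x0."""
    w = [exp(-0.5 * ((x - x0) / bandwidth) ** 2) for x in x_train]
    return sum(wi * yi for wi, yi in zip(w, y_train)) / sum(w)
```

No functional form is assumed for the species-environment relationship; the bandwidth alone controls how local the fit is.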

  19. Frontiers in particle science and technology

    International Nuclear Information System (INIS)

    Goddard, D.T.; Lawson, S.; Williams, R.A.

    2002-07-01

    The study of particulate materials and interfaces is a dominant discipline within chemical, pharmaceutical, biological, mineral, energy, consumer and healthcare products sectors. The role is set to expand with advances in engineered particulates, nanoscience and innovations in materials science and processing. This book addresses some key issues in these new frontiers for the research and industrial community. Such issues will continue to impact the quality of our everyday lives

  20. Efficient Frontier - Comparing Different Volatility Estimators

    OpenAIRE

    Tea Poklepović; Zdravka Aljinović; Mario Matković

    2015-01-01

    Modern Portfolio Theory (MPT) according to Markowitz states that investors form mean-variance efficient portfolios which maximize their utility. Markowitz proposed the standard deviation as a simple measure for portfolio risk and the lower semi-variance as the only risk measure of interest to rational investors. This paper uses a third volatility estimator based on intraday data and compares three efficient frontiers on the Croatian Stock Market. The results show that ra...
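The truncated abstract does not name its intraday estimator, so as a hedged illustration of a range-based volatility estimator of the kind it compares, here is Parkinson's high-low estimator, sigma^2 = mean(ln(H/L)^2) / (4 ln 2); `parkinson_vol` is a hypothetical name:

```python
from math import log, sqrt

def parkinson_vol(highs, lows):
    """Parkinson (1980) range-based volatility per period:
    sigma^2 = (1 / (4 ln 2)) * mean( ln(high/low)^2 )."""
    n = len(highs)
    return sqrt(sum(log(h / l) ** 2 for h, l in zip(highs, lows))
                / (4 * log(2) * n))
```

Because it uses the whole intraday range rather than only closing prices, it is considerably more efficient than the close-to-close standard deviation under an assumed driftless diffusion.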

  1. Bayesian Nonparametric Mixture Estimation for Time-Indexed Functional Data in R

    Directory of Open Access Journals (Sweden)

    Terrance D. Savitsky

    2016-08-01

    Full Text Available We present growfunctions for R that offers Bayesian nonparametric estimation models for analysis of dependent, noisy time series data indexed by a collection of domains. This data structure arises from combining periodically published government survey statistics, such as are reported in the Current Population Study (CPS). The CPS publishes monthly, by-state estimates of employment levels, where each state expresses a noisy time series. Published state-level estimates from the CPS are composed from household survey responses in a model-free manner and express high levels of volatility due to insufficient sample sizes. Existing software solutions borrow information over a modeled time-based dependence to extract a de-noised time series for each domain. These solutions, however, ignore the dependence among the domains that may be additionally leveraged to improve estimation efficiency. The growfunctions package offers two fully nonparametric mixture models that simultaneously estimate both a time and domain-indexed dependence structure for a collection of time series: (1) A Gaussian process (GP) construction, which is parameterized through the covariance matrix, estimates a latent function for each domain. The covariance parameters of the latent functions are indexed by domain under a Dirichlet process prior that permits estimation of the dependence among functions across the domains; (2) An intrinsic Gaussian Markov random field prior construction provides an alternative to the GP that expresses different computation and estimation properties. In addition to performing denoised estimation of latent functions from published domain estimates, growfunctions allows estimation of collections of functions for observation units (e.g., households), rather than aggregated domains, by accounting for an informative sampling design under which the probabilities for inclusion of observation units are related to the response variable. growfunctions includes plot

  2. Spurious Seasonality Detection: A Non-Parametric Test Proposal

    Directory of Open Access Journals (Sweden)

    Aurelio F. Bariviera

    2018-01-01

    Full Text Available This paper offers a general and comprehensive definition of the day-of-the-week effect. Using symbolic dynamics, we develop a unique test based on ordinal patterns in order to detect it. This test uncovers the fact that the so-called “day-of-the-week” effect is partly an artifact of the hidden correlation structure of the data. We present simulations based on artificial time series as well. While time series generated with long memory are prone to exhibit daily seasonality, pure white noise signals exhibit no pattern preference. Since ours is a non-parametric test, it requires no assumptions about the distribution of returns, so that it could be a practical alternative to conventional econometric tests. We also made an exhaustive application of the here-proposed technique to 83 stock indexes around the world. Finally, the paper highlights the relevance of symbolic analysis in economic time series studies.
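The symbolic-dynamics machinery in the test above rests on ordinal (permutation) patterns: each window of the series is replaced by the permutation that sorts it. A minimal sketch of the symbolization step (the counting step only, not the paper's test statistic; names are hypothetical):

```python
from itertools import permutations

def ordinal_pattern(window):
    """Rank pattern of a window: the index permutation that sorts it."""
    return tuple(sorted(range(len(window)), key=window.__getitem__))

def pattern_counts(series, order=3):
    """Count ordinal patterns of the given order over all sliding windows,
    the building block of symbolic-dynamics seasonality tests."""
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(series) - order + 1):
        counts[ordinal_pattern(series[i:i + order])] += 1
    return counts
```

Under an i.i.d. null all patterns of a given order are equally likely, so systematic over-representation of some patterns on particular weekdays is evidence of the day-of-the-week effect, without any distributional assumption on returns.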

  3. Nonparametric estimation of stochastic differential equations with sparse Gaussian processes.

    Science.gov (United States)

    García, Constantino A; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G

    2017-08-01

    The application of stochastic differential equations (SDEs) to the analysis of temporal data has attracted increasing attention, due to their ability to describe complex dynamics with physically interpretable equations. In this paper, we introduce a nonparametric method for estimating the drift and diffusion terms of SDEs from a densely observed discrete time series. The use of Gaussian processes as priors permits working directly in a function-space view and thus the inference takes place directly in this space. To cope with the computational complexity that requires the use of Gaussian processes, a sparse Gaussian process approximation is provided. This approximation permits the efficient computation of predictions for the drift and diffusion terms by using a distribution over a small subset of pseudosamples. The proposed method has been validated using both simulated data and real data from economy and paleoclimatology. The application of the method to real data demonstrates its ability to capture the behavior of complex systems.
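The paper above uses sparse Gaussian processes; a far cruder histogram (binned) estimator conveys the same underlying idea, that the drift is the mean increment per unit time conditional on the current state. A minimal sketch on a simulated Ornstein-Uhlenbeck process (an illustration of the estimation target, not the authors' GP method; all names are hypothetical):

```python
import random
from math import sqrt

def simulate_ou(theta, sigma, dt, n_steps, rng):
    """Euler-Maruyama path of the Ornstein-Uhlenbeck SDE
    dX = -theta * X dt + sigma dW, started at zero."""
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        x += -theta * x * dt + sigma * sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def binned_drift(path, dt, lo, hi):
    """Histogram drift estimate: the mean increment per unit time,
    conditional on the state lying in the bin [lo, hi)."""
    incs = [(b - a) / dt for a, b in zip(path, path[1:]) if lo <= a < hi]
    return sum(incs) / len(incs)
```

With theta = 1 the true drift near x = 0.3 is -0.3, and a densely observed path recovers roughly that value; GP priors replace the bins with a smooth function-space estimate.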

  4. Debt and growth: A non-parametric approach

    Science.gov (United States)

    Brida, Juan Gabriel; Gómez, David Matesanz; Seijas, Maria Nela

    2017-11-01

    In this study, we explore the dynamic relationship between public debt and economic growth by using a non-parametric approach based on data symbolization and clustering methods. The study uses annual data of general government consolidated gross debt-to-GDP ratio and gross domestic product for sixteen countries between 1977 and 2015. Using symbolic sequences, we introduce a notion of distance between the dynamical paths of different countries. Then, a Minimal Spanning Tree and a Hierarchical Tree are constructed from time series to help detecting the existence of groups of countries sharing similar economic performance. The main finding of the study appears for the period 2008-2016 when several countries surpassed the 90% debt-to-GDP threshold. During this period, three groups (clubs) of countries are obtained: high, mid and low indebted countries, suggesting that the employed debt-to-GDP threshold drives economic dynamics for the selected countries.
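Given the inter-country distance matrix described above, the Minimal Spanning Tree can be built with Prim's algorithm. A minimal sketch (standard MST construction, not the paper's symbolization or distance definition; `minimal_spanning_tree` is a hypothetical name):

```python
def minimal_spanning_tree(dist):
    """Prim's algorithm on a full symmetric distance matrix.
    Returns the MST as a list of (i, j, d) edges grown from node 0."""
    n = len(dist)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # cheapest edge from the current tree to any outside node
        i, j = min(((a, b) for a in in_tree for b in range(n)
                    if b not in in_tree), key=lambda e: dist[e[0]][e[1]])
        edges.append((i, j, dist[i][j]))
        in_tree.add(j)
    return edges
```

Cutting the longest MST edges (or, equivalently, thresholding the associated hierarchical tree) is what separates the high-, mid- and low-indebted clubs of countries.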

  5. Exact nonparametric confidence bands for the survivor function.

    Science.gov (United States)

    Matthews, David

    2013-10-12

    A method to produce exact simultaneous confidence bands for the empirical cumulative distribution function that was first described by Owen, and subsequently corrected by Jager and Wellner, is the starting point for deriving exact nonparametric confidence bands for the survivor function of any positive random variable. We invert a nonparametric likelihood test of uniformity, constructed from the Kaplan-Meier estimator of the survivor function, to obtain simultaneous lower and upper bands for the function of interest with specified global confidence level. The method involves calculating a null distribution and associated critical value for each observed sample configuration. However, Noe recursions and the Van Wijngaarden-Decker-Brent root-finding algorithm provide the necessary tools for efficient computation of these exact bounds. Various aspects of the effect of right censoring on these exact bands are investigated, using as illustrations two observational studies of survival experience among non-Hodgkin's lymphoma patients and a much larger group of subjects with advanced lung cancer enrolled in trials within the North Central Cancer Treatment Group. Monte Carlo simulations confirm the merits of the proposed method of deriving simultaneous interval estimates of the survivor function across the entire range of the observed sample. This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada. It was begun while the author was visiting the Department of Statistics, University of Auckland, and completed during a subsequent sojourn at the Medical Research Council Biostatistics Unit in Cambridge. The support of both institutions, in addition to that of NSERC and the University of Waterloo, is greatly appreciated.
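The bands above are built around the Kaplan-Meier estimator, which is itself short enough to sketch. This is the point estimator only, not the exact band construction (no Noe recursions or root finding); `kaplan_meier` is a hypothetical name:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survivor function.
    times: observed times; events: 1 = event, 0 = right-censored.
    Returns (time, S(time)) pairs at each distinct event time."""
    data = sorted(zip(times, events))
    n_at_risk, s, curve = len(data), 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # events at time t
        m = sum(1 for tt, _ in data if tt == t)   # all leaving the risk set at t
        if d > 0:
            s *= 1.0 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= m
        i += m
    return curve
```

Censored observations shrink the risk set without forcing a step down, which is why the estimator is the natural input for the uniformity test the bands invert.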

  6. Hyperspectral image segmentation using a cooperative nonparametric approach

    Science.gov (United States)

    Taher, Akar; Chehdi, Kacem; Cariou, Claude

    2013-10-01

    In this paper a new unsupervised nonparametric cooperative and adaptive hyperspectral image segmentation approach is presented. The hyperspectral images are partitioned band by band in parallel and intermediate classification results are evaluated and fused, to get the final segmentation result. Two unsupervised nonparametric segmentation methods are used in parallel cooperation, namely the Fuzzy C-means (FCM) method, and the Linde-Buzo-Gray (LBG) algorithm, to segment each band of the image. The originality of the approach relies firstly on its local adaptation to the type of regions in an image (textured, non-textured), and secondly on the introduction of several levels of evaluation and validation of intermediate segmentation results before obtaining the final partitioning of the image. For the management of similar or conflicting results issued from the two classification methods, we gradually introduced various assessment steps that exploit the information of each spectral band and its adjacent bands, and finally the information of all the spectral bands. In our approach, the detected textured and non-textured regions are treated separately from the feature extraction step up to the final classification results. This approach was first evaluated on a large number of monocomponent images constructed from the Brodatz album. Then it was evaluated on two real applications, using, respectively, a multispectral image for cedar tree detection in the region of Baabdat (Lebanon) and a hyperspectral image for identification of invasive and non-invasive vegetation in the region of Cieza (Spain). A correct classification rate (CCR) for the first application is over 97% and for the second application the average correct classification rate (ACCR) is over 99%.

  7. On the use of permutation in and the performance of a class of nonparametric methods to detect differential gene expression.

    Science.gov (United States)

    Pan, Wei

    2003-07-22

    Recently a class of nonparametric statistical methods, including the empirical Bayes (EB) method, the significance analysis of microarray (SAM) method and the mixture model method (MMM), have been proposed to detect differential gene expression for replicated microarray experiments conducted under two conditions. All the methods depend on constructing a test statistic Z and a so-called null statistic z. The null statistic z is used to provide some reference distribution for Z such that statistical inference can be accomplished. A common way of constructing z is to apply Z to randomly permuted data. Here we point out that the distribution of z may not approximate the null distribution of Z well, leading to possibly too conservative inference. This observation may apply to other permutation-based nonparametric methods. We propose a new method of constructing a null statistic that aims to estimate the null distribution of a test statistic directly. Using simulated data and real data, we assess and compare the performance of the existing method and our new method when applied in EB, SAM and MMM. Some interesting findings on operating characteristics of EB, SAM and MMM are also reported. Finally, by combining the idea of SAM and MMM, we outline a simple nonparametric method based on the direct use of a test statistic and a null statistic.
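For contrast with the approximate permutation nulls criticized above, here is the exact (fully enumerated) two-sample permutation test in its simplest form, feasible only for small samples. This is a generic illustration of permutation-based nulls, not the EB/SAM/MMM statistics of the paper; `exact_permutation_pvalue` is a hypothetical name:

```python
from itertools import combinations

def exact_permutation_pvalue(group_a, group_b):
    """Exact two-sided permutation test for a difference in means: the
    null distribution is built by enumerating every relabelling of the
    pooled sample rather than by random permutation."""
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    observed = abs(sum(group_a) / n_a - sum(group_b) / len(group_b))
    count = total = 0
    for idx in combinations(range(len(pooled)), n_a):
        a = [pooled[i] for i in idx]
        b = [pooled[i] for i in range(len(pooled)) if i not in idx]
        diff = abs(sum(a) / len(a) - sum(b) / len(b))
        if diff >= observed - 1e-12:   # tolerance for float ties
            count += 1
        total += 1
    return count / total
```

With thousands of genes, exact enumeration is impossible, which is why microarray methods fall back on sampled permutations and why the quality of the resulting null matters.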

  8. Modeling critical episodes of air pollution by PM10 in Santiago, Chile: Comparison of the predictive efficiency of parametric and non-parametric statistical models

    Directory of Open Access Journals (Sweden)

    Sergio A. Alvarado

    2010-12-01

    Full Text Available Objective: To evaluate the predictive efficiency of parametric and non-parametric statistical models for predicting next-day critical episodes of air pollution by particulate matter (PM10) exceeding the daily air quality standard in Santiago, Chile. Accurate prediction of such episodes would allow the health authorities to apply restrictive measures that reduce their severity and protect the community's health. Methods: We used the PM10 concentrations registered by a station of the MACAM-2 Air Quality Monitoring Network (152 daily observations of 14 variables) together with meteorological information gathered from 2001 to 2004. Parametric Gamma models were fitted using STATA v11, and non-parametric MARS models were fitted using a demo version of MARS v2.0 distributed by Salford Systems. Results: Both modeling approaches show a high correlation between observed and predicted values; the Gamma models yield better hit rates than MARS for the PM10 concentrations considered.

  9. The Hubble Frontier Fields: Engaging Multiple Audiences in Exploring the Cosmic Frontier

    Science.gov (United States)

    Lawton, Brandon L.; Smith, Denise A.; Summers, Frank; Ryer, Holly; Slivinski, Carolyn; Lotz, Jennifer M.

    2017-06-01

    The Hubble Frontier Fields is a multi-cycle program of six deep-field observations of strong-lensing galaxy clusters taken in parallel with six deep “blank fields.” The three-year long collaborative program began in late 2013 and is led by observations from NASA’s Great Observatories. The observations, now complete, allow astronomers to look deeper into the universe than ever before, and potentially uncover galaxies that are as much as 100 times fainter than what the telescopes can typically observe. The Frontier Fields science program is ideal for informing audiences about scientific advances and topics in STEM. The study of galaxy properties, statistics, optics, and Einstein’s theory of general relativity naturally leverages off of the science returns of the Frontier Fields program. As a result, the Space Telescope Science Institute’s Office of Public Outreach (OPO) has engaged multiple audiences over the past three years to follow the progress of the Frontier Fields.For over two decades, the STScI outreach program has sought to bring the wonders of the universe to the public and engage audiences in the adventure of scientific discovery. In addition, we are leveraging the reach of the new NASA’s Universe of Learning education program to bring the science of the Frontier Fields to informal education audiences. The main underpinnings of the STScI outreach program and the Universe of Learning education program are scientist-educator development teams, partnerships, and an embedded program evaluation component. OPO is leveraging the infrastructure of these education and outreach programs to bring the Frontier Fields science program to the education community and the public in a cost-effective way.This talk will feature highlights over the past three years of the program. We will highlight OPO’s strategies and infrastructure that allows for the quick delivery of groundbreaking science to the education community and public.

  10. Nonparametric identification of nonlinear dynamic systems using a synchronisation-based method

    Science.gov (United States)

    Kenderi, Gábor; Fidlin, Alexander

    2014-12-01

    The present study proposes an identification method for highly nonlinear mechanical systems that does not require a priori knowledge of the underlying nonlinearities to reconstruct arbitrary restoring force surfaces between degrees of freedom. This approach is based on the master-slave synchronisation between a dynamic model of the system as the slave and the real system as the master using measurements of the latter. As the model synchronises to the measurements, it becomes an observer of the real system. The optimal observer algorithm in a least-squares sense is given by the Kalman filter. Using the well-known state augmentation technique, the Kalman filter can be turned into a dual state and parameter estimator to identify parameters of a priori characterised nonlinearities. The paper proposes an extension of this technique towards nonparametric identification. A general system model is introduced by describing the restoring forces as bilateral spring-dampers with time-variant coefficients, which are estimated as augmented states. The estimation procedure is followed by an a posteriori statistical analysis to reconstruct noise-free restoring force characteristics using the estimated states and their estimated variances. Observability is provided using only one measured mechanical quantity per degree of freedom, which makes this approach less demanding in the number of necessary measurement signals compared with truly nonparametric solutions, which typically require displacement, velocity and acceleration signals. Additionally, due to the statistical rigour of the procedure, it successfully addresses signals corrupted by significant measurement noise. In the present paper, the method is described in detail, which is followed by numerical examples of one degree of freedom (1DoF) and 2DoF mechanical systems with strong nonlinearities of vibro-impact type to demonstrate the effectiveness of the proposed technique.

  11. Non-parametric PSF estimation from celestial transit solar images using blind deconvolution

    Directory of Open Access Journals (Sweden)

    González Adriana

    2016-01-01

    Full Text Available Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. The measured image in a real optical instrument is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Additionally, the image acquisition process is also contaminated by other sources of noise (read-out, photon-counting). The problem of estimating both the PSF and a denoised image is called blind deconvolution and is ill-posed. Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrary to most methods presented in the literature, our method does not assume a parametric model of the PSF and can thus be applied to any telescope. Methods: Our scheme uses a wavelet analysis prior model on the image and weak assumptions on the PSF. We use observations from a celestial transit, where the occulting body can be assumed to be a black disk. These constraints allow us to retain meaningful solutions for the filter and the image, eliminating trivial, translated, and interchanged solutions. Under an additive Gaussian noise assumption, they also enforce noise canceling and avoid reconstruction artifacts by promoting the whiteness of the residual between the blurred observations and the cleaned data. Results: Our method is applied to synthetic and experimental data. The PSF is estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA using the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a similar quality to parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.

  12. Investigating the complex relationship between in situ Southern Ocean pCO2 and its ocean physics and biogeochemical drivers using a nonparametric regression approach

    CSIR Research Space (South Africa)

    Pretorius, W

    2014-01-01

    Full Text Available the relationship more accurately in terms of MSE, RMSE and MAE, than a standard parametric approach (multiple linear regression). These results provide a platform for using the developed nonparametric regression model based on in situ measurements to predict p...

  13. A Note on the Kinks at the Mean Variance Frontier

    OpenAIRE

    Vörös, J.; Kriens, J.; Strijbosch, L.W.G.

    1997-01-01

    In this paper the standard portfolio case with short sales restrictions is analyzed. Dybvig pointed out that if there is a kink at a risky portfolio on the efficient frontier, then the securities in this portfolio have equal expected return and the converse of this statement is false. For the existence of kinks at the efficient frontier the sufficient condition is given here and a new procedure is used to derive the efficient frontier, i.e. the characteristics of the mean variance frontier.
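The mean-variance frontier discussed above has a closed form in the two-asset case, where portfolio variance is var = w^2 s1^2 + (1-w)^2 s2^2 + 2 w (1-w) rho s1 s2. A minimal sketch of sweeping out the frontier (textbook two-asset case, not the paper's short-sales-constrained procedure; function names are hypothetical):

```python
def two_asset_frontier(mu1, mu2, s1, s2, rho, n=20):
    """(std, mean) points of the two-asset frontier, sweeping the weight
    w on asset 1 from 0 to 1 (no short sales)."""
    pts = []
    for k in range(n + 1):
        w = k / n
        var = ((w * s1) ** 2 + ((1 - w) * s2) ** 2
               + 2 * w * (1 - w) * rho * s1 * s2)
        pts.append((var ** 0.5, w * mu1 + (1 - w) * mu2))
    return pts

def min_variance_weight(s1, s2, rho):
    """Closed-form global minimum-variance weight on asset 1."""
    cov = rho * s1 * s2
    return (s2 ** 2 - cov) / (s1 ** 2 + s2 ** 2 - 2 * cov)
```

With many assets and short-sales restrictions the frontier is traced piecewise as the active set of binding constraints changes, and Dybvig's kinks can appear exactly at those switch points.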

  14. Partially linear varying coefficient models stratified by a functional covariate

    KAUST Repository

    Maity, Arnab; Huang, Jianhua Z.

    2012-01-01

    We consider the problem of estimation in semiparametric varying coefficient models where the covariate modifying the varying coefficients is functional and is modeled nonparametrically. We develop a kernel-based estimator of the nonparametric

  15. A panel data parametric frontier technique for measuring total-factor energy efficiency: An application to Japanese regions

    International Nuclear Information System (INIS)

    Honma, Satoshi; Hu, Jin-Li

    2014-01-01

    Using the stochastic frontier analysis model, we estimate TFEE (total-factor energy efficiency) scores for 47 regions across Japan during the years 1996–2008. We extend the cross-sectional stochastic frontier model proposed by Zhou et al. (2012) to panel data models and add environmental variables. The results provide not only the TFEE scores, in which statistical noise is taken into account, but also the determinants of inefficiency. The three stochastic TFEE scores are compared with a TFEE score derived using data envelopment analysis. The four TFEE scores are highly correlated with one another. For the inefficiency estimates, higher manufacturing industry shares and wholesale and retail trade shares correspond to lower TFEE scores. - Highlights: • This study estimates total-factor energy efficiency of Japanese regions using the stochastic frontier analysis model. • Determinants of inefficiency are also estimated. • The higher the manufacturing share and wholesale and retail trade share, the lower the energy efficiency

  16. The low-energy frontier of particle physics

    International Nuclear Information System (INIS)

    Jaeckel, Joerg

    2010-02-01

    Most embeddings of the Standard Model into a more unified theory, in particular the ones based on supergravity or superstrings, predict the existence of a hidden sector of particles which have only very weak interactions with the visible sector Standard Model particles. Some of these exotic particle candidates (such as e.g. "axions", "axion-like particles" and "hidden U(1) gauge bosons") may be very light, with masses in the sub-eV range, and have very weak interactions with photons. Correspondingly, these very weakly interacting sub-eV particles (WISPs) may lead to observable effects in experiments (as well as in astrophysical and cosmological observations) searching for light shining through a wall, for changes in laser polarisation, for non-linear processes in large electromagnetic fields and for deviations from Coulomb's law. We present the physics case and a status report of this emerging low-energy frontier of fundamental physics. (orig.)

  17. Leadership That Settled the Frontier

    Science.gov (United States)

    Boyd, Barry L.

    2014-01-01

    This idea brief explores the leadership lessons displayed by the characters of Louis L'Amour's western novels. Western fiction can be a powerful tool to engage students and demonstrate many leadership theories and models. This brief examines how L'Amour's characters can be used to illustrate Kouzes and Posner's Five Practices of Exemplary Leaders.

  18. Report in the Energy and Intensity Frontiers, and Theoretical at Northwestern University

    Energy Technology Data Exchange (ETDEWEB)

    Velasco, Mayda [Northwestern Univ., Evanston, IL (United States); Schmitt, Michael [Northwestern Univ., Evanston, IL (United States); deGouvea, Andre [Northwestern Univ., Evanston, IL (United States); Low, Ian [Northwestern Univ., Evanston, IL (United States); Petriello, Frank [Northwestern Univ., Evanston, IL (United States); Schellman, Heidi [Northwestern Univ., Evanston, IL (United States)

    2016-03-31

    The Northwestern (NU) Particle Physics (PP) group involved in this report is active on all the following priority areas: the Energy and Intensity Frontiers. The group is led by two full professors in experimental physics (Schmitt and Velasco) and three full professors in theoretical physics (de Gouvea, Low and Petriello), together with Heidi Schellman, who is now at Oregon State. Low and Petriello hold joint appointments with the HEP Division at Argonne National Laboratory. The theoretical PP research focuses on different aspects of PP phenomenology. de Gouvea dedicates a large fraction of his research efforts to understanding the origin of neutrino masses and neutrino properties, uncovering other new phenomena, and investigating connections between neutrino physics and other aspects of PP. Low works on Higgs physics as well as new theories beyond the Standard Model. Petriello pursues a research program in precision QCD and its associated collider phenomenology. The main goal of this effort is to improve the Standard Model predictions for important LHC observables in order to enable discoveries of new physics. In recent years, the emphasis of experimental PP at NU has been on collider physics. NU is expanding its efforts in new directions in both the Intensity and the Cosmic Frontiers (the latter not discussed in this report). In the Intensity Frontier, Schmitt has started a new effort on Mu2e. He was accepted as a collaborator in April 2015 and is identified with important projects. In the Energy Frontier, Hahn, Schmitt and Velasco continue to have a significant impact and have expanded their CMS program to include R&D for the real-time L1 tracking trigger and the high-granularity calorimeter needed for the high-luminosity LHC. Hahn is supported by an independent DOE Career Award and his work will not be discussed in this document. The NU analysis effort includes searches for rare and forbidden decays of the Higgs boson, Z boson, and top quark, dark matter, and other physics-beyond-the-Standard-Model topics. Four

  19. Semi-nonparametric estimates of interfuel substitution in US energy demand

    Energy Technology Data Exchange (ETDEWEB)

    Serletis, A.; Shahmoradi, A. [University of Calgary, Calgary, AB (Canada). Dept. of Economics

    2008-09-15

    This paper focuses on the demand for crude oil, natural gas, and coal in the United States in the context of two globally flexible functional forms - the Fourier and the Asymptotically Ideal Model (AIM) - estimated subject to full regularity, using methods suggested over 20 years ago by Gallant and Golub (Gallant, A. Ronald and Golub, Gene H. Imposing Curvature Restrictions on Flexible Functional Forms. Journal of Econometrics 26 (1984), 295-321) and recently used by Serletis and Shahmoradi (Serletis, A., Shahmoradi, A., 2005. Semi-nonparametric estimates of the demand for money in the United States. Macroeconomic Dynamics 9, 542-559) in the monetary demand systems literature. We provide a comparison in terms of a full set of elasticities and also a policy perspective, using (for the first time) parameter estimates that are consistent with global regularity.

  20. Semi-nonparametric estimates of interfuel substitution in U.S. energy demand

    Energy Technology Data Exchange (ETDEWEB)

    Serletis, Apostolos [Department of Economics, University of Calgary, Calgary, Alberta (Canada); Shahmoradi, Asghar [Faculty of Economics, University of Tehran, Tehran (Iran)

    2008-09-15

    This paper focuses on the demand for crude oil, natural gas, and coal in the United States in the context of two globally flexible functional forms - the Fourier and the Asymptotically Ideal Model (AIM) - estimated subject to full regularity, using methods suggested over 20 years ago by Gallant and Golub [Gallant, A. Ronald and Golub, Gene H. Imposing Curvature Restrictions on Flexible Functional Forms. Journal of Econometrics 26 (1984), 295-321] and recently used by Serletis and Shahmoradi [Serletis, A., Shahmoradi, A., 2005. Semi-nonparametric estimates of the demand for money in the United States. Macroeconomic Dynamics 9, 542-559] in the monetary demand systems literature. We provide a comparison in terms of a full set of elasticities and also a policy perspective, using (for the first time) parameter estimates that are consistent with global regularity. (author)

  1. Assessing T cell clonal size distribution: a non-parametric approach.

    Directory of Open Access Journals (Sweden)

    Olesya V Bolkhovskaya

    Full Text Available Clonal structure of the human peripheral T-cell repertoire is shaped by a number of homeostatic mechanisms, including antigen presentation, cytokine and cell regulation. Its accurate tuning leads to a remarkable ability to combat pathogens in all their variety, while systemic failures may lead to severe consequences like autoimmune diseases. Here we develop and make use of a non-parametric statistical approach to assess T cell clonal size distributions from recent next generation sequencing data. For 41 healthy individuals and a patient with ankylosing spondylitis, who underwent treatment, we invariably find power law scaling over several decades and for the first time calculate quantitatively meaningful values of the decay exponent. It proved to be much the same among healthy donors, significantly different for the autoimmune patient before therapy, and converging towards a typical value afterwards. We discuss implications of the findings for theoretical understanding and mathematical modeling of adaptive immunity.
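The power-law scaling reported above can be quantified with a standard continuous maximum-likelihood (Hill-type) estimator of the decay exponent. The sketch below is not the authors' exact procedure; it checks the estimator on synthetic power-law draws rather than real repertoire data.

```python
import math
import random

def powerlaw_exponent(sizes, xmin=1.0):
    """Continuous MLE (Hill estimator) of the decay exponent alpha for the
    tail x >= xmin: alpha = 1 + n / sum(ln(x / xmin))."""
    tail = [x for x in sizes if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic check via inverse-transform sampling: if U ~ Uniform(0, 1],
# then U**(-1/(alpha-1)) follows a pure power law with exponent alpha.
random.seed(0)
true_alpha = 2.5  # illustrative value, not from the paper
sample = [(1.0 - random.random()) ** (-1.0 / (true_alpha - 1.0))
          for _ in range(20000)]
est = powerlaw_exponent(sample)
```

With 20,000 draws the estimate should recover the true exponent to within a few hundredths.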

  2. Assessing T cell clonal size distribution: a non-parametric approach.

    Science.gov (United States)

    Bolkhovskaya, Olesya V; Zorin, Daniil Yu; Ivanchenko, Mikhail V

    2014-01-01

    Clonal structure of the human peripheral T-cell repertoire is shaped by a number of homeostatic mechanisms, including antigen presentation, cytokine and cell regulation. Its accurate tuning leads to a remarkable ability to combat pathogens in all their variety, while systemic failures may lead to severe consequences like autoimmune diseases. Here we develop and make use of a non-parametric statistical approach to assess T cell clonal size distributions from recent next generation sequencing data. For 41 healthy individuals and a patient with ankylosing spondylitis, who underwent treatment, we invariably find power law scaling over several decades and for the first time calculate quantitatively meaningful values of the decay exponent. It proved to be much the same among healthy donors, significantly different for the autoimmune patient before therapy, and converging towards a typical value afterwards. We discuss implications of the findings for theoretical understanding and mathematical modeling of adaptive immunity.

  3. Nonparametric estimation of age-specific reference percentile curves with radial smoothing.

    Science.gov (United States)

    Wan, Xiaohai; Qu, Yongming; Huang, Yao; Zhang, Xiao; Song, Hanping; Jiang, Honghua

    2012-01-01

    Reference percentile curves represent the covariate-dependent distribution of a quantitative measurement and are often used to summarize and monitor dynamic processes such as human growth. We propose a new nonparametric method based on a radial smoothing (RS) technique to estimate age-specific reference percentile curves assuming the underlying distribution is relatively close to normal. We compared the RS method with both the LMS and the generalized additive models for location, scale and shape (GAMLSS) methods using simulated data and found that our method has smaller estimation error than the two existing methods. We also applied the new method to analyze height growth data from children being followed in a clinical observational study of growth hormone treatment, and compared the growth curves between those with growth disorders and the general population. Copyright © 2011 Elsevier Inc. All rights reserved.
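The core recipe behind such reference-percentile methods can be sketched in a toy form: estimate the age-specific mean, estimate the residual spread, and read off percentiles under approximate normality. The example below is deliberately simpler than the paper's radial-smoothing estimator (a linear mean and a constant SD, on invented growth data), but it shows the percentile-curve construction.

```python
import random
import statistics

# Toy growth data: height roughly linear in age with normal noise.
random.seed(1)
ages = [random.uniform(2.0, 10.0) for _ in range(500)]
heights = [80.0 + 6.0 * a + random.gauss(0.0, 3.0) for a in ages]

# Least-squares fit of the age-specific mean.
abar = statistics.fmean(ages)
hbar = statistics.fmean(heights)
slope = (sum((a - abar) * (h - hbar) for a, h in zip(ages, heights))
         / sum((a - abar) ** 2 for a in ages))
intercept = hbar - slope * abar

# Residual SD, assumed constant across age in this simplified sketch.
sd = statistics.stdev(h - (intercept + slope * a)
                      for a, h in zip(ages, heights))

def percentile_curve(age, z):
    """Reference percentile at a given age; z is the standard normal
    quantile (z ~ 1.8808 for the 97th percentile)."""
    return intercept + slope * age + z * sd

p97_at_age5 = percentile_curve(5.0, 1.8808)
```

The LMS, GAMLSS, and radial-smoothing methods compared in the paper generalize exactly this idea by letting the location and scale (and, for LMS/GAMLSS, skewness) vary smoothly with age.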

  4. Nonparametric Methods in Astronomy: Think, Regress, Observe—Pick Any Three

    Science.gov (United States)

    Steinhardt, Charles L.; Jermyn, Adam S.

    2018-02-01

    Telescopes are much more expensive than astronomers, so it is essential to minimize required sample sizes by using the most data-efficient statistical methods possible. However, the most commonly used model-independent techniques for finding the relationship between two variables in astronomy are flawed. In the worst case they can lead without warning to subtly yet catastrophically wrong results, and even in the best case they require more data than necessary. Unfortunately, there is no single best technique for nonparametric regression. Instead, we provide a guide for how astronomers can choose the best method for their specific problem and provide a python library with both wrappers for the most useful existing algorithms and implementations of two new algorithms developed here.
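One of the standard nonparametric regression techniques such a guide would compare is Nadaraya-Watson kernel regression: a locally weighted average of the observed responses. The sketch below uses a Gaussian kernel on invented noisy data; the bandwidth is an illustrative choice, and this is not the paper's recommended algorithm for any particular problem.

```python
import math
import random

def nw_regress(x0, xs, ys, bandwidth):
    """Nadaraya-Watson estimate at x0: a Gaussian-kernel weighted
    average of the responses ys observed at xs."""
    weights = [math.exp(-0.5 * ((x0 - x) / bandwidth) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

# Invented test data: noisy sine curve.
random.seed(0)
xs = [random.uniform(0.0, 2.0 * math.pi) for _ in range(2000)]
ys = [math.sin(x) + random.gauss(0.0, 0.2) for x in xs]

fit_at_peak = nw_regress(math.pi / 2.0, xs, ys, bandwidth=0.25)
```

The data-efficiency concern raised in the abstract shows up here as the bandwidth choice: too wide smooths away real structure, too narrow wastes data on noise.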

  5. Nonparametric Estimation of Cumulative Incidence Functions for Competing Risks Data with Missing Cause of Failure

    DEFF Research Database (Denmark)

    Effraimidis, Georgios; Dahl, Christian Møller

    In this paper, we develop a fully nonparametric approach for the estimation of the cumulative incidence function with Missing At Random right-censored competing risks data. We obtain results on the pointwise asymptotic normality as well as the uniform convergence rate of the proposed nonparametric...

  6. Non-parametric tests of productive efficiency with errors-in-variables

    NARCIS (Netherlands)

    Kuosmanen, T.K.; Post, T.; Scholtes, S.

    2007-01-01

    We develop a non-parametric test of productive efficiency that accounts for errors-in-variables, following the approach of Varian [1985. Nonparametric analysis of optimizing behavior with measurement error. Journal of Econometrics 30(1/2), 445-458]. The test is based on the general Pareto-Koopmans

  7. Frontiers in neutrino physics - Transparencies

    International Nuclear Information System (INIS)

    Akhmedov, E.; Balantekin, B.; Conrad, J.; Engel, J.; Fogli, G.; Giunti, C.; Espinoza, C.; Lasserre, T.; Lazauskas, R.; Lhuiller, D.; Lindner, M.; Martinez-Pinedo, G.; Martini, M.; McLaughlin, G.; Mirizzi, A.; Pehlivan, Y.; Petcov, S.; Qian, Y.; Serenelli, A.; Stancu, I.; Surman, R.; Vaananen, D.; Vissani, F.; Vogel, P.

    2012-01-01

    This document gathers the slides of the presentations. The purpose of the conference was to discuss the last advances in neutrino physics. The presentations dealt with: -) the measurement of the neutrino velocity, -) neutrino oscillations, -) anomaly in solar models and neutrinos, -) double beta decay, -) self refraction of neutrinos, -) cosmic neutrinos, -) antineutrino spectra from reactors, and -) some aspects of neutrino physics with radioactive ion beams. (A.C.)

  8. Reference models and incentive regulation of electricity distribution networks: An evaluation of Sweden's Network Performance Assessment Model (NPAM)

    International Nuclear Information System (INIS)

    Jamasb, Tooraj; Pollitt, Michael

    2008-01-01

    Electricity sector reforms across the world have led to a search for innovative approaches to regulation that promote efficiency in the natural monopoly distribution networks and reduce their service charges. To this aim, a number of countries have adopted incentive regulation models based on efficiency benchmarking. While most regulators have used parametric and non-parametric frontier-based methods of benchmarking some have adopted engineering-designed 'reference firm' or 'norm' models. This paper examines the incentive properties and related aspects of the reference firm model-NPAM-as used in Sweden and compares this with frontier-based benchmarking methods. We identify a number of important differences between the two approaches that are not readily apparent and discuss their ramifications for the regulatory objectives and process. We conclude that, on balance, the reference models are less appropriate as benchmarks than real firms. Also, the implementation framework based on annual ex-post reviews exacerbates the regulatory problems mainly by increasing uncertainty and reducing the incentive for innovation.

  9. [The Probabilistic Efficiency Frontier: A Value Assessment of Treatment Options in Hepatitis C].

    Science.gov (United States)

    Mühlbacher, Axel C; Sadler, Andrew

    2017-06-19

    Background The German Institute for Quality and Efficiency in Health Care (IQWiG) recommends the concept of the efficiency frontier to assess health care interventions. The efficiency frontier supports regulatory decisions on reimbursement prices for the appropriate allocation of health care resources. Until today this cost-benefit assessment framework has only been applied on the basis of individual patient-relevant endpoints. This contradicts the reality of a multi-dimensional patient benefit. Objective The objective of this study was to illustrate the operationalization of multi-dimensional benefit considering the uncertainty in clinical effects and preference data in order to calculate the efficiency of different treatment options for hepatitis C (HCV). This case study shows how methodological challenges could be overcome in order to use the efficiency frontier for economic analysis and health care decision-making. Method The operationalization of patient benefit was carried out on several patient-relevant endpoints. Preference data from a discrete choice experiment (DCE) study and clinical data based on clinical trials, which reflected the patient and the clinical perspective, respectively, were used for the aggregation of an overall benefit score. A probabilistic efficiency frontier was constructed in a Monte Carlo simulation with 10000 random draws. Patient-relevant endpoints were modeled with a beta distribution and preference data with a normal distribution. The assessment of overall benefit and costs provided information about the adequacy of the treatment prices. The parameter uncertainty was illustrated by the price-acceptability-curve and the net monetary benefit. Results Based on the clinical and preference data in Germany, the interferon-free treatment options proved to be efficient for the current price level. The interferon-free therapies of the latest generation achieved a positive net cost-benefit. Within the decision model, these therapies
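The probabilistic step described above can be sketched with a toy Monte Carlo: draw each therapy's overall benefit score from a Beta distribution (a bounded effect, as in the study's modeling of patient-relevant endpoints) and count how often each therapy maximizes net monetary benefit. The therapies, parameters, and willingness-to-pay threshold below are invented for illustration and are not the study's data.

```python
import random

# Hedged toy sketch of probabilistic cost-benefit comparison.
# All numbers are assumptions, not values from the HCV case study.
random.seed(3)
therapies = {
    "interferon-based": {"alpha": 30, "beta": 70, "cost": 20000.0},
    "interferon-free":  {"alpha": 80, "beta": 20, "cost": 45000.0},
}
LAMBDA = 60000.0  # assumed monetary value per unit of overall benefit

wins = {name: 0 for name in therapies}
n_draws = 5000
for _ in range(n_draws):
    # Net monetary benefit NMB = lambda * benefit - cost, with the
    # benefit score drawn from its Beta uncertainty distribution.
    nmb = {name: LAMBDA * random.betavariate(t["alpha"], t["beta"]) - t["cost"]
           for name, t in therapies.items()}
    wins[max(nmb, key=nmb.get)] += 1

prob_best = {name: w / n_draws for name, w in wins.items()}
```

Repeating this comparison across a grid of thresholds yields the price-acceptability curve mentioned in the abstract.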

  10. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists have questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has a small sample size limitation. We used a pooled resampling method in the nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling method to parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means than the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability for all conditions except the Cauchy and extreme variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than the other alternatives. The nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling method for comparing paired or unpaired means and for validating one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
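A minimal version of the pooled-resampling idea can be sketched as follows: pool the two groups (imposing the null hypothesis of equal means), redraw bootstrap samples of the original sizes from the pool, and compare the observed mean difference with the resampled ones. This is a simplified illustration of the principle, not the paper's exact test statistic or algorithm.

```python
import random
import statistics

def pooled_bootstrap_pvalue(x, y, n_boot=2000, seed=7):
    """Two-sided bootstrap p-value for a difference in means, with
    resampling from the pooled data to impose the null hypothesis."""
    rng = random.Random(seed)
    observed = abs(statistics.fmean(x) - statistics.fmean(y))
    pool = list(x) + list(y)
    extreme = 0
    for _ in range(n_boot):
        bx = [rng.choice(pool) for _ in range(len(x))]
        by = [rng.choice(pool) for _ in range(len(y))]
        if abs(statistics.fmean(bx) - statistics.fmean(by)) >= observed:
            extreme += 1
    return (extreme + 1) / (n_boot + 1)  # add-one to avoid p = 0

# Invented small-sample data: one null comparison, one clearly shifted.
rng = random.Random(1)
group_a = [rng.gauss(0.0, 1.0) for _ in range(12)]
group_b = [rng.gauss(4.0, 1.0) for _ in range(12)]
p_null = pooled_bootstrap_pvalue(group_a, list(group_a))  # identical groups
p_shift = pooled_bootstrap_pvalue(group_a, group_b)
```

With identical groups the observed difference is zero, so the p-value is 1; with a large shift the pooled resamples almost never reproduce the observed difference.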

  11. Efficient Discovery of De-identification Policies Through a Risk-Utility Frontier.

    Science.gov (United States)

    Xia, Weiyi; Heatherly, Raymond; Ding, Xiaofeng; Li, Jiuyong; Malin, Bradley

    2013-01-01

    Modern information technologies enable organizations to capture large quantities of person-specific data while providing routine services. Many organizations hope, or are legally required, to share such data for secondary purposes (e.g., validation of research findings) in a de-identified manner. In previous work, it was shown that de-identification policy alternatives could be modeled on a lattice, which could be searched for policies that met a prespecified risk threshold (e.g., likelihood of re-identification). However, the search was limited in several ways. First, its definition of utility was syntactic - based on the level of the lattice - and not semantic - based on the actual changes induced in the resulting data. Second, the threshold may not be known in advance. The goal of this work is to build the optimal set of policies that trade off between privacy risk (R) and utility (U), which we refer to as an R-U frontier. To model this problem, we introduce a semantic definition of utility, based on information theory, that is compatible with the lattice representation of policies. To solve the problem, we initially build a set of policies that define a frontier. We then use a probability-guided heuristic to search the lattice for policies likely to update the frontier. To demonstrate the effectiveness of our approach, we perform an empirical analysis with the Adult dataset of the UCI Machine Learning Repository. We show that our approach can construct a frontier closer to optimal than competitive approaches by searching a smaller number of policies. In addition, we show that a frequently followed de-identification policy (i.e., the Safe Harbor standard of the HIPAA Privacy Rule) is suboptimal in comparison to the frontier discovered by our approach.
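The R-U frontier itself is a Pareto-dominance construction: a policy belongs on the frontier when no other policy is at least as good on both axes and strictly better on one. The sketch below computes that set for a handful of invented policies (the names and scores are illustrative, and the paper's lattice search and information-theoretic utility measure are not reproduced here).

```python
# Hedged sketch: candidate de-identification policies scored by
# re-identification risk (lower is better) and data utility (higher is
# better). All names and numbers are invented for illustration.
policies = {
    "suppress-all":   {"risk": 0.01, "utility": 0.10},
    "generalize-zip": {"risk": 0.05, "utility": 0.60},
    "safe-harbor":    {"risk": 0.04, "utility": 0.40},
    "raw-release":    {"risk": 0.90, "utility": 1.00},
    "bad-policy":     {"risk": 0.50, "utility": 0.30},
}

def ru_frontier(policies):
    """Return the non-dominated policies: no other policy has
    lower-or-equal risk and higher-or-equal utility, with at least
    one of the two strictly better."""
    frontier = []
    for name, p in policies.items():
        dominated = any(
            q["risk"] <= p["risk"] and q["utility"] >= p["utility"]
            and (q["risk"] < p["risk"] or q["utility"] > p["utility"])
            for other, q in policies.items() if other != name
        )
        if not dominated:
            frontier.append(name)
    return sorted(frontier)

frontier = ru_frontier(policies)
```

Here "bad-policy" is dominated (more risk and less utility than "generalize-zip") and drops out; the remaining four trade off risk against utility.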

  12. The Frontiers of Additive Manufacturing

    Energy Technology Data Exchange (ETDEWEB)

    Grote, Christopher John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-03-03

    Additive manufacturing, more commonly known as 3-D printing, has become a ubiquitous tool in science for its precise control over mechanical design. For additive manufacturing to work, a 3-D structure is split into thin 2-D slices, and then different physical properties, such as photo-polymerization or melting, are used to grow the sequential layers. This level of control allows devices not only to be made from a variety of materials (e.g. plastics, metals, and quantum dots) but also to have finely controlled structures, leading to other novel properties. While 3-D printing is widely used by hobbyists for making models, it also has industrial applications in structural engineering, biological tissue scaffolding, customized electric circuitry, fuel cells, security, and more.

  13. Assessing systematic risk in the S&P500 index between 2000 and 2011: A Bayesian nonparametric approach

    OpenAIRE

    Rodríguez, Abel; Wang, Ziwei; Kottas, Athanasios

    2017-01-01

    We develop a Bayesian nonparametric model to assess the effect of systematic risks on multiple financial markets, and apply it to understand the behavior of the S&P500 sector indexes between January 1, 2000 and December 31, 2011. More than prediction, our main goal is to understand the evolution of systematic and idiosyncratic risks in the U.S. economy over this particular time period, leading to novel sector-specific risk indexes. To accomplish this goal, we model the appearance of extreme l...

  14. Annual symposium on Frontiers in Science

    Energy Technology Data Exchange (ETDEWEB)

    Metzger, N.; Fulton, K.R.

    1998-12-31

    This final report summarizes activities conducted for the National Academy of Sciences' Annual Symposium on Frontiers of Science with support from the US Department of Energy for the period July 1, 1993 through May 31, 1998. During the report period, five Frontiers of Science symposia were held at the Arnold and Mabel Beckman Center of the National Academies of Sciences and Engineering. For each Symposium, an organizing committee appointed by the NAS President selected and planned the eight sessions for the Symposium and identified general participants for invitation by the NAS President. These Symposia accomplished their goal of bringing together outstanding younger (age 45 or less) scientists to hear presentations in disciplines outside their own and to discuss exciting advances and opportunities in their fields in a format that encourages, and allows adequate time for, informal one-on-one discussions among participants. Of the 458 younger scientists who participated, over a quarter (124) were women. Participant lists for all symposia (1993–1997) are attached. The scientific participants were leaders in basic research from academic, industrial, and federal laboratories in such disciplines as astronomy, astrophysics, atmospheric science, biochemistry, cell biology, chemistry, computer science, earth sciences, engineering, genetics, material sciences, mathematics, microbiology, neuroscience, physics, and physiology. For each symposium, the 24 speakers and discussants on the program were urged to focus their presentations on current cutting-edge research in their field for a scientifically sophisticated but non-specialist audience, and to provide a sense of the experimental data - what is actually measured and seen in the various fields. They were also asked to address questions such as: What are the major research problems and unique tools in their field? What are the current limitations on advances as well as the frontiers? Speakers were asked to provide a

  15. CERN and the high energy frontier

    Directory of Open Access Journals (Sweden)

    Tsesmelis Emmanuel

    2014-04-01

    Full Text Available This paper presents the particle physics programme at CERN at the high-energy frontier. Starting from the key open questions in particle physics and the large-scale science facilities existing at CERN, concentrating on the Large Hadron Collider (LHC), this paper goes on to present future possibilities for global projects in high energy physics. The paper presents options for future colliders, all being within the framework of the recently updated European Strategy for Particle Physics, and all of which have a unique value to add to experimental particle physics. The paper concludes by outlining key messages for the way forward for high-energy physics research.

  16. From "Frontiers of Astronomy" to Astrobiology

    Science.gov (United States)

    Kwok, Sun

    2011-10-01

    In his book Frontiers of Astronomy, Fred Hoyle outlined a number of ideas on the stellar synthesis of solid-state materials and their ejection into the interstellar medium. He also considered the possibility of interstellar organics being integrated into the early Earth during the accretion phase of planetary formation. These organics may have played a role in the origin of life and the creation of fossil fuels. In this paper, we assess these ideas with modern observational evidence, in particular on the evidence of stellar synthesis of complex organics and their delivery to the early Solar System.

  17. Frontiers in biomedical engineering and biotechnology.

    Science.gov (United States)

    Liu, Feng; Goodarzi, Ali; Wang, Haifeng; Stasiak, Joanna; Sun, Jianbo; Zhou, Yu

    2014-01-01

    The 2nd International Conference on Biomedical Engineering and Biotechnology (iCBEB 2013), held in Wuhan on 11–13 October 2013, is an annual conference that aims at providing an opportunity for international and national researchers and practitioners to present the most recent advances and future challenges in the fields of Biomedical Information, Biomedical Engineering and Biotechnology. The papers published in this issue are selected from this conference; they represent the frontier of the field of Biomedical Engineering and Biotechnology and have helped improve the level of clinical diagnosis in medical work.

  18. The Research Frontier in Corporate Governance

    DEFF Research Database (Denmark)

    Ahrens, Thomas; Filatotchev, Igor; Thomsen, Steen

    2011-01-01

    In this paper we attempt to identify the research frontier in corporate governance using three different approaches: (1) What challenges does the financial crisis 2007–2009 pose for corporate governance research? We show that the financial crisis is a huge natural experiment which has exposed gaps in our knowledge of corporate governance and is likely to lead to a rethink of central concepts like shareholder value, debt governance, and management incentives. (2) What do we know and what do we need to know about the impact of national institutions on corporate governance? (3) What research questions...

  19. Energy technology sources, systems and frontier conversion

    CERN Document Server

    Ohta, Tokio

    1994-01-01

    This book provides a concise and technical overview of energy technology: the sources of energy, energy systems and frontier conversion. As well as serving as a basic reference book for professional scientists and students of energy, it is intended for scientists and policy makers in other disciplines (including practising engineers, biologists, physicists, economists and managers in energy related industries) who need an up-to-date and authoritative guide to the field of energy technology. Energy systems and their elemental technologies are introduced and evaluated from the viewpoint

  20. Performance of non-parametric algorithms for spatial mapping of tropical forest structure

    Directory of Open Access Journals (Sweden)

    Liang Xu

    2016-08-01

    Full Text Available Abstract Background Mapping tropical forest structure is a critical requirement for accurate estimation of emissions and removals from land use activities. With the availability of a wide range of remote sensing imagery of vegetation characteristics from space, development of finer resolution and more accurate maps has advanced in recent years. However, the mapping accuracy relies heavily on the quality of input layers, the algorithm chosen, and the size and quality of inventory samples for calibration and validation. Results By using airborne lidar data as the "truth" and focusing on the mean canopy height (MCH) as a key structural parameter, we test two commonly used non-parametric techniques, maximum entropy (ME) and random forest (RF), for developing maps over a study site in Central Gabon. Results of mapping show that both approaches have improved accuracy with more input layers in mapping canopy height at 100 m (1-ha) pixels. The bias-corrected spatial models further improve estimates for small and large trees across the tails of height distributions, with a trade-off of increasing overall mean squared error that can be readily compensated by increasing the sample size. Conclusions A significant improvement in tropical forest mapping can be achieved by weighting the number of inventory samples against the choice of image layers and the non-parametric algorithms. Without future satellite observations with better sensitivity to forest biomass, maps based on existing data will remain slightly biased towards the mean of the distribution, underestimating the upper tail and overestimating the lower tail.